Table(3pm) | User Contributed Perl Documentation | Table(3pm) |
Data::Table - Data type related to database tables, spreadsheets, CSV/TSV files, HTML table displays, etc.
News: The package now includes the "Perl Data::Table Cookbook" (PDF), which may serve as better learning material. To download the free Cookbook, visit https://sites.google.com/site/easydatabase/

    # some cool ways to use Table.pm
    use Data::Table;

    $header = ["name", "age"];
    $data = [
      ["John", 20],
      ["Kate", 18],
      ["Mike", 23]
    ];
    $t = Data::Table->new($data, $header, 0);  # Construct a table object with
                                               # $data, $header, $type=0 (consider
                                               # $data as the rows of the table).
    print $t->csv;                        # Print out the table as a csv file.

    $t = Data::Table::fromCSV("aaa.csv"); # Read a csv file into a table object

    ### Since version 1.51, the new method fromFile can automatically guess the
    ### correct file format: either a CSV or TSV file, with or without a column
    ### header line, e.g.
    # $t = Data::Table::fromFile("aaa.csv");  # is equivalent.

    print $t->html;                       # Display a 'portrait' HTML TABLE on web.

    use DBI;
    $dbh = DBI->connect("DBI:mysql:test", "test", "") or die $DBI::errstr;
    my $minAge = 10;
    $t = Data::Table::fromSQL($dbh,
        "select * from mytable where age >= ?", [$minAge]);
                                          # Construct a table from an SQL
                                          # database query.
    $t->sort("age", 0, 0);                # Sort by col 'age', numerical, ascending
    print $t->html2;                      # Print out a 'landscape' HTML Table.

    $row = $t->delRow(2);                 # Delete the third row (index=2).
    $t->addRow($row, 4);                  # Add the deleted row back as fifth row.
    @rows = $t->delRows([0..2]);          # Delete three rows (row 0 to 2).
    $col = $t->delCol("age");             # Delete column 'age'.
    $t->addCol($col, "age", 2);           # Add column 'age' as the third column.
    @cols = $t->delCols(["name", "phone", "ssn"]);
                                          # Delete 3 columns at the same time.
    $name = $t->elm(2, "name");           # Element access
    $t2 = $t->subTable([1, 3..4], ['age', 'name']);
                                          # Extract a sub-table

    $t->rename("Entry", "New Entry");     # Rename column 'Entry' as 'New Entry'
    $t->replace("Entry", [1..$t->nofRow()], "New Entry");
                                          # Replace column 'Entry' by an array of
                                          # numbers and rename it as 'New Entry'
    $t->swap("age", "ssn");               # Swap the positions of column 'age'
                                          # with column 'ssn' in the table.
    $t->colMap('name', sub {return uc});  # Map a function to a column
    $t->sort('age', 0, 0, 'name', 1, 0);  # Sort table first by the numerical
                                          # column 'age' and then by the string
                                          # column 'name', both in ascending order
    $t2 = $t->match_pattern('$_->[0] =~ /^L/ && $_->[3] < 0.2');
                                          # Select the rows that match the
                                          # pattern specified
    $t2 = $t->match_pattern_hash('$_{"Amino acid"} =~ /^L-a/ && $_{"Grams \"(a.a.)\""} < 0.2');
                                          # Use column names in the pattern,
                                          # method added in 1.62
    $t2 = $t->match_string('John');       # Select the rows that match 'John'
                                          # in any column
    $t2 = $t->clone();                    # Make a copy of the table.
    $t->rowMerge($t2);                    # Merge two tables
    $t->colMerge($t2);

    $t = Data::Table->new(                # create an employee salary table
      [
        ['Tom',   'male',   'IT', 65000],
        ['John',  'male',   'IT', 75000],
        ['Tom',   'male',   'IT', 65000],
        ['John',  'male',   'IT', 75000],
        ['Peter', 'male',   'HR', 85000],
        ['Mary',  'female', 'HR', 80000],
        ['Nancy', 'female', 'IT', 55000],
        ['Jack',  'male',   'IT', 88000],
        ['Susan', 'female', 'HR', 92000]
      ],
      ['Name', 'Sex', 'Department', 'Salary'], 0);

    sub average {  # a subroutine that calculates the arithmetic mean, ignoring NULLs
      my @data = @_;
      my ($sum, $n) = (0, 0);
      foreach my $x (@data) {
        next unless defined($x);
        $sum += $x; $n++;
      }
      return ($n > 0) ? $sum/$n : undef;
    }

    $t2 = $t->group(["Department", "Sex"], ["Name", "Salary"],
                    [sub {scalar @_}, \&average],
                    ["Nof Employee", "Average Salary"]);
      # For each (Department, Sex) pair, calculate the number of
      # employees and the average salary

    $t2 = $t2->pivot("Sex", 0, "Average Salary", ["Department"]);
      # Show average salary information in a Department-by-Sex spreadsheet
This Perl package uses perl5 objects to make it easy to manipulate spreadsheet data among disk files, databases, and web pages.
A table object contains a header and a two-dimensional array of scalars. Four class methods, Data::Table::fromFile, Data::Table::fromCSV, Data::Table::fromTSV, and Data::Table::fromSQL, allow users to create a table object from a CSV/TSV file or a database SQL selection in a snap.
Table methods provide basic access, add, delete row(s) or column(s) operations, as well as more advanced sub-table extraction, table sorting, record matching via keywords or patterns, table merging, and web publishing. Data::Table class also provides a straightforward interface to other popular Perl modules such as DBI and GD::Graph.
The most up-to-date version of the Perl Data::Table Cookbook is available at https://sites.google.com/site/easydatabase/
We use Data::Table instead of Table, because Table.pm has already been used inside the PerlQt module on CPAN.
A table object has three data members: $data (a reference to an array of row or column array references), $header (a reference to an array of column names), and $type (0 for row-based, 1 for column-based).
Row-based and column-based are the two internal implementations of a table object. E.g., suppose a spreadsheet consists of two columns, lastname and age. In a row-based table, $data = [ ['Smith', 29], ['Dole', 32] ]; in a column-based table, $data = [ ['Smith', 'Dole'], [29, 32] ].

The two implementations have their pros and cons for different operations. The row-based implementation is better for sorting and pattern matching, while the column-based one is better for adding, deleting, or swapping columns.
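As a quick sketch, the same two-column spreadsheet can be constructed in either layout; both behave identically through the accessor methods:

```perl
use Data::Table;

# Row-based: $data holds one array reference per row ($type = 0)
my $t_row = Data::Table->new(
    [ ['Smith', 29], ['Dole', 32] ],
    ['lastname', 'age'], 0);

# Column-based: $data holds one array reference per column ($type = 1)
my $t_col = Data::Table->new(
    [ ['Smith', 'Dole'], [29, 32] ],
    ['lastname', 'age'], 1);

# Same content, same access pattern, regardless of internal layout
print $t_row->elm(0, 'age'), "\n";   # 29
print $t_col->elm(0, 'age'), "\n";   # 29
```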
Users only need to specify the implementation type of the table upon its creation via Data::Table::new, and can forget about it afterwards. The implementation type of a table should be considered volatile, because methods switch table objects from one type to the other internally. Be advised that row/column/element references obtained via table::rowRef, table::rowRefs, table::colRef, table::colRefs, or table::elmRef may become stale after subsequent method calls.
For those who want to inherit from the Data::Table class, the internal method table::rotate is used to switch from one implementation type to the other. There is an additional internal assistant data structure called colHash in the current implementation. This hash stores all column names and their corresponding column indices as key-value pairs for fast lookup. This gives users the option to use a column name wherever a column ID is expected, so that users don't have to call table::colIndex all the time. E.g., you may say $t->rename('oldColName', 'newColName') instead of $t->rename($t->colIndex('oldColName'), 'newColName').
    $t_product->match_pattern_hash('$_{UnitPrice} > 20');
    $t_product->setElm($t_product->{MATCH}, 'UnitPrice', 20);
Syntax: return_type method_name ( [ parameter [ = default_value ]] [, parameter [ = default_value ]] )
If method_name starts with table::, it is an instance method; it can be used as $t->method(parameters), where $t is a table reference.
If method_name starts with Data::Table::, it is a class method; it should be called as Data::Table::method, e.g., $t = Data::Table::fromCSV("filename.csv").
Conventions for local variables:
    colID:          either a numerical column index or a column name
    rowIdx:         numerical row index
    colIDsRef:      reference to an array of column IDs
    rowIdcsRef:     reference to an array of row indices
    rowRef, colRef: reference to an array of scalars
    data:           reference to an array of array references of data values
    header:         reference to an array of column headers
    table:          a table object, a blessed reference
The optional named arguments delimiter and qualifier let users replace the comma and double quote with other meaningful single characters. Exception: if the delimiter or the qualifier is a special symbol in regular expressions, you must escape it with '\'. For example, to use the pipe symbol as the delimiter, you must specify it as '\|'.
The optional named argument skip_lines lets you specify how many lines of the CSV file should be skipped before the data are interpreted.
The optional named argument skip_pattern lets you specify a regular expression; lines that match it are skipped.
The optional named argument encoding lets you specify the encoding of the CSV file. This option was added to fromCSV, fromTSV, and fromFile in version 1.69.
The following example reads a DOS-format CSV file and writes it out in MAC format:
    $t = Data::Table::fromCSV('A_DOS_CSV_FILE.csv', 1, undef, {OS=>1});
    $t->csv(1, {OS=>2, file=>'A_MAC_CSV_FILE.csv'});

    open(SRC, 'A_DOS_CSV_FILE.csv') or die "Cannot open A_DOS_CSV_FILE.csv to read!";
    $t = Data::Table::fromCSV(\*SRC, 1);
    close(SRC);
The following example reads a non-standard CSV file with ':' as the delimiter and ' as the qualifier:
    my $s = "col_A:col_B:col_C\n1:2, 3 or 5:3.5\none:'one:two':'double\", single'''";
    open my $fh, "<", \$s or die "Cannot open in-memory file\n";
    my $t_fh = Data::Table::fromCSV($fh, 1, undef, {delimiter=>':', qualifier=>"'"});
    close($fh);
    print $t_fh->csv;
    # converts to standard CSV (comma as the delimiter, double quote as the qualifier)
    # col_A,col_B,col_C
    # 1,"2, 3 or 5",3.5
    # one,one:two,"double"", single'"
    print $t_fh->csv(1, {delimiter=>':', qualifier=>"'"});
    # prints the CSV file using the original delimiter and qualifier
The following example reads the bbb.csv file (included in the package), skipping the first line (skip_lines=>1), treating any line whose first non-whitespace character is '#' as a comment (skip_pattern=>'^\s*#'), and using ':' as the delimiter.
$t = Data::Table::fromCSV("bbb.csv", 1, undef, {skip_lines=>1, delimiter=>':', skip_pattern=>'^\s*#'});
Use the optional named argument encoding to specify the file encoding:

    $t = Data::Table::fromCSV("bbb.csv", 1, undef, {encoding=>'UTF-8'});
The optional named argument skip_lines lets you specify how many lines of the TSV file should be skipped before the data are interpreted.
The optional named argument skip_pattern lets you specify a regular expression; lines that match it are skipped.
The optional named argument transform_element lets you switch on/off the \t-to-tab, \N-to-undef (etc.) transformation; see TSV FORMAT for details. However, elements are always transformed when exporting a table to TSV format, because not escaping an element containing a tab would be disastrous.
The optional named argument encoding lets you specify an encoding when opening the TSV file.
See the similar examples under Data::Table::fromCSV.
Note: read "TSV FORMAT" section for details.
fromFile is added after version 1.51. It relies on the following new methods to automatically figure out the correct file format in order to call fromCSV or fromTSV internally:
    fromFileGuessOS($file_name, {encoding=>'UTF-8'})
      # returns an integer: 0 for UNIX, 1 for PC, 2 for MAC

    fromFileGetTopLines($file_name, $os, $lineNumber, {encoding=>'UTF-8'})
      # $os defaults to fromFileGuessOS($file_name), if not specified
      # returns an array of strings, one per row, with linebreaks removed

    fromFileGuessDelimiter($lineArrayRef)
      # guesses the delimiter from ",", "\t", ":"
      # returns the guessed delimiter string

    fromFileIsHeader($line_content, $delimiter, $allowNumericHeader)
      # $delimiter defaults to $Data::Table::DEFAULTS{'CSV_DELIMITER'}
      # returns 1 or 0
It first asks fromFileGuessOS to figure out which OS (UNIX, PC, or MAC) generated the input file, then fetches the first linesChecked lines using fromFileGetTopLines. It then guesses the best delimiter using fromFileGuessDelimiter and checks whether the first line looks like a column header row using fromFileIsHeader. Since fromFileGuessOS and fromFileGetTopLines need to open and close the input file, these methods can only take a file name, not a file handle. If the user specifies formatting parameters in $arg_ref, the routine skips the corresponding guesswork. At the end, fromFile simply calls either fromCSV or fromTSV with $arg_ref forwarded, so if you call fromFile({transform_element=>0}) on a TSV file, transform_element is passed on to the internal fromTSV call.
fromFileGuessOS picks the linebreak that gives the shortest first line (preferring UNIX, then PC, then MAC upon a tie). fromFileGuessDelimiter works on the assumption that the correct delimiter produces an equal number of columns for the given rows; if multiple delimiters match, it chooses the one that gives the maximum number of columns, and if none matches, it returns the default delimiter. fromFileIsHeader works on the assumption that no column header can be empty or numeric. If you need to allow numeric column names (especially integer column names), set {allowNumericHeader => 1}.
Data::Table::fromSQL can now take a DBI::st statement handle instead of an SQL string. This was introduced so that variable binding (such as CLOB/BLOB) can be done outside the method, for example:
    $sql = 'insert into test_table (id, blob_data) values (1, :val)';
    $sth = $dbh->prepare($sql);
    $sth->bind_param(':val', $blob, {ora_type => SQLT_BIN});
    Data::Table::fromSQL($dbh, $sth);
Since 1.69, an integer may be used as a column header. The integer $colID is first checked against column names; if it matches, the corresponding column index is returned. E.g., if the column name of the 3rd column is "1", colIndex(1) returns 2 instead of 1! In such a case, if one needs to access the second column, one has to access it by column name, i.e., $t->col(($t->header)[1]).
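A small sketch of this lookup order (the table and data here are made up for illustration):

```perl
use Data::Table;

# the 3rd column is literally named "1"
my $t = Data::Table->new([[10, 20, 30]], ['a', 'b', '1'], 0);

print $t->colIndex(1), "\n";   # prints 2: "1" matches a column *name* first
# to reach the second column by position, go through the header list instead:
my @col = $t->col(($t->header)[1]);
```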
    # these two are equivalent
    foreach my $i (0 .. $t->lastRow)
    foreach my $i (0 .. $t->nofRow - 1)
Note: read "TSV FORMAT" section for details.
Since version 1.74, users can prevent default coloring by passing in a color array reference ["", "", ""].
Before version 1.59, the parameter can only accept an array reference.
$tag_tbl: a reference to a hash that specifies any legal attributes such as name, border, id, class, etc. for the TABLE tag.
$tag_tr: a reference to a hash that specifies any legal attributes for the TR tag.
$tag_th: a reference to a hash that specifies any legal attributes for the TH tag.
$tag_td: a reference to a hash that specifies any legal attributes for the TD tag.
Notice that $tag_tr and $tag_th control all the rows and columns of the whole table; the keys of the hash are the attribute names in these cases. However, $tag_td is column specific, i.e., you should specify TD attributes for every column separately. The keys of %$tag_td are either column names or column indices, and each value is a reference to a hash, e.g., $tag_td = {col3 => {'style'=>'background-color:#cccc99;'}}. Before version 1.74, however, the value was the full string to be inserted into the TD tag; e.g., $tag_td = {col3 => 'align=right valign=bottom'} changes the TD tag in "col3" to <TD align=right valign=bottom>. This format is still supported for backward compatibility.
$portrait controls the layout of the table. The default is 1, i.e., the table is shown in the "Portrait" style, like in Excel. 0 means "Landscape". Since version 1.59, tbody and thead tags are added to the portrait mode output.
Since version 1.74, $callback is introduced to give users fine control over the tag of each TH/TD cell. $callback is a subroutine reference; the sub is expected to take the parameters ($tag, $row_index, $col_index, $col_name, $table), where $tag is a reference to a hash containing the existing TH/TD tag, and to return a new tag. The remaining parameters give the sub access to the identity of the table cell, as well as the table itself.
In the following example, the callback function colors each UnitPrice cell based on whether its value is >= 20 or < 20, and colors each Discontinued cell based on whether its value is TRUE or FALSE. One can also control the column header cells, which have a row index of -1; that is why we use "$row >= 0" within the callback, to make sure the cell is not a column header.
    $t = Data::Table::fromCSV("Data-Table-1.74/Product.csv", 1, undef, {'OS'=>1});
    my $callback = sub {
      my ($tag, $row, $col, $colName, $table) = @_;
      if ($row >= 0 && $colName eq 'UnitPrice') {
        $tag->{'style'} = 'background-color:' .
          (($table->elm($row, $col) >= 20) ? '#fc8d59' : '#91bfdb') . ';';
      }
      if ($row >= 0 && $colName eq 'Discontinued') {
        $tag->{'style'} = 'background-color:' .
          (($table->elm($row, $col) eq 'TRUE') ? '#999999' : '#af8dc3') . ';';
      }
      return $tag;
    };
    print $t->html(undef, undef, undef, undef, undef, undef, $callback);
Attention: you will have to escape HTML entities yourself (for example '<' as '&lt;') if your table contains characters that need escaping. You can do this, for example, with the escapeHTML function from CGI.pm (or the HTML::Entities module).
    use CGI qw(escapeHTML);
    [...]
    $t->colMap($columnname, sub{escapeHTML($_)});  # for every column where HTML entities occur
Returns a string corresponding to a "landscape" HTML-tagged table. This is useful to present a table with many columns but very few entries. Check table::html above for parameter descriptions.
    $t->setElm([0..2], ['ColA', 'ColB'], 'new value');
    $t->setElm(0, [1..2], 'new value');

    # put a limit on the price of all expensive items
    $t_product->match_pattern_hash('$_{UnitPrice} > 20');
    $t_product->setElm($t_product->{MATCH}, 'UnitPrice', 20);
    # automatically add a new column "aNewColumn" to $t, in order to hold the new value
    $t->addRow({anExistingColumn => 123, aNewColumn => "XYZ"}, undef, {addNewCol => 1});

    # $t only had one column; after this call, it contains a new column 'col2',
    # in order to hold the new value
    $t->addRow([123, "XYZ"], undef, {addNewCol => 1});
It returns 1 upon success, undef otherwise.
    $t->addCol(undef, 'NewCol');
    $t->addCol(0, 'NewIntCol');
    $t->addCol('default', 'NewStringCol');
In 1.62, instead of memorizing these numbers, you can use constants (notice that constants do not start with '$'):

    Data::Table::NUMBER
    Data::Table::STRING
    Data::Table::ASC
    Data::Table::DESC
Sorting is done in the priority of colID1, colID2, .... It returns 1 upon success, undef otherwise. Notice that the table is rearranged in place! This differs from Perl's list sort, which returns a sorted copy and leaves the original list untouched; the authors feel in-place sorting is more natural.
table::sort can take a user supplied operator, this is useful when neither numerical nor alphabetic order is correct.
    $Well = ["A_1", "A_2", "A_11", "A_12", "B_1", "B_2", "B_11", "B_12"];
    $t = Data::Table->new([$Well], ["PlateWell"], 1);
    $t->sort("PlateWell", 1, 0);
    print join(" ", $t->col("PlateWell"));
    # prints: A_1 A_11 A_12 A_2 B_1 B_11 B_12 B_2
    # in string sorting, "A_11" and "A_12" appear before "A_2"

    my $my_sort_func = sub {
      my @a = split /_/, $_[0];
      my @b = split /_/, $_[1];
      return ($a[0] cmp $b[0]) || (int($a[1]) <=> int($b[1]));
    };
    $t->sort("PlateWell", $my_sort_func, 0);
    print join(" ", $t->col("PlateWell"));
    # prints the correct order: A_1 A_2 A_11 A_12 B_1 B_2 B_11 B_12
Side effect: @Data::Table::OK (use $t->{OK} after 1.62) stores a true/false array for the original table rows; using it, users can find out which rows were selected or unselected. Side effect: @Data::Table::MATCH stores a reference to an array containing the row indices of all matched rows.
In the $pattern string, a column element should be referred to as $_->[$colIndex]. E.g., match_pattern('$_->[0] > 3 && $_->[1] =~ /^L/') retrieves all rows whose first column is greater than 3 and whose second column starts with the letter 'L'. Notice that it only takes colIndex; column names are not acceptable here!
Side effect: @Data::Table::OK stores a reference to a true/false array for the original table rows; using it, users can find out which rows were selected or unselected. Side effect: @Data::Table::MATCH stores a reference to an array containing the row indices of all matched rows.
In the $pattern string, a column element should be referred to as $_{column_name}. match_pattern_hash() was added in 1.62. The difference between this method and match_pattern is that each row is fed to the pattern as a hash %_, whereas in match_pattern each row is fed as an array ref $_. The pattern for match_pattern_hash() is much cleaner.
If a table has two columns, Col_A as the 1st column and Col_B as the 2nd, a filter "Col_A > 2 AND Col_B < 2" was previously written as $t->match_pattern('$_->[0] > 2 && $_->[1] < 2'), where we had to figure out that $t->colIndex('Col_A') is 0 and $t->colIndex('Col_B') is 1 in order to build the pattern. Now you can use the column names directly in the pattern: $t->match_pattern_hash('$_{Col_A} > 2 && $_{Col_B} < 2'). This method creates $t->{OK}, as well as @Data::Table::OK, same as match_pattern().
Simple boolean operators such as and/or can be put directly into the pattern string. More complex logic is also supported, as in the example below:
    my $t = Data::Table->new(
      [[2,5,'Jan'], [1,6,'Feb'], [-3,2,'Apr'], [6,-4,'Dec']],
      ['X','Y','Month'], 0);
    # we need to use our instead of my, so that %Q1 is accessible within match_pattern_hash
    our %Q1 = ('Jan'=>1, 'Feb'=>1, 'Mar'=>1);
    # find records belonging to Q1 months; we need %::Q1 to access the Q1 defined outside Data::Table
    $t2 = $t->match_pattern_hash('exists $::Q1{$_{Month}}');
Similarly, subroutines can be accessed inside match_pattern_hash using "::":
    sub in_Q1 {
      my $x = shift;
      return ($x eq 'Jan' or $x eq 'Feb' or $x eq 'Mar');
    }
    $t2 = $t->match_pattern_hash('::in_Q1($_{Month})');
However, such usage is discouraged, as match_pattern_hash() does not throw errors when the pattern is invalid. For complex filtering logic, we strongly recommend you stick to row-based looping.
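An explicit row-based loop equivalent to the Q1 filter above might look like the following sketch, which uses only documented methods (elm, lastRow, header, subTable):

```perl
use Data::Table;

sub in_Q1 { my $x = shift; return $x eq 'Jan' || $x eq 'Feb' || $x eq 'Mar'; }

my $t = Data::Table->new(
    [[2,5,'Jan'], [1,6,'Feb'], [-3,2,'Apr'], [6,-4,'Dec']],
    ['X','Y','Month'], 0);

# collect the indices of matching rows, then extract them as a sub-table
my @idx = grep { in_Q1($t->elm($_, 'Month')) } 0 .. $t->lastRow;
my $t2  = $t->subTable(\@idx, [$t->header]);   # keep all columns
print $t2->nofRow, "\n";   # 2 (the Jan and Feb rows)
```

A plain loop like this raises ordinary Perl compile-time errors, whereas an invalid pattern string fails silently.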
Side effect: @Data::Table::OK stores a reference to a true/false array for the original table rows, and @Data::Table::MATCH stores a reference to an array containing the row indices of all matched rows; using them, users can find out which rows were selected or unselected. The $s string is treated as a regular expression and applied to each row element, so one can specify several keywords by saying, for instance, match_string('One|Other').
E.g., $t1 = $tbl->match_string('keyword'); $t2 = $tbl->rowMask(\@Data::Table::OK, 1) creates two new tables: $t1 contains all rows matching 'keyword', while $t2 contains all other rows.
mask is a reference to an array whose elements are evaluated as true or false. The size of the mask must equal the nofRow of the table. Returns a new table consisting of those rows where the corresponding mask element is true (or false, when complement is set to true).
E.g., $t1 = $tbl->match_string('keyword'); $t2 = $tbl->rowMask(\@Data::Table::OK, 1) creates two new tables: $t1 contains all rows matching 'keyword', while $t2 contains all other rows.
    my $next = $t_product->iterator();
    while (my $row = $next->()) {
      # access the row as a hash reference; access the row number via $next->(1)
      $t_product->setElm($next->(1), 'ProductName', 'New! ' . $row->{ProductName});
    }
In this example, each $row is fetched as a hash reference, so one can access the elements by $row->{colName}. Be aware that the elements in the hash are a copy of the original table elements, so modifying $row->{colName} does not modify the original table. If table modification is intended, one needs to obtain the row index of the returned row: calling $next->(1) with a non-empty argument returns the row index of the record that was previously fetched with $next->(). In this example, the row index is used to modify the original table.
$colsToGroupBy and $colsToCalculate are references to arrays of colIDs. $funsToApply is a reference to an array of subroutine references. $newColNames is a reference to an array of new column name strings. If specified, the sizes of the arrays pointed to by $colsToCalculate, $funsToApply, and $newColNames should be identical. A column may be used more than once in $colsToCalculate.
$keepRestCols defaults to 1 (it was introduced as 0 in 1.64, then changed to 1 in 1.66 for backward compatibility). If $keepRestCols is off, the remaining columns are dropped; otherwise they are returned with the first encountered value of each group.
E.g., an employee salary table $t contains the following columns: Name, Sex, Department, Salary. (see examples in the SYNOPSIS)
$t2 = $t->group(["Department","Sex"],["Name", "Salary"], [sub {scalar @_}, \&average], ["Nof Employee", "Average Salary"], 0);
Department and Sex are used together as the primary key columns. A new column "Nof Employee" is created by counting the number of employee names in each group, and a new column "Average Salary" is created by averaging the Salary data falling into each group. As a result, we have the head count and average salary for each (Department, Sex) pair. With your own functions (such as sum, product, average, standard deviation, etc.), the group method is very handy for accounting purposes. If primary key columns are not defined, all records are treated as one group.
$t2 = $t->group(undef,["Name", "Salary"], [sub {scalar @_}, \&average], ["Nof Employee", "Average Salary"], 0);
The above statement will output the total number of employees and their average salary as one line.
Note: yes, an incompatible change was made in version 1.64, where $colToSplitIsStringOrNumber used to be $colToSplitIsNumeric (0 meant STRING and 1 meant NUMBER; now it is the opposite). However, auto-type detection code was also added, so this parameter is essentially auto-guessed and most old code should behave the same as before.
When primary key columns are specified by $colsToGroupBy, all records sharing the same primary key collapse into one row, with values in $colToFill filling the corresponding new columns. If $colToFill is not specified, a cell is filled with the number of records falling into that cell.
$colToSplit and $colToFill are colIDs. $colToSplitIsNumeric is 1/0. $colsToGroupBy is a reference to an array of colIDs. $keepRestCols is 1/0, 0 by default. If $keepRestCols is off, only the primary key columns and new columns are exported; otherwise, all remaining columns are exported as well.
E.g., apply the pivot method to the resultant table of the group method example above:
$t2->pivot("Sex", 0, "Average Salary",["Department"]);
This creates a 2x3 table, where Departments are used as row keys and Sex (female and male) become two new columns; "Average Salary" values fill the new table elements. Used together with the group method, the pivot method is very handy for accounting-type analysis. If $colsToGroupBy is left undef, all rows are treated as one group. If $colToSplit is left undef, the method generates a column named "(all)" that matches all records sharing the corresponding primary key.
One needs to specify the columns that make up the primary keys and the columns that are considered variable columns. The output variable column is named 'variable' unless specified by $arg_ref{variableColName}, and the output value column is named 'value' unless specified by $arg_ref{valueColName}. By default NULL values are not output, unless $arg_ref{skip_NULL} is set to false. By default empty-string values are kept, unless one sets skip_empty to 1.
    # For each object (id), we measure variables x1 and x2 at two time points
    $t = new Data::Table([[1,1,5,6], [1,2,3,5], [2,1,6,1], [2,2,2,4]],
                         ['id','time','x1','x2'], Data::Table::ROW_BASED);
    # id time x1 x2
    #  1    1  5  6
    #  1    2  3  5
    #  2    1  6  1
    #  2    2  2  4

    # melting the table into a tall-and-skinny table
    $t2 = $t->melt(['id','time']);
    # id time variable value
    #  1    1       x1     5
    #  1    1       x2     6
    #  1    2       x1     3
    #  1    2       x2     5
    #  2    1       x1     6
    #  2    1       x2     1
    #  2    2       x1     2
    #  2    2       x2     4

    # casting the table; &average is a subroutine that calculates the mean;
    # for each object (id), we calculate the average of x1 and x2 over time
    $t3 = $t2->cast(['id'], 'variable', Data::Table::STRING, 'value', \&average);
    # id x1  x2
    #  1  4 5.5
    #  2  4 2.5
The table has been melted before. cast() groups the table according to the primary keys specified in $colsToGroupBy. For each group of objects sharing the same id, it further groups values (specified by $colToCalculate) according to unique variable names (specified by $colToSplit), then applies the subroutine $funToApply to obtain an aggregate value. In the output, each unique primary key becomes a row and each unique variable name becomes a column; the cells are the calculated aggregate values.
If $colsToGroupBy is undef, all rows are treated as within the same group. If $colToSplit is undef, a new column "(all)" is used to hold the results.
    $t = Data::Table->new(  # create an employee salary table
      [
        ['Tom',   'male',   'IT', 65000],
        ['John',  'male',   'IT', 75000],
        ['Tom',   'male',   'IT', 65000],
        ['John',  'male',   'IT', 75000],
        ['Peter', 'male',   'HR', 85000],
        ['Mary',  'female', 'HR', 80000],
        ['Nancy', 'female', 'IT', 55000],
        ['Jack',  'male',   'IT', 88000],
        ['Susan', 'female', 'HR', 92000]
      ],
      ['Name', 'Sex', 'Department', 'Salary'], Data::Table::ROW_BASED);

    # get a Department x Sex contingency table, average salary across all four groups
    print $t->cast(['Department'], 'Sex', Data::Table::STRING, 'Salary', \&average)->csv(1);
    # Department,female,male
    # IT,55000,73600
    # HR,86000,85000

    # get the average salary for each department
    print $t->cast(['Department'], undef, Data::Table::STRING, 'Salary', \&average)->csv(1);
    # Department,(all)
    # IT,70500
    # HR,85666.6666666667

    # get the average salary for each gender
    print $t->cast(['Sex'], undef, Data::Table::STRING, 'Salary', \&average)->csv(1);
    # Sex,(all)
    # male,75500
    # female,75666.6666666667

    # get the average salary for all records
    print $t->cast(undef, undef, Data::Table::STRING, 'Salary', \&average)->csv(1);
    # (all)
    # 75555.5555555556
Since 1.62, you may provide {byName => 1, addNewCol => 1} as $argRef. If byName is set to 1, the columns in $tbl do not need to be in the same order as in the first table; the column names are used for matching instead. If addNewCol is set to 1 and $tbl contains a column name that does not already exist in the first table, the new column is automatically added to the resultant table. Typically, you want to specify these two options simultaneously.
Since 1.62, you can specify {renameCol => 1} as $argRef. This auto-fixes any column name collision: if $tbl contains a column that already exists in the first table, it is renamed (with a suffix _2) to avoid the collision.
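A sketch of the byName/addNewCol options together (hypothetical tables, for illustration only):

```perl
use Data::Table;

my $t1 = Data::Table->new([['John', 45]],        ['Name', 'Age'],         0);
my $t2 = Data::Table->new([[30, 'Kate', 'NYC']], ['Age', 'Name', 'City'], 0);

# byName matches columns by name rather than by position; addNewCol pulls in
# the extra 'City' column, filling the original $t1 rows with undef
$t1->rowMerge($t2, {byName => 1, addNewCol => 1});
```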
    0: inner join
    1: left outer join
    2: right outer join
    3: full outer join
In 1.62, instead of memorizing these numbers, you can use constants (notice that constants do not start with '$'):

    Data::Table::INNER_JOIN
    Data::Table::LEFT_JOIN
    Data::Table::RIGHT_JOIN
    Data::Table::FULL_JOIN
$cols1 and $cols2 are references to arrays of colIDs; rows with the same elements in all listed columns are merged. In the resultant table, the columns listed in $cols2 are deleted before the new table is returned.
The implementation is a hash join; the running time should be linear with respect to the total number of rows in the two tables (assuming both tables fit in memory).
If the non-key columns of the two tables share a name, the routine fails, as the resultant table cannot contain two columns of the same name. In 1.62, one can specify {renameCol => 1} as $argRef, so that the second column is automatically renamed (with a suffix _2) to avoid the collision.
If you would like to treat the NULLs in the key columns as empty strings, set {NULLasEmpty => 1}. If you do not want to treat NULLs as empty strings, but you still want the NULLs in the two tables to be considered equal (though not equal to ''), set {matchNULL => 1}. Obviously, if NULLasEmpty is set to 1, matchNULL has no effect.
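Putting the pieces together, a left outer join on a shared key column might look like this sketch (hypothetical tables, for illustration):

```perl
use Data::Table;

my $emp = Data::Table->new(
    [['Tom', 'IT'], ['Mary', 'HR'], ['Jack', 'QA']],
    ['Name', 'Dept'], 0);
my $dept = Data::Table->new(
    [['IT', 'Building A'], ['HR', 'Building B']],
    ['Dept', 'Location'], 0);

# left outer join on Dept: Jack's QA row is kept, with a NULL Location
my $joined = $emp->join($dept, Data::Table::LEFT_JOIN, ['Dept'], ['Dept']);
print $joined->csv(1);
```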
The internal methods are mainly implemented for use by other methods in the Table class; users should avoid using them. Nevertheless, they are listed here for developers who would like to understand the code and may derive a new class from Data::Table.
Optional named arguments: delimiter and qualifier, in case the user wants to use characters other than the defaults. The default delimiter and qualifier are taken from $Data::Table::DEFAULTS{'CSV_DELIMITER'} (defaults to ',') and $Data::Table::DEFAULTS{'CSV_QUALIFIER'} (defaults to '"'), respectively.
Please note that this function only escapes one element of a table. To escape a whole table row, you need to join($delimiter, map {csvEscape($_)} @row) . $endl; $endl refers to the end-of-line, which you may or may not want to add, and which is OS-dependent. Therefore, the csvEscape method is kept in its simplest form, as an element transformer.
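Following that recipe, escaping a whole row by hand might look like:

```perl
use Data::Table;

my @row  = ('plain', 'has,comma', 'has "quote"');
my $endl = "\n";   # choose the end-of-line appropriate for your target OS
my $line = join(',', map { Data::Table::csvEscape($_) } @row) . $endl;
# $line is now a single properly quoted CSV record
```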
Optional argument size: the expected number of fields after the CSV split. Optional named arguments: delimiter and qualifier, in case the user wants to use characters other than the defaults. The default delimiter and qualifier are taken from $Data::Table::DEFAULTS{'CSV_DELIMITER'} (defaults to ',') and $Data::Table::DEFAULTS{'CSV_QUALIFIER'} (defaults to '"'), respectively.
There is no standard for the TSV format as far as we know. The CSV format can't handle binary data very well; therefore, we chose the TSV format to overcome this limitation.
We define TSV based on MySQL convention.
"\0", "\n", "\t", "\r", "\b", "'", "\"", and "\\" are all escaped by '\' in the TSV file. (Warning: MySQL treats '\f' as 'f', and it's not escaped here) Undefined values are represented as '\N'.
However, you can switch off this transformation by setting {transform_element => 0} in the fromTSV or tsv method. Previously, if a cell read 'A line break is \n', it was read into memory as 'A line break is [return]'; when exported with the tsv method, it was transformed back to 'A line break is \n'. However, if it was exported as CSV, the [return] would break the format. Now, if transform_element is set to 0, the cell is stored as 'A line break is \n' in memory, so that the CSV export is correct. However, do remember to set {transform_element => 0} in the tsv export method as well; otherwise, the cell becomes 'A line break is \\n'. Be aware that transform_element controls column headers as well.
A spreadsheet is a very generic data type; therefore, the Data::Table class provides an easy interface between databases, web pages, CSV/TSV files, graphics packages, etc.
Here is a summary (partially repeating the above) of some classic usages of Data::Table.
    use DBI;

    $dbh = DBI->connect("DBI:mysql:test", "test", "") or die $DBI::errstr;
    my $minAge = 10;
    $t = Data::Table::fromSQL($dbh, "select * from mytable where age >= ?", [$minAge]);
    print $t->html;
    $t = Data::Table::fromFile("mydata.csv");  # after version 1.51
    $t = Data::Table::fromFile("mydata.tsv");  # after version 1.51
    $t = Data::Table::fromCSV("mydata.csv");
    $t->sort(1, 1, 0);
    print $t->csv;

    # same for TSV
Read in two tables from NorthWind.xls file, writes them out to XLSX format. See Data::Table::Excel module for details.
    use Data::Table::Excel;

    my ($tableObjects, $tableNames) = xls2tables("NorthWind.xls");
    $t_category = $tableObjects->[0];
    $t_product  = $tableObjects->[1];
    tables2xlsx("NorthWind.xlsx", [$t_category, $t_product]);
    use GD::Graph::points;

    $graph = GD::Graph::points->new(400, 300);
    $t2 = $t->match_pattern('$_->[1] > 20 && $_->[3] < 35.7');
    my $gd = $graph->plot($t2->colRefs([0, 2]));
    open(IMG, '>mygraph.png') or die $!;
    binmode IMG;
    print IMG $gd->png;
    close IMG;
Copyright 1998-2008, Yingyao Zhou & Guangzhou Zou. All rights reserved.
It was first written by Zhou in 1998, significantly improved and maintained by Zou since 1999. The authors thank Tong Peng and Yongchuang Tao for valuable suggestions. We also thank those who kindly reported bugs, some of them are acknowledged in the "Changes" file.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Please send bug reports and comments to: easydatabase at gmail dot com. When sending bug reports, please provide the version of Table.pm and the version of Perl.
DBI, GD::Graph, Data::Table::Excel.
2022-11-19 | perl v5.36.0 |