tgridmap scans an input table to create one or more
N-dimensional density maps, or equivalently N-dimensional histograms, of the
values in that table, and outputs the result as an (optionally sparse)
table containing a row for each grid cell. The maps/histograms can
optionally be weighted by some quantity from the input table, and various
options such as summing, averaging and counting are available for
aggregating inputs into the output bins.
The supplied coords parameter defines which N numeric
columns of the input table form the coordinates of the bin grid, and the
cols parameter defines which quantities are aggregated into each bin.
Either the binsizes or nbins parameter must be supplied to
define the extents of the bins on each axis. The output table contains a row
for each bin, with columns giving the central (and upper/lower bound) values
of each grid coordinate, and a column for each aggregated value. The rows
are output in first-coordinate-slowest sequence, and the sparse
parameter determines whether a row is written for every cell in the
hypercube defined by the grid dimensions, or only for those cells with
non-blank data.
The tabular form of the output may not be the most appropriate or
compact way to write a density map, especially for multi-dimensional grids,
but it means the output can be manipulated later by other STILTS commands or
by TOPCAT. To do a similar job with more compact output, see tcube.
See also tskymap, which does the same thing for sky geometry (and is
probably a better choice if you find yourself accumulating onto a
longitude-latitude grid).
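The accumulation described above can be sketched in a few lines of Python (an illustration only, not STILTS code; the function name, bounds and bin sizes are invented for the example). It bins 2-dimensional points into a grid, counts the rows landing in each cell, and emits one output row per occupied cell, giving the cell centre for each coordinate, with the first coordinate varying slowest:

```python
from collections import defaultdict

def grid_map(rows, lo, hi, binsize):
    """Accumulate 2-D points into a grid and return one output row per
    non-empty cell: (x_center, y_center, count), first coordinate slowest."""
    nx = int((hi[0] - lo[0]) / binsize[0])
    ny = int((hi[1] - lo[1]) / binsize[1])
    counts = defaultdict(int)
    for x, y in rows:
        ix = int((x - lo[0]) / binsize[0])
        iy = int((y - lo[1]) / binsize[1])
        if 0 <= ix < nx and 0 <= iy < ny:
            counts[(ix, iy)] += 1
    out = []
    for ix in range(nx):            # first coordinate varies slowest
        for iy in range(ny):
            if (ix, iy) in counts:  # sparse output: skip empty cells
                out.append((lo[0] + (ix + 0.5) * binsize[0],
                            lo[1] + (iy + 0.5) * binsize[1],
                            counts[(ix, iy)]))
    return out
```

In a real invocation the grid geometry is of course supplied through the coords, bounds and binsizes (or nbins) parameters rather than function arguments.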
- ifmt=<in-format>
Specifies the format of the input table as specified by
parameter
in. The known formats are listed in SUN/256. This flag can be
used if you know what format your table is in. If it has the special value
(auto) (the default), then an attempt will be made to detect the format
of the table automatically. This cannot always be done correctly, however, in
which case the program will exit with an error explaining which formats were
attempted. This parameter is ignored for scheme-specified tables.
- istream=true|false
If set true, the input table specified by the
in
parameter will be read as a stream. It is necessary to give the
ifmt
parameter in this case. Depending on the required operations and processing
mode, this may cause the read to fail (sometimes it is necessary to read the
table more than once). It is not normally necessary to set this flag; in most
cases the data will be streamed automatically if that is the best thing to do.
However, it can sometimes result in lower resource usage when processing large
files in certain formats (such as VOTable). This parameter is ignored for
scheme-specified tables.
- in=<table>
The location of the input table. This may take one of the
following forms:
- A filename.
- A URL.
- The special value "-", meaning standard input. In this
case the input format must be given explicitly using the ifmt
parameter. Note that not all formats can be streamed in this way.
- A scheme specification of the form
:<scheme-name>:<scheme-args>.
- A system command line with either a "<" character at
the start, or a "|" character at the end
("<syscmd" or "syscmd|"). This
executes the given pipeline and reads from its standard output. This will
probably only work on unix-like systems.
In any case, compressed data in one of the supported compression formats (gzip,
Unix compress or bzip2) will be decompressed transparently.
- icmd=<cmds>
Specifies processing to be performed on the input table
as specified by parameter
in, before any other processing has taken
place. The value of this parameter is one or more of the filter commands
described in SUN/256. If more than one is given, they must be separated by
semicolon characters (";"). This parameter can be repeated multiple
times on the same command line to build up a list of processing steps. The
sequence of commands given in this way defines the processing pipeline which
is performed on the table.
Commands may alternatively be supplied in an external file, by
using the indirection character '@'. Thus a value of
"@filename" causes the file filename to be read for
a list of filter commands to execute. The commands in the file may be
separated by newline characters and/or semicolons, and lines which are blank
or which start with a '#' character are ignored.
- ocmd=<cmds>
Specifies processing to be performed on the output table,
after all other processing has taken place. The value of this parameter is one
or more of the filter commands described in SUN/256. If more than one is
given, they must be separated by semicolon characters (";"). This
parameter can be repeated multiple times on the same command line to build up
a list of processing steps. The sequence of commands given in this way defines
the processing pipeline which is performed on the table.
Commands may alternatively be supplied in an external file, by
using the indirection character '@'. Thus a value of
"@filename" causes the file filename to be read for
a list of filter commands to execute. The commands in the file may be
separated by newline characters and/or semicolons, and lines which are blank
or which start with a '#' character are ignored.
- omode=out|meta|stats|count|checksum|cgi|discard|topcat|samp|tosql|gui
The mode in which the result table will be output. The
default mode is
out, which means that the result will be written as a
new table to disk or elsewhere, as determined by the
out and
ofmt parameters. However, there are other possibilities, which
correspond to uses to which a table can be put other than outputting it, such
as displaying metadata, calculating statistics, or populating a table in an
SQL database. For some values of this parameter, additional parameters
(<mode-args>) are required to determine the exact behaviour.
Possible values are
- out
- meta
- stats
- count
- checksum
- cgi
- discard
- topcat
- samp
- tosql
- gui
Use the
help=omode flag or see SUN/256 for more information.
- out=<out-table>
The location of the output table. This is usually a
filename to write to. If it is equal to the special value "-" (the
default) the output table will be written to standard output.
This parameter must only be given if omode has its default
value of "out".
- ofmt=<out-format>
Specifies the format in which the output table will be
written (one of the ones in SUN/256 - matching is case-insensitive and you can
use just the first few letters). If it has the special value
"
(auto)" (the default), then the output filename will be
examined to try to guess what sort of file is required usually by looking at
the extension. If it's not obvious from the filename what output format is
intended, an error will result.
This parameter must only be given if omode has its default
value of "out".
- coords=<expr>
...
Defines the dimensions of the grid over which
accumulation will take place. The form of this value is a space-separated list
of words each giving a column name or algebraic expression defining one of the
dimensions of the output grid. For a 1-dimensional histogram, only one value
is required.
- logs=true|false
...
Determines whether each coordinate axis is linear or
logarithmic. By default the grid axes are linear, but if this parameter is
supplied with one or more true values, the bins on the corresponding axes are
assigned logarithmically instead.
If supplied, this parameter must have the same number of words as
the coords parameter.
- bounds=[<lo>]:[<hi>]
...
Gives the bounds for each dimension of the cube in data
coordinates. The form of the value is a space-separated list of words, each
giving an optional lower bound, then a colon, then an optional upper bound,
for instance "1:100 0:20" to represent a range for two-dimensional
output between 1 and 100 of the first coordinate (table column) and between 0
and 20 for the second. Either or both numbers may be omitted to indicate that
the bounds should be determined automatically by assessing the range of the
data in the table. A null value for the parameter indicates that all bounds
should be determined automatically for all the dimensions.
If any of the bounds need to be determined automatically in this
way, two passes through the data will be required, the first to determine
bounds and the second to calculate the map.
If supplied, this parameter must have the same number of words as
the coords parameter.
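When some bounds are left for automatic determination, the first of those two passes just scans the data for per-dimension extremes. A minimal sketch of that first pass (illustrative Python, not STILTS internals; the function name is invented, and None stands in for an omitted bound):

```python
def auto_bounds(rows, lo=None, hi=None):
    """First pass: fill in any unspecified bounds from the data range.
    lo/hi are per-dimension values; None means 'determine automatically'."""
    ndim = len(rows[0])
    lo = list(lo) if lo is not None else [None] * ndim
    hi = list(hi) if hi is not None else [None] * ndim
    for d in range(ndim):
        if lo[d] is None:
            lo[d] = min(r[d] for r in rows)
        if hi[d] is None:
            hi[d] = max(r[d] for r in rows)
    return lo, hi
```

Only after this pass completes can the bin grid be laid out, which is why a second pass over the data is then needed to do the actual accumulation.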
- binsizes=<size>
...
Gives the extent of the data bins in each dimension in
data coordinates. The form of the value is a space-separated list of values,
giving a list of extents for the first, second, ... dimension. Either this
parameter or the
nbins parameter must be supplied.
If supplied, this parameter must have the same number of words as
the coords parameter.
- nbins=<num>
...
Gives the approximate number of bins in each dimension.
The form of the value is a space-separated list of integers, giving the number
of bins for the output histogram in the first, second, ... dimension. An
attempt is made to use round numbers for bin sizes so the bin counts may not
be exactly as specified. Either this parameter or the
binsizes
parameter must be supplied.
If supplied, this parameter must have the same number of words as
the coords parameter.
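The idea of "round numbers" for bin sizes can be illustrated as follows. The exact rule STILTS applies is not documented here, so this sketch simply assumes a snap to 1, 2 or 5 times a power of ten, which is why the resulting bin count may differ somewhat from the requested one:

```python
import math

def round_bin_size(lo, hi, nbins):
    """Pick a 'round' bin size near (hi - lo) / nbins:
    1, 2 or 5 times a power of ten (illustrative rule only)."""
    raw = (hi - lo) / nbins
    power = 10 ** math.floor(math.log10(raw))
    for mult in (1, 2, 5, 10):
        if mult * power >= raw:
            return mult * power
```

For example, requesting 30 bins over the range 0 to 100 gives a raw size of about 3.3, which this rule snaps to 5, yielding 20 bins rather than 30.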
- cols=<expr>[;<combiner>[;<name>]]
...
Defines the quantities to be calculated. The value is a
space-separated list of items, one for each aggregated column in the output
table.
Each item is composed of one, two or three tokens, separated by
semicolon (";") characters:
- <expr>: (required) column name or expression using the
expression language for the quantity to be aggregated.
- <combiner>: (optional) combination method, using the same
options as for the combine parameter. If omitted, the value
specified for that parameter will be used.
- <name>: (optional) name of output column; if omitted, the
<expr> value (perhaps somewhat sanitised) will be used.
It is often sufficient just to supply a space-separated list of input table
column names for this parameter, but the additional syntax may be needed,
for instance to calculate both a sum and a mean of the same input
column.
The default value is "1;count;COUNT" which simply
provides an unweighted histogram, i.e. a count of the rows in each bin
(aggregation of the value "1" using the combination method
"count", yielding an output column named
"COUNT").
- combine=sum|sum-per-unit|count|count-per-unit|mean|median|Q1|Q3|min|max|stdev|hit
Defines the default way that values contributing to the
same density map bin are combined together to produce the value assigned to
that bin. Possible values are:
- sum: the sum of all the combined values per bin
- sum-per-unit: the sum of all the combined values per unit of bin
size
- count: the number of non-blank values per bin (weight is
ignored)
- count-per-unit: the number of non-blank values per unit of bin size
(weight is ignored)
- mean: the mean of the combined values
- median: the median
- Q1: first quartile
- Q3: third quartile
- min: the minimum of all the combined values
- max: the maximum of all the combined values
- stdev: the sample standard deviation of the combined values
- hit: 1 if any values present, NaN otherwise (weight is
ignored)
Note this value may be overridden on a per-column basis by the
cols parameter.
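Conceptually, each combiner is just a function applied to the list of values that fell into one bin. A few of the methods can be sketched in Python like this (illustrative, not STILTS code; the per-unit variants, which also need the bin size, are omitted):

```python
import statistics

# A few combination methods expressed as plain functions over the
# list of values that landed in one bin (illustrative only).
COMBINERS = {
    "sum": sum,
    "count": len,
    "mean": statistics.mean,
    "median": statistics.median,
    "min": min,
    "max": max,
    "stdev": statistics.stdev,          # sample standard deviation
    "hit": lambda vs: 1 if vs else float("nan"),
}

def combine_bin(values, method):
    """Apply the named combiner to the values accumulated in one bin."""
    return COMBINERS[method](values)
```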
- sparse=true|false
Determines whether a row is written for every cell in the
defined grid, or only for those cells in which data appears in the input. The
result will usually be more compact if this is set true, but if you want to
compare results from different runs it may be convenient to set it false.
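The difference between the two settings amounts to which cell indices get an output row (illustrative Python sketch; names invented, with counts mapping cell index to accumulated value):

```python
def output_rows(counts, ncell, sparse):
    """Emit one row per grid cell index: only occupied cells if sparse
    is true, every cell in the grid (blank cells included) if false."""
    if sparse:
        return [(i, counts[i]) for i in sorted(counts)]
    return [(i, counts.get(i)) for i in range(ncell)]
```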
- runner=sequential|parallel|parallel<n>|partest
Selects the threading implementation, i.e. to what extent
processing is done in parallel. The options are currently:
- sequential: runs using only a single thread
- parallel: runs using multiple threads for large tables, with
parallelism given by the number of available processors
- parallel<n>: runs using multiple threads for large tables,
with parallelism given by the supplied value <n>
- partest: runs using multiple threads even when tables are small
(only intended for testing purposes)
Using parallel processing can speed up execution considerably;
however, depending on the I/O operations required, it can also slow it down
by disrupting patterns of disk access. If the content of a file is on a
solid state disk, or is already in cache for instance because a similar
command has been run recently, then parallel will probably be faster.
However, if the data is being read directly from a spinning disk, for
instance because the file is too large to fit in RAM, then sequential
or parallel<n> with a small <n> may be faster.
The value of this parameter should make only very tiny differences
to the output table. If you notice significant discrepancies please
report them.