VNL-FILTER(1) | vnlog | VNL-FILTER(1)
vnl-filter - filters vnlogs to select particular rows, fields
$ cat run.vnl
# time x y z temperature
3 1   2.3 4.8 30
4 1.1 2.2 4.7 31
6 1   2.0 4.0 35
7 1   1.6 3.1 42

$ <run.vnl vnl-filter -p x,y,z | vnl-align
# x   y   z
  1   2.3 4.8
  1.1 2.2 4.7
  1   2.0 4.0
  1   1.6 3.1

$ <run.vnl vnl-filter -p i=NR,time,'dist=sqrt(x*x + y*y + z*z)' | vnl-align
# i time dist
  1    3 5.41572
  2    4 5.30471
  3    6 4.58258
  4    7 3.62905

$ <run.vnl vnl-filter 'temperature >= 35' | vnl-align
# time x y   z   temperature
     6 1 2.0 4.0          35
     7 1 1.6 3.1          42

$ <run.vnl vnl-filter --eval '{s += temperature} END { print "mean temp: " s/NR}'
mean temp: 34.5

$ <run.vnl vnl-filter -p x,y | feedgnuplot --terminal 'dumb 80,30' --unset grid --domain --lines --exit
[ASCII-art line plot of y against x, rendered by feedgnuplot on the dumb terminal]
This tool is largely a frontend for awk, operating on vnlog files. Both the input and the output are vnlog. The tool makes it very simple to select specific rows and columns for output, and to manipulate the data in various ways.
This is a UNIX-style tool, so the input/output of this tool is strictly STDIN/STDOUT. Furthermore, in its usual form this tool is a filter, so the format of the output is exactly the same as the format of the input. The exception to this is when using "--eval", in which the output is dependent on whatever expression we're evaluating.
This tool is convenient for processing both stored data and live data; in the latter case, it's very useful to pipe the streaming output to "feedgnuplot --stream" to get a realtime visualization of the incoming data.
This tool reads enough of the input file to get a legend, at which point it constructs an awk program to do the main work, and execs to awk (it's possible to use perl as well, but this isn't as fast).
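A sketch of what such a generated program might look like: for input whose legend is "# time x y", picking columns "x" and "y" boils down to awk that rewrites the legend and prints the chosen fields by index (this is an illustration only, not the program vnl-filter actually emits):

```shell
# Hypothetical input with legend "# time x y"; pick columns x and y.
printf '# time x y\n3 1 2.3\n4 1.1 2.2\n' |
  awk '/^#/ { print "# x y"; next }   # rewrite the legend
       { print $2, $3 }'              # x is field 2, y is field 3
```

This prints the new legend "# x y" followed by the two picked columns of each row.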
The input/output data is vnlog: a plain-text table of values. Any lines beginning with "#" are treated as comments, and are passed through. The first line that begins with "#" but not "##" or "#!" is a legend line. After the "#", follow whitespace-separated field names. Each subsequent line is whitespace-separated values matching this legend. For instance, this is a valid vnlog file:
#!/usr/bin/something
## more comments
# x y z
-0.016107 0.004362 0.005369
-0.017449 0.006711 0.006711
-0.018456 0.014093 0.006711
-0.017449 0.018791 0.006376
"vnl-filter" uses this format for both the input and the output. The comments are preserved, but the legend is updated to reflect the fields in the output file.
A string "-" is used to indicate an undefined value, so this is also a valid vnlog file:
# x y z
1 2 3
4 - 6
- - 7
To select specific columns, pass their names to the "-p" option (short for "--print" or "--pick", which are synonyms). In its simplest form, to grab only columns "x" and "y", do
vnl-filter -p x,y
See the detailed description of "-p" below for more detail.
To select specific rows, we use matches expressions. Anything on the "vnl-filter" commandline and not attached to any "--xxx" option is such an expression. For instance
vnl-filter 'size > 10'
would select only those rows whose "size" column contains a value > 10. See the detailed description of matches expressions below for more detail.
"vnl-filter" supports the context output options ("-A", "-B" and "-C") exactly like the "grep" tool. I.e. to print all rows whose "size" column contains a value > 10, but to also include the 3 rows immediately before and after each such matching row, do this:
vnl-filter -C3 'size > 10'
"-B" reports the rows before matching ones and "-A" the rows after matching ones. "-C" reports both. Note that this applies only to matches expressions: records skipped because they fail "--has" or "--skipempty" are not included in contextual output.
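grep's "-A" behavior ("print N rows after each match") can be sketched in a few lines of awk. Here we match rows whose first column is > 10 and keep 1 trailing row of context (an illustration of the mechanism, not vnl-filter's actual implementation):

```shell
# Match $1 > 10; also print the 1 row following each match (grep -A1 style).
printf '1\n2\n20\n3\n4\n30\n5\n' |
  awk -v n=1 '$1 > 10 { print; c = n; next }  # match: print and arm counter
              c-- > 0'                        # still in the after-context
```

"-B" requires buffering the preceding N rows, and "-C" combines both mechanisms.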
By default, the parsing of arguments and the legend happens in perl, which then constructs a simple awk script, and invokes "mawk" to actually read the data and to process it. This is done because awk is lighter weight and runs faster, which is important because our data sets could be quite large. We default to "mawk" specifically, since this is a simpler implementation than "gawk", and runs much faster. If for whatever reason we want to do everything with perl, this can be requested with the "--perl" option.
For convenience we support several special functions in any expression passed on to awk or perl (named expressions, matches expressions, "--eval" strings). These generally maintain some internal state, and vnl-filter makes sure that this state is consistent. Note that these are evaluated after "--skipcomments" and "--has". So any record skipped because of a "--has" expression, for instance, will not be considered in "prev()", "diff()" and so on.
$ cat tst.vnl
# time x
100 200
101 212
102 209

$ <tst.vnl vnl-filter -p 't=rel(time),x=rel(x)'
# t x
0 0
1 12
2 9
$ <tst.vnl vnl-filter -p x,'d1=diff(x),d2=diff(diff(x))' | vnl-align
#   x d1 d2
    1  -  -
    8  7  7
   27 19 12
   64 37 18
  125 61 24
Example:
$ <tst.vnl vnl-filter -p 'x,s=sum(x),ds=diff(sum(x))' | vnl-align
#   x   s  ds
    1   1   -
    8   9   8
   27  36  27
   64 100  64
  125 225 125
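Internally, a stateful function like "diff()" amounts to awk remembering the previous record's value and printing "-" for the first record, roughly like this (a sketch, not the code vnl-filter actually generates):

```shell
# diff(): current value minus the previous one; "-" on the first record.
printf '1\n8\n27\n64\n125\n' |
  awk '{ print (NR == 1 ? "-" : $1 - prev); prev = $1 }'
```

This reproduces the "d1" column in the diff() example above.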
These options provide the mechanism to select specific columns for output. For instance, to pull out columns called "lat", "lon", and any column whose name contains the string "feature_", do
vnl-filter -p lat,lon,'feature_.*'
or, equivalently
vnl-filter --print lat --print lon --print 'feature_.*'
We look for exact column name matches first, and if none are found, we try a regex. If there is no column named exactly "feature_", then the above is equivalent to
vnl-filter -p lat,lon,feature_
This mechanism is much more powerful than just selecting columns. First off, we can rename chosen fields:
vnl-filter -p w=feature_width
would pick the "feature_width" field, but the resulting column in the output would be named "w". When renaming a column in this way regexen are not supported, and exact field names must be given. But the string to the right of the "=" is passed on directly to awk (after replacing field names with column indices), so any awk expression can be used here. For instance to compute the length of a vector in separate columns "x", "y", and "z" you can do:
vnl-filter -p 'l=sqrt(x*x + y*y + z*z)'
A single column called "l" would be produced.
We can also exclude columns by preceding their name with "!". This works like you would expect.
Example. To grab all the columns except the temperature(s) do this:
vnl-filter -p !temperature
To grab all the columns that describe something about a robot (columns whose names have the string "robot_" in them), but not its temperature (i.e. not "robot_temperature"), do this:
vnl-filter -p robot_,!temperature
--has col

Used to select records (rows) that have a non-empty value in a particular field (column). A null value in a column is designated with a single "-". To select only records that have a value in the "x" column, pass "--has x". To select records that have data in all of a given set of columns, the "--has" option can be repeated, or the multiple columns can be given as a whitespace-less, comma-separated list: to keep only records that have data in both columns "x" and "y", pass "--has x,y" or "--has x --has y". To combine multiple columns with a logical OR instead (i.e. to select rows that have data in any of a given set of columns), use a matches expression, as documented below.
If we want to select a column and pick only rows that have a value in this column, a shorthand syntax exists:
vnl-filter --has col -p col
is equivalent to
vnl-filter -p +col
Note that just like the column specifications in "-p" the columns given to "--has" must match exactly or as a regex. In either case, a unique matching column must be found.
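In awk terms, "--has x" (with "x" here assumed to be the first column) reduces to dropping data rows whose field holds the "-" null marker (a sketch of the behavior, not the actual generated program):

```shell
# Pass comments through; keep only data rows with a value in column 1.
printf '# x y\n1 2\n- 6\n4 5\n' |
  awk '/^#/ { print; next }
       $1 != "-"'
```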
Anything on the commandline not attached to any "--xxx" option is a matches expression. These are used to select particular records (rows) in a data file. For each row, we evaluate all the expressions. If all the expressions evaluate to true, that row is output. This expression is passed directly to the awk (or perl) backend.
Example: to select all rows that have valid data in column "a" or column "b" or column "c" you can
vnl-filter 'a != "-" || b != "-" || c != "-"'
or
vnl-filter --perl 'defined a || defined b || defined c'
As with the named expressions given to "-p" (described above), these are passed directly to awk, so anything that can be done with awk is supported here.
-A N

Output N lines following each matching row, even lines that do not themselves match. This works just like the "grep" option of the same name. See "Context lines" above.
-B N

Output N lines preceding each matching row, even lines that do not themselves match. This works just like the "grep" option of the same name. See "Context lines" above.
-C N

Output N lines preceding and following each matching row, even lines that do not themselves match. This works just like the "grep" option of the same name. See "Context lines" above.
--eval expr

Instead of printing out all matching records and picked columns, just run the given chunk of awk (or perl). In this mode of operation, "vnl-filter" acts like a glorified awk that allows fields to be accessed by name instead of by number (as they would be in raw awk).
Since the expression may print anything or nothing at all, the output in this mode is not necessarily itself a valid vnlog stream. And no column-selecting arguments should be given, since they make no sense in this mode.
In awk, the expression is a full set of pattern/action statements. For example, to print the sum of columns "a" and "b" in each row, and at the end, the sum of all values in the "a" column:
vnl-filter --eval '{print a+b; suma += a} END {print suma}'
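With a legend of "# a b", the names "a" and "b" resolve to fields 1 and 2, so the command above is effectively this plain awk (a sketch of the translation; the real generated program also handles comments and null values):

```shell
# Sum of a+b per row, then the total of column a at the end.
printf '# a b\n1 2\n3 4\n' |
  awk '!/^#/ { print $1 + $2; suma += $1 }
       END   { print suma }'
```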
In perl the arbitrary expression fits in like this:
while(<>)                 # read each line
{
    next unless matches;  # skip non-matching lines
    eval expression;      # evaluate the arbitrary expression
}
--function 'name(args) { body }'

Defines the given expression as a function that can be used in other expressions. This is most useful when you want to print something that can't trivially be written as a simple expression. For instance:
$ cat tst.vnl
# s
1-2
3-4
5-6

$ < tst.vnl vnl-filter \
    --function 'before(x) { sub("-.*","",x); return x }' \
    --function 'after(x)  { sub(".*-","",x); return x }' \
    -p 'b=before(s),a=after(s)'
# b a
1 2
3 4
5 6
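The "--function" strings become ordinary user-defined awk functions. The example above is roughly equivalent to this plain awk, with the column name "s" resolved to field 1 (legend handling omitted for brevity):

```shell
# Split "a-b" strings into the parts before and after the dash.
printf '1-2\n3-4\n' |
  awk 'function before(x) { sub("-.*", "", x); return x }
       function after(x)  { sub(".*-", "", x); return x }
       { print before($1), after($1) }'
```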
See the CAVEATS section below if you're doing something complicated enough to need this.
--[no]skipempty

Do [not] skip records where all fields are blank. By default we do skip all empty records; to include them, pass "--noskipempty".
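Treating a record as "empty" when every field is the "-" null marker (an assumption on my part about what "blank" means here), the skipping logic looks roughly like this in awk:

```shell
# Drop data rows where every field is "-"; keep rows with at least one value.
printf '# x y\n1 2\n- -\n3 -\n' |
  awk '/^#/ { print; next }
       { for (i = 1; i <= NF; i++)
           if ($i != "-") { print; next } }'
```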
--skipcomments

Don't output non-legend comments.
--perl

By default all processing is performed by "mawk"; if for whatever reason we want perl instead, pass "--perl". Both modes work, but "mawk" is noticeably faster. "--perl" could be useful because perl is more powerful, which matters since a number of things pass commandline strings directly to the underlying language (named expressions, matches expressions, "--eval" strings). Note that while variables in perl use sigils, column references should not use sigils. To print the sum of all values in column "a" you'd do this in awk
vnl-filter --eval '{suma += a} END {print suma}'
and this in perl
vnl-filter --perl --eval '{$suma += a} END {say $suma}'
The perl strings are evaluated without "use strict" or "use warnings" so I didn't have to declare $suma in the example.
With "--perl", empty strings ("-" in the vnlog file) are converted to "undef".
Used for debugging. This prints the final awk (or perl) program we run for the given commandline options and given input. This is the final program, with the column references resolved to numeric indices, so one can figure out what went wrong.
--unbuffered

Flushes the output after each print. This makes sure each line is output as soon as it is available, which is crucial for realtime output and streaming plots.
Synonym for "--unbuffered"
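In awk, unbuffered operation amounts to flushing after every print, roughly (a sketch):

```shell
# Emit each line as soon as it is read, flushing the output buffer.
printf 'a\nb\n' |
  awk '{ print; fflush() }'
```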
This tool is very lax in its input validation (on purpose). As a result, columns with names like %CPU and "TIME+" do work (i.e. you can more or less feed in output from "top -b"). The downside is that shooting yourself in the foot is possible. This tradeoff is currently tuned to be very permissive, which works well for my use cases. I'd be interested in hearing other people's experiences. Potential pitfalls/unexpected behaviors:
vnl-filter 'key == 5'
This works. But unlike a real database this is clearly a linear lookup. With large data files, this would be significantly slower than the logarithmic searches provided by a real database. The meaning of "large" and "significant" varies, and you should test it. In my experience vnlog "databases" scale surprisingly well. But at some point, importing your data to something like sqlite is well worth it.
https://github.com/dkogan/vnlog/
Dima Kogan "<dima@secretsauce.net>"
Copyright 2016-2017 California Institute of Technology
Copyright 2017-2019 Dima Kogan "<dima@secretsauce.net>"
This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.
2020-12-09