Benchmark::Timer - Benchmarking with statistical confidence
    # Non-statistical usage

    use Benchmark::Timer;

    $t = Benchmark::Timer->new(skip => 1);

    for (1 .. 1000) {
      $t->start('tag');
      &long_running_operation();
      $t->stop('tag');
    }
    print $t->report;

    # --------------------------------------------------------------------
    # Statistical usage

    use Benchmark::Timer;

    $t = Benchmark::Timer->new(skip => 1, confidence => 97.5, error => 2);

    while ($t->need_more_samples('tag')) {
      $t->start('tag');
      &long_running_operation();
      $t->stop('tag');
    }
    print $t->report;
The Benchmark::Timer class allows you to time portions of code conveniently, as well as benchmark code by allowing timings of repeated trials. It is perfect for when you need more precise information about the running time of portions of your code than the Benchmark module will give you, but don't want to go all out and profile your code.
The methodology is simple: create a Benchmark::Timer object, and wrap the portions of code that you want to benchmark with "start()" and "stop()" method calls. You can supply a tag to those methods if you plan to time multiple portions of code. If you provide error and confidence values, you can also use "need_more_samples()" to determine, statistically, whether you need to collect more data.
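For example, here is a minimal sketch of timing two separately tagged portions of code; parse_input() and write_output() are hypothetical subroutines standing in for your own code:

    use Benchmark::Timer;

    my $t = Benchmark::Timer->new;

    # Time two separate portions of code under distinct tags.
    $t->start('parse');
    parse_input();                  # hypothetical subroutine
    $t->stop('parse');

    $t->start('write');
    write_output();                 # hypothetical subroutine
    $t->stop('write');

    print $t->report;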
After you have run your code, you can obtain information about the running time by calling the "results()" method, or get a descriptive benchmark report by calling "report()". If you run your code over multiple trials, the average time is reported. This is wonderful for benchmarking time-critical portions of code in a rigorous way. You can also optionally choose to skip any number of initial trials to cut down on initial case irregularities.
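As a sketch of consuming the results programmatically (this assumes, per the description above, that "results()" yields tag/average-time pairs, with times in seconds):

    # Report the mean running time recorded for each tag.
    my %results = $t->results;
    for my $tag (keys %results) {
      printf "%s averaged %g seconds\n", $tag, $results{$tag};
    }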
In all of the following methods, $tag refers to the user-supplied name of the code being timed. Unless otherwise specified, $tag defaults to the tag of the last call to "start()", or "_default" if "start()" was not previously called with a tag.
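A short sketch of the default-tag behavior (some_operation() is a hypothetical subroutine):

    # With no tag supplied, timings accumulate under '_default'.
    $t->start;
    some_operation();
    $t->stop;            # defaults to the tag of the last start()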
The "error" and "confidence" arguments to "new()" must be specified together: if you supply one, you must also supply the other.
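For instance, a constructor call supplying the two together (the particular values are illustrative only):

    # Ask for 95% confidence that reported averages are within 5%
    # of the true averages.
    my $t = Benchmark::Timer->new(confidence => 95, error => 5);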
The statistical test in "need_more_samples()" assumes that the data are normally distributed.
If called with a $tag, returns the raw timing data for that $tag as an array (or a reference to an array if called in scalar context). This is useful for feeding to something like the Statistics::Descriptive package.
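For example, a sketch of summarizing one tag's raw timings with Statistics::Descriptive:

    use Statistics::Descriptive;

    # Fetch the raw timings recorded for a single tag.
    my @timings = $t->data('tag');

    my $stat = Statistics::Descriptive::Full->new;
    $stat->add_data(@timings);

    printf "median %g, standard deviation %g\n",
           $stat->median, $stat->standard_deviation;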
If called with no arguments, returns the raw timing data as a hash keyed on tags, where the values of the hash are lists of timings for that code. In scalar context, it returns a reference to that hash. As with "results()", the data is internally represented as an array so you can recover the original tag order by assigning to an array instead of a hash.
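A sketch of recovering the original tag order, assuming the list form interleaves each tag with a reference to its array of timings:

    # As a hash, tag order is lost:
    my %data = $t->data;

    # As an array, the original tag order is preserved:
    my @data = $t->data;
    while (my ($tag, $timings) = splice(@data, 0, 2)) {
      printf "%s: %d samples\n", $tag, scalar @$timings;
    }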
Benchmarking is an inherently futile activity, fraught with uncertainty not dissimilar to that experienced in quantum mechanics. But things are a little better if you apply statistics.
This code is distributed under the GNU General Public License (GPL) Version 2. See the file LICENSE in the distribution for details.
The original code (written before April 20, 2001) was written by Andrew Ho <andrew@zeuscat.com>, and is copyright (c) 2000-2001 Andrew Ho. Versions up to 0.5 are distributed under the same terms as Perl.
Maintenance of this module is now being done by David Coppit <david@coppit.org>.
Benchmark, Time::HiRes, Time::Stopwatch, Statistics::Descriptive