Benchmarks¶
Introduction¶
Benchmarks allow tracking performance from release to release and verifying that the latest changes have not affected it drastically. Benchmarks are based on the perf module.
How to run¶
The dependencies from requirements/dev.txt should be installed before we can proceed with benchmarks. Please also make sure that you have configured your OS to get reliable results.
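Assuming a pip-based workflow inside a virtual environment (the exact commands may differ in your setup), the installation step could look like:

```shell
# Create and activate a virtual environment, then install the dev dependencies
python -m venv .venv
source .venv/bin/activate
pip install -r requirements/dev.txt
```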
To run the benchmarks, execute the following command:
$ python benchmarks/benchmark.py
This would run benchmarks for both classes (MultiDict and CIMultiDict) of both implementations (Python and Cython).
To run benchmarks for a specific class of a specific implementation, use the --impl option:
$ python benchmarks/benchmark.py --impl multidict_cython
would run benchmarks only for MultiDict implemented in Cython.
Please use --help to see all available options. Most of the options are described in perf's Runner documentation.
How to compare implementations¶
The --impl option allows running benchmarks for a specific implementation of a class. Combined with the compare_to command of the perf module, it gives a good picture of how an implementation performs:
$ python benchmarks/benchmark.py --impl multidict_cython -o multidict_cy.json
$ python benchmarks/benchmark.py --impl multidict_python -o multidict_py.json
$ python -m perf compare_to multidict_cy.json multidict_py.json