pytest-benchmark
================

Overview
========

.. start-badges

.. list-table::
    :stub-columns: 1

    * - docs
      - |docs| |gitter|
    * - tests
      - | |travis| |appveyor| |requires|
        | |coveralls| |codecov|
    * - package
      - | |version| |wheel| |supported-versions| |supported-implementations|
        | |commits-since|

.. |docs| image::
    :target:
    :alt: Documentation Status

.. |gitter| image::
    :target:
    :alt: Join the chat at

.. |travis| image::
    :target:
    :alt: Travis-CI Build Status

.. |appveyor| image::
    :target:
    :alt: AppVeyor Build Status

.. |requires| image::
    :target:
    :alt: Requirements Status

.. |coveralls| image::
    :target:
    :alt: Coverage Status

.. |codecov| image::
    :target:
    :alt: Coverage Status

.. |version| image::
    :target:
    :alt: PyPI Package latest release

.. |wheel| image::
    :target:
    :alt: PyPI Wheel

.. |supported-versions| image::
    :target:
    :alt: Supported versions

.. |supported-implementations| image::
    :target:
    :alt: Supported implementations

.. |commits-since| image::
    :target:
    :alt: Commits since latest release

.. end-badges

A pytest fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer.

See calibration_ and FAQ_.
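The calibration idea can be sketched roughly as follows. This is an illustrative simplification, not the plugin's actual algorithm; ``pick_iterations`` and ``min_time`` are hypothetical names:

```python
import time

def pick_iterations(fn, timer=time.perf_counter, min_time=1e-5):
    # Hypothetical sketch of round calibration: double the iteration
    # count until one round runs long enough to be measured reliably
    # above the timer's resolution.
    iterations = 1
    while True:
        start = timer()
        for _ in range(iterations):
            fn()
        elapsed = timer() - start
        if elapsed >= min_time:
            return iterations
        iterations *= 2
```

Grouping many iterations into one timed round is what keeps timer resolution and call overhead from drowning out very fast functions.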

* Free software: BSD 2-Clause License



Installation
============

::

    pip install pytest-benchmark


Documentation
=============

For the latest release: <>_.

For the master branch (may include documentation fixes): <>_.


But first, a prologue:

This plugin tightly integrates into pytest. To use this effectively you should know a thing or two about pytest first.
Take a look at the `introductory material <>`_
or watch `talks <>`_.

A few notes:

* This plugin benchmarks functions and only that. If you want to measure a block of code
  or a whole program you will need to write a wrapper function.
* In a test you can only benchmark one function. If you want to benchmark many functions write more tests or
  use `parametrization <>`_.
* To run the benchmarks you simply use `pytest` to run your "tests". The plugin will automatically do the
  benchmarking and generate a result table. Run ``pytest --help`` for more details.

This plugin provides a benchmark fixture. This fixture is a callable object that will benchmark any function passed to it.


.. code-block:: python

    import time

    def something(duration=0.000001):
        """Function that needs some serious benchmarking."""
        time.sleep(duration)
        # You may return anything you want, like the result of a computation
        return 123

    def test_my_stuff(benchmark):
        # benchmark something
        result = benchmark(something)

        # Extra code, to verify that the run completed correctly.
        # Sometimes you may want to check the result, fast functions
        # are no good if they return incorrect results :-)
        assert result == 123

You can also pass extra arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.02)

Or even keyword arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        # time.sleep rejects keyword arguments, so use a function
        # that accepts them, like something() above.
        benchmark(something, duration=0.02)

Another pattern seen in the wild, that is not recommended for micro-benchmarks (very fast code) but may be convenient:

.. code-block:: python

    def test_my_stuff(benchmark):
        @benchmark
        def something():  # unnecessary function call
            time.sleep(0.000001)

A better way is to just benchmark the final function:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.000001)  # way more accurate results!

If you need fine control over how the benchmark is run (like a setup function, or exact control of iterations and rounds), there's a special mode - pedantic_:

.. code-block:: python

    def my_special_setup():
        ...

    def test_with_setup(benchmark):
        benchmark.pedantic(something, setup=my_special_setup, args=(1, 2, 3), kwargs={'foo': 'bar'}, iterations=10, rounds=100)


Screenshots
===========

Normal run:

.. image:: :alt: Screenshot of pytest summary

Compare mode (--benchmark-compare):

.. image:: :alt: Screenshot of pytest summary in compare mode

Histogram (--benchmark-histogram):

.. image:: :alt: Histogram sample


Also, it has `nice tooltips <>`_.


To run all the tests run::



.. _FAQ:
.. _calibration:
.. _pedantic:
