☠ This project is not maintained anymore. We highly recommend switching to py-spy which provides better performance and usability.
The profiling package is an interactive continuous Python profiler, inspired by the Unity 3D profiler.
Install the latest release via PyPI:
$ pip install profiling
To profile a single program, simply run the `profiling` command:
$ profiling your-program.py
An interactive viewer will then be launched.
If your program uses greenlets, choose the `greenlet` timer:
$ profiling --timer=greenlet your-program.py
To save the profiling result to a file, use the `--dump` option. You can
browse the saved result with the `view` subcommand:

$ profiling --dump=your-program.prf your-program.py
$ profiling view your-program.prf
If your script reads `sys.argv`, append your arguments after `--`. This isolates your arguments from the profiling command's own options:
$ profiling your-program.py -- --your-flag --your-param=42
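The effect of the `--` separator can be sketched with a small stand-alone helper (a hypothetical illustration, not the package's actual argument handling):

```python
import sys


def split_args(argv):
    """Split a command line at the first '--' separator.

    Everything before '--' belongs to the profiling command;
    everything after it is passed through to the profiled script.
    """
    if '--' in argv:
        i = argv.index('--')
        return argv[:i], argv[i + 1:]
    return argv, []


profiler_args, script_args = split_args(
    ['your-program.py', '--', '--your-flag', '--your-param=42'])
# script_args now holds only the flags meant for your-program.py.
```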
If your program has a long lifetime, like a web server, a profiling result
at the end of the program is not helpful enough. You probably need a continuous
profiler, which can be achieved with the `live-profile` subcommand:
$ profiling live-profile webserver.py
See a demo:
There is also a live-profiling server. The server does not profile the program at ordinary times; when a client connects, it starts profiling and reports the results to all connected clients.
Start a profiling server with the `remote-profile` subcommand:
$ profiling remote-profile webserver.py --bind 127.0.0.1:8912
Then run a client for the server with the `view` subcommand:
$ profiling view 127.0.0.1:8912
TracingProfiler, the default profiler, implements a deterministic profiler
with a deep call graph. Of course, it has heavy overhead, which can
pollute your profiling result or slow your application down.
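The idea behind deterministic tracing can be sketched with Python's standard `sys.setprofile` hook, which fires on every call and return (a simplified illustration of the technique, not the package's actual implementation):

```python
import sys
import time
from collections import defaultdict

calls = defaultdict(int)        # per-function call counts
inclusive = defaultdict(float)  # per-function inclusive time

stack = []

def tracer(frame, event, arg):
    # Invoked by the interpreter on every Python call and return.
    if event == 'call':
        calls[frame.f_code.co_name] += 1
        stack.append((frame.f_code.co_name, time.perf_counter()))
    elif event == 'return' and stack:
        name, started = stack.pop()
        inclusive[name] += time.perf_counter() - started

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.setprofile(tracer)
fib(10)
sys.setprofile(None)
# The hook ran once for every single call of fib(), which is
# exactly where the heavy overhead of deterministic tracing comes from.
```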
SamplingProfiler implements a statistical profiler. Like other
statistical profilers, it has only very low overhead. You can choose it
with the `-S` (`--sampling`) option:
$ profiling live-profile -S webserver.py
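The core idea of statistical profiling can be sketched with the standard library: a background thread periodically records which function the target thread is executing via `sys._current_frames()` (a simplified illustration of the technique, not the package's actual implementation):

```python
import sys
import threading
import time
from collections import Counter

samples = Counter()

def sampler(thread_id, interval=0.001, duration=0.2):
    # Wake up periodically and record the function currently
    # running on the target thread.
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        frame = sys._current_frames().get(thread_id)
        if frame is not None:
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

def busy():
    # CPU-bound work to be sampled.
    deadline = time.monotonic() + 0.3
    while time.monotonic() < deadline:
        pass

t = threading.Thread(target=sampler, args=(threading.main_thread().ident,))
t.start()
busy()
t.join()
# Most samples land in busy(); the program itself is never traced
# call-by-call, which is why the overhead stays very low.
```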
Do you use `timeit` to check the performance of your code?
$ python -m timeit -s 'from trueskill import *' 'rate_1vs1(Rating(), Rating())'
1000 loops, best of 3: 722 usec per loop
If you want to profile the timed code, simply use the `timeit` subcommand instead:
$ profiling timeit -s 'from trueskill import *' 'rate_1vs1(Rating(), Rating())'
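For reference, the same kind of measurement is available from Python code via the standard `timeit` module (shown here with a cheap stand-in statement, since `trueskill` may not be installed):

```python
import timeit

# Equivalent in spirit to `python -m timeit 'STMT'`:
# run the statement 1000 times and return the total elapsed seconds.
elapsed = timeit.timeit('sum(range(100))', number=1000)
```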
You can also profile your program with the profiler API directly:
```python
from profiling.tracing import TracingProfiler

# profile your program.
profiler = TracingProfiler()
profiler.start()
...  # run your program.
profiler.stop()

# or using a context manager.
with profiler:
    ...  # run your program.

# view and interact with the result.
profiler.run_viewer()

# or save profile data to a file.
profiler.dump('path/to/file')
```
- `CALLS` - total call count of the function.
- `OWN` (Exclusive Time) - total time spent in the function, excluding sub-calls.
- `OWN/CALLS` - exclusive time per call.
- `OWN%` - exclusive time as a fraction of the total spent time.
- `DEEP` (Inclusive Time) - total time spent in the function, including sub-calls.
- `DEEP/CALLS` - inclusive time per call.
- `DEEP%` - inclusive time as a fraction of the total spent time.
- `OWN` (Exclusive Samples) - number of samples collected during the direct execution of the function.
- `OWN%` - exclusive samples as a fraction of the total number of samples.
- `DEEP` (Inclusive Samples) - number of samples collected during the execution of the function, including sub-calls.
- `DEEP%` - inclusive samples as a fraction of the total number of samples.
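The relationship between the exclusive and inclusive columns can be illustrated with a toy call tree and invented numbers (hypothetical values, for illustration only):

```python
# Toy call tree: f() calls g() once; timings are made up for illustration.
deep = {'f': 1.0, 'g': 0.6}   # DEEP: inclusive time per function
calls = {'f': 1, 'g': 1}      # CALLS: call counts

# OWN: inclusive time minus the inclusive time of direct sub-calls.
own = {'f': deep['f'] - deep['g'], 'g': deep['g']}

total = deep['f']             # the root's inclusive time is the total run time
own_pct = {name: t / total for name, t in own.items()}
deep_per_call = {name: deep[name] / calls[name] for name in deep}
```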
There are some additional requirements for running the test suite, which can be installed with the following command:
$ pip install $(python test/fit_requirements.py test/requirements.txt)
Then you should be able to run the tests:
$ pytest -v