
schooltool-developers team mailing list archive

Re: Down the rabbit hole, profiling your Python code - Remco Wendt

 

Remco posted a link to his slides on the python-nl list:

Hello all,

Thanks again for an interesting and inspiring Django Meetup. You can find the slides of my presentation 'Down the rabbit hole, profiling your Python code' on slideshare: http://www.slideshare.net/sshanx/down-the-rabbit-hole-profiling-in-django/ . Of course Reinout also made notes, which you can find here: http://reinout.vanrees.org/weblog/2012/04/18/profiling-python.html
Next meetup will be on the 4th of July. Hope to see you all there! And do remember that anything interesting you discover, the classic aha moment (Aha-Erlebnis), could be great material for a lightning talk!
Also worth noting: on May 11th there's the PyGrunn conference in Groningen, featuring Michael Bayer (author of SQLAlchemy) as keynote speaker and also Armin Ronacher, see: http://www.pygrunn.nl/. I'm attending and speaking, so hope to see you all there!
Cheers,
Remco
--
Maykin Media
Herengracht 416, 1017 BZ Amsterdam
tel.: +31 (0)20 753 05 23
mob.: +31 (0)6 187 967 06
http://www.maykinmedia.nl


On 19 Apr 2012, at 00:06, Tom Hoffman wrote:


Sent to you by Tom Hoffman via Google Reader:


Down the rabbit hole, profiling your Python code - Remco Wendt
via Reinout van Rees' weblog by Reinout van Rees on 4/18/12

(Talk at the April 2012 Dutch Django meeting)

There's a lot happening between an incoming request and an outgoing response. Part of it is your code, part is in libraries. You don't care about most of those parts, as you probably mostly care about the resulting end product for the customer.
There is a lot of interest in scaling, but not so much in profiling  
your performance. Profiling means running your code in such a way  
that Python's interpreter gathers statistics on all the calls you  
make. This has a huge performance impact, so don't use it in  
production. But it gives you invaluable data on what's actually  
happening in your code.
The most interesting thing about profiling is the low hanging fruit.  
Often there are two or three expensive functions that you can easily  
improve: with a limited effort you get a lot of extra performance.  
It is not effective to focus on a hard problem that you can only improve by 2%.
Python has lots of tools. The most well-known is cProfile (plain profile is not that good; hotshot is deprecated). The line_profiler tool goes a level deeper: it reports how often each line is executed and how much time is spent there (a hedged usage sketch follows below).
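As an illustration (not from the talk, and assuming line_profiler's documented workflow): you decorate the function you care about and run the script through kernprof, which injects the @profile decorator at run time.

# your_script.py -- hedged line_profiler sketch; the @profile builtin only
# exists when the script is run via kernprof, e.g. "kernprof -l -v your_script.py"
@profile
def slow_function(items):
    total = 0
    for item in items:      # kernprof reports hits and time spent per line
        total += item * item
    return total

if __name__ == '__main__':
    slow_function(range(100000))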
Run cProfile like this:

import cProfile
# Profile the call and print per-function call counts and timings.
cProfile.run('your_method()')
An alternative is:

python -m cProfile -o your_script.profile your_script.py
With that -o option (note that it has to come before the script name, or it gets passed to the script instead) you get an output file that you can load with Python's pstats module to get the actual statistics, as in the sketch below.
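For example, loading that output file with pstats might look like this (a minimal sketch; the file name is just the one used above):

import pstats

# Load the profile written by: python -m cProfile -o your_script.profile your_script.py
stats = pstats.Stats('your_script.profile')
# Print the 20 most expensive calls, sorted by cumulative time.
stats.strip_dirs().sort_stats('cumulative').print_stats(20)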
A very handy visualizer is RunSnakeRun, which displays the profiling information as a "tree map". An alternative is KCachegrind, but then you need pyprof2calltree to convert Python's profiling output into KCachegrind's format (see the conversion command below).
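The conversion step is a one-liner on the command line (flags as documented by pyprof2calltree; treat this as a sketch and check --help for your version): -i takes the cProfile output file and -k launches kcachegrind on the converted data.

pyprof2calltree -i your_script.profile -k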
What to look for:

	• Things you didn't expect. Perhaps you spot something sub-optimal or strange that needs investigating.
	• Much time spent in just one single function. This is possibly low-hanging fruit.
	• Lots of calls to the same function.
Some things you can do to improve your performance:

	• Caching
	• Get stuff out of inner loops.
	• Remove logging. And especially watch out when logging database objects in Django: your object's __unicode__() might call more than you want, like self.parent.xyz...

Regarding debug logging: you can make those statements conditional with if __debug__:. Running Python with -O optimizes them away entirely (see the sketch below).
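A minimal sketch of the if __debug__: trick (the Order class is a made-up stand-in for a Django model whose __unicode__/__str__ is expensive):

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)


class Order(object):
    """Hypothetical stand-in for a model whose string rendering is costly."""
    def __str__(self):
        # Imagine this touching self.parent.customer.name, i.e. extra queries.
        return "Order<expensive to render>"


def process(order):
    if __debug__:
        # This whole block is removed at compile time under "python -O",
        # so the costly str(order) never happens in optimized runs.
        logger.debug("processing %s" % order)


process(Order())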
Apart from code profiling (cpu/IO) there's also memory profiling for  
looking at memory usage. Small note on Django: it has an (intended)  
memory leak in debug mode (the query cache). No real problem, but  
keep it in mind when doing memory profiling.
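A hedged illustration of that Django behaviour (do_work() is a made-up function issuing ORM queries, and this assumes configured Django settings):

from django.db import connection, reset_queries

for batch in range(1000):
    do_work()                        # hypothetical: anything that hits the ORM
    print(len(connection.queries))   # keeps growing while settings.DEBUG is True
    reset_queries()                  # clearing the list keeps memory flat again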
Tools for memory profiling: heapy and meliae. Meliae is nice as you can run it on your server (ahem) and then copy the dump to your local machine for evaluation with, again, RunSnakeRun (a small sketch follows below).
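A minimal meliae dump/inspect cycle, assuming meliae's documented scanner/loader API (verify against your version):

# On the server: write a JSON dump of all live objects.
from meliae import scanner
scanner.dump_all_objects('memory-dump.json')

# On your own machine, after copying the file over:
from meliae import loader
om = loader.load('memory-dump.json')
print(om.summarize())    # per-type object counts and total sizes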
Profiling is all good and fun, but the environment is different on your production server. How do you profile there? You might, for instance, run one out of several WSGI processes in profiling mode, with a load balancer that only trickles a small share of the requests to that single process.
Or you can use Boaz Leskes' pycounters, "instrumenting production  
code".
To close off: you should know about this. It should be part of your professional toolkit. And... it should be in IDEs. Several of them already have it; Komodo does, but what about PyCharm? Remco hopes that this blog entry sparks IDE vendors into action where needed :-)
Some input from the questions:

	• Django has profiling middleware that you can switch on for a specific request with a GET parameter.
	• There's WSGI middleware for this as well (like Dozer). A rough sketch of such middleware follows below.
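This is not Dozer and not Django's actual middleware, just a hedged illustration; the 'profile' query-string trigger and the stderr output are arbitrary choices:

import cProfile
import pstats
import sys


class ProfilingMiddleware(object):
    """Hypothetical WSGI middleware: profile a request when 'profile' appears
    in the query string and dump the statistics to stderr."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        if 'profile' not in environ.get('QUERY_STRING', ''):
            return self.app(environ, start_response)

        profiler = cProfile.Profile()
        # Consume the response inside the profiler so the app's work is measured.
        response = profiler.runcall(
            lambda: list(self.app(environ, start_response)))
        pstats.Stats(profiler, stream=sys.stderr) \
              .sort_stats('cumulative').print_stats(20)
        return response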





_______________________________________________
Mailing list: https://launchpad.net/~schooltool-developers
Post to     : schooltool-developers@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~schooltool-developers
More help   : https://help.launchpad.net/ListHelp