Hacker News

>comparable to some compiled languages

Given that Python programs usually run an order of magnitude slower than compiled languages, even a 2x performance increase doesn't put it in the "comparable" range, in my experience. Not bashing Python - I use it regularly - but for computational stuff it's a hog unless you're just passing work to C libs. For example, I have a resource build pipeline that does some Blender 3D model transformations. The code is written in Python and takes forever; equivalent code in C++ would take roughly 1/100 of the time, and performance would be a non-issue. At the moment we're seriously considering rewriting parts in C++ to reduce build times.



The Blender Python lib is not much optimized by default. That has nothing to do with Python as a language.

Use numpy for matrices. If you have to implement an algo with a hot inner loop, use cython or numba.
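To make the "use numpy for matrices" point concrete, here is a minimal sketch (the matrices are made-up examples, and numpy is assumed to be installed). The pure-Python version pays interpreter overhead on every element access; the numpy version runs the same loops in C:

```python
import numpy as np

def matmul_pure(a, b):
    # Pure-Python triple loop: every element access and multiply
    # goes through the interpreter, so this is dramatically slower
    # than numpy for any non-trivial size.
    n, m, p = len(a), len(b), len(b[0])
    out = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]
            for j in range(p):
                out[i][j] += aik * b[k][j]
    return out

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]

c_py = matmul_pure(a, b)
c_np = np.array(a) @ np.array(b)   # the same work, done in C
assert np.allclose(c_np, c_py)
```

For a genuinely hot inner loop that can't be expressed as array operations, that's where cython or numba come in.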

I've never seen a 100x difference in a Python-to-C++ rewrite when the Python was already optimized.

Here is a good article about some of the options: https://rare-technologies.com/word2vec-in-python-part-two-op...


The one time I saw a 100x performance increase going from Python to C (done through Cython) was in code that worked with strings, calculating a machine-learning-related distance between two strings. The code did a lot of accessing of particular positions in the strings, which in pure Python means slow retrieval of every character (lots of .__getitem__ calls). The fix was to predefine two empty arrays (in heap, not stack, plus their corresponding counters of valid items), then walk the strings once and store the "hot" values in them.

So it was a very specific case where we could get that 100x speedup at work.
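The pattern described above, reconstructed in plain Python as a rough sketch. The actual distance function isn't shown in the comment, so Hamming distance stands in as a hypothetical example; in the real code the second version was Cython, where the preallocated arrays become C arrays and the loop compiles down to plain integer comparisons:

```python
def hamming_slow(s, t):
    # Every s[i] / t[i] goes through str.__getitem__ and builds a
    # new one-character string object - the bottleneck described above.
    d = 0
    for i in range(len(s)):
        if s[i] != t[i]:
            d += 1
    return d

def hamming_preconverted(s, t):
    # Walk each string once up front and store the "hot" values in
    # preallocated arrays of ints, then index into those instead.
    n = len(s)
    a = [0] * n
    b = [0] * n
    for i, ch in enumerate(s):
        a[i] = ord(ch)
    for i, ch in enumerate(t):
        b[i] = ord(ch)
    d = 0
    for i in range(n):
        if a[i] != b[i]:
            d += 1
    return d

dist = hamming_slow("karolin", "kathrin")
assert dist == hamming_preconverted("karolin", "kathrin") == 3
```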


Just noticed after a while: it should have been "in the stack, not heap" (about the arrays).


As long as you stick to the Python procedures that are written in C, you will not really gain much by using PyPy. Try it yourself: write some IO- and string-heavy code and compare. [1]

But as you say: for numeric computations python is slow as molasses.

[1] Or just look at something like https://github.com/juditacs/wordcount/blob/master/README.md . The simple py2 version is 2.5 times slower than a Java version someone spent a lot of time writing, and less than 2 times slower than a reasonably straightforward C program.
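A toy version of that kind of IO/string-heavy workload (a minimal word count, not the benchmark's actual code): most of the time is spent inside C-implemented methods like str.split and the Counter's dict machinery, which is why plain CPython already holds up reasonably well here and PyPy has less to win:

```python
from collections import Counter

def word_count(lines):
    # The Python-level loop does very little work per line;
    # split() and Counter.update() both run in C on CPython.
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

counts = word_count(["the quick brown fox", "the lazy dog"])
```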


That was the point of the heap benchmarks. In CPython you would have to use heapq; writing something yourself in Python will be miles off the pace. Whereas in PyPy a pure-Python heap implementation, or your own version, is comparably fast, as it should be. The "hunt down the parts written in C" approach to the standard library is what I am increasingly objecting to.
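To illustrate the comparison being made: the two pushes below implement the same sift-up algorithm, one via the stdlib heapq (C-accelerated on CPython) and one in plain Python (a sketch of the kind of hand-rolled heap that is slow on CPython but roughly competitive under PyPy's JIT):

```python
import heapq

def heappush_py(heap, item):
    # Hand-rolled sift-up: append, then swap the new item toward
    # the root while it is smaller than its parent.
    heap.append(item)
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break
        heap[parent], heap[i] = heap[i], heap[parent]
        i = parent

data = [5, 1, 4, 2, 3]
h_c, h_py = [], []
for x in data:
    heapq.heappush(h_c, x)   # C implementation on CPython
    heappush_py(h_py, x)     # same algorithm in pure Python

# Both maintain the heap invariant: the minimum sits at index 0.
assert h_c[0] == h_py[0] == 1
```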


It all depends on the task -- if your program is calling into optimized C-extensions anyway, converting the whole thing into a compiled language of course offers less of a speedup.




