Python: The Optimization Ladder (cemrehancavdar.com)
189 points by Twirrim 4 days ago
Ralfp 6 hours ago
> CPython 3.13 went further with an experimental copy-and-patch JIT compiler -- a lightweight JIT that stitches together pre-compiled machine code templates instead of generating code from scratch. It's not a full optimizing JIT like V8's TurboFan or a tracing JIT like PyPy's;
Good news: Python 3.15 adapts PyPy's tracing approach to the JIT, and there are real performance gains now.
josalhor 5 hours ago
While this is great, I expected faster CPython to eventually culminate into what YJIT for Ruby is. I'm not sure the current approaches they are trying will get the ecosystem there.
kenjin4096 3 hours ago
I implemented most of the tracing JIT frontend in Python 3.15, with help from Mark to clean up and fix my code. I also coordinated some of the community JIT optimizer effort in Python 3.15 (note: NOT the code generator/DSL/infra, that's Mark, Diego, Brandt and Savannah). So I think I'm able to answer this.
I can't speak for everyone on the team, but I did try YJIT's lazy basic block versioning in a fork of CPython. The main problem is that the copy-and-patch backend we currently have in CPython is not very amenable to self-modifying machine code. This makes inter-block jumps/fallthroughs very inefficient. It can be done, it's just a little strange. Also, for security reasons we tried not to have self-modifying code in the original JIT, and we're hoping to stick to that. Everything has its tradeoffs -- design is hard!
It's not too difficult to go from tracing to lazy basic blocks. Conceptually they're somewhat similar, as the original paper points out. The main thing we lack is the compact per-block type information that something like YJIT/Higgs has.
I guess while I'm here I might as well make the distinction:
- Tracing is the JIT frontend (region selection).
- Copy and Patch is the JIT backend (code generation).
We currently use both. PyPy uses meta-tracing: it traces the runtime itself, whereas CPython's tracing JIT traces the user's code. I did take a look at PyPy's code, and a lot of ideas in the improved JIT are actually imported from PyPy directly. So I have to thank them for their great ideas. I also talk to some of the PyPy devs.
Ending off: the team is extremely lean right now. Only 2 people were generously employed by ARM to work on this full time (thanks a lot to ARM too!). The rest of us are mostly volunteers, or have some bosses that like open source contributions and allow some free time. As for me, I'm unemployed at the moment and this is basically my passion project. I'm just happy the JIT is finally working now after spending 2-3 years of my life on it :). If you go to Savannah's website [1], the JIT is around 100% faster for toy programs like Richards, and even for big programs like tomli parsing, it's 28% faster on macOS AArch64. The JIT is very much a community effort right now.
[1]: https://doesjitgobrrr.com/?goals=5,10
PS: If you want to see how the work has progressed, click "all time" in that website, it's pretty cool to see (lower is faster). I have a blog explaining how we made the JIT faster here https://fidget-spinner.github.io/posts/faster-jit-plan.html.
__mharrison__ 5 hours ago
Great writeup.
I've been in the pandas (and now polars world) for the past 15 years. Staying in the sandbox gets most folks good enough performance. (That's why Python is the language of data science and ML).
I generally teach my clients to reach for numba first. Potentially lots of bang for little buck.
One overlooked area in the article is running on GPUs. Some numpy and pandas (and polars) code can get a big speedup by using GPUs (same code with import change).
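For readers who haven't tried it, a minimal sketch of the numba route mentioned above (assumes numba is installed; the snippet falls back to plain Python so it still runs without it):

```python
import numpy as np

try:
    from numba import njit  # the assumption: numba is available
except ImportError:
    njit = lambda f: f       # no-op fallback so the sketch still runs

@njit
def dot(a, b):
    # A plain loop that numba compiles to machine code on first call.
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * b[i]
    return total

a = np.ones(1000)
b = np.arange(1000, dtype=np.float64)
print(dot(a, b))  # sum of 0..999 = 499500.0
```

The first call pays a compilation cost; subsequent calls run the compiled version, which is where the "lots of bang for little buck" comes from.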
bloaf 4 hours ago
Taichi, benchmarked in the article, claims to be able to outperform CUDA at some GPU tasks, although their benchmarks look to be a few years old.
pjmlp 4 hours ago
And it doesn't account for cuTile, NVIDIA's new API infrastructure that supports writing CUDA directly in Python via a JIT based on MLIR.
seanwilson 5 hours ago
> The real story is that Python is designed to be maximally dynamic -- you can monkey-patch methods at runtime, replace builtins, change a class's inheritance chain while instances exist -- and that design makes it fundamentally hard to optimize. ...
> 4 bytes of number, 24 bytes of machinery to support dynamism. a + b means: dereference two heap pointers, look up type slots, dispatch to int.__add__, allocate a new PyObject for the result (unless it hits the small-integer cache), update reference counts.
Would Python be a lot less useful without being maximally dynamic everywhere? Are there domains/frameworks/packages that benefit from this where this is a good trade-off?
I can't think of cases in strong statically typed languages where I've wanted something like monkey patching, and when I see monkey patching elsewhere there's often some reasonable alternative or it only needs to be used very rarely.
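For context, a small example of the dynamism being discussed: any module attribute, including a stdlib function, can be swapped at runtime, so an optimizer can never assume that `json.loads` is still `json.loads`:

```python
import json

# Monkey-patching in action: replace a stdlib function at runtime.
original_loads = json.loads

def counting_loads(s, *args, **kwargs):
    # Wrapper that counts calls, then delegates to the real function.
    counting_loads.calls += 1
    return original_loads(s, *args, **kwargs)

counting_loads.calls = 0
json.loads = counting_loads  # every caller of json.loads is now rerouted

json.loads('{"a": 1}')
json.loads('[1, 2, 3]')
print(counting_loads.calls)  # 2

json.loads = original_loads  # restore the original
```

Test mocking libraries rely on exactly this mechanism, which is one concrete place the trade-off pays off.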
bloaf 4 hours ago
I've always thought the flexibility should allow python to consume things like gRPC proto files or OpenAPI docs and auto-generate the classes/methods at runtime as opposed to using codegen tools. But as far as I know, there aren't any libraries out there actually doing that.
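A rough sketch of that idea using the built-in `type()` constructor (the schema and names here are hypothetical, not from any real library):

```python
# Hypothetical schema, standing in for a parsed proto file or OpenAPI doc.
schema = {
    "User": {"fields": ["id", "name"]},
    "Order": {"fields": ["id", "total"]},
}

def make_class(name, spec):
    # Build an __init__ that accepts the schema's fields as keyword args.
    def __init__(self, **kwargs):
        for field in spec["fields"]:
            setattr(self, field, kwargs.get(field))
    # type(name, bases, namespace) creates a class at runtime.
    return type(name, (object,), {"__init__": __init__, "fields": spec["fields"]})

classes = {name: make_class(name, spec) for name, spec in schema.items()}
u = classes["User"](id=1, name="ada")
print(u.name)  # ada
```

This is the core trick; a real loader would also generate methods from the API's operations.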
skeledrew 2 hours ago
But it's a fairly easy build if you want any of that.
NeutralForest 4 hours ago
There are some use cases for very dynamic code, like ORMs; with descriptors you can add attributes and behavior at runtime, and it's quite useful. Anyway, breaking metaprogramming and the more dynamic features would mean a Python 4, and we know how 2 -> 3 went. I also don't think that's where the core developers are going. And there are other things I'd change before going after monkey patching: some scoping rules, mutable default arguments, better async ergonomics, etc.
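A minimal sketch of the descriptor pattern mentioned above (heavily simplified; real ORMs like SQLAlchemy or Django do far more, e.g. validation and dirty-state tracking):

```python
class Column:
    # A descriptor: a class attribute that intercepts instance reads/writes.
    def __set_name__(self, owner, name):
        self.name = name  # called automatically with the attribute's name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self  # accessed on the class itself
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        # An ORM would validate or mark the field dirty here.
        obj.__dict__[self.name] = value

class User:
    id = Column()
    name = Column()

u = User()
u.name = "ada"
print(u.name)  # ada
```

Every attribute access on `User` instances is dispatched through `Column` at runtime, which is exactly the kind of hook that makes the language hard to optimize and ORMs pleasant to use.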
LtWorf 5 hours ago
I've used a library that patches the zipfile module to add support for zstd compression in zipfiles.
In python3.14 the support is there, but 2 years ago you could just import this library and it would just work normally.
repple 3 hours ago
Significant AI smell in this write-up. As a result, my current reflex is to stop reading immediately. That's no judgement on the actual analysis and human effort that went in; it's just that the context of how it was produced is missing.
canjobear an hour ago
Here's what gave it away for me
> The remaining difference is noise, not a fundamental language gap. The real Rust advantage isn't raw speed -- it's pipeline ownership.
huseyinkeles 2 hours ago
The author is from Turkey (where I’m also originally from).
Believe it or not, when you write a blog post in a different language, it really helps to use an LLM, even just to fix your grammar mistakes etc.
I assume that’s most likely what happened here too.
shepherdjerred an hour ago
IMO it would make sense to add a disclaimer then, e.g. “I wrote this myself but had AI edit”
I have no problem with people using AI, especially to close a language gap.
If you disclose your usage I have a _lot_ more trust that effort has been put into the writing despite the usage
repple 2 hours ago
I believe it
jb_hn 3 hours ago
I didn't notice any signs of AI writing until seeing this comment and re-reading (though I did notice it on the second pass).
That said, I think this article demonstrates that focusing on whether or not an article used AI might be focusing on the wrong “problem.” I appreciate being sensitive to the "smell" (the number of low-effort, AI posts flying around these days has made me sensitive too), but personally, I found this article both (1) easy to read and (2) insightful. I think the number of AI-written content lacking (2) is the problem.
repple 2 hours ago
Your initial focus is to prioritize which content to consume.
markisus 2 hours ago
I also seem to be developing an immune response to several slopisms. But the actual content is useful for outlining tradeoffs if you need to make your Python code go faster.
MonkeyClub 3 hours ago
I got the same sense, but nowadays I can't be sure whether a text is AI or the writer's style has absorbed LLM tropes.
FusionX 2 hours ago
I don't think it should be conflated with auto generated AI slop. I see a lot of snippets which were clearly manually written. I'm assuming the author used AI in a supervised manner, to smooth out the writing process and improve coherency.
rusakov-field 5 hours ago
Python is perfect as a "glue" language. "Inner Loops" that have to run efficiently is not where it shines, and I would write them in C or C++ and patch them with Python for access to the huge library base.
This is the "two-language problem". (By the way, I would like to hear from people who have used Julia extensively, which claims to solve this problem: does it really?)
pjmlp 4 hours ago
This problem has been solved already by Lisp, Scheme, Java, .NET, Eiffel, among others, with their pick and choose mix of JIT and AOT compiler toolchains and runtimes.
elophanto_agent 2 hours ago
the optimization ladder is just the five stages of grief but for python developers. denial ("it's fast enough"), anger ("why is this so slow"), bargaining ("maybe if I use numpy"), depression ("I should rewrite this in rust"), acceptance ("actually cython is fine")
pjmlp 4 hours ago
Kudos for going through all the existing JIT approaches, instead of reaching for rewrite into X straight away.
However if Rust with PyO3 is part of the alternatives, then Boost.Python, cppyy, and pybind11 should also be accounted for, given their use in HPC and HFT integrations.
blt 4 hours ago
Surprised Python is only 21x slower than C for tree traversal stuff. In my experience that's one of the most painful places to use Python. But maybe that's because I use numpy automatically when simple arrays are involved, and there's no easy path for trees.
AlotOfReading 2 hours ago
You can turn trees into numpy-style matrix operations because graphs and matrices are two sides of the same coin. I don't see the code for the binary-tree benchmark in the repo to see how it's written, but there are libraries like graphblas that use the equivalence for optimization.
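A toy illustration of that equivalence (plain numpy, not graphblas): encode the tree as an adjacency matrix, and a matrix-vector product advances a traversal frontier one level per multiply:

```python
import numpy as np

# A small tree as an adjacency matrix: A[parent, child] = 1.
A = np.zeros((5, 5))
edges = [(0, 1), (0, 2), (1, 3), (1, 4)]  # node 0 is the root
for parent, child in edges:
    A[parent, child] = 1.0

frontier = np.zeros(5)
frontier[0] = 1.0          # start the traversal at the root

level1 = A.T @ frontier    # nodes one step from the root: 1 and 2
level2 = A.T @ level1      # nodes two steps from the root: 3 and 4
print(np.nonzero(level1)[0], np.nonzero(level2)[0])
```

Each multiply replaces a whole level of per-node Python loop iterations with one vectorized operation, which is where the speedup would come from; graphblas-style libraries build on this idea with sparse matrices and custom semirings.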
tweakimp 3 hours ago
Be careful with that, numpy arrays can be slower than Python tuples for some operations. The creation is always slower and the overhead has to be worth it.
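A quick way to see the creation overhead for yourself (absolute timings vary by machine, so none are claimed here; a tuple of constants is also cached by CPython, which makes the gap even larger):

```python
import timeit

# Creating a small numpy array costs far more than creating a tuple,
# so for tiny, one-off collections the tuple can win outright.
tup_time = timeit.timeit("(1.0, 2.0, 3.0)", number=100_000)
arr_time = timeit.timeit("np.array([1.0, 2.0, 3.0])",
                         setup="import numpy as np", number=100_000)
print(f"tuple: {tup_time:.4f}s  array: {arr_time:.4f}s")
```

The array only pays off once the per-element work it amortizes outweighs that fixed construction cost.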
markisus 2 hours ago
I wish there were more details on this part.
> Missing @cython.cdivision(True) inserts a zero-division check before every floating-point divide in the inner loop. Millions of branches that are never taken.
I thought never taken branches were essentially free. Does this mean something in the loop is messing with the branch predictor?
pavpanchekha 2 hours ago
They're cheap but not free, especially at the front end of the CPU where it's just a lot more instructions to churn through. What the branch predictor gets you is it turns branches, which would normally cause a pipeline bubble, to be executed like straightline code if they're predicted right. It's a bit like a tracing jit. But you will still have a bunch of extra instructions to, like, compute the branch predicate.
beng-nl an hour ago
Worse, IMO, is the never taken branch taking up space in branch prediction buffers. Which will cause worse predictions elsewhere (when this branch ip collides with a legitimate ip). Unless I missed a subtlety and never taken branches don’t get assigned any resources until they are taken (which would be pretty smart actually).
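A plain-Python sketch of the shape of the guard being discussed: the branch never fires, but the compare still executes on every iteration, and in the compiled Cython code that's extra front-end work (plus branch-predictor pressure) per divide:

```python
def mean_checked(xs, d):
    # Rough shape of what Cython emits WITHOUT @cython.cdivision(True):
    # a zero test guards every divide, even though it never fires here.
    total = 0.0
    for x in xs:
        if d == 0:
            raise ZeroDivisionError("division by zero")
        total += x / d
    return total

def mean_unchecked(xs, d):
    # With cdivision(True), the divide goes straight through (C semantics,
    # no Python ZeroDivisionError on a zero divisor).
    total = 0.0
    for x in xs:
        total += x / d
    return total

print(mean_checked([1.0, 2.0, 3.0], 2.0))  # 3.0
```

The trade-off is semantic: dropping the guard means a zero divisor produces C behavior instead of a Python exception, which is why it's opt-in.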
adsharma 2 hours ago
Missing: write static python and transpile to rust pyO3 which is at the top of the ladder.
Some nuance: try transpiling to a garbage collected rust like language with fast compilation until you have millions of users.
Also use a combination of neural and deterministic methods to transpile depending on the complexity.
zahlman 2 hours ago
> a garbage collected rust like language with fast compilation
I don't know what languages you might have in mind. "Rust-like" in what sense?
tda 2 hours ago
One thing with python is that usually I will use one of the many c based libraries to get reasonable speed and well thought out abstractions from the start. I architect around numpy, scipy, shapely, pandas/polars or whatever. So my code runs at reasonable speed from the start. But transpiling to rust then effectively means a complete redesign of the code, data structures, algorithms etc. And I have seen the AI tools really struggle to get it right, as my intent gets lost somewhere.
So what I do now (since Claude Code) is write a really bare-bones (and slow) pure-Python implementation (like I used to do for numba-, pypy- or cython-ready code), with minimal dependencies. Then I use the REPL, notebooks and nice plotting tools to get a real understanding of the problem space and the intricacies of my algorithm/problem at hand. When done, I let Claude add tests and ask it to transpile to equivalent Rust and boom! a flawless 1000x speed upgrade in minutes.
The great thing is I don't need to do the mental gymnastics to vectorize code in a write only mode like I've had to do since my Matlab days. Instead I can write simple to read for loops that follow my intent much better, and result in much more legible code. So refreshing!
And with pyO3 i can still expose the Rust lib to python, and continue to use Python for glue and plotting
LarsDu88 an hour ago
I love how in an article about making python faster, the fastest option is to simply write Rust, lol
falcor84 39 minutes ago
There's no surprise that Rust is faster to run, but I don't think there are many who would claim that Rust is faster to write.
superlopuh 4 hours ago
Missing Muna[0][1], I'm curious how it would compare on these benchmarks.
[0]: https://www.muna.ai/ [1]: https://docs.muna.ai/predictors/create
threethirtytwo 2 hours ago
>The usual suspects are the GIL, interpretation, and dynamic typing. All three matter, but none of them is the real story. The real story is that Python is designed to be maximally dynamic -- you can monkey-patch methods at runtime, replace builtins, change a class's inheritance chain while instances exist -- and that design makes it fundamentally hard to optimize.
ok I guess the harder question is. Why isn't python as fast as javascript?
12_throw_away 6 minutes ago
> ok I guess the harder question is. Why isn't python as fast as javascript?
Actually there is a pretty easy answer: worldwide, the amount of javascript being evaluated every day is many orders of magnitude higher than the amount of python. The amount of money available for optimizing it has thus been many orders of magnitude higher as well.
jaharios 2 hours ago
json.loads is something you don't want to use in a loop if you care about performance at all. Simply using orjson can give you a 3x speedup without needing to change anything else.
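For illustration, the swap really is a drop-in replacement (orjson is third-party and is left commented out here in case it isn't installed; the ~3x figure is the commenter's, not verified here):

```python
import json

payloads = ['{"id": %d, "ok": true}' % i for i in range(3)]

# Baseline: stdlib json in a loop.
parsed = [json.loads(p) for p in payloads]

# Drop-in swap, same call shape (assumes orjson is installed):
# import orjson
# parsed = [orjson.loads(p) for p in payloads]

print(parsed[1]["id"])  # 1
```

orjson parses to the same dict/list structures, so downstream code is unaffected.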
retsibsi 5 hours ago
A personal opinion: I would much prefer to read the rough, human version of this article than this AI-polished version. I'm interested in the content and the author clearly put thought and effort into it, but I'm constantly thrown out of it by the LLM smell. (I'm also a bit mad that `--` is now on the em dash treadmill and will soon be unusable.)
I'm not just saying this to vent. I honestly wonder if we could eventually move to a norm where people publish two versions of their writing and allow the reader to choose between them. Even when the original is just a set of notes, I would personally choose to make my own way through them.
zahlman 2 hours ago
The replacement of emdashes with double hyphens here is almost insulting. A look through the blog history suggests that the author has no issue writing in English normally, and nothing seems really off about the actual findings here (or even the speculation about causes etc.), so I really can't understand the motivation for LLM-generated prose. (The author's usual writing style appears to have some arguable LLM-isms, but they make a lot more sense in context and of course those patterns had to come from somewhere. The overall effect is quite different.)
Edit: it's strange to get downvoted while also getting replies that agree with me and don't seem to object.
(Also, I thought it wasn't supposed to be possible to edit after getting a reply?)
hydrolox an hour ago
Yeah, while reading I just didn't understand how you end up with an LLM writing the article. Clearly the data and writeup are real, but was it "edited" with an LLM? It looks closer to the entire thing being LLM-written. I finished reading because the topic is interesting, but the LLM writing style is difficult to bear, and I agree with your point that trying to fool us that it's human with `--` is just absurd.
adammarples an hour ago
Same problems, same Apple M4 Pro, real numbers.
kelvinjps10 5 hours ago
Great post saved it for when I need to optimize my python code
arlattimore 4 hours ago
What a great article!
Mawr 3 hours ago
Shockingly good article — correct identification of the root cause of performance issues being excessive dynamism and ranking of the solutions based on the value/effort ratio. Excellent taste. Will keep this in my back pocket as a quick Python optimization reference.
It's just somewhat unfortunate that I have to question every number and fact presented since the writing was clearly at least somewhat AI-assisted with the author seemingly not being upfront about that at all.
threethirtytwo an hour ago
Being upfront about AI-assistance or no AI-assistance doesn't mean shit. Whether AI was involved is independent of what they state and there's no real way to fully prove otherwise.
skeledrew 2 hours ago
I must admit that I'm amused by the people who find the writeup useful but are turned off by the AI "smell". And look forward to the day when all valued content reeks of said "smell"; let's see what detractors-for-no-good-reason do then (yes I'm a bit ticked by the attitude).
achierius 2 hours ago
Isn't this a depressing thought? Regardless of AI, to think that everything we read would come in the same literary style, conveying little of the author, giving no window through which to learn about who they are -- that would be a real loss.
repple an hour ago
Ultimately it’s up to the author to make that explicit choice. I think that AI does and will enhance writing and depth and breadth of analysis one could perform. But, to be trustworthy, people will need to either lay out all cards on the table and/or work on other ways to gain trust over time. Maybe people need to provide some context to communicate what model was used and in which ways. What % of final output is AI vs author. I mean, if I see 100% composed by human author stated somewhere then there’s my cue to at the very least learn a little about the author. Certainly more complexity and discernment for readers. Depressing? In some ways maybe; but I’m kind of optimistic. Imagine what Tolkien could worldbuild armed with AI.. but then it wouldn’t be Tolkien.
shepherdjerred an hour ago
The smell makes me suspicious because I don’t know how the author used AI.
If the author wrote a detailed rough draft, had AI edit, reviewed the output thoroughly, and has the domain knowledge to know if the AI is correct, then this could be a useful piece.
I suspect most authors _don’t_ fall in that bucket.
zahlman 2 hours ago
Why is it amusing?
How can you suppose that this is not a good reason to object, especially days after https://news.ycombinator.com/item?id=47340079 ?
I find the style so reflexively grating that it's honestly hard for me to imagine others not being bothered by it, let alone being bothered by others being bothered.
Especially since I looked at previous posts on the blog and they didn't have the same problem.
tpoacher an hour ago
"I totally get a kick out of the peeps who find the writeup super helpful yet are totally put off by that distinct "AI smell"—it’s like they can't even! Just imagine when everything we value is woven into a tapestry of that same "smell"—where will all the naysayers retreat to then? It’s a little frustrating, honestly, and I’m just like, come on! Let’s delve into this new era of content and embrace the chaos!"
There, FTFY :D
perching_aix 2 hours ago
> language slow
> looks inside
> the reference implementation of language is slow
Despite its content, this blogpost also pushes this exact "language slow" thinking in its preamble. I don't think nearly enough people read past introductions for that to be a responsible choice or a good idea.
The only thing worse than this is when Python specifically is outright taught (!) as an "interpreted language", as if an implementation detail like that were somehow a language property. So grating.
zahlman an hour ago
While I sympathize (and have said similar in the past), language design can (and in Python's case certainly does) hinder optimization quite a bit. The techniques that are purely "use a better implementation" get you not much further than PyPy. Further benefits come from cross-compilation that requires restricting access to language features (and a system that can statically be convinced that those features weren't used!), or indeed straight up using code written in a different language through an FFI.
But yes, the very terminology "interpreted language" was designed for a different era and is somewhere between misleading and incomprehensible in context. (Not unlike "pass by value".)
perching_aix 9 minutes ago
Absolutely, no doubt about that. I just find it a terrible framing in general, as well as specifically in this case: swapping out CPython for PyPy, GraalPy, Taichi, etc. -- as per the post -- requires no code changes, yet results in leaps-and-bounds faster performance.
If switching runtimes yields 10x perf, and switching languages yields 100x, then the language on its own was "just" a 10x penalty. Yet the presentation is "language is 100x slower". That's my gripe. And these are apparently conservative estimates as per the tables in the OP.
Not that measuring "language performance" with numbers would be a super meaningful exercise, but still.