How to make a fast dynamic language interpreter (zef-lang.dev)

190 points by pizlonator 11 hours ago

pansa2 6 hours ago

In a similar vein, see this page about the performance of the interpreter for the dynamic language Wren: https://wren.io/performance.html

Unlike the Zef article, which describes implementation techniques, the Wren page also shows ways in which language design can contribute to performance.

In particular, Wren gives up dynamic object shapes, which enables copy-down inheritance and substantially simplifies (and hence accelerates) method lookup. Personally I think that’s a good trade-off - how often have you really needed to add a method to a class after construction?

psychoslave 5 hours ago

That’s basically what’s done all the time in languages where monkey patching is accepted as idiomatic, notably Ruby. Ruby is not known for a speed-first mindset, though.

On the other hand, having a type hold a closed set of applicable functions is somewhat questionable.

There are languages out there that allow you to define arbitrary functions and then use them as methods, with dot notation, on any variable matching the type of the first argument: Nim (with macros), Scala (with implicit classes and type classes), Kotlin (with extension functions) and Rust (with traits).

versteegen 3 hours ago

Yes, language design is a hugely important determinant of interpreter or JIT speed. There are many highly optimised VMs for dynamic languages, but LuaJIT is king because Lua is such a small and suitable language; although it does have a couple of difficult-to-optimise features, they are few enough that you can expend the effort. It's nothing like Python. It's not much of an exaggeration to say Python is designed to minimise the possibility of a fast JIT, with compounding layers of dynamism. After years of work, the CPython 3.15 JIT finally managed ~5% faster than the stock interpreter on x86_64.

dontlaugh 2 hours ago

Python is worse, but not by all that much. After all, PyPy has been several times faster for many years.

jiusanzhou 5 hours ago

The jump from change #5 to #6 (inline caches + hidden-class object model) doing the bulk of the work here really tracks with how V8/JSC got fast historically — dynamic dispatch on property access is where naive interpreters die, and everything else is kind of rounding error by comparison. Nice that it's laid out so you can see the contribution of each step in isolation; most perf writeups just show the final number.

Someone 2 hours ago

I agree, but there’s a tiny caveat that this is for one specific benchmark that, I think, doesn’t reflect most real-world code.

I’m basing that on the 1.6% improvement they got from speeding up sqrt. That surprised me: to get such an improvement, the benchmark must have spent over 1.6% of its time there to begin with.

Looking in the git repo, it seems that did happen in the nbody simulation (https://github.com/pizlonator/zef/blob/master/ScriptBench/nb...).

tnelsond4 2 hours ago

I use the bounds checker in TCC to check for memory errors in C, should I switch to Fil-C instead to debug my code? Obviously yolo-C is my target.

grg0 10 hours ago

Interesting, thanks for sharing. It is a topic I'd like to explore in detail at some point.

I also like how, according to Github, the repo is 99.7% HTML and 0.3% C++. A testament to the interpreter's size, I guess?

pizlonator 10 hours ago

I committed the statically generated site, which is wastefully large because of how I generate the code browsers.

But yeah the interpreter is very small

tiffanyh 9 hours ago

I see Lua was included, wish LuaJIT was as well.

pizlonator 8 hours ago

I bet LuaJIT crushes Zef! Or rather, I would hope that it does, given how much more engineering went into it

There are many runtimes that I could have included but didn’t.

Also, it’s quite impressive how much faster PUC Lua is than QuickJS and Python

raincole 8 hours ago

Because QuickJS is really slow. Don't be fooled by the name. It's almost an order of magnitude slower than node/v8.

(I suppose the "quick" in QuickJS means "quick for a pure interpreter without JIT compilation" or something...)

zephen 8 hours ago

> it’s quite impressive how much faster PUC Lua is than QuickJS and Python

Python's execution time is mostly spent looking things up. I don't think Lua is quite as dynamic.

injidup 6 hours ago

What is this YOLO-c++ compiler that is referenced in the article? Google searches turn up nothing and chatgpt seems not to know it either.

electroly 6 hours ago

The author of Fil-C, who is also the author of this language, uses "Yolo-C/C++" to mean regular C/C++ without Fil-C.

boulos 9 hours ago

How's your experience with Fil-C been? Is it materially useful to you in practice?

pizlonator 9 hours ago

I’m biased since I’m the Fil.

It was materially useful in this project.

- Caught multiple memory safety issues in a nice deterministic way, so designing the object model was easier than it would have been otherwise.

- C++ with accurate GC is a really great programming model. I feel like it speeds me up by 1.5x relative to normal C++, and maybe like 1.2x relative to other GC’d languages (because C++’s APIs are so rich and the lambdas/templates and class system is so mature).

But I’m biased in multiple ways

- I made Fil-C++

- I’ve been programming in C++ for like 35ish years now

vlovich123 7 hours ago

I’m curious. Given the overheads of Fil-C++, does it actually make sense to use it for greenfield projects? I like that Fil-C fills a gap in securing old legacy codebases; I’m just not sure I understand it for greenfield projects like this, other than that you happen to know C++ really well.

catlifeonmars 2 hours ago

Do you run an optimization pass on the AST between parsing and evaluation?

valorzard 5 hours ago

Do you think this exercise has taught you anything that could make fil c itself better?