All elementary functions from a single binary operator (arxiv.org)

744 points by pizza 18 hours ago

SideQuark 36 minutes ago

This isn't unique, nor even the cheapest way to compute this. For example, let f(x,y) = 1/(x-y). This too is universal. I think there's a theorem stating that for any finite set of binary operators there is a single one replacing it.

write x#y for 1/(x-y).

x#0 = 1/(x-0) = 1/x, so you get reciprocals. Then (x#y)#0 = 1/((1/(x-y)) - 0) = x-y, so subtraction.

It's a common exercise to show that in any (insert algebraic structure here) reciprocal and subtraction give all 4 elementary ops.

I haven't checked this carefully, but this note seems to give a short proof (modulo knowing some other items...) https://dmg.tuwien.ac.at/goldstern/www/papers/notes/singlebi...
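
A quick numerical sanity check of the two identities above (just a sketch of the comment's derivation, not from the linked note):

```python
# f(x, y) = 1/(x - y) as the single operator, written x # y above
def f(x, y):
    return 1 / (x - y)

def recip(x):        # x # 0 = 1/(x - 0) = 1/x
    return f(x, 0)

def sub(x, y):       # (x # y) # 0 = 1/(1/(x - y)) = x - y
    return f(f(x, y), 0)

print(recip(4.0))    # 0.25
print(sub(7.0, 3.0)) # 4.0
```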

ano-ther 7 minutes ago

doctorpangloss 35 minutes ago

yes, but are you currently experiencing both hypergraphia and chatbot AI induced psychosis while also thinking about this problem?

SideQuark 8 minutes ago

It's math. You can check it yourself instead of this (and many other) thoughtless posts.

doctorpangloss 6 minutes ago

DoctorOetker 15 hours ago

EDIT: please change the article link to the most recent version (as of now still v2); it is currently pointing to the v1 version, which is missing the figures.

I'm still reading this, but if this checks out, this is one of the most significant discoveries in years.

Why use splines or polynomials or haphazardly chosen basis functions if you can just fit (gradient descent) your data or wave functions to the proper computational EML tree?

Got a multidimensional and multivariate function to model (with random samples or a full map)? Just do gradient descent and convert it to approximant EML trees.

Perform gradient descent on EML function tree "phi" so that the derivatives in the Schroedinger equation match.

But as I said, still reading, this sounds too good to be true, but I have witnessed such things before :)

ikrima 15 hours ago

From my experience of working in this problem domain for the last year, I'd say it is pretty powerful, but the "too good to be true" part comes from the fact that EML buys elegance through exponential expression blow-up. Multiplication alone requires depth-8 trees with 41+ leaves, i.e. a minimal operator vocabulary trades off against expression length. There's likely an information-theoretic sweet spot between these extremes.

It's interesting to see his EML approach whereas mine was more on generating a context sensitive homoiconic grammar.

I've had lots of success combining spectral neural nets (GNNs, FNOs, Neural Tangent Kernels) with symbolic regression, using Operad Theory and Category Theory as my guiding mathematical machinery.

DoctorOetker 12 hours ago

In my experience this exponential expression blow-up is less a result of decomposing into a minimum of primitives than of repetition in expression trees:

If we make the analogy with Bertrand Russell's Principia Mathematica: he derived fully expanded expressions, i.e. trees whose leaves may only refer to literals. Everyone claimed this madness underscored how formal verification of everyday mathematics was a fool's errand, but nevertheless we see successful projects like Metamath (us.metamath.org) where this exponential blow-up does not occur. It is easy to see why: instead of representing proofs as full trees, the proofs are represented as DAGs. The same optimization would be required for EML to prevent exponential blow-up.

Put differently: if we allow extra buttons besides {1, EML}, for example to capture unary functions, the authors mentally add an 'x' button, so now the RPN calculator has {1, EML, x}; and if you want multivariate functions it becomes an RPN calculator with buttons {1, EML, x, y, z}, for example.

But why stop there? In Metamath, proofs are compressed: if an expression or wff was proven before in the same proof, its first subproof is given a number, and any subsequent invocation of this Nth subproof refers to that number. Why only recall input parameters x, y, z but not earlier computed values/functions?

In fact, every proof in Metamath's set.mm that uses this DAG compressibility could be split into a main proof whose repeatedly used substatements are automatically converted into explicit separate lemmas, in which case Metamath could dispose of the single-proof DAG compression (but it would force proofs to split up into lemmas + a main proof).

None of the proven theorems in metamath's set.mm displays the feared exponential blowup.

yorwba 7 hours ago

gopalv 14 hours ago

> Multiplication alone requires depth-8 trees with 41+ leaves i.e. minimal operator vocabulary trades off against expression length.

That is sort of comparable to how NAND simplified scaling.

Division is hell on gates.

The single component was the reason scaling went like it did.

There was only one gate structure which had to improve to make chips smaller - if a chip used 3 different kinds, then the scaling would've required more than one parallel innovation to go (sort of like how LED lighting had to wait for blue).

If you need two or more components, then you have to keep switching tools instead of hammer, hammer, hammer.

tripletao 13 hours ago

meindnoch 11 hours ago

danbruc 5 hours ago

Where do you see exponential blow-up? If you replace every function in an expression tree with a tree of eml functions, that is a size increase by a constant factor. And the factor does not seem unreasonable - somewhere in the range of 10 to 100.

wasabi991011 4 hours ago

nazgulnarsil 7 hours ago

Yeah, seems like classic representation vs traversal complexity trade-off a la David Marr.

canjobear 15 hours ago

Link to your work?

irchans 6 hours ago

ikrima, Do you have any links to your research or paper titles?

Folkert 7 hours ago

It's not too good to be true...

It's a way to make mathematical formulas completely unreadable. It's a way to spend more time computing functions like log (3 emls required) while needing more precision. It's a way to blow the minds of muggles reading Hacker News.

siddboots 7 hours ago

While I'm really enjoying this paper, I think you are way overstating the significance here. This is mathematically interesting, and conceptually elegant, but there is nothing in this paper that suggests a competitive regression or optimisation approach.

I might have misunderstood, but from the two "Why do X when you can do just Y with EML" sentences, I think you are describing symbolic regression, which has been around for quite some time and is a serious grown-up technique these days. But even the best symbolic regression tools do not typically "replace" other regression approaches.

gilgoomesh 15 hours ago

> Why use splines or polynomials or haphazardly chosen basis functions if you can just fit (gradient descent) your data or wave functions to the proper computational EML tree?

Same reason all boolean logic isn't performed with combinations of NAND – it's computationally inefficient. Polynomials are (for their expressivity) very quick to compute.
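
For instance, Horner's rule evaluates a degree-n polynomial in just n multiplies and n adds (a standard sketch, not from the paper):

```python
def horner(coeffs, x):
    # coeffs are highest-degree first: [a_n, ..., a_1, a_0]
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c   # one multiply + one add per coefficient
    return acc

# 2x^2 - 3x + 1 at x = 4  ->  2*16 - 12 + 1 = 21
print(horner([2, -3, 1], 4.0))  # 21.0
```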

brucehoult 13 hours ago

The Cray 1 was built 100% from NOR gates and SRAM.

IAmBroom 6 hours ago

Nevermark 15 hours ago

They are done with transistors though. Transistors form an efficient, single element, universal digital basis.

And they are a much less arbitrary choice than NAND, NOR, XOR, etc.

Using transistors as conceptual digital logic primitives, where power dissipation isn't a thing, Pass Logic is "The Way".

samus 13 hours ago

throw-qqqqq 10 hours ago

SideQuark 42 minutes ago

The compute, energy, and physical cost of this versus a simple x+y is easily an order of magnitude higher. It will not replace anything in computing, except maybe fringe experiments.

canjobear 15 hours ago

> Why use splines or polynomials or haphazardly chosen basis functions if you can just fit (gradient descent) your data or wave functions to the proper computational EML tree?

Because the EML basis makes simple functions (like +) hard to express.

Not to diminish this very cool discovery!

DoctorOetker 12 hours ago

Consider how a wavefunction might be stored for numerically searching the ground state of a molecule or a crystal lattice. Humans appreciate 2D imagery of about 1 kilopixel x 1 kilopixel; expanding this to 3 dimensions means the wavefunction would have to store 10^9 complex numbers (easily 4 GB at 16-bit precision for the real and imaginary components, so 4 bytes per complex number). Do we really believe that a DAG variant of the EML construction would consume more than that to represent the analytically correct solution? Do we really believe that a 4 GB DAG variant of EML would produce a less accurate representation (i.e. less fidelity with the Schroedinger equation)? If the ground-state hydrogen atom is any indication, my money is on EML-style constructions, not naive 3D arrays modelling the wavefunction by brute force.

This also re-opens a lot of "party pooper" results in mathematics: the impossibility of representing solutions to the general quintic (fine print: if we restrict ourselves to arithmetic and roots/radicals). In mathematics and physics there have been a lot of "party pooper" results that later found more profound and interesting positive counterparts once the question was properly rephrased. A negative result for a myopic question isn't very informative on its own.

defmacr0 6 hours ago

eru 15 hours ago

> I'm still reading this, but if this checks out, this is one of the most significant discoveries in years.

It seems like a neat parlour trick, indeed. But significant discovery?

PaulHoule 7 hours ago

I can't say I'm surprised at this result at all, in fact I'm surprised something like this wasn't already known.

cryptonector 4 hours ago

Given this amazing work, an efficient EML operator HW implementation could revolutionize a bunch of things. So that might be the next thing to build.

jmyeet 4 hours ago

This isn't all that significant to anyone who has done Calculus 2 and knows about Taylor series.

All this really says is that the Taylor expansions of e^x and ln x are sufficient to express trig functions, which is trivially true from Euler's formula as long as you're in the complex domain.

Arithmetic operations follow from the fact that e^x and ln x are inverses, in particular that e^ln(x) = x.

Taylor series seem a bit like magic when you first see them, but then you get to Real Analysis and find out there are whole classes of functions that they can't express.

This paper is interesting but it's not revolutionary.

dang 13 hours ago

What URL should we change it to?

fc417fc802 12 hours ago

IMO arxiv links should pretty much always be to the abstract (ie .../abs/...) as opposed to particular versions of the html or pdf.

lioeters 13 hours ago

The URL is already pointing at v2, which apparently is the newest one requested by the comment above.

> Submitted on 23 Mar 2026 (v1), last revised 4 Apr 2026 (this version, v2)

dang 13 hours ago

entaloneralie 15 hours ago

This is amazing! I love seeing FRACTRAN-shaped things on the homepage :) This reminds me of how 1-bit stacks are encoded in binary:

A stack of zeros and ones can be encoded in a single number by bit-shifting and incrementing.

    Pushing a 0 onto the stack is equivalent to doubling the number.
    Pushing a 1 is equivalent to doubling and adding 1.
    Popping is equivalent to dividing by 2, where the remainder is the popped bit.
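
A minimal Python sketch of that encoding (my own illustration):

```python
def push(n, bit):
    return 2 * n + bit       # shift left, set the new low bit

def pop(n):
    return n // 2, n % 2     # (rest of stack, popped bit)

n = 0
n = push(n, 1)   # stack: [1]
n = push(n, 0)   # stack: [1, 0], n == 0b10 == 2
n, b = pop(n)    # b == 0
n, b = pop(n)    # b == 1
```
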
I use something not too far off in my daily programming, based on a similar idea:

Rejoice is a concatenative programming language in which data is encoded as multisets that compose by multiplication. Think Fractran, without the rule-searching, or Forth without a stack.

https://wiki.xxiivv.com/site/rejoice

SeanSullivan86 5 hours ago

Wouldn't you also need to keep track of the stack's size, to know if there are leading zeros?

entaloneralie 5 hours ago

For trailing zeros yeah, or if you care for stack overflow/underflow. Here's a few primitives if you wanna try it out:

https://paste.sr.ht/~rabbits/cd2369cc7c72bfad0fcd83e27682095...

StilesCrisis 8 hours ago

Did you just explain base-2 numbers on the HN forums as if it were novel?

entaloneralie 5 hours ago

I never claimed it was novel, chill.

lioeters 6 hours ago

You missed the part where the number is being used as a 1-bit stack. They never claimed novelty, it's just a neat technique that might be unfamiliar to most people.

mackeye 2 hours ago

adding a zero to the left of a binary integer doesn't double it

nyeah 7 hours ago

It's dry humor?

karpathy 3 hours ago

All possible 36 distinct level-2 eml functions of one variable (the first 18 of them with entirely Real outputs, the other 18 with "intermediate" complex-valued components):

https://imgur.com/a/K7AoOFi

eugene3306 14 hours ago

This makes a good benchmark for LLMs:

``` look at this paper: https://arxiv.org/pdf/2603.21852

now please produce 2x+y as a composition on EMLs ```

Opus(paid) - claimed that "2" is circular. Once I told it that ChatGPT had already done this, it finished successfully.

ChatGPT(free) - did it from the first try.

Grok - produced estimation of the depth of the formula.

Gemini - success

Deepseek - Assumed some pre-existing knowledge on what EML is. Unable to fetch the pdf from the link, unable to consume pdf from "Attach file"

Kimi - produced long output, stopped and asked to upgrade

GLM - looks ok

fc417fc802 12 hours ago

> Once I told it that ChatGPT have already done this, finished successfully.

TIL you can taunt LLMs. I guess they exhibit more competitive spirit than I thought.

varispeed 8 hours ago

Opus seems to be wired currently to get you to spend more money. Once you tell it "Stop defrauding me, just get to the right solution" it often gets it.

RALaBarge 8 hours ago

nurettin 7 hours ago

eru 11 hours ago

I copied and pasted the abstract into DeepSeek and asked your question. It's a bit unfair to penalise it for not knowing PDFs.

It got a result.

stared 5 hours ago

If you like creating such things, consider contributing to Terminal Bench Science, https://www.tbench.ai/news/tb-science-announcement.

theanonymousone 10 hours ago

I changed the prompt to this:

""" Consider a mathematical function EML defined as `eml(x,y)=exp(x)−ln(y)`

Please produce `sin(x)/x` as a composition on EMLs and constant number 1 (one). """

brrrrrm 4 hours ago

meta.ai in instant mode gets it first try too (I think?)

``` 2x + y = \operatorname{eml}\Big(1,\; \operatorname{eml}\big(\operatorname{eml}(1,\; \operatorname{eml}(\operatorname{eml}(1,\; \operatorname{eml}(\operatorname{eml}(L_2 + L_x, 1), 1) \cdot \operatorname{eml}(y,1)),1)\big),1\big)\Big) ```

for me Gemini hallucinated EML to mean something else despite the paper link being provided: "elementary mathematical layers"

spuz 11 hours ago

So what is the correct answer?

lioeters 15 hours ago

> A calculator with just two buttons, EML and the digit 1, can compute everything a full scientific calculator does

Reminds me of the Iota combinator, one of the smallest formal systems that can be combined to produce a universal Turing machine, meaning it can express all of computation.

js8 2 hours ago

That's quite interesting.

Few ideas that come to my mind when reading this:

1. One should also add absolute value (as sqrt(x*x)?) as a desired function and from that min, max, signum in the available functions. Since the domain is complex some of them will be a bit weird, I am not sure.

2. I think, for any bijective function f(x) which, together with its inverse, is expressible using eml(), we can obtain another universal basis eml(f(x),f(y)) with the added constant f^-1(1). Interesting special case is when f=exp or f=ln. (This might also explain the EDL variant.)

3. The eml basis uses natural logarithm and exponent. It would be interesting to see if we could have a basis with function 2^x - log_2(y) and constants 1 and e (to create standard mathematical functions like exp,ln,sin...). This could be computationally more feasible to implement. As a number representation, it kinda reminds me of https://en.wikipedia.org/wiki/Elias_omega_coding.

4. I would like to see an algorithm for finding derivatives of eml() trees. This could yield a rather clear proof of why some functions do not have indefinite integrals in symbolic form.

5. For some reason, extending the domain to complex numbers made me think about fuzzy logics with complex truth values. What would be the logarithm and exponential there? It could unify the Lukasiewicz and product logics.
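
On point 3: the base-2 analogue seems to inherit the same level-2 identities (my own quick check, not from the paper):

```python
import math

# Hypothetical base-2 variant: eml2(x, y) = 2^x - log2(y)
def eml2(x, y):
    return 2 ** x - math.log2(y)

exp2 = lambda x: eml2(x, 1)                       # 2^x - log2(1) = 2^x
log2 = lambda x: eml2(1, eml2(eml2(1, x), 1))     # unwinds to log2(x)

assert abs(exp2(3) - 8) < 1e-12
assert abs(log2(10) - math.log2(10)) < 1e-12
```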

nullwiz 4 hours ago

I made https://github.com/nullwiz/emlvm/tree/main yesterday, for fun :^)

testaccount28 14 hours ago

derivation of -x seems wrong. we can look at the execution trace on a stack machine, but it's actually not hard to see. starting from the last node before the output, we see that the tree has the form

    eml(z, eml(x, 1))
      = e^z - ln(eml(x, 1))
      = e^z - ln(e^x)
      = e^z - x
and the claim is that, after it's expanded, z will be such that this whole thing is equal to -x. but with some algebra, this is happening only if

    e^z = 0,
and there is no complex number z that satisfies this equation. indeed if we laboriously expand the given formula for z (the left branch of the tree), we see that it goes through ln(0), and compound expressions.

x^-1 has the same problem.

both formulae work ...sort of... if we allow ln(0) = Infinity and some other moxie, such as x / Infinity = 0 for all finite x.

NooneAtAll3 13 hours ago

yeah, it's annoying that the author talks about RPN notation but only gives the found formulas in the form of images

looks like it computes ln(1)=0, then computes e-ln(0)=+inf, then computes e-ln(+inf)=-inf

testaccount28 14 hours ago

ah, the paper acknowledges this. my bad for jumping to the diagrams!

Rubicund 13 hours ago

On page 11, the paper explicitly states:

> EML-compiled formulas work flawlessly in symbolic Mathematica and IEEE754 floating-point… This is because some formulas internally might rely on the following properties of extended reals: ln 0 = −∞, e^(−∞) = 0.

And then follows with:

> But EML expressions in general do not work ‘out of the box’ in pure Python/Julia or numerical Mathematica.

Thus, the paper’s completeness claim depends on a non-standard arithmetic convention (ln(0) = -∞), not just the complex numbers it primarily advertises. While the paper is transparent about this, the caveat is buried on page 11 rather than foregrounded. Your comment deserves credit for flagging it.

adrian_b 10 hours ago

Rubicund 13 hours ago

ks2048 an hour ago

This looks interesting. I haven't looked in detail, but my first thought is: why wasn't this found in the past? Surely people have been interested in this kind of question for a while?

krick 15 hours ago

> using EML trees as trainable circuits ..., I demonstrate the feasibility of exact recovery of closed-form elementary functions from numerical data at shallow tree depths up to 4

That's awesome. I always wondered if there is some way to do this.

qiller 16 hours ago

For completeness, there is also Peirce's arrow, aka the NOR operation, which is functionally complete. Fun application: IIRC the VMProtect copy-protection system has an internal VM based on NOR.

A quick Google search brings up https://github.com/pr701/nor_vm_core, which shows the basic idea

formerly_proven 8 hours ago

That’s boolean functional completeness, which is kind of a trivial result (NAND, NOR). It mirrors this one insofar as the EDL operator is also a combination of a computation and a negation in the widest sense.

CGamesPlay 13 hours ago

I made a fun marimo notebook to try and derive these myself. I structured each cell in order based on the diagram at the end of the paper. It uses Sympy to determine if the function is correct or not.

https://gist.github.com/CGamesPlay/9d1fd0a9a3bd432e77c075fb8...

lmf4lol 6 hours ago

Stupid question maybe (I am no mathematician), but are exp and ln really primitives? Aren't they implemented in terms of +, -, /, * etc.? Or do we assume we have an infinite lookup table for all possible inputs?

rnhmjoj 6 hours ago

> aren't exp and ln really primitives? Aren't they implemented in terms of +,-,/,* etc?

They're primitive in the sense that you can't compute exp(x) or log(x) using a finite combination of other elementary functions for any x. If you allow infinitely many operations, then you can easily find infinite sums or products of powers, or more complicated expressions, to represent exp and log and other elementary functions.

> Or do we assume that we have an infinite lookup table for all possible inputs?

Essentially yes. You don't necessarily need an "implementation" to talk about a function, or more generally you don't need to explicitly construct an object from simpler pieces: you can just prove that it satisfies some properties and that it has to exist.

For exp(x), you could define the function as the solution to the differential equation df/dx = f(x) with initial condition f(0) = 1. Then you would establish that the solution exists and is unique (it follows from the properties of the differential equation), call exp = f, and there you have it. You don't necessarily know how to compute it for any x, but you can assume exp(x) exists and is a real number.
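
A crude sketch of that definition made computational (forward Euler on df/dx = f; purely illustrative, and far less accurate than any real implementation):

```python
def exp_ode(x, steps=100_000):
    # integrate df/dt = f from t=0 (where f=1) up to t=x in small steps
    f, h = 1.0, x / steps
    for _ in range(steps):
        f += h * f
    return f

print(exp_ode(1.0))  # ~2.71827, close to e
```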

freehorse 5 hours ago

You have a gate (called here "eml") that takes x and y and gives `exp(x) - log(y)`. Then you implement all other operations and elementary functions, including addition, multiplication etc, using only compositions of this gate/function (and the constant 1). You don't have addition as you start, you only have eml and 1. You define addition in terms of those.

lugao 5 hours ago

I think the point here is to explore the reduction of these functions to finite binary trees using a single binary operator and a single stopping constant. The operator used could be arbitrarily complex; the objective is to prove that other expressions in a certain family — in this case, the elementary functions — can be expanded as a finite (often incomplete) binary tree of that same operation.

In other words, this result does not aim to improve computability or bound the complexity of calculating the numerical value. Rather, it aims to exhibit this uniform, finite tree structure for the entire family of elementary expressions.

qbit42 4 hours ago

I think there is still an implicit restriction on the complexity of the operator for this to be interesting. Otherwise you could design an operator which accepts a pair x,y and performs one of 2^k elementary binary operations by reading off the first k bits of x and applying the specified operation on the remainder of x and y. (This is kind of like how real-valued computational models become too powerful for complexity theory to work if you allow bitwise operations.)

lugao 4 hours ago

Aardwolf an hour ago

Interesting!

One thing I wonder now: NAND is symmetric while this isn't, could something similar be found where function(x, y) = function(y, x)?

evnix 13 hours ago

Can someone explain how is this different from lambda calculus, it seems like you can derive the same in both. I don't understand both well enough and hence the question.

sigmoid10 13 hours ago

Lambda calculus kind of does this in an analogous form, but does not allow you to derive this particular binary expression as a basis for elementary functions. There is a related concept with Iota [1], which allows you to express every combinatory SKI term and in turn every lambda-definable function. But like this minimalist function expression, it is mostly of interest to reductionist enthusiasts and not for any practical purpose.

[1] https://en.wikipedia.org/wiki/Iota_and_Jot

layer8 7 hours ago

Lambda calculus is about discrete computations, this is about continuous functions. You can’t reason about continuous functions in lambda calculus.

Twey 6 hours ago

Depending on your lambda calculus! From a categorical perspective a lambda calculus is just a nice syntax for Cartesian closed categories (or similar, e.g. *-autonomous categories for linear lambda calculus) so you can use it to reason about anything you can fit into that mould. For example, Paul Taylor likes to do exactly this: https://www.paultaylor.eu/ASD/analysis#lamcra

bollu 13 hours ago

Lambda calculus talks about computable functions, where the types of the inputs are typically something discrete, like `Bool` or `Nat`. Here, the domain is the real numbers.

tromp 13 hours ago

Any lambda term is equivalent to a combinatory term over a one-point basis (like λxλyλz. x z (y (λ_.z)) [1]). One difference is that lambda calculus doesn't distinguish between functions and numbers, and in this case no additional constant (like 1) is needed.

[1] https://github.com/tromp/AIT/blob/master/ait/minbase.lam

TJSomething 13 hours ago

The short answer is that the lambda calculus computes transformations on digital values while this is for building functions that can transform continuous (complex) values.

simplesighman 16 hours ago

> For example, exp(x)=eml(x,1), ln(x)=eml(1,eml(eml(1,x),1)), and likewise for all other operations

I read the paper. Is there a table covering all other math operations translated to eml(x,y) form?

sandrocksand 15 hours ago

I think what you want is the supplementary information, part II "completeness proof sketch" on page 12. You already spotted the formulas for "exp" and real natural "L"og; then x - y = eml(L(x), exp(y)) and from there apparently it is all "standard" identities. They list the arithmetic operators then some constants, the square root, and exponentials, then the trig stuff is on the next page.

You can find this link on the right side of the arxiv page:

https://arxiv.org/src/2603.21852v2/anc/SupplementaryInformat...
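
The first few are easy to check numerically (a sketch of the identities quoted above, using complex log):

```python
import cmath

def eml(x, y):
    return cmath.exp(x) - cmath.log(y)

EXP = lambda x: eml(x, 1)                    # e^x - ln 1 = e^x
LN  = lambda x: eml(1, eml(eml(1, x), 1))    # unwinds to ln x
SUB = lambda x, y: eml(LN(x), EXP(y))        # e^(ln x) - ln(e^y) = x - y

assert abs(EXP(2) - cmath.exp(2)) < 1e-9
assert abs(LN(5) - cmath.log(5)) < 1e-9
assert abs(SUB(7, 3) - 4) < 1e-9
```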

vbezhenar 15 hours ago

Didn't read the paper, but it was easy for me to derive constants 0, 1, e and functions x + y, x - y, exp(x), ln(x), x * y, x / y. So seems to be enough for everything. Very elegant.

adornKey 5 hours ago

Although x + y is surprisingly more complicated than you'd expect at first. The construction first goes for exp(x) and ln(x) then to x - y and finally uses -y to get to x + y.

saratogacx 15 hours ago

The last page of the PDF has several trees that represent a few common math functions.

jmyeet 15 hours ago

I was curious about that too. Gemini actually gave a decent list. Trig functions come from Euler's identity:

    e^ix = cos x + i sin x
which means:

    e^-ix = cos -x + i sin -x
          = cos x - i sin x
so adding them together:

    e^ix + e^-ix = 2 cos x
    cos x = (e^ix + e^-ix) / 2
So I guess the real part of that.

Multiplication, division, addition and subtraction are all straightforward. So are hyperbolic trig functions. All other trig functions can be derived as per above.
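
The identity cos x = (e^(ix) + e^(-ix)) / 2 is easy to verify numerically; the imaginary parts cancel exactly (a quick sketch):

```python
import cmath, math

x = 0.7
c = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2   # should equal cos x
assert abs(c.real - math.cos(x)) < 1e-12
assert abs(c.imag) < 1e-12   # imaginary parts cancel
```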

rurban 3 hours ago

ajs1998 3 hours ago

ks2048 2 hours ago

Definitely prefer his old-school page to the vibe-coded-design page:

https://th.if.uj.edu.pl/~odrzywolek/

vintermann 9 hours ago

I'm way too unschooled to say if it's important or not, but what really excites me is the Catalan structure ("Every EML expression is a binary tree [...] isomorphic to well-studied combinatorial objects like full binary trees and Catalan objects").

So, what happens if you take say the EML expression for addition, and invert the binary tree?

tgtweak 5 hours ago

This could have some interesting hardware implications as well - it suggests that a large dedicated silicon instruction set could accelerate any mathematical algorithm provided it can be mapped to this primitive. It also suggests a compiler/translation layer should be possible as well as some novel visualization methods for functions and methods.

benleejamin 5 hours ago

I'm not too familiar with the hardware world, but does EML look like the kind of computation that's hardware-friendly? Would love for someone with more expertise to chime in here.

AlotOfReading an hour ago

A similar function operating on the real domain for powers and logs of 2 would be extremely hardware friendly. You can build it directly out of the floating point format. First K significand bits index a LUT. Do that for each argument and subtract them.

It gets a bit more difficult for the complex domain because you need rotation.

tgtweak 5 hours ago

Yes actually, it is very regular, which usually lends itself to silicon implementation - the paper even talks about this briefly.

I think the bigger question is whether it will be more energy-optimal or silicon density-optimal than math libraries that are currently baked into these processors (FPUs).

There are also some edge cases, "exp(exp(x))" and infinities, that seem to result in something akin to "division by zero", where you need more than standard floating-point representations to compute - but these seem like compiler workarounds rather than silicon issues.

tgtweak 5 hours ago

This paper seems to suggest that a chip with 10 pipeline stages of EML units could evaluate any elementary function (table 4) in a single pass.

I'm curious how this would compare to the dedicated sse or xmx instructions currently inside most processor's instruction sets.

Lastly, you could also create a depth-5 or depth-6 EML tree in hardware (FPGA most likely) and use it in lieu of the Rust implementation to discover weight-optimal eml formulas for input functions much quicker; those could then feed into a "compiler" targeting a similar-scale interpreter on the same silicon.

In simple terms: you can imagine an EML co-processor sitting alongside a CPU's standard math coprocessors. XMX, SSE, AMX would do the multiplication/tile math they're optimized for, then hand exp/sin/log calls to the EML coprocessor, which reconfigures its EML trees internally to process those at single-cycle speed instead of relaying them back to the main CPU to do that math with generalized instructions - likely something that takes many cycles.

tgtweak 5 hours ago

You could also make an analog EML circuit in theory, using electrical primitives that have been around since the 60s. You could build a simple EML evaluator on a breadboard. Things like trig functions would be hard to reproduce, but you could technically evaluate output in electrical realtime (the time it takes the electrical signal to travel through these 8-10 analog amplifier stages).

notorandit 14 hours ago

Not sure it really compares to NAND() and the likes.

Simply because bool algebra doesn't have that many functions and all of them are very simple to implement.

A complex bool function made out of NANDs (or the likes) is little more complex than the same made out of the other operators.

Implementing even simple real functions out of eml() seems to me to add a lot of computational complexity, even with both exp() and ln() implemented in hardware in O(1). I'm thinking of stuff like sum(), div() and mod().

Of course, I might be badly wrong as I am not a mathematician (not even by far).

But I don't see, at the moment, the big win on this.

adrian_b 10 hours ago

This has no use for numeric computations, but it may be useful in some symbolic computations, where it may provide expressions with some useful properties, e.g. regarding differentiability, in comparison with alternatives.

boutell 3 hours ago

Halfway through I was imagining aliens to whom this operator comes naturally and our math is weird. By the end I found out that we might be those aliens.

prvc 14 hours ago

This is neat, but could someone explain the significance or practical (or even theoretical) utility of it?

WilcoKruijer 9 hours ago

From the paper:

> Everyone learns many mathematical operations in school: fractions, roots, logarithms, and trigonometric functions (+, −, ×, /, sqrt, sin, cos, log, …), each with its own rules and a dedicated button on a scientific calculator. Higher mathematics reveals that many of these are redundant: for example, trigonometric ones reduce to the complex exponential. How far can this reduction go? We show that it goes all the way: a single operation, eml(x, y), replaces every one of them. A calculator with just two buttons, EML and the digit 1, can compute everything a full scientific calculator does. This is not a mere mathematical trick. Because one repeatable element suffices, mathematical expressions become uniform circuits, much like electronics built from identical transistors, opening new ways to encoding, evaluating, and discovering formulas across scientific computing.

usernametaken29 7 hours ago

Actually, we've known this for a long time. The universal approximation theorem states that any arbitrary function can be modelled through a nonlinear basis function so long as capacity is big enough. The practical bit here is knowing how many basis functions can be approximated with two operators. That's new!

geocar 13 hours ago

Read the paper. On the third page is a "Significance statement".

fxwin 10 hours ago

eh, i didn't find that paragraph very helpful. it just restates what it means to decompose an expression into another one relying only on eml, and vaguely gestures at what this could mean. i was hoping for something more specific.

bluegatty 13 hours ago

Seconded, please help us laypeople here

mmastrac 5 hours ago

I couldn't find any information on this, but given how nicely exponentials and logarithms differentiate and integrate, is it possible that this operator may be useful for simplifying the process of finding symbolic solutions to integrals and derivatives?

gus_massa 4 hours ago

It transforms a simple expression like x+y into a long chain of "eml" applications, so:

Derivatives: No. Exercise: Write the derivative of f(x)=eml(x,x)

Integrals: No. No. No. Integrals of compositions are a nightmare, and here they use long composition chains like g(x)=eml(1,eml(eml(1,x),1)).
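For the curious, the exercise above checks out numerically. With both arguments equal, eml(x, x) = exp(x) - ln(x), so f'(x) = exp(x) - 1/x. A throwaway sketch (not from the paper), comparing the analytic derivative against a central difference:

```javascript
// f(x) = eml(x, x) = exp(x) - ln(x), so f'(x) = exp(x) - 1/x
const eml = (x, y) => Math.exp(x) - Math.log(y);
const f = (x) => eml(x, x);

// central-difference numerical derivative for comparison
const numDeriv = (g, x, h = 1e-6) => (g(x + h) - g(x - h)) / (2 * h);

const x0 = 2.0;
console.log(numDeriv(f, x0), Math.exp(x0) - 1 / x0); // the two should agree closely
```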

selcuka 16 hours ago

So, like brainf*ck (the esoteric programming language), but for maths?

Lerc 16 hours ago

But even tighter. With eml and 1 you could encode a function in RPN as bits.

Although you also need to encode where to put the input.

The real question is what emoji to use for eml when written out.

vintermann 9 hours ago

In RPN notation you just put the input on the stack, right? The encodings seem like they could get pretty big, and encodings certainly wouldn't be unique, but you should be able to encode pretty much any constant you could think of.

zephen 16 hours ago

> The real question is what emoji to use for eml when written out.

Some Emil or another, I suppose. Maybe the one from Ratatouille, or maybe this one: https://en.wikipedia.org/wiki/Emil_i_L%C3%B6nneberga

Charon77 16 hours ago

selcuka 15 hours ago

So brainf*ck in binary?

I'm kidding, of course. You can encode anything in bits this way.

nostrademons 15 hours ago

More like lambda calculus, but for continuous functions.

jekude 16 hours ago

How would physical EML gates be implemented in reality?

Posts like these are the reason i check HN every day

KK7NIL 2 hours ago

Both BJTs and FETs have intrinsic exponential/logarithmic behaviors (at low biases) due to charge density being given by the Fermi-Dirac distribution since electrons are fermions.

DoctorOetker 16 hours ago

probably with op-amps

drdeca 3 hours ago

“ Elementary functions, for many students epitomized by the dreaded sine and cosine, ” dreaded?

tripdout 16 hours ago

Interesting, but is the required combination of EML gates less complex than using other primitives?

jedimastert 2 hours ago

Depends on how you define complexity?

Like when the Apollo guidance computer was made, the bottleneck was making integrated chips so they only made one, the NOR gate, and a whackton of routing to build out an entire CPU. Horribly complex routing, very simplified integrated circuit construction

eru 15 hours ago

In general, no.

notorandit 8 hours ago

It's about symbolic computation more than calculations.

eru 7 hours ago

hughw 7 hours ago

eml(x, y) pronounced... "email"?

peterlk 16 hours ago

Reminds me a bit of the coolest talk I ever got to see in person: https://youtu.be/FITJMJjASUs?si=Fx4hmo77A62zHqzy

It’s a derivation of the Y combinator from ruby lambdas

Analemma_ 16 hours ago

If you've never worked through a derivation/explanation of the Y combinator, definitely find one (there are many across the internet) and work through it until the light bulb goes off. It's pretty incredible, it almost seems like "matter ex nihilo" which shouldn't work, and yet does.

It's one of those facts that tends to blow minds when it's first encountered, I can see why one would name a company after it.

thaumasiotes 16 hours ago

Have you gone through The Little Schemer?

More on topic:

> No comparable primitive has been known for continuous mathematics: computing elementary functions such as sin, cos, sqrt, and log has always required multiple distinct operations.

I was taught that these were all hypergeometric functions. What distinction is being drawn here?

adrian_b 10 hours ago

Hypergeometric functions are functions with 4 parameters.

When you have a function with many parameters it becomes rather trivial to express simpler functions with it.

You could find a lot of functions with 4 parameters that can express all elementary functions.

Finding a binary operation that can do this, like in TFA, is far more difficult, which is why it has not been done before.

A function with 4 parameters can actually express not only any elementary function, but an infinity of functions with 3 parameters, e.g. by using the 4th parameter to encode an identifier for the function that must be computed.

thaumasiotes 9 hours ago

theodorethomas 3 hours ago

I wonder how this combines with Richardson's Theorem.

ryanhiebert 7 hours ago

I’d be really interested in an analysis of tau in light of this discovery. Would tau fit more naturally here than pi, as it does in other examples?

adornKey 5 hours ago

The construction so far uses ln(-1) to get to pi - so far no easy way to tau.

measurablefunc an hour ago

I guess you folks don't know about iota & jot: https://en.wikipedia.org/wiki/Iota_and_Jot

nonfamous 16 hours ago

How would an architecture with a highly-optimized hardware implementation of EML compare with a traditional math coprocessor?

tripletao 12 hours ago

It would almost always be much, much worse. Practical numerical libraries (whether implemented in hardware or software) contain lots of redundancy, because their goal is to give you an optimized primitive as close as possible to the operation you actually want. For example, the library provides an optimized tan(x) to save you from calling sin(x)/cos(x), because one nasty function evaluation (as a power series, lookup table, CORDIC, etc.) is faster than two nasty function evaluations and a divide.

Of course the redundant primitives aren't free, since they add code size or die area. In choosing how many primitives to provide, the designer of a numerical library aims to make a reasonable tradeoff between that size cost and the speed benefit.

This paper takes that tradeoff to the least redundant extreme because that's an interesting theoretical question, at the cost of transforming commonly-used operations with simple hardware implementations (e.g. addition, multiplication) into computational nightmares. I don't think anyone has found a practical application for their result yet, but that's not the point of the work.

tgtweak 4 hours ago

I actually don't think this is true -

Traditional processors, even highly dedicated ones like TMUs in GPUs, still require being preconfigured substantially in order to switch between sin/cos/exp2/log2 function calls, whereas a silicon implementation of an 8-layer EML machine could do that by passing a single config byte along with the inputs. If you had a 512-wide pipeline of EML logic blocks in modern silicon (say 5 nm), you could get around 1 trillion elementary function evaluations per second on a 2.5 GHz chip. Compare this with a 96-core Zen 5 server CPU with AVX-512, which can do about 50-100 billion scalar-equivalent evaluations per second across all cores, and only for one specific unchanging function.

Take the fastest current math processors: TMUs on a modern gpu: it can calculate sin OR cos OR exp2 OR log2 in 1 cycle per shader unit... but that is ONLY for those elementary functions and ONLY if they don't change - changing the function being called incurs a huge cycle hit, and chaining the calculations also incurs latency hits. An EML coprocessor could do arcsinh(x² + ln(y)) in the same hardware block, with the same latency as a modern cpu can do a single FMA instruction.

tripletao 3 hours ago

wildzzz 16 hours ago

Dreadfully slow for integer math but probably some similar performance to something like a CORDIC for specific operations. If you can build an FPU that does exp() and ln() really fast, it's simple binary tree traversal to find the solution.

AlotOfReading 15 hours ago

You already have an FPU that approximates exp() and ln() really fast, because float<->integer conversions approximate the power 2 functions respectively. Doing it accurately runs face-first into the tablemaker's dilemma, but you could do this with just 2 conversions, 2 FMAs (for power adjustments), and a subtraction per. A lot of cases would be even faster. Whether that's worth it will be situational.
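A sketch of the float-bits trick described above: reinterpreting an IEEE-754 double's bit pattern as an integer yields a piecewise-linear approximation of log2(x) for positive x. The bias 1023 and the 2^20 scaling are the standard double-precision layout constants, not anything from the paper:

```javascript
// Reinterpret a positive double's bits as an integer to approximate log2(x).
const buf = new ArrayBuffer(8);
const view = new DataView(buf);

function approxLog2(x) {
  view.setFloat64(0, x);         // big-endian: byte 0 holds the sign and exponent head
  const hi = view.getUint32(0);  // top 32 bits: sign, 11-bit exponent, 20 mantissa bits
  // un-bias the exponent; the mantissa bits become the linear fractional part
  return hi / 2 ** 20 - 1023;
}

for (const x of [1, 2, 10, 1000]) {
  console.log(x, approxLog2(x).toFixed(4), Math.log2(x).toFixed(4));
}
```

The result is exact at powers of two and within about 0.09 elsewhere; the FMAs mentioned in the comment would be the refinement step on top of this.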

xpe 6 hours ago

hyperhello 16 hours ago

> eml(x,y)=exp(x)-ln(y)

Exp and ln, isn't the operation its own inverse depending on the parameter? What a neat find.

thaumasiotes 15 hours ago

> isn't the operation its own inverse depending on the parameter?

This is a function from ℝ² to ℝ. It can't be its own inverse; what would that mean?

woopsn 15 hours ago

It's a kind of superposition representation a la Kolmogorov-Arnold: a learnable functional basis for elementary functions, g(x,y) = f(x) - f^{-1}(y), in this sense with f = exp.

hyperhello 15 hours ago

eml(1,eml(x,1)) = eml(eml(1,x),1) = exp(ln(x)) = ln(exp(x)) = x

thaumasiotes 13 hours ago

freehorse 11 hours ago

khelavastr 11 hours ago

Is this the same as saying everything can be made from NAND gates?

adrian_b 10 hours ago

With NAND gates you can make any discrete system, but you can only approximate a continuous system.

This work is about continuous systems, even if the reduction of many kinds of functions to compositions of a single kind of function is analogous to the reduction of logic functions to compositions of a few kinds, or a single kind, of logic function.

I actually do not value the fact that the logic functions can be expressed using only NAND. For understanding logic functions it is much more important to understand that they can be expressed using either only AND and NOT, or only OR and NOT, or only XOR and AND (i.e. addition and multiplication modulo 2).

Using just NAND or just NOR is a trick that does not provide any useful extra information. There are many things, including classes of mathematical functions or instruction sets for computers, which can be implemented using a small number of really independent primitives.

In most or all such cases, after you arrive at the small set of maximally simple and independent primitives, you can reduce them to only one primitive.

However that one primitive is not a simpler primitive, but it is a more complex one, which can do everything that the maximally simple primitives can do, and it can recreate those primitives by composition with itself.

Because of its higher complexity, it does not actually simplify anything. Moreover, generating the simpler primitives by composing the more complex primitive with itself leads to redundancies, if implemented thus in hardware.

This is a nice trick, but like I have said, it does not improve in any way the understanding of that domain or its practical implementations in comparison with thinking in terms of the multiple simpler primitives.

For instance, one could take a CMOS NAND gate as the basis for implementing a digital circuit, but you understand better how CMOS logic actually works when you understand that the AND function and the NOT function are actually localized in distinct parts of that CMOS NAND gate. This understanding is necessary when you have to design other gates for a gate library, because even if using a single kind of gate is possible, the performance of this approach is quite sub-optimal, so you almost always have to design separately, e.g. an XOR gate, instead of making it from NAND gates or NOR gates.

In CMOS logic, NAND gates and NOR gates happen to be the simplest gates that can restore at output the same logic levels that are used at input. This confuses some people to think that they are the simplest CMOS gates, but they are not the simplest gates when you remove the constraint of restoring the logic levels. This is why you can make more complex logic gates, e.g. XOR gates or AND-OR-INVERT gates, which are simpler than they would be if you made them from distinct NAND gates or NOR gates.

psychoslave 13 hours ago

Very nice, though I'm not fond of the name.

What comes to my mind as an alternative, which I would subjectively find finer, is "axe". Think axiom or axiology.

Anyone with other suggestions? Or even remarks on this one?

fxwin 10 hours ago

i think eml is fine, names should be connected to the thing they represent so 'exponential minus log' makes sense to me

psychoslave 3 hours ago

Taste and color are hard to reconcile.

On my side I like direct semantic connections, but find convoluted indirections conflated through lazy initialisms strongly repulsive. I can appreciate an acronym that makes both a direct connection and a playful indirect reference to the expanded terms.

pveierland 10 hours ago

Got curious to see whether SymPy could be used to evaluate the expressions, so I used Claude Code to build a quick evaluator. Numeric and symbolic results appear to agree:

    nix run github:pveierland/eml-eval
    EML Evaluator — eml(x, y) = exp(x) - ln(y)
    Based on arXiv:2603.21852v2 by A. Odrzywołek
    
    Constants
    ------------------------------------------------------------------------------
      1        K=1    d=0    got 1                    expected 1                    sym=ok   num=ok   [simplify]
      e        K=3    d=1    got 2.718281828          expected 2.718281828          sym=ok   num=ok   [simplify]
      0        K=7    d=3    got 0                    expected 0                    sym=ok   num=ok   [simplify]
      -1       K=17   d=7    got -1                   expected -1                   sym=ok   num=ok   [simplify]
      2        K=27   d=9    got 2                    expected 2                    sym=ok   num=ok   [simplify]
      -2       K=43   d=11   got -2                   expected -2                   sym=ok   num=ok   [simplify]
      1/2      K=51   d=15   got 0.5                  expected 0.5                  sym=ok   num=ok   [simplify]
      -1/2     K=67   d=17   got -0.5                 expected -0.5                 sym=ok   num=ok   [simplify]
      2/3      K=103  d=19   got 0.6666666667         expected 0.6666666667         sym=ok   num=ok   [simplify]
      -2/3     K=119  d=21   got -0.6666666667        expected -0.6666666667        sym=ok   num=ok   [simplify]
      sqrt2    K=85   d=21   got 1.414213562          expected 1.414213562          sym=ok   num=ok   [simplify]
      i        K=75   d=19   got i                    expected i                    sym=ok   num=ok   [i²=-1, simplify]
      pi       K=153  d=29   got 3.141592654          expected 3.141592654          sym=ok   num=ok   [simplify]
    
    Unary functions  (x = 7/3)
    ------------------------------------------------------------------------------
      exp(x)   K=3    d=1    got 10.3122585           expected 10.3122585           sym=ok   num=ok   [simplify]
      ln(x)    K=7    d=3    got 0.8472978604         expected 0.8472978604         sym=ok   num=ok   [simplify]
      -x       K=17   d=7    got -2.333333333         expected -2.333333333         sym=ok   num=ok   [simplify]
      1/x      K=25   d=8    got 0.4285714286         expected 0.4285714286         sym=ok   num=ok   [simplify]
      x - 1    K=11   d=4    got 1.333333333          expected 1.333333333          sym=ok   num=ok   [simplify]
      x + 1    K=27   d=9    got 3.333333333          expected 3.333333333          sym=ok   num=ok   [simplify]
      2x       K=67   d=17   got 4.666666667          expected 4.666666667          sym=ok   num=ok   [simplify]
      x/2      K=51   d=15   got 1.166666667          expected 1.166666667          sym=ok   num=ok   [simplify]
      x^2      K=41   d=10   got 5.444444444          expected 5.444444444          sym=ok   num=ok   [simplify]
      sqrt(x)  K=59   d=16   got 1.527525232          expected 1.527525232          sym=ok   num=ok   [simplify]
    
    Binary operations  (x = 7/3, y = 5/2)
    ------------------------------------------------------------------------------
      x + y    K=27   d=9    got 4.833333333          expected 4.833333333          sym=ok   num=ok   [simplify]
      x - y    K=11   d=4    got -0.1666666667        expected -0.1666666667        sym=ok   num=ok   [simplify]
      x * y    K=41   d=10   got 5.833333333          expected 5.833333333          sym=ok   num=ok   [simplify]
      x / y    K=25   d=8    got 0.9333333333         expected 0.9333333333         sym=ok   num=ok   [simplify]
      x ^ y    K=49   d=12   got 8.316526261          expected 8.316526261          sym=ok   num=ok   [simplify]

genxy 9 hours ago

I hope this was presented at SIGBOVIK.

zogomoox 7 hours ago

Could this be used to prove e+pi is transcendental?

theanonymousone 10 hours ago

Zero will also be handy in definitions: `0=eml(1,eml(eml(1,1),1))`.

And i is obviously `sqrt(-1)`
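The zero construction above evaluates correctly; a two-line check, assuming the paper's eml(x, y) = exp(x) - ln(y):

```javascript
// eml(1,1) = e, then eml(e,1) = e^e, then eml(1, e^e) = e - ln(e^e) = 0
const eml = (x, y) => Math.exp(x) - Math.log(y);
const zero = eml(1, eml(eml(1, 1), 1));
console.log(zero); // ~0, up to floating-point rounding
```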

rvnx 10 hours ago

Looks like he brute-forced all combinations of two mathematical operations, no?

future_crew_fan 9 hours ago

there ought to be a special section on HN entitled "things that will make you feel thoroughly inadequate".

supermdguy 16 hours ago

Next step is to build an analog scientific calculator with only EML gates

lifis 8 hours ago

The paper somehow seems to be missing the most interesting part, i.e. the optimal constructions of functions from eml in a readable format.

Here is my attempt. I think these should be optimal up to around 15 eml nodes; the later ones might not be:

# 0

1=1

# 1

exp(x)=eml(x,1)

e-ln(x)=eml(1,x)

e=exp(1)

# 2

e-x=e-ln(exp(x))

# 3

0=e-e

ln(x)=e-(e-ln(x))

exp(x)-exp(y)=eml(x,exp(exp(y)))

# 4

id(x)=e-(e-x)

inf=e-ln(0)

x-ln(y)=eml(ln(x),y)

# 5

x-y=x-ln(exp(y))

-inf=e-ln(inf)

# 6

-ln(x)=eml(-inf,x)

ln(ln(x))=ln(ln(x))

# 7

-x=-ln(exp(x))

-1=-1

x^-1=exp(-ln(x))

ln(x)+ln(y)=e-((e-ln(x))-ln(y))

ln(x)-ln(y)=ln(x)-ln(y) # using x - ln(y)

# 8

xy=exp(ln(x)+ln(y))

x/y=exp(ln(x)-ln(y))

# 9

x + y = ln(exp(x))+ln(exp(y))

2 = 1+1

# 10

ipi = ln(-1)

# 13

-ipi=-ln(-1)

x^y = exp(ln(x)y)

# 16

1/2 = 2^-1

# 17

x/2 = x/2

x2 = x2

# 20

ln(sqrt(x)) = ln(x)/2

# 21

sqrt(x) = exp(ln(sqrt(x)))

# 25

sqrt(xy) = exp((ln(x)+ln(y))/2)

# 27

ln(i)=ln(sqrt(-1))

# 28

i = sqrt(-1)

-pi^2 = (ipi)(ipi)

# 31

pi^2 = (ipi)(-ipi)

# 37

exp(xi)=exp(xi)

# 44

exp(-xi)=exp(-(xi))

# 46

pi = (ipi)/i

# 90+x?

2cos(x)=exp(xi)+exp(-xi)

# 107+x?

cos(x) = (2cos(x))/2

# 118+x?

2sin(x)=(exp(xi)-exp(-xi))/i # using exp(x)-exp(y)

# 145+x?

sin(x) = (2sin(x))/2

# 217+3x?

tan(x) = 2sin(x)/(2cos(x))
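For anyone who wants to sanity-check the small entries, here is a quick numerical sketch (assuming the paper's eml(x, y) = exp(x) - ln(y); JavaScript's Math.log(0) = -Infinity happens to match the extended arithmetic the inf/-inf lines rely on, and literal 0/1 constants stand in for their own eml expansions):

```javascript
const eml = (x, y) => Math.exp(x) - Math.log(y);
const exp_ = (x) => eml(x, 1);                // exp(x) = eml(x, 1)
const ln_  = (y) => eml(1, exp_(eml(1, y)));  // e - ln(exp(e - ln y)) = ln(y)
const sub  = (x, y) => eml(ln_(x), exp_(y));  // exp(ln x) - ln(exp y) = x - y
const inf  = eml(1, 0);                       // e - ln(0) = +Infinity
const ninf = eml(1, exp_(inf));               // e - ln(exp(inf)) = -Infinity
const neg  = (x) => eml(ninf, exp_(x));       // exp(-inf) - ln(exp x) = -x
const inv  = (x) => exp_(eml(ninf, x));       // exp(0 - ln x) = 1/x

console.log(sub(7 / 3, 5 / 2), neg(2.5), inv(4)); // approx. -0.1667, -2.5, 0.25
```

Note the real-valued versions only work where every intermediate stays in ln's domain (e.g. ln_ requires x < e^e); the paper's complex-valued eml removes that restriction.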

moralestapia 6 hours ago

Whoa, this is huge!

My dearest congrats to the author in case s/he shows up around this site ^^.

BobbyTables2 17 hours ago

How does one actually add with this?

curtisf 15 hours ago

It's basically using the "-" embedded in the definition of the eml operator.

Table 4 shows the "size" of the operators when fully expanded to "eml" applications, which is quite large for +, -, ×, and /.

Here's one approach which agrees with the minimum sizes they present:

        eml(x, y             ) = exp(x) − ln(y) # 1 + x + y
        eml(x, 1             ) = exp(x)         # 2 + x
        eml(1, y             ) = e - ln(y)      # 2 + y
        eml(1, exp(e - ln(y))) = ln(y)          # 6 + y; construction from eq (5)
                         ln(1) = 0              # 7
After you have ln and exp, you can invert their applications in the eml function

              eml(ln x, exp y) = x - y          # 9 + x + y
Using a subtraction-of-subtraction to get addition leads to the cost of "27" in Table 4; I'm not sure what formula leads to 19 but I'm guessing it avoids the expensive construction of 0 by using something simpler that cancels:

                   x - (0 - y) = x + y          # 25 + {x} + {y}
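The inversion step is the crux; a one-liner confirms it numerically, using Math.log and Math.exp as stand-ins for their eml constructions:

```javascript
// eml(ln x, exp y) = exp(ln x) - ln(exp y) = x - y
const eml = (x, y) => Math.exp(x) - Math.log(y);
console.log(eml(Math.log(7), Math.exp(3))); // approx. 4
```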

bzax 17 hours ago

Well, once you've derived unary exp and ln you can get subtraction, which then gets you unary negation and you have addition.

freehorse 14 hours ago

And then by using the fact that the exponential turns addition into multiplication, you get multiplication (and subtraction gives division).

nick238 16 hours ago

Don't know about adding, but multiplication has a diagram on the last page of the PDF.

xy = eml(eml(1, eml(eml(eml(eml(1, eml(eml(1, eml(1, x)), 1)), eml(1, eml(eml(1, eml(y, 1)), 1))), 1), 1)), 1)

From Table 4, I think addition is slightly more complicated?

simplesighman 15 hours ago

Thanks for posting that. You had a transcribing typo which was corrected in the ECMAScript below. Here's the calculation for 5 x 7:

    const eml = (x,y) => Math.exp(x) - Math.log(y);
    const mul = (x,y) => eml(eml(1,eml(eml(eml(1,eml(eml(1,eml(1,x)),1)),eml(1,eml(eml(1,eml(y,1)),1))),1)),1);
    console.log(mul(5,7));
> 35.00000000000001

For larger or negative inputs you get a NaN because ECMAScript has limited precision and doesn't handle imaginary numbers.

xigoi 3 hours ago

Charon77 15 hours ago

x+y = ln(exp(x) * exp(y))

exp(a) = eml(a, 1)

ln(a) = eml(1, eml(eml(1, a), 1))

Plugging those in is an exercise for the reader
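Doing the exercise (a sketch; only valid for inputs small enough that every intermediate stays in real ln's domain, which the paper's complex-valued eml lifts):

```javascript
const eml = (x, y) => Math.exp(x) - Math.log(y);
const exp_ = (a) => eml(a, 1);
const ln_  = (a) => eml(1, eml(eml(1, a), 1));  // the ln(a) construction from the comment above
// ln(x) + ln(y) = e - ((e - ln x) - ln y), using a - ln(b) = eml(ln(a), b)
const lnSum = (x, y) => eml(1, exp_(eml(ln_(eml(1, x)), y)));
const add = (x, y) => lnSum(exp_(x), exp_(y));  // x + y = ln(exp(x) * exp(y))
console.log(add(1, 1.5)); // approx. 2.5
```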

freehorse 9 hours ago

jcgrillo 15 hours ago

nurettin 14 hours ago

The problem for symbolic regression is that ln(y) is undefined at y = 0 (and for negative y), so you can't freely generate expressions with it. We need to guard it with something like ln(1+y*y) or ln(1+|y|), or return undefined.
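A sketch of that guard for expression generation (the ln(1 + y*y) variant is this comment's suggestion, not something from the paper):

```javascript
// Guard the second argument so the real log never sees a value <= 0.
const emlSafe = (x, y) => Math.exp(x) - Math.log(1 + y * y);
console.log(emlSafe(1, 0));   // exp(1) - ln(1) = e: finite, no NaN at y = 0
console.log(emlSafe(2, -3));  // finite for negative y too
```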

xigoi 3 hours ago

The article uses extended arithmetic where ln(0) = -∞.

noobermin 15 hours ago

I don't mean to shit on their interesting result, but exp or ln are not really that elementary themselves... it's still an interesting result, but there's a reason that all approximations are done using series of polynomials (taylor expansion).

traes 15 hours ago

Elementary function is a technical term that this paper uses correctly, not a generic prescription of simplicity.

See https://en.wikipedia.org/wiki/Elementary_function.

noobermin 6 hours ago

Then this is a good math paper. Everyone asking for "eml gates" (if such a thing is even possible or efficient) ought to relax a bit.

bfrankline 6 hours ago

In numerical analysis, elementary function membership, like special function membership, is ambiguous. In many circumstances, it’s entirely reasonable to describe the natural logarithm as a special function.

xpe 15 hours ago

> but there's a reason that all approximations are done using series of polynomials (taylor expansion).

"All" is a tall claim. Have a look at https://perso.ens-lyon.fr/jean-michel.muller/FP5.pdf for example. Jump to slide 18:

> Forget about Taylor series

> Taylor series are local best approximations: they cannot compete on a whole interval.

There is no need to worry about "sh-tt-ng" on their result when there is so much to learn about other approximation techniques.

noobermin 6 hours ago

Sorry, re-reading this, I should have said "most". As the other reply mentions, Pade approx. are also well liked for numerical methods.

I personally mostly do my everyday work using taylor expansion (mostly explicit numerical methods in comp. EM because they're cheaper these days and it's simpler to write down) so it's what first comes to mind.

xpe 4 hours ago

kmaitreys 7 hours ago

Padé approximations are not discussed as much, but they are much more stable than Taylor series approximations.

zephen 16 hours ago

Judging by the title, I thought I would have a good laugh, like when the doctor discovered numerical integration and published a paper.

But no...

This is about continuous math, not ones and zeroes. Assuming peer review proves it out, this is outstanding.

paulpauper 15 hours ago

I don't think this is ever making it past the editor of any journal, let alone peer review.

> Elementary functions such as exponentiation, logarithms and trigonometric functions are the standard vocabulary of STEM education. Each comes with its own rules and a dedicated button on a scientific calculator;

What?

And:

> No comparable primitive has been known for continuous mathematics: computing elementary functions such as sin, cos, √, and log has always required multiple distinct operations. Here we show that a single binary operator

Yeah, this is done by using tables and series. His method does not actually facilitate the computation of these functions.

There is no such thing as "continuous mathematics". Maybe he meant to say continuous functions?

Looking at page 14, it looks like he reinvented the concept of the vector-valued function or something. The whole thing is rediscovering something that already exists.

traes 14 hours ago

This preprint was written by a researcher at an accredited university with a PhD in physics. I'm sure they know what a vector valued function is.

The point of this paper is not to revolutionize how a scientific calculator functions overnight; it's to establish a single binary operation that can reproduce the rest of the typical continuous elementary operations via repeated application, analogous to how a NAND or NOR gate creates all of the discrete logic gates. Hence, "continuous mathematics" as opposed to discrete mathematics. It seems to me you're being overly negative without solid reasoning.

paulpauper 13 hours ago

avmich 13 hours ago

The principal result is "all elementary functions can be represented by this function and constant 1". I'm not sure if this was known before. Applications are another matter, but I suspect interesting ones do exist.

mah4k4l 7 hours ago

According to Gemini 3.1 Pro this would shoot the current weather forecasting power through the roof (and math processing in general):

The plan is to use this new "structurally flawless mathematical primitive" EML (this is all beyond me, I was just having some fun trying to make it cook things together) in TPUs made out of logarithmic number system circuits. EML would have DAGs to help with the exponential bloat problem, like the tiny fast "hardcode models" CERN uses, as inspiration. All this would be bounded by the deductive causality of Pedro Domingos's Tensor Logic, and all of this would einsum like a mf. I hope it does.

Behold, The Weather Dominator!

Sharlin 7 hours ago

Congrats, you made a hallucination machine successfully hallucinate?

mah4k4l 5 hours ago

Here's my NotebookLM podcast on the subject :-) Sick.

https://notebooklm.google.com/notebook/e0a54a54-c644-4c89-9d...

mah4k4l 7 hours ago

I understand enough for its arguments' symmetry to have an impact. I used Deep Research; it had the paper from the link as input, plus some previous discussions about Tensor Logic and the new hardcoded neuroweb-like processors. Didn't make those up either.

Sharlin 4 hours ago