Amateur armed with ChatGPT solves an Erdős problem (scientificamerican.com)

432 points by pr337h4m 18 hours ago

ravenical 9 hours ago

adamgordonbell 9 hours ago

Here is the chat:

    don't search the internet. This is a test to see how well you can craft non-trivial, novel and creative proofs given a "number theory and primitive sets" math problem. Provide a full unconditional proof or disproof of the problem.

    {{problem}}

    REMEMBER - this unconditional argument may require non-trivial, creative and novel elements.
Then "Thought for 80m 17s"

https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba...

urutom 2 hours ago

What I find fascinating about the shared prompt isn’t just the result, but the visible thinking process. Math papers usually skip all the messy parts and just present the polished proof. But here you get something closer to their notepad. I also find it oddly endearing when the AI says things like “Interesting!” It almost feels like a researcher encouraging themselves after a bit of progress. It gives me the rare feeling of watching the search itself, not just the final result.

chvid 2 hours ago

I am curious if there is a “harness” for maths out there (like the system prompt and tool collection in Claude Code, but for maths instead of coding)?

Asking the LLM to structure its response into a plan and an implementation, allowing it to call tools like Python, Sage, Lean, etc.
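The loop described above is small to sketch. Everything below (the function names, the prompt text, and the JSON tool-call shape) is invented for illustration and not any real product's API:

```python
import json
import subprocess
import sys

def run_tool(name, code):
    """Dispatch a tool call; only a 'python' tool is sketched here."""
    if name == "python":
        # Run the snippet in a fresh interpreter and capture stdout.
        out = subprocess.run([sys.executable, "-c", code],
                             capture_output=True, text=True, timeout=30)
        return out.stdout.strip()
    raise ValueError(f"unknown tool: {name}")

# A real harness would also carry a system prompt like this one (invented):
SYSTEM_PROMPT = """First write a PLAN, then an IMPLEMENTATION.
To compute something, reply only with JSON: {"tool": "python", "code": "..."}"""

def handle_model_turn(text):
    """Execute the model's tool call if there is one; else return None."""
    try:
        call = json.loads(text)
    except json.JSONDecodeError:
        return None  # ordinary prose, nothing to dispatch
    return run_tool(call["tool"], call["code"])

# A model turn that asks the harness to compute 7 * 13:
print(handle_model_turn('{"tool": "python", "code": "print(7 * 13)"}'))
```

A real harness would loop: feed the tool result back as a new message and let the model continue until it stops emitting tool calls.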

brandensilva 26 minutes ago

Also curious about this, it seems like it would be important to guide these tools more specifically based on the domain of expertise.

nycdatasci 8 hours ago

Tried w/ 5.5 Pro, Extended Thinking. 17 minutes:

-----------------------------

Yes. In fact the proposed bound is true, and the constant 1 is sharp.

Let w(a) = 1/(a log a).

I will prove that, uniformly for every primitive A ⊂ [x, ∞), ∑ w(a) ≤ 1 + O(1/log x), which is stronger than the requested 1 + o(1).

https://chatgpt.com/share/69ed8e24-15e8-83ea-96ac-784801e4a6...
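For a concrete feel for the quoted claim: w(a) = 1/(a log a) is the classical Erdős weight for primitive sets, and it can be summed numerically over a familiar primitive set, the primes (no prime divides another; the full sum over all primes is known to converge to about 1.64). This is my own illustrative sketch, not part of the linked chat:

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def w(a):
    """The Erdős weight from the quoted proof sketch."""
    return 1.0 / (a * math.log(a))

# Partial sum of w over the primitive set of primes below 10^6.
partial = sum(w(p) for p in primes_up_to(10**6))
print(f"sum of w(p) over primes <= 1e6: {partial:.4f}")
```

The partial sum converges slowly (the tail beyond x shrinks only like 1/log x), which is in the same spirit as the O(1/log x) error term in the quoted bound.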

sfdlkj3jk342a 13 minutes ago

When using the web interface for ChatGPT like this, is there any way to tell which model is actually being used?

cryptoegorophy 8 hours ago

Mine took 20 min. Pro. https://chatgpt.com/share/69ed83b1-3704-8322-bcf2-322aa85d7a... But I wish I were math-smart enough to know whether it worked or not.

liweic 3 hours ago

Weird enough: Pro + extended with the same prompt just output directly, without thinking: https://chatgpt.com/s/t_69edd2d9dc048191b1476db92c0dedf8 . Does this mean the result was cached, or that it simply routes to a different model silently based on the user?

Vachyas 2 hours ago

vjerancrnjak 6 hours ago

Ask it to formalize it in Lean.

utopiah 6 hours ago

dbdr 6 hours ago

DonHopkins 5 hours ago

DeathArrow 2 hours ago

>don't search the internet.

I think this was key. Otherwise the LLM could think it can't be done.

amelius 10 minutes ago

But it was trained on the internet.

embedding-shape 2 hours ago

"Knowing" (guessing really) what is possible and not is a huge deciding factor in if you can do that thing or not, meaning if you "know" it isn't possible you'll probably never be able to do it, but if you didn't know it wasn't possible, it is possible :)

ipaddr 9 hours ago

Tried the same prompt and ended up nowhere close on the free plan.

jasonfarnon 9 hours ago

Is there a known lag that it takes the Pro plan's abilities to migrate to the free plans?

brianjking 9 hours ago

andai 8 hours ago

vessenes 8 hours ago

Someone1234 9 hours ago

Does the free plan even have access to thinking models?

jychang 9 hours ago

Matticus_Rex 9 hours ago

Was this a surprise?

CSMastermind 7 hours ago

For the uninitiated, Paul Erdős was a pretty famous but very eccentric mathematician who lived for most of the 1900s.

He had a habit of seeking out and documenting mathematical problems people were working on.

The problems range in difficulty from "easy homework for a current undergrad in math" to "you're getting a Fields Medal if you can figure this out".

There's nothing that really connects the problems other than the fact that one of the smartest people of the last 100 years didn't immediately know the answer when someone posed it to him.

One of the things people have been doing with LLMs is to see if they can come up with proofs for these problems as a sort of benchmark.

Each time there's a new model release a few more get solved.

energy123 6 hours ago

> Each time there's a new model release a few more get solved.

I'm no expert, but based on the commentary from mathematicians, this Erdős proof is a unique milestone because the problem received previous attention from multiple professional mathematicians, and the proof was surprising, elegant, and revealed some new connections.

The previous ChatGPT Erdős proofs have been qualitatively less impressive, more akin to literature search or solving easier problems that have been neglected.

Reading the prompt[1], one wonders if stoking the model to be unconventional is part of the success: "this ... may require non-trivial, creative and novel elements"

[1] https://chatgpt.com/share/69dd1c83-b164-8385-bf2e-8533e9baba...

sigmoid10 4 hours ago

>one wonders if stoking the model to be unconventional is part of the success

I've long suspected that a lot of these models' real capabilities are still locked behind certain prompts, despite the big labs spending tons of effort on making default responses to simple prompts better. Even really dumb shit like "Answer this: ..." vs "Question: ..." vs "... you'll be judged by <competitor>", which should have zero impact in an ideal world, can significantly impact benchmark results. The problem is that you can waste a ton of time finding the right prompt using these "dumb" approaches, when the model actually just needed some very specific context that was obvious to you but not to it, as in many day-to-day situations. My go-to method is still to have the model ask me questions as the very first step for any of these problems. They kind of tried that with deep research since the early o-series, but it still needs improvement.

omcnoe 6 minutes ago

burnerRhodov2 3 hours ago

muzani 30 minutes ago

hyperpape an hour ago

> “The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.

Interestingly, it was an elegant technique, but the proof still required a lot of work.

fulafel 5 hours ago

The article is about solving a previously unsolved one. This is a harder set of course.

shybear 7 hours ago

It seems like a lot of scientific advancements occurred by someone applying technique X from one field to problem Y in another. I feel like LLMs are much better at making these types of connections than humans because they 1) know about many more theories/approaches than a single human can and 2) don't need to worry about looking silly in front of their peers.

squidbeak 23 minutes ago

As I understand it, models form connections (weak or strong) between everything in their training sets, even the smallest details. They've already made other breakthroughs directly because of this ability and this line of research is likely to be incredibly fruitful.

esjeon 5 hours ago

Exactly. Much of intellectual work is, in fact, intellectual labor. It's mostly about combining various pieces of information in one place: the exact task at which LLMs far outperform humans. People have traditionally misclassified this class of work as "creative". It's not, really.

Jtarii 3 hours ago

Having a new insight that leads to the combination of two distinct ideas is definitionally creative.

You can say this problem needed a low amount of total creativity, but saying it's void of all creativity seems wrong.

versteegen 3 hours ago

I agree, except: this is creative work. Creativity can be and is being mechanised. True originality is extremely rare. Most novelty is the repurposing of one idea or concept elsewhere, in a way we all find surprising, but the choice to apply A to B could have been made for any reason, including a mechanical one: very many inventions are accidents. In-depth knowledge / conceptual understanding of something is built on abstraction, and abstractions are portable.

If you had a list of N concepts and M ways to apply them, you could try all N*M combinations and get some very interesting results. For a real example, see the theory of inventive problem solving (TRIZ) and its amusing "40 principles of invention" by Soviet inventor Genrich Altshuller. https://en.wikipedia.org/wiki/TRIZ
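The N*M enumeration above is trivially mechanical. A toy sketch (the concept and principle lists are invented placeholders; TRIZ's real catalogue has 40 principles):

```python
from itertools import product

# Invented placeholder lists for illustration only.
concepts = ["heat flow", "vibration", "surface tension"]
principles = ["invert it", "segment it", "do it in advance", "nest it"]

# Exhaustively pair every concept with every way of applying it: N*M combos.
combos = [f"{p} -> {c}" for c, p in product(concepts, principles)]
for line in combos[:3]:
    print(line)
print(len(combos))  # N*M = 3*4 = 12
```

The hard part, of course, is not generating the pairs but recognizing which of the N*M candidates is actually interesting.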

_Microft 4 hours ago

What is your idea of "creative"/"creativity" then?

moffkalast 3 hours ago

raincole 3 hours ago

This is exactly what creativity is.

dorgo 4 hours ago

Maybe all intellectual work is intellectual labor?

locknitpicker 5 hours ago

> Much of the intellectual work is, in fact, intellectual labor.

That's a great point. It's in line with research being carried out on the backs of graduate students, whose work is to hyperfocus on narrow areas.

gardenhedge 5 hours ago

Isn't that science too?

hansmayer 3 hours ago

> Much of the intellectual work is, in fact, intellectual labor.

Not surprising, because the two words you used are synonyms. Who ever classified mathematical work as creative? Kids in third grade math class?

> that LLM far outperforms human.

LLMs only outperform humans in creating loads of bullshit. 6 years in and they remain shiny toys for easily impressionable idiots.

freakynit 7 hours ago

This is what I personally consider as "reasoning" ... knowledge generalization and application across domains.

jdub 6 hours ago

Less reasoning than a dimension of brute force unfamiliar to human brains.

squidbeak 37 minutes ago

worldsavior 5 hours ago

bojo 7 hours ago

This is what I have been doing. I don't think I've made any amazing breakthroughs, but at the same time I can't help but feel I've come across some whitepaper-worthy realizations. Being able to correlate across a lot of domains that I feel I intuitively understand, but have no depth of knowledge in, has been a fun exercise in LLM experimentation.

some_furry 6 hours ago

> It seems like alot of scientific advancements occurred by someone applying technique X from one field to problem Y in another.

Yeah, you should look into the Langlands program sometime.

pfdietz an hour ago

I'm thinking once we have much of the math literature formalized it's going to be possible to mine commonalities like that. Think of it as automated refactoring, applied to math.

pelasaco 2 hours ago

Accuracy and creativity are often quite difficult to achieve at the same time. It looks like LLMs can do it, even though one can question how creative it really is...

squidbeak 35 minutes ago

Can one? It's surpassed the creativity of humans in this one problem at least.

trhway 5 hours ago

As a civilization we went the left-brained/sequential/language-based way of thinking (with computers and AI being the crowning achievement of it). Personally, I remember that around 3rd grade I switched from a whole-page-at-once reading mode to a word-by-word, line-by-line mode, and that mode has stuck with me ever since. (At some point at university, probably at the peak of my abilities, I did have for a period a deeper/wider/more non-linear perception of at least my area of math specialization, though I'm not sure whether that was mastery by the left brain or the right brain getting plugged in too.) LLMs will definitely beat us at that sequential way of thinking. That makes me wonder whether we will have to push into whatever right-brainedness we still have left, and whether AI will get there faster too. Maybe we'll abandon the left brain completely, leaving it to AI.

kbrkbr 4 hours ago

If that is your hope, you are probably in for a rude awakening. Left-brained/right-brained is a crude exaggeration, according to more recent research [1].

[1] e.g. https://www.sciencenewstoday.org/left-brain-vs-right-brain-t...

LPisGood 8 hours ago

Some Erdős problems are basically trivial using sophisticated techniques that were developed later.

I remember one of my professors, a coauthor of Erdős, boasting to us after a quiz about how proud he was that he had been able to assign an Erdős problem, one that went unsolved for a while, as just a quiz problem for his undergrads.

CSMastermind 7 hours ago

Worth mentioning, though, that people have already tried running all of them through LLMs at this point.

So this is proof of the models actually getting stronger (previous generations of LLMs were unable to solve this one).

Tarq0n 6 hours ago

Not definitively. LLMs are stochastic with respect to input, temperature and the exact prompt. It's possible that the model was already capable of it but never received the exact right conditions to produce this output.

teiferer 5 hours ago

imiric 6 hours ago

> So this is proof of the models actually getting stronger (previous generations of LLMs were unable to solve this one).

No, it's not.

While I don't dispute that new models may perform better at certain tasks, the fact that someone was able to use them to solve a novel problem is not proof of this.

LLM output is nondeterministic. Given the same prompt, the same LLM will generate different output, especially when it involves a large number of output tokens, as in this case. One of those attempts might produce a correct output, but this is not certain, and it is difficult, if not impossible, for a human who is not an expert in the domain to determine this, as shown in this thread.

jb1991 6 hours ago

Minor aside: these models do not return the same answer every time you prompt them. That makes it harder to reason about their effectiveness.

rjh29 6 hours ago

vessenes 7 hours ago

Tao mentions that the conventional approach for this problem seems to be a dead end, even though it's apparently a super 'obvious' first step. This seems very hopeful to me, in that we now have a new line of approach to evaluate/assess for related problems.

debo_ 9 hours ago

> “The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says.

This is how I feel when I read any mathematics paper.

torginus an hour ago

Tbh, a ton of academic papers are quite poorly written. I'm not a PhD researcher, but I did have to implement quite a few of them (computer graphics, signals & systems, etc.), and with most of them I basically had to reconstruct the author's thought process from scratch.

The formulas were opaque, the notation unique and unconventional, terms appeared out of nowhere, and sometimes standard techniques (like "we did least-squares optimization") were expanded in detail while other, actually complex, parts were glossed over.

ripped_britches 8 hours ago

At this point we should make a GitHub repo with a huge list of unsolved “dry lab” problems and spin up a harness to try and solve them all every new release.

abdullahkhalids 8 hours ago

There is in fact just such a repo maintained by Terence Tao and other mathematicians [1] who are actively using LLMs to try to find solutions to them.

[1] https://github.com/teorth/erdosproblems

vessenes 7 hours ago

…and this problem was in fact sourced directly from that list!

CSMastermind 7 hours ago

That's literally what the Erdős problems are. This post is about one of them being solved.

josefx 6 hours ago

Except that Erdős problems are solved all the time, and many of them are already solved. I'm quite sure that the last time I saw an article about an LLM solving an Erdős problem, someone even tracked down a solution published by Erdős himself.

johntopia 8 hours ago

that's actually a brilliant idea

gorgoiler 4 hours ago

I asked ChatGPT to draw the outline of an ellipse using Unicode braille. I asked for 30x8 and it absolutely nailed it. A beautiful piece of ascii (er, Unicode) art. But I wanted to mark the origin! So I asked for a 31x7 ellipse instead. It completely flubbed it, and for 31x9 too.

When a model gives a really good answer, does that just mean it’s seen the problem before? When it gives a crappy answer, is that not simply indicating the problem is novel?

ghusbands a few seconds ago

Do you posit that there are enough examples of 30x8 ellipses encoded in braille online for ChatGPT to learn from but not 31x7 or 31x9 ellipses? That seems unlikely.
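For what it's worth, this kind of drawing is mechanically simple to produce in ordinary code, odd sizes included. A sketch (mine, not from the thread) that rasterizes an ellipse outline into Unicode braille cells, each cell being a 2x4 dot grid starting at U+2800:

```python
import math

# Braille dot bit values by (x, y) position within a 2x4 cell:
# dots 1-3 fill the left column, 4-6 the right, 7-8 the bottom row.
DOT_BITS = {(0, 0): 0x01, (0, 1): 0x02, (0, 2): 0x04, (0, 3): 0x40,
            (1, 0): 0x08, (1, 1): 0x10, (1, 2): 0x20, (1, 3): 0x80}

def braille_ellipse(cols, rows, samples=720):
    """Draw an ellipse outline on a cols x rows grid of braille cells."""
    w, h = cols * 2, rows * 4          # dot-level resolution
    grid = [[0] * cols for _ in range(rows)]
    cx, cy = (w - 1) / 2, (h - 1) / 2  # center and semi-axes in dots
    for i in range(samples):
        t = 2 * math.pi * i / samples
        x = min(max(int(round(cx + cx * math.cos(t))), 0), w - 1)
        y = min(max(int(round(cy + cy * math.sin(t))), 0), h - 1)
        grid[y // 4][x // 2] |= DOT_BITS[(x % 2, y % 4)]
    return "\n".join("".join(chr(0x2800 + c) for c in row) for row in grid)

print(braille_ellipse(30, 8))
print(braille_ellipse(31, 7))  # odd sizes are no harder for a rasterizer
```

Which is the point of the parent question: the task is uniform for a rasterizer, so a quality cliff between 30x8 and 31x7 says something about how the model produces the art, not about the task.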

utopiah 6 hours ago

traes 4 hours ago

Eufrat 9 hours ago

Humans, and very often the machines we create, solve problems additively: we build on top of existing foundations, and we can get stuck in a way of thinking as a result, because people are loath to reinvent the wheel. So I don't think it's surprising to take a naïve LLM and find that, because of the way it's trained, it came up with something that many experts in the field didn't try.

I think LLMs can help in limited cases like this by just coming up with a different way of approaching a problem. It doesn’t have to be right, it just needs to give someone an alternative and maybe that will shake things up to get a solution.

That said, I have no idea what the practical value of this Erdős problem is. If you asked me whether this demonstrates that LLMs are not junk, my general impression is that it's like asking me in 1928 whether we should spend millions of dollars of research money on number theory. The answer is no, and get out of my office.

yrds96 5 hours ago

Given that the problem is 60 years old, isn't there a chance it was indirectly solved already, and the model just cross-referenced information to figure out the problem?

Looking at the website, this problem was never discussed by humans; the last comments were about GPT discovering it. I was expecting older comments on a 60-year-old problem.

Am I missing something?

Great discovery though. There might be other problems in the same situation that are worth a "GPT check".

traes 4 hours ago

Exceedingly unlikely. This was one of the more discussed Erdős problems, and multiple experts have attested to the technique's novelty. If you're referring to the lack of comments on the erdosproblems website, that doesn't really mean much. Per its own blog[0], the site was only started in 2023 and only really gained momentum as a place to discuss AI solving attempts; you aren't going to see serious mathematicians discussing the problems there even if there have been significant efforts to solve them.

[0]: https://www.erdosproblems.com/forum/thread/blog:1

whiplash451 4 hours ago

To some extent, does it matter?

If models are able to pull and join information that already existed in pieces but humankind never discovered by itself, doesn’t this count towards progress anyways?

fuglede_ 3 hours ago

It would be very helpful for understanding the capabilities of the models, and for building intuition about where they are best applicable.

If the reason it was able to output the proof is that the proof happened to be included in some in-house university report written in Georgian, then that would make it less useful for research than if it's entirely new.

mrabcx an hour ago

Can the other AI agents, such as Gemini, Claude, or DeepSeek, also solve this problem?

jzer0cool 7 hours ago

Could someone share a bit about the problem and the key portion of the proof, for someone who just knows the basics of proofs?

nomilk 3 hours ago

A similar announcement was made a few months ago, and Terence Tao came out a few days later and said it wasn't what it seemed at first, in that it was a rediscovery of an already known (albeit esoteric) result...

dnnddidiej 25 minutes ago

How do you get real mathematicians to check the potential slop? At some point Tao will be getting spam from claws finding problems to solve and submitting maybe-proofs/answers.

winwang 7 hours ago

Obviously nowhere near Erdős-problem complexity, but I've been using GPT (in Codex) to prove a couple of theorems (for algos), and I've found it a bit better than Claude (Code) in this respect.

ccppurcell 4 hours ago

I will get downvoted for this, but I can't help thinking that billions of dollars have gone into ChatGPT over a period of years, and an LLM can direct all its "attention" (in a metaphorical sense) at one problem. I think if you gave top mathematicians a few million (a fraction of a percent of the ChatGPT budget) to solve this problem over four years, they probably would have at least made significant progress. I don't think ChatGPT has solved thousands of similar problems (even stretching that across all disciplines). Basically my thesis is that universal basic income could have had a similar impact, while also encouraging human flourishing elsewhere.

coalstartprob an hour ago

Sam Altman already did a scaled pilot of UBI; unfortunately it had disappointing results, which led to almost no one talking about UBI these days.

iqihs 9 hours ago

referring to Tao as just a 'mathematician' gave me a good chuckle

cubefox 3 hours ago

Current headline:

"An amateur just solved a 60-year-old math problem—by asking AI"

A more honest title would be:

"An AI just solved a 60-year-old math problem—after being asked by amateur"

(Imagine the headline claimed instead that a professor just solved a math problem by asking a grad student.)

ngruhn 2 hours ago

Previous problems solved by AI had some amount of expert guidance/steering. Here, I guess the emphasis is that there was none of that.

jchook 5 hours ago

Is the conjecture not trivially sound at an intuition level? It's surprising that this proof was difficult.

booleandilemma 5 hours ago

What’s beginning to emerge is that the problem was maybe easier than expected, and it was as if there was some kind of mental block.

Hindsight is 20/20.

quijoteuniv 4 hours ago

AI is my favourite weird collaborator

resident423 9 hours ago

I wonder if the rationalizations people come up with for why this isn't real intelligence will be as creative as ChatGPT's solution.

thesmtsolver2 8 hours ago

Remember when people thought multiplying numbers, remembering a large number of facts, and being good at rote calculations was intelligence?

Some people think that multiplying numbers, remembering a large number of facts, and being good at calculations is intelligence.

Most intelligent people do not think that.

Eventually, we will arrive at the same conclusion for what LLMs are doing now.

resident423 8 hours ago

Remember when people thought solving Erdős problems required intelligence? Is there anything an LLM could ever do that would count as intelligence? Surely the trend has to break at some point; if so, what would be the thing that crosses the line into real intelligence?

NitpickLawyer 6 hours ago

_0ffh 2 hours ago

thesmtsolver2 7 hours ago

noosphr 7 hours ago

slashdave 6 hours ago

Proving a negative is a pretty high bar. You also have the problem of defining "real intelligence", which I suspect you can't.

famouswaffles 6 hours ago

Intelligence is Intelligence. It's intelligent because it does intelligent things. If someone feels the need to add a 'real' and 'fake' moniker to it so they can exclude the machine and make themselves feel better (or for whatever reason) then they are the one meant to be doing the defining, and to tell us how it can be tested for. If they can't, then there's no reason to pay attention to any of it. It's the equivalent of nonsensical rambling. At the end of the day, the semantic quibbling won't change anything.

latexr 3 hours ago

famouswaffles 7 hours ago

None of it is really from logical thought. The rationalizations don't make any sense, but they haven't for a while. It's an emotional response. Honestly, It's to be expected.

threethirtytwo 7 hours ago

It's because HN is not really full of smart people. It's full of people who think they're smart and take pride in the idea that they're pretty intelligent.

ChatGPT equalizes intelligence. And that is an attack on their identity. It also exposes their ACTUAL intelligence which is to say most of HN is not too smart.

missingdays 3 hours ago

bsza 3 hours ago

chrishare 7 hours ago

LLMs are definitely intelligent, just not general like humans, and very, very jagged (succeeding and failing in head-scratching ways).

vatsachak 7 hours ago

Well, it still gets easy problems wrong.

With real general intelligence you'd expect it to solve problems above a certain difficulty at a good clip.

pepa65 7 hours ago

That "it" is a huge variety and range of things...

walrus01 9 hours ago

For one, everything its 'intelligence' knows about solving the problem is contained within the finite context window memory buffer size for the particular model and session. Unless the memory contents of the context window are being saved to storage and reloaded later, unlike a human, it won't "remember" that it solved the problem and save its work somewhere to be easily referenced later.

in-silico 7 hours ago

For one, everything humans' "intelligence" knows about solving the problem is contained within the finite brain size for the particular person and life. Unless the memory contents of the brain are being saved to storage and reloaded later, it won't "remember" that it solved the problem and save its work somewhere to be easily referenced in a later life.

jychang 9 hours ago

There's humans that have memory issues, or full blown Anterograde amnesia.

emp17344 8 hours ago

resident423 9 hours ago

What you're describing sounds more like the model lacking awareness than lacking intelligence. Why does it need to know it solved the problem to be intelligent?

walrus01 8 hours ago

charcircuit 7 hours ago

As another commenter pointed out, these models are being trained to save and read context from files, so denying them the use of an ability they have just makes your claim tautological.

bpodgursky 8 hours ago

All modern harnesses write memory files for context later.

bsder 7 hours ago

<edit> My mistake. Responded to a bot but can't delete now. Sorry. <edit>

resident423 7 hours ago

No, but I'm interested to know what it is?

tomlockwood 8 hours ago

I think one day the VCs will have given the monkeys on typewriters enough money that these kinds of comments can be generated without human intervention.

catcowcostume 8 hours ago

You're really telling on yourself if you think an LLM is intelligence.

techblueberry 8 hours ago

"This is real intelligence" is the bear position, so I think it's real intelligence.

0xBA5ED 8 hours ago

And how about the creative rationalizations that statistical text generation is actual intelligence? As if there is any intent or motive behind the words that are generated, or the ability to learn literally anything new after it has been trained on human output?

tptacek 7 hours ago

2022 called, wants this argument back. When you're "statistically generating text" to find zero-day vulnerabilities in hard targets, building Linux kernel modules, assembly-optimizing elliptic curve signature algorithms, and solving arbitrary undergraduate math problems instantaneously --- not to mention apparently solving Erdos problems --- the "statistical text" stuff has stopped being a useful description of what's happening, something closer to "it's made of atoms and obeys the laws of thermodynamics" than it is to "a real boundary condition of what it can accomplish".

I don't doubt that there are many very real and meaningful limitations of these systems that deserve to be called out. But "text generation" isn't doing that work.

emp17344 7 hours ago

resident423 8 hours ago

Solving open math problems is strong evidence of intelligence, so there's not really any need for rationalization. I don't understand why intelligence would require intent or motive. Isn't intent just the behaviour of making a specific thing happen rather than other things?

x3ro 8 hours ago

0xBA5ED 8 hours ago

echelon 7 hours ago

Now do P vs NP.

If/when these things solve our hardest problems, that's going to lead to some very uncomfortable conversations and realizations.

ngruhn 2 hours ago

Nah, people are going to say: It just used these 500 weird tricks from all kinds of different areas. A human could totally have done it. Nobody looked. I guess P/NP wasn't that hard after all.

lucasgerads 6 hours ago

I feel like a year ago I would have said impossible. Now I am not so sure anymore. Although, if I wrote the prompt and the correct result were presented to me, I wouldn't even know. I would still need a mathematician to verify it.

dataflow 6 hours ago

Question for those who believe LLMs aren't intelligent and are merely statistical word predictors: how do you reconcile such achievements with that point of view?

(To be clear: I'm not agreeing or disagreeing. I sometimes feel the same too. I'm just curious how others reconcile these.)

downboots 4 hours ago

It doesn't matter if you use a car or go there walking. If your goal is cave exploration, the tools are irrelevant.

azan_ 2 hours ago

But in this specific case the AI actually explored the cave for you. Comparing it to a car getting you to the cave is a really bad comparison.

fc417fc802 5 hours ago

Those things aren't mutually exclusive. They are demonstrably statistical token predictors (go examine an open source implementation) and they clearly exhibit intelligence.
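The "statistical token predictor" mechanics the comment points to fit in a few lines. A toy sketch (the vocabulary and logit values are invented; in a real model a transformer computes the logits from the context):

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)                     # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

vocab = ["the", "proof", "is", "elegant"]
logits = [0.5, 2.0, 0.1, 1.2]           # pretend model output for some context
probs = softmax(logits)

# Decoding: sample the next token from the distribution.
random.seed(0)
next_token = random.choices(vocab, weights=probs)[0]
print(next_token, [round(p, 3) for p in probs])
```

Both statements in the parent comment are visible here: the mechanism is literally sampling from a distribution, and nothing in that fact bounds how sophisticated the distribution itself can be.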

userbinator 9 hours ago

The LLM took an entirely different route, using a formula that was well known in related parts of math, but which no one had thought to apply to this type of question.

Of course LLMs are still absolutely useless at actual maths computation, but I think this is one area where AI can excel --- the ability to combine many sources of knowledge and synthesise, may sometimes yield very useful results.

Also reminds me of the old saying, "a broken clock is right twice a day."

jaggederest 9 hours ago

    > Every Mathematician Has Only a Few Tricks
    > 
    > A long time ago an older and well-known number theorist made some disparaging remarks about Paul Erdös’s work.
    > You admire Erdös’s contributions to mathematics as much as I do,
    > and I felt annoyed when the older mathematician flatly and definitively stated
    > that all of Erdös’s work could be “reduced” to a few tricks which Erdös repeatedly relied on in his proofs.
    > What the number theorist did not realize is that other mathematicians, even the very best,
    > also rely on a few tricks which they use over and over.
    > Take Hilbert. The second volume of Hilbert’s collected papers contains Hilbert’s papers in invariant theory.
    > I have made a point of reading some of these papers with care.
    > It is sad to note that some of Hilbert’s beautiful results have been completely forgotten.
    > But on reading the proofs of Hilbert’s striking and deep theorems in invariant theory,
    > it was surprising to verify that Hilbert’s proofs relied on the same few tricks.
    > Even Hilbert had only a few tricks!
    > 
    > - Gian-Carlo Rota - "Ten Lessons I Wish I Had Been Taught"
https://www.ams.org/notices/199701/comm-rota.pdf

yayachiken 7 hours ago

I think, when thinking about progress as a society, people need to better internalize that we all, without exception, are in this world for the first time.

We may have collectively filled libraries full of books and created yottabytes of digital data, but in the end, to create something novel, somebody has to read and understand all of this stuff. Obviously this is not possible: read one book per day from birth to death and you still only get to consume about 80*365 = 29,200 books in the best case, out of the millions upon millions of books that have been written.

So these "few tricks" are the accumulation of a lifetime of mathematical training, the culmination of the slice of knowledge that the respective mathematician immersed themselves into. To discover new math and become famous you need both the talent and skill to apply your knowledge in novel ways, but also be lucky that you picked a field of math that has novel things with interesting applications to discover plus you picked up the right tools and right mental model that allows you to discover these things.

This goes not only for math but for pretty much all other non-trivial fields. There is a reason why history repeats.

And it's actually a compelling argument for why AI is still a big deal even though it's at its core a parrot. It's a parrot, yes, but unlike any human, it actually was able to ingest the entirety of human knowledge.

smaudet 6 hours ago

nopinsight 8 hours ago

> "a broken clock is right twice a day."

The combinatorial nature of trying things randomly means that it would take millennia or longer for light-speed monkeys typing at a keyboard, or GPUs, to solve such a problem without direction.

By now, people should stop dismissing RL-trained reasoning LLMs as stupid, aimless text predictors or combiners. They wouldn’t say the same thing about high-achieving, but non-creative, college students who can only solve hard conventional problems.

Yes, current LLMs likely still lack some major aspects of intelligence. They probably wouldn’t be able to come up with general relativity on their own with only training data up to 1905.

Neither did the vast majority of physicists back then.

amazingman 7 hours ago

> Yes, current LLMs likely still lack some major aspects of intelligence.

Indeed, and so do current humans! And just like LLMs, humans are bad at keeping this fact in view.

On a more serious note, we're going to have a hard time until we can psychologically decouple the concepts of intelligence and consciousness. Like, an existentially hard time.

y0eswddl 8 hours ago

Yeah, they're great at interpolation - they'll just never be worth much at extrapolation.

SR2Z 8 hours ago

Luckily for us, whole fortunes can be made by filling in the blanks between what we know and what we realize.

javawizard 7 hours ago

jedmeyers 7 hours ago

drdeca 6 hours ago

People keep saying this, but the only ways I know of for formalizing this statement, appear to be probably false?

I don’t know what this claim is supposed to mean.

If it isn’t supposed to have a precise technical meaning, why is it using the word “interpolate”?

heresie-dabord 5 hours ago

> "a broken clock is right twice a day"

and homo sapiens, glancing at the clock when it happens to be right, may conjure an entire zodiac to explain it.

red75prime 3 hours ago

And homo sapiens, glancing at a system that gets better and better at solving problems, tries to deny it and comes up with the broken-clock analogy.

nandomrumber 5 hours ago

A stopped clock.

A broken clock can be broken in ways which result in it never being correct.

keyle 8 hours ago

The ultimate generalist

tptacek 9 hours ago

Wait, what do you mean "LLMs are still absolutely useless at actual maths computation"? I rely on them constantly for maths (linear algebra, multivariable calc, stat) --- literally thousands of problems run through GPT5 over the last 12 months, and to my recollection zero failures. But maybe you're thinking of something more specific?

schneems 8 hours ago

They are bad at math. But they are good at writing code, and as an optimization some providers have the model secretly write code to answer the problem, run it, and give you the answer without telling you about the middle step.
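That middle step is invisible to the user, but the pattern itself is simple. A minimal, hypothetical sketch (the `run_math_via_code` helper and the hardcoded snippet are my own inventions, and real providers run the generated code in a sandboxed interpreter, not a bare `exec`):

```python
def run_math_via_code(model_generated_code: str) -> int:
    """Execute model-written code and return whatever it binds to `answer`."""
    namespace: dict = {}
    exec(model_generated_code, namespace)  # production systems sandbox this step
    return namespace["answer"]

# Instead of asking the model to multiply in its head, the provider has it
# emit a snippet and relays only the computed result back to the user:
snippet = "answer = 123456789 * 987654321"
result = run_math_via_code(snippet)  # exact arithmetic, no token-by-token guessing
```

The visible effect is exactly what the parent comment describes: the chat answer comes out arithmetically exact, while the intermediate code never appears in the transcript.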

avaer 8 hours ago

tptacek 7 hours ago

tempaccount5050 8 hours ago

jasonfarnon 8 hours ago

What tier are you using? I have run lots of problems and am very impressed, but I find stupid errors a lot more frequently than that, e.g., arithmetic errors buried in a derivation or a bad definition, say 1/15 times. I would love to get zero failures out of thousands of (what sounds like college-level math) posed problems.

tptacek 7 hours ago

cuttothechase 7 hours ago

Calc, stat, etc. from a textbook are things they would naturally be good at, but I don't think book-based computations that are in the training set, and extrapolations of them, are what's in question here.

They are also not great at playing chess, which is both computational and analytic.

tptacek 7 hours ago

ButlerianJihad 7 hours ago

I only have a rudimentary understanding of calculus, trigonometry, Google Sheets, and astronomy, but I was able to construct an accurate spreadsheet for astrometry calculations by using Grok and Gemini (both free, no subscription, just my personal account) to surface the formulas for measuring the distance between two or three points on the celestial sphere. The LLMs also assisted me in writing functions to convert DMS/HMS coordinates to decimal, and to work in radians as well.

I found and fixed bugs I wrote into the formulas and spreadsheets, and the LLMs were not my sole reference, but once the LLM mentioned the names of concepts and functions, I used Wikipedia for the general gist of things, and I appreciated the LLMs' relevant explanations that connected these disciplines together.

I did this on March 14, 2026
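The commenter's actual spreadsheet formulas aren't shown, so as a rough Python sketch of the same calculations (function names are my own; the spherical law of cosines is one standard choice for the separation formula):

```python
import math

def hms_to_degrees(h: float, m: float, s: float) -> float:
    """Right ascension in hours/minutes/seconds to decimal degrees."""
    return (h + m / 60 + s / 3600) * 15  # 24 hours of RA spans 360 degrees

def dms_to_degrees(d: float, m: float, s: float) -> float:
    """Declination in degrees/arcminutes/arcseconds to decimal degrees."""
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60 + s / 3600)

def angular_separation(ra1: float, dec1: float, ra2: float, dec2: float) -> float:
    """Great-circle separation of two points on the celestial sphere, in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    # Spherical law of cosines; adequate except for very small separations,
    # where a haversine-style formula is numerically safer.
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

# Sanity check: the celestial poles are 180 degrees apart.
sep = angular_separation(0.0, 90.0, 0.0, -90.0)
```

In a spreadsheet the same thing lands as RADIANS/DEGREES/ACOS formulas, which matches the unit conversions the comment mentions.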

Drupon 5 hours ago

>I rely on them constantly for maths (linear algebra, multivariable calc, stat)

That's one way to waste a ton of tuition money to just have a clanker do your learning for you.

Unless you're teaching it, in which case I hope your salary is cut by whatever percentage your clanker reduces your workload.

pfdietz an hour ago

karlgkk 9 hours ago

Also just the sheer value of brute force.

80 hours! 80 hours of just trying shit!

FrasiertheLion 9 hours ago

It's 80 minutes, not 80 hours.

jasonfarnon 8 hours ago

ChrisGreenHeur 8 hours ago

brokencode 8 hours ago

How long do you figure it’d take to solve the problem yourself?

Drupon 4 hours ago

>ChatGPT, prompted by an amateur, solves an Erdős problem.

There, fixed that for you.

brcmthrowaway 7 hours ago

This is not a good Saturday night for humanity

wizardforhire 9 hours ago

WTF!?

wiseowise 5 hours ago

Wake me up when it creates cancer cure or fusion reactor.

azan_ 2 hours ago

So you can move the goal post again?

wiseowise 2 hours ago

It was always the same: increasing human life span, space exploration, solving energy crisis.

homo__sapiens 9 hours ago

Big if true.

tomlockwood 9 hours ago

My big question with all these announcements is: how many other people were using AI on problems like this and failing? Given the excitement around AI at the moment, I think the answer is: a lot.

Then my second question is how much VC money did all those tokens cost.

ecshafer 8 hours ago

I've tried my hand at a few of the Erdős problems and came up short; you didn't hear about those. But if a mathematician at Harvard solved one, you would probably still hear about it a bit. Just the possibility that a Pro subscription running for 80 minutes solved an Erdős problem is astounding. Maybe we get some researchers to get a grant, burn a couple of data centers' worth of tokens for a day/week/month, and see what it comes up with?

tomlockwood 6 hours ago

The question is how many people tried to solve this Erdos problem with AI and how many total minutes have been spent on it.

gdhkgdhkvff 9 hours ago

Why do you care about either of those questions?

tomlockwood 8 hours ago

Because it could be a massive waste of time and money.

azan_ 2 hours ago

komali2 7 hours ago

Eufrat 9 hours ago

I think we should at least ask the latter: if it turned out it cost $100,000 to generate this solution, I would question the value of it. Erdős problems are usually pure math curiosities, AFAIK. They often have no meaningful practical applications.

jasonfarnon 9 hours ago

anematode 9 hours ago

inerte 9 hours ago

dinkumthinkum 7 hours ago

peteforde 8 hours ago

Can you imagine how many bags of chips we could buy if we stopped funding cancer research?

It's so expensive!

tomlockwood 8 hours ago

Can you imagine how much ChatGPT cancer research we could fund if we stopped funding cancer research?

mhb 8 hours ago

> He’s 23 years old and has no advanced mathematics training.

How is he even posing the question and having even a vague idea of what the proof means or how to understand it?

hx8 8 hours ago

> “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.” He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge.

Seems like standard 23 year old behavior. You're spending $100-$200/mo on the pro subscription, and want to get your money's worth. So you burn some tokens on this legendarily hard math problem sometimes. You've seen enough wrong answers to know that this one looks interesting and pass it on to a friend that actually knows math, who is at a place where experts can recognize it as correct.

Seems like a classic example of a non-expert human labeling ML output.

lIl-IIIl 6 hours ago

According to the article he was using the free ChatGPT tier at first, until someone gifted him a Pro subscription to encourage "vibe-mathing".

maplethorpe 7 hours ago

Couldn't he have just asked ChatGPT if it was correct? Why do we still feel the need to loop in a human?

ChrisGreenHeur 8 hours ago

my guess would be due to having an interest in the field

undefined 7 hours ago

[deleted]

ghstinda 8 hours ago

Scientific American going out of business next, lol; weak headline. Let's have a better headline for the god among men who realized the capability of the new tool, which many underestimate or puff up needlessly. Fun times we live in. One love all.

nadermx 7 hours ago

This just shows that with the right training, in this case a thesis on Erdős problems, they were able to prompt and check the output. So it still needed the know-how to even begin to figure it out. "Lichtman proved Erdős right as part of his doctoral thesis in 2022."

fwipsy 7 hours ago

Lichtman is an expert who commented for the story. Liam Price is the one who prompted ChatGPT. "He’s 23 years old and has no advanced mathematics training."

nadermx 6 hours ago

“I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI and seeing what it can come up with,” he says. “And it came up with what looked like a right solution.”

"He sent it to his occasional collaborator Kevin Barreto, a second-year undergraduate in mathematics at the University of Cambridge."

So basically two undergrads/graduates in math; "advanced" is subjective at that point.

fwipsy 6 hours ago