GPT-5.2 derives a new result in theoretical physics (openai.com)

283 points by davidbarker 3 hours ago

outlace 3 hours ago

The headline may make it seem like AI just discovered some new result in physics all on its own, but reading the post, humans started off trying to solve some problem, it got complex, and GPT simplified it and found a solution with the simpler representation. It took 12 hours for GPT Pro to do this. In my experience, LLMs can make new things when they are some linear combination of existing things, but I haven't been able to get them to do something totally out of distribution from first principles yet.

CGMthrowaway 3 hours ago

This is the critical bit (paraphrasing):

Humans have worked out the amplitudes for integer n up to n = 6 by hand, obtaining very complicated expressions, which correspond to a “Feynman diagram expansion” whose complexity grows superexponentially in n. But no one has been able to greatly reduce the complexity of these expressions, providing much simpler forms. And from these base cases, no one was then able to spot a pattern and posit a formula valid for all n. GPT did that.

Basically, they used GPT to refactor a formula and then generalize it for all n. Then verified it themselves.

I think this was all already figured out in 1986 though: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.56... see also https://en.wikipedia.org/wiki/MHV_amplitudes

godelski 2 hours ago

  > I think this was all already figured out in 1986 though
They cite that paper in the third paragraph...

  Naively, the n-gluon scattering amplitude involves order n! terms. Famously, for the special case of MHV (maximally helicity violating) tree amplitudes, Parke and Taylor [11] gave a simple and beautiful, closed-form, single-term expression for all n.
It also seems to be a main talking point.

I think this is a prime example of how easy it is to conclude something has already been solved when looking at it from a high level, an erroneous conclusion born of lacking domain expertise. Classic "Reviewer 2" move. I'm not a domain expert either, though, so if there really were no novelty over Parke and Taylor, I'm pretty sure this will get thrashed in review.

btown 2 hours ago

It bears repeating that modern LLMs are incredibly capable, and relentless, at solving problems that have a verification test suite. It seems like this problem did (at least for some finite subset of n)!

This result, by itself, does not generalize to open-ended problems, though, whether in business or in research in general. Discovering the specification to build is often the majority of the battle. LLMs aren't bad at this, per se, but they're nowhere near as reliably groundbreaking as they are on verifiable problems.

lupsasca an hour ago

That paper from the 80s (which is cited in the new one) is about "MHV amplitudes" with two negative-helicity gluons, so "double-minus amplitudes". The main significance of this new paper is to point out that "single-minus amplitudes" which had previously been thought to vanish are actually nontrivial. Moreover, GPT-5.2 Pro computed a simple formula for the single-minus amplitudes that is the analogue of the Parke-Taylor formula for the double-minus "MHV" amplitudes.
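For reference, the Parke-Taylor formula for the double-minus MHV amplitude is, schematically (color-ordered, suppressing the coupling and the momentum-conserving delta function):

    A_n(1^-, 2^-, 3^+, \dots, n^+) =
      \frac{\langle 1\,2\rangle^4}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle}

in spinor-helicity notation. Per the new paper, the single-minus amplitudes now get a comparably simple closed form.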

woeirua 2 hours ago

You should probably email the authors if you think that's true. I highly doubt they didn't do a literature search first though...

ericmay 2 hours ago

Still pretty awesome though, if you ask me.

torginus an hour ago

I'm not sure if GPT's ability goes beyond a formal math package's in this regard, or if it's just way more convenient to ask ChatGPT than to use that software.

randomtoast 3 hours ago

> but I haven't been able to get them to do something totally out of distribution from first principles yet

Can humans actually do that? Sometimes it appears as if we have made a completely new discovery. However, if you look more closely, you will find that many events and developments led up to this breakthrough, and that it is actually an improvement on something that already existed. We are always building on the shoulders of giants.

davorak 2 hours ago

> Can humans actually do that?

From my reading yes, but I think I am likely reading the statement differently than you are.

> from first principles

Doing things from first principles is a known strategy, so is guess and check, brute force search, and so on.

For an LLM to follow a first-principles strategy, I would expect it to take in a body of research, come up with some first principles or guesses at them, then iteratively construct a tower of reasoning/findings/experiments.

Constructing a solid tower is where things are currently improving for existing models, in my mind, but when I try the OpenAI or Anthropic chat interfaces, neither does a good job for long, not independently at least.

Humans also often have a hard time with this. In general it is not a skill that everyone has, and I think you can be a successful scientist without ever heavily developing first-principles problem solving.

dotancohen 3 hours ago

Relativity comes to mind.

You could nitpick a rebuttal, but no matter how many people you credit, general relativity was a completely novel idea when it was proposed. I'd argue for special relativity as well.

tjr 2 hours ago

Go enough shoulders down, and someone had to have been the first giant.

CooCooCaCha 2 hours ago

Depends on what you think is valid.

The process you’re describing is humans extending our collective distribution through a series of smaller steps. That’s what the “shoulders of giants” means. The result is we are able to do things further and further outside the initial distribution.

So it depends on if you’re comparing individual steps or just the starting/ending distributions.

godelski an hour ago

  > Can humans actually do that? 
Yes

Seriously, think about it for a second...

If new ideas really were just improvements on what already existed, science should have accelerated a lot faster. Science would have happened differently, and researchers would have optimized for ingesting as many papers as they could.

Dig deep into things and you'll find that there are often leaps of faith that need to be made. Guesses, hunches, and outright conjectures. Remember, there are paradigm shifts that happen. There are plenty of things in physics (including classical) that cannot be determined from observation alone. Or more accurately, cannot be differentiated from alternative hypotheses through observation alone.

I think the problem is that when teaching science we generally teach it very linearly, as if things follow easily. In reality there are constant iterative improvements that look more like a plateau, and then there are these leaps. They happen for a variety of reasons, but no paradigm shift would be contentious if it were obvious and clearly in distribution. It would always be met with the same response that typical iterative improvements get: "well that's obvious, is this even novel enough to be published? Everybody already knew this" (hell, look at the response to the top comment and my reply... that's classic "Reviewer #2" behavior). If everything were always in distribution, progress would be nearly frictionless.

Again, in how we teach the history of science we make an error with things like Galileo, as if The Church was the only opposition. There were many scientists who objected, and on reasonable grounds. It is a mistake we continually make in how we view the world, too: if you stick with "it works," you'll end up with a geocentric model rather than a heliocentric one. It is true that the geocentric model had limits, but so did the original heliocentric model, and that's why it took time to be adopted.

By viewing things at too high of a level we often fool ourselves. While I'm criticizing how we teach I'll also admit it is a tough thing to balance. It is difficult to get nuanced and in teaching we must be time effective and cover a lot of material. But I think it is important to teach the history of science so that people better understand how it actually evolves and how discoveries were actually made. Without that it is hard to learn how to actually do those things yourself, and this is a frequent problem faced by many who enter PhD programs (and beyond).

  > We are always building on the shoulders of giants.
And it still is. You can still lean on others while presenting things that are highly novel. These are not in disagreement.

It's probably worth reading The Unreasonable Effectiveness of Mathematics in the Natural Sciences. It might seem obvious now but read carefully. If you truly think it is obvious that you can sit in a room armed with only pen and paper and make accurate predictions about the world, you have fooled yourself. You have not questioned why this is true. You have not questioned when this actually became true. You have not questioned how this could be true.

https://www.hep.upenn.edu/~johnda/Papers/wignerUnreasonableE...

  You are greater than the sum of your parts

emil-lp 3 hours ago

"GPT did this". Authored by Guevara (Institute for Advanced Study), Lupsasca (Vanderbilt University), Skinner (University of Cambridge), and Strominger (Harvard University).

Probably not something that the average GI Joe would be able to prompt their way to...

I am skeptical until they show the chat log leading up to the conjecture and proof.

Sharlin 3 hours ago

I'm a big LLM sceptic but that's… moving the goalposts a little too far. How could an average Joe even understand the conjecture enough to write the initial prompt? Or do you mean that experts would give him the prompt to copy-paste, and hope that the proverbial monkey can come up with a Henry V? At the very least posit someone like a grad student in particle physics as the human user.

jmalicki an hour ago

"Grad Student did this". Co-authored by <Famous advisor 1>, <Famous advisor 2>, <Famous advisor 3>.

Is this so different?

famouswaffles 3 hours ago

The paper has all those prominent institutions acknowledging the contribution, so realistically, why would you be skeptical?

stouset 2 hours ago

When chess engines were first developed, they were strictly worse than the best humans (Stage 1). After many years of development, they became helpful to even the best humans, though they were still beatable (Stage 2, 1985–1997). Eventually they caught up and surpassed humans, but the combination of human and computer was better than either alone (Stage 3, ~1997–2007). Since then, humans have been more or less obsolete in the game of chess (Stage 4).

Five years ago we were at Stage 1 with LLMs with regard to knowledge work. A few years later we hit Stage 2. We are currently somewhere between Stage 2 and Stage 3 for an extremely high percentage of knowledge work. Stage 4 will come, and I would wager it's sooner rather than later.

TGower 2 hours ago

With a chess engine, you could ask any practitioner in the 90's what it would take to achieve "Stage 4" and they could estimate it quite accurately as a function of FLOPs and memory bandwidth. It's worth keeping in mind just how little we understand about LLM capability scaling. Ask 10 different AI researchers when we will get to Stage 4 for something like programming and you'll get wild guesses or an honest "we don't know".

empath75 8 minutes ago

We are already at Stage 3 for software development, and arguably Stage 4.

bluecalm 2 hours ago

The evolution was also interesting: at first the engines were amazing tactically but pretty bad strategically, so humans could guide them. The new NN-based engines were the opposite, amazing strategically but they sucked tactically (first versions of Leela Chess Zero). Today they have closed the gap and are amazing at both strategy and tactics, and there is nothing humans can contribute anymore; all that is left is to watch and learn.

slibhb 44 minutes ago

> In my experience, LLMs can make new things when they are some linear combination of existing things, but I haven't been able to get them to do something totally out of distribution from first principles yet

What's the distinction between "first principles" and "existing things"?

I'm sympathetic to the idea that LLMs can't produce path-breaking results, but I think that's true only for a strict definition of path-breaking (one that is quite rare for humans too).

hellisad an hour ago

Hmm, this feels a bit trivializing; we don't know exactly how difficult it was to get from the human starting point to the general set of equations mentioned.

I can claim some knowledge of physics from my degree: typically the easy part is coming up with complex, dirty equations that work under special conditions; the hard part is the simplification into something elegant, 'natural', and general.

Also "LLM’s can make new things when they are some linear combination of existing things"

Doesn't really mean much, what is a linear combination of things you first have to define precisely what a thing is?

getnormality 43 minutes ago

Insert perfunctory HN reply of "but do humans ever do anything totally out of distribution from first principles?"

(This is deep)

tedd4u 30 minutes ago

What does a 12-hour solution cost an OpenAI customer?

epolanski 2 hours ago

Serious question: I often hear about this "let the LLM cook for hours," but how do you do that in practice, and how does it manage its own context? How does it not get lost after so many tokens?

javier123454321 2 hours ago

From what I've seen, it's a process of compacting the session once it reaches some limit, which basically means summarizing all the previous work and feeding that in as the initial prompt for the next session.
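A minimal sketch of what that loop might look like, with "complete" standing in for whatever model API is actually used (the real scaffolding isn't public):

    # Hypothetical compact-and-continue loop: work until the model either
    # finishes or the session fills up, then summarize and re-seed.
    from typing import Callable

    def solve_with_compaction(task: str,
                              complete: Callable[[str], str],
                              max_sessions: int = 20) -> str:
        context = task
        for _ in range(max_sessions):
            result = complete(context)
            if "SOLUTION:" in result:  # the model signals it is done
                return result
            # Compact: condense all work so far into a fresh context
            summary = complete("Summarize the key findings, dead ends, "
                               "and next steps from this session:\n" + result)
            context = f"{task}\n\nProgress so far:\n{summary}"
        return context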

lovecg 2 hours ago

I'm guessing here, and would love someone with first-hand knowledge to comment. But my guess is it's some combination of trying many different approaches in parallel (each in a fresh context) and picking the one that works, plus splitting the task into sequential steps, where the output of one step is condensed and used as the input to the next (with possibly human steering between steps).
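If that's right, the outer loop could be as simple as the sketch below (purely hypothetical; "complete" and "verify" stand in for the model call and whatever checks a candidate):

    # Hypothetical best-of-n stage: run fresh-context attempts in
    # parallel and keep one that passes verification.
    from concurrent.futures import ThreadPoolExecutor

    def best_of_n(prompt, complete, verify, n=8):
        with ThreadPoolExecutor(max_workers=n) as pool:
            attempts = list(pool.map(complete, [prompt] * n))
        passing = [a for a in attempts if verify(a)]
        return passing[0] if passing else None

The winning output would then be condensed and fed to the next sequential step.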

ctoth 3 hours ago

In my experience humans can make new things when they are some linear combination of existing things, but I haven't been able to get them to do something totally out of distribution from first principles yet[0].

[0]: https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-g...

bpodgursky 3 hours ago

I don't want to be rude but like, maybe you should pre-register some statement like "LLMs will not be able to do X" in some concrete domain, because I suspect your goalposts are shifting without you noticing.

We're talking about significant contributions to theoretical physics. You can nitpick but honestly go back to your expectations 4 years ago and think — would I be pretty surprised and impressed if an AI could do this? The answer is obviously yes, I don't really care whether you have a selective memory of that time.

RandomLensman 3 hours ago

I don't know enough about theoretical physics: what makes it a significant contribution there?

outlace 3 hours ago

I never said LLMs will not be able to do X. I gave my summary of the article and my anecdotal experiences with LLMs. I have no LLM ideology. We will see what tomorrow brings.

nozzlegear 3 hours ago

> We're talking about significant contributions to theoretical physics.

Whoever wrote the prompts and guided ChatGPT made significant contributions to theoretical physics. ChatGPT is just a tool they used to get there. I'm sure AI-bloviators and pelican bike-enjoyers are all quite impressed, but the humans should be getting the research credit for using their tools correctly. Let's not pretend the calculator doing its job as a calculator at the behest of the researcher is actually a researcher as well.

bottlepalm 3 hours ago

Is every new thing not just combinations of existing things? What does out of distribution even mean? What advancement has ever made that there wasn’t a lead up of prior work to it? Is there some fundamental thing that prevents AI from recombining ideas and testing theories?

fpgaminer 2 hours ago

> Is every new thing not just combinations of existing things?

If all ideas are recombinations of old ideas, where did the first ideas come from? And wouldn't the complexity of ideas be thus limited to the combined complexity of the "seed" ideas?

I think it's more fair to say that recombining ideas is an efficient way to quickly explore a very complex, hyperdimensional space. In some cases that's enough to land on new, useful ideas, but not always. A) the new, useful idea might be _near_ the area you land on, but not exactly at. B) there are whole classes of new, useful ideas that cannot be reached by any combination of existing "idea vectors".

Therefore there is still the necessity to explore the space manually, even if you're using these idea vectors to give you starting points to explore from.

All this to say: Every new thing is a combination of existing things + sweat and tears.

The question everyone has is, are current LLMs capable of the latter component. Historically the answer is _no_, because they had no real capacity to iterate. Without iteration you cannot explore. But now that they can reliably iterate, and to some extent plan their iterations, we are starting to see their first meaningful, fledgling attempts at the "sweat and tears" part of building new ideas.

outlace 3 hours ago

For example, ever since the first GPT 4 I’ve tried to get LLM’s to build me a specific type of heart simulation that to my knowledge does not exist anywhere on the public internet (otherwise I wouldn’t try to build it myself) and even up to GPT 5.3 it still cannot do it.

But I've successfully had it build me a great poker training app, in a specific form that also didn't exist, but whose ingredients are well represented on the internet.

And I’m not trying to imply AI is inherently incapable, it’s just an empirical (and anecdotal) observation for me. Maybe tomorrow it’ll figure it out. I have no dogmatic ideology on the matter.

amelius 2 hours ago

Just wait until LLMs are fast and cheap enough to be run in a breadth first search kind of way, with "fuzzy" pruning.
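Something like a beam search, where the model proposes follow-ups and a cheap scorer does the "fuzzy" pruning. A hypothetical sketch ("expand" and "score" are stand-ins for the model call and the pruning heuristic):

    # Hypothetical beam search over LLM-generated candidate ideas.
    def beam_search(start, expand, score, width=5, depth=4):
        frontier = [start]
        for _ in range(depth):
            children = [c for s in frontier for c in expand(s)]
            if not children:
                break
            # fuzzy pruning: keep only the most promising candidates
            frontier = sorted(children, key=score, reverse=True)[:width]
        return max(frontier, key=score)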

Davidzheng 3 hours ago

"An internal scaffolded version of GPT‑5.2 then spent roughly 12 hours reasoning through the problem, coming up with the same formula and producing a formal proof of its validity."

When I use GPT 5.2 Thinking Extended, it gives me the impression that it's consistent enough/has a low enough rate of errors (or enough error-correcting ability) to autonomously do math/physics for many hours if it were allowed to [but I guess the Extended thinking time cuts off around the 30-minute mark, and Pro maybe 1-2 hours]. It's good to see some confirmation of that impression here. I hope scientists/mathematicians at large will be able to play with tools that think at this time-scale soon and see how much capability these machines really have.

mmaunder 3 hours ago

Yes, and 5.3 with the latest Codex CLI client is incredibly good across compactions. Anyone know the methodology they're using to maintain state and manage context for a 12-hour run? It could be as simple as a single dense document and its own internal compaction algorithm, I guess.

slopusila 2 hours ago

After those 30 minutes you can manually ask it to continue working on the problem.

Davidzheng 2 hours ago

It's a bit unclear to me what happens if I do that after it thinks for 30 minutes and ends with no response. Does it pick up where it left off? Does it start from scratch again? I don't know how the compaction of their prior thinking traces works.

square_usual 2 hours ago

It's interesting to me that whenever a new breakthrough in AI use comes up, there's always a flood of people who come in to handwave away why this isn't actually a win for LLMs. Like with the novel solutions GPT 5.2 has been able to find for Erdős problems: many users here (even in this very thread!) think they know more about this than Fields medalist Terence Tao, who maintains this list showing that, yes, LLMs have driven these proofs: https://github.com/teorth/erdosproblems/wiki/AI-contribution...

loire280 2 hours ago

It's easy to fall into a negative mindset when there are legions of pointy haired bosses and bandwagoning CEOs who (wrongly) point at breakthroughs like this as justification for AI mandates or layoffs.

dakolli 29 minutes ago

Yes, all of these stories and frequent model releases are just intended to psyop "decision makers" into validating their longstanding belief that labour shouldn't be as big a line item in a company's expenses, and perhaps can be removed altogether. They can finally go back to the good old days of having slaves (in the form of "agentic" bots); they yearn to own slaves again.

CEOs/decision makers would rather give their entire labour budget to tokens if they could, just to validate this belief. They are bitter that anyone from a lower class could hold any bargaining chips, and thus any influence over them. It has nothing to do with saving money; they would gladly pay the exact same engineering budget to Anthropic for tokens (just like the ruling class in times past would gladly pay for slaves) if it patched the bitterness they hold toward the working class's influence over them.

The inference companies (who are from this same class of people) know this and are exploiting this desire. They know that if they create the idea that AI progress is moving at an unstoppable velocity, decision makers will begin handing them their engineering budgets. These things don't even have to work well; they just need to be perceived as effective, or soon to be, for decision makers to start laying people off.

I suspect this is going to backfire on them in one of two ways.

1. French Revolution V2: they all get their heads cut off in 15 years, or an early retirement on a concrete floor.

2. Many decision makers will make fools of themselves, destroy their businesses, and come begging to the working class for our labor, giving the working class more bargaining chips in the process.

Either outcome is going to be painful for everyone; let's hope people wake up before we push this dumb experiment too far.

lovecg 2 hours ago

Let's have some compassion; a lot of people are freaking out about their careers now, and defense mechanisms are kicking in. It's hard for a lot of people to say "actually, yeah, this thing can do most of my work now, and the barrier to entry has dropped to the ground".

dakolli 14 minutes ago

Yeah but you know what, this is a complete psyop.

They just want people to think the barrier to entry has dropped to the ground and that the value of labour is getting squashed, so that society writes them a permission slip to completely depress wages and remove bargaining chips from the working class.

Don't fall for this. They want to destroy any labor that deals with computer I/O, not just SWE. This is the only value "agentic tooling" provides to society: slaves for the ruling class. They yearn for the opportunity to own slaves again.

It can't do most of your work, and you know that if you work on anything serious. But if the C-suite, who haven't dealt with code in two decades, think it can because everyone is running around saying it's true, they're going to make sure they replace humans with these bot slaves. They really do just want slaves; they have no intention of innovating with them. People need to work to eat, and unless LLMs are creating new types of machines that need new types of jobs, like previous forms of automation did, I don't see why they should be replacing the human input.

If these things are so good for business and are pushing software development velocity, why is everything falling apart? Why does the bulk of low-stakes software suck? Why is Windows 11 so bad? Why aren't top hedge funds and medical device manufacturers (places where software quality is high stakes) replacing all their labor? Where are the new industries? They don't do anything novel; they only serve to replace inputs previously supplied by humans, so the ruling class can finally get back to the good old feeling of having slaves that can't complain.

epolanski 2 hours ago

It's an obvious tension created by the title.

The reality is: "GPT 5.2 found a more general and scalable form of an equation, after crunching for 12 hours supervised by 4 experts in the field".

Which is equivalent to taking one of the countless niche algorithms out there and having a few experts in that algorithm let LLMs crunch tirelessly until they find a better formula, after those same experts prompted it in the right direction and gave the right feedback.

Interesting? Sure. Speaks highly of AI? Yes.

Does it suggest that AI is revolutionizing theoretical physics on its own like the title does? Nope.

jdthedisciple an hour ago

> GPT 5.2, after crunching mathematical formulas for 12 hours, supervised and prompted by 4 experts in the field

Yet, if some student or child achieved the same – under equal supervision – we would call him the next Einstein.

hgfda 2 hours ago

It is not only the peanut gallery that is skeptical:

https://www.math.columbia.edu/~woit/wordpress/?p=15362

Let's wait a couple of days to see whether there has been a similar result in the literature.

gjm11 2 hours ago

For the sake of clarity: Woit's post is not about the same alleged instance of GPT producing new work in theoretical physics, but about an earlier one from November 2025. Different author, different area of theoretical physics.

cpard 2 hours ago

AI can be an amazing productivity multiplier for people who know what they're doing.

This result reminded me of the C compiler case that Anthropic posted recently. Sure, agents wrote the code for hours but there was a human there giving them directions, scoping the problem, finding the test suites needed for the agentic loops to actually work etc etc. In general making sure the output actually works and that it's a story worth sharing with others.

The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding. It works great for creating impressions and building brand value but also does a disservice to the actual researchers, engineers and humans in general, who do the hard work of problem formulation, validation and at the end, solving the problem using another tool in their toolbox.

supern0va an hour ago

>AI can be an amazing productivity multiplier for people who know what they're doing.

>[...]

>The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.

You're sort of acting like it's all or nothing. What about the humans that used to be that "force multiplier" on a team with the person guiding the research?

If a piece of software required a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.

For a more current example: do you think all the displaced Uber/Lyft drivers aren't going to think "AI took my job" just because there's a team of people in a building somewhere handling the occasional Waymo low confidence intervention, as opposed to being 100% autonomous?

bagacrap 28 minutes ago

Well those Uber drivers are usually pretty quick to note that Uber is not their job, just a side hustle. It's too bad I won't know what they think by then since we won't be interacting any more.

jonahx 2 hours ago

> The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.

It's also a legitimate concern. We happen to be in a place where humans are needed for that "last critical 10%," or the first critical 10% of problem formulation, and so humans are still crucial to the overall system, at least for most complex tasks.

But there's no logical reason that needs to be the case. Once it's not, humans will be replaced.

cpard an hour ago

The reason there is a marketing opportunity is because, to your point, there is a legitimate concern. Marketing builds and amplifies the concern to create awareness.

When existing systems become trivial to manage with the new tooling, humans build more complex systems or add more layers on top of them.

decidu0us9034 2 hours ago

I'm not sure you can call something an optimizing C compiler if it doesn't optimize or enforce C semantics (well, it compiles C but also a lot of things that aren't syntactically valid C). It seemed to generate a lot of code (wow!) that wasn't well-integrated and didn't do what it promised to, and the human didn't have the requisite expertise to understand that. I'm not a theoretical physicist but I will hold to my skepticism here, for similar reasons.

cpard an hour ago

Sure, I won't argue that, although it did manage to deliver the marketing value they were looking for; in the end their goal was not to replace gcc but to make people talk about AI and Anthropic.

What I said in my original comment is that AI delivers when it's used by experts. In this case there was someone who was definitely not a C compiler expert; what would happen if a real expert were doing this?

elzbardico 2 hours ago

Actually, the results were far worse and way less impressive than what the media said.

cpard an hour ago

the c compiler results or the physics results this post is about?

kylehotchkiss an hour ago

> for people who know what they're doing.

I worry we're not producing as many of those as we used to

blks an hour ago

We will be producing even fewer of them. I fear for future graduates, hell, even for school children, who are now uncontrollably using ChatGPT for their homework. Next-level brainrot.

fragmede 2 hours ago

Right. If it hadn't been Nicholas Carlini driving Claude, with his decades of experience, there wouldn't be a Claude C compiler. It still required his expertise and knowledge to get there.

computator 13 minutes ago

I have a weird long-shot idea for GPT to make a new discovery in physics: Ask it to find a mathematical relationship between some combination of the fundamental physical constants[1]. If it finds (for example) a formula that relates electron mass, Bohr radius, and speed of light to a high degree of precision, that might indicate an area of physics to explore further if those constants were thought to be independent.

[1] https://en.wikipedia.org/wiki/List_of_physical_constants
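As a toy version of the search (a sketch under big assumptions: it hard-codes four SI constants, and the only nontrivial dimensionless combination it finds is e^2/(eps0 hbar c), which is the already-known 4*pi*alpha; a real attempt would need many more constants plus a tolerance check against simple numbers):

    # Brute-force small integer exponents over a few constants and print
    # the nontrivial dimensionless combinations that result.
    from itertools import product

    # value, and dimensions as (kg, m, s, A) exponents
    CONSTANTS = {
        "e":    (1.602176634e-19,  (0, 0, 1, 1)),    # elementary charge
        "hbar": (1.054571817e-34,  (1, 2, -1, 0)),   # reduced Planck constant
        "c":    (2.99792458e8,     (0, 1, -1, 0)),   # speed of light
        "eps0": (8.8541878128e-12, (-1, -3, 4, 2)),  # vacuum permittivity
    }

    names = list(CONSTANTS)
    for exps in product(range(-2, 3), repeat=len(names)):
        dims = [sum(e * CONSTANTS[n][1][i] for e, n in zip(exps, names))
                for i in range(4)]
        if any(dims) or not any(exps):
            continue  # keep only nontrivial dimensionless combinations
        value = 1.0
        for e, n in zip(exps, names):
            value *= CONSTANTS[n][0] ** e
        print(dict(zip(names, exps)), value)  # ~0.0917, i.e. 4*pi/137.036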

lich_king 10 minutes ago

There are already known mathematical relationships between almost all the fundamental physical constants. In particular, in your example, the Bohr radius is calculated from the electron mass and the speed of light in vacuum... I don't think this path is as promising as it sounds.

nilkn 3 hours ago

It would be more accurate to say that humans using GPT-5.2 derived a new result in theoretical physics (or, if you're being generous, humans and GPT-5.2 together derived a new result). The title makes it sound like GPT-5.2 produced a complete or near-complete paper on its own, but what it actually did was take human-derived datapoints, conjecture a generalization, then prove that generalization. Having scanned the paper, this seems to be a significant enough contribution to warrant a legitimate author credit, but I still think the title on its own is an exaggeration.

pear01 14 minutes ago

If a researcher uses an LLM to get a novel result, should the LLM also reap the rewards? Could a Nobel Prize ever be given to an LLM, or is that like giving a Nobel to a calculator?

Insanity 3 hours ago

They also claimed ChatGPT solved novel Erdős problems when that wasn't the case. I'll take this with a grain of salt until more external validation happens. But very cool if true!

famouswaffles 3 hours ago

Well, they (OpenAI) never made such a claim. And yes, LLMs have made unique solutions/contributions to a few Erdős problems.

smokel 3 hours ago

How was that not the case? As far as I understand it ChatGPT was instrumental to solving a problem. Even if it did not entirely solve it by itself, the combination with other tools such as Lean is still very impressive, no?

emil-lp 3 hours ago

It didn't solve it, it simply found that it had been solved in a publication and that the list of open problems wasn't updated.

vonneumannstan 3 hours ago

Wasn't that like some marketing bro? This is coming out the front door with serious physicists attached.

mym1990 2 hours ago

Many innovations are built off cross-pollination between domains, and I think we are not too far off from having a loop where multiple agents, each grounded very well in a specific domain, can find intersections and optimizations by communicating with each other, especially if they are able to run for 12+ hours. The truth is that 99% of attempts at innovation will fail, but the 1% can yield something fantastic; the more attempts we can take, the faster progress will happen.

alansaber a minute ago

I find it hard not to agree with this line of thinking (though it will be less than 1%).

elashri 3 hours ago

Of all particle physics concepts, I would be less interested in scattering amplitudes as a test case, because the scattering amplitude has one of the most concise definitions and its solution is straightforward (not easy, of course). Once you have a good grasp of QM and of scattering, it is a matter of applying your knowledge of math to solve the problem. Usually the real problem is to actually define your parameters from your model and set up the tree-level calculations. For an LLM to solve these is impressive, but the researchers defined everything and came up with the workflow.

So I would read this (with more information available) with less emphasis on the LLM discovering a new result. The title is a little bit misleading, but "derives" is actually the operative word here, so it would be technically correct for people in the field.

jtrn 18 minutes ago

This is my favorite field to have opinions about despite having no training or skill in it. Fundamental research is just something I enjoy thinking about, even though I am a psychologist. I try to pull in my experience from the clinic and from clinical research when I read theoretical physics. Don't take this text too seriously; it's just my attempt at understanding what's going on.

I am generally very skeptical about work at this level of abstraction. The result appears only after choosing Klein signature instead of physical spacetime, complexifying momenta, restricting to a "half-collinear" regime that doesn't exist in our universe, and picking a specific kinematic sub-region. Then they check the result against internal consistency conditions of the same mathematical system. This pattern should worry anyone familiar with the replication crisis.

The conditions this field operates under are a near-perfect match for what psychology has identified as maximizing systematic overconfidence: extreme researcher degrees of freedom (choose your signature, regime, helicity, and ordering until something simplifies), no external feedback loop (the specific regimes studied have no experimental counterpart), survivorship bias (ugly results don't get published, so the field builds a narrative of "hidden simplicity" from the survivors), and tiny expert communities where fewer than a dozen people worldwide can fully verify any given result.

The standard defence is that the underlying theory — Yang-Mills / QCD — is experimentally verified to extraordinary precision. True. But the leap from "this theory matches collider data" to "therefore this formula in an unphysical signature reveals deep truth about nature" has several unsupported steps that the field tends to hand-wave past.

Compare to evolution: fossils, genetics, biogeography, embryology, molecular clocks, observed speciation — independent lines of evidence from different fields, different centuries, different methods, all converging. That's what robust external validation looks like. "Our formula satisfies the soft theorem" is not that.

This isn't a claim that the math is wrong. It's a claim that the epistemic conditions are exactly the ones where humans fool themselves most reliably, and that the field's confidence in the physical significance of these results outstrips the available evidence.

I wrote up a more detailed critique in a substack: https://jonnordland.substack.com/p/the-psychologists-case-ag...

another_twist an hour ago

That's great. I think we need to start researching how to get cheaper models to do math. I have a hunch it should be possible to get leaner models to achieve these results with the right sort of reinforcement learning.

vbarrielle 2 hours ago

I'm far from being an LLM enthusiast, but this is probably the right use case for this technology: conjectures that are hard to find, but whose proofs can be checked with automated theorem provers. Isn't that what AlphaProof does, by the way?
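For a toy sense of what "machine-checkable" means here: in Lean 4 the conjecture is the statement of a theorem, and the kernel either accepts the proof term or rejects it; there is no "looks plausible" middle ground.

    -- Trivial example: the statement is a type; the kernel verifies
    -- that the proof term actually has that type.
    theorem sum_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b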

emp17344 2 hours ago

Cynically, I wonder if this was released at this time to ward off any criticism from the failure of LLMs to solve the 1stproof problems.

pruufsocial 3 hours ago

All I saw was gravitons, and I thought: we're finally here, the singularity has begun.

snarky123 3 hours ago

So wait, GPT found a formula that humans couldn't, then the humans proved it was right? That's either terrifying or the model just got lucky. Probably the latter.

JasonADrury 3 hours ago

> found a formula that humans couldn't

"Couldn't" is an immensely high bar in this context; "didn't" seems more appropriate, and it renders this whole thing slightly less exciting.

vessenes 3 hours ago

I'd say "couldn't in 20 hours" might be more defensible. Depends on how many humans though. "couldn't in 20 GPT watt-hours" would give us like 2,000 humans or so.

getnormality an hour ago

I'll believe it when someone other than OpenAI says it.

Not saying they're lying, but I'm sure it's exaggerated in their own report.

baalimago 3 hours ago

Well, anyone can derive a new result in anything. The question is most often whether the result makes any sense.

sfmike 2 hours ago

5.2 is the best model on the market.

PlatoIsADisease 2 hours ago

I'll read the article in a second, but let me guess ahead of time: Induction.

Okay, read it: yep, Induction. It already had the answer.

Don't get me wrong, I love Induction... but we aren't having any revolutions in understanding with Induction.

ares623 2 hours ago

I guess the important question is, is this enough news to sustain OpenAI long enough for their IPO?

danny_codes 2 hours ago

Well it’ll be at least a whole month before some other company announces similar capability. The moat will hold!

dyauspitr 2 hours ago

I believe Gemini holds the moat now.

gaigalas 3 hours ago

I like the use of the word "derives". However, it gets outshined by "new result" in the public's eyes.

I expect lots of derivations (new discoveries whose pieces were already in place somewhere, but no one has put them together).

In this case, the human authors did the thinking and also used the LLM, but this could happen without the original human author too (some guy posts a partial result on the internet, no one realizes it is novel knowledge, and it gets reused by AI later). It would be tremendously nice if credit were kept in such scenarios.

vonneumannstan 3 hours ago

Interesting considering the Twitter froth recently about AI being incapable in principle of discovering anything.

baq 3 hours ago

Anything but recent.

mrguyorama 2 hours ago

Don't lend much credence to a preprint. I'm not insinuating fraud, but plenty of preprints turn out to be "Actually you have a math error here", or are retracted entirely.

Theoretical physics is throwing a lot of stuff at the wall and theory crafting to find anything that might stick a little. Generation might actually be good there, even generation that is "just" recombining existing ideas.

I trust physicists and mathematicians to mostly use tools because they provide benefit, rather than because they are in vogue. I assume they were approached by OpenAI for this, but glad they found a way to benefit from it. Physicists have a lot of experience teasing useful results out of probabilistic and half broken math machines.

If LLMs end up being solely tools for exploring some symbolic math, that's a real benefit. Wish it didn't involve destroying all progress on climate change, platforming truly evil people, destroying our economy, exploiting already disadvantaged artists, destroying OSS communities, enabling yet another order of magnitude increase in spam profitability, destroying the personal computer market, stealing all our data, sucking the oxygen out of investing into real industry, and bold faced lies to all people about how these systems work.

Also, last I checked, MATLAB wasn't a trillion dollar business.

Interestingly, the OpenAI wrangler is last in the list of authors and acknowledgements. That somewhat implies the physicists don't think it deserves much credit. They could, like me, be biased against LLMs.

When Victor Ninov (fraudulently) analyzed his team's accelerator data using an existing software suite to "find" a novel superheavy element, he got first billing on the author list. Probably he contributed to the theory and some practical work, but he alone was literate in the GOOSY data tool. Author lists are often a political game as well as credit, but Victor got top billing above famous names like his bosses. The guy who actually came up with the idea of how to create the element, with an innovative recipe that a lot of people doubted, was credited eighth.

https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.83...

brcmthrowaway 3 hours ago

End times approach..

longfacehorrace 3 hours ago

Car manufacturers need to step up their hype game...

New Honda Civic discovers Pacific Ocean!

New F150 discovers Utah Salt Flats!

Sure it took humans engineering and operating our machines, but the car is the real contributor here!