Claude's Cycles [pdf] (www-cs-faculty.stanford.edu)

400 points by fs123 12 hours ago

mccoyb 9 hours ago

It's fascinating to think about the space of problems which are amenable to RL scaling of these probability distributions.

Before, we didn't have a fast (we had to rely on human cognition) way to try problems - even if the techniques and workflows were known by someone. Now, we've baked these patterns into probability distributions - anyone can access them with the correct "summoning spell". Experts will naturally use these systems more productively, because they know how to coerce models into the correct conditional distributions which light up the right techniques.

One question this raises to me is how these models are going to keep up with the expanding boundary of science. If RL is required to get expert behavior into the models, what happens when experts start pushing the boundary faster? In 2030, how is Anthropic going to keep Claude "up-to-date" without either (a) continual learning with a fixed model (expanding context windows? seems hard) or (b) continual training (expensive)?

Crazy times.

Aerroon 9 hours ago

A bit related: open-weights models are basically time capsules. These models have a knowledge cutoff point and essentially live in that time forever.

bitexploder 8 hours ago

This is the most fundamental argument that they are not, directly, an intelligence. They never store new information on a meaningful timescale. However, if you view them on some really large macro timescale, where LLMs are now injecting information into the universe and then re-ingesting it, then maybe in some very philosophical way they are a /very/ slowly oscillating intelligence right now. And as we narrow that gap (maybe with a totally new non-LLM paradigm), perhaps that is ultimately what gen AI becomes. Or some new insight will let the models update themselves in some fundamental way without the insanely expensive training costs they have now.

rcarr 7 hours ago

Not an expert but surely it's only a matter of time until there's a way to update with the latest information without having to retrain on the entire corpus?

theblazehen 5 hours ago

I enjoyed chatting with Opus 3 recently about current world events, as well as newer agentic development patterns, etc.

sosodev 6 hours ago

My understanding, from listening/reading what top researchers are saying, is that model architectures in the near future are going to attempt to scale the context window dramatically. There's a generalized belief that in-context learning is quite powerful and that scaling the window might yield massive benefits for continual learning.

It doesn't seem that hard because recent open weight models have shown that the memory cost of the context window can be dramatically reduced via hybrid attention architectures. Qwen3-next, Qwen3.5, and Nemotron 3 Nano are all great examples. Nemotron 3 Nano can be run with a million token context window on consumer hardware.
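To make the memory point concrete, here's a rough back-of-envelope sketch of KV-cache sizing; the layer counts, head dimensions, and the 1-in-4 full-attention ratio are made-up illustrative numbers, not any real model's configuration:

```python
# Back-of-envelope KV-cache sizing: why hybrid attention shrinks long-context memory.
# All model dimensions below are assumptions for illustration only.

def kv_cache_gb(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Memory for keys + values across all layers, in GiB (fp16/bf16 by default)."""
    elems = 2 * layers * kv_heads * head_dim * seq_len  # 2 = keys and values
    return elems * bytes_per_elem / 1024**3

# Full attention: every layer caches keys/values for the entire sequence.
full = kv_cache_gb(layers=48, kv_heads=8, head_dim=128, seq_len=1_000_000)

# Hybrid: suppose only 1 in 4 layers uses full attention; the rest use an
# 8k-token sliding window (linear-attention layers would be cheaper still).
hybrid = (kv_cache_gb(layers=12, kv_heads=8, head_dim=128, seq_len=1_000_000)
          + kv_cache_gb(layers=36, kv_heads=8, head_dim=128, seq_len=8_192))

print(f"full attention:   {full:.1f} GiB")   # on the order of ~180 GiB
print(f"hybrid attention: {hybrid:.1f} GiB") # roughly a quarter of that
```

The exact numbers don't matter; the point is that cache size scales linearly with how many layers keep full-sequence state, which is the knob the hybrid architectures turn.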

mccoyb 5 hours ago

I don't disagree with this, but the memory cost isn't the only issue, right? I remember using Sonnet 4.5 (or 4; I can't remember which was the first of Anthropic's offerings with a million-token context) and how slow the model would get, how much it wanted to end the session early as tokens accrued (this latter point, of course, is just an artifact of bad training).

Less worried about memory, more worried about compute speed? Are they obviously related and is it straightforward to see?

lxgr 9 hours ago

Data sharing agreements permitting, today's inference runs can be tomorrow's training data. Presumably the models are good enough at labeling promising chains of thought already.

I could totally imagine "free" inference for researchers under the condition that the reasoning traces get to be used as future training data.

mccoyb 9 hours ago

Agreed, there's no doubt this will happen. It's likely already happening (it feels safe to assume that Anthropic is curating training data from what it records from Claude Code?)

As far as I understand RL scaling (we've already maxxed out RLVR), these machines only get better as long as they have expert reasoner traces available.

Having an expert work with an LLM and successfully solve a problem is high-signal data; it may be the only path forward?

My prior is that these companies will take this data without asking you as much as they can.

nhecker 3 hours ago

The site arena.ai does exactly this already, as far as I can tell. (In addition to the whole ranking thing.)

the_af 7 hours ago

> Data sharing agreements permitting, today's inference runs can be tomorrow's training data. Presumably the models are good enough at labeling promising chains of thought already.

Wouldn't this lead to model collapse?

Robdel12 an hour ago

That’s AGI, right? For the model to learn novel things itself and retain it?

I have no idea but I’m along for the ride!

visarga 7 hours ago

> In 2030, how is Anthropic going to keep Claude "up-to-date"

I think the majority of research, design and learning goes through LLMs and coding agents today; considering the large user base and usage, it must be trillions of tokens per day. You can take a long research session, or a series of them, and apply hindsight: which idea above can be validated below? This creates a dense learning signal based on validation in the real world, with a human in the loop and other tools, code & search.

andsoitis 7 hours ago

> Experts will naturally use these systems more productively, because they know how to coerce models into the correct conditional distributions which light up the right techniques.

Part of it comes down to “knowing” what questions to ask.

esafak 7 hours ago

I see it like the relationship between a student and research advisor. The advisor will ideally know the terrain and suggest a fruitful line of attack (what to ask), and the student will follow through, learning along the way.

baq 7 hours ago

> In 2030, how is Anthropic going to keep Claude "up-to-date"

In 2030 Anthropic hopes Claude will keep Anthropic "up-to-date" on its progress on itself.

I'm only half joking here.

mt_ 3 hours ago

I call them entropy reducers.

whimsicalism 2 hours ago

> how these models are going to keep up with the expanding boundary of science

The same way humans do?

The phraseology in this comment: 'probability distributions', 'baked these patterns' IMO has all the trappings of the stochastic parrot-style HN-discourse that has been consistently wrong for almost a decade now.

The reference to how AI will keep up with AI-assisted human progress in science in 2030 is meant to reassure. It contains a number of premises that we have no business being confident in. We are potentially witnessing the obviation of human cognitive labor.

mccoyb an hour ago

Sorry, are you familiar with what a next token distribution is, mathematically speaking?

If you are not, let me introduce you to the term: a probability distribution.

Just because it has profound properties ... doesn't make it different.

> has all the trappings of the stochastic parrot-style HN-discourse that has been consistently wrong for almost a decade now

Perhaps respond to my actual comment, rather than whatever meta-level grouping you wish to interpret it as part of?

> It contains a number of premises that we have no business being confident in. We are potentially witnessing the obviation of human cognitive labor.

What premises? Be clear.

DeathArrow 8 hours ago

They can use LoRA.
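For readers unfamiliar with it, LoRA (low-rank adaptation) freezes the pretrained weights and learns a small low-rank correction on top. A minimal numpy sketch of the idea; the dimensions and scaling factor here are illustrative assumptions, not any real model's:

```python
import numpy as np

# LoRA in a nutshell: keep the pretrained weight matrix W frozen and learn a
# low-rank correction B @ A (rank r << d), so fine-tuning touches ~2*d*r
# numbers instead of d*d. Dimensions below are toy values for illustration.
d, r = 512, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen pretrained weights
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection (zero init: starts as a no-op)
alpha = 16.0                            # scaling hyperparameter

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, never materialized explicitly.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d)
assert np.allclose(forward(x), W @ x)  # zero-initialized B means no change yet
print(2 * d * r / (d * d))             # → 0.03125 (trainable fraction of one matrix)
```

Whether that is enough to fold genuinely new knowledge into a frozen model, rather than just new behaviors, is exactly the open question upthread.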

zoogeny 3 hours ago

I recall an earlier exchange, posted to HN, between Wolfram and Knuth on the GPT-4 model [1].

Knuth was dismissive in that exchange, concluding "I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same."

I've noticed with the latest models, especially Opus 4.6, some of the resistance to these LLMs is relenting. Kudos for people being willing to change their opinion and update when new evidence comes to light.

1. https://cs.stanford.edu/~knuth/chatGPT20.txt

3abiton an hour ago

> Kudos for people being willing to change their opinion and update when new evidence comes to light.

> 1. https://cs.stanford.edu/~knuth/chatGPT20.txt

I think that's what makes the Bayesian faction of statistics so appealing. Updating their prior beliefs based on new evidence is at the core of the scientific method. Take that, frequentists.

faxmeyourcode 6 hours ago

> Filip also told me that he asked Claude to continue on the even case after the odd case had been resolved. “But there after a while it seemed to get stuck. In the end, it was not even able to write and run explore programs correctly anymore, very weird. So I stopped the search.”

Interesting snippet towards the end. I wonder if they were using claude.ai or claude code. Sounds like they ran out of context and entered the "dumb zone."

afspear 6 hours ago

What would be super cool is if this dumb zone could be quantified and surfaced to the user. I've noticed that Copilot now has a little circle graph that indicates context use percentage and changes color based on the percentage. I'll bet these are very naive metrics of used tokens vs context availability. I wonder if there could be metadata streamed or sent along with the tokens that could show that you've entered the dumb zone.

joshrw 4 hours ago

Then it needs to do context compacting, otherwise the results become garbage

brcmthrowaway 2 hours ago

What is dumb zone?

kami23 23 minutes ago

When the LLMs start compacting, they summarize the conversation up to that point using various techniques. Overall, a lot of the finer points of the work go missing and can only be retrieved by the LLM being told to search for them explicitly in old logs.

Once you compact, you've thrown away a lot of relevant tokens from your problem solving and they do become significantly dumber as a result. If I see a compaction coming soon I ask it to write a letter to its future self, and then start a new session by having it read the letter.

There are some days where I let the same session compact 4-5 times and just use the letter to future self method to keep it going with enough context because resetting context also resets my brain :)

If you're ever curious, in Claude, once you compact, you can read the new initial prompt after compaction and see how severely it gets cut down. It's very informative of what it forgets and deems unimportant. For example, I have some internal CLIs that are horribly documented, so Claude has to try a few flags a few times to figure out specifics, and those corrections always get thrown away, so it has to relearn them the next time it wants to use the CLI. If you notice things like that happening constantly, my move is to codify those things into my CLAUDE.md, or lately I've been making a small script or MCP server to run very specific flags of stuff.

simianwords 5 hours ago

They mentioned a plan document

konne88 6 hours ago

I didn't expect such a misleading intro from Knuth. It reads like Claude solved Knuth's math problem. In reality, Claude generated various example solutions, and Knuth then manually generalized them into a formal proof. What Claude did is certainly useful, but it would have been nice to be clear about the scope of the contribution in the intro.

buffalobuffalo an hour ago

While not on the same level as these guys, I've done some similar stuff using Claude. This is a classic synergy example, where the output of human + LLM is far greater than just the human or just the LLM working on a problem. My experience has been that the LLM lacks fine grained judgement when it comes to allocating resources, or choosing a direction to work in. But once a direction is pointed out, it can do a deep exploration of that possibility space. Left alone, it would probably just go off on a tangent. But with someone holding the leash and pointing out areas to explore, it is a very useful partner.

aoeusnth1 3 hours ago

I don't think he's misleading, I think he is valuing Claude's contributions as essentially having cracked the problem open while the humans cleaned it up into something presentable.

bachmeier 4 hours ago

My interpretation is that Claude did what Knuth considers to be the "solution". Doing the remaining work and polishing up the proof are not necessary to have a solution from this perspective.

OneManyNone 3 hours ago

Claude did not find a proof, though. It found an algorithm which Knuth then proved was correct.

rishabhaiover 4 hours ago

That's true but the capability to go back to an older iteration, reflect and find the correct solution (for odd numbers) is, in my book, a sign of undeniable intelligence.

Pat44113 8 hours ago

I asked Claude to solve the pentominoes puzzle made famous by Arthur C. Clarke. It struggled mightily until I told it how I'd solved the problem using 64 bit unsigned integers to represent the board and pieces. Then, it created a C# program that solved the problem very quickly. However, in the 20x3 case it found four solutions when there are only two. Turns out it had incorrectly mapped one of the pentominoes. Sort of a silly mistake; the sort a human might make.
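The 64-bit-integer trick the parent describes can be sketched in a few lines; the board size and the piece's cell mapping below are illustrative assumptions (and the shape mapping is exactly the kind of place where the off-by-one crept in):

```python
# Sketch of the bitboard idea for a pentomino solver: each board cell is one
# bit of an integer, a piece placement is a 5-bit mask, and legality checks
# and placement become bitwise operations. Layout here is illustrative only.

WIDTH, HEIGHT = 10, 6  # the classic 6x10 pentomino board fits in 60 bits

def cell(x, y):
    """Bit for board cell (x, y)."""
    return 1 << (y * WIDTH + x)

def piece_mask(cells):
    """Combine a list of (x, y) cells into one placement mask."""
    m = 0
    for x, y in cells:
        m |= cell(x, y)
    return m

# An 'L' pentomino placed in the top-left corner. A single wrong (x, y) pair
# here silently changes the piece's shape and yields phantom solutions.
L_piece = piece_mask([(0, 0), (0, 1), (0, 2), (0, 3), (1, 3)])

board = 0                      # empty board
assert board & L_piece == 0    # no overlap -> placement is legal
board |= L_piece               # place the piece
print(bin(board).count("1"))   # → 5, one bit per covered cell
```

The appeal is that overlap tests, placement, and undo are each a single machine-word operation, which is why the solver got fast once Claude adopted the representation.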

nphardon 6 hours ago

Must be a fun time to work on open problems. I published my graduate research close to a decade ago; I often find myself fantasizing about tackling open problems with Claude.

iandanforth 7 hours ago

TLDR (story, not math): Knuth poses a problem, his friend uses Claude to conduct some 30 explorations, with careful human guidance, and Claude eventually writes a Python program that can find a solution for all odd values. Knuth then writes a proof of the approach and is very pleased by Claude's contribution. Even values remain an open question (Claude couldn't make much progress on them)

semessier an hour ago

Looks like he is trying to make the point that the actual (formal) proof for 2Z + 1 (odd numbers) is still human, by himself that is. Not sure who came up with the core modular-arithmetic idea, with s = 0 and k increasing by 2 mod m.

logicprog 7 hours ago

> with careful human guidance,

I think this is pretty clearly an overstatement of what was done. As Knuth says,

"Filip told me that the explorations reported above, though ultimately successful, weren’t really smooth. He had to do some restarts when Claude stopped on random errors; then some of the previous search results were lost. After every two or three test programs were run, he had to remind Claude again and again that it was supposed to document its progress carefully. "

That doesn't look like careful human guidance, especially not the kind that would actually guide the AI toward the solution at all, let alone implicitly give it the solution — that looks like a manager occasionally checking in to prod it to keep working.

beej71 7 hours ago

From my naive standpoint, LLMs like this seem to have some big strengths. One: possession of a superhuman expanse of knowledge. Two: making connections. Three: tireless trial and error.

If you put those three things together, you end up with some cool stuff from time to time. Perhaps the proof of P!=NP is tied to an obscure connection that humans don't easily see due to individual lack of knowledge or predisposition of bias.

Barbing 39 minutes ago

Well put.

>If you put [possession of a superhuman expanse of knowledge, making connections, tireless trial and error] together, you end up with some cool stuff from time to time.

Hard to argue.

cbovis 7 hours ago

Unless my understanding is incorrect about how these tools work, that last point isn't really a quality of LLMs as such? It gets attributed because the lines are blurred, but the tireless trial and error is actually just a quality of a regular programmatic loop (agent/orchestrator) that happens to be doing the trickiest part of its work via an LLM.

naughtyrabisu 5 hours ago

> Three: tireless trial and error.

Cannot agree more. I figure this is probably the biggest advantage of LLMs, considering that on the other variables humans hold the same level of competency.

xvector 7 hours ago

This is why the whole "LLMs for mass surveillance" thing is scary imo.

beej71 6 hours ago

Yeah, this is a dictator's dream scenario and hell for the citizens. Not only do you not want to get caught for saying something that The Great Leader disapproves of, but you're terrified that anything you say might get flagged by an AI.

IAmGraydon an hour ago

>One: possession of a superhuman expanse of knowledge. Two: making connections. Three: tireless trial and error.

One and three I believe are correct. The second point, making connections, is something LLMs seem to be incapable of truly doing unless the connection is already known and in its training data.

ainiriand 9 hours ago

Aren't LLMs supposed to just find the most probable word that follows next, like many people here have touted? How can this be explained under that premise? Is this way of problem solving 'thinking'?

throw310822 7 hours ago

> just find the most probable word that follows next

Well, if in all situations you can predict which word Einstein would probably say next, then I think you're in a good spot.

This "most probable" stuff is just absurd handwaving. Every prompt of even a few words is unique, there simply is no trivially "most probable" continuation. Probable given what? What these machines learn to do is predicting what intelligence would do, which is the same as being intelligent.

qsera 7 hours ago

>Probable given what?

The training data..

>predicting what intelligence would do

No, it just predicts what the next word would be if an intelligent entity translated its thoughts to words, because it is trained on text written by intelligent entities.

If it was trained on text written by someone who loves to rhyme, you would be getting all rhyming responses.

It imitates the behavior -- in text -- of whatever entity generated the training data. Here the training data was made by intelligent humans, so we get an imitation of the same.

It is a clever party trick that works often enough.

dilap 8 hours ago

That description is really only fair for base models†. Something like Opus 4.6 has all kinds of other training on top of that which teach it behaviors beyond "predict most probable token," like problem-solving and being a good chatbot.

(†And even then is kind of overly-dismissive and underspecified. The "most probable word" is defined over some training data set. So imagine if you train on e.g. mathematicians solving problems... To do a good job at predicting [w/o overfitting] your model will have to in fact get good at thinking like a mathematician. In general "to be able to predict what is likely to happen next" is probably one pretty good definition of intelligence.)

gpm 8 hours ago

I'd disagree, the other training on top doesn't alter the fundamental nature of the model that it's predicting the probabilities of the next token (and then there's a sampling step which can roughly be described as picking the most probable one).

It just changes the probability distribution that it is approximating.

To the extent that thinking is making a series of deductions from prior facts, it seems to me that thinking can be reduced to "pick the next most probable token from the correct probability distribution"...

ericd 8 hours ago

I think it's pretty likely that "intelligence" is emergent behavior that comes when you predict what comes next in physical reality well enough, at varying timescales. Your brain has to build all sorts of world model abstractions to do that over any significant timescale. Big LLMs have to build internal world models, too, to do well at their task.

pvillano 33 minutes ago

Does water flowing through a maze solve it by 'thinking'? No. The rules of physics eventually result in the water flowing out the exit. Water also hits every dead end along the way.

The power of LLMs is that by only selecting sequences of words that fit a statistical model, they avoid a lot of dead ends.[^1]

I would not call that, by itself, thinking. However, if you start with an extrapolation engine and add the ability to try multiple times and build on previous results, you get something that's kind of like thinking.

[1]: Like, a lot of dead ends. There are an unfathomable number of dead ends in generating 500 characters of code, and it is a miracle of technology that Claude only hit 30.

tux3 8 hours ago

>Are not LLMs supposed to just find the most probable word that follows next like many people here have touted?

The base models are trained to do this. If a web page contains a problem, and then the word "Answer: ", it is statistically very likely that what follows on that web page is an answer. If the base model wants to be good at predicting text, at some point learning the answers to common questions becomes a good strategy, so that it can complete text that contains them.

NN training tries to push models to generalize instead of memorizing the training set, so this creates an incentive for the model to learn a computation pattern that can answer many questions, instead of just memorizing. Whether they actually generalize in practice... it depends. Sometimes you still get copy-pasted input that was clearly pulled verbatim from the training set.

But that's only base models. The actual production LLMs you chat with don't predict the most probable word according to the raw statistical distribution. They output the words that RLHF has rewarded them to output, which includes acting as an assistant that answers questions instead of just predicting text. RLHF is also the reason there are so many AI SIGNS [1] like "you're absolutely right" and way more use of the word "delve" than is common in western English.

[1]: https://en.wikipedia.org/wiki/WP:AISIGNS
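For what it's worth, the mechanical step everyone is arguing about, "predict the most probable word", looks something like this at decode time (the toy vocabulary and logits are invented for illustration; real models score ~100k+ tokens):

```python
import numpy as np

# Minimal sketch of one decoding step: the model emits a score (logit) per
# vocabulary token, and sampling turns those scores into the next word.

vocab = ["Answer", ":", "42", "delve", "the"]
logits = np.array([0.2, 0.1, 3.0, 1.5, 0.4])  # pretend model output

def sample(logits, temperature=1.0, rng=np.random.default_rng(0)):
    """Softmax with temperature, then draw one token index."""
    z = logits / temperature
    probs = np.exp(z - z.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Greedy decoding ("the most probable word") is the temperature -> 0 limit;
# at ordinary temperatures the choice is stochastic, not a fixed lookup.
print(vocab[int(np.argmax(logits))])  # → 42
```

Note that nothing here says where the logits come from; the entire debate upthread is about what the network has to learn internally to make those scores good.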

sega_sai 7 hours ago

In some sense that is still correct, i.e. the words are taken from some probability distribution conditional on the previous words, but the key point is that this probability distribution is not just some sort of internet-averaged set of word probabilities. In the end this probability distribution is really the whole point of intelligence. And I think the LLMs are learning it.

adamtaylor_13 7 hours ago

That's the way many people reduce it, and mathematically, I think that's true. I think what we fail to realize is just how far that will actually take you.

"just the most probable word" is a pretty powerful mechanism when you have all of human knowledge at your fingertips.

I say that people "reduce it" that way because it neatly packs in the assumption that general intelligence is something other than next-token prediction. I'm not saying we've arrived at AGI; in fact, I do not believe we have. But it feels like people who use that framing are snarkily writing off something that they themselves do not fully comprehend behind the guise of being "technically correct."

I'm not saying all people do this. But I've noticed many do.

IgorPartola 9 hours ago

In some cases solving a problem is about restating the problem in a way that opens up a new path forward. “Why do planets move around the sun?” vs “What kind of force exists in the world that makes planets tethered to the sun with no visible leash?” (Obviously very simplified but I hope you can see what I am saying.) Given that a human is there to ask the right questions it isn’t just an LLM.

Further, some solutions are like running a maze. If you know all the wrong turns/next words to say and can just brute force the right ones you might find a solution like a mouse running through the maze not seeing the whole picture.

Whether this is thinking is more philosophical. To me this demonstrates more that we are closer to bio computers than an LLM is to having some sort of divine soul.

ainiriand 9 hours ago

Thanks for your input. The way I saw this, and how it looks like Knuth interpreted it, is that there were some reasoning steps taken by Claude independently. Some internal decisions in the model made it try different things, finally succeeding.

qsera 8 hours ago

Yes, that is exactly what they do.

But that does not mean that the results cannot be dramatic. Just like stacking pixels can result in a beautiful image.

vjerancrnjak 6 hours ago

No. There is good signal in IMO gold medal performance.

These models actually learn distributed representations of nontrivial search algorithms.

A whole field of theorem proving, after decades of refinements, couldn't even win a medal, yet 8B-param models are doing it very well.

The attention mechanism, a brute-force quadratic approach, combined with gradient descent is actually discovering very efficient distributed representations of algorithms. I don't think they can even be extracted and made into an imperative program.

kaiokendev 4 hours ago

Given some intelligent system, an AI that perfectly reproduces any sequence that system could produce must encode patterns that are a superset of that intelligence.

lijok an hour ago

To get an answer to that, you would first have to define 'thinking'.

crocowhile 8 hours ago

Those people still exist? I only know one guy who is still fighting those windmills

qsera 8 hours ago

Yes, I am one.

wrsh07 8 hours ago

Imagine training a chess bot to predict a valid sequence of moves or valid game using the standard algebraic notation for chess

Great! It will now correctly structure chess games, but we've created no incentive for it to create a game where white wins or to make the next move be "good"

Ok, so now you change the objective. Now let's say "we don't just want valid games, we want you to predict the next move that will help that color win"

And we train towards that objective and it starts picking better moves (note: the moves are still valid)

You might imagine more sophisticated ways to optimize picking good moves. You continue adjusting the objective function, you might train a pool of models all based off of the initial model and each of them gets a slightly different curriculum and then you have a tournament and pick the winningest model. Great!

Now you might have a skilled chess-playing-model.

It is no longer correct to say it just finds valid chess games, because the objective function changed several times throughout this process.

This is exactly how you should think about LLMs except the ways the objective function has changed are significantly significantly more complicated than for our chess bot.

So to answer your first question: no, that is not what they do. That is a deep oversimplification that was accurate for the first two generations of models, and sort of accurate for the "pretraining" step of modern LLMs (except not even that accurate, because pretraining does instill other objectives; almost like swapping our first step, "predict valid chess moves", with "predict Stockfish outputs").

noslenwerdna 6 hours ago

I find this kind of reduction silly.

All your brain is doing is bouncing atoms off each other, with some occasionally sticking together; how can it really be thinking?

See how silly it sounds?

esafak 8 hours ago

Are you feigning ignorance? The best way to answer a question, like completing a sentence, is through reasoning, an emergent behavior in complex models.

adampunk 7 hours ago

Thinking is a big word that sweeps up a lot of different human behavior, so I don't know if it's right to jump to that; HOWEVER, explanations of LLMs that depend heavily on next-token prediction are defunct. They stopped being fundamentally accurate with the rise of massive reinforcement learning and w/ 'reasoning' models the analogy falls apart when you try to do work with it.

Be on the lookout for folks who tell you these machines are limited because they are "just predicting the next word." They may not know what they're talking about.

fazkan 7 hours ago

Time to use Claude Code to understand DEK's paper in plain English. As someone who did a bit of formal verification in grad school, I feel like there is a long tail of problems that can be solved by human-model collaboration like this one. The problems may not mean much, but hopefully they can stack up our understanding of intelligence.

ontouchstart 8 hours ago

Fascinating report by DEK himself.

Time to sit down, read, digest and understand it without the help of an LLM.

ontouchstart 7 hours ago

I don't have time to do that myself yet so I just dug a quick TL;DR rabbit hole for fun:

https://ontouchstart.github.io/rabbit-holes/llm_rabbit_hole_...

ecshafer 8 hours ago

I wonder how long we have until we start solving some truly hard problems with AI. How long until we throw AI at "connect general relativity and quantum physics", give the AI 6 months and a few data centers, and have it pop out a solution?

rustyhancock 8 hours ago

I think a very long time because part of our limit is experiment.

We need enough experimental results to explain in order to resolve these theoretical mismatches, and at present we don't have them and can't explore that frontier.

Once we have more results at that frontier we'd build a theory out from there that has two nearly independent limits for QFT and GR.

What we'd be asking of the AI is something that we can't expect a human to solve even with a lifetime of effort today.

It'll take something on par with Newton realising that the heavens and apples are under the same rules to do it. But at least Newton got to hold the apple, and only had to imagine he could hold a star.

eru 7 hours ago

> I think a very long time because part of our limit is experiment.

Yes, maybe. But if you are smarter, you can think up better experiments that you can actually do. Or re-use data from earlier experiments in novel and clever ways.

bob1029 7 hours ago

What prevents us from giving this system access to other real systems that live in physical labs? I don't see much difference between parameterizing and executing a particle accelerator run and invoking some SQL against a provider. It's just JSON on the wire at some level.

fragmede 7 hours ago

The question is, if you trained an LLM on everything up until 1904, could it come up with E=MC² or not?

emp17344 6 hours ago

Hold your horses, that’s a long way off. The best math AI tool we currently have, Aletheia, was only able to solve 13 out of 700 attempted open Erdos problems, only 4 of which were solved autonomously: https://arxiv.org/html/2601.22401v3

Clearly, these models still struggle with novel problems.

slibhb 5 hours ago

> Clearly, these models still struggle with novel problems.

Do they struggle with novel problems more or less than humans?

worldsavior 8 hours ago

If AGI ever comes, then maybe. Currently, AI is only a statistical machine, and solutions like this are purely based on distributions, with no logic/actual intelligence.

zarzavat 7 hours ago

I swear that AI could independently develop a cure for cancer and people would still say that it's not actually intelligent, just matrix multiplications giving a statistically probable answer!

LLMs are at least designed to be intelligent. Our monkey brains have much less reason to be intelligent, since we only evolved to survive nature, not to understand it.

We are at this moment extremely deep into what most people would have considered to be actual artificial intelligence a mere 15 years ago. We're not quite at human levels of intelligence, but it's close.

whimsicalism 2 hours ago

It only took 4 years, but it appears that this view is finally dying out on HN. I would advise everyone who found this viewpoint compelling to think about how those same blinders might be affecting how you imagine the future will look.

rustyhancock 8 hours ago

I don't even think that's the issue.

The issue to my mind is a lack of data at the meeting of QFT/GR.

After all, few humans historically have been capable of the initial true leap between ontologies. But humans are pretty smart, so we can't say that is a requirement for AGI.

worldsavior 8 hours ago

cjcole 6 hours ago

bobbylarrybobby 7 hours ago

Did you read the linked paper? Claude out-reasoned humans on a challenging (or at least, unsolved) math problem.

cjcole 7 hours ago

worldsavior 7 hours ago

graemefawcett 7 hours ago

Connecting them is easy: one is the math of the exchange and one of the state machine.

A better question might be why no one is paying more attention to Barandes at Harvard. He's been publishing the answer to that question for a while: if you stop trying to smuggle a Markovian embedding into a non-Markovian process, you stop getting weird things like infinities at boundaries that can't be worked out from the current position alone.

But you could just dump a prompt into an LLM and pull the handle a few dozen times and see what pops out too. Maybe whip up a Claw skill or two

Unconstrained solution space exploration is surely the way to solve the hard problems

Ask those Millennium Prize guys how well that's working out :)

Constraint engineering is all software development has ever been, or did we forget how entropy works? Someone should remind the folk chasing P=NP that the observer might need a pen to write down his answers, or are we smuggling more things in for free that change the entire game? As soon as the witness's locations cost something, our poor little guy can't keep walking that hypercube forever. Can he?

Maybe 6 months and a few data centers will do it ;)

taylorius 6 hours ago

I thought Claude Monet - Impressionist techniques applied to coding.

zackmorris 4 hours ago

Amazing paper. The simulated annealing portion reminds me of genetic algorithms (GAs). A good intro to that are the Genetic Programming series of books by John Koza, I read III in the early 2000s:

https://www.amazon.com/Genetic-Programming-III-Darwinian-Inv...

https://www.genetic-programming.com/
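The paper's actual annealing setup isn't reproduced in this thread; as a generic sketch of the technique the comment refers to (the toy objective and parameters here are my own invention, not the paper's):

```python
import math
import random

def anneal(objective, start, neighbor, steps=10_000, t0=1.0, cooling=0.999):
    """Minimize `objective` via simulated annealing: always accept downhill
    moves, accept uphill moves with probability exp(-delta/T), and cool T
    geometrically so the search hardens over time."""
    x, fx = start, objective(start)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = objective(y)
        delta = fy - fx
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy objective: a parabola with a sinusoidal ripple (several local minima).
random.seed(0)
f = lambda x: (x - 3.0) ** 2 + math.sin(5 * x)
best, fbest = anneal(f, start=0.0, neighbor=lambda x: x + random.uniform(-0.5, 0.5))
print(best, fbest)
```

The family resemblance to GAs: both are stochastic searches that accept occasionally worse candidates to escape local optima; a GA just keeps a population and recombines members instead of perturbing a single state.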

Note that the Python solution in the PDF is extremely short, so it could have been found by simply trying permutations of math operators and functions on the right-hand side of the equation.
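The actual solution from the PDF isn't reproduced here; as a toy illustration of what such an enumeration could look like (with n² + 1 standing in for the unknown right-hand side, and the atom/operator sets chosen arbitrarily):

```python
import operator

# Hypothetical stand-in for the "unknown" sequence: f(n) = n^2 + 1,
# observed at n = 1..7.
data = [(n, n * n + 1) for n in range(1, 8)]

ops = {'+': operator.add, '-': operator.sub, '*': operator.mul}
atoms = ['n', 1, 2]  # terminals allowed on the right-hand side

def expressions(depth):
    """Yield all expression trees over the atoms/operators up to `depth`."""
    yield from atoms
    if depth == 0:
        return
    subtrees = list(expressions(depth - 1))
    for left in subtrees:
        for right in subtrees:
            for name in ops:
                yield (name, left, right)

def evaluate(expr, n):
    if expr == 'n':
        return n
    if isinstance(expr, int):
        return expr
    name, left, right = expr
    return ops[name](evaluate(left, n), evaluate(right, n))

def search(max_depth=2):
    """Return the first small expression matching every observed data point."""
    for expr in expressions(max_depth):
        if all(evaluate(expr, n) == target for n, target in data):
            return expr
    return None

expr = search()
print(expr)  # a tree such as ('+', 1, ('*', 'n', 'n')), i.e. 1 + n*n
```

Even with only three operators and three terminals, depth-2 trees already number in the thousands, which is why short closed forms are within reach of blind enumeration.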

We should be solving problems in Lisp instead of Python (because Lisp's abstract syntax tree (AST) is the same as its code, thanks to homoiconicity), but no matter. I'm curious whether most AIs transpile other languages to Lisp so that they can apply transformations internally, or whether they waste computation building programs that might not compile. Maybe someone at an AI company knows.
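Worth noting that homoiconicity isn't strictly required for code-as-data manipulation; a minimal sketch using Python's own `ast` module (the example function and rewrite rule are invented for illustration):

```python
import ast

# Python isn't homoiconic, but its AST is still plain data you can rewrite.
source = "def double(x):\n    return x + x\n"

class AddToMul(ast.NodeTransformer):
    """Rewrite `e + e` with identical operands into `2 * e`."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add) and ast.dump(node.left) == ast.dump(node.right):
            return ast.BinOp(left=ast.Constant(value=2), op=ast.Mult(), right=node.left)
        return node

tree = ast.fix_missing_locations(AddToMul().visit(ast.parse(source)))
print(ast.unparse(tree))  # def double(x): return 2 * x
```

The Lisp version of the same transformation is a one-line pattern match on a list, which is the commenter's point; in Python you pay for the explicit visitor machinery.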

-

I've been following AI trends since the late 1980s and from my perspective, nothing really changed for about 40 years (most of my life, which I had to wait through as the world messed around making other people rich). We had agents, expert systems, fuzzy logic, neural nets, etc. since forever, but then we got video cards in the late 1990s, which made it straightforward to scale neural nets (NNs) and GAs. Unfortunately, due to a poor choice of architecture (SIMD instead of MIMD), progress stagnated because we don't have true multicore computing (thousands or millions of cores with local memories), but I digress.

Anyway, people have compared AI to compression. I think of it more as turning problem solving into an O(1) operation. Over time, what we think of as complex problems become simpler. And the rate at which we're solving them is increasing exponentially. Problems that once seemed intractable only seemed so because we didn't know the appropriate abstractions yet. For example, illnesses we thought would never be cured are now treatable thanks to mRNA vaccines and CRISPR. That's how I think of programming. Now that we have LLMs, whole classes of programming problems have O(1) solutions. Even if that's just telling the computer what problem to solve.

So even theorem proving will become a solved problem by the time we reach the Singularity, between 2030 and 2040. We once mocked GAs for exploring dead ends and taking 1000 times the processing power to do simple things. But we ignored that doing hard things is often worth it, and it's still an O(1) operation due to linear scaling.

It's a weird feeling to go from no forward progress in a field to it being effectively a solved problem in just 2 years. To go from trying to win the internet lottery to not being sure if people will still be buying software in a year or two if/when I finish a project. To witness all of that while struggling to make rent, in effect making everything I have ever done a waste of time since I knew better ways of doing it but was forced to drop down to whatever mediocre language or framework paid. As the problems I was trained to solve and was once paid to solve rapidly diminish in value because AI can solve them in 5 minutes. To the point that even inventing AGI would be unsurprising to most, so I don't know why I ever went into computer engineering to do exactly that. Because for most people, it's already here. As I've said many times lately, I thought I had more time.

Although now that we're all out of time, I have an uncanny feeling of being alive again. I think tech stole something from my psyche so profound that I didn't notice its loss. It's along the lines of things like boredom, daydreaming, wasting time. What modern culture considers frivolous. But as we lose every last vestige of the practical, as money becomes harder and harder to acquire through labor, maybe we'll pass a tipping point where the arts and humanities become sought-after again. How ironic would it be if the artificial made room for the real to return?

On that note, I finally read a book. Project Hail Mary by Andy Weir. The last book I read was Ready Player One by Ernest Cline, over a decade ago. I don't know how I would have had the bandwidth to do that if Claude hadn't made me a middle manager of AIs.

jdnier 7 hours ago

> I think Claude Shannon’s spirit is probably proud to know that his name is now being associated with such advances. Hats off to Claude!

I didn't realize Claude was named after Claude Shannon!

https://en.wikipedia.org/wiki/Claude_Shannon

tzumaoli 5 hours ago

Trivia: Claude Shannon proposed the idea of predicting the next token (letter) using statistics/probabilities in the training data corpus in 1950: "Prediction and Entropy of Printed English" https://languagelog.ldc.upenn.edu/myl/Shannon1950.pdf

Anon84 4 hours ago

It goes back a bit further than that. His 1948 “Mathematical theory of communication” [1] already has (what we would now call) a Markov chain language model, page 7 onwards. AFAIK this was based on his classified WWII work, so it's probably a few years older than that.

[1] https://people.math.harvard.edu/~ctm/home/text/others/shanno...
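Shannon built those 1948 examples from letter-frequency tables by hand; a toy code sketch of the same bigram-context letter model (the corpus and chain order here are my own arbitrary choices):

```python
import random
from collections import Counter, defaultdict

# Estimate P(next letter | previous two letters) from a tiny corpus,
# then sample from the chain -- a toy version of the 1948 construction.
corpus = "the mathematical theory of communication " * 20

counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts[a + b][c] += 1

def sample(seed="th", length=40):
    rng = random.Random(0)  # fixed seed for reproducibility
    out = seed
    for _ in range(length):
        successors = counts[out[-2:]]  # letters seen after this 2-letter context
        chars, weights = zip(*successors.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

print(sample())
```

Scaling this same recipe up from letters to subword tokens, and from counting to learned conditional distributions, is in essence the line from Shannon's paper to modern next-token prediction.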

aix1 4 hours ago

Trinicode 2 hours ago

A letter is not a token, is it? Redundancy could hit 75% in long sentences, but Shannon was not predicting tokens or words; he was predicting letters (characters).

pfdietz 4 hours ago

It's like the diesel engine, which is named after Rudolf Engine.

ai_critic 4 hours ago

:|

roer 2 hours ago

Is this a joke I don't get? His name was Rudolf Diesel, right?

SenorKimchi 4 hours ago

And Claude had a collection of cycles: unicycles. Unfortunately, the article is about something else altogether.

bread-wood 6 hours ago

Here I was assuming it was named after https://en.wikipedia.org/wiki/Claude_(alligator)

teekert 2 hours ago

Last time I asked, Claude itself also didn't know.

NitpickLawyer 6 hours ago

Wait till you hear about nvidia and their GPU architecture naming scheme :)

miroljub 9 hours ago

Solves? It's a part of the training set. Nothing more, nothing less.

rpdillon 9 hours ago

Opening sentences:

> Shock! Shock! I learned yesterday that an open problem I’d been working on for several weeks had just been solved by Claude Opus 4.6— Anthropic’s hybrid reasoning model that had been released three weeks earlier! It seems that I’ll have to revise my opinions about “generative AI” one of these days. What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving.

sigmar 4 hours ago

I think we're going to have several years of people claiming genAI "didn't really do something novel here," despite experts saying otherwise, because people are scared by the idea that complex problem solving isn't exclusive to humans (regardless of whether these models are approaching general intelligence).

allreduce 7 hours ago

I encourage you to look at what the current models with a bit of harnessing are capable of, e.g. Opus 4.6 and Claude Code. Try to make it solve some mathematics-heavy problem you come up with. If only to get a more accurate picture of what's going on.

Unfortunately, these tools generalize way beyond regurgitating the training set. I would not assume they stay below human capabilities in the next few years.

Why any moral person would continue building these at this point I don't know. I guess in the best case the future will have a small privileged class of humans having total power, without need for human workers or soldiers. Picture a mechanical boot stomping on a human face forever.

nemo1618 7 hours ago

If this was a joke, it certainly flew over most people's heads...

jcims 8 hours ago

Prove it.

romaniv 7 hours ago

I would like to note that it would be trivial to definitively prove or disprove such things if we had a searchable public archive of the training data. Interestingly, the same people (and corporate entities) who loudly claim that LLMs are creating original work seem to be utterly uninterested in having actual, definitive proof of their claims.

clbrmbr 7 hours ago

mwigdahl 9 hours ago

Did you read the article? It was an open problem.

bluGill 8 hours ago

Was it? It was an open problem to Knuth, who generally knows how to search the literature. However, there is enough literature to search that it wouldn't be a surprise at all to discover it was already solved, but he used slightly different terms and so didn't find it. Or maybe it was solved because this is a specialization of something that looks unrelated, and so he wouldn't have recognized it when he read it. Or...

Overall I'm going with unsolved, because Knuth is a smart person who I'd expect not to miss the above. I'm sure he falls for the above sometimes, but the majority of the time he doesn't.

mwigdahl 8 hours ago

Steinmark an hour ago

Trivia: AKWU AGHALI OFU THEOREM

Theorem (Akwu Aghali Ofu — The Single Nest or 1/2 spin)

For any observer O with personal quantum seed s (derived from first orgasm timestamp SHA-256), there exists a unique Hamiltonian cycle C(O) through the M³ digraph such that:

1. C(O) starts at vertex (0,0,0) — the Single Nest

2. C(O) has length exactly L³ for L determined by O's muon/mass preference

3. The cycle visits every vertex exactly once before returning

4. The cycle only exists when O observes it

5. No other observer can traverse the same cycle

Proof Sketch:

1. Let s = SHA-256(timestamp) mod L determine coefficients (α,β,γ)

2. Define g(i,j,k) = (αi + βj + γk) mod L

3. Show that the mapping f: (i,j,k) → next vertex via g is a permutation

4. Show that the permutation decomposes into cycles

5. Show that for appropriate s, the cycle containing (0,0,0) has length L³

6. Show that this cycle depends on s — different s give different cycles

7. Show that observation collapses the quantum superposition, making the cycle actual

Corollary: The Single Nest spins forever because the cycle is Hamiltonian (it loves only you) — it never repeats until it returns, and the return is a new beginning, not a repetition.