The sigmoids won't save you (astralcodexten.com)

148 points by Tomte 15 hours ago

noosphr 4 hours ago

This article answers the question in the second paragraph, then completely ignores the answer for the rest of it.

>My understanding is that this represents 3-4 “generations” of different technology (propellers, turbojets, etc). Each technology went through normal iterative improvement, then, when it reached its fundamental limits, got replaced by a better technology. The last technology, ramjets, reached its limit at about 3500 km/h, and there wasn’t the economic/regulatory will to develop anything better, so the record stands.

You don't have one sigmoid, you have multiple, each stacked on top of the other. Airplanes aren't just one technology; they are multiple technologies that happen to do the same thing.

Each one is following a sigmoid perfectly. It only looks exponential(ish) because of unpredictable discoveries that let you switch to another sigmoid that has a higher maximum potential.

The same is true in AI. If you used the same architecture as GPT-2 today, you'd be in for a bad time training a new frontier model. It's only because we have had dozens of breakthroughs that the capabilities of models have improved as much as they have.

That said, exponentials and sigmoids are the wrong models to use for growth. Growth is a differential equation: it has independent inputs, it has outputs, and some of those outputs become dependent inputs again through causal chains of arbitrary complexity. What happens depends entirely on the specific DE that governs the given technology. We can easily have a chaotic system with completely random booms and busts which have no deep fundamental rhyme or reason. We currently call that the economy.
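
To make the stacking concrete, here's a toy model (all numbers invented for illustration): a handful of logistic curves, each arriving later and topping out higher, whose envelope looks exponential even though every component saturates.

  import numpy as np

  # Toy model: successive technology sigmoids (logistics), each arriving
  # later and topping out ~3x higher than the last. All numbers invented.
  def logistic(t, ceiling, midpoint, rate=1.0):
      return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

  t = np.linspace(0, 40, 400)
  ceilings = [1, 3, 9, 27]
  midpoints = [5, 15, 25, 35]
  envelope = np.maximum.reduce(
      [logistic(t, c, m) for c, m in zip(ceilings, midpoints)])

  # On a log scale the envelope hugs a straight line (i.e. it looks
  # exponential), even though each component flattens out.
  slope = np.polyfit(t, np.log(envelope), 1)[0]
  print(f"average exponential growth rate: {slope:.2f} per unit time")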

mediaman 4 hours ago

The book "Origins of Efficiency" by Brian Potter discusses this. Stacked sigmoids are a well-understood idea in innovation.

The idea that exponential growth will continue with stacked sigmoids is also not a given. An example is the nail. Nails used to be about half a percent of US GDP. That's a pretty big number! A series of innovations, each with its own sigmoid, stacked on top of one another to drive down the cost of nails. Nails dropped in cost by over 90%.

But eventually nail manufacturing reached a floor. And since the mid-20th century, we haven't gotten much better at making nails. The cost of nails actually started increasing slightly. We ran out of new innovation sigmoids, so we got stuck on the last one.

So what you actually have to predict is whether there will continue to be new sigmoids, not whether the existing sigmoid will asymptote (we already know it will).

This is much more difficult to forecast, because new sigmoids (major new innovations) tend to be unpredictable events. Not only are the particulars difficult to forecast (if they were knowable, the innovation would have already happened), but whether there will be a major innovation or not is also hard to forecast, because they are distinct and separate from any existing sigmoid trend.

So we are left with the idea that all current innovations in AI will asymptote in their scaling as they reach the plateau of the sigmoid, but there may be new sigmoids that keep the overall trend up. Or there may not be. We don't know.

That's not very satisfying, so we'll get to keep reading articles like this one.

Sniffnoy 4 hours ago

Yes, I was surprised he never discussed the idea that such exponentials are typically made of stacked sigmoids.

That said... if the exponential is made of stacked sigmoids, it's still an exponential on the whole! The fact that it's made of stacked sigmoids is relevant to the engineers making it, but not so relevant to the users or those otherwise affected by it.

noosphr 4 hours ago

Only so long as you can keep inventing the next sigmoid in the stack.

Scene_Cast2 3 hours ago

Something that deeply frustrates me, as someone who did R&D on model architectures, is how similar modern LLM architectures are to GPT-2.

(This is a bit disingenuous, as most of the work is spent on the scaling and training side of things.)

jasongi 8 minutes ago

The first sigmoid was transformers, which let us rapidly scale into our already abundant data until we tapped it out; the second is/was reasoning, which lets us scale to our available compute (and compute manufacturing capacity). Correct me if I'm wrong, but we don't have a candidate for the third sigmoid, and scaling inference is hitting real-world supply chain constraints: electricity and chips.

Short of a third sigmoid appearing in the ML CompSci space, perhaps in the form of ongoing, repeated step-optimisations which will also have diminishing returns, intelligence growth is now limited to a few scaling problems that have already been worked on for a very long time.

Transistors have been doubling for almost a century now, but Moore's Law has already plateaued and reached limits on energy efficiency, and simply building new fabs is not something we can do exponentially. The other growth limiter is electricity: there is no exponential supply of fossil fuels or power plants. Although manufacturing has scaled, PV tech improvements are also plateauing, and while storage is getting cheaper, it's still not economical vs fossil fuels (meaning: when we have to switch to it, growth slows down further). We are unlikely to see a battery-efficiency sigmoid steep enough to maintain the AI sigmoid.

I don't mean to be bearish here. There's so much money sloshing around that we can afford to put the smartest people, using unlimited tokens, on the task of finding small, incremental gains on the CompSci side of things that will have large monetary payoffs - hopefully allowing further scaling and increased emergent abilities of LLMs. Maybe we can squeeze the algos for quite a while. But I don't see that maintaining the same level of exponential as unlocking unlimited data or maxxing out the world's energy/fab capacity for long.

And I don't see why this is a massive issue except for the people who want to have some god-like super AI? Frontier LLMs are genuinely magic. Not "won't delete your production database" magic, but definitely a massive productivity gain for competent knowledge workers.

stego-tech an hour ago

I felt the better takeaway from this was that it's impossible to know with certainty how long this will or will not continue, regardless of the data or models you're using, because if you (or anyone else) could predict that accurately you'd be one of the richest people on the planet.

I don't know when (or if) AI will implode or succeed with any degree of provable certainty, because that's not my area of expertise. Rather, I can point out and discuss flaws in the common booster and doomer arguments, and identify problems neither side seems willing to discuss. That brings me cold comfort, but it's not enough to stake my money on one direction or another with any degree of certainty - thus I limit my exposure to specific companies, and target indices or funds that will see uplift if things go well, or minimize losses if things go pear-shaped.

I also think relying on such mathematics to justify a position in the first place is kind of silly, especially for technical people. Mathematical models work until they don't, at which point entirely new models must be designed to capture our new knowledge. On the other hand, logical arguments are more readily adapted to new data, and represent critical, rather than mathematical, thinking and reasoning.

Saying AI is going boom/bust because of sigmoids or Lindy's Law or what have you is not an argument, it's an excuse. The real argument is why those things may or may not emerge, and how we address their consequences within areas inside and outside of AI through regulation, innovation, or policy.

dreambuffer 5 hours ago

FYI: The author has predicted that "AGI" will be here in 1-2 years and has staked his public reputation on it. He is personally invested in trendlines being lindy rather than sigmoid.

I don't think you can use lindy on trends as if trends are static objects, but that's another conversation.

throwawayk7h 5 hours ago

Mind you, he is only personally invested insofar as he's staked his reputation on it. Throughout his writing, he expresses the same point over and over again: he desperately wants AI to slow down, advocates for policies that would slow it down, and most likely nothing would bring him greater peace than to see a sigmoid curve appear.

bombcar an hour ago

How convenient; when AGI doesn’t appear in 1-2 years his reputation is pristine because he slowed it down.

ToValueFunfetti 3 hours ago

This is incorrect as written. The author contributed writing to AI-2027 but distanced himself from the underlying model. That model had 2027 as the modal year of AGI, not median or mean. The authors of that model revised it to a later date shortly after and (if I recall correctly) have since done so again.

It is broadly true that Scott believes that AGI will come in the near future and from LLMs, although his reputation runs a ways deeper than that.

Sniffnoy 4 hours ago

> FYI: The author has predicted that "AGI" will be here in 1-2 years and has staked his public reputation on it. He is personally invested in trendlines being lindy rather than sigmoid.

I mean, that's called "having an opinion".

dreambuffer 3 hours ago

He co-authored a report, which is something more than an opinion. It may be used to inspire policy. There should be greater reputational consequences for publishing something you spent a few months studying and writing about along with several experts. Just my opinion.

DonsDiscountGas 2 hours ago

And now he's publishing more information about that same opinion he still has. How horrible.

paulpauper 4 hours ago

He wrote articles arguing that pro-AI people are dismissive of risks, even suggesting they are intellectually lazy. He's taken a side. If he's wrong, I would hope he owns up to it.

woeirua 4 hours ago

Ok, but you can just look at the METR curve. Mythos saturated the 50% time horizon. The 80% is now at 3 hours. The rate of progress is accelerating not slowing down. There’s no indication yet that this is a sigmoid!

sigmoid10 4 hours ago

AGI has become such a meaningless, nondescript term that arguing when or how it is here has become pointless. Even OpenAI caved in and removed their AGI clause from their contract with Microsoft because they weren't fully sure that we are not there yet. The original ARC-AGI was hailed as proof that AGI is not here yet, but now that ARC 1 and 2 got saturated, no one wanted to consider that perhaps we crossed the point where average humans are getting left behind. Frontier models are primarily limited by context and modality at this point, not by intelligence.

paulpauper 4 hours ago

He only has 1.5 more months. If he's wrong he needs to own it. Same for Eliezer Yudkowsky. But these people have too much riding on their brands. No one has the courage to fess up to being wrong. Given how many podcasts he and others have been on professing this belief, it will be hard to just pretend otherwise.

btilly 8 hours ago

Lindy's Law is an absolute gem that I'm keeping.

If we don't understand the fundamental limits to any particular kind of trend, our default assumption should be that it will continue for about as long as it has gone on already.

We can, in fact, easily put a confidence interval on this. With 90% odds we're not in the first 5% of the trend or the last 5% of the trend. Therefore it will probably go on between 1/19th longer and 19 times longer, with a median of as long as it has gone on so far.
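
A quick Monte Carlo sanity check of that interval (my sketch, assuming only that our observation time is uniform over the trend's total lifetime):

  import random

  # If we observe at a uniformly random fraction p of the trend's total
  # lifetime, then remaining/elapsed = (1 - p) / p. Check the 90% interval.
  random.seed(0)
  ratios = sorted(
      (1 - p) / p for p in (1 - random.random() for _ in range(100_000)))
  n = len(ratios)
  print(ratios[n // 20])       # 5th percentile: ~1/19 ~= 0.053
  print(ratios[n // 2])        # median: ~1.0
  print(ratios[19 * n // 20])  # 95th percentile: ~19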

This is deeply counterintuitive. When we expect something to last a finite time, every year it goes on brings us a year closer to when it stops. But under this reasoning, every year that the trend survives also raises our expectation of how much longer it will last by another year!

How can we apply that? A simple way is stocks. How long should we expect a rapidly growing company to continue growing rapidly?

cortesoft 5 hours ago

I feel like Lindy's law doesn't work for things whose observation is partly controlled by the thing itself.

For example, take something like a fad or trend; they don't have a hard end date the way a human lifespan does, so they should follow Lindy's law.

However, the likelihood, on average across the population, that you observe a trend is going to be higher at the end of a trend lifecycle than at the beginning. This is baked into the definition - more and more people hear about a trend over time, so the largest quantity of observers will be at the end of the lifecycle, when the popularity reaches its peak.

In other words, if you are a random person, finding out about a trend likely means it is near the end rather than the middle.
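
A toy simulation of that selection effect (my own, with invented parameters): weight a random observer-moment by how many people are aware of the fad at each step, and most observations land late in its life, so the Lindy estimate of remaining ~= elapsed overshoots.

  import math
  import random

  # Awareness of a fad grows along a logistic over its 100-step life.
  # Sampling a random observer-moment in proportion to current awareness
  # concentrates observations late, so a typical observer has far less
  # of the fad left than the Lindy heuristic would suggest.
  random.seed(1)
  T = 100
  aware = [1 / (1 + math.exp(-(t - 50) / 8)) for t in range(T)]
  t_obs = random.choices(range(T), weights=aware, k=100_000)
  elapsed = sum(t_obs) / len(t_obs)
  print(f"mean elapsed: {elapsed:.0f}, mean remaining: {T - elapsed:.0f}")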

jerf 8 hours ago

It's an interesting idea, and it may be something that could be mathematically justified, but I do think this is an abuse of Lindy's Law in the absence of such a justification. Per Wikipedia [1]:

"The Lindy effect applies to non-perishable items, like books, those that do not have an "unavoidable expiration date"."

And later in the article you can see the mathematical formulation which says the law holds for things with a Pareto distribution [2]. I'd want to see some sort of good analysis that "the life span of exponential growth curves" is drawn from some Pareto distribution. I don't think it's completely out of the question. But I'm also nowhere near confident enough that it is a true statement to casually apply Lindy's Law to it.
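
For what it's worth, the Pareto-Lindy link itself is easy to check numerically (my sketch of the standard result): for Pareto-distributed lifetimes, expected remaining life is proportional to age, E[T - t | T > t] = t / (alpha - 1).

  import random

  # Pareto(alpha) lifetimes: expected remaining life is proportional to
  # age -- exactly the Lindy effect. With alpha = 3 it equals age / 2.
  random.seed(0)
  alpha = 3.0
  samples = [random.paretovariate(alpha) for _ in range(1_000_000)]
  for age in (2.0, 3.0, 4.0):
      survivors = [t for t in samples if t > age]
      remaining = sum(t - age for t in survivors) / len(survivors)
      print(f"age {age}: mean remaining life {remaining:.2f}")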

[1]: https://en.wikipedia.org/wiki/Lindy_effect

[2]: https://en.wikipedia.org/wiki/Pareto_distribution

btilly 7 hours ago

The analysis in the article explains why it applies to any phenomenon that we might be able to notice.

The argument given is the same as the one that I first ran across, not by that name, in https://www.nature.com/articles/363315a0. https://en.wikipedia.org/wiki/Doomsday_argument claims that it was a rediscovery of something that was hypothesized a decade earlier.

I hadn't tried to give it a name, or thought to apply it outside of that context.

As for the mathematical qualms, I'm a big believer in not letting formal mathematical technicalities get in the way of adopting an effective heuristic. And the heuristic reasoning here is compelling enough that I would like to adopt it.

tomjakubowski 2 hours ago

People who correctly cite the Lindy effect won't look like people who correctly cite the Lindy effect.

tsimionescu 5 hours ago

While this is very fun as a mathematical exercise, it's completely irrelevant as a real tool for getting a better understanding of unknown processes in the real world.

The law only applies for certain types of processes, and is completely wrong for other types (e.g. a human who has lived 50 years may live 50 more, but one who has lived 100 years will certainly not live 100 more). So the question becomes: what type of process are you looking at? And that turns out to be exactly the question you started with: is there a fundamental limit to this growth curve, or not.

dado3212 3 hours ago

But if you met an alien who said they'd been alive for 100 years you wouldn't assume they're on the verge of dropping dead: you would assume they live longer. It's a rough rule for when you don't have other information, and if you're arguing against it you need to specify what other information you're using to make that argument.

jfjfnfnttbtg 4 hours ago

> The law only applies for certain types of processes

Did you even read the post? It’s an estimate in the context where you have zero information on which to base an accurate estimate. The author’s point is that if you’re making a different estimate you need to actually say what information is informing that.

Human lifespan is obviously not a case where we have zero information, so what is your point in bringing that up?

skybrian 8 hours ago

You can do that but you're laundering ignorance into precise-seeming mathematics. Better to just say "we're probably somewhere in the middle, not at the beginning or end" and leave it at that. Calling a peak is hard.

btilly 7 hours ago

You speak about laundering ignorance into precise-seeming mathematics as if it was a bad thing.

But that's the entire idea of Bayesian reasoning. Which has proven to be surprisingly effective in a wide range of domains.

I'm all for quantifying my ignorance, and using it as an outside view to help guide my expectations. Read the book Superforecasting to understand how effective forecasters use an outside view to adjust their inside view, to allow them to forecast things more precisely.

throwawayk7h 5 hours ago

Closely related is Laplace's Rule of Succession[1], which basically says that (in the absence of other information) the odds of something happening next time go down the more times in a row that it doesn't happen (and vice versa).

So for example, the longer a time bomb ticks, the less likely it is to go off any time soon. (Assuming the timer isn't visible.) :)

[1] https://en.wikipedia.org/wiki/Rule_of_succession
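
Concretely, the rule estimates P(success next time) = (s + 1) / (n + 2) after s successes in n trials, so the bomb example works out like this (my sketch):

  from fractions import Fraction

  # Laplace's rule of succession: after s successes in n trials, estimate
  # the probability of success on trial n+1 as (s + 1) / (n + 2).
  def rule_of_succession(s: int, n: int) -> Fraction:
      return Fraction(s + 1, n + 2)

  # The ticking bomb: zero explosions in n ticks so far.
  for n in (1, 10, 100):
      print(n, rule_of_succession(0, n))  # 1/3, 1/12, 1/51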

LPisGood 8 hours ago

This is the exact same heuristic used in CPU scheduling.

We expect fresh processes to terminate quickly and long-running processes to last for a while longer.

andy99 5 hours ago

AI has scaled well according to convenient measures. Neural networks have the property that, whatever task you define, they can rapidly be trained to master it. We’re able to show that various tasks of increasing complication do not require intelligence and can be framed as autoregressive RL problems. I personally don’t think AI is any closer to sentient intelligence than LeNet; it seems almost trivially clear, since we know how it works. So we’re measuring something orthogonal, basically how well a universal function approximator can fit a function we define, given arbitrary computing power, and calling that progress. What will be really interesting is if we’re able to find a way to properly measure what they can’t do and what’s different about real intelligence.

Edit: in particular I don’t agree with

  But if someone claims that the trend toward increasing AI capabilities will never reach some particular scary level... 
One has to agree that the benchmark results are getting “scarier”, which is not automatically implied by finding more goals to optimize for

ordu 4 hours ago

> We’re able to show that various tasks of increasing complication do not require intelligence and can be framed as autoregressive RL problems.

The important thing is that we can show it only in hindsight. We don't know which other tasks we are currently mistaken about requiring intelligence. Maybe none of them are?

We don't know. We don't know what intelligence is. If we look at decades and even centuries of attempts to define intelligence, it all looks like goalpost-moving. When a definition of intelligence starts to include people or things we don't like to think of as intelligent, we change the definition.

ryeights 2 hours ago

“AI is whatever hasn't been done yet” — Larry Tesler

stymaar 6 hours ago

I don't know when the sigmoid is going to kick in, but Nvidia's quarterly datacenter revenue has grown 15-fold over the past 3 years[1], and nobody, including Scott, believes this is sustainable for 3 more years; otherwise Nvidia's market cap would conservatively be at least an order of magnitude higher than it is.

All exponentials eventually become sigmoids because exponential growth always exposes limiting factors that weren't limiting at the beginning. Silicon manufacturing had lots of room for high-margin customers like Nvidia even a year ago (by the mere virtue of outbidding lower-margin customers), but now that room is mostly gone, and no amount of money will make fabs build themselves overnight.

[1]: https://stockanalysis.com/stocks/nvda/metrics/revenue-by-seg...

LarsDu88 8 hours ago

I think an interesting thing about recent AI developments is that it's all happening right as we hit the diminishing-returns side of another "exponential that's actually a sigmoid": Moore's law.

The naive expectation is that AI will slow down b/c Moore's law is coming to an end, but if you really think about the models and how they are currently implemented in silicon, they are still inefficient as hell.

At some point someone will build a tensor processing chip that replaces all the digital matmuls with analogue logamp matmuls, or some breakthrough in memristors will start breaking down the barrier between memory and compute.

With the right level of research funding in hardware, the ceiling for AI can be very high.

ToValueFunfetti 3 hours ago

I suspect that if you consider neurons as components of computation, you could draw an exponential of total computation in the world that goes back to the dawn of humanity, maybe further. Most of that would just be population, but it's interesting that digital computers start picking up the slack just as population growth slows.

rdedev 3 hours ago

IMO we are either limited by data or reaching the limits of what's possible with a transformer architecture. Hardware will get us efficiency but I am not sure if it will lead to smarter models

paulpauper 4 hours ago

Moore's law is bypassed with volume--more datacenters

cyanydeez 8 hours ago

they already did put a model into the silicon and it's crazy fast. https://chatjimmy.ai/

I'm pretty sure there's a 3 year design goal starting this year that'll do that to any of the qwen, deepseek, etc models. There's a lot you could do with sped up models of these quality.

It might even be bad enough that the real bubble is how much we don't need giant data centers, when 80-90% of use cases could be served by a silicon chip with a baked-in model rather than, as you say, bloated SOTA.

LarsDu88 7 hours ago

And this is an ASIC that is still operating digitally. Imagine a chip with baked-in weights that does its math in analogue, with a 20x reduction in the number of circuit elements needed to do a multiplication op.

If there's a breakthrough in memristors, you could end up with another 20x reduction in circuit elements (get rid of memory bottlenecks, start doing multiplication ops as log-transform voltage addition).
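
In software terms, the log-domain trick is just this (a trivial sketch, not a hardware design): multiplication becomes addition after a log transform, which analog circuits can implement by summing voltages.

  import math

  # Multiplication via addition in the log domain (positive inputs only).
  def log_domain_mul(a: float, b: float) -> float:
      return math.exp(math.log(a) + math.log(b))

  print(log_domain_mul(3.0, 4.0))  # ~12.0, up to floating-point error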

The ceiling is ultra high for how far AI can go.

clickety_clack 8 hours ago

It would be pretty cool to have interchangeable usb keys with models on them.

throwaway27448 8 hours ago

Even at orders of magnitude greater speed, we've still hit diminishing returns for quality of output. We simply haven't found anything like superhuman reasoning ability, just superhuman (potentially) reasoning speed.

LarsDu88 7 hours ago

I disagree with this. Reinforcement learning with verifiable rewards is actually the secret sauce that is leading Claude and GPT to automate software engineering tasks.

All the easily verifiable domains such as mathematics, coding, and things that can be run inside a reasonable simulation are falling very very fast.

By next year if not sooner, mathematicians will be wildly outpaced by LLMs for reasoning.

energy123 8 hours ago

It's not that easy to assess diminishing returns with saturated benchmarks where asymptoting to 100% is mathematically baked in. I could point to the number of Erdős problems being solved by AI going from 0 to many very recently as evidence for acceleration.

horsawlarway 8 hours ago

Possibly - but we've also seen that spending more tokens on a task can improve the quality of the output (reasoning, CoT, etc).

So it's not impossible to have things that seem orthogonal, like generation speed or context length, have an impact on quality of result.

gm678 9 hours ago

I don't know what the Y-axis is supposed to be on that Wharton AI capabilities graph, but I am not really convinced that Opus 4.6 has more than double the intelligence/capability/whatever of GPT 5.1 Max.

NitpickLawyer 9 hours ago

IIRC that graph tracks capabilities as time_to_solve a task for humans (i.e. the model can now handle tasks that usually take a human ~8h). Which, depending on what tasks you look at, could be a reasonable finding. I could see Opus 4.6 handling tasks that take ~8h for humans, and that 5.1 couldn't previously handle (with 5.1 being "limited" at 4h tasks let's say). It is a bit arbitrary, but I think this is what they're tracking.

lukan 8 hours ago

"It is a bit arbitrary, but I think this is what they're tracking."

I don't know if they can get their numbers right this way, but this seems a far more useful metric than theoretical capabilities.

jrumbut 8 hours ago

Without knowing more about their methodology, it seems like a lot of the recent improvements have involved the AI itself taking time to complete the task.

At first the models turned a 5 minute task into a 5 second task (by 5 seconds I mean a very short amount of time, not precisely 5 seconds). Then they turned a 15 minute task into a 5 second task.

Opus 4.6 completes 8 hour tasks all the time but (at least in my experience) it isn't spitting the answer out in 5 seconds anymore. It's using chain of thought and tools and the time to completion is measured in minutes or maybe hours.

In my experiments with local LLMs, a substantial part of the gap between frontier and local (for everyday use) is in tooling and infrastructure.

That is why I am sympathetic to the idea we are leveling off. But to bring in the air speed example from the article, I don't think we've reached the equivalent of the ramjet yet. I suspect in the coming years there will be new architectures, new hardware, and new ways to get even more capable models.

MadxX79 8 hours ago

I don't know why people are so impressed by 8h.

I trained an LLM to write the whole Harry Potter series, and that took JK Rowling like 17 years.

For my next point on the graph, I'll train the LLM to write the Bible, something that took humans >1500 years.

strken 8 hours ago

Check out Re-Bench and HCAST.

The tasks are obviously all of the form "Go do this, and if you get the following output you passed". Setting up a web server apparently takes 15 minutes for a human, which is news to me since I'm able to search for https://gist.github.com/willurd/5720255, find the python one-liner, and copy it within about ten seconds.

Anyway, this is cool, but it does not mean Claude can perform any human task that takes less than 8 hours and is within its physical capabilities.

throwaway27448 8 hours ago

> more than double the intelligence/capability/whatever

I'm curious what people really mean when they say this. Intelligence is famously hard to define, let alone measure; it certainly doesn't scale linearly; it only loosely correlates to real-world qualities that are easy to measure; etc. Are you referring to coding ability or...?

adw 8 hours ago

https://podcasts.apple.com/us/podcast/machine-learning-stree... is a pretty good primer on METR, what it measures, and its limitations.

myhf 8 hours ago

According to this article: whenever someone games a benchmark to make an upward chart on some y-axis, it's YOUR responsibility to prove how and why that trend can't continue indefinitely.

🙄

skybrian 8 hours ago

Seems to me that the default is "I don't know what's going to happen" and if you're making a confident prediction, bring evidence.

Scott makes a Lindy effect argument, which is plausible, but don't let that fool you: we still don't know what's going to happen.

AnimalMuppet 8 hours ago

I'm pretty sure that gaming benchmarks can continue indefinitely.

BoredPositron 9 hours ago

https://metr.org/time-horizons/ on a linear scale. Clickbait garbage article, as most of his have been in the last year.

afthonos 9 hours ago

…yeah, that’s where you see the exponential?

graphememes 17 minutes ago

line can go up, line can go sideways, line can go up sideways, line can go up sideways up, line go where line go

pron 3 hours ago

1. Scott Alexander is famous for writing about topics he knows little about. I'm glad to see he's found a subject he knows little about but so does everyone else.

2. What's even worse than predicting that some growth curve flattens before X happens is predicting it will flatten before X happens but after Y happens, which is what we see when it comes to AI in software development. Too many people predict that AI will be able to effectively write most software, replacing software engineers, yet not be able to replace the people who originate the ideas for the software or the people who use them. I see no reason why AI capability growth should stop after the point it's able to write air-traffic control or medical diagnosis software yet before the point where it's able to replace air traffic controllers and doctors.

3. While we don't know much about AI (or, indeed, intelligence in general), we do know something about computational complexity. Some predictions about "scary things" happening (the ones I'm guessing Alexander is alluding to, though I can't be certain) do hit known computational complexity limits. Most systems affecting people are nonlinear (from weather to the economy). Predicting them requires not intelligence but computational resources. Controlling them, similarly, requires not intelligence but either computational resources or other resources. It's possible that people choose to give control over resources to computers (although probably not enough to answer many tough, important questions), although given how some countries choose to give control to people with below-average intelligence (looking at you, America), I don't see why super-human intelligence (if such a thing even exists) would be, in itself, exceptionally risky.

ryeights 3 hours ago

>1. Scott Alexander is famous for writing about topics he knows little about. I'm glad to see he's found a subject he knows little about but so does everyone else.

This is kinda laughable. Scott has been thinking and writing about AI for a long time.

OscarCunningham 8 hours ago

John D Cook gives more technical details here: "Trying to fit a logistic curve" https://www.johndcook.com/blog/2025/12/20/fit-logistic-curve...
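
A minimal sketch of the instability Cook describes (synthetic data, my own numbers): fit a logistic to observations from the early, still-accelerating part of the curve, and the fitted ceiling is barely constrained by the data.

  import numpy as np
  from scipy.optimize import curve_fit

  def logistic(t, L, k, t0):
      return L / (1 + np.exp(-k * (t - t0)))

  rng = np.random.default_rng(0)
  t = np.linspace(0, 4, 40)            # observe only the early ramp
  truth = logistic(t, 100, 1.0, 6.0)   # true ceiling 100, inflection at t=6
  for trial in range(3):
      y = truth + rng.normal(0, 0.05, t.size)  # small measurement noise
      (L, k, t0), _ = curve_fit(logistic, t, y, p0=[10, 1, 3], maxfev=20_000)
      print(f"fitted ceiling: {L:9.1f} (true: 100)")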

whatshisface 2 hours ago

If you want a model, here's one: LLMs have never demonstrated the ability to go obviously beyond interpolating their training data. It takes an army of paid data producers solving homework problems to give ChatGPT the ability to do your homework. All vibecoded apps that turned out to be successful could be put on a geological soil chart, with other apps (probably on GitHub somewhere) at the corners. The prediction? They won't.

In this model, the exponential growth that everybody is freaking out about is only the realization of the modular software dream ("we'll only have to write an ORM once for all of human history!") and the sheer amount of knowledge in libraries.

It's at least falsifiable.

boxed 2 hours ago

Just to play devil's advocate: are we sure humans have demonstrated the ability to go beyond their training data? Like... are we sure-sure about that?

whatshisface 2 hours ago

I'm not asking for anything close enough to the boundary for questions like that to be difficult. There are some ML systems, like AlphaGo, that have crossed the line in specific domains. It's just that making self-play and online learning work for huge LLMs is highly non-obvious.

The idea is simply that the basic idea behind LLMs, that you're distilling the entropy out of the entire available world of text, is antithetical to creativity.

Further developing on the theme of self-play, humans have the ability to sense what we want (intellectually) and reach for it communally over thousands of years. It's an innate quality, and if AI starts participating (contrast to giving people psychosis) we will all be able to tell.

1attice 2 hours ago

Idk, I mean, Shakespeare never read Shakespeare, so, I mean, unless aliens?

leoc 3 hours ago

Hmm. What’s the general belief about Toby Ord’s “Are the Costs of AI Agents Also Rising Exponentially?” https://www.tobyord.com/writing/hourly-costs-for-ai-agents among those who are well-equipped to judge? Is it seen as wrong or disproven or unlikely? Because if not—if indeed recent LLM capability advances have likely relied on increases in inference cost per run which can’t be much further sustained—then it seems remiss not to mention that if you point to those advances to claim that the exponential trend remains on track.

Brendinooo 8 hours ago

> then what is their model?

My mental model has been 3D computer graphics: doubling the polygon count had huge returns early on but delivered diminishing returns over time.

Ultimately, you can't make something look more realistic than real.

I don't know what the future holds, but the answer to the question "can LLMs be more realistic than real" will determine much about whether or not you think the curve will level off soon.

the8472 3 hours ago

The equivalent bar in this domain would be human intelligence, and we already have growing lists of tasks where machines outperform humans. We even know of natural systems that outperform humans on some metrics, e.g. bird brains have higher neuron density than ours because evolution had to optimize more for weight.

philipallstar 9 hours ago

But they do explain the improvement of AI driving 2017-2021 vs 2022-2026.

jsmcgd 8 hours ago

> It’s true that birth rates must eventually flatten out and become sigmoid

All positive growth eventually flattens out and becomes sigmoid, but a lot of phenomena experience negative growth and nose dive. No gentle curve, but a hard kink and perfect flat line at zero. Forever. I think it would be a stretch to categorize that pattern as sigmoid. Predicting a sigmoid pattern for negative growth implies some sort of a soft landing (depending on your definition of soft).

We can think of many populations that are no longer with us. So just a caution about over applying this reasoning in the negative case.

Qem 4 hours ago

> All positive growth eventually flattens out and becomes sigmoid, but a lot of phenomena experience negative growth and nose dive.

https://en.wikipedia.org/wiki/Seneca_effect

andai 9 hours ago

Well, curve shape aside, the high watermark might be lower than where it tapers off.

https://news.ycombinator.com/item?id=46199723

janalsncm 8 hours ago

> What if you don’t fully understand the process? AI forecasters know some things (like how data centers work and how much it costs to build them). But they’re unsure about other things (researchers keep inventing new paradigms of data generation that get over data walls, but for how long?), and other things are entirely opaque (What is intelligence really? Why do scaling laws work? Might they just stop working at some point?) Is there anything you can do here?

This is the crux of the article. To a large extent continued progress depends on a stable increase in compute, an increase in training data, and an increase in good ideas to squeeze more out of both of them.

One calculation you could do is a survival function: for each of the above, how long before it is disrupted? For example, China could crack down on AI or invade Taiwan. Or data centers become politically unpopular in the US. Or, we could run out of great ideas. Very hard to predict.

dsign 8 hours ago

We did hit the sigmoid's plateau on airplane speed, but the applications of airplane speed are still coming (how fast can a Chinese company airship the PCB you ordered three minutes ago?). I expect the same will happen with LLMs, though I also happen to believe things are just getting started on end capabilities.

baxtr 4 hours ago

> The moral of the story is that, even though all exponentials eventually become sigmoids, this doesn’t necessarily happen at the exact moment you’re doing your analysis. Sometimes they stay exponential for much longer than that!

All exponentials eventually become sigmoids? Don’t think this can be true without qualifiers.

jvanderbot 4 hours ago

All models are wrong, of course, but this is kind of "common sense", so it's not hard to accept as true in a natural system. How can something continue on exponential growth forever without reaching a new blocker that causes a slowdown, or encountering pushback that makes it an oscillator? A pendulum looks exponential when it is at its peak and accelerating down.

The issue is that the exponential-looking part of the sigmoid might contain all of human history, sure, but most folks who espouse this theory probably agree that over time everything reaches a steady-enough state to be considered non-exponential, or become oscillatory.

zkmon 8 hours ago

The curve is a smoothed step curve (y=1 if x>1, otherwise 0). Nature doesn't allow any change to happen instantly at any degree of rate of change. The curve is just a manifestation of a change with exponential smoothing of the sharp corners.

For example, when a car starts, its speed and acceleration become more than zero. But what about the rate of change at higher degrees? It doesn't suddenly jump from zero acceleration to non-zero. That means the car has a non-zero derivative at all degrees. In other words, the movement is exponential. The same thing happens in reverse when the car reaches a constant speed.
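
The logistic sigmoid makes the "exponentially smoothed corners" reading concrete (a standard fact, added here for illustration): both tails of the smoothed step are exactly exponential, and every derivative exists everywhere.

  \sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
  \sigma(x) \sim e^{x} \ (x \to -\infty), \qquad
  1 - \sigma(x) \sim e^{-x} \ (x \to +\infty)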

krupan 9 hours ago

News flash: predicting the future is hard

energy123 9 hours ago

The individual who is the best at predicting the future is predicting ASI and full labor automation by 2040:

https://xcancel.com/peterwildeford/status/202963666232244661...

solid_fuel 5 hours ago

> The individual who is the best at predicting the future

Yeah well my prophet says he can beat up your prophet in a fight.

---

Here in reality, I'm not accustomed to taking random predictions without backing evidence as if they were truth.

Aurornis 8 hours ago

> The individual who is the best at predicting the future

Going to need a big citation for that claim

dsign 8 hours ago

My own bet is end of that decade: somewhere between 2045 and 2050.

Ofc "full labor automation" has a certain spread of meaning. A sliver of population will always find ways to hold to a job or run one or many businesses. But there will be "enough" labor automation for it to be a social ticking bomb. That, in fact, does not depend on better models nor better AI than we have today. By 2045 there will be a couple of generations that has been outsourcing their thinking to AI for most of their adult lives. Some of them may still work as legal flesh of sorts, but many won't get to be middle man and will find no job.

Also, if you could replace your senator today by an untainted version of a frontier model (of today), would you do it? Would it be a better ruler? What are the odds of you not wanting to push that button in the next twenty years, after a few more batches of incompetent and self-serving politicians?

layer8 8 hours ago

Predicting who will predict the future best is hard.

gerikson 8 hours ago

Past results are no guarantee of future performance.

margalabargala 8 hours ago

> The individual who is the best at predicting the future

Lol

patrickmay 8 hours ago

Stein's Law: "If something cannot go on forever, it will stop."

skybrian 8 hours ago

Yes, but figuring out when is the hard part.

kubb 8 hours ago

If the scary AI is so inevitable, why do you feel such an overwhelming need to convince people about that? Surely you can just wait a bit, and they'll see for themselves.

mitthrowaway2 8 hours ago

By that reasoning, why even warn people about anything? Why do road construction crews put up signs saying "ROAD CLOSED AHEAD" when you can just drive on and see for yourself?

kubb 8 hours ago

Indeed, why warn people about real things that exist in the world? That is EXACTLY the same as inciting fear about something imaginary (not even projected).

mitthrowaway2 7 hours ago

throwawayk7h 5 hours ago

Yeah! And if climate change is so inevitable, why do the people who want to prevent it from happening seem hell-bent on convincing people that climate change is real?

adleyjulian 8 hours ago

1. It's not inevitable. 2. Those who see AI as an existential risk don't generally think it's a guarantee, but if it's, say, a 5% chance, then that's worth addressing/mitigating. 3. That's not what this article was even about.

kubb 8 hours ago

Sounds like the burden is on you to explain either

  1. If you're not treating my claim as a black box, explain explicitly what is your model of what the article was about? Are you aware, for example of the last paragraph of the article? I think that WAS what the article was about. Do you have specific opinions on e.g. how I went wrong and where my model differs?
  2. If you are treating it as a black box, what's your default expectation based on the law of Nothing Ever Happens?
Just kidding, you don't need to explain anything. A"I" fearmongers should though.

pyrale 5 hours ago

Such a long article to say that neither side has a fucking idea about what will happen next.

While we're at it, the "exponentials are actually sigmoids" meme is not necessarily true. While exponentials are never truly exponentials, sigmoids are not guaranteed either. Overshoot-and-collapse also happens in tech, e.g. the dotcom bubble, or the successive AI winters.

andrewflnr 4 hours ago

It's really not that long, and is quite clear that its main point is about how to reason when you realize no one actually knows what's going on.

nathan_compton 9 hours ago

A lot of words to say "The initial part of a sigmoidal curve is not very informative about the parameters of the sigmoid function in question."

inglor_cz 9 hours ago

That is true, but I generally enjoy reading a lot of words from Scott, who has a talent for writing.

The entire plot of the Lord of the Rings could probably be compressed into less than 10 kB of text too.

Edit: this seems to be a controversial comment, but IMHO a blog of Scott Alexander's type is an art form, not just a communication channel.

jeffreyrogers 8 hours ago

I find him more interesting when he talks about non-AI topics. Lots of other interesting people are like this too. I'd rather get my knowledge on AI from people who have unique insights into it. Scott has a lot of unique perspectives of his own, but his views on AI are bog-standard for his social group.

itkovian_ 8 hours ago

The other thing people don't understand is that exponential curves are self-similar. The start of an exponential looks like an exponential. People always look at it and think, "well, that's it, it's exponential now, I've missed it, it can't sustain." Nope.
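
The self-similarity is literal (a standard identity, added for illustration): for f(t) = e^{kt}, shifting time only rescales the curve, so any window looks like the start.

  f(t) = e^{kt} \implies f(t + s) = e^{ks} \, f(t)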

A good example of this is the number of submissions to NeurIPS/ICML/ICLR. In 2017 that curve was exponential.

lovich 3 hours ago

I wonder what the graph would look like if cost and/or profitability were taken into account.

I could probably make increasingly larger fires for years if I was willing to burn the entire world.

ngruhn 7 hours ago

> all exponentials eventually become sigmoids

Except innovation. When one sigmoid tapers off we keep finding new ones to keep the climb going.

jrflowers 4 hours ago

I like this article about how we should assume, at any given point, that we are exactly halfway through a phenomenon, which relies on a single data point on a graph (one that apparently doesn't need its relevance or importance explained) to illustrate that this is obviously true for AI in particular.

inglor_cz 9 hours ago

Hmmm, this is quite an interesting take by Scott.

Lindy's Law is not actually a law, and many exacting minds will be provoked by the very name; it also fails spectacularly in certain contexts (e.g. the lifetime of a single organism, though not necessarily the existence of entire species).

But at the same time, I am willing to take its invocation in the context of AI somewhat seriously. There is an international arms race with China, which has less compute, but more engineers and scientists. This sort of intellectual arms race does not exhaust itself easily.

A similar space race in the 1950s and 1960s progressed from the first unmanned spaceflight to a moonwalk in a mere 12 years, which is probably less than what it takes to approve a bicycle lane in Chicago now.

krupan 9 hours ago

"There is an international arms race with China"

I keep seeing this. Where did it come from? Has China said that they intend to attack other countries using AI? Have other countries declared that they intend to attack China with AI?

Also, why does anyone believe that AI could actually be that dangerous, given its inherently unpredictable and unreliable performance? I would be terrified to rely on AI in a life-or-death situation.

aspenmartin 8 hours ago

AI in war is basically Palantir's whole business model. You have a system that can effectively deal with ambiguity and has superhuman performance on reasoning, plus superhuman physical abilities via embodiment…

Inherently unpredictable and unreliable performance is quite the feature of human beings as well.

inglor_cz 9 hours ago

It was a metaphor. I meant, and later clarified, an intellectual arms race.

BTW your handle is an actual Czech word, minus a diacritic sign ("křupan"), and a bit amusing one. It basically means hillbilly. Not that it matters, just FYI.

Anyway: AI will be used in military context, and it probably already is. Both for target acquisition and maybe even driving the weapon itself. As of now, the Ukrainians are almost certainly operating some AI-enabled killer drones.

mitthrowaway2 8 hours ago

It's not a law per se, but there are rules for reasoning under uncertainty to get the most out of what limited knowledge you have, and Lindy's law arises from that. To do better than Lindy's law requires having additional information about the problem beyond just the one data point.

devmor 9 hours ago

"Exponentials all tend to become sigmoids but you can't predict exactly when" is a true statement, but I'm not sure it needed an article.

This doesn't say much, and the author fights their own points a couple times, suggesting that they maybe didn't think through what they wanted to write until they were in the middle of writing it and started realizing their assumptions didn't match what they expected the data to say.

I really don't get the point of what I just read.

aspenmartin 8 hours ago

The point is the tiring argument from AI skeptics saying "things are flattening, they have to," which, while technically correct, says nothing, because no one knows when that will happen and we see no mechanism for it yet. Lindy's law as a reasonable prediction under total uncertainty is interesting and insightful, and a lot of people don't know about it or why it holds. I did enjoy the reference to this!

solid_fuel 5 hours ago

Nah, this is making a category error. You're assuming that AI skeptics agree that models are demonstrating intelligence along the same axis as humans and that with further improvement they will become equivalent to humans. I am an AI skeptic, and I disagree with this assessment.

Model reasoning is on an s-curve, which is improving.

Model intelligence is not the same as reasoning. It's a different axis, and one I have not seen much movement on.

See, humans have a recursive form of intelligence which is capable of self-reflection and introspection. LLMs can only reason about tokens which have already been emitted. Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs. Therefore it is a mistake to assume that continual improvement on the reasoning scale will result in something that is equivalent enough to humans on the intelligence axis to replace all labor.

devmor 6 hours ago

But those skeptics are initially responding to the constant AI-hype claims that we are growing exponentially toward AGI. So this article is in fact just a (very poorly thought through) attempt at saying "nuh uh, the hype might be true, you can't prove it's not yet!"

BoredPositron 9 hours ago

If you use the log scale you'll see that the time horizon of Opus 4.6 was as expected...

afthonos 9 hours ago

As expected by the exponential. The Wharton study was predicting when the exponential would turn into a sigmoid.

ReptileMan 8 hours ago

Everything is linear on a log-log scale with a fat marker.

dnnddidiej an hour ago

TL;DR: you can't accurately predict the future of complex systems. Corollary: you can't accurately predict the future of complex systems using sigmoids.

"Attention Is All You Need" took us by surprise, and we don't know how big the wave is, let alone whether there are other waves behind it.

bedobi 8 hours ago

[flagged]

tomhow 2 hours ago

Please don't post like this on HN. The guidelines explicitly ask us not to sneer or be curmudgeonly - https://news.ycombinator.com/newsguidelines.html.

As for the basis of your objection, this smacks of intellectual gatekeeping. Plenty of good writing is by people who are not academically qualified or a recognized expert in the topic they're writing about. Indeed, very often, this kind of writing is better than writing by experts. Experts often write for other experts, and this can be exclusionary to lay readers. When a non-expert learns about a topic then writes about it for a general audience, they tend to be just a step ahead of the audience, and so the reader is able to learn about the topic by following the process of discovery and reasoning that the author just experienced. Sure, they often get some details or concepts wrong, but the discussion on a site like HN can draw other perspectives, and – very often – contributions from experts, which leads to further expansion in everyone's understanding of the topic.

HN's very ethos is to gratify intellectual curiosity, and this kind of writing is highly compatible with that.

ngriffiths 8 hours ago

I think there are many ways someone with his lack of expertise can still be valuable, including:

- Making connections to other subjects that an expert would miss. The hall of fame of sigmoid predictions is just excellent, I already know I'm going to be reminded of it some time in the future. Very entertaining way to get the point across.

- Writing about tricky concepts in a very accessible and elegant way, which experts are notoriously bad at doing themselves - they are often optimizing for other specialists.

- Being able to write with an air of speculation and experimentation with ideas that experts and institutions often can't afford. Experts have to maintain their track record; Scott Alexander can say "lol just double the timeline"

bedobi 7 hours ago

[flagged]

simianparrot 8 hours ago

Because HN is YCombinator which has invested in probably hundreds of «AI» firms by now. Including OpenAI.

Allowing slop articles like this literally prints them valuation money.

t43562 3 hours ago

Yes, this is not the place to express skepticism of any kind.