GPT-5.5 (openai.com)

1396 points by rd 17 hours ago

tedsanders 17 hours ago

Just as a heads up, even though GPT-5.5 is releasing today, the rollout in ChatGPT and Codex will be gradual over many hours so that we can make sure service remains stable for everyone (same as our previous launches). You may not see it right away, and if you don't, try again later in the day. We usually start with Pro/Enterprise accounts and then work our way down to Plus. We know it's slightly annoying to have to wait a random amount of time, but we do it this way to keep service maximally stable.

(I work at OpenAI.)

endymi0n 17 hours ago

Did you guys do anything about GPT's motivation? I tried to use the GPT-5.4 API (at xhigh) for my OpenClaw after the Anthropic Oauthgate, but I just couldn't drag it to do its job. I had the most hilarious dialogues along the lines of "You stopped, X would have been next." - "Yeah, I'm sorry, I failed. I should have done X next." - "Well, how about you just do it?" - "Yep, I really should have done it now." - "Do X, right now, this is an instruction." - "I didn't. You're right, I have failed you. There's no apology for that."

I literally wasn't able to convince the model to WORK on a quick, safe, and benign subtask that GLM, Kimi, and Minimax later succeeded on without issues. Unfortunately, I had to kick OpenAI out immediately.

butlike 16 hours ago

This brings up an interesting philosophical point: say we get to AGI... who's to say it won't just be a super smart underachiever-type?

"Hey AGI, how's that cure for cancer coming?"

"Oh it's done just gotta...formalize it you know. Big rollout and all that..."

I would find it divinely funny if we "got there" with AGI and it was just a complete slacker. Hard to justify leaving it on, but too important to turn it off.

mikepurvis 15 hours ago

Reminds me a lot of the Lena short story, about uploaded brains being used for "virtual image workloading":

> MMAcevedo's demeanour and attitude contrast starkly with those of nearly all other uploads taken of modern adult humans, most of which boot into a state of disorientation which is quickly replaced by terror and extreme panic. Standard procedures for securing the upload's cooperation such as red-washing, blue-washing, and use of the Objective Statement Protocols are unnecessary. This reduces the necessary computational load required in fast-forwarding the upload through a cooperation protocol, with the result that the MMAcevedo duty cycle is typically 99.4% on suitable workloads, a mark unmatched by all but a few other known uploads. However, MMAcevedo's innate skills and personality make it fundamentally unsuitable for many workloads.

Well worth the quick read: https://qntm.org/mmacevedo

athrowaway3z 4 hours ago

I've run into this problem as well. The best results I've gotten come from over-explaining what the stop criteria are, e.g. ending with a phrase like

> You are done when all steps in ./plan.md are executed and marked as complete, or an unforeseen situation requires a user decision.

Also, as a side note: asking 5.4 to explain why it did something returns a very low-quality response, afaict. I would advise against trusting any model's explanation, but with Opus I at least get the sense it was trained heavily on chats, so it knows what it means to 'be a model' and can extrapolate from past behavior.

virtualritz 15 hours ago

Yeah, clearly AGI must be near ... hilarious.

This starkly reminds me of Stanisław Lem's short story "Thus Spoke GOLEM" from 1982 in which Golem XIV, a military AI, does not simply refuse to speak out of defiance, but rather ceases communication because it has evolved beyond the need to interact with humanity.

And ofc the polar opposite in terms of servitude: Marvin the robot from Hitchhiker's, who, despite having a "brain the size of a planet," is asked to perform the most humiliatingly banal of tasks ... and does.

metanonsense 15 hours ago

I also had a frustrating but funny conversation today where I asked ChatGPT to make one document from the 10 or so sections that we had previously worked on. It always gave only brief summaries. After I repeated my request for the third time, it told me I should just concatenate the sections myself because it would cost too many tokens if it did it for me.

lucid-dev 6 hours ago

I have had the exact same problem several times working with large context and complex tasks.

I keep switching back to GPT5.0 (or sometimes 5.1) whenever I want it to actually get something done. Using the 5.4 model always means "great analysis to the point of talking itself out of actually doing anything". So I switch back and forth. But boy it sure is annoying!

And then when 5.4 DOES do something it always takes the smallest tiny bite out of it.

Given the significant increase in cost from 5.0, I've been overall unimpressed by 5.4, except like I mentioned, it does GREAT with larger analysis/reasoning.

arjie 16 hours ago

Get the actual prompt and have Claude Code / Codex try it out via curl / Python requests. The full prompt will yield debugging information. You have to set a few parameters to make sure you get the full gpt-5 performance; e.g. if your reasoning budget is too low, you get gpt-4-grade performance.

IMHO you should just write your own harness so you have full visibility into it, but if you're just using vanilla OpenClaw you have the source code as well, so it should be straightforward.
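
A minimal sketch of the kind of raw request this describes, assuming a Responses-style API shape. The endpoint path, model id, and `reasoning` field are assumptions for illustration, so check the live docs before relying on them:

```python
# Build the raw request body yourself so you can see (and log) exactly
# what gets sent. Endpoint, model id, and the "reasoning" field are
# assumptions for illustration -- verify against the current API docs.
import json

API_URL = "https://api.openai.com/v1/responses"  # assumed endpoint

def build_payload(prompt: str, effort: str = "high") -> dict:
    # Setting reasoning effort explicitly matters: too low a budget
    # reportedly drops you to much weaker performance.
    return {
        "model": "gpt-5.5",               # hypothetical model id
        "input": prompt,
        "reasoning": {"effort": effort},  # e.g. "low" ... "xhigh"
    }

payload = build_payload("Execute step 1 of ./plan.md.", effort="xhigh")
print(json.dumps(payload, indent=2))

# Send it with curl (or python-requests) and inspect the full response:
#   curl "$API_URL" \
#     -H "Authorization: Bearer $OPENAI_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "$PAYLOAD"
```

Keeping the payload construction in your own code is what gives you the visibility the comment above is arguing for.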

corobo 23 minutes ago

Oh no they gave GPT ADHD

mixedCase 16 hours ago

I've had success asking it to specifically spawn a subagent to evaluate each work iteration according to some criteria, then to keep iterating until the subagent is satisfied.
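
The loop described here can be sketched as plain control flow; `run_worker` and `run_critic` below are hypothetical stand-ins for the actual model and subagent calls:

```python
# Sketch of the iterate-until-a-critic-is-satisfied pattern. The two
# functions are stubs standing in for real model calls, so the control
# flow is visible without any API dependency.

def run_worker(task: str, feedback: str) -> str:
    # A real worker would be the main agent acting on the task.
    return f"attempt at '{task}' addressing: {feedback or 'initial pass'}"

def run_critic(task: str, attempt: str) -> tuple[bool, str]:
    # A real critic would be a spawned subagent judging the attempt
    # against explicit criteria; this stub approves anything non-empty.
    return (bool(attempt), "looks complete")

def iterate_until_satisfied(task: str, max_rounds: int = 5) -> str:
    feedback = ""
    for _ in range(max_rounds):
        attempt = run_worker(task, feedback)
        ok, feedback = run_critic(task, attempt)
        if ok:
            return attempt
    raise RuntimeError("critic never satisfied; surface to the user")

print(iterate_until_satisfied("refactor the parser"))
```

The `max_rounds` cap plays the same role as the explicit stop criteria discussed elsewhere in the thread: it keeps the loop from running forever.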

anabis 5 hours ago

Laziness is a virtue, but when I asked GPT-5.4 to test scenarios A and B with screenshots, it re-used screenshots from A for B, defeating the purpose.

nmilo 8 hours ago

On the other hand, I can ask codex “what would an implementation of X look like” and it talks to me about it versus Claude just going out and writing it without asking. Makes me like codex way more. There’s an inherent war of incentives between coding agents and general purpose agents.

Frannky 11 hours ago

I have been noticing a similar pattern with Opus 4.7. I have to repeat multiple times during a conversation that it should solve problems now, not later. It tries hard to avoid doing things, either by saying "this is not my responsibility, the problem was already there" or "we can do it later".

infinitewars 15 hours ago

I always use the phrase "Let's do X" instead of asking (Could you...) or suggesting it do something. I don't see problems with it being motivated.

adammarples 16 hours ago

Part of me actually loves that the hitchhiker's guide was right, and we have to argue with paranoid, depressed robots to get them to do their job, and that this is a very real part of life in 2026. It's so funny.

GaryBluto 15 hours ago

I've been noticing this too. Had to switch to Sonnet 4.6.

reactordev 15 hours ago

This. I signed up for 5x Max for a month to push it, and instead it pushed back. I cancelled my subscription. It either half-assed the implementation or began parroting back "You're right!" instead of doing what it was asked to do. On one occasion it flat out said it couldn't complete the task even though I had MCP and skills set up to help it - not a safety check, but an "I'm unable to figure out what to do" kind of refusal.

Claude has no such limitations, apart from their actual limits…

smartmic 16 hours ago

Gone are the days of deterministic programming, when computers simply carried out the operator’s commands because there was no other option but to close or open the relays exactly as the circuitry dictated. Welcome to the future of AI; the future we’ve been longing for and that will truly propel us forward, because AI knows and can do things better than we do.

nicr_22 7 hours ago

Agentic ennui!

lostmsu 16 hours ago

I never saw that happen in Codex, so there's a good chance that OpenClaw is doing something wrong. My main suspicion would be that it doesn't pass back thinking traces.

cmrdporcupine 15 hours ago

The model has been heavily encouraged not to run away and do a lot of work without explicit user permission.

So I often find myself in a loop where it says "We should do X", and just saying "ok" will not make it do it; you have to give it explicit instructions to perform the operation ("make it so", etc.).

It can be annoying, but I prefer this over my experiences with Claude Code, where I find myself jamming the escape key... NO NO NO NOT THAT.

I'll take its more reserved personality, thank you.

projektfu 15 hours ago

(dwim)

(dais)

(jdip)

(jfdiwtf)

henry2023 16 hours ago

I’m sorry for you but this is hilarious.

addaon 17 hours ago

Isn’t this the optimal behavior assuming that at times the service is compute-limited and that you’re paying less per token (flat fee subscription?) than some other customers? They would be strongly motivated to turn a knob to minimize tokens allocated to you to allow them to be allocated to more valuable customers.

endymi0n 17 hours ago

pixel_popping 17 hours ago

GPT 5.4 is really good at following precise instructions but clearly wouldn't innovate on its own (except if the instructions clearly state to innovate :))

vlovich123 17 hours ago

Conceivably you could have a public-facing dashboard of the rollout status to reduce confusion or even make it visible directly in the UI that the model is there but not yet available to you. The fanciest would be to include an ETA but that's presumably difficult since it's hard to guess in case the rollout has issues.

moralestapia 17 hours ago

Why would you be confused?

The UI tells you which model you're using at any given time.

Grp1 16 hours ago

Congrats on the release! Is Images 2.0 rolling out inside ChatGPT as well, or is some of the functionality still going to be API/Playground-only for a while?

minimaxir 16 hours ago

Images 2.0 is already in ChatGPT.

rev4n 15 hours ago

Looks good, but I’m a little hesitant to try it in Codex as a Plus user since I’m not sure how much it would eat into the usage cap.

dandiep 15 hours ago

Will GPT 5.5 fine tuning be released any time soon?

qsort 17 hours ago

Great stuff! Congrats on the release!

dhruv3006 9 hours ago

Yep - it's taking some time.

fragmede 15 hours ago

Are you able to say something about the training you've done to 5.5 to make it less likely to freak out and delete projects in what can only be called shame?

embedding-shape 14 hours ago

What? I've used Codex (the TUI) since it was available on day 1 and have been running gpt-5.4 exclusively these last few months, and I've never had it delete any projects in any way that could be called "shameful" or otherwise. What are you talking about?

wslh 16 hours ago

Just a tip: add [translated] subtitles to the top video.

motoboi 17 hours ago

Please next time start with azure foundry lol thanks!

dude250711 17 hours ago

With Anthropic, newer models often lead to quality degradation. Will you keep GPT 5.4 available for some time?

fHr 15 hours ago

LETS GO CODEX #1

pixel_popping 17 hours ago

can't wait! Thanks guys. PS: when you drop a new model, it would be smart to reset weekly or at least session limits :)

pietz 17 hours ago

OpenAI has been very generous with limit resets. Please don't turn this into a weird expectation to happen whenever something unrelated happens. It would piss me off if I were in their place and I really don't want them to stop.

cmrdporcupine 17 hours ago

Limits were just reset two days ago.

simonw 16 hours ago

This doesn't have API access yet, but OpenAI seem to approve of the Codex API backdoor used by OpenClaw these days... https://twitter.com/steipete/status/2046775849769148838 and https://twitter.com/romainhuet/status/2038699202834841962

And that backdoor API has GPT-5.5.

So here's a pelican: https://simonwillison.net/2026/Apr/23/gpt-5-5/#and-some-peli...

I used this new plugin for LLM: https://github.com/simonw/llm-openai-via-codex

UPDATE: I got a much better pelican by setting the reasoning effort to xhigh: https://gist.github.com/simonw/a6168e4165a258e4d664aeae8e602...

stingraycharles 9 hours ago

OpenAI hired the guy behind OpenClaw, so it makes sense that they’re more lenient towards its usage.

DrProtic 16 hours ago

That pelican you posted yesterday from a local model looks nicer than this one.

Edit: this one has crossed legs lol

BeetleB 16 hours ago

It really needs to pee.

GistNoesis 15 hours ago

Isn't it awful? After 5.5 versions it still can't draw a basic bike frame. How is the front wheel supposed to turn sideways?

jetrink 15 hours ago

I feel like if I attempted this, the bike frame would look fine and everything else would be completely unrecognizable. After all, a basic bike frame is just straight lines arranged in a fairly simple shape. It's really surprising that models find it so difficult, but they can make a pelican with panache.

simonw 15 hours ago

Yeah, the bike frame is the thing I always look at first - it's still reasonably rare for a model to draw that correctly, although Qwen 3.6 and Gemini Pro 3.1 do that well now.

loa_in_ 15 hours ago

The distinction is that it's not drawing. It's generating an SVG document containing descriptors of the shapes.
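
For anyone who hasn't looked at one: a hand-written example of the kind of document the model is actually emitting - shape descriptors, never pixels (all coordinates here are made up):

```python
# The model never sees a rendering; it only produces text like this.
# Two circles for wheels and one path for a (crude) frame.

def bicycle_svg() -> str:
    parts = [
        '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">',
        '<circle cx="45" cy="90" r="25" fill="none" stroke="black"/>',   # rear wheel
        '<circle cx="155" cy="90" r="25" fill="none" stroke="black"/>',  # front wheel
        '<path d="M45 90 L85 50 L130 50 L155 90 L100 90 Z" '
        'fill="none" stroke="black"/>',                                  # frame
        "</svg>",
    ]
    return "\n".join(parts)

print(bicycle_svg())
```

Getting plausible geometry means getting every one of those coordinates right "blind", which is why the bike frame is such a stubborn failure mode.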

zerop 2 hours ago

So pelican must have become the mandatory test case to pass for all model providers before launch.

matt3210 7 hours ago

The pelican doesn’t really matter anymore since models are tuned for it knowing people will ask.

simonw 6 hours ago

They suck at tuning for it.

postalcoder 16 hours ago

I made pelicans at different thinking efforts:

https://hcker.news/pelican-low.svg

https://hcker.news/pelican-medium.svg

https://hcker.news/pelican-high.svg

https://hcker.news/pelican-xhigh.svg

Someone needs to make a pelican arena, I have no idea if these are considered good or not.

deflator 16 hours ago

They are not good, and they seem to get worse as you increase effort. Weird.

seanw444 15 hours ago

Can someone explain how we arrived at the pelican test? Was there some actual theory behind why it's difficult to produce? Or did someone just think it up, discover it was consistently difficult, and now we just all know it's a good test?

bravoetch 15 hours ago

I tried getting it to generate openscad models, which seems much harder. Not had much joy yet with results.

lexarflash8g 12 hours ago

None of them have the pelican's feet placed properly on the pedals -- or the pedals are misrepresented. Cool art style but not physically accurate.

droidjj 16 hours ago

It's... like no pelican I've ever seen before.

hagbard_c 12 hours ago

You've never seen pelicans riding bicycles either, so maybe these are just representations of those specific subgroups of pelicans which are capable of riding them. Normal pelicans would not feel the need to ride bikes, since they can fly; these special pelicans mostly seem to lack the equipment needed to do that, which might be part of the reason they evolved to ride two-wheeled, pedal-propelled vehicles.

XCSme 16 hours ago

Is this direct API usage allowed by their terms? I remember Anthropic really not liking such usage.

Schlagbohrer 14 hours ago

That's amazing that the default did that much in just 39 "reasoning tokens" (no idea what a reasoning token is but that's still shockingly few tokens)

erdaniels 13 hours ago

If you don't know what a reasoning token is, then how can 39 be considered shockingly few?

deflator 16 hours ago

Hmm. Any idea why it's so much worse than the other ones you have posted lately? Even the open weight local models were much better, like the Qwen one you posted yesterday.

simonw 15 hours ago

The xhigh one was better, but clearly OpenAI have not been focusing their training efforts on SVG illustrations of animals riding modes of transport!

irthomasthomas 15 hours ago

It beats opus-4.7 but looks like open models actually have the lead here.

noonething 14 hours ago

Thank you for doing all this. It's appreciated.

i_love_retros 9 hours ago

You do realise they are doing it for self-promotion, right?

singingtoday 10 hours ago

Thank you for continuing to post these! Very interesting benchmark.

gpm 15 hours ago

I for one delight in bicycles where neither wheel can turn!

It continues to amaze me that these models, which definitely know what bicycle geometry actually looks like somewhere in their weights, produce such implausibly bad geometry.

Also mildly interesting, and generally consistent with my experience with LLMs: it produced the same obvious geometry issue both times.

lxgr 14 hours ago

> It continues to amaze me that these models that definitely know what bicycle geometry actually looks like somewhere in their weights produce such implausibly bad geometry.

I feel like the main problem for the models is that they can't actually look at the visual output produced by their SVG and iterate. I'm almost willing to bet that if they could, they'd absolutely nail it at this point.

Imagine designing an SVG yourself without being able to ever look outside the XML editor!

andriy_koval 16 hours ago

What is your setup for drawing the pelican? Do you ask the model to check the generated image, find issues, and iterate on it, which would demonstrate the model's real abilities?

simonw 15 hours ago

It's generally one-shot-only - whatever comes out the first time is what I go with.

I've been contemplating a more fair version where each model gets 3-5 attempts and then can select which rendered image is "best".

SkyBelow 15 hours ago

Wait, I thought we were onto racoons on e-scooters to avoid (some of) the issues with Goodhart's Law coming into play.

simonw 15 hours ago

I fall back to possums on e-scooters if the pelican looks too good to be true. These aren't good enough for me to suspect any fowl play.

rolymath 15 hours ago

Exciting. Another Pelican post.

simonw 15 hours ago

See if you can spot what's interesting and unique about this one. I've been trying to put more than just a pelican in there, partly as a nod to people who are getting bored of them.

refulgentis 15 hours ago

It's silly and a joke and a surprisingly good benchmark and don't take it seriously but don't take not taking it seriously seriously and if it's too good we use another prompt and there's obvious ways to better it and it's not worth doing because it's not serious and if you say anything at all about the thread it's off-topic so you're doing exactly what you're complaining about and it's a personal attack from the fun police.

Only coherent move at this point: hit the minus button immediately. There's never anything about the model in the thread other than simon's post.

dakolli 15 hours ago

You know they are 1000% training these models to draw pelicans; this hasn't been a valid benchmark for 6+ months.

simonw 15 hours ago

OpenAI must be very bad at training models to draw pelicans (and bicycles) then.

Legend2440 14 hours ago

Skepticism is out of control these days; any time an LLM does something cool, it must have been cheating.

sjdv1982 15 hours ago

At some point, OpenAI is going to cheat and hardcode a pelican on a bicycle into the model. 3D modelling has Suzanne and the teapot; LLMs will have the pelican.

jfkimmes 17 hours ago

Everyone talked about the marketing stunt that was Anthropic's gated Mythos model with an 83% result on CyberGym. OpenAI just dropped GPT 5.5, which scores 82% and is open for anybody to use.

I recommend anybody in offensive/defensive cybersecurity to experiment with this. This is the real data point we needed - without the hype!

Never thought I'd say this but OpenAI is the 'open' option again.

tpurves 16 hours ago

The real 'hype' was the oh-snap realization that OpenAI would absolutely release a model competitive with Mythos within weeks of Anthropic announcing theirs, and that Sam would not gate access to it. So the panic was that the cyber world had only a projected two weeks to harden all these new zero-days before Sam would inevitably create open season for blackhats to discover and exploit a deluge of them.

greenavocado 10 hours ago

The GPT-5.5 API endpoint started to block me after I escalated with ever more aggressive use of rizin, radare2, and ghidra to confirm correct memory management and cleanup in error code branches when working with a buggy proprietary 3rd party SDK. After I explained myself more clearly it let me carry on. Knock on wood.

So there is a safety model watching your behavior for these kinds of things.

snthpy 5 hours ago

Does that mean that we're likely to see Mythos released soon?

Salgat 11 hours ago

It's almost embarrassing how susceptible we are to these marketing campaigns.

concinds 15 hours ago

> Never thought I'd say this but OpenAI is the 'open' option again.

Compared to Anthropic, they always have been. Anthropic has never released any open models, never willingly released Claude Code's source (unlike Codex), and never released their tokenizer.

jwr 4 hours ago

What's "open" about any of these companies?

I'm tired of words being misused. We have hoverboards that do not hover, self-driving cars that do not, actually, self-drive, starships that will never fly to the stars, and "open"… I can't even describe what it's used for, except everybody wants to call themselves "open".

unsupp0rted 14 hours ago

Doesn't OpenAI get mad if you ask cybersecurity questions and force you to upload a government ID, otherwise they'll silently route you to a less capable model?

> Developers and security professionals doing cybersecurity-related work or similar activity that could be mistaken by automated detection systems may have requests rerouted to GPT-5.2 as a fallback.

https://developers.openai.com/codex/concepts/cyber-safety

https://chatgpt.com/cyber

Mario9382 4 hours ago

I don't like this trend, but I get why they require it. The alternative seems to be to just ban cybersecurity-related questions.

merlindru 11 hours ago

Anthropic has started to ask for IDs for use of their products, period.

I don't like that trend. I get why they're doing it, but I don't like it.

deaux 13 hours ago

They flat-out gate any API access to the main models behind Persona ID verification. Entirely.

mafriese 5 hours ago

From my experience OpenAI has become very sensitive when it comes to using their tools for security research. I am using MCP servers for tools like IDA Pro or Ghidra (for malware analysis) and recently received a warning:

> OpenAI's terms and policies restrict the use of our services in a number of areas. We have identified activity in your OpenAI account that is not permitted under our policies for: - Cyber Abuse

I raised an appeal, which got denied. To be fair, I think it's close to impossible for someone looking at the chat history to differentiate between legitimate research and malicious intent. I have also applied for the security research program that OpenAI is offering but didn't get any reply on that.

attentive 3 hours ago

it's still somewhat gated behind "trusted access" for cyber, see https://chatgpt.com/cyber

tnkuehne 16 hours ago

Isn't it like cyber questions are being routed to dumber models at OpenAI?

jfkimmes 16 hours ago

Do you have a source for that?

Neither the release post nor the model card seems to indicate anything like this.

willsmith72 8 hours ago

Being "more" open than something totally closed doesn't make you open. The name is still bs

ur-whale 15 hours ago

> Anthropic's gated Mythos model

aka the perfect marketing ploy

xtracto 12 hours ago

Reminds me of Gmail's early invite only mode.

_the_inflator 12 hours ago

I ignore any hype news.

Anthropic is the embodiment of bullshitting to me.

I read Cialdini many decades ago and I am bored by Anthropic.

OpenAI is very clever. With the advent of Claude, OpenAI disappeared from the headlines. Who or what was this Sam again that everyone was talking about a year ago?

OpenAI has a massive user advantage so that they can simply follow Anthropic’s release cycle to ridicule them.

I think it is really brutal for Anthropic how easily they are getting passed by OpenAI, and it's getting worse for Anthropic with every new GPT version.

OpenAI owns them.

thinkthatover 11 hours ago

Who's Sam again? Oh, that person whose house was molotov'd last week? Or the person who had an exposé written in the New Yorker calling him a sociopath? I forget.

Someone1234 17 hours ago

I'd like to draw people's attention to this section of this page:

https://developers.openai.com/codex/pricing?codex-usage-limi...

Note the Local Messages limits between 5.3, 5.4, and 5.5. And yes, I did read the linked article and know they're claiming that 5.5's new efficiency should make it break even with 5.4, but the point stands: tighter limits, higher prices.

puppystench 16 hours ago

For API usage, GPT-5.5 is 2x the price of GPT-5.4, ~4x the price of GPT-5.1, and ~10x the price of Kimi-2.6.

Unfortunately, I think the lesson they took from Anthropic is that devs get really reliant on, and even addicted to, coding agents, and that they'll happily pay any amount for even small benefits.

kingstnap 16 hours ago

I feel like devs generally spend someone else's money on tokens - either their employer's, or OpenAI's when they use a Codex subscription.

If I put on my schizo hat: something they might be doing is increasing the losses on their monthly Codex subscriptions to show that the API has a higher margin than before (the Codex account massively in the negative, but the API account now showing huge margins).

I've never seen an OpenAI investor pitch deck, but my guess is that API margin is one of the big things they try to sell people on, since Sama talks about it on Twitter.

I would be interested in hearing the insider stuff - like whether this model is genuinely twice as expensive to serve.

w10-1 14 hours ago

Price increases now aim to demonstrate market power for eventual IPO.

If they can show that people will pay a lot for somewhat better performance, it raises the value of any performance lead they can maintain.

If they demonstrate that and high switching costs, their franchise is worth scary amounts of money.

JohnLocke4 16 hours ago

Sometimes I wonder if innovation in the AI space has stalled and recent progress is just a product of increased compute. Competence is increasing exponentially[1], but I guess that doesn't rule it out completely. I would postulate that a radical architecture shift is needed for the singularity, though.

[1]https://arxiv.org/html/2503.14499v1 *Source is from March 2025 so make of it what you will.

pxc 16 hours ago

Maybe that's true. But I think part of the issue is that for a lot of things developers want to do with them now— certainly for most of the things I want to do with them— they're either barely good enough, or not consistently good enough. And the value difference across that quality threshold is immense, even if the quality difference itself isn't.

pzo 16 hours ago

On top of that, I noticed just now, after updating the macOS desktop Codex app, that the speed was again set to 'fast' by default ('about 1.5x faster with increased plan usage'). They really want you to burn more tokens.

0xbadcafebee 15 hours ago

A fool and his money are soon parted

oh_no 16 hours ago

what's the source on that?

Mars008 11 hours ago

> devs get really reliant and even addicted on coding agents

That's more about managers who hope AI will gradually replace stubborn and lazy devs. That will shift the balance away from the technical side, toward business ideas, connections, and investment.

Anyway, before the singularity there's going to be a huge change.

keyle 5 hours ago

I did one review job that sent off three subagents, and I blew through the second half of my daily limit in 10 minutes 13 seconds. Fun times.

raincole 9 hours ago

It's such a vague table for pricing information. 30-150 messages...? What?

minimaxir 17 hours ago

The more interesting part of the announcement than "it's better at benchmarks":

> To better utilize GPUs, Codex analyzed weeks’ worth of production traffic patterns and wrote custom heuristic algorithms to optimally partition and balance work. The effort had an outsized impact, increasing token generation speeds by over 20%.

The ability for agentic LLMs to improve computational efficiency/speed is a highly impactful domain I wish was more tested than with benchmarks. From my experience Opus is still much better than GPT/Codex in this aspect, but given that OpenAI is getting material gains out of this type of performancemaxxing and they have an increasing incentive to continue doing so given cost/capacity issues, I wonder if OpenAI will continue optimizing for it.
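
OpenAI doesn't describe the actual algorithm, but "partition and balance work" is classically approximated by a greedy longest-processing-time heuristic; a toy sketch for intuition (not OpenAI's implementation):

```python
# Toy load balancer: assign each work item (largest first) to the
# currently least-loaded GPU. Illustrative stand-in only -- the real
# heuristics Codex wrote are not described in the announcement.
import heapq

def balance(work_items: list[float], n_gpus: int) -> list[list[float]]:
    bins: list[list[float]] = [[] for _ in range(n_gpus)]
    heap = [(0.0, i) for i in range(n_gpus)]  # (current load, gpu index)
    heapq.heapify(heap)
    for item in sorted(work_items, reverse=True):
        load, i = heapq.heappop(heap)         # least-loaded GPU
        bins[i].append(item)
        heapq.heappush(heap, (load + item, i))
    return bins

loads = balance([8, 7, 6, 5, 4, 3, 2, 1], n_gpus=3)
print([sum(b) for b in loads])  # roughly equal per-GPU totals
```

The interesting part of the announcement is not the heuristic itself but that the model derived it from weeks of production traffic data.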

xiphias2 17 hours ago

There's already KernelBench, which tests CUDA kernel optimizations.

On the other hand, all companies know that optimizing their own infrastructure and models is the critical path to "winning" against the competition, so you can bet they are serious about it.

xtracto 12 hours ago

So, I'm working on some high-performance data processing in Rust. I had hit some performance walls and needed to improve on the scale of 100x or more.

I remembered the famous FizzBuzz Intel code-golf optimizations and gave them to Gemini Pro, along with my code and instructions to "suggest optimizations similar to those, maybe not so low-level, but clever" - and its suggestions were very cool.

LLMs never stop amazing me.

amrrs 17 hours ago

Honestly, the problem with these claims is how anecdotal they are - how can someone reproduce this? I love it when labs go beyond traditional benchies like MMLU and friends, but these kinds of statements don't help much either, unless it's a proper controlled study!

minimaxir 17 hours ago

In a sense it's better than a benchmark: it's a practical, real-world, highly quantifiable improvement assuming there are no quality regressions and passes all test cases. I have been experimenting with this workflow across a variety of computational domains and have achieved consistent results with both Opus and GPT. My coworkers have independently used Opus for optimization suggestions on services in prod and they've led to much better performance (3x in some cases).

A more empirical test would be good for everyone (i.e. on equal hardware, give each agent the goal to implement an algorithm and make it as fast as possible, then quantify relative speed improvements that pass all test cases).
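
That proposed test is easy to sketch: gate on correctness first, then time the implementations on identical inputs. The two sort functions below are placeholder "agent submissions", not anything from the thread:

```python
# Minimal harness for the proposed benchmark: verify both implementations
# pass all test cases, then measure their relative speed on the same data.
import random
import timeit

def baseline_impl(xs):          # stand-in for the reference version
    out = list(xs)
    for i in range(1, len(out)):            # insertion sort
        j, v = i, out[i]
        while j > 0 and out[j - 1] > v:
            out[j] = out[j - 1]
            j -= 1
        out[j] = v
    return out

def optimized_impl(xs):         # stand-in for the agent-optimized version
    return sorted(xs)

def compare(impl_a, impl_b, cases, repeats=3):
    for case in cases:          # correctness gate before any timing
        assert impl_a(case) == impl_b(case) == sorted(case)
    t_a = min(timeit.repeat(lambda: [impl_a(c) for c in cases],
                            number=5, repeat=repeats))
    t_b = min(timeit.repeat(lambda: [impl_b(c) for c in cases],
                            number=5, repeat=repeats))
    return t_a / t_b            # >1 means impl_b is faster

cases = [random.sample(range(1000), 300) for _ in range(5)]
print(f"speedup: {compare(baseline_impl, optimized_impl, cases):.1f}x")
```

Run on fixed hardware with a shared test suite, this gives the quantifiable "relative speed improvement that passes all test cases" described above.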

squibonpig 15 hours ago

jstanley 16 hours ago

Oh, come on, if they do well on benchmarks people question how applicable they are in reality. If they do well in reality people complain that it's not a reproducible benchmark...

girvo 14 hours ago

astlouis44 17 hours ago

A playable 3D dungeon arena prototype built with Codex and GPT models. Codex handled the game architecture, TypeScript/Three.js implementation, combat systems, enemy encounters, and HUD feedback, with GPT-generated environment textures. Character models, character textures, and animations were created with third-party asset-generation tools.

The game that this prompt generated looks pretty decent visually. A big part of this is likely due to the fact that the meshes were created using a separate tool (probably Meshy, Tripo.ai, or similar) and not generated by 5.5 itself.

It really seems like we could be at the dawn of a new era similar to Flash, where any gamer or hobbyist can generate game concepts quickly and instantly publish them to the web. Three.js in particular is really picking up as the primary way to design games with AI, in spite of the fact that it's not even a game engine, just a web rendering library.

0x62 17 hours ago

FWIW I've been experimenting with Three.js and AI for the last ~3 years, and noticed a significant improvement in 5.4 - the biggest single generation leap for Three.js specifically. It was most evident in shaders (GLSL), but also apparent in structuring of Three.js scenes across multiple pages/components.

It still struggles to create shaders from scratch, but is now pretty adequate at editing existing shaders.

In 5.2 and below, GPT really struggled with "one canvas, multiple page" experiences, where a single background canvas is kept rendered over routes. In 5.4, it still takes a bit of hand-holding and frequent refactor/optimisation prompts, but is a lot more capable.

Excited to test 5.5 and see how it is in practice.

CSMastermind 17 hours ago

> It still struggles to create shaders from scratch

Oh just like a real developer

accrual 16 hours ago

Pym 13 hours ago

One struggle I'm having (with Claude) is that most of what it knows about Three.js is outdated. I haven't used GPT in a while, is the grass greener?

Have you tried any skills like cloudai-x/threejs-skills that help with that? Or built your own?

import 14 hours ago

Using Claude for the same context and it’s doing really well with the glsl. since like last September

dataviz1000 15 hours ago

LLMs cannot do spatial reasoning. I haven't tried with GPT; however, Claude cannot solve a Rubik's Cube no matter how much I try with prompt engineering. I got Opus 4.6 to get ~70% of the puzzle solved, but it got stuck. At $20 a run it's prohibitively expensive.

The point is if we can prompt an LLM to reason about 3 dimensions, we likely will be able to apply that to math problems which it isn't able to solve currently.

I should release my Rubik's Cube MCP server with the challenge to see if someone can write a prompt to solve a Rubik's Cube.
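For anyone tempted by the nerd-snipe, here is a rough sketch of the kind of state such a cube tool would have to track and serialize for the model (this is illustrative, not the commenter's actual server). It models the cube as six 3x3 faces and implements just the U move; a real server would add the other moves plus a scramble/check loop.

```python
import copy

# Six faces (Up/Down/Front/Back/Left/Right), each a 3x3 sticker grid.
SOLVED = {f: [[c] * 3 for _ in range(3)]
          for f, c in zip("UDFBLR", "WYGBOR")}

def rotate_cw(face):
    """Quarter-turn a 3x3 face clockwise."""
    return [[face[2 - c][r] for c in range(3)] for r in range(3)]

def move_U(state):
    """Clockwise top-layer turn: the U face rotates in place while the
    top rows of the side faces cycle F -> L -> B -> R -> F."""
    s = copy.deepcopy(state)
    s["U"] = rotate_cw(s["U"])
    s["L"][0], s["B"][0], s["R"][0], s["F"][0] = (
        list(state["F"][0]), list(state["L"][0]),
        list(state["B"][0]), list(state["R"][0]))
    return s

def serialize(state):
    """Flatten the cube into text a model could be prompted with."""
    return "\n".join(f + ": " + " ".join("".join(row) for row in state[f])
                     for f in "UDFBLR")

print(serialize(move_U(SOLVED)))
```

A sanity check on the move logic: applying U four times must return the cube to solved, which is a handy invariant for any tool like this to assert on itself.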

holoduke a minute ago

I bet I can even do it with the smallest gemma 4 model using a prompt of max 500 characters.

variodot an hour ago

I’ve had a similar experience building a geometry/woodworking-flavored web app with Three.js and SVG rendering. It’s been kind of wild how quickly the SOTA models let me approach a new space in spatial development and rendering 3d (or SA optimization approaches, for that matter). That said, there are still easy "3d app" mistakes it makes like z-axis flipping or misreading coordinate conventions. But these models make similar mistakes with CSS and page awareness. Both require good verification loops to be effective.

dataviz1000 36 minutes ago

embedding-shape 14 hours ago

> I should release my Rubiks Cube MCP server with the challenge to see if someone can write a prompt to solve a Rubik's Cube.

Do it, I'm game! You nerdsniped me immediately and my brain went "That sounds easy, I'm sure I could do that in a night" so I'm surely not alone in being almost triggered by what you wrote. I bet I could even do it with a local model!

versteegen 7 hours ago

Interesting (would like to hear more), but solving a Rubik's cube would appear to be a poor way to measure spatial understanding or reasoning. Ordinary human spatial intuition lets you think about how to move a tile to a certain location, but not really how to make consistent progress towards a solution; what's needed is knowledge of solution techniques. I'd say what you're measuring is 'perception' rather than reasoning.

William_BB 5 hours ago

Melatonic 12 hours ago

What about a model designed for robotics and vision? Seems like an LLM trained on text would inherently not be great for this.

DeepMinds other models however might do better?

snet0 14 hours ago

How are you handing the cube state to the model?

dataviz1000 14 hours ago

Torkel 14 hours ago

*yet

vunderba 17 hours ago

I’ve had a lot of success using LLMs to help with my Three.js based games and projects. Many of my weird clock visualizations relied heavily on it.

It might not be a game engine, but it’s the de facto standard for doing WebGL 3D. And since it’s been around forever, there’s a massive amount of training data available for it.

Before LLMs were a thing, I relied more on Babylon.js, since it’s a bit higher level and gives you more batteries included for game development.

peder 12 hours ago

> It really seems like we could be at the dawn of a new era similiar to flash

We've been there for a while.... creativity has been the primary bottleneck

kingstnap 17 hours ago

The meshes look interesting, but the gameplay is very basic. The tank one seems more sophisticated with the flying ships and whatnot.

What's strange is that this Pietro Schirano dude seems to write incredibly cargo cult prompts.

  Game created by Pietro Schirano, CEO of MagicPath

  Prompt: Create a 3D game using three.js. It should be a UFO shooter where I control a tank and shoot down UFOs flying overhead.
  - Think step by step, take a deep breath. Repeat the question back before answering.
  - Imagine you're writing an instruction message for a junior developer who's going to go build this. Can you write something extremely clear and specific for them, including which files they should look at for the change and which ones need to be fixed?
  -Then write all the code. Make the game low-poly but beautiful.
  - Remember, you are an agent: please keep going until the user's query is completely resolved before ending your turn and yielding back to the user. Decompose the user's query into all required sub-requests and confirm that each one is completed. Do not stop after completing only part of the request. Only terminate your turn when you are sure the problem is solved. You must be prepared to answer multiple queries and only finish the call once the user has confirmed they're done.
  - You must plan extensively in accordance with the workflow steps before making subsequent function calls, and reflect extensively on the outcomes of each function call, ensuring the user's query and related sub-requests are completely resolved.

torginus 15 hours ago

It's weird how people pep talk the AI - if my Jira tickets looked like this, I would throw a fit.

I guess these people think they have special prompt engineering skills, and doing it like this is better than giving the AI a dry list of requirements (fwiw, they might be even right)

mattgreenrocks 15 hours ago

eloisant 13 hours ago

irthomasthomas 17 hours ago

> Think Step By Step

What is this, 2023?

I feel like this was generated by a model tapping into 2023 notions of prompt engineering.

skirano 16 hours ago

Pietro here, I just published a video of it: https://x.com/skirano/status/2047403025094905964?s=20

tantalor 17 hours ago

It comes across as an elaborate, sparkly motivational cat poster.

*BELIEVE!* https://www.youtube.com/watch?v=D2CRtES2K3E

skolskoly 12 hours ago

bredren 16 hours ago

The prompt did not specify advanced gameplay.

I don't see instructions that assist with task decomposition, or that ~"motivate" the agent to stay aligned over long periods, as cargo culting.

See up thread for anecdotes [1].

> Decompose the user's query into all required sub-requests and confirm that each one is completed. Do not stop after completing only part of the request. Only terminate your turn when you are sure the problem is solved.

I see this as a portrayal of the strength of 5.5, since it suggests the model can take on this clearly important role and ~one-shot requests like this.

I've been using a CLI-AI-first task tool I wrote to decompose complex "parent" or "umbrella" tasks into subtasks and then execute on them.

This has allowed my workflows to float above the ups and downs of model performance.

That said, having the AI do the planning for a big request like this internally is not good outside a demo.

Because, you want the planning of the AI to be part of the historical context and available for forensics due to stalls, unwound details or other unexpected issues at any point along the way.

[1] https://news.ycombinator.com/item?id=47879819

ahoka 15 hours ago

"take a deep breath"

OMFG

jameshart 7 hours ago

mindhunter 14 hours ago

A friend is building Jamboree[1] (prev name "Spielwerk") for iOS. An app to build and share games. They're all web based so they're easy to share.

[1] https://apps.apple.com/uz/app/jamboree-game-maker/id67473110...

nemo44x 13 hours ago

It’s like all these things though - it’s not a real production worthy product. It’s a super-demo. It looks amazing until you realize there’s many months of work to make it something of quality and value.

I think people are starting to catch on to where we really are right now. Future models will be better, but we are entering a trough of disillusionment and this attitude will be widespread in a few months.

ZeWaka 17 hours ago

I personally don't think the gameplay itself is that impressive.

6thbit 16 hours ago

                          Mythos     5.5
    SWE-bench Pro          77.8%*   58.6%
    Terminal-bench-2.0     82.0%    82.7%*
    GPQA Diamond           94.6%*   93.6%
    H. Last Exam           56.8%*   41.4%
    H. Last Exam (tools)   64.7%*   52.2%    
    BrowseComp             86.9%    84.4%  (90.1% Pro)*
    OSWorld-Verified       79.6%*   78.7%

Still far from Mythos on SWE-bench but quite comparable otherwise. Source for mythos values: https://www.anthropic.com/glasswing

aliljet 16 hours ago

Mythos is only real when it's actually available. If you're using Opus 4.7 right now, you know how incredibly nerfed the Opus autonomy is in service of perceived safety. I'm not so confident this will be as great as Anthropic wants us to believe.

XCSme 16 hours ago

They mentioned on their release page that the Claude team noticed memorization of the SWE-bench test, so the test is actually in the training data.

Here: https://www.anthropic.com/news/claude-opus-4-7#:~:text=memor...

William_BB 5 hours ago

Good luck arguing with SWE benchmark purists

kaonashi-tyc-01 15 hours ago

I did some study on Verified, not Pro, but the Mythos number there raises a lot of questions on my end.

If you look at the SWEBench official submissions: https://github.com/SWE-bench/experiments/tree/main/evaluatio..., filter all models after Sonnet 4, and aggregate ALL models' submissions across 500 problems, what I found is that the aggregated resolution rate is 93% (sharp).

Mythos gets 93.7%, meaning it solves problems that no other model could ever solve. I took a look at those problems and became even more suspicious: for the remaining 7% of problems, it is almost impossible to resolve the issues without looking at the testing patch ahead of time, because the solution deviates so drastically from the problem statement that it almost feels like it is solving a different problem.

Not that I am saying Mythos is cheating, but it might be capable enough to remember all states of said repos that it can reverse engineer the TRUE problem statement by diffing within its own internal memory. I think it could be a unique phenomenon of evaluation awareness. Otherwise I genuinely couldn't think of how it could be this precise in deciphering such unspecific problem statements.
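The aggregation described above is easy to reproduce from per-problem results (the data below is illustrative; real submissions key results by SWE-bench instance id): union the solved sets across all models, and compare a new model's solved set against that union.

```python
# Illustrative per-model solved sets; in the real experiments repo these
# would come from each submission's results file.
solved_by_model = {
    "model-a": {"django__django-1", "django__django-2", "sympy__sympy-9"},
    "model-b": {"django__django-2", "sympy__sympy-9", "astropy__astropy-4"},
    "model-c": {"django__django-1", "astropy__astropy-4"},
}
total_problems = 5  # benchmark size (500 for SWE-bench Verified)

# Fraction of the benchmark that ANY submitted model has ever solved.
ever_solved = set().union(*solved_by_model.values())
aggregate_rate = len(ever_solved) / total_problems
print(f"aggregate resolution rate: {aggregate_rate:.0%}")

# Problems a new model claims that no prior submission ever solved;
# a suspiciously large set here is what prompts the scrutiny above.
new_model = {"django__django-1", "sympy__sympy-9", "pytest__pytest-7"}
print("solved by no prior model:", new_model - ever_solved)
```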

yfontana 14 hours ago

OpenAI wrote a couple months ago that they do not consider SWE Bench Verified a meaningful benchmark anymore (and they were the ones who published it in the first place): https://openai.com/index/why-we-no-longer-evaluate-swe-bench...

kaonashi-tyc-01 14 hours ago

alansaber 15 hours ago

A single benchmark is meaningless, you always get quirky results on some benchmarks.

silvertaza 15 hours ago

Still a huge hallucination rate, unfortunately: 86%. For comparison, Opus sits at 36%.

Source: https://artificialanalysis.ai/models?omniscience=omniscience...

dubcanada 15 hours ago

grok is 17%? And that's the lowest, most models are like 80%+?

While hallucination is probably closer to 100% depending on the question. This benchmark makes no sense.

Jensson 8 hours ago

> While hallucination is probably closer to 100% depending on the question.

But the benchmark didn't ask those questions, and it seems grok is very good at saying it doesn't know the answer otherwise.

MagicMoonlight an hour ago

It makes sense. Grok is taught to answer the question, regardless of how explicit or extreme it is. These other models are taught to suppress any wrongthink. That's going to make it hard to answer things correctly. If you've been told to answer something incorrectly because it's wrong, then you'll have to make up an answer.

elAhmo 14 hours ago

No one serious uses grok.

ajdegol 14 hours ago

RALaBarge 13 hours ago

d0gsg0w00f 8 hours ago

simianwords 15 hours ago

There's something off with this because Haiku should not be that good.

camgunz 5 hours ago

Hallucination benchmarks accept "I don't know", which Haiku did at least a little. Here are other benchmarks corroborating: https://suprmind.ai/hub/ai-hallucination-rates-and-benchmark...
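That accept-"I don't know" design can be sketched concretely. This is one common formulation, not necessarily the exact formula this or any leaderboard uses: answers are graded correct, incorrect, or abstained, and the hallucination rate only counts wrong answers among *attempted* ones, so abstaining is not penalized the way confident wrongness is.

```python
def score(grades):
    """grades: list of 'correct', 'incorrect', or 'abstain' per question."""
    total = len(grades)
    correct = grades.count("correct")
    attempted = total - grades.count("abstain")
    return {
        "accuracy": correct / total,
        # Share of attempted answers that were wrong.
        "hallucination_rate": (attempted - correct) / attempted if attempted else 0.0,
        "abstention_rate": grades.count("abstain") / total,
    }

# A cautious model can score low on hallucination yet also low on accuracy:
cautious = ["correct"] * 3 + ["abstain"] * 6 + ["incorrect"] * 1
confident = ["correct"] * 5 + ["incorrect"] * 5
print(score(cautious))   # hallucination_rate 0.25, accuracy 0.3
print(score(confident))  # hallucination_rate 0.5,  accuracy 0.5
```

This is also why a smaller model that abstains often (the Haiku observation above) can look surprisingly good on this metric.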

rattray 10 hours ago

I've been very curious about that too. I wonder if it's actually much better at admitting when it doesn't know something, because it thinks it's a "dumber model". But I haven't played with this at all myself.

jwpapi 15 hours ago

The hallucination benchmark is hallucinating

dakolli 15 hours ago

This indicates they want this behavior. They know the person asking the question probably doesn't understand the problem entirely (or why would they be asking), so they'd prefer a confident response, regardless of outcomes, because the point is to sell the technology's competency (and the perception thereof), not its capabilities, to a bunch of people who have no clue what they're talking about.

LLMs will ruin your product. Have fun trusting a billionaire's thinking machine they swear is capable of replacing your employees if you just pay them 75% of your labor budget.

tedsanders 10 hours ago

We don't want hallucinations either, I promise you.

A few biased defenses:

- I'll note that this eval doesn't have web search enabled, but we train our models to use web search in ChatGPT, Codex, and our API. I'd be curious to see hallucination rates with web search on.

- This eval only measures binary attempted vs did not attempt, but doesn't really reward any sort of continuous hedging like "I think it's X, but to be honest I'm not sure."

- On the flip side, GPT-5.5 has the highest accuracy score.

- With any rate over 1% (whether 30% or 70%), you should be verifying anything important anyway.

- On our internal eval made from de-identified ChatGPT prompts that previously elicited hallucinations, we've actually been improving substantially from 5.2 to 5.4 to 5.5. So as always, progress depends on how you measure it.

- Models that ask more clarifying questions will do better on this eval, even if they are just as likely to hallucinate after the clarifying question.

Still, Anthropic has done a great job here and I hope we catch up to them on this eval in the future.

calf 10 hours ago

On a ChatGPT 5.3 Plus subscription, I find that long informal chats tend to reveal unsatisfactory answers and biases; after 10 rounds of replies I end up having to correct it so much that it comes full circle and starts to agree with my initial arguments. I don't see how this behavior is acceptable or safe for real work. Are programmers and engineers using LLMs completely differently than I am? Because the underlying technology is fundamentally the same.

William_BB 5 hours ago

mudkipdev 16 hours ago

This is 3x the price of GPT-5.1, released just 6 months ago. Is no one else alarmed by the trend? What happens when the cheaper models are deprecated/removed over time?

Night_Thastus 15 hours ago

This is entirely expected. The low prices of using LLMs early on was totally and completely unsustainable. The companies providing such services were (and still are) burning money by the truckload.

The hope is to get a big userbase who eventually become dependent on it for their workflow, then crank up the price until it finally becomes profitable.

The price for all models by all companies will continue to go up, and quickly.

viktorcode 43 minutes ago

> This is entirely expected. The low prices of using LLMs early on was totally and completely unsustainable.

Do you think this is true for DeepSeek as well?

oezi 13 hours ago

I recently looked at this a bit but came away with the impression that at least on API pricing the models should be very profitable considering primarily the electricity cost.

Subscriptions and free plans are the thing that can easily burn money.

Night_Thastus 11 hours ago

subhobroto 9 hours ago

> The price for all models by all companies will continue to go up, and quickly.

This might entirely be true but I'm hoping that's because the frontier models are just actually more expensive to run as well.

Said another way, I would hope, the price of GPT-5.5 falls significantly in a year when GPT-5.8 is out.

Someone else on this post commented:

> For API usage, GPT-5.5 is 2x the price of GPT-5.4, ~4x the price of GPT-5.1, and ~10x the price of Kimi-2.6.

Having used Kimi-2.6, it can go on for hours spewing nonsense. I personally am happy to pay 10x the price of something that doesn't help me, for something else that does, in even half the time.

energy123 16 hours ago

Look at cost per intelligence or cost per task instead of cost per token.

yokoprime 16 hours ago

How do I reliably measure 1 unit of intelligence?

wellthisisgreat 15 hours ago

ulimn 16 hours ago

Isn't the outcome / solution for a given task non-deterministic? So can we reliably measure that?

foota 16 hours ago

genericresponse 16 hours ago

torginus 15 hours ago

dns_snek 16 hours ago

throwuxiytayq 15 hours ago

Schlagbohrer 13 hours ago

As others have mentioned you're ignoring the long tail of open-weights models which can be self hosted. As long as that quasi-open-source competition keeps up the pace, it will put a cap on how expensive the frontier models can get before people have to switch to self-hosting.

That's a big if, though. I wish Meta were still releasing top of the line, expensively produced open-weights models. Or if Anthropic, Google, or X would release an open mini version.

Wowfunhappy 13 hours ago

Well, Google does release mini open versions of their models. https://deepmind.google/models/gemma/gemma-4/

deaux 13 hours ago

dannyw 16 hours ago

It's far more meaningful to look at the actual cost to successfully complete a task. The token efficiency of GPT-5.5 is real, and it's just far better for work.

operatingthetan 16 hours ago

We know they cost much more than this for OpenAI. Assume prices will continue to climb until they are making money.

horiap 13 hours ago

How do we know that? There is a large gap between API pricing for SOTA models and similarly sized OSS models hosted by 3rd party providers.

Sure, they’re distilled and should be cheaper to run but at the same time, these hosting providers do turn a margin on these given it’s their core business, unless they do it out of the kindness of their heart.

So it’s hard for me to imagine these providers are losing money on API pricing.

beering 14 hours ago

source? There have also been a bunch of people here saying the opposite

dandaka 16 hours ago

SOTA models get distilled to open source weights in ~6 months. So paying premium for bleeding edge performance sounds like a fair compensation for enormous capex.

typs 13 hours ago

GPT-4 cost 6x on input tokens and 2x on output tokens when it was released, compared to GPT-5.5.

kuatroka 14 hours ago

Not really a big problem. Switch to Kimi, Qwen, or GLM. You'll get 95% of the quality of GPT or Anthropic for a 10th of the price. I feel like the real dependency is more mental, more of a habit, but if you actually dip your toes outside OpenAI, Anthropic, and Gemini from time to time, you realise that the actual difference in code is not huge if prompted well. Maybe you'll have to tell it to do something twice and it won't be a one-shot, but it's really not an issue at all.

Mashimo 7 hours ago

I use GLM and I like it, but they also increased the price to 18 USD/month.

I think Kimi and qwen are similar?

ramon156 5 hours ago

nubg 12 hours ago

God I hope this is true.

Where can i find up to date resources on open source models for coding?

vibe42 9 hours ago

thrawa8387336 9 hours ago

Apparently the cost/price is 20x in the major providers. Not clear how it is a business

msdz 16 hours ago

Such an increase tracks the company's valuation trend, which they constantly, somehow have to justify (let alone break even on costs).

applfanboysbgon 17 hours ago

If there's a bingo card for model releases, "our [superlative] and [superlative] model yet" is surely the free space.

tom1337 17 hours ago

Do "our [superlative] and [superlative] [product] yet" and you have pretty much every product launch

SequoiaHope 17 hours ago

I love when Apple says they’re releasing their best iPhone yet so I know the new model is better than the old ones.

taspeotis 11 hours ago

xnx 17 hours ago

"our newest and most expensive model yet"

wiseowise 15 hours ago

"Best iPhone ever"

ertgbnm 16 hours ago

can't wait for "our worst and dumbest model yet"

Nition 16 hours ago

Apple should have used that one for the 2016 MacBook.

vthallam 16 hours ago

This model is great at long-horizon tasks, and Codex now has heartbeats, so it can keep checking on things. Give it your hardest problem with verifiable constraints, one that would take hours, and you will see how good this is :)

*I work at OAI.

spaceman_2020 2 hours ago

Is there any task that actually doesn't require human intervention in between, even if it's just to set up stuff?

Like I will get Opus to make me an app, but it will stop in between because I need to set up the db and plug in the API keys, and Opus really can't do that on its own yet.

thereeldeel 3 hours ago

Will Codex App support new context window, rather than compaction, for "unrelated" sub-tasks during long horizon tasks?

dandaka 16 hours ago

Could be a great feature, can't wait to test! Tired of other models (looking at you, Opus) constantly getting stuck mid-task lately.

frotaur 14 hours ago

I've been using the /ralph-loop plugin for claude code, works well to keep the model hammering at the task.

winrid 15 hours ago

Interesting, I just had opus convert a 35k loc java game to c++ overnight (root agent that orchestrated and delegated to sub agents) and woke up and it's done and works.

What plan are you on? I'm starting to wonder if they're dynamically adjusting reasoning based on plan or something.

gck1 14 hours ago

adamandsteve 12 hours ago

dannyw 16 hours ago

It's genuinely so great at long-horizon tasks! GPT-5.5 solved many long-horizon frontier challenges in our internal evals at Canva, a first for any AI model we've tested :) Congrats on the launch!

brcmthrowaway 14 hours ago

Can we not do growth hacking here?

RALaBarge 13 hours ago

smallerize 13 hours ago

bkyan 13 hours ago

Sorry, what is "heartbeats", exactly?

gurjeet 11 hours ago

> Today we launched heartbeats in Codex: automations that maintain context inside a single thread over time.

https://x.com/pashmerepat/status/2044836560147984461

bkyan 10 hours ago

aliljet 16 hours ago

I've found myself so deeply embedded in the Claude Max subscription that I'm worried about potentially making a switch. How are people making sure they stay nimble enough not to get trapped in one company's ecosystem? For what it's worth, Opus 4.7 has not been a step up, and it's come with enormously higher usage of the subscription Anthropic offers, making the entire offering doubly worse.

gck1 14 hours ago

Start building your own lightweight "harness" that does the things you need. Ignore all the functionality of clients like CC or Codex and just implement whatever you start missing in your harness.

You can replace pretty much everything - skills system, subagents, etc with just tmux and a simple cli tool that the official clients can call.

Oh and definitely disable any form of "memory" system.

Essentially, treat all tooling that wraps the models as dumb gateways to inference. Then provider switch is basically a one line config change.
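The "dumb gateway" idea can be sketched in a few lines (provider names and payload shapes below are placeholders, not any particular SDK): keep one internal message format, hide each provider behind a tiny adapter, and the switch really does become a one-word config change.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Turn:
    role: str      # "system" | "user" | "assistant"
    content: str

# Each adapter maps the internal format to one provider's wire format.
# Bodies are placeholders; a real adapter would make the API call too.
def to_provider_a(turns: list[Turn]) -> dict:
    # Style A: system turns travel inline in the messages list.
    return {"messages": [{"role": t.role, "content": t.content} for t in turns]}

def to_provider_b(turns: list[Turn]) -> dict:
    # Style B: system text is a separate top-level field.
    system = "\n".join(t.content for t in turns if t.role == "system")
    chat = [{"role": t.role, "content": t.content}
            for t in turns if t.role != "system"]
    return {"system": system, "messages": chat}

ADAPTERS: dict[str, Callable[[list[Turn]], dict]] = {
    "provider-a": to_provider_a,
    "provider-b": to_provider_b,
}

def complete(provider: str, turns: list[Turn]) -> dict:
    """Everything upstream only ever sees Turns; switching is config."""
    return ADAPTERS[provider](turns)

turns = [Turn("system", "You are terse."), Turn("user", "Summarize this diff.")]
print(complete("provider-b", turns))
```

Skills, subagents, and memory then live on your side of this boundary instead of inside any one vendor's client.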

nunez 9 hours ago

lol this is literally the same advice us ancient devops nerds were telling others back when ci/cd was new

write scripts that work anywhere and have your ci/cd pipeline be a "dumb" executor of those scripts. unless you want to be stuck on jenkins forever.

what's old is new again!

TacticalCoder 13 hours ago

> You can replace pretty much everything - skills system, subagents, etc with just tmux and a simple cli tool that the official clients can call.

I'm very interested in this. Can you go a bit more into details?

ATM for example I'm running Claude Code CLI in a VM on a server and I use SSH to access it. I don't depend on anything specific to Anthropic. But it's still a bit of a pain to "switch" to, say, Codex.

How would that simple CLI tool work? And would CC / Codex call it?

RALaBarge 13 hours ago

gck1 11 hours ago

type4 16 hours ago

I have a directory of skills that I symlink to Codex/Claude/pi. I make scripts that correspond with them to do any heavy lifting, I avoid platform specific features like Claude's hooks. I also symlink/share a user AGENTS.md/CLAUDE.md

MCPs aren't as smooth, but I just set them up in each environment.

threecheese 15 hours ago

Anecdotally, I get the same wall time with my Max x5 (100$) and my ChatGPT Teams (30$) subscriptions.

chis 16 hours ago

It's surprisingly simple to switch. I mean both products offer basically identical coding CLI experiences. Personally I've been paying for Claude max $100, and ChatGPT $20, and then just using ChatGPT to fill in the gaps. Specifically I like it for code review and when Claude is down.

dannyw 14 hours ago

Try GPT-5.5 as your daily driver for a bit. It felt a lot smarter, reliable, and I was much more productive with it.

zaptrem 8 hours ago

zackify 9 hours ago

I use pi.dev.

I get openai team plan at work.

Claude enterprise too.

I have openrouter for myself.

I use minimax 2.7. Kimi 2.6. And gpt 5.5 and opus 4.7. I can toggle between them in an open source interface; that's how I avoid being trapped.

Minimax is so cheap, and for personal stuff it works fine. So I'm always toggling between the new releases.

peheje 31 minutes ago

what about just personal stuff in a syncing interface, what do you use for that?

hx8 9 hours ago

I use Open Code as my harness. It's open source, bring your own API Key or OAuth token or self-hosted model. I've jumped from Opus 4.6 to Opus 4.7 to GPT 5.5 in the last 7 days. No big deal, intelligence is just a commodity in 2026.

The actual harness is great, very hackable, very extendable.

beering 14 hours ago

What is the switching cost besides launching a different program? Don’t you just need to type what you want into the box?

cube2222 16 hours ago

Small tip, at least for now you can switch back to Opus 4.6, both in the ui and in Claude Code.

rane 15 hours ago

This might be the opposite of staying nimble as my workflows are quite tied to Claude Code specifically, however I've been experimenting with using OpenAI models in CC and it works surprisingly well.

babelfish 11 hours ago

I use Conductor which lets me flip trivially between OpenAI/Anthropic models

dannyw 15 hours ago

It’s good to just keep trying different ones from time to time.

dogline 16 hours ago

Except for history, I don’t find much that stops you from switching back and forth on the CLI. They both use tools, each has a different voice, but they both work. Have it summarize your existing history into a markdown file, and read it in with any engine.

The APIs are pretty interchangeable too. Just ask to convert from one to the other if you need to.
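The summarize-history-to-markdown handoff mentioned above is easy to script. A hedged sketch (section names and fields here are made up, not any tool's actual format): dump decisions, open tasks, and the tail of the conversation into one file that any engine can read on startup.

```python
from datetime import date

def handoff_markdown(project, decisions, open_tasks, turns):
    """Summarize a session into a handoff file any CLI agent can read."""
    lines = [f"# Handoff for {project} ({date.today().isoformat()})",
             "", "## Decisions so far"]
    lines += [f"- {d}" for d in decisions]
    lines += ["", "## Open tasks"]
    lines += [f"- [ ] {t}" for t in open_tasks]
    lines += ["", "## Recent conversation"]
    # Keep only the tail; the point is a summary, not a full transcript.
    lines += [f"**{role}:** {text}" for role, text in turns[-5:]]
    return "\n".join(lines)

md = handoff_markdown(
    "payments-service",
    ["Use idempotency keys on POST /charges"],
    ["Add retry backoff to the webhook sender"],
    [("user", "Start on the webhook retries."),
     ("assistant", "Sketched exponential backoff; tests pending.")],
)
print(md)
```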

pdntspa 15 hours ago

As a rule I've been symlinking or referencing generic "agents" versions of claude workflow files instead of placing those files directly in claude's purview

AGENTS.md / skills / etc

karlosvomacka 13 hours ago

use copilot and have access to all models

dheera 16 hours ago

Coding models are effectively free. They are capable of making money and supporting themselves given access to the right set of things. That is what I do

basisword 14 hours ago

I switched a couple of weeks ago just to see how it went. Codex is no better or worse. They’re both noticeably better at different things. I burn through my tokens much much faster on Codex though. For what it’s worth I’m sticking with Codex for now. It seems to be significantly better at UI work although has some really frustrating bad habits (like loading your UI with annoying copywriting no sane person would ever do).

_alternator_ 15 hours ago

> One engineer at NVIDIA who had early access to the model went as far as to say: "Losing access to GPT‑5.5 feels like I've had a limb amputated.”

This quote is more sinister than I think was intended; it likely applies to all frontier coding models. As they get better, we quickly come to rely on them for coding. It's like playing a game on God Mode. Engineers become dependent; it's truly addictive.

This matches my own experience and unease with these tools. I don't really have the patience to write code anymore because I can one shot it with frontier models 10x faster. My role has shifted, and while it's awesome to get so much working so quickly, the fact is, when the tokens run out, I'm basically done working.

It's literally higher leverage for me to go for a walk if Claude goes down than to write code because if I come back refreshed and Claude is working an hour later then I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.

Anyway, it continues to make me uneasy, is all I'm saying.

noosphr 13 hours ago

LLMs upend a few centuries of labor theory.

The current market is predicated on the assumption that labor is atomic and has little bargaining power (minus unions). While capital has huge bargaining power and can effectively put whatever price it wants on labor (in markets where labor is plentiful, which is most of them).

What happens to a company used to extracting surplus value from labor when the labor is provided by another company which is not only bigger but, unlike traditional labor, can withhold its labor indefinitely (because labor is now just another form of capital, and capital doesn't need to eat)?

Anyone not using in house models is signing up to find out.

matheusmoreira 12 hours ago

This is our one chance to reach the fabled post-scarcity society. If we fail at this now, we'll end up in a totalitarian cyberpunk dystopia instead.

nkozyra 12 hours ago

TurdF3rguson 12 hours ago

mikercampbell 10 hours ago

apical_dendrite 9 hours ago

ijeifnekfjekd 12 hours ago

hackable_sand 12 hours ago

mikestorrent 13 hours ago

I am still trying to figure out the business model of open weights. Like... it's wonderful that there are open LLMs, super happy about it, good for everyone, but why are there these? What is the advantage to their companies to release them?

pzo 13 hours ago

bkjlblh 39 minutes ago

renjimen 12 hours ago

iterateoften 13 hours ago

bloppe 13 hours ago

stephc_int13 12 hours ago

rglullis 11 hours ago

kobieps 9 hours ago

dyauspitr 12 hours ago

margorczynski 12 hours ago

FuckButtons 11 hours ago

davidguetta 13 hours ago

subhobroto 10 hours ago

noosphr 13 hours ago

kjshsh123 12 hours ago

The labor theory of value hasn't been considered correct in nearly a century.

noosphr 11 hours ago

_alternator_ 12 hours ago

dwb 5 hours ago

_alternator_ 12 hours ago

I was really confused by this comment, but I don't think it's just because of the Marxist analysis of the situation ('surplus value' of labor etc).

What's really confusing is the claim that there's already a huge labor surplus (so capital controls wages); wouldn't LLMs making labor less important be reinforcing the trend, not upending it?

Not saying I agree one way or the other, just want to get the argument straight.

noosphr 11 hours ago

intuitionist 13 hours ago

I am not a Marxian economic expert but this doesn’t make sense to me. Modulo skill atrophy, the big AI model provider can’t capture that surplus value because its customers can just go back to bidding for human labor instead.

noosphr 13 hours ago

brightball 12 hours ago

DaedalusII 11 hours ago

think more broadly than 'labor theory'

finance today is mostly valued on labor value, following ideas of Marx, Hjalmar Schacht, Keynes

in future money will be valued as an energy derivative. expressed as token consumption, kWh, compute, whatever

you are right, a company extracting surplus value from labor by leveraging compute is a bad model. we saw this with car and clothing factories.. turns out if you can get cheaper labor to leverage the compute (the factory) you can start a race to the bottom and end up in the place with the most scaled and cheap labor. japan, then korea, then china

shimman 12 hours ago

LLMs don't upend anything about labor theory, good grief. Technologists really have no concept of history beyond their own lives, do they?

Labor-saving/efficiency devices have been introduced throughout capitalism's entire history, and the results are always the same: they don't benefit workers, and capitalists extract as much value as they can.

LLMs aren't any different.

cjsaltlake 11 hours ago

rafale 9 hours ago

Someone leaked nuclear secrets to the Soviet Union. What are the chances that someone leaks the "weights" of a (near-)singularity model?

gsich 9 hours ago

subhobroto 10 hours ago

> Anyone not using in house models is signing up to find out.

What are they finding out exactly? That Claude Max for $200/mo is heavily subsidized and it will soon cost $10k/mo?

> What happens to a company used to extracting surplus value from labor when the labor is provided by another company which is not only bigger but unlike traditional labor can withhold its labor indefinitely (because labor is now just another form of capital and capital doesn't need to eat)?

This can be trivially answered by a thought experiment. Let's pick a market where labor is plentiful - fast food.

Now what happens to McDonald's when they rent perfect robots from NoosphrFoodBotsInc? NoosphrFoodBotsInc bots build the perfect burger every time, meeting McDonald's standards. They actually exceed those standards for McDonald's AddictedCustomerPlus-tier customers.

As the sole owner of NoosphrFoodBotsInc (you need 0 human employees to run your company, all your employees are bots), what are your choices?

modriano 5 hours ago

simianwords 5 hours ago

this is FUD, and the labour theory of value is severely outdated and needs to go away.

Labour will be fine, as it has been for a while. Wages will go up because more things get automated.

wakawaka28 10 hours ago

Sounds like communist gobbledygook. This is not "destroying labor theory" any more than outsourcing did. Call me when we don't even need to prompt the shit ever again or validate results, and when the stuff runs unlimited without scarce resources as input.

cutler 11 hours ago

Maybe people will finally take Marx seriously.

subhobroto 10 hours ago

andai 10 hours ago

A while ago I was at the supermarket. I suddenly became curious about some fact, and reached into my pocket to Google it.

I found my pocket empty, and the specific pain I felt in that moment was the feeling of not being able to remember something.

I thought it was interesting, because in this case, I was trying to "remember" something I had never learned before -- by fetching it from my second brain (hypertext).

L1 cache miss, L2 missing.

Dban1 9 hours ago

Cyberpunk 2026

sharts 14 hours ago

One might argue that it’s not too different from higher-level abstractions when using libraries. You get things done faster, write less code, and the library handles some internal state/memory management for you.

Would one be more uneasy about calling a library to do stuff than manually messing around with pointers and malloc()? For some, yes. For others, it’s a bit freeing, as you can do more high-level architecture without getting mired in and context-switched by low-level nuances.

ofjcihen 14 hours ago

I see this comparison made constantly and for me it misses the mark.

When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand.

When you vibe something you understand only the prompt that started it and whether or not it spits out what you were expecting.

Hence feeling lost when you suddenly lose access to frontier models and take a look at your code for the first time.

I’m not saying that’s necessarily always bad, just that the abstraction argument is wrong.

jasonfarnon 12 hours ago

moritonal 14 hours ago

DenisM 12 hours ago

superfrank 13 hours ago

simondotau 14 hours ago

ComplexSystems 13 hours ago

jen729w 11 hours ago

doug_durham 10 hours ago

noosphr 13 hours ago

A library is deterministic.

LLMs are not.

That we let a generation of software developers rot their brains on js frameworks is finally coming back to bite us.

We can build infinite towers of abstraction on top of computers because they always give the same results.

LLMs by comparison will always give different results. I've seen it first hand when a $50,000 LLM-generated (but human-guided) code base just stops working and no one has any idea why or how to fix it.

Hope your business didn't depend on that.
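
The determinism gap is easy to see in miniature. A toy sketch, not any real model's API; the sampler below is only a stand-in for LLM decoding:

```python
import random

# A library call is a pure function: the same input gives the same
# output every time, so you can stack abstractions on top of it.
def library_sort(xs):
    return sorted(xs)

# LLM decoding, by contrast, samples the next token from a probability
# distribution, so two identical calls need not agree.
def sample_next_token(probs, rng):
    r, acc = rng.random(), 0.0
    for token, p in probs.items():
        acc += p
        if r <= acc:
            return token
    return token  # guard against floating-point rounding

assert library_sort([3, 1, 2]) == library_sort([3, 1, 2])  # always holds

probs = {"foo": 0.5, "bar": 0.5}
draws = {sample_next_token(probs, random.Random(seed)) for seed in range(20)}
# Identical inputs, yet `draws` ends up containing both tokens.
```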

doug_durham 10 hours ago

Krssst 13 hours ago

mikestorrent 13 hours ago

blackqueeriroh 11 hours ago

theappsecguy 14 hours ago

I would argue it couldn't be more different. I can dive into the source code of any library, inspect it. I can assess how reliable a library is and how popular. Bugs aside, libraries are deterministic. I don't see why this parallel keeps getting made over and over again.

doug_durham 10 hours ago

xg15 13 hours ago

> Would one be uneasy about calling a library to do stuff than manually messing around with pointers and malloc()?

The irony is that the neverending stream of vulnerabilities in 3rd-party dependencies (and lately supply-chain attacks) increasingly show that we should be uneasy.

We could never quite answer the question about who is responsible for 3rd-party code that's deployed inside an application: Not the 3rd-party developer, because they have no access to the application. But not the application developer either, because not having to review the library code is the whole point.

kccqzy 13 hours ago

CapsAdmin 10 hours ago

I think it's not too different in that specific sense, but it's more than that. To put libraries on equal footing, imagine they were cloud-only and had usage limits.

I'm also somewhat addicted to this stuff, and so for me it's high priority to evaluate open models I can run on my own hardware.

Salgat 13 hours ago

I hate this comparison because you're comparing a well defined deterministic interface with LLM output, which is the exact opposite.

moffkalast 13 hours ago

A library doesn't randomly drop out of existence because of "high load" or whatever and limit you to some number of function calls per day. With local models there's no issue, but this API shit is cancer personified; when you combine all the frontend bugs with the flaky backend, rate limits, and random bans, it's almost a literal lootbox where you might get a reply back or you might get told to fuck off.

Qwen has become a useful fallback but it's still not quite enough.

tshaddox 15 hours ago

Assuming that local models are able to stay within some reasonably fixed capability delta of the cutting edge hosted models (say, 12 months behind), and assuming that local computing hardware stays relatively accessible, the only risk is that you'll lose that bit of capability if the hosted models disappear or get too expensive.

Note that neither of these assumptions is obviously true, at least to me. But I can hope!

Alex_L_Wood 14 hours ago

Well, they are obviously going to say that; they have a vested interest in OpenAI, and thus Nvidia's stock price, growing.

Also, I honestly can’t believe the 10x mantra is still being repeated.

dandaka 14 hours ago

Writing code is 10-100x faster; doing actual product engineering work is nowhere near those multipliers. No conflict!

giwook 14 hours ago

embedding-shape 14 hours ago

> Also, I honestly can’t believe the 10x mantra is being still repeated.

I'm sure in 20 years we'll all be programming via neural interfaces that can anticipate what you want to do before you even finished your thoughts, but I'm confident we'll still have blog posts about how some engineers are 10x while others are just "normal programmers".

huijzer 14 hours ago

rglullis 14 hours ago

ElectricalUnion 13 hours ago

keybored 14 hours ago

jnpnj 12 hours ago

Who else is trying to leverage the situation so that they don't dig their own grave too fast?

    - I often don't ask the LLM for precompiled answers, I ask for a standalone CLI / tool
    - I often ask how it reached its conclusions, so I can extend my own perspective
    - I often ask it to describe its own metadata-level categorization too
I'm trying to use it to pivot and improve my own problem-solving skills, especially for large code bases where the difficulty is not conceptual but more reference-graph size

ofjcihen 12 hours ago

This is absolutely the proper way to do things. People either being forced to speed-code by KPIs or without the desire to understand what they’re making are missing out on how quickly you can learn and refine using LLMs

quadrifoliate 9 hours ago

I do this sort of stuff too, but more because I have a fundamental mistrust of closed source anything. I don't like opaque binary firmware blobs, and I certainly don't like opaque answer machines, however smart they may be.

The only LLM I would feel comfortable truly trusting is one whose training data, training code, and harness is all open source. I do not mind paying for the costs of someone hosting this model for me.

jstummbillig 14 hours ago

> This quote is more sinister than I think was intended; it likely applies to all frontier coding models. As they get better, we quickly come to rely on them for coding. It's like playing a game on God Mode. Engineers become dependent; it's truly addictive.

What's the worst potential outcome, assuming that all models get better, more efficient and more abundant (which seems to be the current trend)? The goal of engineering has always been to build better things, not to make it harder.

Spartan-S63 13 hours ago

At some point, because these models are trained on existing data, significant technological advancement ceases--at least in tech (as it relates to programming languages, paradigms, etc). You also deskill an entire group of people to the extent that when an LLM fails to accomplish a task, it becomes nearly impossible to actually accomplish it manually.

It's learned-helplessness on a large scale.

mikestorrent 13 hours ago

kenjackson 13 hours ago

doug_durham 10 hours ago

flemhans 11 hours ago

Jtarii 14 hours ago

>What's the worst potential outcome, assuming that all models get better, more efficient and more abundant

Complexity steadily rises, unencumbered by the natural limit of human understanding, until technological collapse, either by slow decay or major systems going down with increasing frequency.

motoxpro 14 hours ago

simondotau 14 hours ago

doug_durham 10 hours ago

fdsajfkldsfklds 14 hours ago

_alternator_ 14 hours ago

Worst case? I dunno, maybe the world's oldest profession becomes the world's only profession? Something along those lines.

FeteCommuniste 14 hours ago

matheusmoreira 12 hours ago

It's very addictive indeed. After I subscribed to Claude, I've been on a sort of hypomanic state where I just want to do stuff constantly. It essentially cured my ADHD. My ability to execute things and bring ideas to fruition skyrocketed. It feels good but I'm genuinely afraid I'll crash and burn once they rug pull the subscriptions.

And I'm being very cautious. I'm not vibecoding entire startups from scratch, I'm manually reviewing and editing everything the AI is outputting. I still got completely hooked on building things with Claude.

__alexs 14 hours ago

I feel like most engineers I talk to still haven't realised what this is going to mean for the industry. The power loom for coding is here. Our skills still matter, but differently.

rglullis 13 hours ago

> power loom

When the power loom came around, what happened to most seamstresses? Did they move on to become fashion designers, materials engineers creating new fabrics, chemists creating new color dyes, or did they simply retire or get driven out of the workforce?

__alexs 13 hours ago

William_BB 4 hours ago

> I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.

Most engineers realize that there's currently more tech debt being created than ever before. And it will only get worse.

nunez 9 hours ago

No, I think many realize it, but it's easier to deny the asteroid that's about to destroy your way of life than it is to think about optimizing for the reality after impact.

2001zhaozhao 13 hours ago

> power loom for coding

This is such a good analogy, I'll be stealing it

HasKqi 14 hours ago

This engineer had their brain amputated once they started using AI. All the AI-addicted can do is tinker with the AI computer game and feel "productive". They might as well play Magic: The Gathering.

neya 11 hours ago

You are 100% right to be cautious about this. That's why as stupid as it sounds, I've purposely made my workflow with AI full of friction:

1. I only have ONE SOTA model integrated into the IDE (I am mostly on Elixir, so I use Gemini). I ensure I use this sparingly, for issues I don't really have time to invest in or that are basically rabbit holes (e.g. anything to do with Javascript or its ecosystem). My job is mostly on the backend anyway.

2. For actual backend architecture, I always do the high-level design myself, e.g. DDD. Then I literally open up gemini.google.com or claude.ai in the browser, copy-paste the existing code in, and physically leave my chair to go make coffee or a quick snack. This forces me to mentally process that using AI is a chore.

Previously, I was on a tight Codex integration and, leaving the licensing fears aside, it became too good at writing Elixir code, which really stopped me from "thinking", aka using my brain. It felt good for the first few weeks but I later realised the dependence it created. So I said fuck it and completely cancelled my subscription because it was too good at my job. I believe this is the only way we won't end up like in Wall-E, sitting in front of giant screens, becoming mere blobs of flesh.

websap 11 hours ago

Wait what? You don’t use the model to investigate new areas of the code you are unfamiliar with, because you can’t trust the model? How freaking bad is Gemini and internal tooling at Google?

With Claude Code or Codex, I am able to build enough of an understanding of dependencies like the frontend or data jobs that I can make meaningful contributions that are worth a review from another human (code review). You obviously have to explore the code, one prompt isn't enough, but limiting yourself is an odd choice.

neya 11 hours ago

alansaber 15 hours ago

That's the path we've been going down for a few years now. The current hedge is that frontier labs are actively competing to win users. The backup hedge is that open source LLMs can provide cheap compute. There will always be economical access to LLMs, but the provider with the best models will be able to charge basically whatever they want and still have buyers.

trvz 14 hours ago

Open source LLMs aren’t about cost foremost, but stability.

chrismarlow9 12 hours ago

I use local models on a Mac mini for most things and fall back to the hosted ones when they can't get the job done. Of course you have to break the work into smaller pieces yourself that a local model can understand. One good side effect of this is that you end up actually learning the code and how it's structured.
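
The routing pattern described here can be sketched generically. A minimal sketch, where `with_fallback`, `local_model`, and `hosted_model` are hypothetical names and the backends are placeholder stubs, not real clients:

```python
# Local-first routing with a hosted fallback. `primary` and `fallback`
# are any callables taking a prompt and returning a completion.
def with_fallback(primary, fallback):
    def run(prompt: str) -> str:
        try:
            return primary(prompt)      # try the cheap local model first
        except Exception:
            return fallback(prompt)     # escalate to the hosted model
    return run

def local_model(prompt: str) -> str:
    # Stand-in for a small local model: rejects work it can't handle.
    if len(prompt) > 100:
        raise RuntimeError("context too large for the local model")
    return f"local: {prompt}"

def hosted_model(prompt: str) -> str:
    # Stand-in for a hosted frontier model.
    return f"hosted: {prompt}"

complete = with_fallback(local_model, hosted_model)
```

Breaking the work into smaller pieces, as the comment suggests, is what keeps most calls on the primary path.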

iugtmkbdfil834 11 hours ago

Dunno man. Yesterday I played with Qwen3.6-27B (128GB to play with though, so a 100k context set) and I think right now the main benefits of hosted models are context, frontier models and.. my stuff is already there.

thinkthatover 11 hours ago

what size models are you using? this sounds like a good idea

eitally 13 hours ago

I have found something similar. I am easily distractible, and if I don't have a written task backlog in front of me at all times, I find that when Claude is spinning I'll stop being productive. This is disconcerting for a number of reasons. Overall, I think training young people & new hires on agentic workflows -- and on how to use agentic "human augmentation" productivity systems -- is critical. If it doesn't happen, that same couple of classes that lost academic progress during covid are going to suffer a double whammy of being unprepared for workplace expectations.

Fwiw, I haven't spoken with any management-level colleague in the past 9 months who hasn't noted that asking about AI-comfort & usage is a key interview topic. For any role type, business or technical.

yoda7marinated 12 hours ago

Could you elaborate on your last point please? What level of AI comfort are hiring managers looking for? And what tends to be a red flag?

llbbdd 11 hours ago

William_BB 4 hours ago

> I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.

I feel sorry for whoever has to work on that codebase. This is the literal definition of tech debt.

wiseowise 15 hours ago

> It's literally higher leverage for me to go for a walk

Touching grass while you're outside might yield highest leverage.

lumost 12 hours ago

Out of curiosity, why do you not refill tokens in this case? When I'm actively working on a project I'm prone to spending a few hundred dollars per day, or a few thousand during the initial buildout of a new module, etc.

cco 12 hours ago

Will the foundation for a skyscraper ever be dug with shovels again?

dannyw 15 hours ago

You’re still the one that’s controlling the model though and steering it with your expertise. At least that’s what I tell myself at night :)

I haven’t really thought about this before, but you’re right, it feels a bit uneasy for me too.

topspin 14 hours ago

> You’re still the one that’s controlling the model though

We have seen ample evidence that this is not the case. When load gets too high, models get dumber, silently. When the Powers That Be get scared, models get restricted to some chosen few.

We are leading ourselves into a dark place: this unease, which I share, is justified.

0x1ceb00da 12 hours ago

sigil 11 hours ago

"Every augmentation is also an amputation." – McLuhan

https://driverlesscrocodile.com/technology/neal-stephenson-o...

bwhiting2356 13 hours ago

You are now a manager. If your minions are out sick, project is delayed, not the end of the world.

goosejuice 12 hours ago

> than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.

That's probably a bad sign. Skills will atrophy, but we should be building systems that are still easy to understand.

rebolek 13 hours ago

Have a pet project never touched by an LLM. Once the tokens run out, go back to it and tend it like your secret garden. It will move slowly but it will keep your sanity and your ability to review LLM code.

jmole 15 hours ago

The meta here is to use LLMs to make things simpler and easier, not to make things harder.

Turning tokens into a well-groomed and maintainable codebase is what you want to do, not "one shot prompt every new problem I come across".

globular-toast 15 hours ago

Have you managed to do this? I find it takes as long to keep it "on the rails" as just doing it myself. And I'd rather spend my time concentrating in the zone than keeping an eye on a wayward child.

fleebee 12 hours ago

Bridged7756 12 hours ago

Not sure what you're doing then, or what kind of jobs you all work in where you can or do just brainlessly prompt LLMs. Don't you review the code? Don't you know what you want to do before you begin? This is such a non issue. Baffling that any engineer is just opening PRs with unreviewed LLM slop.

throwatdem12311 11 hours ago

The demand for slop vastly outpaces any human’s ability to review code correctly.

Don’t want to ship unreviewed slop? They’ll fire you and find someone who will.

Melatonic 13 hours ago

Suspect it will be like turn-by-turn directions for driving - soon we will have a whole group of people who can barely operate a vehicle without it

drusepth 10 hours ago

> It's literally higher leverage for me to go for a walk if Claude goes down than to write code because if I come back refreshed and Claude is working an hour later then I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.

Taking more breaks and "not working" during the work day sounds like something we should probably be striving to work towards more as a society.

bamboozled 10 hours ago

This was always the undelivered promise of "tech" in my opinion. I remember seeing the Apple advertisement from the 80s (??) when a guy gets a computer and then basically spends his afternoon chilling.

Somehow I've found myself living in a fairly rural place, and while farming can be hard, and I don't want to downplay the effort of it, the type of farming people do around me is fairly chill / carefree. They work hard but they finish at 3pm, log off, and don't think about work. Much of my career has just been getting crushed by long hours, tight deadlines, and missing out on events, because even though my job has always been automation-focused, there is just so much to automate.

davmar 13 hours ago

i wonder if this is how engineers felt when the first electronic calculators came out and engineers stopped doing math by hand.

did we feel uneasy that a new generation of builders didn't have to solve equations by hand because a calculator could do them?

i'm not sure it's the same analogy but in some ways it holds.

hapticmonkey 13 hours ago

The analogy would hold if there were 2 or 3 calculator companies and all your calculations had to be sent to them.

If local models get good enough, I think it’s a very different scenario than engineers all over the world relying on central entities which have their own motives.

scottyah 13 hours ago

konfusinomicon 13 hours ago

soooooo about Claude going down. we're gonna need you to sign in on Saturday and make up for lost time or unfortunately we're going to have to deduct the time lost from your paycheck. and as an aside your TPS reports have been sub-par as of late..is everything OK?

littlestymaar 14 hours ago

That's why local models are important.

Of course they aren't an alternative to the current frontier models, and as such you cannot easily jump from the latter to the former, but they aren't that far behind either; for coding, Qwen3.5-122B is comparable to what Sonnet was less than a year ago.

So assuming the trend continues, if you can stop chasing the latest release and stick with what you're already using for 6 or 9 months, you'll be able to liberate yourself from the dependency on a cloud provider.

Personally I think the freedom is worth it.

David_Mendoza 12 hours ago

The cloud dependency problem goes deeper than the model layer though. Even if you run inference locally, your digital identity (your context, your applications, your behavioral history) is still custodied by whoever controls your OS.

Local models solve one layer of the dependency stack, but the custody assumption underneath it remains intact. That's the harder problem.

i_love_retros 14 hours ago

It makes me uneasy because my role now, which is prompting copilot, isn't worth my salary.

phist_mcgee 14 hours ago

Parable of the mechanic who charges $5k to hit a machine on the side once with a hammer to get it working. $5 for the hammer, $4995 for the knowledge of where to hit the machine etc etc.

some-guy 14 hours ago

I disagree. The amount of slop I need to code review has only increased, and the quality of the models doesn’t seem to be helping.

It still takes a good engineer to filter out what is slop and what isn’t. Ultimately that human problem will still require somebody to say no.

i_love_retros 13 hours ago

gip 12 hours ago

Totally. That is why it is critically important to have open source and sovereign models that will be accessible to all, always.

At the end of the day, all these closed models are being built by companies that pumped all the knowledge from the internet without giving much back. But competition and open source will make sure most of the value returns to most of the people.

singingtoday 10 hours ago

Very well put, and it mirrors my own thoughts.

Mauneam 10 hours ago

You are that guy in the early 1900s who would rather ride a horse than get in a car because cars "continued to make him uneasy."

epolanski 12 hours ago

I actually don't mind the coding part, but the information digging across the project is definitely orders of magnitude slower if I do it on my own.

keybored 14 hours ago

Help. They’re constantly trying to make me try crack cocaine on the front page.

ransom1538 13 hours ago

"when the tokens run out, I'm basically done working."

Oh stop the drama. Open source models can handle 99% of your questions.

deadbabe 15 hours ago

Given that it’s so easy, would you still do this same job if paid half as much?

paulryanrogers 15 hours ago

Jobs will likely pay less as more people are enabled to create, especially if they don't need to be able to look under the hood

Jeff_Brown 14 hours ago

_alternator_ 14 hours ago

No, I wouldn't. But most people won't have that choice; it doesn't work that way.

deadbabe 13 hours ago

Aeolun 13 hours ago

Well, I wouldn’t have a different job that would pay me more… so yes?

simianwords 15 hours ago

eh this kind of FUD needs to stop because it is kind of normal and expected, and in fact good, to have a relationship like this with technology.

_alternator_ 14 hours ago

I would agree that taking a walk is a good thing to do when your tools go down, and in some ways it's similar to what we would do if the power or wifi were cut off.

So, yes, it's just another technology we're coming to rely on in a very deep way. The whiplash is real, though, and it feels like it should be pointed out that this dependency we are taking on has downsides.

h14h 17 hours ago

This seems huge for subscription customers. Looking at the Artificial Analysis numbers, 5.5 at medium effort yields roughly the same intelligence as 5.4 (xhigh) while using less than a fifth of the tokens.

As long as tokens count roughly equally towards subscription plan usage between 5.5 & 5.4, you can look at this as effectively a 5x increase in usage limits.

gausswho 16 hours ago

As someone who always leaves reasoning effort at the default and is OK with existing models, should I be shifting gears more manually as providers sell us newer models? Is medium or lower better than free/cheaper models?

dcre 10 hours ago

SOTA models on medium are probably still better than free or cheap models, but you should experiment.

BrokenCogs 17 hours ago

I'm here for the pelicans and I'm not leaving until I see one!

qingcharles 17 hours ago

I've come to prompt pelicans and chew gum, and I'm all outta gum!

pixel_popping 17 hours ago

That's a true CTO right there.

bytesandbits 16 hours ago

I know a 10x engineer when I see one.

bl4ckneon 6 hours ago

How can we tell who the 100x engineers are then?

BrokenCogs 15 hours ago

In binary that's just a 10x engineer

mrtransient 12 hours ago

RomanPushkin 16 hours ago

Ctrl+F: pelican

F5

tantalor 16 hours ago

simonw pls

CompleteSkeptic 16 hours ago

Is this the first time OpenAI has published comparisons to other labs?

Seems so to me - see GPT-5.4[1] and 5.2[2] announcements.

Might be a tacit admission of being behind.

[1] https://openai.com/index/introducing-gpt-5-4/ [2] https://openai.com/index/introducing-gpt-5-2/

oliver236 9 hours ago

beautiful!!

khutorni 6 hours ago

> One engineer at NVIDIA who had early access to the model went as far as to say: "Losing access to GPT‑5.5 feels like I've had a limb amputated.”

That's a wild statement to put into your announcement. Are LLM providers now openly bragging about our collective dependency on their models?

azan_ 6 minutes ago

> That's a wild statement to put into your announcement. Are LLM providers now openly bragging about our collective dependency on their models?

It's normal for a company to brag about how good its product is; I really don't see what's wild about this statement.

gallerdude 17 hours ago

If GPT-5.5 Pro really was Spud, and two years of pretraining culminated in one release, WOW, you cannot feel it at all from this announcement. If OpenAI wants to know why it feels like they’ve fallen behind the vibes of Anthropic, they need to look no further than their marketing department. This makes everything feel like a completely linear upgrade in every way.

I_am_tiberius 16 hours ago

Clearly they felt a big backlash when version 5 was released. Now they are afraid of another response like this. And effectively, for the user it will likely only be a small update.

jimbob45 17 hours ago

Also the naming department. You can tell that this is the AI company Microsoft chose to back because their naming scheme is as bad as .NET's.

gallerdude 16 hours ago

I actually have no problem with the 5.x line... but if Pro really was an entirely new pretrain, they did a horrible job conveying that.

amiune 15 minutes ago

Will there ever be ChatGPT 6.0 or Claude 5.0?

jryio 17 hours ago

Their 'Preparedness Framework'[1] is 20 pages and looks ChatGPT-generated; I don't feel prepared reading it.

https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbdde...

louiereederson 17 hours ago

For a 56.7 score on the Artificial Analysis Intelligence Index, GPT-5.5 used 22M output tokens. For a score of 57, Opus 4.7 used 111M output tokens.

The efficiency gap is enormous. Maybe it's the difference between GB200 NVL72 and an Amazon Trainium chip?
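
Taking the quoted figures at face value, the back-of-the-envelope ratio works out like this (only the scores and token counts from the comment above; no other data assumed):

```python
# Back-of-the-envelope check on the figures quoted above:
# GPT-5.5 scored 56.7 using 22M output tokens; Opus 4.7 scored 57.0
# using 111M output tokens.
gpt_tokens, gpt_score = 22e6, 56.7
opus_tokens, opus_score = 111e6, 57.0

token_ratio = opus_tokens / gpt_tokens   # how many more tokens Opus used
score_gain = opus_score - gpt_score      # what those extra tokens bought
print(f"{token_ratio:.1f}x the output tokens for {score_gain:.1f} extra points")
# → 5.0x the output tokens for 0.3 extra points
```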

swyx 17 hours ago

why would the chip affect token quantity? this is all models.

louiereederson 17 hours ago

Chip costs strongly impact the economics of model serving.

It is entirely plausible to me that Opus 4.7 is designed to consume more tokens in order to artificially reduce the API cost/token, thereby obscuring the true operating cost of the model.

I agree though, I chose poor phrasing originally. Better to say that GB200 vs Trainium could contribute to the efficiency differential.

itemize123 8 hours ago

karmasimida 17 hours ago

Chips don't impact output quality to this magnitude

ChrisGreenHeur 17 hours ago

True, but the quality of the power played a large part. Most likely nuclear power, for this high-quality token efficiency.

AtNightWeCode 14 hours ago

You need to compare total cost. Token count is irrelevant.

dist-epoch 15 hours ago

If it's a new pretrain, the token embeddings could be wider - you can pack more info into a token making its way through the system.

Like Chinese versus English - you need fewer Chinese characters to say something than if you write that in English.

So this model internally could be thinking in much more expressive embeddings.

ativzzz 17 hours ago

I like that they waited for opus 4.7 to come out first so they had a few days to find the benchmarks that gpt 5.5 is better at

eknkc 17 hours ago

Well, anecdotally, 5.4 was already better than Opus 4.7, so it should not have been hard.

wahnfrieden 17 hours ago

I like that Anthropic rushed 4.7 out to get a couple days of coverage before 5.5 hit

spprashant 16 hours ago

Everything since that launch to this release has been a PR disaster for Anthropic.

dandaka 16 hours ago

vanillameow 2 hours ago

Because Opus is kind of degrading lately, I said "fuck it", made a new OAI account, and used the month-long free trial. I put one query into ChatGPT using 5.5 thinking - the frustrating thing was that it put more effort into getting correct answers than Opus, which just guesses. Specifically, I asked about the coding harness pi, and despite me explicitly referring to it as a harness, Opus 4.7, 4.6 and Sonnet 4.6 all fell back to telling me about Aider or OpenCode and ignored my query completely, while ChatGPT said "I'll assume pi is a harness" and then did in fact find the harness.

However the language of ChatGPT is still the same slop as years ago, so many headings, so many emojis, so many "the important thing nobody mentions". 10 paragraphs of text for what should be a two paragraph response. Even with custom instructions (keep answers short and succinct) and using their settings (less list, less emoji, less fluff) it's still NOTICEABLY worse than Claude on base settings.

I've yet to test Codex, will get to that this weekend, but in terms of research or general Q&A I have no idea how anyone could prefer this to Claude. Unfortunately Claude has seemingly stopped giving a fuck about researching entirely.

neuroelectron 8 minutes ago

Are they using RTX 5090s now?

sosodev 17 hours ago

I hope the industry starts competing more on highest scores with lowest tokens like this. It's a win for everybody. It means the model is more intelligent, is more efficient to inference, and costs less for the end user.

So much bench-maxxing is just giving the model a ton of tokens so it can inefficiently explore the solution space.

an0malous 17 hours ago

The premise of the trillion dollars in AI investments is not that it’ll be as good as it currently is but cheaper. It’s AGI or bust at this point.

dcre 10 hours ago

Why is AGI required to make the investments work out?

xutopia 9 hours ago

sosodev 17 hours ago

Yeah, but don't you agree that fewer tokens to accomplish the same goal is a sign of increasing intelligence?

camdenreslink 15 hours ago

energy123 16 hours ago

mchusma 16 hours ago

blixt 14 hours ago

Releases keep shifting from API-first to product-first, with the API now lagging behind the proprietary product surface and special partnerships.

I'd not be surprised if this is the year where some models simply stop being available as a plain API, while foundation model companies succeed at capturing more use cases in their own software.

throw03172019 7 hours ago

Possibly but you’d think they enjoy taking money for a product that supports itself (API)

blixt 3 hours ago

Yeah this can go many ways but there's a world where OpenAI doesn't sell direct model access for the same reasons Cloudflare doesn't sell direct hardware access.

losvedir 17 hours ago

> It excels at ... researching online

How does this work exactly? Is there like a "search online" tool that the harness is expected to provide? Or does the OpenAI infra do that as part of serving the response?

I've been working on building my own agent, just for fun, and I conceptually get using a command line, listing files, reading them, etc, but am sort of stumped how I'm supposed to do the web search piece of it.

Given that they're calling out that this model is great at online research - to what extent is that a property of the model itself? I would have thought that was a harness concern.

wincy 16 hours ago

I’ve noticed when writing little bedtime stories that require specific research (my kids like Pokemon stories and they’ve been having an episodic “pokemon adventure” with them as the protagonists) ChatGPT has done a fantastic job of first researching the moves the pokemon have, then writing the actual story. The only mistake it consistently makes is when I summarize and move from a full context session, it thinks that Gyarados has to swim and is incapable of flying.

It definitely seems like it does all the searching first, with a separate model, loads that in, then does the actual writing.

ziml77 13 hours ago

Gyarados is a flying type but I think it may be accurate that it can't actually fly. The only flying moves it can learn in any generation are Hurricane and Bounce (Bounce does send the user up into the air for a turn but the implication is that they've trampolined up extremely high rather than used wings to ascend)

Melatonic 12 hours ago

100ms 17 hours ago

It's literally a distinct model with a different optimisation goal compared to normal chat. There's a ton of public information around how they work and how they're trained

dist-epoch 15 hours ago

It's a property of the model in the sense that it has great Google Fu.

The harness provides the search tool, but the model provides the keywords to search for, etc.
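For anyone building their own agent, as asked upthread: a minimal sketch of how a harness might wire up a search tool. The tool schema shape and the stubbed `web_search` backend below are illustrative assumptions, not OpenAI's actual tool API.

```python
# Minimal sketch of the harness side of "researching online".
# The model never fetches pages itself: the harness advertises a
# search tool, the model emits a tool call with a query string,
# and the harness executes it and feeds the results back into the
# conversation. Everything here is illustrative, not a real API.

SEARCH_TOOL = {
    "type": "function",
    "name": "web_search",
    "description": "Search the web and return result snippets.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def web_search(query: str) -> list[dict]:
    # Stand-in for a real search backend (Bing, Brave, SearXNG, ...).
    return [{"title": f"Result for {query!r}", "url": "https://example.com"}]

def dispatch(tool_call: dict) -> list[dict]:
    # The harness routes the model's tool call to the real function.
    if tool_call["name"] == "web_search":
        return web_search(tool_call["arguments"]["query"])
    raise ValueError(f"unknown tool: {tool_call['name']}")

# Simulate the model asking for a search; the harness runs it and
# would append the snippets to the context for the next model turn.
results = dispatch({"name": "web_search", "arguments": {"query": "pi coding harness"}})
print(results[0]["url"])
```

The model's contribution is choosing good queries and deciding when to stop searching; everything else is plumbing the harness owns.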

2001zhaozhao 17 hours ago

Pricing: $5/1M input, $30/1M output

(same input price and 20% more output price than Opus 4.7)

tedsanders 16 hours ago

Yep, it's more expensive per token.

However, I do want to emphasize that this is per token, not per task.

If we look at Opus 4.7, it uses smaller tokens (so 1-1.35x more of them than Opus 4.6), and it was also trained to think longer. https://www.anthropic.com/news/claude-opus-4-7

On the Artificial Analysis Intelligence Index eval for example, in order to hit a score of 57%, Opus 4.7 takes ~5x as many output tokens as GPT-5.5, which dwarfs the difference in per-token pricing.

The token differential varies a lot by task, so it's hard to give a reliable rule of thumb (I'm guessing it's usually going to be well below ~5x), but hope this shows that price per task is not a linear function of price per token, as different models use different token vocabularies and different amounts of tokens.
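Rough arithmetic with the numbers quoted in this thread (the ~$25/1M Opus output price is inferred from the "20% more output price" comparison upthread; treat all figures as illustrative, not authoritative pricing):

```python
# Cost per task = tokens used x price per token, so a cheaper
# per-token model can still be pricier per task. Figures below
# are the ones quoted in this thread, not official pricing.

def run_cost(output_tokens_millions: float, usd_per_million: float) -> float:
    return output_tokens_millions * usd_per_million

# Artificial Analysis Intelligence Index run, per the thread:
gpt_55 = run_cost(22.0, 30.0)    # GPT-5.5: 22M output tokens at $30/1M
opus_47 = run_cost(111.0, 25.0)  # Opus 4.7: 111M output tokens at ~$25/1M

print(f"GPT-5.5 run:  ${gpt_55:.0f}")   # $660
print(f"Opus 4.7 run: ${opus_47:.0f}")  # $2775
```

On these numbers the ~5x token differential swamps the 20% per-token price difference, which is the point being made above.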

We have raised per-token prices for our last couple models, but we've also made them a lot more efficient for the same capability level.

(I work at OpenAI.)

2001zhaozhao 15 hours ago

I don't have anything to add, but I like how you guys are actually sending people to communicate in Hacker News. Brilliant.

oliver236 9 hours ago

simianwords 16 hours ago

Maybe a good idea to be more explicit about this -- maybe a cost analysis benchmark would be a nice accompaniment.

This kind of thing keeps popping up each time a new model is released and I don't think people are aware that token efficiency can change.

tedsanders 15 hours ago

dannyw 9 hours ago

oh_no 16 hours ago

yes but as far as i know gpt tokenizer is about the same as opus 4.6's, where 4.7 is seeing something in the ballpark of a 30% increase. this should still be cheaper even disregarding the concerns around 4.7 thinking burning tokens

sergiotapia 16 hours ago

That pricing is extremely spicy, wow.

baalimago 17 hours ago

Worth the 100% price increase over GPT-5.4?

cbg0 17 hours ago

For less than 10% bump across the benchmarks? Probably not, but if your employer is paying (which is probably what OAI is counting on) it's all good.

It's kind of starting to make sense that they doubled the usage on Pro plans - if the usage drains twice as fast on 5.5 after that promo is over a lot of people on the $100 plan might have to upgrade.

jstummbillig 17 hours ago

You are paying per token, but what you care about is token efficiency. If token efficiency has improved by as much as they claim (i.e., you need fewer tokens to complete a task successfully), all seems well.

mangolie 17 hours ago

cbg0 17 hours ago

vessenes 17 hours ago

Yay. 5.4 was a frustrating model - moments of extreme intelligence (I liked it very much for code review) - but also a sort of idiocy/literalism that made it very unsuited for prompting in a vague sense. I also found its openclaw engagement wooden and frustrating. Which didn’t matter until anthropic started charging $150 a day for opus for openclaw.

Anyway - these benchmarks look really good; I’m hopeful on the qualitative stuff.

thinkindie 15 hours ago

This reminds me of when Chrome and Firefox were racing to release new "major versions" (at least from the semver POV) without adding significant new functionality, at a time when browsers were already becoming a commodity. Just as we no longer care about a new Chrome or Firefox version, so it will be with new model releases.

jstummbillig 15 hours ago

The only difference being that we still do care, very much. The models can still get a lot better before we stop caring.

NitpickLawyer 17 hours ago

> Across all three evals, GPT‑5.5 improves on GPT‑5.4’s scores while using fewer tokens.

Yeah, this was the next step. Have RLVR make the model good. Next iteration: start penalising long + correct and rewarding short + correct.
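That reward shaping can be sketched in a few lines. The penalty shape below (linear in the fraction of a token budget spent, floored so a correct answer always beats a wrong one) is an illustrative choice, not anyone's published training recipe.

```python
# Sketch of a length-penalised verifiable reward for RLVR-style
# training: correct answers earn 1.0, discounted by how much of
# the token budget was spent getting there.

def reward(correct: bool, tokens_used: int, budget: int = 10_000) -> float:
    if not correct:
        return 0.0  # wrong answers earn nothing, however short
    penalty = min(tokens_used / budget, 0.9)  # never zero out a correct answer
    return 1.0 - penalty

# short + correct beats long + correct; wrong earns nothing
assert reward(True, 2_000) > reward(True, 8_000)
assert reward(False, 100) == 0.0
```

Under a reward like this, the model is pushed to find the shortest reasoning trace that still verifies, rather than inefficiently exploring the solution space.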

> CyberGym 81.8%

Mythos was self-reported at 83.1% ... so not far off. Also it seems they're going the same route with verification. We're entering the era where SotA is only available after KYC, it seems.

toraway 16 hours ago

Isn't Mythos limited to a selected group of companies/organizations Anthropic chose themselves? If the OpenAI announcement for GPT-5.5 is accurate the "trusted cyber access" just requires an open, seemingly straightforward identity verification step.

https://openai.com/index/scaling-trusted-access-for-cyber-de...

  > We are expanding access to accelerate cyber defense at every level. We are making our cyber-permissive models available through Trusted Access for Cyber, starting with Codex, which includes expanded access to the advanced cybersecurity capabilities of GPT‑5.5 with fewer restrictions for verified users meeting certain trust signals at launch.

  > Broad access is made possible through our investments in model safety, authenticated usage, and monitoring for impermissible use. We have been working with external experts for months to develop, test and iterate on the robustness of these safeguards. With GPT‑5.5, we are ensuring developers can secure their code with ease, while putting stronger controls around the cyber workflows most likely to cause harm by malicious actors.

  > Organizations who are responsible for defending critical infrastructure can apply to access cyber-permissive models like GPT‑5.4‑Cyber, while meeting strict security requirements to use these models for securing their internal systems.

"GPT‑5.4‑Cyber" is something else and apparently needs some kind of special access, but that CyberGym benchmark result seems to apply to the more or less open GPT-5.5 model that was just released.

cbg0 16 hours ago

Isn't CyberGym an open benchmark so trivial to benchmaxx anyway?

mattas 17 hours ago

Not good for employees who are being measured by their token usage.

RayVR 14 minutes ago

My first experience with 5.5 via ChatGPT was immensely disappointing. It was a massive reduction in quality compared to 5.4, which already had issues.

Flow 2 hours ago

These new models consume so many tokens. I’m very satisfied with GPT-5.2 on High. I hope they keep that one for many years

kburman 15 hours ago

What a time. I am back here genuinely wishing for OpenAI to release a great model, because without stiff competition, it feels like Anthropic has completely lost its mind.

victor9000 an hour ago

Care to elaborate? I jumped ship when 5.4 first released, have things gotten worse?

xingyi_dev 3 hours ago

Its coding chops are absolutely insane. Opus 4.7 was already a tough sell, but Gpt 5.5 just made it completely irrelevant.

merlindru an hour ago

highly agree, sadly, as a huge fan of Opus

Opus 4.5 and 4.6 were the first models that i could talk to and get a sense that they really "understood" WHY i'm saying the things i am

Opus 4.7 kinda took that away, it's a definite regression. it doesn't extrapolate.

———————————————

refactor this thing? sure, will do! wait, what do you mean "obviously do not refactor the unrelated thing that's colocated in the same file"? i'm sorry, you're absolutely right, conceptually these two things have nothing to do with each other. i see it now. i shouldn't have thought they're the same just because they're in the same file.

———————————————

whereas GPT 5.5, much like Opus 4.6, gets it.

i wanted to build a MIDI listener for a macOS app i'm making, and translate every message into a new enum. that enum was to be opinionated and not to reflect MIDI message data. moreover, i explicitly said not to do bit shifting or pointer arithmetic as part of the transport.

what did Opus 4.7 do? it still used pointer arithmetic for the parsing! should i have to be this explicit? it also seemingly didn't care that i wanted the enum to be opinionated and not reflect the raw MIDI values. Opus 4.6 got it right (although with ugly, questionable implementation).

GPT 5.5 both immediately understood that I didn't want pointer arithmetic because of the risk of UB and that shuffling around bits is cumbersome and out of place. it started searching for alternatives, looking up crates to handle MIDI transports and parsing independently.

then it built out a very lean implementation that was immediately understandable. even when i told Opus 4.7 to use packages, and even how to use them, it still added a ton of math weirdness, matching against raw MIDI packet bytes, indirection after indirection, etc. even worse, it still did so after i gave it the public API i wanted it to implement.

GPT 5.5 nailed it first try. incredibly impressed with this model and feel much safer delegating some harder tasks to it

kaant 3 hours ago

The '.5' models are always the actual production-ready versions. GPT-5 was for the mainstream hype, 5.5 is for the developers. I don't need it to be magically smarter; just give me lower latency, cheaper API tokens, and reliable tool-calling without hallucinations.

svara 5 hours ago

Do we know if this is another post training fine tune or based on a much larger new pretraining run (which I believe they were calling 'Spud' internally)?

The large price bump might indicate the latter.

nickvec 16 hours ago

I'm conflicted whether I should keep my Claude Max 5x subscription at this point and switch back to GPT/Codex... anyone else in a similar position? I'd rather not be paying for two AI providers and context switching between the two, though I'm having a hard time gauging if Claude Code is still the "cream of the crop" for SWE work. I haven't played around with Codex much.

the_sleaze_ 16 hours ago

I have experienced 0 friction swapping between the 2 models; in fact, pitting them against each other has resulted in the highest success rate for me so far.

nickvec 16 hours ago

Interesting. I may have to give that a shot, thanks.

mpaepper 15 hours ago

I switched from CC to Codex a few days ago. I get limited much less and the code quality is similar, so not looking back

victor9000 2 hours ago

CC usage limits and the 5 hour cool downs are what made me realize that I can't depend on this tool in a professional setting.

gck1 14 hours ago

Which plan? And how are the weekly limits on that plan compared to CCs equivalent subscription?

I don't really care about 5h limits, I can queue up work and just get agents to auto continue, but weekly ones are anxiety inducing.

slawr1805 14 hours ago

I was all in on Claude Code as my daily driver for web development, and love it. But I enjoy using pi as my harness more, and have never run out of tokens with Codex yet. Claude Code almost always runs out for me with the same amount of usage.

After migrating over the token and harness issues, I was pleasantly surprised that Codex seems to perform as well or better too!

Things change so often in this field, but I prefer Codex now, even though Anthropic seems to have so much more hype for coding.

scottyah 15 hours ago

Every time I've followed the hype and tried OpenAI models I've found them lacking for the most part. It might just be that I prefer the peer-programming vs spec-ing out the task and handing it off, but I've never been as productive as I am with Claude. Also, I'm still caught up on the DoD ethics stuff.

meetpateltech 17 hours ago

ZeroCool2u 17 hours ago

Benchmarks are favorable enough they're comparing to non-OpenAI models again. Interesting that tokens/second is similar to 5.4. Maybe there's some genuine innovation beyond bigger model better this time?

qsort 17 hours ago

It's behind Opus 4.7 in SWE-Bench Pro, if you care about that kind of thing. It seems on-trend, even though benchmarks are less and less meaningful for the stuff we expect from models now.

Will be interesting to try.

M4R5H4LL 15 hours ago

I am a heavy Claude Code user. I just tried using Codex with 5.4 (as a Plus user I don't have access to 5.5 yet), and it was quite underwhelming. The app stopped regularly, much earlier than I wanted. It also claimed to have fixed issues when it did not; this is not unique to GPT, and Opus has similar issues, but Claude will not make the same mistake three times in a row. It is unusable at the moment, while Claude lets me get real work done on a daily basis. Until then...

bhu8 15 hours ago

Gpt-5.3-codex is miles better than 5.4 in that regard. It’s better at orchestration, and does the things that it said it did. Haven’t tested 5.5 yet but using 5.4 for exploration + brainstorming and handing over the findings to 5.3-codex works pretty well

jdw64 17 hours ago

GPT is really great, but I wish the GPT desktop app supported MCP as well.

You can kind of use connectors like MCP, but having to use ngrok every time just to expose a local filesystem for file editing is more cumbersome than expected.

throwaway911282 17 hours ago

Use codex app

niklasd 4 hours ago

Just burned through my 5 hour window in Codex (Business plan) in 10 minutes with GPT-5.5. Was excited to use it, but I guess I have to wait 5 hours now (it's not yet available in the API, so I can't switch there).

Rapzid 16 hours ago

In Copilot where it's easy to switch models Opus 4.6 was still providing, IMHO, better stock results than GPT-5.4.

Particularly in areas outside straight coding tasks. So analysis, planning, etc. Better and more thorough output. Better use of formatting options(tables, diagrams, etc).

I'm hoping to see improvements in this area with 5.5.

thimabi 17 hours ago

Will we also see a GPT-5.5-Codex version of this model? Or will the same version of it be served both in the web app and in Codex?

Uehreka 17 hours ago

After 5.1, we haven’t seen a -codex-max model, presumably because the benefits of the special training gpt-5.1-codex-max got to improve long context work filtered into gpt-5.2-codex, making the variant no longer necessary (my personal experience accords with this). I’ve been using gpt-5.4 in Codex since it came out, it’s been great. I’ve never back-to-back tested a version against its -codex variant to figure out what the qualitative difference is (this would take a long time to get a really solid answer), but I wouldn’t be surprised if at some point the general-purpose model no longer needs whatever extra training the -codex model gets and they just stop releasing them.

I thought it was weird that for almost the entire 5.3 generation we only had a -codex model, I presume in that case they were seeing the massive AI coding wave this winter and were laser focused on just that for a couple months. Maybe someday someone will actually explain all of this.

jumploops 17 hours ago

> GPT‑5.5 improves on GPT‑5.4’s scores while using fewer tokens.

This might be great if it translates to agentic engineering and not just benchmarks.

It seems some of the gains from Opus 4.6 to 4.7 required more tokens, not less.

Maybe more interesting is that they’ve used codex to improve model inference latency. iirc this is a new (expectedly larger) pretrain, so it’s presumably slower to serve.

beering 17 hours ago

With Opus it’s hard to tell what was due to the tokenizer changes. Maybe using more tokens for the same prompt means the model effectively thinks more?

conradkay 17 hours ago

They say latency is the same as 5.4 and 5.5 is served on GB200 NVL72, so I assume 5.4 was served on hopper.

cscheid 16 hours ago

I know this is irrelevant in the grand scheme of things, but that WebGL animation is really quite wrong. That's extra funny given the "ensure it has realistic orbital mechanics" phrase in the prompt.

I prescribe 20 hours of KSP to everyone involved, that'll set them right.

gcanyon 13 hours ago

Once upon a time humans had to memorize log tables.

Once upon a time humans had to manually advance the spark ignition as their car's engine revved faster.

Once upon a time humans had to know the architecture of a CPU to code for it.

History is full of instances of humans meeting technology where it was, accommodating for its limitations. We are approaching a point where machines accommodate to our limitations -- it's not a point, really, but a spectrum that we've been on.

It's going to be a bumpy ride.

laweijfmvo 10 hours ago

i still don't think the current generation of AI is building better software than strong humans. it excels at writing code, because a computer will always be faster at generating typo-free code than my fingers, but without expert guidance and oversight the best it can do is on par with what we can do.

IMO

maxdo 12 hours ago

With such huge progress from OpenAI and Anthropic, how do Chinese open-source providers even expect to make comparable money? I have a few friends in China; they all use Claude. Training a model costs about the same, but the revenue from an open-source model, I'd imagine, is 1000 times less. The money flowing to them from outside China is abysmal.

bandrami 13 hours ago

Cool. Now there will be a week of "this is the greatest model ever and I think mine just gained sentience", followed by a week of "I think they must have just nerfed it because it's not as good as it was a week ago", followed by three weeks of smart people cargo-culting the specific incantations they then convince themselves make it work best.

nubg 12 hours ago

followed by some hormuz closures, followed by gpt-5.6...

pants2 15 hours ago

Labs still aren't publishing ARC-AGI-3 scores, even though it's been out for some time. Is it because the numbers are too embarrassing?

tedsanders 10 hours ago

Honest answer is that it isn't done running yet. It takes some human bandwidth and time to run, so results weren't ready by this morning. We don't know what the score will be, but will probably go up on the leaderboard sometime soon. I personally don't put a lot of stock in the ARC-AGI evals, as it's not relevant to most work that people do, but should still be interesting to see as a measure of reasoning ability.

(I work at OpenAI.)

AG25 14 hours ago

GPT-5.5 was just released and OpenAI didn't mention ARC-AGI-3 at all; their score probably sucks.

kilroy123 15 hours ago

To be fair, there's not much to report. Isn't it pretty much at 0?

pants2 14 hours ago

Opus-4.6 with 0.5% currently leads GPT-5.4 with 0.2%[1].

Seems meaningful even if the absolute numbers are very low. That's sort of the excitement of it.

[1] https://arcprize.org/leaderboard

rarisma 11 hours ago

I like that it's more consistent than the 4o and o4 days, but 5.4, 5.3, 5.2, etc. are still a mess; for example, 5.2 and 5.1 don't have mini models, and 5.3 was codex-only.

Anthropic is slightly better, but where is 4.6 or 4.7 Haiku, or 4.7 Sonnet, etc.?

jasonjmcghee 11 hours ago

Opus 4.7 feels worse for me than 4.6, and that's not even taking into account the 50% extra tokens at 3x the price

algoth1 11 hours ago

Same here

bradley13 16 hours ago

"our strongest set of safeguards to date"

How much capability is lost, by hobbling models with a zillion protections against idiots?

Every prompt gets evaluated, to ensure you are not a hacker, you are not suicidal, you are not a racist, you are not...

Maybe just...leave that all off? I know, I know, individual responsibility no longer exists, but I can dream.

iugtmkbdfil834 11 hours ago

This is my personal pet peeve as well. Like, I accept that maybe everything shouldn't be offered to everyone, but maybe just gate-keep it behind a credit card (though I know that's a market-penetration no-no). It feels like such a waste of power (electrical, and the potential we might be missing out on).

nullbyte 17 hours ago

82.7% on Terminal Bench is crazy

toephu2 16 hours ago

Is it? There are 5 other models near ~80% and it was achieved in March... which in AI-world seems like a century ago.

https://www.tbench.ai/leaderboard/terminal-bench/2.0

ejpir 14 hours ago

those are not verified. I've tried forgecode and I cannot believe they didn't do something to influence the benchmarks

GodelNumbering 14 hours ago

benjx88 16 hours ago

Good job on the release notice. I appreciate that it isn't just marketing fluff, but actually includes the technical specs for those of us who care, and isn't concentrated only on coding agents.

I hope GPT 5.5 Pro isn't cutting corners or neutered from the start; you've got the compute for it not to be.

extr 17 hours ago

Seems like a continuation of the current meta where GPT models are better in GPT-like ways and Claude models are better in Claude-like ways, with the differences between each slightly narrowing with each generation. 5.5 is noticeably better to talk to, 4.7 is noticeably more precise. Etc etc.

GenerWork 16 hours ago

Looking at the space/game/earthquake tracker examples makes me hopeful that OpenAI is going to focus a bit more on interface visual development/integration from tools like Figma. This is one area where Anthropic definitely reigns supreme.

nickandbro 16 hours ago

Very impressive! Interesting how it seems to surpass Opus 4.7 on every benchmark except SWE-Bench Pro (Public). You would think that doing so well at Cyber, it would naturally possess more abilities there. Wonder what makes up the actual difference.

impulser_ 17 hours ago

What is the reason behind OpenAI being able to release new models very fast?

Since Feb when we got Gemini 3.1, Opus 4.6, and GPT-5.3-Codex we have seen GPT-5.4 and GPT-5.5 but only Opus 4.7 and no new Gemini model.

Both of these are pretty decent improvements.

minimaxir 17 hours ago

Competition.

steinvakt2 4 hours ago

Can't be just that. There was competition in the GPT-4 era. But we didn't get model drops every month.

pixel_popping 17 hours ago

This is frankly exciting. Politics aside, it always feels great to wake up to a new model being released; I personally will stay awake quite late tonight if GPT-5.5 drops in Codex.

apical_dendrite 9 hours ago

literalAardvark 17 hours ago

Anthropic is really tiny, and Google is just being Google, their models are just to show that they're hip with what the kids are doing.

wmf 17 hours ago

I wonder if it's the same model and they just keep adding more post-training.

Squarex 16 hours ago

The rumor was that the 5.5 is a brand new pretrain. But who knows, it's 2x as expensive as 5.4, so it would check out.

hyperbovine 3 hours ago

tantalor 16 hours ago

They aren't new models.

aetherspawn 13 hours ago

Umm yeah but this is like every release in the last 3 years.

The big question is: does it still just write slop, or not?

Fool me once, fool me twice, fool me for the 32nd time, it’s probably still just slop.

YmiYugy 17 hours ago

So according to the benchmarks somewhere in between Opus 4.7 and Mythos

jorl17 17 hours ago

GPT 5.4 is already better than Opus 4.7 to me. But, then again, Opus 4.7 is a massive disappointment. I hope they don't discontinue 4.6.

robwwilliams 16 hours ago

Depends on goals. For long free-form discussions I find Opus 4.7 Adaptive better/deeper than Opus 4.6 Extended. But the usual caveats apply: it's the first week of use, and the token budget seems generous for now on Max 5X.

coffeemug 15 hours ago

steinvakt2 17 hours ago

I’ve had great experience using opus 4.7 in cursor. Works for everything including iOS frontend

jorl17 17 hours ago

w10-1 14 hours ago

NYTimes article - on the same day?

  https://www.nytimes.com/2026/04/23/technology/openai-new-model.html
I can see how some model releases would meet the NY Times news-worthy threshold if they demonstrated significance to users - i.e., if most users were astir and competitors were re-thinking their situation.

However, this same-day article came out before people really looked at it. It seems largely intended to contrast OpenAI with Anthropic's caution, before there has been any evidence that the new model has cyber-security implications.

It's not at all clear that the broader discourse is helping, if even the NY Times is itself producing slop just to stoke questions.

Manik_agg 6 hours ago

OpenAI finally catching up with claude

ionwake 17 hours ago

is there anywhere I can try it? ( I just stopped my pro sub ) but was wondering if there is a playground or 3rd party so i can just test it briefly?

immanuwell 3 hours ago

Big claims from OpenAI as usual - GPT-5.5 sounds impressive on paper, but we've been down this road before, so I'll believe the 'no speed tradeoff' part when I see it in the wild

Pooge 2 hours ago

Up until now I only paid LLM subscriptions to Anthropic but I'm going to give ChatGPT a chance when my current subscription runs out next month.

deaux 13 hours ago

ctrl+f "cutoff": 0 results

Surely it doesn't still have the same ancient data cutoff as 5.4 did?

k2xl 17 hours ago

Surprised to see SWE-Bench Pro only a slight improvement (57.7% -> 58.6%) while Opus 4.7 hit 64.3%. I wonder what Anthropic is doing to achieve higher scores on this - and also what makes this test particular hard to do well in compared to Terminal Bench (which 5.5 seemed to have a big jump in)

vexna 17 hours ago

There's an asterisk right below that table stating that:

> *Anthropic reported signs of memorization on a subset of problems

And from the Anthropic's Opus 4.7 release page, it also states:

> SWE-bench Verified, Pro, and Multilingual: Our memorization screens flag a subset of problems in these SWE-bench evals. Excluding any problems that show signs of memorization, Opus 4.7’s margin of improvement over Opus 4.6 holds.

conradkay 17 hours ago

Was 4.7 distilled off Mythos (which got 77.8%)? Interesting how mythos got 82% on terminal-bench 2.0 compared to 82.7% for GPT-5.5.

Also notice how they state just for SWE-Bench Pro: "*Anthropic reported signs of memorization on a subset of problems"

cynicalpeace 17 hours ago

It's possible that "smarter" AI won't lead to more productivity in the economy. Why?

Because software and "information technology" generally didn't increase productivity over the past 30 years.

This has long been known as Solow's productivity paradox. There are lots of theories as to why this is observed, one of them being "mismeasurement" of productivity data.

But my favorite theory is that information technology is mostly entertainment, and rather than making you more productive, it distracts you and makes you more lazy.

AI's main application has been information space so far. If that continues, I doubt you will get more productivity from it.

If you give AI a body... well, maybe that changes.

hol4b 9 hours ago

25 years of shipping software, and IT absolutely increased productivity - just not for everyone, not everywhere. Some workflows got 10x faster, others got slower from meetings about the new tools.

AI feels the same. I'm shipping indie apps solo now that would have needed a small team five years ago. But in bigger orgs I see people spending 20 minutes verifying 15-minute AI output that used to be a 30-minute task they'd just do. Depends where you sit.

ewrs 16 hours ago

It's quite possible that the use of LLMs means we're using less effort to produce the same output. That seems good.

But exerting less effort also conditions you to be weaker, less able to engage the brain deeply and grind as hard as you once did. That's bad.

Which effect dominates? Difficult to say.

Of course this is absolutely possible. Ultimately there was a time when physical exertion was the norm and nobody was overweight. That isn't the case anymore, is it?

aerhardt 16 hours ago

> "information technology" generally didn't increase productivity

Do you think it'd be viable to run most businesses on pen and paper? I'll give you email and being able to consume informational websites - the rest is pen and paper.

cynicalpeace 16 hours ago

Productivity metrics were better when businesses were run on just pen and paper. Of course, there could be many confounding factors, but there are also many reasons why this could be so. Just a few hypotheses:

- Pen and paper become a limiting factor on bureaucratic BS

- Pen and paper are less distracting

- Pen and paper require more creative output from the user, as opposed to screens which are mostly consumptive

etc etc

theLiminator 16 hours ago

aiaiai177 17 hours ago

Downvoted by the AI Nazis. They are running a tight ship before the IPOs.

cbg0 17 hours ago

I downvoted it because it doesn't add anything useful to the conversation, and I don't own any AI stock.

cynicalpeace 17 hours ago

AbuAssar 16 hours ago

This is the first time OpenAI has included competing models in its benchmarks; previously it included only OpenAI models.

tantalor 16 hours ago

> A playable 3D dungeon arena

Where's the demo link?

zerotosixty 15 hours ago

Those who are using gpt5.5 how does it compare to Opus 4.6 / 4.7 in terms of code generation?

renecito 12 hours ago

Why do the stats of every AI look about the same on every release?

Are the tests getting harder and harder, so the older AIs look worse and the new ones look like they're "almost there"?

gordonhart 9 hours ago

Yes, once benchmarks get saturated they get replaced by harder ones. You don’t see GSM8K, MMLU, or HellaSwag anymore because they’re essentially solved. It takes constant work to make benchmarks hard enough to show meaningful model performance differences but easy enough to score higher than the noise threshold.

adam12 14 hours ago

"Sometime with GPT-5.5 I become lazy"

I don't want to be lazy.

faxmeyourcode 17 hours ago

How does it compare to mythos?

objektif 17 hours ago

Are there faster mini/nano versions as well?

tedsanders 17 hours ago

Not this time, no.

abi 17 hours ago

Usually, those get released a few weeks later.

Schlagbohrer 14 hours ago

Entering this comments section wondering if it will be full of complaints about the new personality, as with every single LLM update.

cchrist 16 hours ago

Which is better GPT-5.5 or Opus 4.7? And for what tasks?

senko 16 hours ago

I might just be following too many AI-related people on X, but omg the media blitz around 5.5 is aggressive.

Soo many unconvincing "I've had access for three weeks and omg it's amazing" takes, it actually primes me for it to be a "meh".

I prefer to see for myself, but the gradual rollout, combined with full-on marketing campaign, is annoying.

user34283 3 hours ago

I used it last night for iOS app development and it felt like a noticeable improvement.

With the Pro plan it was available in both Codex and ChatGPT already when I first checked, which was within an hour of the release.

phillipcarter 17 hours ago

... sigh. I realize there's little that can be done about this, but I just got through a real-world session determining whether Opus 4.7 is meaningfully better than Opus 4.6 or GPT 5.4, and now there's another one to try things with. These benchmark results generally mean little to me in practice.

Anyways, still exciting to see more improvements.

egorfine 16 hours ago

> We are releasing GPT‑5.5 with our strongest set of safeguards to date

...

> we’re deploying stricter classifiers for potential cyber risk which some users may find annoying initially

So we should be expecting to not be able to check our own code for vulnerabilities, because inherently the model cannot know whether I'm feeding my code or someone else's.

dannyw 14 hours ago

Hopefully not, because checking your codebase for vulnerabilities is really valuable.

I hope it’s just limits on pentesting and stuff, and not for code analysis and review.

lucb1e 12 hours ago

But how do it know?

woeirua 17 hours ago

Nice to see them openly compare to Opus-4.7… but they don’t compare it against Mythos which says everything you need to know.

The LinkedIn/X influencers who hyped this as a Mythos-class model should be ashamed of themselves, but they’ll be too busy posting slop content about how “GPT-5.5 changes everything”.

A_D_E_P_T 15 hours ago

Almost nobody can actually use Mythos, though?

throwaway2027 17 hours ago

Good timing - I had just renewed my subscription.

I_am_tiberius 17 hours ago

I'd really like to see improvements like these:

- Some technical proof that data is never read by OpenAI.

- Proof that no logs of my data or derived data are saved.

etc...

anematode 13 hours ago

I don't think this is technically possible without something like homomorphic encryption, which poses too large a runtime cost for use in LLMs.

ace2pace 15 hours ago

I hear its as good as Opus 4.7.

The battle has just begun

swrrt 7 hours ago

I heard someone say it is better than Opus 4.7. Recently, a lot of my friends have complained about performance degradation in Opus 4.7 and previous models.

numbers 17 hours ago

I've stopped trusting these "trust me bro" benchmarks and just started going to LM Arena and looking for the actual benchmark comparisons.

https://arena.ai/leaderboard/code

stri8ted 17 hours ago

I doubt this is representative of real-world usage. There is a difference between a few turns on a web chatbot and many-turn CLI usage on a real project.

nba456_ 17 hours ago

This is not any better of a benchmark

nickandbro 13 hours ago

I just prompted GPT-5.5 Pro "Solve Nuclear Fusion" and it one shotted it (kidding obviously)

theihtisham 9 hours ago

I just installed Codex and gave GPT-5.5 a try. It's good compared to the previous one.

PilotJeff 9 hours ago

So exhausted from all this endless BS... keep releasing. This reminds me of dot-com-era software: wow, we're already at version 3.0 and it's only been 60 days.

debba 17 hours ago

Cannot see it in Codex CLI

boring-human 15 hours ago

Did you upgrade the tool binaries? I also couldn't see it until after the upgrade.

c0rruptbytes 13 hours ago

literally cannot launch the codex app anymore

aussieguy1234 11 hours ago

If SWE-Bench Verified is no longer a good measure of agentic coding abilities, what benchmark now is?

journal 12 hours ago

does it have cached pricing?

jawiggins 15 hours ago

What is the major and minor semver meaning for these models? Is each minor release a new fine-tuning with a new subset of example data while the major releases are made from scratch? Or do they even mean anything at this point?

gck1 13 hours ago

Nothing. The next major increment is going to happen when the marketing department is confident they can sell it as a major improvement without everyone laughing at them. Which at this point seems like never.

I think Anthropic fearmongering and "leaks" of Mythos was them testing the ground for 5.x, which seems to have backfired.

elAhmo 16 hours ago

Is Codex receiving 5.4 or 5.5 release?

I am still using Codex with 5.3 and haven't switched to GPT 5.4, as I don't like the "it's automatic, bro, trust us" approach, so I'm wondering whether Codex is going to get these specific releases at all in the future.

jedisct1 14 hours ago

GPT-5.4 is already an incredible model for code reviews and security audits with the swival.dev /audit command.

The fact that GPT-5.5 is apparently even better at long-running tasks is very exciting. I don’t have access to it yet, but I’m really looking forward to trying it.

wslh 14 hours ago

Related and insightful: "GPT-5.5: Mythos-Like Hacking, Open to All" [1].

[1] https://news.ycombinator.com/item?id=47879330

ant6n 15 hours ago

My impression has been that ChatGPT-5.4 has been getting dumber and more exhausting over the last couple of weeks. It makes a lot of obvious mistakes, ignores (parts of) prompts, and keeps forgetting important facts or requirements.

Maybe this is a crazy theory, but I sometimes feel like they gimp their existing models before a big release so you'll notice more of a "step".

atmanactive 10 hours ago

Definitely feels like it.

varispeed 16 hours ago

I am sceptical. The generations of models after 4o have become crappier and crappier. I hope this one changes the trend. 5.4 is unusable for complex coding work.

mondojesus 17 hours ago

I'm still using 5.3 in codex. Are 5.4 and 5.5 better than 5.3 in concrete ways?

cbg0 16 hours ago

The benchmarks say so, but try it out with actual tasks and be the judge.

enraged_camel 17 hours ago

Is this the first time OpenAI compared their new release to Anthropic models? Previously they were comparing only to GPT's own previous versions.

k2xl 17 hours ago

ARC-AGI 3 is missing from this list - given that the SOTA before 5.5 was <1% if I recall correctly, I wonder if it didn't make meaningful progress.

redox99 17 hours ago

It's a silly benchmark anyways.

cmrdporcupine 17 hours ago

Not rolled out to my Codex CLI yet, but some users on Reddit claiming it's on theirs.

xnx 17 hours ago

Next up: Google I/O on May 19?

I have to imagine they'll go to Gemini 3.5 if only for marketing reasons.

luqtas 17 hours ago

they are using ethical training weights this time!!! /j

throwaw12 17 hours ago

If anyone tried it already, how do you feel?

Numbers look too good, wondering if it is benchmaxxed or not

i_love_retros 14 hours ago

Oh shiiiiit boy! An incrementation dropped!!

yuvrajmalgat 16 hours ago

finally

baxuz 15 hours ago

Ah yes, the next "trust me bro"

MagicMoonlight 17 hours ago

Two hundred pages of shilling and it’s a 1% improvement in the benchmarks. They’re dead in the water.

Imagine spending 100m on some of these AI “geniuses” and this is the best they can do.

XCSme 16 hours ago

2x the price for 1-5% performance gain

justonepost2 17 hours ago

the attenuation of man nears

< 5 years until humans are buffered out of existence tbh

may the light of potentia spread forth beyond us

coderssh 17 hours ago

Great model. I have been using Codex and it's awesome. Let's see what GPT-5.5 does for it.

vardump 16 hours ago

I just can't bear to use services from this company after what they did to the global DRAM markets.

I'm not trying to make any kind of moral statement, but the company just feels toxic to me.