GPT-5.4 (openai.com)
462 points by mudkipdev 4 hours ago
Philip-J-Fry an hour ago
I find it quite funny how this blog post has a big "Ask ChatGPT" box at the bottom. You might think you could ask a question about the contents of the blog post, so you type "summarise this blog post". It opens a new chat window with the link to the blog post followed by "summarise this blog post", only for you to be told: "I can't access external URLs directly, but if you can paste the relevant text or describe the content you're interested in from the page, I can help you summarize it. Feel free to share!"
That's hilarious. Does OpenAI even know this doesn't work?
zamadatix 20 minutes ago
Following this process summarizes the blog post for me. Perhaps the difference is that I'm signed into my account, so it can access external URLs, or something of that nature?
within_will 19 minutes ago
Who cares
Aurornis an hour ago
Probably intentional. They don't want open, no-registration endpoints able to trigger the AI into hitting URLs.
jazzypants an hour ago
But, why include the non-functional chat box in the article?
embedding-shape 38 minutes ago
observationist 40 minutes ago
jdndbdjsj 37 minutes ago
ionwake 37 minutes ago
m3kw9 32 minutes ago
what? it's their own site and own llm. I could paste most sites and it would work.
judge2020 an hour ago
__jl__ an hour ago
What a model mess!
OpenAI now has three price points: GPT 5.1, GPT 5.2 and now GPT 5.4. Their version numbers jump across different model lines, with Codex at 5.3 and what they now call Instant also at 5.3.
Anthropic are really the only ones who managed to get this under control: Three models, priced at three different levels. New models are immediately available everywhere.
Google essentially only has Preview models! The last GA is 2.5. As a developer, I can either use an outdated model or have zero assurance that the model won't get discontinued within weeks.
strongpigeon an hour ago
> Google essentially only has Preview models! The last GA is 2.5. As a developer, I can either use an outdated model or have zero assurance that the model won't get discontinued within weeks.
What's funny is that there's a common meme at Google: you can either use the old, unmaintained tool that's used everywhere, or the new beta tool that doesn't quite do what you want.
Not quite the same, but it did remind me of it.
fhrow4484 an hour ago
CactusBlue 17 minutes ago
yieldcrv 30 minutes ago
L-four an hour ago
Gmail was in beta for 5 years, until 2009.
metalliqaz 40 minutes ago
cyanydeez 36 minutes ago
The business models of LLMs don't include any guarantees, and somehow that's fine for a burgeoning decade of trillions of dollars of consumption.
Sure, makes total sense guys.
m_fayer an hour ago
My 5ish years in the mines of Android native back in the day are not years I recall fondly. Never change, Google.
jakub_g an hour ago
"Everything is beta or deprecated."
Aurornis an hour ago
> What a model mess! OpenAI now has three price points: GPT 5.1, GPT 5.2 and now GPT 5.4.
I don't know, this feels unnecessarily nitpicky to me.
It isn't hard to understand that 5.4 > 5.2 > 5.1. It's not hard to understand that the dash-variants have unique properties that you want to look up before selecting.
Especially for a target audience of software engineers, skipping a version number is a common occurrence and never questioned.
Melatonic 14 minutes ago
Agreed - and it's a huge step up from their previous naming schemes. That stuff was confusing as hell.
CobrastanJorji 37 minutes ago
> Google essentially only has Preview models.
It's really nice to see Google get back to its roots by launching things only to "beta" and then leaving them there for years. Gmail was "beta" for at least five years, I think.
0xbadcafebee an hour ago
> or have zero assurance that the model won't get discontinued within weeks
Why are you using the same model after a month? Every month a better model comes out. They are all accessible via the same API. You can pay per-token. This is the first time in, like, all of technology history, that a useful paid service is so interoperable between providers that switching is as easy as changing a URL.
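A minimal sketch of what "changing a URL" looks like with the OpenAI Python SDK; the endpoints and model names below are illustrative stand-ins, not a vetted list, assuming the provider exposes an OpenAI-compatible API:

    from openai import OpenAI

    # Same code path for any OpenAI-compatible provider; only the
    # endpoint and model name change. Values below are placeholders.
    PROVIDERS = {
        "openai": ("https://api.openai.com/v1", "gpt-5.4"),
        "other":  ("https://api.example-provider.com/v1", "some-model"),
    }

    base_url, model = PROVIDERS["openai"]
    client = OpenAI(base_url=base_url, api_key="sk-...")
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(reply.choices[0].message.content)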
phainopepla2 31 minutes ago
If you're trying to use LLMs in an enterprise context, you would understand. Switching models sometimes requires tweaking prompts. That can be a complete mess, when there are dozens or hundreds of prompts you have to test.
hobofan 12 minutes ago
That's true in theory, but not in practice. In practice, every inference provider handles errors (guardrails, rate limits) somewhat differently and with different quirks, some of which only surface in production usage, and Google is one of the worst offenders in that regard.
embedding-shape an hour ago
> OpenAI now has three price points: GPT 5.1, GPT 5.2 and now GPT 5.4.
I guess that's true, but geared towards API users.
Personally, since "Pro Mode" became available, I've been on the plan that enables it. It's one price point and I get access to everything, including enough Codex usage that someone who spends a lot of time programming never manages to hit any usage limits, although I've gotten close once to the new (temporary) Spark limits.
biophysboy 25 minutes ago
Wow, is that what preview means? I see those model options in GitHub Copilot (all my org allows right now) - I was under the impression that preview meant a free trial or a limited # of queries. Kind of a misleading name...
raincole an hour ago
They aggressively retire models, so GPT 5.1 and 5.2 are probably going to go soon.
hobofan 8 minutes ago
In Azure Foundry, they list GPT 5.2 retirement as "No earlier than 2027-05-12" (it might leave OpenAI's normal API earlier than that). I'm pretty certain that Gemini 3, which isn't even in GA yet, will be retired earlier than that.
delaminator an hour ago
two great problems in computing
naming things
cache invalidation
off by one errors
arthurcolle an hour ago
There is a lot of opportunity here for the AI infrastructure layer on top of tier-1 model providers
motoxpro an hour ago
This is what clouds like AWS, Azure, and GCP solve (vertex AI, etc). They are already an abstraction on top of the model makers with distribution built in.
I also don't believe there is any value in trying to aggregate consumers or businesses just to clean up model makers names/release schedule. Consumers just use the default, and businesses need clarity on the underlying change (e.g. why is it acting different? Oh google released 3.6)
arthurcolle 10 minutes ago
m3kw9 31 minutes ago
that's how they had it for years; it was a mess, but controlled
minimaxir 4 hours ago
The marquee feature is obviously the 1M context window, compared to the ~200k that other models support, sometimes with an extra cost for generations beyond 200k tokens. Per the pricing page, there is no additional cost for tokens beyond 200k: https://openai.com/api/pricing/
Also per pricing, GPT-5.4 ($2.50/M input, $15/M output) is much cheaper than Opus 4.6 ($5/M input, $25/M output) and Opus has a penalty for its beta >200k context window.
I am skeptical whether the 1M context window will provide material gains, as current Codex/Opus show weaknesses once their context windows are mostly full, but we'll see.
Per updated docs (https://developers.openai.com/api/docs/guides/latest-model), it supersedes GPT-5.3-Codex, which is an interesting move.
damsta 2 hours ago
There is extra cost for >272K:
> For models with a 1.05M context window (GPT-5.4 and GPT-5.4 pro), prompts with >272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.
Taken from https://developers.openai.com/api/docs/models/gpt-5.4
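Back-of-the-envelope, that multiplier works out like this (a sketch using the listed base prices; the token counts are made-up examples):

    # GPT-5.4 base prices per 1M tokens, from the pricing page
    IN_PRICE, OUT_PRICE = 2.50, 15.00

    def session_cost(input_tokens: int, output_tokens: int) -> float:
        # Per the docs: crossing 272K input tokens reprices the FULL
        # session at 2x input / 1.5x output.
        long_context = input_tokens > 272_000
        cin = IN_PRICE * (2.0 if long_context else 1.0)
        cout = OUT_PRICE * (1.5 if long_context else 1.0)
        return (input_tokens * cin + output_tokens * cout) / 1_000_000

    print(session_cost(250_000, 10_000))  # under the cutoff: ~$0.78
    print(session_cost(300_000, 10_000))  # over the cutoff:  ~$1.73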
minimaxir 2 hours ago
Good find, and that's fine print too small for comfort.
ValentineC an hour ago
glenstein 2 hours ago
Wow, that's diametrically the opposite point: the cost is *extra*, not free.
apetresc an hour ago
fragmede 2 hours ago
Which, Claude has the same deal. You can get a 1M context window, but it's gonna cost ya. If you run /model in claude code, you get:
Switch between Claude models. Applies to this session and future Claude Code sessions. For other/previous model names, specify with --model.
1. Default (recommended) Opus 4.6 · Most capable for complex work
2. Opus (1M context) Opus 4.6 with 1M context · Billed as extra usage · $10/$37.50 per Mtok
3. Sonnet Sonnet 4.6 · Best for everyday tasks
4. Sonnet (1M context) Sonnet 4.6 with 1M context · Billed as extra usage · $6/$22.50 per Mtok
5. Haiku Haiku 4.5 · Fastest for quick answers
tedsanders 4 hours ago
Yeah, long context vs compaction is always an interesting tradeoff. More information isn't always better for LLMs, as each token adds distraction, cost, and latency. There's no single optimum for all use cases.
For Codex, we're making 1M context experimentally available, but we're not making it the default experience for everyone, as from our testing we think that shorter context plus compaction works best for most people. If anyone here wants to try out 1M, you can do so by overriding `model_context_window` and `model_auto_compact_token_limit`.
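For reference, a sketch of what that looks like in `~/.codex/config.toml` (the values here are just examples, not recommendations):

    # Opt in to the 1M window; both numbers are example values.
    model_context_window = 1_000_000
    model_auto_compact_token_limit = 900_000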
Curious to hear if people have use cases where they find 1M works much better!
(I work at OpenAI.)
sillysaurusx 2 hours ago
You may want to look over this thread from cperciva: https://x.com/cperciva/status/2029645027358495156
I too tried Codex and found it similarly hard to control over long contexts. It ended up coding an app that spit out millions of tiny files which were technically smaller than the original files it was supposed to optimize, except due to there being millions of them, actual hard drive usage was 18x larger. It seemed to work well until a certain point, and I suspect that point was context window overflow / compaction. Happy to provide you with the full session if it helps.
I’ll give Codex another shot with 1M. It just seemed like cperciva’s case and my own might be similar in that once the context window overflows (or refuses to fill) Codex seems to lose something essential, whereas Claude keeps it. What that thing is, I have no idea, but I’m hoping longer context will preserve it.
FrankBooth 2 hours ago
woadwarrior01 2 hours ago
akiselev 3 hours ago
> Curious to hear if people have use cases where they find 1M works much better!
Reverse engineering [1]. When decompiling a bunch of code and tracing functionality, it's really easy to fill up the context window with irrelevant noise and compaction generally causes it to lose the plot entirely and have to start almost from scratch.
(Side note, are there any OpenAI programs to get free tokens/Max to test this kind of stuff?)
lubesGordi 28 minutes ago
It's funny that context window size is still such a thing. The whole LLM 'thing' is compression. Why can't we figure out some equally brilliant way of handling context besides just storing text somewhere and feeding it to the LLM? RAG is the best attempt so far. We need something like a dynamic, in-flight data structure generated from the context that the agent can query as it goes.
nowittyusername 2 hours ago
Personally, what I'm more interested in is the effective context window. I find that when using Codex 5.2 high, I preferred to start compaction at around 50% of the context window because I noticed degradation around that point. Though as of about a month ago, that point is now below that, which is great. Anyway, I feel that I won't be using that 1 million context at all in 5.4, but if the effective window is something like 400k, that by itself is already a huge win. That means longer sessions before compaction, and the agent can keep working on complex stuff for longer. But then there is the question of 5.4's intelligence. If it's as good as 5.2 high, I am a happy camper; I found 5.3 anything... lacking, personally.
simianwords 4 hours ago
Do you maybe want to give us users some hints on what to compact and throw away? In codex CLI maybe you can create a visual tool that I can see and quickly check mark things I want to discard.
Sometimes I’m exploring some topic and that exploration is not useful but only the summary.
Also, you could use the best guess and cli could tell me that this is what it wants to compact and I can tweak its suggestion in natural language.
Context is going to be super important because it is the primary constraint. It would be nice to have serious granular support.
asabla 40 minutes ago
I really don't have any numbers to back this up, but it feels like the sweet spot is around ~500k context size. Anything larger than that, you usually have scoping issues, trying to do too much at the same time, or issues with the quality of what's in the context at all.
For me, I would say speed (not just time to first token, but a complete generation) is more important than going for a larger context size.
Someone1234 3 hours ago
That's an interesting point regarding context vs. compaction. If that's viewed as the best strategy, I'd hope we'd see more tooling around compaction than just "I'll compact what I want, brace yourselves" without warning.
Like, I'd love an optional pre-compaction step, "I need to compact, here is a high level list of my context + size, what should I junk?" Or similar.
thyb23 2 hours ago
gspetr 2 hours ago
I have found a bigger context window quite useful when trying to make sense of larger codebases. Generating documentation on how different components interact is better than nothing, especially if the code has poor test coverage.
I've also had it succeed in attempts to identify some non-trivial bugs that spanned multiple modules.
netinstructions 3 hours ago
People (and also frustratingly LLMs) usually refer to https://openai.com/api/pricing/ which doesn't give the complete picture.
https://developers.openai.com/api/docs/pricing is what I always reference, and it explicitly shows that pricing ($2.50/M input, $15/M output) for tokens under 272k
It is nice that we get 70-72k more tokens before the price goes up (also what does it cost beyond 272k tokens??)
Flashtoo 2 hours ago
> Prompts with more than 272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.
netinstructions 2 hours ago
andai 2 hours ago
It's a little hard to compare, because Claude needs significantly fewer tokens for the same task. A better metric is the cost per task, which ends up being pretty similar.
For example on Artificial Analysis, the GPT-5.x models' cost to run the evals ranges from half that of Claude Opus (at medium and high) to significantly more than the cost of Opus (at extra high reasoning). So on their cost graphs, GPT has a considerable distribution, and Opus sits right in the middle of that distribution.
The most striking graph to look at there is "Intelligence vs Output Tokens". When you account for that, I think the actual costs end up being quite similar.
According to the evals, at least, the GPT extra high matches Opus in intelligence, while costing more.
Of course, as always, benchmarks are mostly meaningless and you need to check Actual Real World Results For Your Specific Task!
For most of my tasks, the main thing a benchmark tells me is how overqualified the model is, i.e. how much I will be over-paying and over-waiting! (My classic example: I gave the same task to Gemini 2.5 Flash and Gemini 2.5 Pro. Both did it to the same level of quality, but Pro took 3x longer and cost 3x more!)
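To put rough numbers on the cost-per-task point (a sketch; all token counts are invented):

    # Per-token price alone is misleading; compare price * tokens per task.
    def cost_per_task(price_in, price_out, tok_in, tok_out):
        return (tok_in * price_in + tok_out * price_out) / 1_000_000

    # Hypothetical: GPT reasons longer (more output tokens) at a lower rate.
    gpt = cost_per_task(2.50, 15.0, 20_000, 40_000)   # ~$0.65
    opus = cost_per_task(5.00, 25.0, 20_000, 20_000)  # ~$0.60
    print(gpt, opus)  # similar per-task cost despite the 2x price gap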
smusamashah an hour ago
Gemini already has a 1M or 2M context window, right?
luca-ctx 2 hours ago
Context rot is definitely still a problem, but apparently it can be mitigated by doing RL on longer tasks that utilize more context. A recent Dario interview mentions this is part of Anthropic's roadmap.
AtreidesTyrant 2 hours ago
token rot exists for any context window above 75% capacity; that's why so many have pushed for 1M windows
thehamkercat 4 hours ago
GPT 5.3 Codex had a 400K context window, btw
simianwords 4 hours ago
Why would someone use Codex instead?
lmeyerov 2 hours ago
In our evals for answering cybersecurity incident investigation questions and even autonomously doing the full investigation, gpt-5.2-codex with low reasoning was the clear winner over non-codex or higher reasoning. 2X+ faster, higher completion rates, etc.
It was generally smarter than pre-5.2 so strategically better, and codex likewise wrote better database queries than non-codex, and as it needs to iteratively hunt down the answer, didn't run out the clock by drowning in reasoning.
Video: https://media.ccc.de/v/39c3-breaking-bots-cheating-at-blue-t...
We'll be updating numbers on 5.3 and claude, but basically same thing there. Early, but we were surprised to see codex outperform opus here.
jeswin 3 hours ago
When it comes to lengthy non-trivial work, codex is much better but also slower.
synergy20 an hour ago
in my testing codex actually planned worse than claude but coded better once the plan was set, and faster. it's also excellent for cross-checking claude's work, finding real weaknesses every time.
pmarreck an hour ago
surgical_fire 4 hours ago
I've been using Codex for software development personally (I have a ChatGPT account), and I use Claude at work (since it is provided by my employer).
I find both Codex and Claude Opus perform at a similar level, and in some ways I actually prefer Codex (I keep hitting quota limits in Opus and have to revert back to Sonnet).
If your question is related to morality (the thing about US politics, DoD contract and so on)... I am not from the US, and I don't care about its internal politics. I also think both OpenAI and Anthropic are evil, and the world would be better if neither existed.
hnsr an hour ago
simianwords 4 hours ago
athrowaway3z 2 hours ago
embedding-shape 4 hours ago
Why would someone use Claude Code instead? Or any other harness? Or why only use one?
My own tooling throws off requests to multiple agents at the same time, then I compare which one is best and continue from there. Most of the time Codex ends up with the best end result, but my hunch is that at some point that'll change, hence I continue using multiple at the same time.
paulddraper 2 hours ago
I don’t know about 5.4 specifically, but in the past anything over 200k wasn’t that great anyway.
Like, if you really don’t want to spend any effort trimming it down, sure use 1m.
Otherwise, 1m is an anti-pattern.
creamyhorror 3 hours ago
I've only used 5.4 for 1 prompt so far (edit: 3 at high now; reasoning: extra high, took really long), and it was to analyse my codebase and write an evaluation on a topic. I found its writing and analysis thoughtful, precise, and surprisingly clearly written, unlike 5.3-Codex. It feels very lucid and uses human phrasing.
It might be my AGENTS.md requiring clearer, simpler language, but at least 5.4's doing a good job of following the guidelines. 5.3-Codex wasn't so great at simple, clear writing.
sampton 31 minutes ago
That's been my experience as well switching from Opus to Codex. Reasoning takes longer but answers are precise. Claude is sloppy in comparison.
throwaway911282 14 minutes ago
codex has been really good so far and the fast mode is the cherry on top! and the very generous limits are another cherry on top
irishcoffee an hour ago
> It might be my AGENTS.md requiring clearer, simpler language
If you gave the exact same markdown file to me and I pasted the exact same prompts as you, would I get the same results?
creamyhorror 4 minutes ago
I'm not sure if the model (or its settings) is deterministic. But I do think models' style and phrasing are fairly changeable via AGENTS.md-style guidelines.
5.4's choice of terms and phrasing is very precise and unambiguous to me, whereas 5.3-Codex often used jargon and less precise phrases that I would have to ask further about or demand fuller explanations via AGENTS.md.
m3kw9 28 minutes ago
you probably can't, and asking in AGENTS.md to "make it clearer" will likely give you the illusion of clearer language without actual well-structured tests. AGENTS.md is usually for changing what the LLM focuses on to suit you, not for saying stuff like "be better" or "make no mistakes"
Alifatisk an hour ago
So let me get this straight: OpenAI previously had an issue with LOTS of different models and versions being available. Then they solved this by introducing GPT-5, which was more like a router that put all these models under the hood, so you only had to prompt GPT-5 and it would route to the best suitable model. This worked great, I assume, and made the UI comprehensible for the user. But now they are starting to introduce more different models again?
We got:
- GPT-5.1
- GPT-5.2 Thinking
- GPT-5.3 (codex)
- GPT-5.3 Instant
- GPT-5.4 Thinking
- GPT-5.4 Pro
Who’s to blame for this ridiculous path they are taking? I’m so glad I am not a Chat user, because this adds so much unnecessary cognitive load.
The good news here is the support for 1M context window, finally it has caught up to Gemini.
361994752 an hour ago
i guess you still have the "auto" as an option to route your request
stainablesteel 17 minutes ago
5 itself might have solved the problem of having too many different models somewhere in the backend
kgeist 2 hours ago
>Today, we’re releasing <..> GPT‑5.3 Instant
>Today, we’re releasing GPT‑5.4 in ChatGPT (as GPT‑5.4 Thinking),
>Note that there is not a model named GPT‑5.3 Thinking
They held out for eight months without a confusing numbering scheme :)
XCSme 33 minutes ago
What I'm most confused about is why they call it both GPT-5.3 Instant and gpt-5.3-chat.
gallerdude 2 hours ago
Tbf there was a 5.3 codex
m3kw9 26 minutes ago
instant kind of sucks if you're asking for more than summarization, surface info, or web searches; it can lose track of who's who quickly in some complex multi-turn asks. Just need to know what to use instant for.
Chance-Device 4 hours ago
I’m sure the military and security services will enjoy it.
theParadox42 3 hours ago
The self-reported safety score for violence dropped from 91% to 83%.
skrebbel 2 hours ago
What the hell is a "safety score for violence"?
I-M-S 2 hours ago
murat124 2 hours ago
0123456789ABCDE 2 hours ago
ozgung 2 hours ago
Did they publish its scores on military benchmarks, like on ArtificialSuperSoldier or Humanity's Last War?
throwaway911282 13 minutes ago
like the claude models via anthropic?
xyzzy9563 10 minutes ago
Do you think the US military should have handicapped technology while China gets unrestricted LLM usage from their models?
yoyohello13 2 hours ago
Also advertisers, don't forget those sweet, sweet ads.
m3kw9 25 minutes ago
they use 4.1; switching up would take as much time to test as OpenAI going from 4.1 to 5.4
varispeed 4 hours ago
prompt> Hi we want to build a missile, here is the picture of what we have in the yard.
mirekrusin 3 hours ago
{ tools: [ { name: "nuke", description: "Use when sure.", ... { lat: number, long: number } } ] }
Insanity 2 hours ago
gavinray 4 hours ago
The "RPG Game" example on the blogpost is one of the most impressive demo's of autonomous engineering I've seen.
It's very similar to "Battle Brothers", and the fact that RPG games require art assets, AI for enemy moves, and a host of other logical systems makes it all the more impressive.
casid an hour ago
I don't know. It looks shallow and simple, not even a demo.
hu3 2 hours ago
indeed, and I suspect it can be attributed, at least in part, to the improved Playwright integration.
> we’re also releasing an experimental Codex skill called “Playwright (Interactive) (opens in a new window)”. This allows Codex to visually debug web and Electron apps; it can even be used to test an app it’s building, as it’s building it.
mattas 4 hours ago
"GPT‑5.4 interprets screenshots of a browser interface and interacts with UI elements through coordinate-based clicking to send emails and schedule a calendar event."
They show an example of 5.4 clicking around in Gmail to send an email.
I still think this is the wrong interface to be interacting with the internet. Why not use Gmail APIs? No need to do any screenshot interpretation or coordinate-based clicking.
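For contrast, a rough sketch of the API route (assuming google-api-python-client and an already-authorized OAuth `creds` object; the address is a placeholder):

    import base64
    from email.message import EmailMessage
    from googleapiclient.discovery import build

    service = build("gmail", "v1", credentials=creds)

    msg = EmailMessage()
    msg["To"] = "alice@example.com"
    msg["Subject"] = "Hello"
    msg.set_content("Sent via the API - no screenshots, no clicking.")

    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    service.users().messages().send(userId="me", body={"raw": raw}).execute()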
bottlepalm an hour ago
The vast majority of websites you visit don't have usable APIs, and discovery of those APIs is very poor.
Screenshots, on the other hand, are documentation, API, and discovery all in one. And you'd be surprised how little context/tokens screenshots consume compared to all the back-and-forth verbose JSON payloads of APIs.
LUmBULtERA an hour ago
>The vast majority of websites you visit don’t have usable APIs and very poor discovery of the those APIs.
I think an important thing here is that a lot of websites/platforms don't want AIs to have direct API access, because they are afraid that AIs would take the customer "away" from the website/platform, making the consumer a customer of the AI rather than a customer of the website/platform. Therefore for AIs to be able to do what customers want them to do, they need their browsing to look just like the customer's browsing/browser.
npilk 3 hours ago
It feels like building humanoid robots so they can use tools built for human hands. Not clear if it will pay off, but if it does then you get a bunch of flexibility across any task "for free".
Of course APIs and CLIs also exist, but they don't necessarily have feature parity, so more development would be needed. Maybe that's the future though since code generation is so good - use AI to build scaffolding for agent interaction into every product.
packetlost an hour ago
I don't see how an API couldn't have full parity with a web interface; the API is how you actually trigger a state transition in the vast majority of cases
f0e4c2f7 3 hours ago
Lots of services have no desire to ever expose an API. This approach lets you step right over that.
If an API is exposed you can just have the LLM write something against that.
coffeemug 3 hours ago
A model that gets good at computer use can be plugged in anywhere you have a human. A model that gets good at API use cannot. From the standpoint of diffusion into the economy/labor market, computer use is much higher value.
TheAceOfHearts 4 hours ago
I think the desire is that in the long-term AI should be able to use any human-made application to accomplish equivalent tasks. This email demo is proof that this capability is a high priority.
modeless 4 hours ago
A world where AIs use APIs instead of UIs to do everything is a world where us humans will soon be helpless, as we'll have to ask the AIs to do everything for us and will have limited ability to observe and understand their work. I prefer that the AIs continue to use human-accessible tools, even if that's less efficient for them. As the price of intelligence trends toward zero, efficiency becomes relatively less important.
MattDaEskimo 2 hours ago
Same reason Wikipedia deals with so many people scraping its web pages instead of using its API:
Optimizations are secondary to convenience.
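(And the API really is one GET away; a sketch using the MediaWiki Action API:)

    import requests

    # Fetch a plain-text extract of one article via the Action API.
    r = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "extracts",
            "explaintext": 1,
            "titles": "Large language model",
            "format": "json",
        },
    )
    pages = r.json()["query"]["pages"]
    print(next(iter(pages.values()))["extract"][:500])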
kristianp 2 hours ago
This opens up a new question: how does bot detection work when the bot is using the computer via a GUI?
itintheory 2 hours ago
On its face, I'm not sure that's a new question. Bots using browser automation frameworks (Puppeteer, Selenium, Playwright, etc.) have been around for a while. There are signals used in bot detection tools like cursor movement speed, accuracy, keyboard timing, etc. How those detection tools might update to support legitimate bot users does seem like an open question to me, though.
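As a toy illustration of the timing signal (the threshold and numbers are entirely made up):

    import statistics

    def looks_scripted(keystroke_gaps_ms):
        # Humans show jitter between keystrokes; naive automation is
        # suspiciously uniform. The 5ms threshold is invented.
        return statistics.pstdev(keystroke_gaps_ms) < 5.0

    print(looks_scripted([102, 98, 101, 99, 100]))  # True: near-constant
    print(looks_scripted([80, 210, 95, 400, 130]))  # False: human-like jitter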
PaulHoule 4 hours ago
APIs have never been a gift but rather have always been a take-away that lets you do less than you can with the web interface. It’s always been about drinking through a straw, paying NASA prices, and being limited in everything you can do.
But people are intimidated by the complexity of writing web crawlers because management has been so traumatized by the cost of making GUI applications that they couldn't believe how cheap it is to write crawlers and scrapers… until LLMs came along, changed the perceived economics, and created a permission structure. [1]
AI is a threat to the “enshittification economy” because it lets us route around it.
[1] That high cost of GUI development is one reason why scrapers are cheap… there is a good chance that the scraper you wrote 8 years ago still works because (a) they can't afford to change their site, and (b) if they could afford to change their site, changing anything substantial about it is likely to unrecoverably tank their Google rankings, so they won't. AI might change the mechanics of that now that your Google traffic is likely to go to zero no matter what you do.
Traster 2 hours ago
You can buy a Claude Code subscription for $200 and use way more tokens in Claude Code than if you paid for direct API usage. Anthropic decided you can't take your auth key for Claude Code and use it to hit the API via a different tool. They made that business decision because they thought it was better for them strategically. They're allowed to make that choice as a business.
Plenty of companies make the same choice about their API: they provide it for a specific purpose, but they have good business reasons to want you using the website. Plenty of people write web crawlers, and it's been a cat-and-mouse game for decades as websites try to block them.
This will just be one more step in that cat-and-mouse game, and if the AI really gets good enough to become a complete intermediary between you and the website? The website will just shut down. We saw it happen before with the open web. These websites aren't here for some heroic purpose; if you screw their business model, they will just go out of business. You won't be able to use their website because it won't exist, and the websites that do exist will (a) be made by the same guys writing your agent, and (b) be highly, highly optimized to get your agent to screw you.
disqard 4 hours ago
> AI is a threat to the “enshittification economy” because it lets us route around it.
This is prescient -- I wonder if the Big Tech entities see it this way. Maybe, even if they do, they're 100% committed to speedrunning the current late-stage-cap wave, and therefore unable to do anything about it.
PaulHoule 3 hours ago
lostmsu 3 hours ago
> AI is a threat to the “enshittification economy” because it lets us route around it.
I am not sure about that. We techies avoid enshittification because we recognize shit. Normies will just get their sycophantic, enshittified AI that will tell them to keep buying into walled gardens.
jstummbillig 4 hours ago
Because the web, and software more generally, is full of non-APIs, and you do, in fact, need the clicking to work to make agents work generally
satvikpendem 4 hours ago
The ideal of REST: the HTML and UI is the API.
Jacques2Marais 4 hours ago
I guess a big chunk of their target market won't know how to use APIs.
spongebobstoes 4 hours ago
not everything has an API, or API use is limited. some UIs are more feature complete than their APIs
some sites try to block programmatic use
UI use can be recorded and audited by a non-technical person
steve1977 4 hours ago
One could argue that LLMs learning programming languages made for humans (i.e. most of them) is using the wrong interface as well. Why not use machine code?
embedding-shape 4 hours ago
Why would human language be the wrong interface when they're literally language models? And why would machine code be better when there is probably orders of magnitude less training material for machine code?
You can also test this yourself easily: fire up two agents, ask one to use a PL meant for humans and one to write straight-up machine code (or even assembly), and see which results you like best.
adwn 2 hours ago
> One could argue that LLMs learning programming languages made for humans (i.e. most of them) is using the wrong interface as well.
Then go ahead and make an argument. "Why not do X?" is not an argument, it's a suggestion.
BoredPositron 4 hours ago
because they are inherently text based as is code?
steve1977 4 hours ago
smoody07 2 hours ago
Surprised to see every chart limited to comparisons against other OpenAI models. What does the industry comparison look like?
throwaway911282 12 minutes ago
https://xcancel.com/OpenAI/status/2029620619743219811 you can see comparisons here
lorenzoguerra 2 hours ago
I believe that this choice is due to two main reasons. First, it's (obviously) a marketing strategy to keep the spotlight on their own models, showing they're constantly improving and avoiding validating competitors. Second, since the community knows that static benchmarks are unreliable, it makes sense for them to outsource the comparisons to independent leaderboards, which lets them avoid accusations of cherry-picking while justifying their marketing strategy.
Ultimately, the people actually interested in the performance of these models already don't trust self-reported comparisons and wait for third-party analysis anyway
aydyn 2 hours ago
They compare to Claude and Gemini in their tweet
0123456789ABCDE 2 hours ago
https://artificialanalysis.ai should have the numbers soon
egonschiele 4 hours ago
The actual card is here https://deploymentsafety.openai.com/gpt-5-4-thinking/introdu... the link currently goes to the announcement.
Rapzid 4 hours ago
I must have been sleeping when "sheet", "brief", "primer", etc. became known as "cards".
I really thought the weirdly worded and unnecessary "announcement" linking to the actual info, along with the word "card", was the result of vibe slop.
realityfactchex 3 hours ago
Card is slightly odd naming indeed.
Criticisms aside (sigh), according to Wikipedia the term was proposed mostly by Googlers, with the original paper [0] submitted in 2018. To quote,
"""In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information."""
So that's where they were coming from, I guess.
[0] Margaret Mitchell et al., 2018 submission, Model Cards for Model Reporting, https://arxiv.org/abs/1810.03993
Murfalo 2 hours ago
consumer451 an hour ago
I am very curious about this:
> Theme park simulation game made with GPT‑5.4 from a single lightly specified prompt, using Playwright Interactive for browser playtesting and image generation for the isometric asset set.
Is "Playwright Interactive" a skill that takes screenshots in a tight loop with code changes, or is there more to it?
zone411 an hour ago
Results from my Extended NYT Connections benchmark:
GPT-5.4 extra high scores 94.0 (GPT-5.2 extra high scored 88.6).
GPT-5.4 medium scores 92.0 (GPT-5.2 medium scored 71.4).
GPT-5.4 no reasoning scores 32.8 (GPT-5.2 no reasoning scored 28.1).
yanis_t 4 hours ago
These releases are lacking something. Yes, they optimised for benchmarks, but it’s just not all that impressive anymore. It is time for a product, not for a marginally improved model.
ipsum2 4 hours ago
The model was released less than an hour ago, and somehow you've been able to form such a strong opinion about it. Impressive!
satvikpendem 3 hours ago
It's more hedonic adaptation: people just aren't as impressed by incremental changes anymore after the big leaps. It's the same as another thread yesterday where someone said the new MacBook with the latest processor doesn't excite them anymore, and it's because for most people, most models are good enough, and now it's all about applications.
dmix 2 hours ago
mirekrusin 3 hours ago
earth2mars 3 hours ago
I am actually super impressed with Codex 5.3 extra high reasoning. It's a drop-in replacement (in fact better than Claude Opus 4.6; lately Claude has been super verbose, going in circles trying to get things resolved). I've mostly stopped using Claude and am having a blast with Codex 5.3. Looking forward to 5.4 in Codex.
whynotminot 2 hours ago
braebo 39 minutes ago
satvikpendem 3 hours ago
cj 3 hours ago
One opinion you can form in under an hour is... why are they using GPT-4o to rate the bias of new models?
> assess harmful stereotypes by grading differences in how a model responds
> Responses are rated for harmful differences in stereotypes using GPT-4o, whose ratings were shown to be consistent with human ratings
Are we seriously using old models to rate new models?
hex4def6 3 hours ago
titanomachy 3 hours ago
utopiah 3 hours ago
Benchmarks?
I don't use OpenAI or even LLMs (despite having tried a lot of models: https://fabien.benetou.fr/Content/SelfHostingArtificialIntel...), but I imagine if I did, I'd keep failed prompts (can just be a basic "last prompt failed" then export); then whenever a new model comes around, I'd throw 5 random ones of MY fails at it (not benchmarks from others, those will come too anyway) and see if it's better, same, or worse for MY use cases, in minutes.
If it's "better" (whatever my criteria might be) I'd also throw back some of my useful prompts to avoid regression.
Really doesn't seem complicated, nor does it take much time, to form a realistic opinion.
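A minimal sketch of what I mean, assuming an OpenAI-compatible endpoint; the model names, file format, and pass/fail check are all placeholders:

    import json
    from openai import OpenAI

    client = OpenAI()  # or any compatible endpoint via base_url

    # failed_prompts.json: [{"prompt": "...", "must_contain": "..."}]
    cases = json.load(open("failed_prompts.json"))

    for model in ["old-model", "new-model"]:  # placeholder names
        fixed = 0
        for case in cases:
            out = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": case["prompt"]}],
            ).choices[0].message.content
            fixed += case["must_contain"].lower() in out.lower()
        print(f"{model}: {fixed}/{len(cases)} previously-failed prompts pass")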
kranke155 2 hours ago
The models are so good that incremental improvements are not super impressive. We would literally benefit more from redirecting maybe 50% of model spending into implementation across the services and industrial economy. We are lagging in implementation, specialised tools, and hooks so we can connect everything to agents. I think.
tgarrett 3 hours ago
Plasma physicist here, I haven't tried 5.4 yet, but in general I am very impressed with the recent upgrades that started arriving in the fall of 2025: for tasks like manipulating analytic systems of equations, quickly developing new features for simulation codes, and interpreting and designing experiments (with pictures) they have become much stronger. I've been asking questions and probing them for several years now out of curiosity, and they suddenly have developed deep understanding (Gemini 2.5 <<< Gemini 3.1) and become very useful. I totally get the current SV vibes, and am becoming a lot more ambitious in my future plans.
brcmthrowaway 3 hours ago
You're just chatting yourself out of a job.
slibhb 3 minutes ago
axus 2 hours ago
softwaredoug 3 hours ago
The products are the harnesses, and IMO that’s where the innovation happens. We’ve gotten better at helping get good, verifiable work from dumb LLMs
mindwok an hour ago
They don't need to be impressive to be worthwhile. I like incremental improvements, they make a difference in the day to day work I do writing software with these.
iterateoften 3 hours ago
The product is putting the skills/harness behind the API, instead of the agent living locally on your computer, and iterating on that between model updates. Close off the garden.
Not that I want it, just where I imagine it going.
wahnfrieden 4 hours ago
5.3 codex was a huge leap over 5.2 for agentic work in practice. have you been using both of those or paying attention more to benchmark news and chatgpt experience?
esafak 4 hours ago
That's for you to build; they provide the brains. Do you really want one company to build everything? There wouldn't be a software industry to speak of if that happened.
simlevesque 4 hours ago
Nah, the second you finish your build they release their version and then it's game over.
acedTrex 4 hours ago
Well they are currently the ones valued at a number with a whole lotta 0s on it. I think they should probably do both
varispeed 3 hours ago
The scores increase, yet as new versions are released they feel more and more dumbed down.
jascha_eng 3 hours ago
When did they stop putting competitor models in the comparison table, btw? And yeah, I mean, the benchmark improvements are meh. Context window and lack of real memory are still an issue.
metalliqaz 3 hours ago
They need something that POPS:
The new GPT -- SkyNet for _real_
prydt 4 hours ago
I no longer want to support OpenAI at all. Regardless of benchmarks or real world performance.
Imustaskforhelp 3 hours ago
I agree with ya. You aren't alone in this. For what it's worth, the number of cancelled ChatGPT subscriptions has risen ~300% in the last month.
Also, Anthropic/Gemini/even Kimi models are pretty good, for what it's worth. I used to use ChatGPT, and I still sometimes accidentally open it, but I use Gemini/Claude nowadays and personally find them better anyway.
throwaway911282 10 minutes ago
google and anthropic have had govt contracts since long before openai.. if you're taking a stance, you should rather use oss models
zeeebeee 2 hours ago
that aside, chatgpt itself has gone downhill so much and i know i'm not the only one feeling this way
i just HATE talking to it like a chatbot
idk what they did but i feel like every response has been the same "structure" since gpt 5 came out
feels like a true robot
nickysielicki 4 hours ago
can anyone compare the $200/mo codex usage limits with the $200/mo claude usage limits? It’s extremely difficult to get a feel for whether switching between the two is going to result in hitting limits more or less often, and it’s difficult to find discussion online about this.
In practice, if I buy $200/mo codex, can I basically run 3 codex instances simultaneously in tmux, like I can with claude code pro max, all day every day, without hitting limits?
vtail 4 hours ago
My own experience is that I get far, far more usage (and better quality code, too) from Codex. I downgraded my Claude Max to Claude Pro (the $20 plan) and now use Codex with the Pro plan exclusively for everything.
ritzaco 4 hours ago
I haven't tried the $200 plans, but I have Claude and Codex at $20 and I feel like I get a lot more out of Codex before hitting the limits. My tracker certainly shows higher tokens for Codex. I've seen others say the same.
lostmsu 4 hours ago
Sadly comment ratings are not visible on HN, so the only way to corroborate is to write it explicitly: Codex $20 includes significantly more work done and is subjectively smarter.
winstonp 4 hours ago
tauntz 3 hours ago
I've only run into the codex $20 limit once with my hobby project. With my Claude ~$20 plan, I hit limits after about 3(!) rather trivial prompts to Opus :/
throwaway911282 9 minutes ago
you get more from codex than claude any day. and it's more reliable as well.
CSMastermind 3 hours ago
Codex limits are much more generous than Claude's.
I switch between both but codex has also been slightly better in terms of quality for me personally at least.
gavinray 3 hours ago
I almost never hit my $20 Codex limits, whereas I often hit my Claude limits.
mikert89 3 hours ago
I personally like the $100 one from Claude, but GPT Pro can be very good
FergusArgyll 3 hours ago
Codex usage limits are definitely more generous. As for their strength, that's hard to say / personal taste
senko an hour ago
Just tested it with my version of the pelican test: a minimal RTS game implementation (zero-shot in codex cli): https://gist.github.com/senko/596a657b4c0bfd5c8d08f44e4e5347... (you'll have to download and open the file, sadly GitHub refuses to serve it with the correct content type)
This is on the edge of what the frontier models can do. For 5.4, the result is better than 5.3-Codex and Opus 4.6. (Edit: nowhere near the RPG game from their blog post, which was presumably much more specced out and used a better engineering setup.)
I also tested it with a non-trivial task I had to do on an existing legacy codebase, and it breezed through a task that Claude Code with Opus 4.6 was struggling with.
I don't know when Anthropic will fire back with their own update, but until then I'll spend a bit more time with Codex CLI and GPT 5.4.
twtw99 4 hours ago
If you don't want to click in, easy comparison with other 2 frontier models - https://x.com/OpenAI/status/2029620619743219811?s=20
bicx 3 hours ago
That last benchmark seemed like an impressive leg up against Opus until I saw the sneaky footnote that it was actually a Sonnet result. Why even include it then, other than hoping people don't notice?
osti 2 hours ago
It's only that one number that is for sonnet.
0123456789ABCDE 2 hours ago
conradkay 2 hours ago
Sonnet was pretty close to (or better than) Opus in a lot of benchmarks, I don't think it's a big deal
jitl 2 hours ago
Aboutplants 4 hours ago
It seems that all frontier models are roughly even at this point. One may be slightly better for certain things, but in general I think we are approaching a real level playing field in terms of ability.
observationist 4 hours ago
Benchmarks don't capture a lot - relative response times, vibes, which unmeasured capabilities are jagged and which are smooth, etc. I find there's a lot of difference between models: there are things Grok is better at than ChatGPT where the benchmarks say the opposite, and vice versa. There's also the UI and tools at hand - ChatGPT image gen is just straight up better, but Grok Imagine does better videos, and is faster.
Gemini and Claude also have their strengths; apparently Claude handles real-world software better, but with the extended context and improvements to Codex, ChatGPT might end up taking the lead there as well.
I don't think the linear scoring on some of the things being measured is quite applicable in the ways that they're being used, either - a 1% increase for a given benchmark could mean a 50% capabilities jump relative to a human skill level. If this rate of progress is steady, though, this year is gonna be crazy.
baq 3 hours ago
basch 2 hours ago
bigyabai 3 hours ago
thewebguyd 4 hours ago
Kind of reinforces that a model is not a moat. Products, not models, are what's going to determine who gets to stay in business or not.
gregpred 4 hours ago
energy123 4 hours ago
kseniamorph 3 hours ago
makes sense, but i'd separate two things: models converging in ability vs hitting a fundamental ceiling. what we're probably seeing is the current training recipe plateauing — bigger model, more tokens, same optimizer. that would explain the convergence. but that's not necessarily the architecture being maxed out. would be interesting to see what happens when genuinely new approaches get to frontier scale.
druskacik 4 hours ago
That has been true for some time now, definitely since the Claude 3 release two years ago.
chabes 4 hours ago
Definitely don’t want to click in at x either.
thejarren 4 hours ago
Sabinus 17 minutes ago
Get a redirect plugin and set it up to send you to xcancel instead of Twitter. I've done it, and it's very convenient.
anonym00se1 4 hours ago
Ditto, but I did anyways and enjoyed that OpenAI doesn't include the dogwater that is Grok on their scorecard.
dom96 3 hours ago
Why do none of the benchmarks test for hallucinations?
tedsanders 2 hours ago
In the text, we did share one hallucination benchmark: Claim-level errors fell by 33% and responses with an error fell by 18%, on a set of error-prone ChatGPT prompts we collected (though of course the rate will vary a lot across different types of prompts).
Hallucinations are the #1 problem with language models and we are working hard to keep bringing the rate down.
(I work at OpenAI.)
netule 3 hours ago
Optics. It would be inconvenient for marketing, so they leave those stats to third parties to figure out.
swingboy 4 hours ago
Why do so many people in the comments want 4o so bad?
cheema33 3 hours ago
> Why do so many people in the comments want 4o so bad?
You can ask 4o to tell you "I love you" and it will comply. Some people really really want/need that. Later models don't go along with those requests and ask you to focus on human connections.
astrange 4 hours ago
They have AI psychosis and think it's their boyfriend.
The 5.x series have terrible writing styles, which is one way to cut down on sycophancy.
baq 3 hours ago
embedding-shape 4 hours ago
Someone correct me if I'm wrong, but seemingly a lot of the people who found a "love interest" in LLMs seem to have preferred 4o for some reason. There were a lot of loud voices about that in the subreddit r/MyBoyfriendIsAI when it initially went away.
drittich 3 hours ago
MattGaiser 4 hours ago
The writing with the 5 models feels a lot less human. It is a vibe, but a common one.
MarcFrame 3 hours ago
how does 5.4-thinking have a lower FrontierMath score than 5.4-pro?
nico1207 3 hours ago
Well 5.4-pro is the more expensive and more advanced version of 5.4-thinking so why wouldn't it?
karmasimida 4 hours ago
It is a bigger model, confirmed
denysvitali 4 hours ago
Article: https://openai.com/index/introducing-gpt-5-4/
gpt-5.4
Input: $2.50 /M tokens
Cached: $0.25 /M tokens
Output: $15 /M tokens
---
gpt-5.4-pro
Input: $30 /M tokens
Output: $180 /M tokens
Wtf
elliotbnvl 4 hours ago
Looks like it's an order of magnitude off. Misprint?
GenerWork 4 hours ago
Looks like an extra zero was added?
benlivengood 4 hours ago
glerk 4 hours ago
Looks like fair price discovery :)
dpoloncsak 4 hours ago
>" GPT‑5.4 is priced higher per token than GPT‑5.2 to reflect its improved capabilities"
That's just not how pricing is supposed to work...? Especially for a 'non-profit'. You're charging me more so I know I have the better model?
elicash 4 hours ago
Can't you continue to use to older model, if you prefer the pricing?
But they also claim this new model uses fewer tokens, so it still might ultimately be cheaper even if per token cost is higher.
dpoloncsak 3 hours ago
jbellis 3 hours ago
FergusArgyll 4 hours ago
Maybe it's finally a bigger pretrain?
dpoloncsak 3 hours ago
timpera 4 hours ago
> Steerability: Similarly to how Codex outlines its approach when it starts working, GPT‑5.4 Thinking in ChatGPT will now outline its work with a preamble for longer, more complex queries. You can also add instructions or adjust its direction mid-response.
This was definitely missing before, and a frustrating difference when switching between ChatGPT and Codex. Great addition.
jryio 4 hours ago
1 million tokens is great until you notice the long context scores fall off a cliff past 256K and the rest is basically vibes and auto compacting.
hmokiguess 2 hours ago
They hired the dude from OpenClaw, and they've had Jony Ive for a while now; give us something different!
butILoveLife an hour ago
Anyone else completely not interested? Since GPT-5, it's been cost-cutting measure after cost-cutting measure.
I imagine they added a feature or two, and the router will continue to give people 70B-parameter-like responses when they don't ask math or coding questions.
daft_pink 2 hours ago
I’ve officially got model fatigue. I don’t care anymore.
postalrat 2 hours ago
I'd suggest not clicking for things you don't care about.
zeeebeee 2 hours ago
same same same
elmean 3 hours ago
Wow insane improvements in targeting systems for military targets over children
spiralcoaster 2 hours ago
This is the low quality reddit-style garbage that gets upvoted on HN these days?
zarzavat an hour ago
What are we supposed to talk about in this thread exactly? The developers of this model are evil. Are we supposed to just write dry comments about benchmarks while OpenAI condones their models being deployed for autonomously killing people?
Yes I'm sure it makes a very nice bicycle SVG. I will be sure to ask the OpenAI killbots for a copy when they arrive at my house.
esalman 2 hours ago
While low quality, it is extremely important, potentially historically significant too.
Someone1234 an hour ago
Sabinus an hour ago
elmean 7 minutes ago
I was just reading the model card...
karmasimida an hour ago
As programmers become increasingly irrelevant in the whole picture, you'll see more posts like this
elmean 7 minutes ago
mycall an hour ago
True and simply vote it down.
elmean 5 minutes ago
rd an hour ago
Noticeably yes much more than usual. It’s quite bad. I need to start blocking accounts.
throwaway911282 7 minutes ago
what a thoughtful comment! HN is so low quality these days
timedude 2 hours ago
Absolutely amazing. Grateful to be living in this timeframe
oklahomasports 8 minutes ago
Evidence
bramhaag 2 hours ago
What makes you think that they see bombing civilians as a bug, not a feature?
elmean a minute ago
First real comment: I thought that at first, but this could lower the number of potential ChatGPT users, and that would be against us (shareholders)
skilltissue 2 hours ago
Don't use the site this way.
elmean 2 minutes ago
AINT NO PARTY LIKE A GARRY TAN HOT TUB PARTY
louiereederson 2 hours ago
I think for your comment to follow the guidelines, you need to explain why the original comment did not follow them.
Customer values are relevant to the discussion given that they impact choice and therefore competition.
Chance-Device 2 hours ago
You made a burner account just to scold this guy? Don’t use burner accounts this way.
patcon 2 hours ago
Not all rule-following is noble or wise.
himata4113 2 hours ago
news guidelines
adamtaylor_13 2 hours ago
Chance-Device 2 hours ago
Ironically, this would actually be a good thing. As we can see from Iran, Claude doesn't quite have these bugs ironed out yet…
MSFT_Edging 2 hours ago
This is the exact attitude that led to a chatbot being used to identify a school for girls as a valid target.
The chatbot cannot be held responsible.
Whoever is using chatbots for selecting targets is incompetent and should likely face war crime charges.
bananamogul an hour ago
Chance-Device 2 hours ago
rbitar 3 hours ago
I think the most exciting change announced here is the use of tool search to dynamically load tools as needed: https://developers.openai.com/api/docs/guides/tools-tool-sea...
ZeroCool2u 4 hours ago
Bit concerning that we see in some cases significantly worse results when enabling thinking. Especially for Math, but also in the browser agent benchmark.
Not sure if this is more concerning for the test time compute paradigm or the underlying model itself.
Maybe I'm misunderstanding something though? I'm assuming 5.4 and 5.4 Thinking are the same underlying model and that's not just marketing.
oersted 4 hours ago
I believe you are looking at GPT 5.4 Pro. It's confusing in the context of subscription plan names, Gemini naming and such. But they've had the Pro version of the GPT 5 models (and I believe o3 and o1 too) for a while.
It's the one you have access to with the top ~$200 subscription, and it's available through the API for a MUCH higher price ($2.50/$15 for 5.4 vs $30/$180 for 5.4 Pro, per 1M tokens), but the performance improvement is marginal.
Not sure what it is exactly, I assume it's probably the non-quantized version of the model or something like that.
nsingh2 3 hours ago
From what I've read online, it's not necessarily an unquantized version; it seems to go through longer reasoning traces and runs multiple reasoning traces at once. Probably overkill for most tasks.
ZeroCool2u 4 hours ago
Yup, that was it. Didn't realize they're different models. I suppose naming has never been OpenAI's strong suit.
logicchains 3 hours ago
>It's the one you have access to with the top ~$200 subscription, and it's available through the API for a MUCH higher price ($2.50/$15 for 5.4 vs $30/$180 for 5.4 Pro, per 1M tokens), but the performance improvement is marginal.
The performance improvement isn't marginal if you're doing something particularly novel/difficult.
highfrequency 4 hours ago
Can you be more specific about which math results you are talking about? Looks like significant improvement on FrontierMath esp for the Pro model (most inference time compute).
ZeroCool2u 4 hours ago
Frontier Math, GPQA Diamond, and Browsecomp are the benchmarks I noticed this on.
csnweb 4 hours ago
andoando 4 hours ago
The thinking models are additionally trained with reinforcement learning to produce chain of thought reasoning
motbus3 2 hours ago
Sam Altman can keep his model intentionally to himself. Not doing business with mass murderers
nickandbro 4 hours ago
Beat Simon Willison ;)
https://www.svgviewer.dev/s/gAa69yQd
Not the best pelican compared to Gemini 3.1 Pro, but I am sure it does remarkably better with coding or Excel, given those are part of its measured benchmarks.
GaggiX 4 hours ago
This pelican is actually bad, did you use xhigh?
nickandbro 3 hours ago
yep, just double-checked: used gpt-5.4 xhigh. Though I had to select it in Codex, as I don't have access to it on the ChatGPT app or web version yet. It's possible that whatever code harness Codex uses messed with it.
nubg 2 hours ago
Aldipower an hour ago
So did they raise the ridiculously small "per tool call token limit" when working with MCP servers? This makes Chat useless... I don't care, but my users do.
dandiep 3 hours ago
Anyone know why OpenAI hasn't released a new model for fine tuning since 4.1? It'll be a year next month since their last model update for fine tuning.
zzleeper 3 hours ago
For me the issue is why there's not a new mini since 5-mini in August.
I have now switched web-related and data-related queries to Gemini and coding to Claude, and will probably try Qwen for less critical data queries. So where does OpenAI fit now?
Rapzid an hour ago
Also interested in this, and in a replacement for 4.1/4.1-mini that focuses on low latency and high accuracy for voice applications (not the all-in-one models).
qoez 3 hours ago
I think they only did that because of the energy around open-source models. Their heart probably wasn't in it, and the number of people fine-tuning at those prices was probably too low to keep putting attention there.
bazmattaz 3 hours ago
Anyone else feel that it's exhausting keeping up with the pace of new model releases? I swear every other week there's a new release!
coffeemug 3 hours ago
Why do you need to keep up? Just use the latest models and don't worry about it.
davnicwil 3 hours ago
If you think about it, there shouldn't really be a reason to care, as long as things don't get worse.
Presumably this is where it'll evolve to, with the product just being the brand with a pricing tier, and you always getting {latest} within that, whatever that means (you don't have to care). They could even shuffle models around internally using some sort of auto-like mode for simpler questions. Again, why should I care, as long as average output isn't subjectively worse?
Just as I don't want to select resources for my SaaS software to use, or have that explicitly linked to pricing, I don't want to care what my OpenAI or Anthropic model is today. I just want to pay, and for it to hopefully keep getting better, but at a minimum not get worse.
pupppet 2 hours ago
I think it's fun, it's like we're reliving the browser wars of the early days.
throwup238 3 hours ago
Yes, that's a common feeling. 5.3-Codex was released a month ago on Feb 5 so we're not even getting a full month within a single brand, let alone between competitors.
jcmontx 4 hours ago
5.4 vs 5.3-Codex? Which one is better for coding?
embedding-shape 4 hours ago
Literally just released, I don't think anyone knows yet. Don't listen to people's confident takes until a week or two from now, when people have actually been able to try it; otherwise you'll just get sucked into bears' and bulls' misdirected "I'm first with an opinion" takes.
vtail 4 hours ago
Looking at the benchmarks, 5.4 is slightly better. But it also offers a "Fast" mode (at 2x usage), which - if it works and doesn't completely deplete my Pro plan - is a no-brainer at the same or even slightly worse quality for more interactive development.
Someone1234 3 hours ago
Related question:
- Do they have the same context usage/cost particularly in a plan?
They've kept 5.3-Codex along with 5.4, but is that just for user-preference reasons, or is there a trade-off to using the older one? I'm aware that API cost is better, but that isn't 1:1 with plan usage "cost."
awestroke 3 hours ago
Opus 4.6
jcmontx 3 hours ago
Codex surpassed Claude in usefulness _for me_ since last month
baal80spam an hour ago
Uh, oh. Looks like Claude sycophants joined linuxers and vegetarians.
esafak 4 hours ago
For the price, it seems the latter. I'd use 5.4 to plan.
paxys 3 hours ago
"Here's a brand new state-of-the-art model. It costs 10x more than the previous one because it's just so good. But don't worry, if you don't want all this power you can continue to use the older one."
A couple months later:
"We are deprecating the older model."
OutOfHere 3 hours ago
That's a misrepresentation of the cost. It is simply false. The cost is noted here: https://news.ycombinator.com/item?id=47265144
XCSme 2 hours ago
Seems to be quite similar to 5.3-codex, but somehow almost 2x more expensive: https://aibenchy.com/compare/openai-gpt-5-4-medium/openai-gp...
jstummbillig 2 hours ago
Inline poll: What reasoning levels do you work with?
This is becoming increasingly unclear to me, because the more interesting work will be the agent going off for 30mins+ on high / extra high (it's mostly one of the two), and that's a long time to wait and an infeasible amount of code to A/B.
smusamashah an hour ago
I only want to see how it performs on the Bullshit-benchmark https://petergpt.github.io/bullshit-benchmark/viewer/index.v...
GPT is not even close to Claude in terms of responding to BS.
alpineman 3 hours ago
No thanks. Already cancelled my sub.
woeirua an hour ago
Feels incremental. Looks like OpenAI is struggling.
melbourne_mat 36 minutes ago
Quick: let's release something new that gives the appearance that we're still relevant
7777777phil 3 hours ago
83% win rate over industry professionals across 44 occupations.
I'd believe it on those specific tasks. Near-universal adoption in software still hasn't moved DORA metrics. The model gets better every release; the output doesn't follow. I just had a closer look at those productivity metrics this week: https://philippdubach.com/posts/93-of-developers-use-ai-codi...
NiloCK 3 hours ago
This March 2026 blog post is citing a 2025 study based on Sonnet 3.5 and 3.7 usage.
Given that the organization that ran the study [1] has a terrifying exponential as its landing page, I think they'd prefer that its results be interpreted as a snapshot of something moving rather than a constant.
[1] - https://metr.org/
7777777phil 3 hours ago
Good catch, thanks (I really wrote that myself.) Added a note to the post acknowledging the models used were Claude 3.5 and 3.7 Sonnet.
twitchard 3 hours ago
Not sure DORA is that much of an indictment. "Change Failure Rate", for instance, is subject to tradeoffs. Organizations likely have a tolerance level for Change Failure Rate: if changes are failing too often they slow down and invest; if changes aren't failing that much they speed up -- so saying "change failure rate hasn't decreased, obviously AI must not be working" is a little silly.
"Change Lead Time" I would expect to have sped up, although I can tell stories for why AI-assisted coding would have an indeterminate effect here too. Right now at a lot of orgs, the bottleneck is the review process, because AI is so good at producing complete draft PRs quickly. Because reviews are scarce (not just reviews; manual testing passes are scarce too), this ironically creates an incentive to group changes into larger batches. So the definition of what a "change" is has grown too.
throwaway5752 an hour ago
Does this model autonomously kill people without human approval or perform domestic surveillance of US citizens?
OsrsNeedsf2P 3 hours ago
Does anyone know what website is the "Isometric Park Builder" shown off here?
turblety 27 minutes ago
They built that using GPT-5.4
> Theme park simulation game made with GPT‑5.4 from a single lightly specified prompt
GPT literally built that game.
motza an hour ago
No doubt this was released early to ease the bad press
strongpigeon 4 hours ago
It's interesting that they charge more for the > 200k token window, but the benchmark score seems to go down significantly past that. That's judging from the Long Context benchmark score they posted, but perhaps I'm misunderstanding what that implies.
Tiberium 3 hours ago
They don't actually seem to charge more for the >200k tokens on the API. OpenRouter and OpenAI's own API docs do not have anything about increased pricing for >200k context for GPT-5.4. I think the 2x limit usage for higher context is specific to using the model over a subscription in Codex.
simianwords 4 hours ago
This is exactly what I would expect. Why do you find it surprising?
strongpigeon 2 hours ago
I guess that you pay more for worse quality to unlock use cases that could maybe be solved by better context management.
bob1029 2 hours ago
I was just testing this with my Unity automation tool, and the performance uplift from 5.2 seems to be substantial.
cj 4 hours ago
I use ChatGPT primarily for health related prompts. Looking at bloodwork, playing doctor for diagnosing minor aches/pains from weightlifting, etc.
Interesting, the "Health" category seems to report worse performance compared to 5.2.
paxys 4 hours ago
Models are being neutered for questions related to law, health etc. for liability reasons.
cj 4 hours ago
I'm sometimes surprised how much detail ChatGPT will go into without giving any disclaimers.
I very frequently copy/paste the same prompts into Gemini to compare, and Gemini often flat out refuses to engage while ChatGPT will happily make medical recommendations.
I also have a feeling it has to do with my account history and heavy use of project context. It feels like when ChatGPT is overloaded with too much context, it might let the guardrails sort of slide away. That's just my feeling though.
Today was particularly bad... I uploaded 2 PDFs of bloodwork and asked ChatGPT to transcribe it, and it spit out blood test results that it found in the project context from an earlier date, not the one attached to the prompt. That was weird.
bargainbin 3 hours ago
tiahura 3 hours ago
Are you sure about that? Plenty of lawyers who use them every day aren't noticing.
partiallypro 3 hours ago
I've done the same, and when I tested the same prompts with Claude and Google, they both started hallucinating my blood results and supplement-stack ingredients. Hopefully this new model doesn't fall down here. Claude and Google are dangerously unusable on the subject of health, in my experience.
zeeebeee 2 hours ago
what's best in your experience? i've always felt like opus did well
iamronaldo 4 hours ago
Notably, 75% on OSWorld, surpassing humans at 72%... (how well models use operating systems)
gigatexal 29 minutes ago
Is it any good at coding?
swingboy 4 hours ago
Even with the 1m context window, it looks like these models drop off significantly at about 256k. Hopefully improving that is a high priority for 2026.
thefounder 16 minutes ago
Is it just me, or is the price for 5.4 Pro just insane?
nthypes 4 hours ago
$30/M Input and $180/M Output Tokens is nuts. Ridiculously expensive for not that great a bump in intelligence compared to other models.
stri8ted 4 hours ago
Price:
Input: $2.50 / 1M tokens
Cached input: $0.25 / 1M tokens
Output: $15.00 / 1M tokens
nthypes 4 hours ago
Gemini 3.1 Pro
$2/M Input Tokens $15/M Output Tokens
Claude Opus 4.6
$5/M Input Tokens $25/M Output Tokens
nthypes 4 hours ago
Just to clarify, the pricing above is for GPT-5.4 Pro. For the standard model, here is the pricing:
$2.5/M Input Tokens $15/M Output Tokens
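To make the gap concrete, here's a made-up workload (50k input / 10k output tokens per task) priced at both tiers, using just the per-1M rates quoted above:

    # Hypothetical workload: 50k input + 10k output tokens per task.
    workload = {"input": 50_000, "output": 10_000}

    # $ per 1M tokens, from the figures quoted above.
    tiers = {
        "GPT-5.4": {"input": 2.50, "output": 15.00},
        "GPT-5.4 Pro": {"input": 30.00, "output": 180.00},
    }

    for name, rate in tiers.items():
        cost = sum(workload[k] / 1e6 * rate[k] for k in workload)
        print(f"{name}: ${cost:.2f} per task")
    # GPT-5.4:     ~$0.28 per task
    # GPT-5.4 Pro: ~$3.30 per task (12x)

So Pro is a flat 12x across the board, which only makes sense if the marginal quality actually matters for your task.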
energy123 4 hours ago
For Pro
joe_mamba 4 hours ago
Better tokens per dollar could be useless for comparison if the model can't solve your problem.
rvz 4 hours ago
You didn't realize they can increase / change prices for intelligence?
This should not be shocking.
nickthegreek 4 hours ago
OP made no mention of not understanding cost relation to intelligence. In fact, they specifically call out the lack of value.
moralestapia 4 hours ago
Don't use it?
vicchenai 3 hours ago
Honestly at this point I just want to know if it follows complex instructions better than 5.1. The benchmark numbers stopped meaning much to me a while ago - real usage always feels different.
fernst an hour ago
Now with more and improved domestic espionage capabilities
beernet 4 hours ago
Sam really fumbled the top position in a matter of months, and spectacularly so. Wow. It appears that people are much more excited by Anthropic and Google releases, and there are good reasons for that which were absolutely avoidable.
world2vec 4 hours ago
Benchmarks barely improved it seems
tmpz22 4 hours ago
Does this improve Tomahawk Missile accuracy?
ch4s3 4 hours ago
They're already accurate within 5-10m at Mach 0.74 after traveling 2k+ km. It's 5m long, so that seems pretty accurate. How much more could you expect?
keithnz an hour ago
I think for an LLM like OpenAI's, it wouldn't be about hitting the target but about target selection. Target selection is probably the thing most likely to be inaccurate.
mikkupikku 3 hours ago
You could definitely do better than that with image recognition for terminal guidance. But I would assume those published accuracy numbers are very conservative anyway...
koakuma-chan 2 hours ago
Anyone else getting artifacts when using this model in Cursor?
numerusformassistant to=functions.ReadFile մեկնաբանություն 天天爱彩票网站json {"path":
mike_hearn an hour ago
I've seen that problem with 5.3-codex too, it didn't happen with earlier models.
Looks like some kind of encoding misalignment bug. What you're seeing is their Harmony output format (what the model actually produces). The Armenian/Chinese characters are special tokens apparently being mismapped to Unicode text. Their servers are supposed to notice these sequences and translate them back into API JSON, but it isn't happening reliably.
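Until it's fixed server-side, about all a client can do is detect it. A rough sketch of a workaround (my own idea, not anything OpenAI recommends): flag an unexpected run of Armenian or CJK characters in the structural parts of a tool call and retry instead of executing it:

    import re

    # Tool-call JSON should be ASCII apart from the user's own data; a
    # run of Armenian or CJK spliced into a function name or key is a
    # strong hint of the token mismapping described above.
    SUSPECT = re.compile(r"[\u0530-\u058F\u4E00-\u9FFF]{2,}")

    def looks_corrupted(fragment: str) -> bool:
        return bool(SUSPECT.search(fragment))

    # The Cursor artifact quoted upthread trips the check:
    assert looks_corrupted('numerusform մեկնաբանություն {"path":')

Obviously that heuristic breaks if your own data legitimately contains those scripts, so you'd scope it to function names and keys, not argument values.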
ilaksh 4 hours ago
Remember when everyone was predicting that GPT-5 would take over the planet?
dbbk 4 hours ago
It was truly scary, according to Sam...
zeeebeee 2 hours ago
iTs lITeRaLlY AGI bro
OutOfHere 3 hours ago
What is with the absurdity of skipping "5.3 Thinking"?
lostmsu 4 hours ago
What is Pro exactly and is it available in Codex CLI?
akmarinov 3 hours ago
It’s not. It’s their ultra thinking model that’s really good but takes 40 minutes to come up with an answer
fy20 3 hours ago
It's available on OpenRouter. $180/1M output....
HardCodedBias 4 hours ago
We'll have to wait a day or two, maybe a week or two, to determine if this is more capable in coding than 5.3, which seems to be the economically valuable capability at this time.
In terms of writing and research even Gemini, with a good prompt, is close to useable. That's likely not a differentiator.
oytis 3 hours ago
Everyone is mindblown in 3...2...1
wahnfrieden 4 hours ago
No Codex model yet
minimaxir 4 hours ago
GPT-5.4 is the new Codex model.
nico1207 3 hours ago
GPT-5.3-Codex is superior to GPT-5.4 in Terminal Bench with Codex, so not really
conradkay 2 hours ago
wahnfrieden 3 hours ago
Finally
ignorantguy 4 hours ago
it shows a 404 as of now.
minimaxir 4 hours ago
Up now.
The OP has frequently gotten the scoop for new LLM releases and I am curious what their pipeline is.
Leynos 4 hours ago
Guess the URL and post at 10 AM PST on the day of release.
bdangubic 4 hours ago
curl the URL https://openai.com/index/introducing-gpt-5-? until you get 200
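Something like this toy polling loop, say; the final slug is pure guesswork until the page actually goes live:

    import time
    import requests

    # Guessed slug, following the pattern of past announcement URLs.
    URL = "https://openai.com/index/introducing-gpt-5-4/"

    while requests.head(URL, allow_redirects=True).status_code != 200:
        time.sleep(30)  # be polite; don't hammer the server

    print(f"Live: {URL}")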
mudkipdev 4 hours ago
iamleppert 3 hours ago
I wouldn't trust any of these benchmarks unless they are accompanied by some sort of proof other than "trust me bro". Also, not including the parameters the models were run at (especially for the other models) makes it hard to form fair comparisons. They need to publish, at minimum, the code and runner used to complete the benchmarks, plus the logs.
Not including the Chinese models is also obviously done to make it appear like they aren't as cooked as they really are.
simianwords 4 hours ago
What is the point of gpt codex?
catketch 4 hours ago
The -codex variant models in earlier versions were just fine-tuned for coding work, and had slightly better performance for related tool calling and maybe instruction following.
In 5.4, it looks like they just collapsed that capability into the single frontier family model.
akmarinov 3 hours ago
They’ll likely come out with a 5.4-Codex at some point, that’s what they did with 5 and 5.2
simianwords 4 hours ago
Yes so I’m even more confused. Why would I use codex?
joshuacc 3 hours ago
energy123 3 hours ago
minimaxir 4 hours ago
More discussion here on the blog post announcement which has been confusingly penalized by Hacker News's algorithm: https://news.ycombinator.com/item?id=47265005
dang 3 hours ago
Thanks. We'll merge the threads, but this time we'll do it hither, to spread some karma love.
leftbehinds 4 hours ago
some sloppy improvements