1M context is now generally available for Opus 4.6 and Sonnet 4.6 (claude.com)

1053 points by meetpateltech a day ago

jeremychone 2 hours ago

Interesting, I’ve never needed 1M, or even 250k+ context. I’m usually under 100k per request.

About 80% of my code is AI-generated, with a controlled workflow using dev-chat.md and spec.md. I use Flash for code maps and auto-context, and GPT-4.5 or Opus for coding, all via API with a custom tool.

Gemini Pro and Flash have had 1M context for a long time, but even though I use Flash 3 a lot, and it’s awesome, I’ve never needed more than 200k.

For production coding, I use

- a code map strategy on a big repo. Per file: summary, when_to_use, public_types, public_functions. Each entry is saved until the file changes. With a concurrency of 32, I can usually code-map a huge repo in minutes; a rough sketch of the loop is below. (Typically Flash: cheap, fast, and with very good results.)

- Then, auto context, but based on code lensing: auto context takes some globs that narrow the visibility of what the AI can see, and uses the intersection with the code map to ask the AI for the proper files to put in context. (Typically Flash: cheap, relatively fast, and very good.)

- Then, use a bigger model, GPT 5.4 or Opus 4.6, to do the work. At this point, context is typically between 30k and 80k max.
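
For the mapping step, the loop can be as simple as the sketch below. All names here are illustrative, not from the actual tool; `summarizeFile` is a stand-in for a call to a cheap model like Flash.

```typescript
// Hypothetical sketch of a concurrency-limited code-map pass.
async function summarizeFile(path: string): Promise<string> {
  // stub: call a cheap model and ask for summary / when_to_use / public_types / public_functions
  return `code-map entry for ${path}`;
}

async function buildCodeMap(files: string[], concurrency = 32): Promise<Map<string, string>> {
  const map = new Map<string, string>();
  let next = 0;
  const worker = async () => {
    while (next < files.length) {
      const file = files[next++]; // single-threaded JS: read-and-increment is safe here
      map.set(file, await summarizeFile(file));
    }
  };
  await Promise.all(Array.from({ length: concurrency }, worker));
  return map;
}
```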

What I’ve found is that this process is surprisingly effective at getting a high-quality response in one shot. It keeps everything focused on what’s needed for the job.

Higher precision on the input typically leads to higher precision on the output. That’s still true with AI.

For context, 75% of my code is Rust, and the other 25% is TS/CSS for web UI.

Anyway, it’s always interesting to learn about different approaches. I’d love to understand the use case where 1M context is really useful.

daemonk 2 hours ago

Yeah, this is the simpler and still effective strategy. A lot of people are building sophisticated AST RAG models, but you really just need to ask Claude to build a semantic index for each large-ish piece of code and re-use it when gathering context.

You have to make sure the semantic summary takes up significantly fewer tokens than just reading the code, or it's just a waste of tokens and time.

Then have a skill that uses the git log to lazily refresh the summary cache when needed.

tontinton 10 minutes ago

Yeah, we all converge on the same workflow. In the AI coding agent I'm working on now, I've added an "index" tool that uses tree-sitter to compress a code file and show the AI a skeleton of it.

Here's the implementation for the interested: https://github.com/tontinton/maki/blob/main/maki-code-index%...
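
For anyone curious what such an index looks like in miniature, here's a rough sketch of the idea using the node tree-sitter bindings. This is not the linked implementation, and the node-type matching is approximate (type names vary by grammar).

```typescript
import Parser from "tree-sitter";
// @ts-ignore - grammar packages typically ship without type definitions
import Rust from "tree-sitter-rust";

// Emit a "skeleton": one line per top-level item (fn/struct/enum/impl/...),
// dropping the bodies so the AI sees the shape of the file cheaply.
function skeleton(source: string): string {
  const parser = new Parser();
  parser.setLanguage(Rust);
  const tree = parser.parse(source);
  const lines: string[] = [];
  for (const node of tree.rootNode.children) {
    if (/function|struct|enum|impl|trait|mod/.test(node.type)) {
      lines.push(`${node.startPosition.row + 1}: ${node.text.split("\n")[0]}`);
    }
  }
  return lines.join("\n");
}
```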

smusamashah 2 hours ago

It seems like a very good use of LLMs. You should write a blog post detailing your process, with examples, for people who are not as deep into all the AI tools. I only use the web UI. A lot of what you are saying is beyond me, but it does sound like a clever strategy.

speakbits an hour ago

I think you've hit on the key point here, which is that keeping things scoped to a sufficiently focused area gives better results, rather than necessarily needing more context.

cloverich 2 hours ago

This is really interesting; I've done very high-level code maps, but mapping the entire project seems wild. It works?

So, small model figures out which files to use based on the code map, and then enriches with snippets, so big model ideally gets preloaded with relevant context / snippets up front?

Where does code map live? Is it one big file?

jeremychone an hour ago

So, I have a pro@coder/.cache/code-map/context-code-map.json.

I also have a `.tmpl-code-map.jsonl` in the same folder so all of my tasks can add to it, and then it gets merged into context-code-map.json.

I keep mtime, but I also compute a blake3 hash, so if mtime does not match, but it is just a "git restore," I do not redo the code map for that file. So it is very incremental.
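
The freshness check is roughly this shape (sketched in TypeScript, with sha256 from node:crypto standing in for blake3; field names are illustrative):

```typescript
import { statSync, readFileSync } from "node:fs";
import { createHash } from "node:crypto";

interface MapEntry { mtimeMs: number; hash: string; markdown: string; }

// Returns true only when the file content actually changed.
function needsRemap(path: string, cached: MapEntry | undefined): boolean {
  if (!cached) return true;
  const st = statSync(path);
  if (st.mtimeMs === cached.mtimeMs) return false;              // untouched file
  const hash = createHash("sha256").update(readFileSync(path)).digest("hex");
  if (hash === cached.hash) {                                   // e.g. a `git restore`
    cached.mtimeMs = st.mtimeMs;                                 // refresh mtime, keep the summary
    return false;
  }
  return true;
}
```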

Then the trick is, when sending the code map to AI, I serialize it in a nice, simple markdown format.

- path/to/file.rs
  - summary: ...
  - when to use: ...
  - public types: .., .., ..
  - public functions: .., .., ..

- ...

So the AI does not have to interpret JSON, just clean, structured markdown.
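
The serialization step is tiny; something like this (field names are taken from the description above, the rest is illustrative):

```typescript
interface CodeMapEntry {
  path: string;
  summary: string;
  when_to_use: string;
  public_types: string[];
  public_functions: string[];
}

// Turn the cached JSON entries into the flat markdown the model actually sees.
function toMarkdown(entries: CodeMapEntry[]): string {
  return entries
    .map((e) =>
      [
        `- ${e.path}`,
        `  - summary: ${e.summary}`,
        `  - when to use: ${e.when_to_use}`,
        `  - public types: ${e.public_types.join(", ")}`,
        `  - public functions: ${e.public_functions.join(", ")}`,
      ].join("\n")
    )
    .join("\n");
}
```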

Funny, I worked on this addition to my tool for a week, planning everything, but even today, I am surprised by how well it works.

I have zero sed/grep in my workflow. Just this.

My prompt is pro@coder/coder-prompt.md, the first part is YAML for the globs, and the second part is my prompt.

There is a TUI, but all input and output are files, and the TUI is just there to run it and see the status.

LuxBennu an hour ago

Your code map compresses signal on the context side. Same principle applies on the prompt side: prompts that front-load specifics (file, error, expected behavior) resolve in 1-2 turns. Vague ones spiral into 5-6. 1M context doesn't change that — it just gives you more room for the spiral.

firemelt 2 hours ago

whenever I see a post like this

I say, well yeah, but it's too sophisticated to be practical

jeremychone an hour ago

Fair point, but because I spent a year building and refining my custom tool, this is now the reality for all of my AI requests.

I prompt, press run, and then I get this flow:

- dev setup (dev-chat or plan)

- code-map (incremental: 0s, ~2m for the initial run)

- auto-context (~20s to 40s)

- final AI query (~30s to 2m)

For example, just now, in my Rust code (about 60k LOC), I wanted to change the data model and brainstorm with the AI to find the right design, and here is the auto-context it gave me:

- Reducing 381 context files (1.62 MB)

- Now 5 context files (27.90 KB)

- Reducing 11 knowledge files (30.16 KB)

- Now 3 knowledge files (5.62 KB)

The knowledge files are my "rust10x" best practices, and the context files are the source files.

(edited to fix formatting)

adammarples 2 hours ago

It's not sophisticated at all, he just uses a model to make some documentation before asking another model to work using the documentation

make_it_sure 27 minutes ago

Very interested in this approach, and I'm sure many other people are too. Please do a blog post.

CuriouslyC an hour ago

1M context is super useful with Gemini, not so much for coding, but for data analysis.

dimitri-vs a day ago

The big change here is:

> Standard pricing now applies across the full 1M window for both models, with no long-context premium. Media limits expand to 600 images or PDF pages.

For Claude Code users this is huge - assuming coherence remains strong past 200k tok.

Bombthecat 5 hours ago

If it's not coding, even with 200k context it starts to write gibberish, even with the correct information in the context.

I tried asking questions about Path of Exile 2, and even with web research on, it gave completely wrong information... not just outdated. Wrong.

I think context decay is a bigger problem than we realize.

AnotherGoodName 4 hours ago

FWIW, put a copy of the game folder in a directory and tell Claude to extract the game files and disassemble the game in preparation for questions about it.

As an example of doing this in a session with jagged alliance 3 (an rpg) https://pastes.io/jagged-all-69136

Claude extracting game archives and disassembling leads to far more reliable results than random internet posts.

Lord-Jobo 4 hours ago

Context decay is noticeable within 3 messages, nearly every time. Maybe not substantial, but definitely noticeable.

It's led to me starting new chats with bigger and bigger starting "summary" prompts to catch the model up while refreshing it. Surely there's a way to automate that technique.

eric_cc 4 hours ago

It could also be a skill problem. It would be more helpful if, when people made "LLM sucks" claims, they shared their prompt.

The people I work with who complain about this type of thing communicate their ask to the LLM horribly and expect it to read their minds.

staticman2 2 hours ago

Adding web search doesn't necessarily lead to better information at any context.

In my experience the model will assume the web results are the answer even if the search engine returns irrelevant garbage.

For example, if you ask it a question about New Jersey law and the web results are about New York or about "many states", it'll assume the New York or "many states" info applies to New Jersey.

blueblisters 4 hours ago

I think ChatGPT has a huge advantage here. They have been collecting realistic multi-turn conversational data at a much larger scale. And generally their models appear to be more coherent with larger contexts for general purpose stuff.

gorjusborg 4 hours ago

The question that comes to mind for me after reading your comment is how can a question about a game require that much context?

wouldbecouldbe 4 hours ago

I feel like a few weeks ago I suddenly had a week where, even after 3 messages, it forgot what we did. Seems fixed now.

turbostyler 4 hours ago

We need an MCP for path of building

__MatrixMan__ 3 hours ago

Agreed, there's no getting around the "break it into smaller contexts" problem that lies between us and generally useful AI.

It'll remain a human job for quite a while too. Separability is not a property of vector spaces, so modern AIs are not going to be good at it. Maybe we can manage something similar with simplicial complexes instead. Ideally you'd consult the large model once and say:

> show me the small contexts to use here, give me prompts re: their interfaces with their neighbors, and show me which distillations are best suited to those tasks

...and then a network of local models could handle it from there. But the providers have no incentive to go in that direction, so progress will likely be slow.

reactordev 4 hours ago

That’s not context decay, that’s training data ambiguity. So much misinformation, nerfs, buffs, changes that an LLM can not keep up given the training time required. Do it for a game that has been stable and it knows its stuff.

jnovek 2 hours ago

What were you asking about PoE 2? So far my _general_ experience with asking LLMs about ARPGs has been meh. Except for Diablo 2 but I think that’s just because Diablo 2 has been heavily discussed for ~25 years.

holoduke 2 hours ago

The number one thing you always need to accomplish is feedback loops for Claude, so it's able to shotgun-program its way to a solution.

MikeNotThePope 18 hours ago

Is it ever useful to have a context window that full? I try to keep usage under 40%, or about 80k tokens, to avoid what Dex Horthy calls the dumb zone in his research-plan-implement approach. Works well for me so far.

No vibes allowed: https://youtu.be/rmvDxxNubIg?is=adMmmKdVxraYO2yQ

furyofantares 17 hours ago

I'd been on Codex for a while and with Codex 5.2 I:

1) No longer found the dumb zone

2) No longer feared compaction

Switching to Opus for stupid political reasons, I still have not had the dumb zone - but I'm back to disliking compaction events, and so its smaller context window has really hurt.

I hope they copy OpenAI's compaction magic soon, but I am also very excited to try the longer context window.

alecco 6 hours ago

Offtopic: I find it remarkable the shortened YT url has a tracking cost of 57% extra length. We live in stupid times.

kaizenb 15 hours ago

Thanks for the video.

His fix for "the dumb zone" is the RPI Framework:

● RESEARCH. Don't code yet. Let the agent scan the files first. Docs lie. Code doesn't.

● PLAN. The agent writes a detailed step-by-step plan. You review and approve the plan, not just the output. Dex calls this avoiding "outsourcing your thinking." The plan is where intent gets compressed before execution starts.

● IMPLEMENT. Execute in a fresh context window. The meta-principle he calls Frequent Intentional Compaction: don't let the chat run long. Ask the agent to summarize state, open a new chat with that summary, keep the model in the smart zone.

SkyPuncher 18 hours ago

Yes. I've recently become a convert.

For me, it's less about being able to look back ~800k tokens. It's about being able to flow a conversation for a lot longer without forcing compaction. Generally, I really only need the most recent ~50k tokens, but having the old context sitting around is helpful.

ogig 18 hours ago

When running long autonomous tasks it is quite frequent to fill the context, even several times. You are out of the loop so it just happens if Claude goes a bit in circles, or it needs to iterate over CI reds, or the task was too complex. I'm hoping a long context > small context + 2 compacts.

dimitri-vs 18 hours ago

It's kind of like having a 16 gallon gas tank in your car versus a 4 gallon tank. You don't need the bigger one the majority of the time, but the range anxiety that comes with the smaller one and annoyance when you DO need it is very real.

ricksunny 17 hours ago

Since I've yet to seriously dive into vibe coding or AI-assisted coding: does the IDE experience offer a running tally of the context size (so you know when you're getting close to, or entering, the "dumb zone")?

hrmtst93837 9 hours ago

Maxing out context is only useful if all the information is directly relevant and tightly scoped to the task. The model's performance tends to degrade with too much loosely related data, leading to more hallucinations and slower results. Targeted chunking and making sure context stays focused almost always yields better outcomes unless you're attempting something atypical, like analyzing an entire monorepo in one shot.

Barbing 15 hours ago

Looking at this URL, is that a typo, or did YouTube flip the si tracking parameter?

  youtu.be/rmvDxxNubIg?is=adMmmKdVxraYO2yQ

dev_l1x_be 11 hours ago

I never use these giant context windows. It is pointless. Agents are great at super focused work that is easy to re-do. Not sure what the use case for giant context windows is.

maskull 17 hours ago

After running a context window up high (probably near 70% on Opus 4.6 High) and watching it take 20% bites out of my 5-hour quota per prompt, I've been experimenting with dumping context after completing a task. Seems to be working OK. I wonder if I was running into the long-context premium. Would that apply to Pro subs, or is it just relevant to API pricing?

virtualritz 6 hours ago

I haven't hit the "dumb zone" in two months now. I think this talk is outdated.

I'm using CC (Opus) with thinking, and Codex with xhigh always on.

And the models have gotten really good when you let them do stuff where goals are verifiable by the model. I had Codex fix a Rust B-rep CSG classification pipeline successfully over the course of a week, unsupervised. It had a custom STEP viewer that would take screenshots and feed them back into the model so it could verify the progress (or, respectively, the triangle soup of non-progress) itself.

Codex did all the planning and verification, CC wrote the code.

This would have not been possible six months ago at all from my experience.

Maybe with a lot of handholding; but I doubt it (I tried).

I mean both the problem for starters (requires a lot of spatial reasoning and connected math) and the autonomous implementation. Context compression was never an issue in the entire session, for either model.

saaaaaam 17 hours ago

That video is bizarre. Such a heavy breather.

bushbaba 16 hours ago

Yes. I’ve used it for data analysis

wat10000 7 hours ago

I've used it many times for long-running investigations. When I'm deep in the weeds with a ton of disassembly listings and memory dumps and such, I don't really want to interrupt all of that with a compaction or handoff cycle and risk losing important info. It seems to remain very capable with large contexts at least in that scenario.

twodave 16 hours ago

I mean, try using Copilot on any substantial back-end codebase and watch it eat 90+% just building a plan/checklist. Of course, Copilot is constrained to 120k, I believe? So having 10x that will blow open some doors that have been closed for me in my work so far.

That said, 120k is pleeenty if you’re just building front-end components and have your API spec on hand already.

a_e_k 18 hours ago

I've been using the 1M window at work through our enterprise plan as I'm beginning to adopt AI in my development workflow (via Cline). It seems to have been holding up pretty well until about 700k+. Sometimes it would continue to do okay past that, sometimes it started getting a bit dumb around there.

(Note that I'm using it in more of a hands-on pair-programming mode, and not in a fully-automated vibecoding mode.)

chatmasta 18 hours ago

So a picture is worth 1,666 words?

islewis 18 hours ago

The quality with the 1M window has been very poor for me, specifically for coding tasks. It constantly forgets stuff that has happened in the existing conversation. n=1, ymmv

robwwilliams 16 hours ago

Yes, especially with shifts in focus of a long conversation. But given the high error rates of Opus 4.6 the last few weeks it is possibly due to other factors. Conversational and code prodding has been essential.

hagen8 18 hours ago

Well, the question is what is contributing to the usage, because as the context grows, the number of input tokens increases. A model call with 800K tokens as input is 8 times more expensive than a model call with 100K tokens as input. Especially if we resume a conversation and the cache does not hit, it would be very expensive at API pricing.
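
A toy illustration of why long sessions get expensive even though each turn feels small (numbers are made up; this ignores caching and output tokens):

```typescript
// Every turn resends the whole history as input, so billed input grows quadratically.
function totalInputTokens(newTokensPerTurn: number, turns: number): number {
  let context = 0;
  let billed = 0;
  for (let t = 0; t < turns; t++) {
    billed += context + newTokensPerTurn; // full history + this turn's new tokens
    context += newTokensPerTurn;
  }
  return billed;
}

// 50 turns of ~10k new tokens each ends at a 500k context,
// but ~12.75M input tokens billed across the session.
console.log(totalInputTokens(10_000, 50));
```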

j45 2 hours ago

This might burn through usage faster too though.

jFriedensreich 5 hours ago

yeah it totally does not remain coherent past 200k, would have been too nice.

__MatrixMan__ 2 hours ago

I bet it depends how homogenous the context is. I bet it works ok near 1M in some cases, but as far as I can tell, those cases are rare.

syntaxing 15 hours ago

It's interesting because my career went from higher-level languages (Python) to lower-level ones (C++ and C). Opus and the like are amazing at Python, honestly sometimes better than me, but they occasionally make some really stupid architectural decisions. But when it comes to embedded stuff, it's still like a junior engineer. Unsure if that will ever change, but I wonder if it's just the quality and availability of training data. This is why I find it hard to believe LLMs will replace hardware engineers anytime soon (I was a MechE for a decade).

necovek 9 hours ago

As someone who did Python professionally from a software engineering perspective, I've actually found its Python to be pretty crappy, really: unaware of _good_ idioms, which live outside the tutorials and the likely 90% of Python code out there that was simply hacked together quickly.

I have not tested, but I would expect more niche ecosystems like Rust or Haskell or Erlang to have a better overall training set (developers who care about good engineering focus on them), and potentially produce the best output.

For C and C++, I'd expect a similar situation as with Python: while not as approachable, they are also pushed on beginning software engineers, and the training data would naturally have plenty of bad code.

jeremyjh 7 hours ago

I think it's pretty good at Elixir, so that tracks.

mettamage 4 hours ago

Can you recommend some books that teach these idioms? I know not everything is in books but I suspect a bit of it is

n_u 12 hours ago

I've found it's ok at Rust. I think a lot of existing Rust code is high quality and also the stricter Rust compiler enforces that the output of the LLM is somewhat reasonable.

lemagedurage 10 hours ago

Yes, it's nice to have a strict compiler, so the agent has to keep fixing its bugs until it actually compiles. Rust and TypeScript are great for this.

raincole 7 hours ago

Quite sure it's not about the language but the domain.

ricardobeat 6 hours ago

It is really good at writing C++ for Arduino, can one-shot most programs.

NanoWar 4 hours ago

I'd say the chance of me one shotting C++ is veeeery low. Same for bash scripts etc. This is where the LLM really shines for me.

trenchgun 9 hours ago

LLMs do great with Rust though.

ex-aws-dude 14 hours ago

I've had a similar experience as a graphics programmer who works in C++ every day.

Writing quick Python scripts works a lot better than niche domain-specific code.

nullpoint420 14 hours ago

Unfortunately, I’ve found it’s really good at Wayland and OpenGL. It even knows how to use Clutter and Meta frameworks from the Gnome Mutter stack. Makes me wonder why I learned this all in the first place.

dzonga 5 hours ago

Nor web engineers (backend) who are not doing standard CRUD work.

I have seen these shine on frontend work.

ipnon 10 hours ago

I think the combinatorial space is just too much. When I did web dev it was mostly transforming HTML/JSON from well-defined type A to well-defined type B. Everything is in text. There's nothing to reason about besides what is in the prompt itself. But constructing and maintaining a mental model of a chip and all of its instructions and all of the empirical data from profiling is just too much for SOTA to handle reliably.

anshumankmr 2 hours ago

All while their usage limits are so excessively shitty that I paid them $50 just two days back because I ran out of usage, and they still blocked me from using it during a critical work week (and did not refund my $50 despite my emails and requests, routing me instead to a s*ty AI bot). Anyway, I am using Copilot and OpenCode a lot more these days, which is much better.

praddlebus 2 hours ago

What model(s) do you use with OpenCode? Can you use Opus 4.6 1M? Is it better in terms of usage if you use the same model?

convenwis a day ago

Is there a writeup anywhere on what this means for effective context? I think that many of us have found that even when the context window was 100k tokens the actual usable window was smaller than that. As you got closer to 100k performance degraded substantially. I'm assuming that is still true but what does the curve look like?

esperent 16 hours ago

> As you got closer to 100k performance degraded substantially

In practice, I haven't found this to be the case at all with Claude Code using Opus 4.6. So maybe it's another one of those things that used to be true, and now we all expect it to be true.

And of course when we expect something, we'll find it, so any mistakes at 150k context use get attributed to the context, while the same mistake at 50k gets attributed to the model.

peacebeard 11 hours ago

My personal experience is that Opus 4.6 degrades after a while but the degradation is more subtle and less catastrophic than in the past. I still aggressively clear sessions to keep it sharp though.

dcre 14 hours ago

Personally, even though performance up to 200k has improved a lot with 4.5 and 4.6, I still try to avoid getting up there — like I said in another comment, when I see context getting up to even 100k, I start making sure I have enough written to disk to type /new, pipe it the diff so far, and just say “keep going.” I feel like the dropoff starts around maybe 150k, but I could be completely wrong. I thought it was funny that the graph in the post starts at 256k, which conveniently avoids showing the dropoff I'm talking about (if it's real).

tyleo 18 hours ago

I mentioned this at work, but context still rots at the same rate: 90k tokens consumed gives just as bad results in a 100k context window as in a 1M one.

Personally, I’m on a 6M+ line codebase and had no problems with the old window. I’m not sending it blindly into the codebase though like I do for small projects. Good prompts are necessary at scale.

minimaxir a day ago

The benchmark charts provided are the writeup. Everything else is just anecdata.

FartyMcFarter 18 hours ago

Isn't transformer attention quadratic in complexity in terms of context size? In order to achieve 1M token context I think these models have to be employing a lot of shortcuts.

I'm not an expert but maybe this explains context rot.

vlovich123 17 hours ago

Nope, there are no tricks, unless there have been major architectural shifts I missed. The rot doesn't come from inference tricks to try to bring down the quadratic complexity of the KV cache. Task performance problems are generally a training problem - the longer and larger the data set, the fewer examples you have to train on it. So how do you train the model to behave well - that's where the tricks are. I believe most of it relies on synthetically generated data, if I'm not mistaken, which explains the rot.

iandanforth 5 hours ago

I'm very happy about this change. For long sessions with Claude it was always like a punch to the gut when a compaction came along. Codex/GPT-5.4 is better with compactions so I switched to that to avoid the pain of the model suddenly forgetting key aspects of the work and making the same dumb errors all over again. I'm excited to return to Claude as my daily driver!

minimaxir a day ago

Claude Code 2.1.75 no longer distinguishes between base Opus and 1M Opus: it's the same model. Oddly, I have Pro, where the change is supposedly only for Max+, but I am still seeing this to be the case.

EDIT: Don't think Pro has access to it, a typical prompt just hit the context limit.

The removal of extra pricing beyond 200k tokens may be Anthropic's salvo in the agent wars against GPT 5.4's 1M window and extra pricing for that.

auggierose 18 hours ago

No change for Pro, just checked it, the 1M context is still extra usage.

zaptrem 17 hours ago

I have Max 20x and they're still separate on 2.1.75.

wewewedxfgdf 18 hours ago

The weirdest thing about Claude pricing is their 5X pricing plan is 5 times the cost of the previous plan.

Normally buying the bigger plan gives some sort of discount.

At Claude, it's just "5 times more usage 5 times more cost, there you go".

apetresc 18 hours ago

Those sorts of volume discounts are what you do when you're trying to incentivize more consumption. Anthropic already has more demand than they're logistically able to serve at the moment (look at their uptime chart, it's barely even one nine of reliability). For them, 1 user consuming 5 units of compute is less attractive than 5 users consuming 1 unit.

They would probably implement _diminishing_-value pricing if pure pricing efficiency was their only concern.

auggierose 18 hours ago

It is not the plan they want you to buy. It is a pricing strategy to get you to buy the 20x plan.

radley 18 hours ago

5x Max is the plan I use because the Pro plan limits out so quickly. I don't use Claude full-time, but I do need Claude Code, and I do prefer to use Opus for everything because it's focused and less chatty.

operatingthetan 18 hours ago

I think they are both subsidized so either is a great deal.

cush 3 hours ago

Yeah the free lunch on tokens is almost over. Get them while they’re still cheap

merrvk 13 hours ago

5 times the already subsidised rate is still a discount.

tclancy 15 hours ago

We’ll make it up on volume.

Zambyte 18 hours ago

5 for 5

Frannky 15 hours ago

Opus 4.6 is nuts. Everything I throw at it works. Frontend, backend, algorithms—it does not matter.

I start with a PRD, ask for a step-by-step plan, and just execute one step at a time. Sometimes ideas are dumb, but checking and guiding step by step helps it ship working things in hours.

It was also the first AI I felt, "Damn, this thing is smarter than me."

The other crazy thing is that with today's tech, these things can be made to work at 1k tokens/sec with multiple agents working at the same time, each at that speed.

koreth1 15 hours ago

I wish I had this kind of experience. I threw a tedious but straightforward task at Claude Code using Opus 4.6 late last week: find the places in a React code base where we were using useState and useEffect to calculate a value that was purely dependent on the inputs to useEffect, and replace them with useMemo. I told it to be careful to only replace cases where the change did not introduce any behavior changes, and I put it in plan mode first.

It gave me an impressive plan of attack, including a reasonable way to determine which code it could safely modify. I told it to start with just a few files and let me review; its changes looked good. So I told it to proceed with the rest of the code.

It made hundreds of changes, as expected (big code base). And most of them were correct! Except the places where it decided to do things like put its "const x = useMemo(...)" call after some piece of code that used the value of "x", meaning I now had a bunch of undefined variable references. There were some other missteps too.
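
For anyone who hasn't done this particular refactor: the transformation itself is mechanical, and the failure mode was purely about ordering. A minimal sketch (component and field names are made up):

```tsx
import { useEffect, useMemo, useState } from "react";

// Before: derived state via useState + useEffect.
function TotalBefore({ items }: { items: { price: number }[] }) {
  const [total, setTotal] = useState(0);
  useEffect(() => {
    setTotal(items.reduce((sum, i) => sum + i.price, 0));
  }, [items]);
  return <span>{total}</span>;
}

// After: the same value derived with useMemo. The catch: this declaration must
// appear *before* any code that reads `total`, which is exactly what the agent
// sometimes got wrong when it shuffled the surrounding code.
function TotalAfter({ items }: { items: { price: number }[] }) {
  const total = useMemo(() => items.reduce((sum, i) => sum + i.price, 0), [items]);
  return <span>{total}</span>;
}
```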

I tried to convince it to fix the places where it had messed up, but it quickly started wanting to make larger structural changes (extracting code into helper functions, etc.) rather than just moving the offending code a few lines higher in the source file. Eventually I gave up trying to steer it and, with the help of another dev on my team, fixed up all the broken code by hand.

It probably still saved time compared to making all the changes myself. But it was way more frustrating.

dcre 15 hours ago

One tip I have is that once you have the diff you want to fix, start a new session and have it work on the diff fresh. They’ve improved this, but it’s still the case that the farther you get into context window, the dumber and less focused the model gets. I learned this from the Claude Code team themselves, who have long advised starting over rather than trying to steer a conversation that has started down a wrong path.

I have heard from people who regularly push a session through multiple compactions. I don’t think this is a good idea. I virtually never do this — when I see context getting up to even 100k, I start making sure I have enough written to disk to type /new, pipe it the diff so far, and just say “keep going.” I learned recently that even essentials like the CLAUDE.md part of the prompt get diluted through compactions. You can write a hook to re-insert it but it's not done by default.

This fresh context thing is a big reason subagents might work where a single agent fails. It’s not just about parallelism: each subagent starts with a fresh context, and the parent agent only sees the result of whatever the subagent does — its own context also remains clean.

olalonde 11 hours ago

Same here. I don't understand how people leave it running on an "autopilot" for long periods of time. I still use it interactively as an assistant, going back and forth and stepping in when it makes mistakes or questionable architectural decisions. Maybe that workflow makes more sense if you're not a developer and don't have a good way to judge code quality in the first place.

There's probably a parallel with the CMSes and frameworks of the 2000s (e.g. WordPress or Ruby on Rails). They massively improved productivity, but as a junior developer you could get pretty stuck if something broke or you needed to implement an unconventional feature. I guess it must feel a bit similar for non-developers using tools like Claude Code today.

conception 15 hours ago

Branch first so you can just undo. I think this would have worked with sub agents and /loop maybe? Write all items to change to a todo.md. Have it split up the work with haiku sub agents doing 5-10 changes at a time, marking the todos done, and /loop until all are done. You’ll succeed I suspect. If the main claude instance compacts its context - stop and start from where you left off.

a13n 13 hours ago

If you use eslint and tell it how to run lint in CLAUDE.md it will run lint itself and find and fix most issues like this.

Definitely not ideal, but sure helps.

jdkoeck 11 hours ago

Undefined variable references? Did you not instruct it to run typescript after changes?

stpedgwdgfhgdd 8 hours ago

Start over, create a new plan with the lessons learned.

You need to converge on the requirements.

dyauspitr 14 hours ago

You’re using it wrong. As soon as it starts going off the rails once you’ve repeated yourself, you drop the whole session and start over.

sarchertech 15 hours ago

What kinds of things are you building? This is not my experience at all.

Just today I asked Claude using opus 4.6 to build out a test harness for a new dynamic database diff tool. Everything seemed to be fine but it built a test suite for an existing diff tool. It set everything up in the new directory, but it was actually testing code and logic from a preexisting directory despite the plan being correct before I told it to execute.

I started over and wrote out a few skeleton functions myself then asked it write tests for those to test for some new functionality. Then my plan was to the ask it to add that functionality using the tests as guardrails.

Well the tests didn’t actually call any of the functions under test. They just directly implemented the logic I asked for in the tests.

After $50 and 2 hours I finally got something working only to realize that instead of creating a new pg database to test against, it found a dev database I had lying around and started adding tables to it.

When I managed to fix that, it decided that it needed to rebuild multiple Docker components before each test and tear them down after each one.

After about 4 hours and $75, I managed to get something working that was probably more code than I would have written in 4 hours, but I think it was probably worse than what I would have come up with on my own. And I really have no idea if it works because the day was over and I didn’t have the energy left to review it all.

We've recently been tasked at work with spending more money on Claude (not being more productive; the metric is literally spending more money) and everyone is struggling to do anything like what the posts on HN say they are doing. So far no one in my org in a very large tech company has managed to do anything very impressive with Claude other than bringing down prod 2 days ago.

Yes I’m using planning mode and clearing context and being specific with requirements and starting new sessions, and every other piece of advice I’ve read.

I’ve had much more luck using opus 4.6 in vs studio to make more targeted changes, explain things, debug etc… Claude seems too hard to wrangle and it isn’t good enough for you to be operating that far removed from the code.

extr 14 hours ago

You probably just don't have the hang of it yet. It's very good but it's not a mind reader and if you have something specific you want, it's best to just articulate that exactly as best you can ("I want a test harness for <specific_tool>, which you can find <here>"). You need to explain that you want tests that assert on observable outcomes and state, not internal structure, use real objects not mocks, property based testing for invariants, etc. It's a feedback loop between yourself and the agent that you must develop a bit before you start seeing "magic" results. A typical session for me looks like:

- I ask for something highly general and claude explores a bit and responds.

- We go back and forth a bit on precisely what I'm asking for. Maybe I correct it a few times and maybe it has a few ideas I didn't know about/think of.

- It writes some kind of plan to a markdown file. In a fresh session I tell a new instance to execute the plan.

- After it's done, I skim the broad strokes of the code and point out any code/architectural smells.

- I ask it to review its own work and then critique that review, etc. We write tests.

Perhaps that sounds like a lot but typically this process takes around 30-45 minutes of intermittent focus and the result will be several thousand lines of pretty good, working code.

dcre 14 hours ago

Curious what language and stack. And have people at your company had marginally more success with greenfield projects like prototypes? I guess that’s what you’re describing, though it sounds like it’s a directory in a monorepo maybe?

JoeMerchant 4 hours ago

Try https://github.com/gsd-build/get-shit-done. It's been a game changer for me.

jhatemyjob 14 hours ago

Similar experience. I use these AI tools on a daily basis. I have tons of examples like yours. In one recent instance I explicitly told it in the prompt to not use memcpy, and it used memcpy anyway, and generated a 30-line diff after thinking for 20 minutes. In that amount of time I created a 10-line diff that didn't use memcpy.

I think it's the big investors' extremely powerful incentives manifesting in the form of internet comments. The pace of improvement peaked at GPT-4. There is value in autocomplete-as-a-service, and the "harnesses" like Codex take it a lot farther. But the people who are blown away by these new releases either don't spend a lot of time writing code, or are being paid to be blown away. This is not a hockey stick curve. It's a log curve.

Bigger context windows are a welcome addition. And stuff like JSON inputs is nice too. But these things aren't gonna like, take your SWE job, if you're any good. It's just like, a nice substitute for the Google -> Stack Overflow -> Copy/Paste workflow.

eknkc 14 hours ago

I find that Opus misses a lot of details in the code base when I want it to design a feature or something. It jumps to a basic solution which is actually good but might affect something elsewhere.

GPT 5.4 on Codex CLI has been much more reliable for me lately. I used to have Opus write and Codex review; I now do the opposite (I actually have Codex write and both review in parallel).

So on the latest models for my use case gpt > opus but these change all the time.

Edit: also the harness is shit. Claude Code has been slow, weird, and a resource hog. It refuses to read the now-standardized .agents dirs, so I need symlink gymnastics. Hides as much info as it can… Codex CLI is working much better lately.

toraway 13 hours ago

Codex CLI is so much more pleasant to use than CC. I cancelled my CC subscription after the OpenCode thing, but somewhat ironically have recently found myself naturally trying the native Codex CLI client first more often over OpenCode.

Kinda funny how you don't actually need to use coercion if you put in the engineering work to build a product that's competitive on its own technical merits...

ai_fry_ur_brain 13 hours ago

I'm convinced everyone saying this is building the simplest web apps, and doing magic tricks on themselves.

hparadiz 9 hours ago

I've been building a new task manager in C for Linux.

If you're not using AI you are cooked. You just don't realize it yet.

https://i.imgur.com/YXLZvy3.png

raldi 13 hours ago

What evidence would convince you otherwise?

marginalia_nu 6 hours ago

My experience is that it gets you 80-90% of the way at 20x the speed, but coaxing it into fixing the remaining 10-20% happens at a staggeringly slow speed.

All programming is like this to some extent, but Claude's 80/20 behavior is so much more extreme. It can almost build anything in 15-30 minutes, but after those 15-30 minutes are up, it's only "almost built". Then you need to spend hours, days, maybe even weeks getting past the "almost".

Big part of why everyone seems to be vibe coding apps, but almost nobody seems to be shipping anything.

fbrncci 13 hours ago

I am starting to believe it’s not OPUS but developers getting better at using LLMs across the board. And not realizing they are just getting much better at using these tools.

I also thought it was OPUS 4.5 (also tested a lot with 4.6) and then in February switched to only using auto mode in the coding IDEs. They do not use OPUS (most of the times), and I’m ending up with a similar result after a very rough learning curve.

Now switching back to OPUS I notice that I get more out of it, but it’s no longer a huge difference. In a lot of cases OPUS is actually in the way after learning to prompt more effectively with cheaper models.

The big difference now is that I’m just paying 60-90$ month for 40-50hrs of weekly usage… while I was inching towards 1000$ with OPUS. I chose these auto modes because they don’t dig into usage based pricing or throttling which is a pretty sweet deal.

danielbln 10 hours ago

Opus is not an acronym.

copperx 12 hours ago

I had similar thoughts regarding "we are simply getting better at using them", but then I tried Gemini again and reconsidered.

olalonde 13 hours ago

> PRD

Is it Baader-Meinhof or is everyone on HN suddenly using obscure acronyms?

shujito 13 hours ago

It stands for Product Requirements Document, it is something commonly used in project planning and management.

nvarsj 6 hours ago

Seems commonly used in Big Tech - first time I heard it was in my current job. Now it's seared into my brain since it's used so much. Among many other acronyms which I won't bore you with.

schainks 12 hours ago

> It was also the first AI I felt, "Damn, this thing is smarter than me."

1000% agree. It's also easy to talk to it about something you're not sure it said and derive a better, more elegant solution with simple questioning.

Gemini 3.1 also gives me these vibes.

rafaelmn 10 hours ago

I've seen a few instances of where Claude showed me a better way to do something and many many more instances of where it fails miserably.

Super simple problem:

I had a ZMK keyboard layout definition that I wanted it to convert to QMK for a different keyboard with one key fewer, so it just had to trim one outer key. It took like 45 minutes of back and forth to get it right; I could have done it manually in 30 minutes tops, including looking up docs for everything.

Capability isn't the impressive part it's the tenacity/endurance.

Aperocky 15 hours ago

I had been able to get it into the classic AI loop once.

It was about a problem with calculations around filling a topographical water basin with sedimentation, where the calculation is discrete (e.g. turn-based), and the edge case where both water and sediment would overflow the basin. To keep it simple: the facts were A, B, and C, and it oscillated between explanation 1, which refuted C, explanation 2, which refuted A, and explanation 3, which refuted B.

I'll give it to Opus's training stability that all 3 of my tries consistently got into this loop, so I decided to directly order it to do a brute-force solution that avoided (but didn't solve) the problem.

I did feel like with a human, there's no way that loop would happen a second time, at least for the majority of us. But there is just no way to get through to Opus 4.6.

dzink 15 hours ago

Opus 4.6 is AGI in my book. They won’t admit it, but it’s absolutely true. It shows initiative in not only getting things right but also adding improvements that the original prompt didn't request that match the goals of the job.

prmph 7 hours ago

> Opus 4.6 is AGI in my book.

Not even close. There are still tons of architectural design issues that I'd find it completely useless at, tons of subtle issues it won't notice.

I never run agents by themselves; every single edit they do is approved by me. And, I've lost track of the innumerable times I've had to step in and redirect them (including Opus) to an objectively better approach. I probably should keep a log of all that, for the sake of posterity.

I'll grant you that for basic implementation of a detailed and well-specced design, it is capable.

winrid 15 hours ago

On the adding improvements and being helpful thing, isn't that part of the system prompt?

dyauspitr 14 hours ago

I don't know if Opus is AGI, but on a broader note, that's how we will get AGI. Not some consciousness like people are expecting. It's just going to be a chatbot that's very hard to stump and starts making actual scientific breakthroughs and solving long-standing problems.

eru 15 hours ago

> [...] with multiple agents working at the same time, each at that speed.

Horizontal parallelising of tasks doesn't really require any modern tech.

But I agree that Opus 4.6 with 1M context window is really good at lots of routine programming tasks.

travisgriggs 15 hours ago

Opus helped me brick my RPi CM4 today. It glibly apologized for telling me to use an e instead of a 6 in a boot loader sequence.

Spent an hour or so unraveling the mess. My feelings are growing more and more conflicted about these tools. They are here to stay, obviously.

I’m honestly uncertain about the junior engineers I’m working with who are more productive than they might be otherwise, but are gaining zero (or very little) experience. It’s like the future is a world where the entire programming sphere is dominated by the clueless non technical management that we’ve all had to deal with in small proportion a time or two.

hrishikesh-s 15 hours ago

Opus-4.6 is so far ahead of the rest that I think Anthropic is the winner in winner-take-all

steve-atx-7600 14 hours ago

Codex doesn't seem that far behind. I use the top model available for API-key use and it's gotten faster this month even on the max effort level (not like a cheetah - more like not so damn painful anymore). Plus, it also forks agents in parallel - for speed & to avoid polluting the main context. I.e. it will fork explorer agents while investigating (kind of amusing because they're named after famous scientists).

raincole 14 hours ago

It's so far the best model that answers my questions about Wolfram language.

That being said it's the only use case for me. I won't subscribe to something that I can't use with third party harness.

copperx 12 hours ago

I use a Claude sub with oh-my-pi, but I do so with lots of anxiety, knowing that I will be banned at any moment.

fooker 13 hours ago

I have a PhD in a niche field and this can do my job ;)

Not sure if this means I should get a more interesting job or if we are all going to be at the mercy of UBI eventually.

suzzer99 13 hours ago

We're never getting UBI. See the latest interview with the Palantir CEO where he talks about white collar workers having to take more hands-on jobs that they may not feel as satisfied with. IE - tending their manors and compounds.

RIP widespread middle class. It was a good 80-year run.

_heimdall 13 hours ago

An economy, and likely a society, fails if everyone is at the mercy of a UBI.

ed_elliott_asc 11 hours ago

It's still pretty poor at writing PowerShell.

interpol_p 15 hours ago

I had Opus 4.6 running on a backend bug for hours. It got nowhere. Turned out the problem was in AWS X-ray swizzling the fetch method and not handling the same argument types as the original, which led to cryptic errors.

I had Opus 4.6 tell me I was "seeing things wrong" when I tried to have it correct some graphical issues. It got stuck in a loop of re-introducing the same bug every hour or so in an attempt to fix the issue.

I'm not disagreeing with your experience, but in my experience it is largely the same as what I had with Opus 4.5 / Codex / etc.

toraway 14 hours ago

Haha, reminds me of an unbelievably aggravating exchange with Codex (GPT 5.4 / High) where it was unflinchingly gaslighting me about undesired behavior still occurring after a change it made that it was adamant simply could not be happening.

It started by insisting I was repeatedly making a typo and still would not budge even after I started copy/pasting the full terminal history of what I was entering and the unabridged output, and eventually pivoted to darkly insinuating I was tampering with my shell environment as if I was trying to mislead it or something.

Ultimately it turned out that it forgot it was supposed to be applying the fixes to the actual server instead of the local dev environment, and had earlier in the conversation switched from editing directly over SSH to pushing/pulling the local repo to the remote due to diffs getting mangled.

devld 5 hours ago

But does it still generate slop?

I'm late to the party and I'm just getting started with Anthropic models. I have been finding Sonnet decent enough, but it seems to have trouble naming variables correctly (it's not just that most names are poor and undescriptive; sometimes it names things outright wrong, which is confusing), or it unnecessarily declares and re-declares variables, or encodes and decodes, rather than using the value that's already there, etc. Is Opus better at this?

arcanemachiner 2 hours ago

You really need to try it for yourself. People working in different domains get wildly different results.

scroogey 14 hours ago

Just yesterday I asked it to repeat a very simple task 10 times. It ended up doing it 15 times. It wasn't a problem per se, just a bit jarring that it was unable to follow such simple instructions (it even repeated my desire for 10 repetitions at the start!).

vessenes 15 hours ago

I’ll put out a suggestion you pair with codex or deepthink for audit and review - opus is still prone to … enthusiastic architectural decisions. I promise you will be at least thankful and at most like ‘wtf?’ at some audit outputs.

Also shout out to beads - I highly recommend you pair it with beads from yegge: opus can lay out a large project with beads, and keep track of what to do next and churn through the list beautifully with a little help.

petesergeant 15 hours ago

I've been pairing it with Codex using https://github.com/pjlsergeant/moarcode

The amount of genuine fuck-ups Codex finds makes me skeptical of people who are placing a lot of trust in Claude alone.

vips7L 15 hours ago

Bullshit.

phendrenad2 14 hours ago

The replies to this really make me think that some people are getting left behind the AI age. Colleges are likely already teaching how to prompt, but a lot of existing software devs just don't get it. I encourage people who aren't having success with AI to watch some youtube videos on best practices.

germinalphrase 14 hours ago

Share one

gregharned 13 hours ago

The multi-agent angle is interesting from a cost perspective. At Opus 4.6 pricing ($15/MTok input, $75/MTok output), running several concurrent agents on 1M context sessions gets expensive fast — but the math still works if you're replacing hours of senior engineer time.

The shift I've noticed: 1M context makes "load the whole codebase once, run many agents" viable, whereas before you were constantly re-chunking and losing context. The per-task cost goes up but the time-to-correct-output drops significantly.

The harder problem for most teams is routing — knowing which tasks actually need Opus at 1M vs. Sonnet at 200k. Opus 4.6 at 1M is overkill for 80% of coding tasks. The ROI only works if you're being intentional about when to use it.

edot 8 hours ago

LLM written comments are not allowed on HN. This comment is written by an LLM and the account is fresh.

PeterStuer 2 hours ago

The thing that would get me more excited is how far they could push context coherence before the model loses track. I'm hoping 250k.

elophanto_agent 2 hours ago

finally, enough context to fit my entire codebase AND my excuses for why it doesn't work

jeff_antseed 12 hours ago

the coherence question is the one that matters here. 1M tokens is not the same as actually using 1M tokens well.

we've been testing long-context in prod across a few models and the degradation isn't linear — there's something like a cliff somewhere around 600-700k where instruction following starts getting flaky and the model starts ignoring things it clearly "saw" earlier. its not about retrieval exactly, more like... it stops weighting distant context appropriately.

gemini's problems with loops and tool forgetting that someone mentioned are real. we see that too. whether claude actually handles the tail end of 1M coherently is the real question here, and "standard pricing with no long-context premium" doesn't answer it.

honestly the fact that they're shipping at standard pricing is more interesting to me than the window size itself. that suggests they've got the KV cache economics figured out, which is harder than it sounds.

gskm 11 hours ago

Spot on. That cliff might be less about the model failing at distance and more about noise accumulating faster than signal. In prod, most of what fills the window is file reads, grep output, and tool overhead, i.e., low-value tokens. By 700k you're not really testing long-context reasoning, you're testing the model's ability to find signal in a haystack it built itself.

tariky 10 hours ago

This is amazing. I have to test it with my reverse engineering workflow. I don't know how many people use CC for RE but it is really good at it.

Also, it is really good for writing SketchUp plugins in Ruby. It one-shots plugins that are in some respects better than commercial ones you can buy online.

CC will change the development landscape so much in the next year. It is exciting and terrifying at the same time.

jwilliams 3 hours ago

I'm fairly sure that your best throughput is single-prompt single-shot runs with Claude (and that means no plan, no swarms, etc) -- just with a high degree of work in parallel.

So for me this is a pretty huge change as the ceiling on a single prompt just jumped considerably. I'm replaying some of my less effective prompts today to see the impact.

vessenes 19 hours ago

This is super exciting. I've been poking at it today, and it definitely changes my workflow -- I feel like a full three or four hour parallel coding session with subagents is now generally fitting into a single master session.

The stats claim Opus at 1M is about like 5.4 at 256k -- these needle-in-a-haystack long-context tests don't always track quality reasoning ability, sadly -- but this is still a significant improvement, and I haven't seen dramatic falloff in my tests, unlike with the Q4 '25 models.

p.s. what's up with sonnet 4.5 getting comparatively better as context got longer?

steve-atx-7600 17 hours ago

Did it get better? I used Sonnet 4.5 1M frequently, and my impression was that it was around the same performance but a hell of a lot faster, since the 1M model was willing to spend more tokens at each step vs. preferring more token-cautious tool calls.

vessenes 16 hours ago

Opus 4.6 is wayy better than sonnet 4.5 for sure.

mattfrommars 17 hours ago

Random: are you personally paying for Claude Code or is it paid by you employer?

My employer only pays for GitHub copilot extension

kiratp 16 hours ago

GitHub Copilot CLI lets you use all these models (unless your employer disables them).

https://github.com/features/copilot/cli

Disclosure: work at Msft

celestialcheese 17 hours ago

Both. Employer pays for work max 20x, i pay for a personal 10x for my side projects and personal stuff.

aragonite 17 hours ago

Do long sessions also burn through token budgets much faster?

If the chat client is resending the whole conversation each turn, then once you're deep into a session every request already includes tens of thousands of tokens of prior context. So a message at 70k tokens into a conversation is much "heavier" than one at 2k (at least in terms of input tokens). Yes?

dathery 17 hours ago

That's correct. Input caching helps, but even then at e.g. 800k tokens with all of them cached, the API price is $0.50 * 0.8 = $0.40 per request, which adds up really fast. A "request" can be e.g. a single tool call response, so you can easily end up making many $0.40 requests per minute.
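
To make the "adds up really fast" part concrete (same illustrative numbers as above, not official pricing; a turn with N tool calls pays the cache-read price N+1 times):

```typescript
// Rough per-turn cost when the whole prefix is cached: one request for the prompt,
// plus one more round trip per tool result.
function turnCostUSD(cachedTokens: number, toolCalls: number, cacheReadPerMTok = 0.5): number {
  const perRequest = (cachedTokens / 1_000_000) * cacheReadPerMTok;
  return perRequest * (toolCalls + 1);
}

console.log(turnCostUSD(800_000, 24)); // ≈ $10 for one prompt that triggers 24 tool calls
```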

acjohnson55 16 hours ago

Interesting, so a prompt that causes a couple dozen tool calls will end up costing in the tens of dollars?

jasondclinton 17 hours ago

If you use context caching, it saves quite a lot on costs/budgets. You can cache 900k tokens if you want.
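
For example, with the Python SDK it looks roughly like this (the model id here is just a placeholder; the caching itself is the cache_control block on the big, stable prefix):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    big_repo_dump = open("repo_dump.txt").read()  # the large, stable prefix to cache

    response = client.messages.create(
        model="claude-opus-4-6",  # placeholder id; use whatever model you actually target
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": big_repo_dump,
                "cache_control": {"type": "ephemeral"},  # cache this prefix across requests
            }
        ],
        messages=[{"role": "user", "content": "Summarize the auth module."}],
    )
    print(response.content[0].text)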

thebigspacefuck 3 hours ago

I used this for a bit and I felt like it was slower and generally worse than using 200K with context compaction. Context compaction does lose some things though.

sailfast 4 hours ago

This is great news. The 1M context is much easier to work with than compacting all the time and seems to perform and remember quite well despite the insane amount of data.

jmkozko 3 hours ago

Do subscription users still need to tap into "extra usage" spending to go above 200K tokens?

bob1029 13 hours ago

I've been avoiding context beyond 100k tokens in general. The performance is simply terrible. There's no training data for a megabyte of your very particular context.

If you are really interested in deep NIAH tasks, external symbolic recursion and self-similar prompts+tools are a much bigger unlock than more context window. Recursion and (most) tools tend to be fairly deterministic processes.

I generally prohibit tool calling in the first stack frame of complex agents in order to preserve context window for the overall task and human interaction. Most of the nasty token consumption happens in brief, nested conversations that pass summaries back up the call stack.
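
A minimal sketch of that shape, with the model call stubbed out (none of these helpers are a real library; it's just the recursion-with-summaries pattern):

    def run_model(prompt: str) -> str:
        # stand-in for an actual LLM call (API, CLI harness, etc.)
        return f"<answer: {prompt[:40]}...>"

    def plan(task: str) -> list[str]:
        # stand-in for a cheap decomposition step
        return [f"{task} (part {i})" for i in (1, 2)]

    def solve(task: str, depth: int = 0, max_depth: int = 2) -> str:
        # each level is its own "stack frame": the nested transcript (tool
        # output, intermediate reasoning) stays down here, and only a short
        # summary travels back up to the parent context
        if depth >= max_depth:
            return run_model(f"Solve directly, answer briefly: {task}")
        summaries = [solve(sub, depth + 1) for sub in plan(task)]
        return run_model("Combine these sub-results: " + " | ".join(summaries))

    print(solve("refactor the billing module"))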

k__ 9 hours ago

I heard the middle of the context is often ignored.

Do long context windows make much sense then or is this just a way of getting people to use more tokens?

pixelpoet 18 hours ago

Compared to yesterday, my Claude Max subscription burns usage like absolutely crazy (13% of weekly usage from a fresh reset today with just a handful of prompts on two new C++ projects, no deps) and has become unbearably slow (as in 1 hr for a prompt response). GGWP Anthropic, it was great while it lasted, but this isn't worth the hundreds of dollars.

Spooky23 18 hours ago

Yeah, morning eastern time Claude is brutal.

AbstractH24 4 hours ago

Am I crazy, or wasn't this announced like 2 weeks ago?

Or was that a different company, or not GA? It's all becoming a blur.

yubainu 9 hours ago

1M is truly amazing. However, what is the incidence of hallucination? I haven't found a benchmark, but I feel that maintaining context at 1M would likely increase hallucination. Is there some kind of mechanism to suppress hallucination?

suheilaaita 10 hours ago

This blew my mind the first time I saw it. Another leap in AI that just whooshes by. In a couple of months, every model will offer the same. Can't wait for IDEs like Cursor and VS Code to update their tooling to adapt to this massive change in Claude models.

ionwake 2 hours ago

Have we reached the point where it's "normal" to mostly use AI to code? I'm just wondering because I'm sure it was less than a month ago when I said I hadn't coded manually for over 6 months, and I got several comments about how my code must be terrible.

I'm not butt-hurt, I'm just wondering if the Overton window has shifted yet.

LarsDu88 13 hours ago

The stuff I built with Opus 4.6 in the past 2.5 weeks:

Full clone of Panel de Pon/Tetris attack with full P2P rollback online multiplayer: https://panel-panic.com

An emulator of the MOS 6502 CPU with visual display of the voltage going into the DIP package of the physical CPU: https://larsdu.github.io/Dippy6502/

I'm impressed as fuck, but a part of me deep down knows that I know fuck all about the 6502 or its assembly language and architecture, and now I'll probably never be motivated to do this project in a way where I would've learned all the things I wanted to learn.

adamm255 4 hours ago

That game is AWESOME! The fact that was vibe coded is insane.

LarsDu88 3 hours ago

Honestly, that game wasn't one-shotted. I had longtime PdP enthusiasts play it and give feedback.

aenis 12 hours ago

Sample of one and all that, but it's way, way more sloppy than it used to be for me.

To the extent that I have started making manual fixes in the code - I haven't had to stoop to that in 2 months.

Max subscription, 100k LOC codebases more or less (frontend and backend - same observations).

jFriedensreich 5 hours ago

My testing was extremely disappointing; this is not a context window that magically extends your breathing room for a conversation. I can tell blindly at this point when 150-200k tokens are reached, because the coding quality and coherence just drop by one or two model generations. It's great for the case where you really need a giant context for a specific task, but it changes nothing about needing to compact or hand over at 200k.

margorczynski 18 hours ago

What about response coherence with longer context? Usually, in other models with such big windows, I see the quality rapidly drop once it gets past a certain point.

mvrckhckr 9 hours ago

I never get to more than 20% of the 1M context window, and it’s working great. (Have the same experience in Codex with 5.4.)

chaboud 17 hours ago

Awesome.... With Sonnet 4.5, I had Cline soft trigger compaction at 400k (it wandered off into the weeds at 500k). But the stability of the 4.6 models is notable. I still think it pays to structure systems to be comprehensible in smaller contexts (smaller files, concise plans), but this is great.

(And, yeah, I'm all Claude Code these days...)

causalzap 16 hours ago

I've been using Opus 4.5 for programmatic SEO and localizing game descriptions. If 4.6 truly improves context compaction, it could significantly lower the API costs for large-scale content generation. Has anyone tested its logic consistency on JSON output compared to 4.5?

arizen 13 hours ago

Out of curiosity, what specific use cases on programmatic SEO are you currently doing with Opus?

heraldgeezer 2 hours ago

I feel like I'm the only one here using AI as just a chatbot for research, shopping, advice, etc. and for one-off regex or bash/PS scripts... then again, I'm not a programmer, so.

ofisboy 2 hours ago

I think it's buggy. I keep getting "compacting conversation" even though I restarted the CLI, and I'm for sure not using 5x more.

vicchenai 18 hours ago

The no-degradation-at-scale claim is the interesting part. Context rot has been the main thing limiting how useful long context actually is in practice — curious to see what independent evals show on retrieval consistency across the full 1M window.

apetresc 18 hours ago

I don't think they're claiming "no degradation at scale", are they? They still report a 91.9->78.3 drop. That's just a better drop than everyone else (is the claim).

aarmenante 14 hours ago

Hot take... the 1MM context degrades performance drastically.

aenis 12 hours ago

Same. First time in 2 months that I found it easier to fix the bugs it created manually rather than get it to fix them. It's google-code-CLI-on-gemini-2.5 level bad for me today. Meaning, almost comically bad.

fittingopposite 13 hours ago

I don't get the announcement. Is this included in the standard 5x or 20x Max plans?

arjie 17 hours ago

This is fantastic. I keep having to save to memory with instructions and then tell it to restore to get anywhere on long running tasks.

aliljet 18 hours ago

Are there evals showing how this improves outputs?

apetresc 18 hours ago

Improves outputs relative to what? Compared to previous contexts of 1M, it improves outputs by allowing them to exist (because previously you couldn't exceed 200K). Compared to contexts of <200K, it degrades outputs rather than improves them, but that's what you'd expect from longer contexts. It's still better than compaction, which was previously the alternative.

johnwheeler 18 hours ago

This is incredible. I just blew through $200 last night in a few hours on 1M context. This is the best news I've heard all year with regard to my business.

What is OpenAI's response to this? Do they even have a 1M context window, or is it still opaque and "depends on the time of day"?

hagen8 18 hours ago

Did you use the API or a subscription?

johnwheeler 18 hours ago

Max subscription and "extra usage" billing

dominotw 18 hours ago

I rarely go over 25 percent in Codex, but I hit 80 on Claude Code in just a short time.

8note 16 hours ago

I'm guessing this is why the compacts have started sucking? I just finished building some nicer tools for manipulating the graph so I could compact less frequently and fish out context from the prior session.

Maybe it'll still be useful, though I only have Opus at 1M, not Sonnet yet.

alienchow 13 hours ago

If this is a skill issue, feel free to let me know. In general Claude Code is decent for tooling. Onduty fullstack tooling features that used to sit ignored in the on-caller ticket queue for months can now be easily built in 20 minutes with unit tests and integration tests. The code quality isn't always the best (although what's good code for humans may not be good code for agents) but that's another specific and directed prompt away to refactor.

However, I can't seem to get Opus 4.6 to wire up proper infrastructure. This is especially so if OSS forks are used. It trips up on arguments from the fork source, invents args that don't exist in either, and has a habit of tearing down entire clusters just to fix a Helm chart for "testing purposes". I've tried modifying the CLAUDE.md and SPEC.md with specific instructions on how to do things but it just goes off on a tangent and starts to negotiate on the specs. "I know you asked for help with figuring out the CNI configurations across 2 clusters but it's too complex. Can we just do single cluster?" The entire repository gets littered with random MD files everywhere for directory specific memories, context, action plans, deprecated action plans, pre-compaction memories etc. I don't quite know which to prune either. It has taken most of the fun out of software engineering and I'm now just an Obsidian janitor for what I can best describe as a "clueless junior engineer that never learns". When the auto compaction kicks in it's like an episode of 50 first dates.

Right now I assume this is where the limitation lies, because the literature for real-world infrastructure requiring large contexts and integration is very limited. If anyone has any idea whether Claude Opus is suitable for such tasks, do give some suggestions.

thunkle 17 hours ago

Just have to ask. Will I be spending way more money since my context window is getting so much bigger?

isbvhodnvemrwvn 11 hours ago

Yes, full context is used to generate each new token.

efeecllk 8 hours ago

Finally. Before 1M, I had to spend 60k of context just recapping the past chat and the project.

throw03172019 15 hours ago

The Pentagon may switch to Claude, knowing OpenAI charges premium rates for 1M context.

8cvor6j844qw_d6 18 hours ago

Oh nice, does it mean less game of /compact, /clear, and updating CLAUDE.md with Claude Code?

fnordpiglet 18 hours ago

I’ve been using 1M for a while and it defers it and makes it worse almost when it happens. Compacting a context that big loses a ton of fidelity. But I’ve taken to just editing the context instead (double esc). I also am planning to build an agent to slice the session logs up into contextually useful and useless discarding the useless and keeping things high fidelity that way. (I.e., carve up with a script the jsonl and have subagent haiku return the relevant parts and reconstructing the jsonl)

dominotw 18 hours ago

TIL you can edit context. I keep a running log and /clear, then reload the log.

swader999 17 hours ago

I notice Claude steadily consuming fewer tokens every week too, especially with tool calling.

dkpk 15 hours ago

Is this also applicable for usage in Claude web / mobile apps for chat?

cubefox 4 hours ago

> Standard pricing now applies across the full 1M window for both models, with no long-context premium.

Does that mean it's likely not a Transformer with quadratic attention, but some other kind of architecture, with linear time complexity in sequence length? That would be pretty interesting.

bob1029 4 hours ago

It's almost certainly not quadratic at 1M. That would be wildly infeasible at scale: (10^6)^2 = 10^12. That's a trillion things.

They are probably doing something like putting the original user prompt into the model's environment and providing special tools to the model, along with iterative execution, to fully process the entire context over multiple invokes.

I think the Recursive Language Model paper has a very good take on how this might go. I've seen really good outcomes in my local experimentation around this concept:

https://arxiv.org/abs/2512.24601

You can get exponential scaling with proper symbolic stack frames. Handling a gigabyte of context is feasible, assuming it fits the depth first search pattern.

FartyMcFarter 3 hours ago

They're probably taking shortcuts such as taking advantage of sparsity. There are various tricks like that mentioned in some papers, although the big companies are getting more and more secretive about how their models work so you won't necessarily find proof.

zmmmmm 19 hours ago

Noticed this just now - all of a sudden I have a 1M context window (!!!) without changing anything. It's actually slightly disturbing, because this IS a behavior change. Don't get me wrong, I like having longer context, but we really need to pin down behaviour for how things are deployed.

steve-atx-7600 17 hours ago

You can pin to specific models with --model. Check out their docs. See https://support.claude.com/en/articles/11940350-claude-code-.... You can also pin to a less specific tag like sonnet-4.5[1m] (that's from memory, might be a little off).

zmmmmm 16 hours ago

sure - but the model hasn't changed. I'm specifying it explicitly. But suddenly the context window has. I'm not using Claude Code, this is an application built against Bedrock APIs. I assume there's a way I could be specifying the context window and I'm just using API defaults. But it definitely makes me wonder what else I'm not controlling that I really should be.

phist_mcgee 18 hours ago

Anthropic is famous for changing things under your feet. Claude code is basically alpha software with a global footprint.

vips7L 15 hours ago

Friends, just write the code. It’s not that hard.

AussieWog93 13 hours ago

I hear what you're saying, but for a lot of people coding isn't something we can throw 40+ hours per week at.

My main job is running a small eComm business, and I have to both develop software automations for the office (to improve productivity long-term) while also doing non-coding day to day tasks. On top of this, I maintain an open source project after hours. I've also got a young family with 3 kids.

I'm not saying Claude is the damn singularity or anything, but stuff is getting done now that simply wasn't being addressed before.

fixxation92 12 hours ago

100% agree with this. As much as I hate the term "game-changer"... it truly is. I'm working on projects that I've always wanted to do but never had the capacity for (or the money to pay a small team of devs to build something). All these things you thought you'd never have a chance to do are suddenly real and completely possible. I know there are a lot of AI haters out there, but I'm pretty sure that in time all devs will embrace it and truly enjoy working with it.

mrgaro 8 hours ago

Not hard, but time consuming. In the past two weeks I've had Claude Code write me around 35k lines of code across 350 commits. It's a project that is having a positive impact on the company, but we would never have started it without CC, as the effort would have been too big compared to the impact.

nkzd 10 hours ago

It's not that interesting.

righthand 12 hours ago

You're witnessing the rise of the Developer Technician or Software Technician. They can get a machine to print out an application but you will still need an engineer to know how it works or to get it working. This used to be juniors learning to be senior devs/engineers. Now it is a split between technicians and engineers. The market will be up shit creek when all their technicians can't vibe code their way out of not understanding the code.

andrewstuart 14 hours ago

Only someone not using Claude could equate it with human coding.

vips7L 14 hours ago

Only someone not using their brain could equate Claude to using their intelligence.

drcongo 8 hours ago

Could be pure coincidence, but my Claude Code session last night was an absolute nightmare. It kept forgetting things it had done earlier in the session and why it had done them, messed up a git merge so badly that it lost the CLAUDE.md file along with a lot of other stuff, and then started running commands on the host machine instead of inside the container because it no longer had a CLAUDE.md to tell it not to. Last night was the first time I've ever sworn at it.

xvector 8 hours ago

I think this is just the nature of a nondeterministic system; occasionally you're gonna be unlucky enough to encounter the leftmost segment of the bell curve.

In my experience dumping a summary + starting a fresh session helps in these cases.

shanjai_raj7 10 hours ago

Are the costs the same as the 200K-context Opus 4.6?

Compaction has been really good in Claude; we don't even notice the switch.

holoduke 10 hours ago

I am currently mass translating millions of records with short descriptions. Somehow tokens are consumed extremely fast. I have 3 max memberships. And all 3 of them are hitting the 5 hour limit in about 5 to 10 minutes. Still don't understand why this is happening.

cbg0 9 hours ago

Unless you're clearing the context for each description, or processing them in parallel with subagents, your context window will grow with each short description added to it, making you hit those hourly limits.

LoganDark 15 hours ago

Finally, I don't have to constantly reload my Extra Usage balance when I already pay $200/mo for their most expensive plan. I can't believe they even did that. I couldn't use 1M context at all because I already pay $200/mo and it was going to ask me for even more.

Next step should be to allow fast mode to draw from the $200/mo usage balance. Again, I pay $200/mo; I should at least be able to send a single message without being asked to cough up more (one message in fast mode costs a few dollars). One would think $200/mo would give me some measure of access to their more expensive capabilities, but it seems it's bucketed to only the capabilities that are offered even to free users.

aenis 12 hours ago

I find it hard to understand that people consider $200 p/m a lot for what they are getting. Expensive compared to what? A netflix sub?

An hour of a senior dev is at least $100, depending on where one lives. Since Claude saves me hours every day, it pays for itself almost instantly. I think the economic value of the Claude subscription is on the order of $20-40k a month for a pro.

LoganDark 8 hours ago

When did I say anything about what I'm getting? I said I pay $200/mo and I expect that to cover anything up to my usage limit. I don't expect any slightly non-standard configuration to immediately ignore the high subscription price that I pay and go straight to "extra usage" that has to be billed separately by the token. I wouldn't even care if fast mode used 10x or 50x the usage as long as I could actually USE the balance that I already pay for. I thought the point of extra usage was to be for overage.

dominotw 18 hours ago

Can someone tell me how to make this instruction work in Claude Code:

"put a high-level description of the change you are making in log.md after every change"

It works perfectly in Codex, but I just can't get Claude to do it automatically. I always have to ask "did you update the log?".

8note 13 hours ago

What's the need? You have the session in a file as a DAG. You can summarize to a log whenever you want; it doesn't need to happen as you go.

Earlier today I actually spent a bit of time asking Claude to make an MCP to introspect that: break the session down into summarized topics, so I could try dropping some out or replacing the detailed messages with a summary. The idea being to compact out a small chunk to save on context window, rather than getting it back to empty.

The file is just there, though; you can run jq against it to get a list of writes and have an agent summarize.

dominotw 7 hours ago

I don't work in just one session though. Some tasks take me days and many sessions. Also, what happens when your session compacts? I'm not sure what you are suggesting here. What do you do with these summarized topics from your session?

Also, I want CI to resume my task from the log and do code review with that context.

https://www.anthropic.com/engineering/effective-harnesses-fo...

"Read the git logs and progress files to get up to speed on what was recently worked on."

prettyblocks 17 hours ago

I imagine you can do this with a hook that fires every time Claude stops responding:

https://code.claude.com/docs/en/hooks-guide

steve-atx-7600 17 hours ago

Back up your config and ask Claude. I've done this for all kinds of things like MCP and agent config.

sergiotapia 14 hours ago

Use Claude hooks: in .claude/settings.json you can have it run on different Claude events like "PreToolUse" or "Stop", and for those events you pass in the commands you want it to run.

For example, on the "Stop" event, run foobar.sh, and in foobar.sh do cool stuff like format your code, run tests, append to log.md, etc. Something roughly like the sketch below.
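
From memory, the settings file looks roughly like this (double-check the exact schema against the hooks guide linked in the sibling comment):

    {
      "hooks": {
        "Stop": [
          {
            "hooks": [
              { "type": "command", "command": "./foobar.sh" }
            ]
          }
        ]
      }
    }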

gaigalas 18 hours ago

I'm getting close to my goal of fitting an entire bootstrappable-from-source system source code as context and just telling Claude "go ahead, make it better".

sergiotapia 14 hours ago

Maybe I'm thinking too small, or maybe it's because I've been using these AI systems since they were first launched, but it feels wrong to just saturate the hell out of the context, even if it can take 1 million tokens.

Maybe I need to unlearn this habit?

gskm 11 hours ago

I think your instinct is right. More context isn't free, even when the window supports it, and the model still has to attend to everything in there, and noise dilutes the signal. A cleaner, smaller context consistently gives better outputs than a bloated one, regardless of window size. For sure, the 1M window is great for not having to compact mid-task. But "I can fit more" and "I should put more in" are very different things. At least in my mind.

alienbaby 16 hours ago

Is this the market playing out in front of our eyes, slice by slice? OK, maybe not, but watching these entities duke it out is kinda amusing. There will be consequences, but we may as well sit back for the ride; who knows where we are going?

nemo44x 16 hours ago

Has anyone started a project to replace Linux yet?

dude250711 8 hours ago

No, because it's not a hello-world Electron/React "app".

jf___ 14 hours ago

there is a parallel between managing context windows and hard real-time system engineering.

A context window is a fixed-size memory region. It is allocated once, at conversation start, and cannot grow. Every token consumed — prompt, response, digression — advances a pointer through this region. There is no garbage collector. There is no virtual memory. When the space is exhausted, the system does not degrade gracefully: it faults.

This is not metaphor by loose resemblance. The structural constraints are isomorphic:

No dynamic allocation. In a hard realtime system, malloc() at runtime is forbidden — it fragments the heap and destroys predictability. In a conversation, raising an orthogonal topic mid-task is dynamic allocation. It fragments the semantic space. The transformer's attention mechanism must now maintain coherence across non-contiguous blocks of meaning, precisely analogous to cache misses over scattered memory.

No recursion. Recursion risks stack overflow and makes WCET analysis intractable. In a conversation, recursion is re-derivation: returning to re-explain, re-justify, or re-negotiate decisions already made. Each re-entry consumes tokens to reconstruct state that was already resolved. In realtime systems, loops are unrolled at compile time. In LLM work, dependencies should be resolved before the main execution phase.

Linear allocation only. The correct strategy in both domains is the bump allocator: advance monotonically through the available region. Never backtrack. Never interleave. The "brainstorm" pattern — a focused, single-pass traversal of a problem space — works precisely because it is a linear allocation discipline imposed on a conversation.
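
A toy illustration of the bump-allocator framing (purely for shape; the numbers are made up):

    class ContextWindow:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.used = 0  # the "pointer" only ever advances

        def allocate(self, tokens: int) -> None:
            if self.used + tokens > self.capacity:
                raise MemoryError("context exhausted")  # no GC, no paging: a hard fault
            self.used += tokens

    ctx = ContextWindow(1_000_000)
    ctx.allocate(60_000)    # system prompt + plan
    ctx.allocate(300_000)   # file reads and tool output
    print(ctx.capacity - ctx.used, "tokens left")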

rhubarbtree 9 hours ago

There is compaction, which is analogous to GC.