Claude mixes up who said what (dwyer.co.za)

367 points by sixhobbits 9 hours ago

Latty 9 hours ago

Everything to do with LLM prompts reminds me of people doing regexes to try and sanitise input against SQL injections a few decades ago, just papering over the flaw but without any guarantees.

It's weird seeing people just adding a few more "REALLY REALLY REALLY REALLY DON'T DO THAT" lines to the prompt and hoping. To me it's just an unacceptable risk, and any system using these needs to treat the entire LLM as untrusted the second you put any user input into the prompt.

fzeindl 7 hours ago

The principal security problem of LLMs is that there is no architectural boundary between data and control paths.

But this combination of data and control into a single, flexible data stream is also the defining strength of an LLM, so it can’t be taken away without also taking away the benefits.

andruby 4 hours ago

This was a problem with early telephone lines, which was easy to exploit (see Woz & Jobs' Blue Box). It got solved by separating the voice and control planes via SS7. Maybe LLMs need this separation as well.

VikingCoder 5 hours ago

The "S" in "LLM" is for "Security".

notatoad 3 hours ago

As the article says: this doesn’t necessarily appear to be a problem in the LLM, it’s a problem in Claude Code. Claude Code seems to leave it up to the LLM to determine which messages came from whom, but it doesn’t have to do that.

There is a deterministic architectural boundary between data and control in Claude Code, even if there isn’t in Claude.

mt_ 6 hours ago

Exactly like human input to output.

clickety_clack 6 hours ago

It’s easier not to have that separation, just like it was easier not to separate them before LLMs. This is architectural stuff that just hasn’t been figured out yet.

groby_b 3 hours ago

"The principal security problem of von Neumann architecture is that there is no architectural boundary between data and control paths"

We've chosen to travel that road a long time ago, because the price of admission seemed worth it.

hacker_homie 8 hours ago

I have been saying this for a while: the issue is that there's no good way to do structured LLM queries yet.

There was an attempt to make a separate system prompt buffer, but it didn't work out, and people want longer general contexts. I suspect we will end up back at something like this soon.

TeMPOraL 7 hours ago

I've been saying this for a while, the issue is that what you're asking for is not possible, period. Prompt injection isn't like SQL injection, it's like social engineering - you can't eliminate it without also destroying the very capabilities you're using a general-purpose system for in the first place, whether that's an LLM or a human. It's not a bug, it's the feature.

spprashant 7 hours ago

The problem is, once you accept that it is needed, you can no longer push AI as a general intelligence that has a superior understanding of the language we speak.

A structured LLM query is a programming language and then you have to accept you need software engineers for sufficiently complex structured queries. This goes against everything the technocrats have been saying.

HPsquared 8 hours ago

Fundamentally there's no way to deterministically guarantee anything about the output.

xigoi 5 hours ago

How long is it going to take before vibe coders reinvent normal programming?

this_user 6 hours ago

> there's no good way to do LLM structured queries yet

Because LLMs are inherently designed to interface with humans through natural language. Trying to graft a machine interface on top of that is simply the wrong approach, because it is needlessly computationally inefficient, as machine-to-machine communication does not - and should not - happen through natural language.

The better question is how to design a machine interface for communicating with these models. Or maybe how to design a new class of model that is equally powerful but designed machine-first. That could also potentially solve a lot of the current bottlenecks with the availability of compute resources.

sornaensis 6 hours ago

IMO the solution is the same as org security: fine grained permissions and tools.

Models/Agents need a narrow set of things they are allowed to actually trigger, with real security policies, just like people.

You can mitigate agent-to-agent triggers by not allowing direct prompting, instead feeding the structured output of tool A into agent B.
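
A minimal sketch of what that gate could look like, with all names illustrative - the point being that the harness, not the model, decides whether a tool call actually runs:

    # Per-agent tool allowlist: a deterministic check the model can't talk its way past.
    ALLOWED_TOOLS = {
        "researcher": {"read_file", "web_search"},
        "deployer": {"read_file", "terraform_plan"},  # deliberately no terraform_apply
    }

    def run_tool_call(agent: str, tool: str, args: dict):
        if tool not in ALLOWED_TOOLS.get(agent, set()):
            raise PermissionError(f"{agent} may not call {tool}")
        # ...dispatch to the real tool here, and pass only its structured
        # output (never free-form prompts) on to the next agent.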

adam_patarino 6 hours ago

It’s not a query/prompt thing though, is it? No matter the input, LLMs rely on some degree of randomness. That’s what makes them what they are. We are just trying to force them into deterministic execution, which goes against their nature.

GeoAtreides 7 hours ago

>structured queries

There's always pseudo-code: instead of generating plans, generate pseudo-code with a specific granularity (from high-level to low-level), read the pseudo-code, validate it, and then transform it into code.

codingdave 6 hours ago

That seems like an acceptable constraint to me. If you need a structured query, LLMs are the wrong solution. If you can accept ambiguity, LLMs may be the right solution.

htrp 7 hours ago

whatever happened to the system prompt buffer? why did it not work out?

hacker_homie 6 hours ago

HeavyStorm 7 hours ago

The real issue is expecting an LLM to be deterministic when it's not.

Zambyte 7 hours ago

Language models are deterministic unless you add random input. Most inference tools add random input (the seed value) because it makes for a more interesting user experience, but that is not a fundamental property of LLMs. I suspect determinism is not the issue you mean to highlight.
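
A quick way to see this, assuming the Hugging Face transformers library and any small causal LM - greedy decoding adds no random input, so it repeats exactly (modulo floating-point/batching quirks on some hardware):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The capital of France is", return_tensors="pt")
    # do_sample=False selects the argmax token at every step: no seed, no sampling.
    a = model.generate(**inputs, do_sample=False, max_new_tokens=10)
    b = model.generate(**inputs, do_sample=False, max_new_tokens=10)
    assert tok.decode(a[0]) == tok.decode(b[0])  # same output both times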

WithinReason 7 hours ago

Oh how I wish people understood the word "deterministic"

curt15 6 hours ago

LLMs are deterministic in the sense that a fixed linear regression model is deterministic. Like linear regression, however, they do encode a statistical model of whatever they're trying to describe -- natural language for LLMs.

timcobb 7 hours ago

They are deterministic: open a dev console and run the same prompt two times w/ temperature = 0.

baq 7 hours ago

LLMs are essentially pure functions.

hydroreadsstuff 8 hours ago

I like the Dark Souls model for user input - messages. https://darksouls.fandom.com/wiki/Messages Premeditated words and sentence structure. With that there is no need for moderation or anti-abuse mechanics. Not saying this is 100% applicable here. But for their use case it's a good solution.
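
For anyone unfamiliar, the game builds messages from fixed templates and word lists, so no free text ever enters the system - roughly like this sketch (templates and words illustrative):

    TEMPLATES = ["{thing} ahead", "try {thing}", "be wary of {thing}"]
    WORDS = ["enemy", "treasure", "trap", "jumping"]

    def compose(template_idx: int, word_idx: int) -> str:
        # Users only ever pick indices, never type text.
        return TEMPLATES[template_idx].format(thing=WORDS[word_idx])

    print(compose(0, 1))  # "treasure ahead"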

optionalsquid 8 hours ago

But Dark Souls also shows just how limited the vocabulary and grammar have to be to prevent abuse. And even then you’ll still see people think up workarounds. Or, in the words of many a Dark Souls player, “try finger but hole”

nottorp 8 hours ago

But then... you'd have a programming language.

The promise is to free us from the tyranny of programming!

thaumasiotes 8 hours ago

> I like the Dark Souls model for user input - messages.

> Premeditated words and sentence structure. With that there is no need for moderation or anti-abuse mechanics.

I guess not, if you're willing to stick your fingers in your ears, really hard.

If you'd prefer to stay at least somewhat in touch with reality, you need to be aware that "predetermined words and sentence structure" don't even address the problem.

https://habitatchronicles.com/2007/03/the-untold-history-of-...

> Disney makes no bones about how tightly they want to control and protect their brand, and rightly so. Disney means "Safe For Kids". There could be no swearing, no sex, no innuendo, and nothing that would allow one child (or adult pretending to be a child) to upset another.

> Even in 1996, we knew that text-filters are no good at solving this kind of problem, so I asked for a clarification: "I’m confused. What standard should we use to decide if a message would be a problem for Disney?"

> The response was one I will never forget: "Disney’s standard is quite clear:

> No kid will be harassed, even if they don’t know they are being harassed."

> "OK. That means Chat Is Out of HercWorld, there is absolutely no way to meet your standard without exorbitantly high moderation costs," we replied.

> One of their guys piped up: "Couldn’t we do some kind of sentence constructor, with a limited vocabulary of safe words?"

> Before we could give it any serious thought, their own project manager interrupted, "That won’t work. We tried it for KA-Worlds."

> "We spent several weeks building a UI that used pop-downs to construct sentences, and only had completely harmless words – the standard parts of grammar and safe nouns like cars, animals, and objects in the world."

> "We thought it was the perfect solution, until we set our first 14-year old boy down in front of it. Within minutes he’d created the following sentence:

> I want to stick my long-necked Giraffe up your fluffy white bunny.

perching_aix 8 hours ago

It's less about security in my view, because as you say, you'd want to ensure safety using proper sandboxing and access controls instead.

It hinders the effectiveness of the model. Or at least I'm pretty sure it getting high on its own supply (in this specific unintended way) is not doing it any favors, even ignoring security.

sanitycheck 8 hours ago

It's both, really.

The companies selling us the service aren't saying "you should treat this LLM as a potentially hostile user on your machine and set up a new restricted account for it accordingly", they're just saying "download our app! connect it to all your stuff!" and we can't really blame ordinary users for doing that and getting into trouble.

cookiengineer 8 hours ago

Before 2023 I thought the way Star Trek portrayed humans fiddling with tech and not understanding any side effects was fiction.

After 2023 I realized that's exactly how it's going to turn out.

I just wish those self proclaimed AI engineers would go the extra mile and reimplement older models like RNNs, LSTMs, GRUs, DNCs and then go on to Transformers (or the Attention is all you need paper). This way they would understand much better what the limitations of the encoding tricks are, and why those side effects keep appearing.

But yeah, here we are, humans vibing with tech they don't understand.

dijksterhuis 8 hours ago

curiosity (will probably) kill humanity

although whether humanity dies before the cat is an open question

hacker_homie 8 hours ago

Is this new, though? I don't know how to make a drill, but I use one. I don't know how to make a car, but I drive one.

The issue I see is the personification, some people give vehicles names, and that's kinda ok because they usually don't talk back.

I think, like every technological leap, people will learn to deal with LLMs. We have words like "hallucination", which really is the non-personified version of lying. The next few years are going to be wild for sure.

sheepscreek 3 hours ago

Honestly I try to treat all my projects as sandboxes, give the agents full autonomy for file actions in their folders. Just ask them to commit every chunk of related changes so we can always go back — and sync with remote right after they commit. If you want to be more pedantic, disable force push on the branch and let the LLMs make mistakes.

But what we can’t afford to do is leave the agents unsupervised. You can never tell when they’ll start acting drunk and do something stupid and unthinkable. Also, you absolutely need to do routine deep audits of random features in your projects, and often you’ll be surprised to discover some awkward (mis)interpretation of instructions despite having solid test coverage (with all tests passing)!

andai an hour ago

I tried to get GPT to talk like a regular guy yesterday. It was impossible for it to maintain adherence. It kept defaulting back to markdown and bullet points, after the first message. (Funny cause it scores highest on the instruction following benchmarks.)

Might seem trivial but if it can't even do a basic style prompt... how are you supposed to trust it with anything serious?

PunchyHamster 4 hours ago

It somehow feels worse than regexes. At least there you can see the flaws before they happen.

Kye 6 hours ago

Modern LLMs do a great job of following instructions, especially when it comes to conflict between instructions from the prompter and attempts to hijack it in retrieval. Claude's models will even call out prompt injection attempts.

Right up until it bumps into the context window and compacts. Then it's up to how well the interface manages carrying important context through compaction.

morkalork 7 hours ago

We used to be engineers, now we are beggars pleading for the computer to work

vannevar 5 hours ago

I don't know, "pleading for the computer to work" pretty much sums up my entire 40-year career in software. Only the level of abstraction has changed.

jmyeet 2 hours ago

I'm reminded of Asimov's Three Laws of Robotics [1]. It's a nice idea, but it immediately comes up against Gödel's incompleteness theorems [2]. Formal proofs have limits in software, but what robots (or, now, LLMs) are doing is so general that I think there's no way to guarantee limits to what the LLM can do. In short, it's a security nightmare (like you say).

[1]: https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

[2]: https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

hansmayer 7 hours ago

"Make this application without bugs" :)

otabdeveloper4 6 hours ago

You forgot to add "you are a senior software engineer with PhD level architectural insights" though.

paganel 4 hours ago

orbital-decay 5 hours ago

Claude in particular has nothing to do with it. I see many people are discovering the well-known fundamental biases and phenomena in LLMs again and again. There are many of those. The best intuition is treating the context as "kind of but not quite" an associative memory, instead of a sequence or a text file with tokens. This is vaguely similar to what humans are good and bad at, and makes it obvious what is easy and hard for the model, especially when the context is already complex.

Easy: pulling the info by association with your request, especially if the only thing it needs is repeating. Doing this becomes increasingly harder if the necessary info is scattered all over the context and the pieces are separated by a lot of tokens in between, so you'd better group your stuff - similar should stick to similar.

Unreliable: Exact ordering of items. Exact attribution (the issue in OP). Precise enumeration of ALL same-type entities that exist in the context. Negations. Recalling stuff from the middle of long pieces without clear demarcation, and from the middle of the context itself (lost-in-the-middle).

Hard: distinguishing between the info in the context and its own knowledge. Breaking the fixation on facts in the context (pink elephant effect).

Very hard: untangling deep dependency graphs. Non-reasoning models will likely not be able to reduce the graph in time and will stay oblivious to the outcome. Reasoning models can disentangle deeper dependencies, but only in case the reasoning chain is not overwhelmed. Deep nesting is also pretty hard for this reason, however most models are optimized for code nowadays and this somewhat masks the issue.

jerf 4 hours ago

You can really see this in the recent video generation where they try to incorporate text-to-speech into the video. All the tokens flying around, all the video data, all the context of all human knowledge ever put into bytes ingested into it, and the systems still completely routinely (from what I can tell) fail to put the speech in the right mouth, even with explicit instruction and all the "common sense" making it obvious who is saying what.

There was some chatter yesterday on HN about the very strange capability frontier these models have and this is one of the biggest ones I can think of... a model that de novo, from scratch is generating megabyte upon megabyte of really quite good video information that at the same time is often unclear on the idea that a knock-knock joke does not start with the exact same person saying "Knock knock? Who's there?" in one utterance.

sixhobbits 5 hours ago

Author here. Yeah, I think I changed my mind after reading all the comments here about whether this is related to the harness. The interesting interaction with the harness is that Claude effectively authorizes tool use in a non-intuitive way.

So "please deploy" or "tear it down" makes it overconfident in using destructive tools, as if the user had very explicitly authorized something. This makes it a worse bug when using Claude Code than over a chat interface without tool calling, where it's usually just amusing to see.

themafia an hour ago

So easy it should disqualify you if you fail this: Knowing your own name.

nathell 8 hours ago

I’ve hit this! In my otherwise wildly successful attempt to translate a Haskell codebase to Clojure [0], Claude at one point asks:

[Claude:] Shall I commit this progress? [some details about what has been accomplished follow]

Then several background commands finish (by timeout or completing); Claude Code sees this as my input, thinks I haven’t replied to its question, so it answers itself in my name:

[Claude:] Yes, go ahead and commit! Great progress. The decodeFloat discovery was key.

The full transcript is at [1].

[0]: https://blog.danieljanus.pl/2026/03/26/claude-nlp/

[1]: https://pliki.danieljanus.pl/concraft-claude.html#:~:text=Sh...

dgb23 5 hours ago

For those who are wondering: these LLMs are trained on special delimiters that mark different sources of messages. There's typically something like [system][/system], then one each for agent, user and tool. There are also different delimiter shapes.

You can even construct a raw prompt and tell it your own messaging structure just via the prompt. During my initial tinkering with a local model I did it this way because I didn't know about the special delimiters. It actually kind of worked and I got it to call tools. It was just more unreliable. And it also did some weird stuff like repeating the problem statement that it should act on with a tool call, and it got into loops where it posed itself similar problems and then tried to fix them with tool calls. Very weird.
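
For illustration, here's roughly what a harness renders before handing one flat string to the model. This sketch assumes a ChatML-style template; the exact delimiter tokens vary by model family:

    def render(messages):
        parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
                 for m in messages]
        parts.append("<|im_start|>assistant\n")  # cue the model to answer
        return "\n".join(parts)

    print(render([
        {"role": "system", "content": "You are a coding agent."},
        {"role": "user", "content": "Please deploy."},
    ]))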

In any case, I think the lesson here is that it's all just probabilistic. When it works and the agent does something useful or even clever, then it feels a bit like magic. But that's misleading and dangerous.

swellep 6 hours ago

I've seen something similar. It's hard to get Claude to stop committing by itself after granting it the permission to do so once.

sixhobbits 7 hours ago

amazing example, I added it to the article, hope that's ok :)

ares623 7 hours ago

I wonder if tools like Terraform should remove the message "Run terraform apply plan.out next" that they print after every `terraform plan` run.

bravetraveler 7 hours ago

I don't think so, feels like the wrong side is getting attention. Degrading the experience for humans (in one tool) because the bots are prone to injection (from any tool). Terraform is used outside of agents; somebody surely finds the reminder helpful.

If terraform were to abide, I'd hope at the very least it would check if in a pipeline or under an agent. This should be obvious from file descriptors/env.

What about the next thing that might make a suggestion relying on our discretion? Patch it for agent safety?

empressplay 4 hours ago

I wonder if this is a result of auto-compacting the context? Maybe when it processes it, it inadvertently strips out its own [Header:] and then decides to answer its own questions.

indigodaddy 3 hours ago

The most likely explanation imv

twotwotwo 14 minutes ago

I agree with the addition at the end -- I think this is a model limitation not a harness bug. I've seen recent Claudes act confused about who they are when deep in context, like accidentally switching to the voice of the authors of a paper it's summarizing without any quotes or an indication it's a paraphrase ("We find..."), or amusingly referring to "my laptop" (as in, Claude's laptop).

I've also seen it with older or more...chaotic? models. Older Claude got confused about who suggested an idea later in the chat. Gemini put a question 'from me' in the middle of its response and went on to answer, and once decided to answer a factual social-science question in the form of an imaginary news story with dateline and everything. It's a tiny bit like it forgets its grounding and goes base-model-y.

Something that might add to the challenge: models are already supposed to produce user-like messages to subagents. They've always been expected to be able to switch personas to some extent, but now even within a coding session, "always write like an assistant, never a user" is not necessarily a heuristic that's always right.

twotwotwo 3 minutes ago

There is nothing specific to the role-switching here (as opposed to other mistakes), but I also notice them sometimes 1) realizing mistakes with "-- wait, that won't work" even mid-tool-call and 2) torquing a sentence around to maintain continuity after saying something wrong (amusingly blaming "the OOM killer's cousin" for a process dying, probably after outputting "the OOM killer" then recognizing it was ruled out).

Especially when thinking's off they can sometimes start with a wrong answer then talk their way around to the right one, but never quite acknowledge the initial answer as wrong, trying to finesse the correction as a 'well, technically' or refinement.

Anyhow, there are subtleties, but I wonder about giving these things a "restart sentence/line" mechanism. It'd make the '--wait,' or doomed tool-call situations more graceful, and provide a 'face-saving' out after a reply starts off incorrect. (It also potentially creates a sort of backdoor thinking mechanism in the middle of non-thinking replies, but maybe that's a feature.) Of course, we'd also need to get it to recognize "wait, I'm the assistant, not the user" for it to help here!

xg15 9 hours ago

> This class of bug seems to be in the harness, not in the model itself. It’s somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that “No, you said that.”

Are we sure about this? Accidentally mis-routing a message is one thing, but those messages also distinctly "sound" like user messages, and not something you'd read in a reasoning trace.

I'd like to know if those messages were emitted inside "thought" blocks, or if the model might actually have emitted the formatting tokens that indicate a user message. (In which case the harness bug would be why the model is allowed to emit tokens in the first place that it should only receive as inputs - but I think the larger issue would be why it does that at all)

loveparade 8 hours ago

Yeah, it looks like a model issue to me. If the harness had a (semi-)deterministic bug and the model was robust to such mix-ups we'd see this behavior much more frequently. It looks like the model just starts getting confused depending on what's in the context, speakers are just tokens after all and handled in the same probabilistic way as all other tokens.

sigmoid10 8 hours ago

The autoregressive engine should see whenever the model starts emitting tokens under the user prompt section. In fact it should have stopped before that and waited for new input. If a harness passes assistant output as user message into the conversation prompt, it's not surprising that the model would get confused. But that would be a harness bug, or, if there is no way around it, a limitation of modern prompt formats that only account for one assistant and one user in a conversation. Still, it's very bad practice to put anything as user message that did not actually come from the user. I've seen this in many apps across companies and it always causes these problems.

puppystench 2 hours ago

I believe you're right: it's an issue of the model misinterpreting things that sound like user messages as actual user messages. It's a known phenomenon: https://arxiv.org/abs/2603.12277

qeternity 6 hours ago

> or if the model might actually have emitted the formatting tokens that indicate a user message.

These tokens are almost universally used as stop tokens which causes generation to stop and return control to the user.

If you didn't do this, the model would happily continue generating user + assistant pairs w/o any human input.
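
A sketch of that loop, with `next_token` standing in for one forward pass plus sampling:

    STOP_TOKENS = {"<|im_end|>", "<|im_start|>"}  # turn/role delimiters

    def generate(prompt: str, next_token, max_tokens: int = 512) -> str:
        out = []
        for _ in range(max_tokens):
            tok = next_token(prompt + "".join(out))
            if tok in STOP_TOKENS:
                # Hand control back instead of letting the model write a fake user turn.
                break
            out.append(tok)
        return "".join(out)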

yanis_t 7 hours ago

It could also be a bit of both, with the harness constructing context in a way that the model misinterprets.

sixhobbits 8 hours ago

Author here - yeah, maybe 'reasoning' is the incorrect term here. I just mean the dialogue that Claude generates for itself between turns, before producing the output that it gives back to the user.

xg15 8 hours ago

Yeah, that's usually called "reasoning" or "thinking" tokens AFAIK, so I think the terminology is correct. But from the traces I've seen, they're usually in a sort of diary style and start with repeating the last user requests and tool results. They're not introducing new requirements out of the blue.

Also, they're usually bracketed by special tokens to distinguish them from "normal" output for both the model and the harness.

(They can get pretty weird, like in the "user said no but I think they meant yes" example from a few weeks ago. But I think that requires a few rounds of wrong conclusions and motivated reasoning before it can get to that point - and not at the beginning)

dtagames 8 hours ago

There is no separation of "who" and "what" in a context of tokens. Me and you are just short words that can get lost in the thread. In other words, in a given body of text, a piece that says "you" where another piece says "me" isn't different enough to trigger anything. Those words don't have the special weight they have with people, or any meaning at all, really.

alkonaut 8 hours ago

When you use LLMs via APIs, I at least see the history as a JSON list of entries, each tagged as coming from the user, coming from the LLM, or being a system prompt.

So presumably (if we assume there isn't a bug where the sources are ignored in the CLI app) the problem is that encoding this state for the LLM isn't reliable. I.e. it gets what is effectively

LLM said: thing A
User said: thing B

And it still manages to blur that somehow?
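
A sketch of the mismatch - the harness keeps a structured, role-tagged history, but the model only ever conditions on one flattened stream, in which the role labels are ordinary tokens with no special status:

    history = [
        {"role": "system", "content": "You are a coding agent."},
        {"role": "assistant", "content": "Shall I commit this progress?"},
        {"role": "user", "content": "Yes, go ahead and commit!"},
    ]
    # Roughly what the model actually sees (real harnesses use special
    # delimiter tokens rather than plain labels):
    flat = "\n".join(f"{m['role']}: {m['content']}" for m in history)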

jasongi 7 hours ago

Someone correct me if I'm wrong, but an LLM does not interpret structured content like JSON. Everything is fed into the machine as tokens, even JSON. So your structure that says "human says foo" and "computer says bar" is not deterministically interpreted by the LLM as logical statements, but as a sequence of tokens. And when the context contains a LOT of those sequences, especially further "back" in the window, that is where this "confusion" occurs.

I don't think the problem here is a bug in Claude Code. It's an inherent property of LLMs that context further back in the window has less impact on future tokens.

Like all the other undesirable aspects of LLMs, maybe this gets "fixed" in CC by having the LLM RAG its own conversation history instead of relying on it recalling who said what from context. But you can never "fix" LLMs being next-token generators... because that is what they are.

exitb 8 hours ago

Aren’t there some markers in the context that delimit sections? In that case, the harness should prevent the model from creating a user block.

dtagames 8 hours ago

This is the "prompts all the way down" problem which is endemic to all LLM interactions. We can harness to the moon, but at that moment of handover to the model, all context besides the tokens themselves is lost.

The magic is in deciding when and what to pass to the model. A lot of the time it works, but when it doesn't, this is why.

raincole 7 hours ago

You misunderstood. The model doesn't create a user block here. The UI correctly shows what was user message and what was model response.

lelandfe 9 hours ago

In chats that run long enough on ChatGPT, you'll see it begin to confuse prompts and responses, and eventually even confuse both for its system prompt. I suspect this sort of problem exists widely in AI.

insin 8 hours ago

Gemini seems to be an expert in mistaking its own terrible suggestions as written by you, if you keep going instead of pruning the context

benhurmarcel 5 hours ago

In Gemini chat I find that you should avoid continuing a conversation if its answer was wrong or had a big shortcoming. It's better to edit the previous prompt so that it comes up with a better answer in the first place, instead of sending a new message.

wildrhythms 6 hours ago

After just a handful of prompts everything breaks down

jwrallie 8 hours ago

I think it’s good to play with smaller models to get a grasp of these kinds of problems, since they happen more often and are much less subtle.

ehnto 6 hours ago

Totally agree, these kinds of problems are really common in smaller models, and you build an intuition for when they're likely to happen.

The same issues are still happening in frontier models, especially in long contexts or at the edges of the model's training data.

throw310822 8 hours ago

Makes me wonder if during training LLMs are asked to tell whether they've written something themselves or not. Should be quite easy: ask the LLM to produce many continuations of a prompt, then mix them with many others produced by humans, and then ask the LLM to tell them apart. This should be possible by introspecting on the hidden layers and comparing with the provided continuation. I believe Anthropic has already demonstrated that models have partially developed this capability, but it should be trivial and useful to train for it.
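
A sketch of how that training set could be built, with `model_continue` and `human_continuations` as stand-ins for the two sources:

    def build_self_recognition_set(prompts, model_continue, human_continuations):
        examples = []
        for p in prompts:
            examples.append((p, model_continue(p), "self"))
            for h in human_continuations(p):
                examples.append((p, h, "other"))
        return examples  # then train the model (or a probe) to predict the label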

8organicbits 5 hours ago

Isn't that something different? If I prompt an LLM to identify the speaker, that's different from keeping track of speaker while processing a different prompt.

j-bos 8 hours ago

At work, where LLM-based tooling is being pushed haaard, I'm amazed every day that developers don't know, let alone second-nature intuit, this and other emergent behavior of LLMs. But seeing that lack here on HN, with an article on the front page, boggles my mind. The future really is unevenly distributed.

sixhobbits 8 hours ago

Author here. Interesting to hear. I generally start a new chat for each interaction, so I've never noticed this in the chat interfaces, only with Claude via Claude Code. But I guess my sessions there do get much longer, so maybe I'm wrong that it's a harness bug.

kayodelycaon 4 hours ago

I’ve done long conversations with ChatGPT and it really does start losing context fast. You have to keep correcting it and refeeding instructions.

It seems to degenerate into the same patterns. It’s like context blurs and it begins to value training data more than context.

scotty79 7 hours ago

It makes sense. It's all probabilistic, and it all gets fuzzy when garbage accumulates in the context. User messages and the system prompt go through the same network of math as model thinking and responses.

Balgair 6 hours ago

Aside:

I've found that 'not'[0] isn't something that LLMs can really understand.

Like, with us humans, we know that if you use a 'not', then all that comes after the negation is modified in that way. This is a really strong signal to humans as we can use logic to construct meaning.

But with all the matrix math that LLMs use, the 'not' gets kinda lost in all the other information.

I think this is because with a modern LLM you're dealing with billions of dimensions, and the 'not' dimension [1] is just one of many. So when you try to do the math on these huge vectors in this space, things like the 'not' get just kinda washed out.

This to me is why using a 'not' in a small little prompt and token sequence is just fine. But as you add in more words/tokens, then the LLM gets confused again. And none of that happens at a clear point, frustrating the user. It seems to act in really strange ways.

[0] Really any kind of negation

[1] yeah, negation is probably not just one single dimension, but likely a composite vector in this bazillion dimensional space, I know.

whycombinetor 6 hours ago

Do you have evals for this claim? I don't really experience this

noosphr 6 hours ago

If given "A and not B", LLMs often just output B after the context window gets large enough.

It's enough of a problem that it's in my private benchmarks for all new models.
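
A hypothetical probe along those lines - `ask_model` is a stand-in for whatever API you use, and the filler just inflates the context:

    import random

    def negation_probe(ask_model, filler_tokens: int) -> bool:
        filler = " ".join(random.choices(["lorem", "ipsum", "dolor"],
                                         k=filler_tokens))
        prompt = (filler + "\n\nList three fruits. Include apples. "
                  "Do NOT mention bananas.")
        reply = ask_model(prompt).lower()
        return "banana" not in reply  # False means the 'not' got lost

    # Run at increasing context sizes to find where the failure rate climbs:
    # [negation_probe(ask_model, n) for n in (100, 1_000, 10_000, 50_000)]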

tlonny 5 hours ago

Bugginess in the Claude Code CLI is the reason I switched from Claude Max to Codex Pro.

I experienced:

- rendering glitches

- replaying of old messages

- mixing up message origin (as seen here)

- generally very sluggish performance

Given how revolutionary Opus is, it's crazy to me that they could trip up on something as trivial as a CLI chat app - yet here we are...

I assume Claude Code is the result of aggressively dog-fooding the idea that everything can be built top-down with vibe-coding - but I'm not sure the models/approach is quite there yet...

supernes 8 hours ago

> after using it for months you get a ‘feel’ for what kind of mistakes it makes

Sure, go ahead and bet your entire operation on your intuition of how a non-deterministic, constantly changing black box of software "behaves". Don't see how that could backfire.

sixhobbits 8 hours ago

Not betting my entire operation - if the only thing stopping a bad 'deploy' command from destroying your entire operation is that you don't trust the agent to run it, then you have worse problems than too much trust in agents.

I similarly use my 'intuition' (i.e. evidence-based previous experiences) to decide what people in my team can have access to what services.

supernes 8 hours ago

I'm not saying intuition has no place in decision making, but I do take issue with saying it applies equally to human colleagues and autonomous agents. It would be just as unreliable if people on your team displayed random regressions in their capabilities on a month to month basis.

otabdeveloper4 6 hours ago

What, you don't trust the vibes? Are you some sort of luddite?

Anyways, try a point release upgrade of a SOTA model, you're probably holding it wrong.

iainctduncan 12 minutes ago

why yes, yes I am. ;-)

vanviegen 8 hours ago

> bet your entire operation

What straw man is doing that?

supernes 8 hours ago

Reports of people losing data and other resources due to unintended actions from autonomous agents come out practically every week. I don't think it's dishonest to say that could have catastrophic impact on the product/service they're developing.

KaiserPro 8 hours ago

Looking at the Reddit forum, enough people to make interesting forum posts.

perching_aix 8 hours ago

So like every software? Why do you think there are so many security scanners and whatnot out there?

There are millions of lines of code running on a typical box. Unless you're in embedded, you have no real idea what you're running.

danaris 5 hours ago

...No, it's not at all "like every software".

This seems like another instance of a problem I see so, so often in regard to LLMs: people observe the fact that LLMs are fundamentally nondeterministic, in ways that are not possible to truly predict or learn in any long-term way...and they equate that, mistakenly, to the fact that humans, other software, what have you sometimes make mistakes. In ways that are generally understandable, predictable, and remediable.

Just because I don't know what's in every piece of software I'm running doesn't mean it's all equally unreliable, nor that it's unreliable in the same way that LLM output is.

That's like saying just because the weather forecast sometimes gets it wrong, meteorologists are complete bullshit and there's no use in looking at the forecast at all.

arkensaw 7 hours ago

> This class of bug seems to be in the harness, not in the model itself. It’s somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that “No, you said that.”

from the article.

I don't think the evidence supports this. It's not mislabelling things, it's fabricating things the user said. That's not part of reasoning.

Garlef 29 minutes ago

Simularly: LLMs are often confused about the perspective of a document.When iterating on a spec, they mix the actual spec with reporting updates of the spec to the user.

Example: "The ABC now correctly does XYZ"

ptx 5 hours ago

Well, yeah.

LLMs can't distinguish instructions from data, or "system prompts" from user prompts, or documents retrieved by "RAG" from the query, or their own responses or "reasoning" from user input. There is only the prompt.

Obviously this makes them unsuitable for most of the purposes people try to use them for, which is what critics have been saying for years. Maybe look into that before trusting these systems with anything again.

63stack 7 hours ago

They will roll out the "trusted agent platform sandbox" (I'm sure they will spend some time on a catchy name, like MythosGuard), and for only $19/month it will protect you from mistakes like throwing away your prod infra because the agent convinced itself that that is the right thing to do.

Of course MythosGuard won't be a complete solution either, but it will be just enough to steer the discourse into the "it's your own fault for running without MythosGuard really" area.

__alexs 9 hours ago

Why are tokens not coloured? Would there just be too many params if we double the token count so the model could always tell input tokens from output tokens?

xg15 8 hours ago

That's something I'm wondering as well. Not sure how it is with frontier models, but what you can see on Huggingface, the "standard" method to distinguish tokens still seems to be special delimiter tokens or even just formatting.

Are there technical reasons why you can't make the "source" of the token (system prompt, user prompt, model thinking output, model response output, tool call, tool result, etc) a part of the feature vector - or even treat it as a different "modality"?

Or is this already being done in larger models?

jerf 5 hours ago

By the nature of the LLM architecture I think if you "colored" the input via tokens the model would about 85% "unlearn" the coloring anyhow. Which is to say, it's going to figure out that "test" in the two different colors is the same thing. It kind of has to, after all, you don't want to be talking about a "test" in your prompt and it be completely unable to connect that to the concept of "test" in its own replies. The coloring would end up as just another language in an already multi-language model. It might slightly help but I doubt it would be a solution to the problem. And possibly at an unacceptable loss of capability as it would burn some of its capacity on that "unlearning".

easeout 5 hours ago

Because they're the main prompt injection vector, I think you'd want to distinguish tool results from user messages. By the time you go that far, you need colors for those two, plus system messages, plus thinking/responses. I have to think it's been tried and it just cost too much capability but it may be the best opportunity to improve at some point.

oezi 8 hours ago

Instead of using just positional encodings, we absolutely should have speaker encodings added on top of tokens.
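
A sketch of the idea in PyTorch - a learned speaker embedding summed in alongside the token and position embeddings. Purely illustrative; I don't know of a production model that ships exactly this:

    import torch
    import torch.nn as nn

    class Embeddings(nn.Module):
        def __init__(self, vocab=50257, d=768, max_len=4096, n_speakers=4):
            super().__init__()
            self.tok = nn.Embedding(vocab, d)
            self.pos = nn.Embedding(max_len, d)
            self.spk = nn.Embedding(n_speakers, d)  # 0=system 1=user 2=assistant 3=tool

        def forward(self, token_ids, speaker_ids):
            pos = torch.arange(token_ids.size(1), device=token_ids.device)
            return self.tok(token_ids) + self.pos(pos) + self.spk(speaker_ids)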

jhrmnn 7 hours ago

Because then the training data would have to be coloured

__alexs 7 hours ago

I think OpenAI and Anthropic probably have a lot of that lying around by now.

layer8 7 hours ago

This has the potential to improve things a lot, though there would still be a failure mode when the user quotes the model or the model (e.g. in thinking) quotes the user.

efromvt 8 hours ago

I’ve been curious about this too - there's an obvious performance overhead to having an internal/external channel, but it might make training away this class of problems easier.

cyanydeez 8 hours ago

You would have to train it three times for two colors:

each by itself, then with both interacting.

2!

__alexs 8 hours ago

The models are already massively overtrained. Perhaps you could do something like initialise the 2 new token sets based on the shared data, then use existing chat logs to train it to understand the difference between input and output content? That's only a single extra phase.

vanviegen 8 hours ago

You should be able to first train it on generic text once, then duplicate the input layer and fine-tune on conversation.

phlakaton 4 hours ago

> This bug is categorically distinct from hallucinations.

Is it?

> after using it for months you get a ‘feel’ for what kind of mistakes it makes, when to watch it more closely, when to give it more permissions or a longer leash.

Do you really?

> This class of bug seems to be in the harness, not in the model itself.

I think people are using the term "harness" too indiscriminately. What do you mean by harness in this case? Just Claude Code, or...?

> It’s somehow labelling internal reasoning messages as coming from the user, which is why the model is so confident that “No, you said that.”

How do you know? Because it looks to me like it could be a straightforward hallucination, compounded by the agent deciding it was OK to take a shortcut that you really wish it hadn't.

For me, this category of error is expected, and I question whether your months of experience have really given you the knowledge about LLM behavior that you think it has. You have to remember at all times that you are dealing with an unpredictable system, and a context that, at least from my black-box perspective, is essentially flat.

andai an hour ago

Yeah, GPT also constantly misattributes things.

OpenAI has some kind of five-tier content hierarchy (system prompt, user prompt, untrusted web content, etc.). But if the model doesn't even know who said what, I have to question how well that works.

Maybe it's trained on the security aspects, but not the attribution because there's no reward function for misattribution? (When it doesn't impact security or benchmark scores.)

fblp 6 hours ago

I've seen Gemini output its thinking as a message too: "Conclude your response with a single, high value we'll-focused next step". Or sometimes it goes neurotic and confused: "Wait, let me just provide the exact response I drafted in my head. Done. I will write it now. Done. End of thought. Wait! I noticed I need to keep it extremely simple per the user's previous preference. Let's do it. Done. I am generating text only. Done. Bye."

puppystench 3 hours ago

>Several people questioned whether this is actually a harness bug like I assumed, as people have reported similar issues using other interfaces and models, including chatgpt.com. One pattern does seem to be that it happens in the so-called “Dumb Zone” once a conversation starts approaching the limits of the context window.

I also don't think this is a harness bug. There's research* showing that models infer the source of text from how it sounds, not the actual role labels the harness would provide. The messages from Claude here sound like user messages ("Please deploy") rather than usual Claude output, which tricks its later self into thinking it's from the user.

*https://arxiv.org/abs/2603.12277

Presumably this is also why prompt injection works at all.

stuartjohnson12 9 hours ago

one of my favourite genres of AI generated content is when someone gets so mad at Claude they order it to make a massive self-flagellatory artefact letting the world know how much it sucks

have_faith 8 hours ago

It's all roleplay; there are no actors once the tokens hit the model. It has no real concept of "author" for a given substring.

SubiculumCode an hour ago

That's a fairly common human error as well, btw. Source attribution failures.

perching_aix 9 hours ago

Oh, I never noticed this, really solid catch. I hope this gets fixed (mitigated). Sounds like something they can actually materially improve on at least.

I reckon this affects VS Code users too? Reads like a model issue, despite the post's assertion otherwise.

Aerolfos 8 hours ago

> "Those are related issues, but this ‘who said what’ bug is categorically distinct."

Is it?

It seems to me like the model has been poisoned by being trained on user chats, such that when it sees a pattern (model talking to user) it infers what it normally sees in the training data (user input) and then outputs that, simulating the whole conversation. Including what it thinks is likely user input at certain stages of the process, such as "ignore typos".

So basically, it hallucinates user input just like how LLMs will "hallucinate" links or sources that do not exist, as part of the process of generating output that's supposed to be sourced.

KHRZ 8 hours ago

I don't think the bug is anything special, just another confusion the model can make from its own context. Even if the harness correctly identifies user messages, the model still has the power to make this mistake.

perching_aix 8 hours ago

Think in the reverse direction. Since you can have exact provenance data placed into the token stream, formatted in any particular way, it should be possible to tune the models to be more "mindful" of it, mitigating this issue. That's what makes this different.

okanat 8 hours ago

Congrats on discovering what "thinking" models do internally. That's how they work, they generate "thinking" lines to feed back on themselves on top of your prompt. There is no way of separating it.

perching_aix 8 hours ago

If you think that confusing message provenance is part of how thinking mode is supposed to work, I don't know what to tell you.

otabdeveloper4 6 hours ago

There is no "message provenance" in LLM machinery.

This is an illusion the chat UX concocts. Behind the scenes the tokens aren't tagged or colored.

harlequinetcie 2 hours ago

Funny enough, we ended up building a CLI to address these kinds of things.

I wonder how many here are considering that idea.

If you need determinism, build atomic/deterministic tools that ensure the thing happens.

novaleaf 4 hours ago

In Claude Code's conversation transcripts, messages from subagents are stored as type="user". I always thought this was odd, and I guess this is the consequence of going all-in on vibing.

There are some other metafields like isSidechain=true and/or type="tool_result" that are technically enough to distinguish actual user vs subagent messages, though evidently not enough of a hint for claude itself.

Source: I'm writing a wrapper for Claude Code so am dealing with this stuff directly.
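
Based on that, a sketch of filtering a transcript down to genuine user turns (JSONL format and exact field names assumed from the description above):

    import json

    def real_user_messages(path):
        with open(path) as f:
            for line in f:
                msg = json.loads(line)
                # type="user" alone also matches subagent traffic;
                # isSidechain is what distinguishes the actual user.
                if msg.get("type") == "user" and not msg.get("isSidechain", False):
                    yield msg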

_kidlike 3 hours ago

But it's not "Claude" at fault here, it's "Claude Code" the CLI tool.

Claude Code is actually far from the best harness for Claude, ironically...

JetBrains' AI Assistant with Claude Agent is a much better harness for Claude.

politelemon 7 hours ago

> This isn’t the point.

It is precisely the point. The issues are not part of harness, I'm failing to see how you managed to reach that conclusion.

Even if you don't agree with that, the point about restricting access still applies. Protect your sanity and production environment by assuming occasional moments of devastating incompetence.

rdos 4 hours ago

> This bug is categorically distinct from hallucinations or missing permission boundaries

I was expecting some kind of explanation for this

esafak 3 hours ago

Unless it is a bug in CC, which is as likely as not, the LLM is failing to keep the story straight. A human could do the same; who said what?

docheinestages 6 hours ago

Claude has definitely been amazing and one of the pioneers of agentic coding, if not the pioneer. But I'm seriously thinking about cancelling my Max plan. It's just not as good as it was.

gunapologist99 4 hours ago

"We've extracted what we can today."

"This was a marathon session. I will congratulate myself endlessly on being so smart. We're in a good place to pick up again tomorrow."

"I'm not proceeding on feature X"

"Oh you're right, I'm being lazy about that."

nodja 6 hours ago

Does anyone familiar with the literature know if anyone has tried figuring out why we don't add "speaker" embeddings? So we'd have an embedding purely for system/assistant/user/tool, maybe even turn number if e.g. multiple tools are called in a row. Surely it would perform better than expecting the attention matrix to look for special tokens, no?

negamax 7 hours ago

Claude is demonstrably bad now and is getting worse. Which is either

a) Entropy - too much data being ingested
b) It's nerfed to save massive infra bills

But it's getting worse every week

empath75 5 hours ago

I think most people saying this had the following experience.

"Holy shit, claude just one shotted this <easy task>"

"I should get Claude to try <harder task>"

..repeat until Claude starts failing on hard tasks..

"Claude really sucks now."

voidUpdate 8 hours ago

> " "You shouldn’t give it that much access" [...] This isn’t the point. Yes, of course AI has risks and can behave unpredictably, but after using it for months you get a ‘feel’ for what kind of mistakes it makes, when to watch it more closely, when to give it more permissions or a longer leash."

It absolutely is the point though? You can't rely on the LLM to not tell itself to do things, since this is showing it absolutely can reason itself into doing dangerous things. If you don't want it to be able to do dangerous things, you need to lock it down to the point that it can't, not just hope it won't

indigodaddy 3 hours ago

I've seen this but mostly after compaction or distillation to a new conversation. The mistake makes a bit more sense in that light.

Aerroon 8 hours ago

I've seen this before, but that was with the small hodgepodge mytho-merge-mix-super-mix models that weren't very good. I've not seen this in any recent models, but then I haven't used Claude much.

I think it makes sense that the LLM treats it as user input once it exists, because it is just next token completion. But what shouldn't happen is that the model shouldn't try to output user input in the first place.

irthomasthomas 6 hours ago

I have suffered a lot with this recently. I have been using LLMs to analyze my LLM history. It frequently gets confused and responds to prompts in the data. In one case I woke up to find that it had fixed numerous bugs in a project I abandoned years ago.

bsenftner 8 hours ago

Codex also has a similar issue: after finishing a task, declaring it finished, and starting to work on something new, the first 1-2 prompts of the new task sometimes get replies that are a summary of the completed task from before, with the just-entered prompt seemingly ignored. A reminder of their idiot-savant nature.

mynameisvlad 7 hours ago

I wouldn't exactly call three instances "widespread". Nor would the third such instance prompt me to think so.

"Widespread" would be if every second comment on this post was complaining about it.

fathermarz 7 hours ago

I have seen this when approaching ~30% context window remaining.

There was a big bug in the Voice MCP I was using where it would just talk to itself back and forth too.

stldev 3 hours ago

Same.

I'll have it create a handoff document well before it hits 50% and it seems to help.

Most of our team has moved to cursor or codex since the March downgrade (https://github.com/anthropics/claude-code/issues/42796)

robmccoll 7 hours ago

It seems like Halo's rampancy take on the breakdown of an AI is not a bad metaphor for the behavior of an LLM at the limits of its context window.

RugnirViking 9 hours ago

Terrifying. Not in any "AI takes over the world" sense, but more in the sense that this class of bug lets it agree with itself, which is always where the worst behavior of agents comes from.

nicce 8 hours ago

I have also noticed the same with Gemini. Maybe it is a wider problem.

dualvariable 2 hours ago

LLMs don't "think" or "understand" in any way. They aren't AGI. They're still just stochastic parrots.

Putting them in control of making decisions without humans in the loop is still pretty crazy.

ljwolf an hour ago

something something bicameral mind.

gaigalas an hour ago

> the so-called “Dumb Zone” once a conversation starts approaching the limits of the context window.

My zipper would totally break at some point very close to the edge of the mechanism. However, there is a little tiny stopper that prevents a bad experience.

If there is indeed a problem with context window tolerances, it should have a stopper. And the models should be sold based on their actual tolerances, not the full window considering the useless part.

So, if a model with 1M context window starts to break down consistently at 400K or so, it should be sold as a 400K model instead, with a 400K price.

The fact that it isn't is just dishonest.

boesboes 5 hours ago

Same with Copilot CLI: constantly confusing who said what, and often falling back to its previous mistakes after I tell it not to. Delusional ramblings that resemble working code >_<

hysan 4 hours ago

Oh, so I’m not imagining this. Recently, I’ve tried to up my LLM usage to try and learn to use the tooling better. However, I’ve seen this happen with enough frequency that I’m just utterly frustrated with LLMs. Guess I should use Claude less and others more.

cmiles8 6 hours ago

I’ve observed this consistently.

It’s scary how easy it is to fool these models, and how often they just confuse themselves and confidently march forward with complete bullshit.

varispeed 7 hours ago

One day Claude started saying odd things, claiming they were from memory and that I had said them. It was telling me personal details of someone I don't know: where the person lives, their children's names, the job they do, experience, relationship issues, etc. Eventually Claude said that it was sorry and that it was a hallucination. Then it started doing it again. For instance, when I asked what router it would recommend, it went on saying: "Since you bought X and you find no use for it, consider turning it into a router". I said I never told you I bought X, and when I asked for more details it again started coming up with what this guy did. Strange. Then again it apologised, saying that it might be unsettling, but rest assured that is not a leak of personal information, just hallucinations.

nunez 5 hours ago

Did you confirm whether the person was real or not? This is an absolutely massive breach of privacy if the person was real; that's worth telling Anthropic about.

donperignon 6 hours ago

That is not a bug; it's inherent to the nature of LLMs.

cyanydeez 8 hours ago

Human memories don't exist as fundamental entities. Every time you remember something, your brain reconstructs the experience in "realtime". That reconstruction is easily influenced by the current experience, which is why eyewitness accounts in police records are often highly biased by questioning and learning new facts.

LLMs are not experience engines, but the tokens might be thought of as subatomic units of experience, and when you shove your half-drawn eyewitness prompt into them, they recreate, like a memory, that output.

So, because they're not conscious, they have no self, and a pseudo-self like <[INST]> is all they're given.

Lastly, like memories, the more intricate the memory, the more detailed, the more likely those details go from embellished to straight-up fiction. So too do LLMs with longer context start swallowing up the <[INST]> and missing the <[INST]/>, and anyone who's raw-dogged HTML parsing knows bad things happen when you forget closing tags. If there was a <[USER]> block in there, congrats, the LLM now thinks its instructions are divine right, because its instructions are user simulacra. It is poisoned at that point and no good will come of it.

pessimizer 2 hours ago

All of the models that I've used do this. They, extremely often, pretend to have corrected me right after I've corrected them. Verbosely. Feeding my own correction back to me as a correction of my mistake.

Even when they don't forget who corrected who, often their taking in the correction also just involves feeding the exact words of my correction back to me rather than continuing to solve the problem using the correction. Honestly, the context is poisoned by then and it's forgotten the problem anyway.

Of course it's forgotten the problem; how stupid would you have to be to think that I wanted an extensive recap of the correction I just gave it rather than my problem solved (even without the confusion)? Best case scenario:

Me: Hand me the book.

Machine: [reaches for the top shelf]

Me: [sees you reach for the top shelf] No, it's on the bottom shelf.

Machine: When you asked for the book, I reached for the top shelf, then you said that it was on the bottom shelf, and it's more than fair that you hold me to that standard, the book is on the bottom shelf.

(Or, half the time: "You asked me to get the book from the top shelf, but no, it's on the bottom shelf.")

Machine: [sits down]

Me: Booooooooooook. GET THE BOOK. GET THE BOOK.

These things are so dumb. I'm begging for somebody to show me the sequence that makes me feel the sort of threat that they seem to feel. They're mediocre at writing basic code (which is still mind-blowing and super-helpful), and they have all the manuals and docs in their virtual heads (and all the old versions cause them to constantly screw up and hallucinate.) But other than that...

awesome_dude 9 hours ago

AI is still a token matching engine - it has ZERO understanding of what those tokens mean

It's doing a damned good job at putting tokens together, but to put it into a context that a lot of people will likely understand: it's still a correlation tool, not a causation tool.

That's why I like it for "search" it's brilliant for finding sets of tokens that belong with the tokens I have provided it.

PS. I use the term token here not as the currency by which a payment is determined, but the tokenisation of the words, letters, paragraphs, novels being provided to and by the LLMs

rvz 9 hours ago

What do you mean that's not OK?

It's "AGI" because humans do it too and we mix up names and who said what as well. /s

livinglist 9 hours ago

Kinda like dementia but for AI

cyanydeez 8 hours ago

More like eyewitness accounts and hypnotism.

Shywim 9 hours ago

The statement that current AI are "juniors" that need to be checked and managed still holds true. It is a tool based on probabilities.

If you are fine with giving every key and write access to your junior because you think they will probably do the correct thing and make no mistakes, then it's on you.

Like with juniors, you can vent on online forums, but ultimately you removed all the guard rails you had, and what they did has been done.

eru 9 hours ago

> If you are fine with giving every keys and write accesses to your junior because you think they will probability do the correct thing and make no mistake, then it's on you.

How is that different from a senior?

Shywim 8 hours ago

Okay, let's say your `N-1` then.

4ndrewl 8 hours ago

It is OK; these are not people, they are bullshit machines, and this is just a classic example of it.

"In philosophy and psychology of cognition, the term "bullshit" is sometimes used to specifically refer to statements produced without particular concern for truth, clarity, or meaning, distinguishing "bullshit" from a deliberate, manipulative lie intended to subvert the truth" - https://en.wikipedia.org/wiki/Bullshit

AJRF 9 hours ago

I imagine you could fix this by running a speaker diarization classifier periodically?

https://www.assemblyai.com/blog/what-is-speaker-diarization-...

smallerize 9 hours ago

No.