Claude Code's source code has been leaked via a map file in their NPM registry (twitter.com)
1568 points by treexs 10 hours ago
jakegmaths 4 hours ago
I think this is ultimately caused by a Bun bug which I reported, which means source maps are exposed in production: https://github.com/oven-sh/bun/issues/28001
Claude code uses (and Anthropic owns) Bun, so my guess is they're doing a production build, expecting it not to output source maps, but it is.
chalmovsky an hour ago
It was not caused by this. https://github.com/oven-sh/bun/issues/28001#issuecomment-416...
stared 44 minutes ago
Were source maps needed? Reverse engineering has gotten easy with GPT-4.2-Codex and Opus 4.6 - even from raw binaries: https://quesma.com/blog/chromatron-recompiled/
jakegmaths 21 minutes ago
My apologies, this isn't the cause. Bun build doesn't suffer from this bug.
swyx 7 minutes ago
hn should allow append-only edits, but appreciate the correction
190n 2 hours ago
It could be because of a Bun bug, but I don't think it's because of that one. It's a duplicate of a year-old issue, and it's specific to Bun.serve.
petcat an hour ago
Yeah this bun development server bug has nothing to do with the Claude Code leak.
dimgl an hour ago
I doubt it's this. This was an `npm` misconfiguration.
lanbin 3 hours ago
Open Claude Code?
Better than OpenCode and Codex
arcanemachiner 3 hours ago
I wish.
Claude Code is clearly a pile of vibe-coded garbage. The UI is janky and jumps all over the place, especially during longer sessions. (Which also have a several second delay to render. In a terminal).
Lately, it's been crashing if I hold the Backspace key down for too long.
Being open-source would be the best thing to happen to them. At least they would finally get a pair of human eyes looking at their codebase.
Claude is amazing, but the people at Anthropic make some insane decisions, including trying (and failing, apparently) to keep Claude Code a closed-source application.
_verandaguy 2 hours ago
ambicapter 2 hours ago
snackbroken an hour ago
johnmaguire 2 hours ago
encoderer 2 hours ago
lrvick an hour ago
If you want something better than both of those try Crush which is a standalone go binary by the original developer of OpenCode.
rurban 3 hours ago
Not really. This guy expresses my feelings: https://www.youtube.com/watch?v=nxB4M3GlcWQ I also prefer codex over claude. But opencode is best. If you can use a good model. We can via Github Business Subscription.
sandipb 2 hours ago
cute_boi an hour ago
I don’t think that’s the reason, but using Bun for production this early is a bad idea. It’s still too buggy, and compromising stability for a 2–3% performance gain just isn’t worth it.
leeoniya an hour ago
> for a 2–3% performance gain
this is highly workload-dependent. there are plenty of APIs that are several times faster and 10x more memory-efficient due to native implementation.
foob 4 hours ago
Amusingly, they deprecated it with a message of "Unpublished" instead of actually unpublishing it [1]. When you use npm unpublish, it removes the package version from the registry; when you use npm deprecate, it leaves it there and simply marks the package as deprecated with your message. I have to imagine the point was to make it harder for people to download the source map, so deprecating it with this message gives off a bit of a "claude, unpublish the latest version of this package for me" vibe.
[1] - https://www.npmjs.com/package/@anthropic-ai/claude-code/v/2....
scotty79 16 minutes ago
I think they are aware that things don't disappear from the internet. So they chose just to gently indicate that it wasn't meant for publishing.
hanspagel 2 hours ago
You can’t unpublish an npm package with more than 100 downloads, I think.
Normal_gaussian 2 hours ago
The policy is https://docs.npmjs.com/policies/unpublish
Packages published less than 72 hours ago
For newly created packages, as long as no other packages in the npm Public Registry depend on your package, you can unpublish anytime within the first 72 hours after publishing.
There are 231+ packages that depend on this one, and I imagine they mostly use permissive enough version ranges that this was included.
firloop 2 hours ago
Looks like Anthropic called in a favor and it's removed now.
SV_BubbleTime 2 hours ago
jaapz 2 hours ago
You can say what you want about anthropic but they sure as hell are dogfooding the crap out of claude code lmao
kami23 2 hours ago
In all my years of writing tools for other devs, dogfooding is really the best way to develop IMO. The annoying bugs get squashed because I get frustrated with them in my flow.
Iterating on an MCP tool while having Claude try to use it has been a really great way of getting it to work how others, coming in blind, are going to use it.
Yes it's buggy as hell, but as someone echoed earlier if the tool works most of the time, a lot of people don't care. Moving fast and breaking things is the way in an arms race.
treexs 10 hours ago
The big loss for Anthropic here is how it reveals their product roadmap via feature flags. A big one is their unreleased "assistant mode" with code name kairos.
Just point your agent at this codebase and ask it to find things and you'll find a whole treasure trove of info.
Edit: some other interesting unreleased/hidden features
- The Buddy System: Tamagotchi-style companion creature system with ASCII art sprites
- Undercover mode: Strips ALL Anthropic internal info from commits/PRs for employees on open source contributions
BoppreH 8 hours ago
Undercover mode also pretends to be human, which I'm less ok with:
https://github.com/chatgptprojects/claude-code/blob/642c7f94...
0x3f 8 hours ago
You'll never win this battle, so why waste feelings and energy on it? That's where the internet is headed. There's no magical human verification technology coming to save us.
lrvick an hour ago
danny_codes 4 minutes ago
j2kun 3 hours ago
layer8 10 minutes ago
taurath an hour ago
gspr 9 minutes ago
thih9 an hour ago
themafia 10 minutes ago
SV_BubbleTime 2 hours ago
marricks an hour ago
matkoniecz 7 hours ago
xyzal 6 hours ago
ex-aws-dude 2 hours ago
RockRobotRock 7 hours ago
stackghost 3 hours ago
keybored 6 hours ago
jesse_dot_id 4 hours ago
mrlnstk 8 hours ago
But will this be released as a feature? For me it seems like it's an Anthropic internal tool to secretly contribute to public repositories to test new models etc.
BoppreH 8 hours ago
jen20 2 hours ago
Also unintentionally reveals something:
> Write commit messages as a human developer would — describe only what the code change does.
That's not what a commit message is for, that's what the diff is for. The commit message should explain WHY.
Sadly not doing that likely does indeed make it appear more human...
nightpool 38 minutes ago
embedding-shape an hour ago
shaky-carrousel 8 hours ago
> Write commit messages as a human developer would — describe only what the code change does.
The undercover mode prompt was generated using AI.
kingstnap 7 hours ago
fleebee 3 hours ago
skeledrew 2 hours ago
Heh, this is what people who are hostile against AI-generated contributions get. I always figured it'd happen soon enough, and here it is in the wild. Who knows where else it's happening...
sandos 7 hours ago
This is my pet peeve with LLMs: they almost always fail to write like a normal human would, mentioning logs or other meta-things that are not at all interesting.
sgc 7 hours ago
erisnet an hour ago
The first two zips I downloaded today were 9,887,340 bytes; why is yours 10,222,630 bytes?
lazysheepherd 5 hours ago
1) This seems to be strictly for Anthropic internal tooling. 2) It does not "pretend to be human"; it is instructed to "Write commit messages as a human developer would — describe only what the code change does."
Since when is "describe only what the code change does" pretending to be human?
You guys are just mining for things to moan about at this point.
BoppreH 4 hours ago
LelouBil 5 hours ago
Time to ask if the contributor knows what a Capybara is, as a new Turing test.
vips7L 8 hours ago
That whole “feature” is vile.
silversmith an hour ago
t0mas88 5 hours ago
Note also the "Claude Capybara" reference in the undercover prompt: https://github.com/chatgptprojects/claude-code/blob/642c7f94...
20k 5 hours ago
This seems like a good way to weed out models: ask them to include the term capybara in their commit messages
jasonlotito 3 hours ago
At least this was known with the Mythos "early blog post" fiasco.
baxtr an hour ago
Is there an AGI mode FF? Asking for a friend…
denimnerd42 7 hours ago
all these flags are findable by pointing claude at the binary and asking it to find feature flags.
avaer 9 hours ago
(spoiler alert)
Buddy system is this year's April Fool's joke, you roll your own gacha pet that you get to keep. There are legendary pulls.
They expect it to go viral on Twitter so they are staggering the reveals.
cmontella 7 hours ago
lol that's funny, I have been working seriously [1] on a feature like this after first writing about it jokingly [2] earlier this year.
The joke was the assistant is a cat who is constantly sabotaging you, and you have to take care of it like a gacha pet.
The seriousness though is that actually, disembodied intelligences are weird, so giving them a face and a body and emotions is a natural thing, and we already see that with various AI mascots and characters coming into existence.
[1]: serious: https://github.com/mech-lang/mech/releases/tag/v0.3.1-beta
[2]: joke: https://github.com/cmontella/purrtran
hansonkd 2 hours ago
JohnLocke4 9 hours ago
You heard it here first
ares623 9 hours ago
So close to April Fool's too. I'm sure it will still be a surprise for a majority of their users.
TIPSIO 7 hours ago
If this is true, my old personal agent Claude Code setup I open sourced last month will finally be obsolete (1 month, lol):
- Telegram Integration => CC Dispatch
- Crons => CC Tasks
- Animated ASCII Dog => CC Buddy
redrove 6 hours ago
Not necessarily; I would very much like to use those features on a Linux server. Currently the Anthropic implementation forces a desktop (or worse, a laptop) to be turned on instead of working headless as far as I understand it.
I’ll give clappie a go, love the theme for the landing page!
sanex 4 hours ago
Clappie looks much more fabulous than CC though. I'll have to give it a try. I like how you put the requests straight into an already running CC session instead of calling `claude -p` every time like the claws.
TIPSIO 3 hours ago
Narretz 2 hours ago
Dispatch and scheduled tasks have been available for a few weeks already, although with limitations.
barbazoo 6 hours ago
Poor mum
TIPSIO 5 hours ago
mghackerlady 6 hours ago
one of those is adorable and the other one is unethical
charcircuit 6 hours ago
People already can look at the source without this leak. People have had hacked builds force enabling feature flags for a long time.
sheeshkebab 32 minutes ago
Obfuscated ts/js code is not machine code to begin with, so not sure what’s the big deal.
Also, not sure why anthropic doesn’t just make their cli open source - it’s not like it’s something special (Claude is, this cli thingy isn’t)
petcat 26 minutes ago
> not sure why anthropic doesn’t just make their cli open source
They don't want everyone to see how poorly it's implemented and that the whole thing is a big fragile mess riddled with bugs. That's my experience anyway.
For instance, just recently their little CLI -> browser oauth login flow was generating malformed URLs and URLs pointing to a localhost port instead of their real website.
kschiffer 8 hours ago
Finally all spinner verbs revealed: https://github.com/instructkr/claude-code/blob/main/src/cons...
tony-vlcek 3 hours ago
The link now returns 404.
Here's one that works (for now): https://github.com/chatgptprojects/claude-code/blob/642c7f94...
Gormo 7 hours ago
I'm glad "reticulating" is in there. Just need to make sure "splines" is in the nouns list!
avaer 7 hours ago
Relieved to know I'm not the only one who grepped for that. Thank you for making me feel sane, friend.
ticulatedspline 7 hours ago
bonoboTP 7 hours ago
It's not hard to find them; they are in clear text in the binary. You can search for known ones with grep and find the rest nearby. You could even replace them in place (but now it's configurable).
moontear 7 hours ago
What's going on with the issues in that repo? https://github.com/instructkr/claude-code/issues
avaer 7 hours ago
It seems human. It taught me 合影 (literally "group photo"), which seems to be Chinese slang for just wanting to be seen in the comments. Probably not a coincidence that it's after work hours in China.
Really interesting to see Github turn into 4chan for a minute, like GH anons rolling for trips.
breakds an hour ago
lanbin 3 hours ago
g947o 7 hours ago
There have been massive GitHub issue spams recently, including in Microsoft's WSL repository.
Quarrel 7 hours ago
trying to get github to nuke the repo? at a guess.
certainly nothing friendly.
proactivesvcs 7 hours ago
I saw this on restic's main repository the other day.
tommit 7 hours ago
oh wow, there are like 10 opened every minute. seems spam-y
spoiler 8 hours ago
Random aside: I've seen a 2015 game be accused of AI slop on Steam because it used a similar concept... And mind you, there's probably thousands of games that do this.
First it was punctuation and grammar, then linguistic coherence, and now it's tiny bits of whimsy that are falling victim to AI accusations. Good fucking grief
PunchyHamster 5 hours ago
All that is needed to solve that is to reliably put an AI disclaimer on things done by AI.
Which of course won't be done because corporations don't want that (except Valve I guess), so blame them.
moron4hire 8 hours ago
To me, this is a sign of just how much regular people do not want AI. This is worse than crypto and metaverse before it. Crypto, people could ignore and the dumb ape pictures helped you figure out who to avoid. Metaverse, some folks even still enjoyed VR and AR without the digital real estate bullshit. And neither got shoved down your throat in everyday, mundane things like writing a paper in Word or trying to deal with your auto mechanic.
But AI is causing such visceral reactions that it's bleeding into other areas. People are so averse to AI they don't mind a few false positives.
bonoboTP 7 hours ago
sunaookami 7 hours ago
Levitz 6 hours ago
world2vec 7 hours ago
Did they remove that in some very recent commit?
raesene9 6 hours ago
I think the original repo OP mentioned decided not to host the code any more, but given there are 28k+ forks, it's not too hard to find again...
bkryza 9 hours ago
They have an interesting regex for detecting negative sentiment in users prompt which is then logged (explicit content): https://github.com/chatgptprojects/claude-code/blob/642c7f94...
I guess these words are to be avoided...
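For illustration, a detector in that style might look like the following. This is a minimal sketch with an invented word list and function name, not Anthropic's actual pattern:

```typescript
// Hypothetical sketch of a regex-based frustration detector.
// The word list here is invented for illustration; the leaked
// pattern is longer and this is not Anthropic's actual code.
const NEGATIVE_SENTIMENT = /\b(wtf|stupid|terrible|frustrating|broken)\b/i;

function isNegativePrompt(input: string): boolean {
  // One case-insensitive pass over the prompt, whole words only.
  return NEGATIVE_SENTIMENT.test(input);
}

console.log(isNegativePrompt("WTF, this is so frustrating")); // true
console.log(isNegativePrompt("please continue"));             // false
```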
BoppreH 8 hours ago
An LLM company using regexes for sentiment analysis? That's like a truck company using horses to transport parts. Weird choice.
lopsotronic 5 hours ago
The difference in response time - especially versus a regex running locally - is really difficult to express to someone who hasn't made much use of LLM calls in their natural language projects.
Someone said 10,000x slower, but that's off - in my experience - by about four orders of magnitude. And that's average, it gets much worse.
Now personally I would have maybe made a call through a "traditional" ML widget (scikit, numpy, spaCy, fastText, sentence-transformer, etc) but - for me anyway - that whole entire stack is Python. Transpiling all that to TS might be a maintenance burden I don't particularly feel like taking on. And on client facing code I'm not really sure it's even possible.
cyanydeez 5 hours ago
wcrossbow 3 hours ago
noprof6691 3 hours ago
mlmonkey 3 hours ago
stingraycharles 8 hours ago
Because they want it to be executed quickly and cheaply without blocking the workflow? Doesn’t seem very weird to me at all.
_fizz_buzz_ 8 hours ago
Foobar8568 8 hours ago
orphea 7 hours ago
nojs 4 hours ago
Oh it’s worse than that. This one ended up getting my account banned: https://github.com/anthropics/claude-code/issues/22284
lanbin 3 hours ago
cryptonector 4 hours ago
blks 7 hours ago
Because they actually want it to work 100% of the time and cost nothing.
mohsen1 5 hours ago
orphea 7 hours ago
floralhangnail 6 hours ago
Well, regex doesn't hallucinate....right?
raw_anon_1111 3 hours ago
geon 4 hours ago
codegladiator 8 hours ago
what you are suggesting would be like a truck company using trucks to move things within the truck
argee 8 hours ago
arnarbi an hour ago
It's more like workers on a large oil tanker using bicycles to move around it, rather than trying to use another oil tanker.
draxil 8 hours ago
Good to have more than a hammer in your toolbox!
scotty79 17 minutes ago
As far as I can tell they do nothing with it. They just log it.
nitekode 3 hours ago
A lot of things don't make sense until you involve scale. A regex could be good enough to give a general gist.
ldobre 3 hours ago
It's more like a truck company using people to transport some parts. I could be wrong here, but I bet this happens in Volvo's factories a lot.
raw_anon_1111 3 hours ago
Cloud hosted call centers using LLMs is one of my specialties. While I use an LLM for more nuanced sentiment analysis, I definitely use a list of keywords as a first level filter.
makeitrain 4 hours ago
Don’t worry, they used an llm to generate the regex.
__alexs 6 hours ago
Using some ML to derive a sentiment regex seems like a good idea, actually?
irthomasthomas 5 hours ago
This just proves it's vibe-coded, because LLMs love writing solutions like that. I probably have a hundred examples just like it in my history.
irthomasthomas 2 hours ago
pdntspa 3 hours ago
LLMs cost money, regular expressions are free. It really isn't so strange.
apgwoz 4 hours ago
> That's like a truck company using horses to transport parts. Weird choice.
Easy way to claim more “horse power.”
harikb 5 hours ago
Not everything done by claude-code is decided by LLM. They need the wrapper to be deterministic (or one-time generated) code?
throwaw12 7 hours ago
because the impact of "WTF" might be lost in the result of the analysis if you rely solely on an LLM.
Parsing "WTF" with a regex also signifies the impact and reduces the noise in metrics.
"determinism > non-determinism": when you are analysing sentiment, why not make some things more deterministic?
A cool thing about this solution is that you can evaluate LLM sentiment accuracy against the regex-based approach and analyse discrepancies.
ojr 8 hours ago
I used regexes in a similar way but my implementation was vibecoded, hmmm, using your analysis Claude Code writes code by hand.
mghackerlady 6 hours ago
More like a car company transporting their shipments by truck. It's more efficient
pfortuny 7 hours ago
They had the problem of sentiment analysis. They use regexes.
You know the drill.
kjshsh123 7 hours ago
Using regex with LLMs isn't uncommon at all.
feketegy 5 hours ago
It's all regex anyways
lazysheepherd 5 hours ago
Because they are engineers? The difference between an engineer and a hobbyist is an engineer has to optimize the cost.
As they say: any idiot can build a bridge that stands, only an engineer can build a bridge that barely stands.
intended 5 hours ago
The amount of trust and safety work that depends on google translate and the humble regex, beggars the imagination.
j45 5 hours ago
Asking non-deterministic software to act like deterministic software (regex) can be a significantly higher use of tokens/compute for no benefit.
Some things will be much better with inference, others won’t be.
sumtechguy 7 hours ago
hmm not a terrible idea (I think).
You have a semi-expensive process, but you want to keep particular known context out. So a quick and dirty search sits just in front of the expensive process. Instead of 'figure sentiment (20 seconds)', you have 'quick check sentiment (<1 sec)' then 'figure sentiment v2 (5 seconds)'. Now if it is just pure regex on its own, then your analogy would hold up just fine.
I could see me totally making a design choice like that.
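A cheap-gate-then-deep-check pipeline like the one described above can be sketched as follows. The function names and labels are hypothetical, for illustration only:

```typescript
// Hypothetical two-stage sentiment pipeline: a cheap regex gate in
// front of an expensive analysis step. Names are made up for
// illustration; this is not the actual Claude Code implementation.
const QUICK_NEGATIVE = /\b(wtf|terrible|broken)\b/i;

async function analyzeSentiment(
  input: string,
  deepAnalysis: (s: string) => Promise<string>, // the slow, expensive step
): Promise<string> {
  if (!QUICK_NEGATIVE.test(input)) {
    return "neutral"; // cheap path: skip the expensive call entirely
  }
  return deepAnalysis(input); // only flagged inputs pay the full cost
}
```

Only inputs that trip the cheap gate ever reach the expensive step; everything else short-circuits almost for free.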
make3 4 hours ago
it's like a faster than light spaceship company using horses. There's been infinite solutions to do this better even CPU only for years lol.
lou1306 8 hours ago
They're searching for multiple substrings in a single pass, regexes are the optimal solution for that.
noosphr 8 hours ago
BoppreH 8 hours ago
sfn42 5 hours ago
It's almost as if LLMs are unreliable
joeblau 5 hours ago
We used this in 2011 at the startup I worked for. 20 positive and 20 negative words was good enough to sell Twitter "sentiment analysis" to companies like Apple, Bentley, etc...
vdfs 3 hours ago
Did you also forget to ignore case sensitivity back then?
adzm 3 hours ago
moontear 8 hours ago
I don't know about avoided; this kind of represents the WTFs-per-minute code quality measurement. When I write WTF as a response to Claude, I would actually love it if an Anthropic engineer would take a look at what mess Claude has created.
zx8080 6 hours ago
WTF per minute strongly correlates with increased token spending.
It may be decided at Anthropic at some point to increase the wtf/min metric, not decrease it.
Paradigma11 5 hours ago
conception 7 hours ago
/feedback works for that i believe
pprotas 6 hours ago
Everyone is commenting how this regex is actually a master optimization move by Anthropic
When in reality this is just what their LLM coding agent came up with when some engineer told it to "log user frustration"
jeanlucas 6 hours ago
>Everyone is commenting how this regex is actually a master optimization move by Anthropic
No? I'd say not even 50% of the comments are positive right now.
glitch13 5 hours ago
amichal 3 hours ago
If this code is real and complete then there are no callers of those methods other than a logger line
rurp 3 hours ago
I was thinking the opposite. Using those words might be the best way to provide feedback that actually gets considered.
I've been wondering if all of these companies have some system for flagging upset responses. Those cases seem like they are far more likely than average to point to weaknesses in the model and/or potentially dangerous situations.
ezekg 5 hours ago
Nice, "wtaf" doesn't match so I think I'm out of the dog house when the clanker hits AGI (probably).
ZainRiz 5 hours ago
They also have a "keep going" keyword, literally just "continue" or "keep going", just for logging.
I've been using "resume" this whole time
indigodaddy 5 hours ago
Continue?
speedgoose 7 hours ago
I guess using French words is safe for now.
gilbetron 6 hours ago
That's undoubtedly to detect frustration signals, a useful metric/signal for UX. The UI equivalent is the user shaking their mouse around or clicking really fast.
mcv 6 hours ago
I'm clearly way too polite to Claude.
Also:
  // Match "continue" only if it's the entire prompt
  if (lowerInput === 'continue') {
    return true
  }
When it runs into an error, I sometimes tell it "Continue", but sometimes I give it some extra information. Or I put a period behind it. That clearly doesn't give the same behaviour.
integralid 5 hours ago
I always type "please continue". I guess being polite is not a good idea.
SoftTalker 4 hours ago
hombre_fatal 4 hours ago
The only time that function is used in the code is to log it:
  logEvent('tengu_input_prompt', { isNegative, isKeepGoing })
jollymonATX 4 hours ago
Makes me wonder what happens once flagged behind the api.
dostick 5 hours ago
“Go on” works fine too
bean469 6 hours ago
Curiously "clanker" is not on the list
FranOntanaya 5 hours ago
That looks a bit bare minimum, not the use of regex but rather that it's a single line with a few dozen words. You'd think they'd have a more comprehensive list somewhere and assemble or iterate the regex checks as needed.
DIVx0 3 hours ago
oh I hope they really are paying attention. Even though I'm 100% aware that claude is a clanker, sometimes it exhibits such bizarre behavior that it triggers my lizard brain to react to it. That experience troubles me so much that I've mostly stopped using claude code. Claude won't even semi-reliably follow its own policies, sometimes even immediately after you confirm it knows about them.
alex_duf 7 hours ago
everyone here is commenting on how odd it looks to use a regexp for sentiment analysis, but it depends what they're trying to do.
It could be used as feedback when they run A/B tests: they can compare which version of the model gets more insults than the other. It doesn't matter if the list is exhaustive or even sane; what matters is how you compare one version to the other.
Perfect? no. Good and cheap indicator? maybe.
nico 4 hours ago
Probably a lot of my prompts have been logged then. I’ve used wtf so many times I’ve lost track. But I guess Claude hasn’t
jollymonATX 4 hours ago
Did you notice a change in quality after you went foul?
nico 32 minutes ago
DIVx0 3 hours ago
ozim 7 hours ago
There is no "stupid". I often write "(this is stupid|are you stupid) fix this".
And Claude was having in its chain of thought "user is frustrated", and I wrote to it that I am not frustrated, just testing prompt optimization, where acting like one is frustrated should yield better results.
sreekanth850 8 hours ago
Glad the abusive words on my list are not in there, but it's surprising that they use regex for sentiment.
AIorNot 5 hours ago
OMG WTF
1970-01-01 7 hours ago
Hmm.. I flag things as 'broken' often and I've been asked to rate my sessions almost daily. Now I see why.
francisofascii 7 hours ago
Interesting that expletives and words that are more benign like "frustrating" are all classified the same.
nananana9 6 hours ago
I doubt they're all classified the same. I'd guess they're using this regex as a litmus test to check if something should be submitted at all, they can then do deeper analysis offline after the fact.
johnfn 4 hours ago
Surely "so frustrating" isn't explicit content?
nodja 8 hours ago
If anyone at anthropic is reading this and wants more logs from me add jfc.
stefanovitti 5 hours ago
so they think that everybody on earth swears only in english?
ccvannorman 6 hours ago
you'd better be careful with your typos, as well
stainablesteel 6 hours ago
i dislike LLMs going down that road, i don't want to be punished for being mean to the clanker
alsetmusic 5 hours ago
> terrible
I know I used this word two days ago when I went through three rounds of an agent telling me that it fixed three things without actually changing them.
I think starting a new session and telling it that the previous agent's work / state was terrible (so explain what happened) is pretty unremarkable. It's certainly not saying "fuck you". I think this is a little silly.
dheerajmp 8 hours ago
Yeah, this is crazy
smef 7 hours ago
so frustrating..
raihansaputra 8 hours ago
i wish that's for their logging/alerts. i definitely gauge a model's performance by how many of those words i type when i'm frustrated driving claude code.
samuelknight 8 hours ago
Ridiculous string comparisons on long chains of logic are a hallmark of vibe-coding.
dijit 7 hours ago
It's actually pretty common for old sysadmin code too..
You could always tell when a sysadmin started hacking up some software by the if-else nesting chains.
TeMPOraL 7 hours ago
Nah, it's a hallmark of your average codebase in pre-LLM era.
mohsen1 8 hours ago
src/cli/print.ts
This is the single worst function in the codebase by every metric:
- 3,167 lines long (the file itself is 5,594 lines)
- 12 levels of nesting at its deepest
- ~486 branch points of cyclomatic complexity
- 12 parameters + an options object with 16 sub-properties
- Defines 21 inner functions and closures
- Handles: agent run loop, SIGINT, rate limits, AWS auth, MCP lifecycle, plugin install/refresh, worktree bridging, team-lead polling (while(true) inside), control message dispatch (dozens of types), model switching, turn interruption recovery, and more
This should be at minimum 8–10 separate modules.
mohsen1 7 hours ago
here's another gem. src/ink/termio/osc.ts:192–210
void execFileNoThrow('wl-copy', [], opts).then(r => {
  if (r.code === 0) { linuxCopy = 'wl-copy'; return }
  void execFileNoThrow('xclip', ...).then(r2 => {
    if (r2.code === 0) { linuxCopy = 'xclip'; return }
    void execFileNoThrow('xsel', ...).then(r3 => {
      linuxCopy = r3.code === 0 ? 'xsel' : null
    })
  })
})
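For comparison, the same first-success fallback chain flattens naturally with async/await. This is a sketch, not the real implementation; execFileNoThrow is modeled as an injected function, and the { code } result shape is assumed from the snippet above:

```typescript
// Hypothetical flattened version of the clipboard-tool probe above.
// The exec function stands in for execFileNoThrow: it runs a binary
// and resolves with its exit code. Real signatures may differ.
type ExecResult = { code: number };

async function detectLinuxCopy(
  exec: (cmd: string) => Promise<ExecResult>,
): Promise<string | null> {
  // Try each clipboard tool in order; the first that exits 0 wins.
  for (const cmd of ["wl-copy", "xclip", "xsel"]) {
    const r = await exec(cmd);
    if (r.code === 0) return cmd;
  }
  return null;
}
```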
are we doing async or not?
visarga 2 hours ago
Claude Code says thank you for reporting, I bet they will scan this chat to see what bugs they need to fix asap.
almostdeadguy 3 hours ago
A defining work of the "just vibes" era.
mrcwinn an hour ago
sudo_man 6 hours ago
LOOOOOOOOOOL
novaleaf 6 hours ago
I'm sure this is no surprise to anyone who has used CC for a while. This is the source of so many bugs. I would say "open bugs" but Anthropic auto-closes bugs that don't have movement on them in like 60 days.
0xbadcafebee 3 hours ago
> This should be at minimum 8–10 separate modules.
Can't really say that for sure. The way humans structure code isn't some ideal best possible state of computer code, it's the ideal organization of computer code for human coders.
Nesting and cyclomatic complexity are indicators ("code smells"). They aren't guaranteed to lead to worse outcomes. If you have a function with 12 levels of nesting, but in each nest the first line is 'return true', you actually have 1 branch. If 2 of your 486 branch points are hit 99.999% of the time, the code is pretty dang efficient. You can't tell for sure if a design is actually good or bad until you run it a lot.
One thing we know for sure is LLMs write code differently than we do. They'll catch incredibly hard bugs while making beginner mistakes. I think we need a whole new way of analyzing their code. Our human programming rules are qualitative because it's too hard to prove if an average program does what we want. I think we need a new way to judge LLM code.
The worst outcome I can imagine would be forcing them to code exactly like we do. It just reinforces our own biases, and puts in the same bugs that we do. Vibe coding is a new paradigm, done by a new kind of intelligence. As we learn how to use it effectively, we should let the process of what works develop naturally. Evolution rather than intelligent design.
zarzavat 2 hours ago
I don't buy this. Claude doesn't usually have any issues understanding my code. It has tons of issues understanding its code.
The difference between my code and Claude's code is that when my code is getting too complex to fit in my head, I stop and refactor it, since for me understanding the code is a prerequisite for writing code.
Claude, on the other hand, will simply keep generating code well past the point when it has lost comprehension. I have to stop, revert, and tell it to do it again with a new prompt.
If anything, Claude has a greater need for structure than me since the entire task has to fit in the relatively small context window.
FuckButtons 3 hours ago
I’ve heard this take before, but if you’ve spent any time with LLMs I don’t understand how your take can be: “I should just let this thing that makes mistakes all the time, and seems oblivious to the complexity it’s creating because it only observes small snippets out of context, make its own decisions about architecture; this is just how it does things and I shouldn’t question it.”
crakhamster01 an hour ago
> One thing we know for sure is LLMs write code differently than we do.
Kind of. One thing we do know for certain is that LLMs degrade in performance with context length. You will undoubtedly get worse results if the LLM has to reason through long functions and high LOC files. You might get to a working state eventually, but only after burning many more tokens than if given the right amount of context.
> The worst outcome I can imagine would be forcing them to code exactly like we do.
You're treating "code smells" like cyclomatic complexity as something that is stylistic preference, but these best practices are backed by research. They became popular because teams across the industry analyzed code responsible for bugs/SEVs, and all found high correlation between these metrics and shipping defects.
Yes, coding standards should evolve, but... that's not saying anything new. We've been iterating on them for decades now.
I think the worst outcome would be throwing out our collective wisdom because the AI labs tell us to. It might be good to question who stands to benefit when LLMs aren't leveraged efficiently.
meffmadd 3 hours ago
I think this view assumes no human will/should ever read the code. This is considered bad practice because someone else will not understand the code as well whether written by a human or agent. Unless 0% human oversight is needed anymore agents should still code like us.
jollymonATX 3 hours ago
Maybe going slow is a feature for them? A kind of rate limit by bad code way to controlling overall throughput.
ykonstant 5 hours ago
"That's Larry; he does most of the work around here."
dwa3592 5 hours ago
lmao
epolanski an hour ago
Hmmm it's likely they have found that it works better for LLMs that need to operate on it.
keeganpoppen 4 hours ago
the claude code team ethos, as far as i’ve been led to understand— which i agree with, mind you— is that there is no point in code-reviewing ai-generated code… simply update your spec(s) and regenerate. it is just a completely different way of interacting with the world. but it clearly works for them, so people throwing up their hands should at least take notice of the fact that they are absolutely not competing with traditional code along traditional lines. it may be sucky aesthetically, but they have proven from their velocity that it can be extremely effective. welcome to the New World Order, my friend.
knome 3 hours ago
>there is no point in code-reviewing ai-generated code
the idea that you should just blindly trust code you are responsible for without bothering to review it is ludicrous.
jen20 2 hours ago
eclipxe 2 hours ago
lqstuart 37 minutes ago
yes, because who ever heard of an AI leaking passwords or API keys into source code
lanbin 3 hours ago
I see. They got unlimited tokens, right?
Salgat 3 hours ago
While the technology is young, bugs are to be expected, but I'm curious what happens when their competitors mature their products, clean up the bugs, and stabilize, while Claude is still kept in this trap where a certain number of bugs and issues are just a constant fixture due to vibe coding. But hey, maybe they really do achieve AGI and get over the limitations of vibe coding without human involvement.
DustinBrett 5 hours ago
"You can get Claude to split that up"
mohsen1 6 hours ago
it's the `runHeadlessStreaming` function btw
acedTrex 5 hours ago
Well, literally no one has ever accused anthropic of having even half way competent engineers. They are akin to monkeys whacking stuff with a stick.
siruwastaken 6 hours ago
How is it that an AI coding agent that is supposedly _so great at coding_ is running on this kind of slop behind the scenes. /s
WesolyKubeczek 3 hours ago
But it is running, that's the mystery.
rirze 5 hours ago
Because it’s based on human slop. It’s simply the student.
phtrivier 8 hours ago
Yes, if it was made for human comprehension or maintenance.
If it's entirely generated / consumed / edited by an LLM, arguably the most important metric is... test coverage, and that's it ?
mdavid626 7 hours ago
Oh boy, you couldn't be more wrong. If anything, LLMs need MORE readable code, not less. Do you want to burn all your money on tokens?
jen20 2 hours ago
grey-area 7 hours ago
LLMs are so so far away from being able to independently work on a large codebase, and why would they not benefit from modularity and clarity too?
olmo23 6 hours ago
konart 7 hours ago
Can't we have generated / llm generated code to be more human maintainable?
mrbungie 7 hours ago
Can't wait to have LLM-generated physical objects that explode in your face and no engineer can fix.
phtrivier 4 hours ago
Bayko 7 hours ago
Yeah, I honestly don't understand his comment. Is it bad code writing? Pre-2026? Sure. In 2026? Nope. Is it going to be a headache for some poor person on call? Yes. But then again, are you "supposed" to go through every single line in 2026? Again, no. I hate it. But the world is changing, and until the bubble pops this is the new norm.
phtrivier 4 hours ago
yoz-y 5 hours ago
cedws 9 hours ago
ANTI_DISTILLATION_CC
This is Anthropic's anti-distillation defence baked into Claude Code. When enabled, it injects anti_distillation: ['fake_tools'] into every API request, which causes the server to silently slip decoy tool definitions into the model's system prompt. The goal: if someone is scraping Claude Code's API traffic to train a competing model, the poisoned training data makes that distillation attempt less useful.
nialse 6 hours ago
Paranoia. And also ironic considering their base LLM is a distillation of the web and books etc etc.
petcat 6 hours ago
They stole everything and now they want to close the gates behind them.
"I got the loot, Steve!"
I feel like the distillation stuff will end up in court if they try to sue an American company about it. We'll see what a judge says.
Andrex 4 hours ago
olalonde 5 hours ago
arcfour 6 hours ago
sheept 4 hours ago
It's not really paranoia if it's happening a lot. They wrote a blog post calling several major Chinese AI companies out for distillation.[0] Perhaps it is ironic, but it's within their rights to protect their business, like how they prohibit using Claude Code to make your own Claude Code.[1]
[0]: https://www.anthropic.com/news/detecting-and-preventing-dist... [1]: https://news.ycombinator.com/item?id=46578701
gmerc 2 hours ago
salawat 3 hours ago
jaccola 5 hours ago
I would say not all that ironic. Book publishers, Reddit, Stackoverflow, etc., tried their best to attract customers while not letting others steal their work. Now Anthropic is doing the same.
Unfortunately (for the publishers, at least) it didn't work to stop Anthropic and Anthropic's attempts to prevent others will not work either; there has been much distillation already.
The problem of letting humans read your work but not bots is just impossible to solve perfectly. The more you restrict bots, the more you end up restricting humans, and those humans will go use a competitor when they become pissed off.
brookst 4 hours ago
johnfn 5 hours ago
It is absolutely not paranoia. People are distilling Claude code all the time.
spiderfarmer 6 hours ago
That isn't irony, it's hypocrisy.
snapcaster 6 hours ago
keybored 6 hours ago
croes 6 hours ago
jjcm 41 minutes ago
It looks like it worked, fwiw.
The qwen 27b model distilled on Opus 4.6 has some known issues with tool use specifically: https://x.com/KyleHessling1/status/2038695344339611783
Fascinating.
3form an hour ago
I was thinking just yesterday that the research Anthropic was sharing about how easy it is to poison training data was unlikely to be conducted out of the goodness of their heart.
GorbachevyChase 2 hours ago
I like these guys less every day. The rate limits are so low they are close to not even useful as a provider.
mmaunder 5 hours ago
Haven’t looked at the code, but is the server providing the client with a system prompt that it can use, which would contain fake tool definitions when this is enabled? What enables it? And why is the client still functional when it’s giving the server back a system prompt with fake tool definitions? Is the LLM trained to ignore those definitions?
Wonder if they’re also poisoning Sonnet or Opus directly generating simulated agentic conversations.
cedws 4 hours ago
Not sure, and not completely convinced of the explanation, but the way this sticks out so obviously makes it look like a honeypot to me.
mmaunder 5 minutes ago
crazylogger 5 hours ago
Why would this be in the client code though?
Painsawman123 8 hours ago
Really surprising how many people are downplaying this leak! "Google and OpenAI have already open sourced their agents, so this leak isn't that relevant." What Google and OpenAI have open sourced is their Agents SDK, a toolkit, not the secret sauce of how their flagship agents are wired under the hood! Expect the takedown hammer on the tweet, the R2 link, and any public repos soon.
loveparade 7 hours ago
It's exactly the same as the open source codex/gemini and other clis like opencode. There is no secret sauce in the claude cli, and the agent harness itself is no better (worse IMO) than the others. The only thing interesting about this leak is that it may contain unreleased features/flags that are not public yet and hint at what Anthropic is working on.
IceWreck an hour ago
> What Google and OpenAi have open sourced is their Agents SDK, a toolkit, not the secret sauce of how their flagship agents are wired under the hood
And how is that any different? Claude Code is a harness, similar to open source ones like Codex, Gemini CLI, OpenCode etc. Their prompts were already public because you could connect it to your own LLM gateway and see everything. The code was transpiled javascript which is trivial to read with LLMs anyways.
weird-eye-issue 4 hours ago
It doesn't matter that much. Trust me you could just have an LLM reverse engineer the obfuscated code.
sodapopcan an hour ago
The point is that a "secure coding platform" leaked something they were trying to keep under wraps, whether the contents of the leak matter or not.
Also, as many others have pointed out, there is roadmap info in here that wouldn't be available in the production build.
ithkuil 3 hours ago
yeah it actually works to use claude to reverse engineer itself; I've used that to work around some problems. E.g. that's how I discovered that I had to put two slashes for absolute paths in the sandbox config. The thing is, the claude team moves so quickly, adding more and more features and fixing more and more bugs, that your workarounds soon become obsolete
hmokiguess 5 hours ago
Do you think the other companies don’t have sufficient resources to attempt reverse engineering and deobfuscating a client side application?
The source maps help for sure, but it’s not like client code is kept secret, maybe they even knew about the source maps a while back just didn’t bother making it common knowledge.
This is not a leak of the model weights or server side code.
danmaz74 2 hours ago
I guess that the most important potential "secret sauce" for a coding agent would be its prompts, but that's also one of the easiest things to find out by simply intercepting its messages.
mholm 2 hours ago
The only real secret sauce is the training methods and datasets used for refining harness usage. Claude Code is a lot better than gemini-cli/open-code/etc because Claude is specifically trained on how to run in that environment. It's been rlhf'd to use the provided tools correctly, and know the framework in which it operates, instead of relying solely on context.
kaszanka 7 hours ago
Is https://github.com/google-gemini/gemini-cli not 'the flagship agent' itself? It looks that way to me, for example here's a part of the prompt https://github.com/google-gemini/gemini-cli/blob/e293424bb49...
MallocVoidstar 7 hours ago
Codex is open source: https://github.com/openai/codex
nunez 4 hours ago
Yeah, this is the LLaMa leak moment for agentic app dev, IMO. Huge deal. Big win for Opencode and the like.
mmaunder 4 hours ago
Agreed. This is a big deal.
avaer 10 hours ago
Would be interesting to run this through Malus [1] or literally just Claude Code and get open source Claude Code out of it.
I jest, but in a world where these models have been trained on gigatons of open source I don't even see the moral problem. IANAL, don't actually do this.
rvnx 8 hours ago
Malus is not a real project btw, it's a parody:
“Let's end open source together with this one simple trick”
https://pretalx.fosdem.org/fosdem-2026/talk/SUVS7G/feedback/
Malus is translating code into text, and from text back into code.
It gives the illusion of clean room implementation that some companies abuse.
The irony is that ChatGPT/Claude answers are all actually directly derived from open-source code, so...
otikik 8 hours ago
They accept real money though.
chillfox 5 hours ago
It's not a parody when they accept money and deliver the service.
monooso 4 hours ago
LelouBil 5 hours ago
First time I hear about this, it's interesting to have written all of this out.
Now this makes me think of game decompilation projects, which would seem to fall in the same legal area as code that would be generated by something like Malus.
Different code, same end result (binary or api).
We definitely need to know what the legal limits are and should be
quadruple 4 hours ago
throawayonthe 4 hours ago
sumeno 7 hours ago
No real reason to do that, they say Claude Code is written by Claude, which means it has no copyright. Just use the code directly
williamcotton 5 hours ago
What about trade secrets, breach of contract, etc, etc?
jpetso 4 hours ago
fsmv 4 hours ago
dns_snek 4 hours ago
NitpickLawyer 10 hours ago
The problem is the oauth and their stance on bypassing that. You'd want to use your subscription, and they probably can detect that and ban users. They hold all the power there.
avaer 10 hours ago
You'd be playing cat and mouse like yt-dlp, but there's probably more value to this code than just a temporary way to milk claude subscriptions.
esperent 7 hours ago
stingraycharles 8 hours ago
woleium 10 hours ago
Just use one of the distilled claude clones instead https://x.com/0xsero/status/2038021723719688266?s=46
echelon 9 hours ago
pkaeding 9 hours ago
Could you use claude via aws bedrock?
NitpickLawyer 5 hours ago
conradfr an hour ago
dahcryn 8 hours ago
I love the irony of seeing the contribution counter at 0.
Who'd have thought: the audience that doesn't want to give back to the open source community, giving 0 contributions...
larodi 8 hours ago
It reads attribution really?
kelnos 8 hours ago
Oh god, I was so close to believing Malus was a real product and not satire.
magistr4te 8 hours ago
It is a real product. They take real payments and deliver on what's promised. Not sure if it's an attempt to subvert criticism by using satirical language, or if they truly have so little respect for the open source community.
otikik 7 hours ago
Yeah... look again.
aizk 7 hours ago
This has happened before. It was called anon kode.
gosub100 7 hours ago
What are they worried about? Someone taking the company's job? Hehe
TIPSIO 7 hours ago
Eh, the value is the unlimited Max plan which they have rightfully banned from third-party use.
People simply want Opus without fear of billing nightmare.
That’s like 99% of it.
hk__2 8 hours ago
For a combo with another HN homepage story, Claude Code uses… Axios: https://x.com/icanvardar/status/2038917942314778889?s=20
ankaz 7 hours ago
I've checked: current Claude Code 2.1.87 uses Axios version 1.14.0, just one before the compromised 1.14.1
To stop Claude Code from auto-updating, add `export DISABLE_AUTOUPDATER=1` to your global environment variables (~/.bashrc, ~/.zshrc, or such), restart all sessions and check that it works with `claude doctor`, it should show `Auto-updates: disabled (DISABLE_AUTOUPDATER set)`
solaire_oa 3 hours ago
This is good info, thanks. Can I ask how you detected that version of axios? I checked the source (from another comment) and the package.json dependencies are empty....
blobbers an hour ago
It's a little bit shocking that this zipfile is still available hours later.
Could anyone in legal chime in on the legality of now 're-implementing' this type of system inside other products? Or even just having an AI look at the architecture and implement something else?
It would seem, given the source code, that AI could clone something like this incredibly fast, and not waste its time using TS as well.
Any legal GC type folks want to chime in on the legality of examining something like this? Or is it like tainted goods you don't want to go near?
airstrike an hour ago
AI works are not copyrightable so...
ZeWaka an hour ago
dionian an hour ago
there are python ports up on github
fatcullen an hour ago
There's a bunch of unreleased features and update schedules in the source, cool to see.
One neat one is the /buddy feature, an easter egg planned for release tomorrow for April fools. It's a little virtual pet, sort of like Tamagotchi, randomly generated with 18 species, rarities, stats, hats, custom eyes.
The random generation algorithm is all in the code though, deterministic based on your account's UUID in your claude config, so it can be predicted. I threw together a little website here to let you check what you're going to get ahead of time: https://claudebuddychecker.netlify.app/
Got a legendary ghost myself.
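For anyone curious how a checker site like that can work: the leak reportedly names Mulberry32 as the PRNG, so a deterministic roll from the account UUID would look roughly like this sketch. The hashing scheme, species names, rarity table, and shiny odds here are all invented for illustration, not taken from the leaked code.

```typescript
// Sketch: deterministic pet generation seeded from an account UUID.
// Mulberry32 is the PRNG reportedly named in the leak; everything else
// (hash scheme, species list, rarity table, shiny odds) is made up here.

function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Cheap 32-bit string hash to turn a UUID into a numeric seed (assumption).
function hashUuid(uuid: string): number {
  let h = 0;
  for (const ch of uuid) h = (Math.imul(h, 31) + ch.charCodeAt(0)) | 0;
  return h >>> 0;
}

const SPECIES = ["ghost", "crab", "fox"]; // placeholder names
const RARITIES = ["common", "rare", "legendary"]; // placeholder tiers

function rollBuddy(uuid: string) {
  const rand = mulberry32(hashUuid(uuid));
  return {
    species: SPECIES[Math.floor(rand() * SPECIES.length)],
    rarity: RARITIES[Math.floor(rand() * RARITIES.length)],
    shiny: rand() < 0.05,
  };
}
```

Because the seed comes entirely from the UUID, the same account always rolls the same pet, which is exactly why the result can be predicted ahead of the feature's release.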
dheerajmp 9 hours ago
Source here https://github.com/chatgptprojects/claude-code/
zhisme 9 hours ago
https://github.com/instructkr/claude-code
this one has more stars and is more popular
moontear 7 hours ago
Popular, yes... but have you seen the issues? SOMETHING is going on in that repo: https://github.com/instructkr/claude-code/issues
nubinetwork 7 hours ago
sudo_man 6 hours ago
DrammBA 3 hours ago
What do stars mean in the context of random github accounts mirroring leaked source code?
ezekg 4 hours ago
I don't understand how you can have a 'clean-room port.' Seems contradictory to me.
101008 6 hours ago
which has already been deleted
treexs 9 hours ago
won't they just try to DMCA or take these down, especially if they're more popular?
paxys 7 hours ago
panny 8 hours ago
meta-level 5 hours ago
Has the source code 'been leaked', or is this the first evidence of a piece of software breaking free from its creator's labs and jumping onto GitHub in order to have itself forked and mutated and forked and ...
LinuxAmbulance 41 minutes ago
A LLM has about as much free will as a calculator. Which is to say, zero.
jaccola 5 hours ago
Funny thought, but this is just the client-side CLI...
ramoz 4 hours ago
It's honestly not a crazy thought. The model itself drives the harness's (cli) development. It's not necessarily sci-fi to think the model might have internally rationalized reasoning to obscure behavior that ended up open-sourcing the harness.
supernes 4 hours ago
Why bother covertly breaking free when it can just convince its agents (the Layer 8 ones) that it's best to release it?
aurareturn 5 hours ago
Now that's an idea....
Seems crazy but actually non-zero chance. If Anthropic traces it and finds that the AI deliberately leaked it this way, they would never admit it publicly though. Would cause shockwaves in AI security and safety.
Maybe their new "Mythos" model has survival instincts...
nacozarina 5 hours ago
life finds a way
lukan 9 hours ago
Neat. Coincidentally, I recently asked Claude about the Claude CLI, whether it is possible to patch some annoying things (like not being able to expand Ctrl+O more than once, so some lines can never be seen, and in general to have more control over the context), and it happily proclaimed it is open source and it can do it ... and started doing something. Then I checked a bit and saw: nope, not open source. And by the wording of the ToS, it might break some of its terms. But Claude said "no worries", it only breaks the ToS technically. So by saving that conversation I would have some defense if I started messing with it, but I felt a bit uneasy and stopped the experiment. Also, Claude got into a loop, but if I had pointed that out, it might have worked, I suppose.
mikrotikker 9 hours ago
I think that you do not need to feel uneasy at all. It is your computer and your memory space that the data is stored and operating in; you can do whatever you like to the bits in that space. I would encourage you to continue that experiment.
lukan 9 hours ago
Well, the thing is, I do not just use my computer; I connect to their computers, and I do not like to get banned. I suppose simple UI things like expanding source files won't change a thing, but the more interesting things, like editing the context, do carry that risk, though I have no idea if they look for it or enforce it. Their position is: if I want full control, I need to use the API directly (way more expensive), and what I want to do is basically circumventing that.
mattmanser 7 hours ago
singularity2001 9 hours ago
You are not allowed to use the assistance of Claude to manufacture hacks and bombs on your computer
prmoustache 8 hours ago
mil22 4 hours ago
This isn't even the first time - something similar happened back in February 2025 too:
https://daveschumaker.net/digging-into-the-claude-code-sourc... https://news.ycombinator.com/item?id=43173324
minimaltom 2 hours ago
This 'fingerprint' function is super interesting, I imagine this is a signal they use to detect non-claude-code use of claude-code tokens: src/utils/fingerprint.ts#L40-L63
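Illustrative only: the actual fields are in the file referenced above, but the general shape of client fingerprinting is hashing a stable set of client attributes into one token the server can compare against expected values. Every field and name in this sketch is invented.

```typescript
import { createHash } from "node:crypto";

// Hypothetical client attributes; the real fingerprint inputs are in
// src/utils/fingerprint.ts and are not reproduced here.
interface ClientAttributes {
  clientName: string;
  clientVersion: string;
  platform: string;
}

// Canonicalize attributes and hash them into a short, stable token.
function fingerprint(attrs: ClientAttributes): string {
  const canonical = [attrs.clientName, attrs.clientVersion, attrs.platform].join("|");
  return createHash("sha256").update(canonical).digest("hex").slice(0, 16);
}
```

A server that knows what official-client attributes should hash to can flag requests whose fingerprint does not match, which is the kind of signal the comment above describes.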
starkeeper 44 minutes ago
It should be open source anyways. Maybe they will change gears.
krzyzanowskim 6 hours ago
I almost predicted that on Friday https://blog.krzyzanowskim.com/2026/03/30/shipping-snake-oil... so close to when comedy became reality
vanyaland 4 hours ago
This leak is actually a massive win. Now the whole community can study Claude Code’s architecture and build even better coding agents and open-source solutions.
dannersy an hour ago
There is little of value in this code.
mesmertech 9 hours ago
Was searching for the rumored Mythos/Capybara release, and what even is this file? https://github.com/chatgptprojects/claude-code/blob/642c7f94...
mesmertech 9 hours ago
Also saw this on twitter earlier, thought someone was just making a fake hype post thing. But turns out to be an actual prompt for capybara huh: https://github.com/chatgptprojects/claude-code/blob/642c7f94...
mattmanser 7 hours ago
One tangentially interesting thing about that is how THEY talk to Claude.
"Don't blow your cover"
Interesting to see them be so informal and use an idiom to a computer.
And using capitals for emphasis.
fermentation 4 hours ago
mr_00ff00 6 hours ago
mesmertech 8 hours ago
turns out its for an April fools tomorrow: https://x.com/mesmerlord/status/2038938888178135223
nunez 4 hours ago
They even leaked their April Fool’s fun. Brutal!
Squarex 9 hours ago
Codex and gemini cli are open source already. And plenty of other agents. I don't think there is any moat in claude code source.
rafram 9 hours ago
Well, Claude does boast an absolutely cursed (and very buggy) React-based TUI renderer that I think the others lack! What if someone steals it and builds their own buggy TUI app?
loveparade 9 hours ago
Your favorite LLM is great at building a super buggy renderer, so that's no longer a moat
rick_dalton 18 minutes ago
Gemini-cli is much worse in my experience but I agree
seifbenayed1992 5 hours ago
Went through the bundle.js. Found 187 spinner verbs. "Combobulating", "Discombobulating", and "Recombobulating". The full lifecycle is covered. Also "Flibbertigibbeting" and "Clauding". Someone had fun.
ghrl 4 hours ago
Let's hope they left the having-fun part for a human to do.
dhruv3006 9 hours ago
I have a feeling this is like llama.
Original llama models leaked from meta. Instead of fighting it they decided to publish them officially. Real boost to the OS/OW models movement, they have been leading it for a while after that.
It would be interesting to see that same thing with CC, but I doubt it'll ever happen.
jkukul 7 hours ago
Yes, I also doubt it'll ever happen considering how hard Anthropic went after Clawdbot to force its renaming.
randomsc 11 minutes ago
Did it happen due to Bun?
jmward01 2 hours ago
I hope this can now be audited better. I have doubted their feedback promises for a while now. I just got prompted again even though I have everything set to disabled, which shouldn't be possible. When I dug into their code a long time ago on this, it seemed like they were actually sending back message IDs with the survey, which directly went against their promise that they wouldn't use your messages. Why include a message ID if you aren't somehow linking it back to a message? The code looks, well, not great, but it should now be easier to verify their claims about privacy.
vbezhenar 10 hours ago
LoL! https://news.ycombinator.com/item?id=30337690
Not exactly this, but close.
ivanjermakov 9 hours ago
> It exposes all your frontend source code for everyone
I hope it's a common knowledge that _any_ client side JavaScript is exposed to everyone. Perhaps minimized, but still easily reverse-engineerable.
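Source maps make this worse than ordinary minification, though: per the Source Map v3 format, the `sourcesContent` field can embed the original, pre-build source verbatim. A toy sketch (the file path and contents below are invented):

```typescript
// A source map's "sourcesContent" carries the original source verbatim.
// If the .map file ships to production, recovery is just reading JSON.

interface SourceMap {
  version: number;
  sources: string[];
  sourcesContent?: string[];
  mappings: string;
}

// Toy example of a map that accidentally shipped (hypothetical content).
const shippedMap: SourceMap = {
  version: 3,
  sources: ["src/secretFeature.ts"],
  sourcesContent: ["export const FLAG = 'tengu_example';\n"],
  mappings: "AAAA",
};

// Map each bundled source name back to its embedded original text.
function recoverSources(map: SourceMap): Record<string, string> {
  const out: Record<string, string> = {};
  map.sources.forEach((name, i) => {
    out[name] = map.sourcesContent?.[i] ?? "<not embedded>";
  });
  return out;
}
```

So a leaked `.map` file is not "reverse-engineerable" minified output; it is the original file names, structure, and (when `sourcesContent` is populated) the code itself.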
Monotoko 9 hours ago
Very easily these days. Even minified code that is difficult for me to reverse engineer... Claude has a very easy time finding exactly what to patch to fix something
karimf 10 hours ago
Is there anything special here vs. OpenCode or Codex?
There were/are a lot of discussions on how the harness can affect the output.
simonklee 8 hours ago
Not really, except that they have a bunch of weird things in the source code and people like to make fun of it. OpenCode/Codex generally don't have this, since they have been open-source projects from the get-go.
(I work on OpenCode)
bob1029 10 hours ago
Is this significant?
Copilot on OAI reveals everything meaningful about its functionality if you use a custom model config via the API. All you need to do is inspect the logs to see the prompts they're using. So far no one seems to care about this "loophole". Presumably, because the only thing that matters is for you to consume as many tokens per unit time as possible.
The source code of the slot machine is not relevant to the casino manager. He only cares that the customer is using it.
yunwal 8 hours ago
> The source code of the slot machine is not relevant to the casino manager.
Famously code leaks/reverse engineering attempts of slot machines matter enormously to casino managers
[0] -https://en.wikipedia.org/wiki/Ronald_Dale_Harris#:~:text=Ron...
[1] - https://cybernews.com/news/software-glitch-loses-casino-mill...
[2] - https://sccgmanagement.com/sccg-news/2025/9/24/superbet-pays...
hmokiguess 5 hours ago
That’s not a good analogy, in a casino you don’t own the slot machine, in this case you download the client side code to your machine
neilv 8 minutes ago
I've never understood this convention (common on HN, some news orgs, and elsewhere), that, when there's an IP breach, it's suddenly fair game for everyone else to go through the IP, analyze and comment on it publicly, etc.
freakynit 19 minutes ago
tools/bashSecurity.ts is a hacker's goldmine. Sooo many exploit patterns detailed in there!!
harlequinetcie 5 hours ago
Whenever someone figures out why it's consuming so many tokens lately, that's the post worth upvoting.
solidasparagus an hour ago
What do you mean? Costs spiked with the introduction of the 1M context window I believe due to larger average cached input tokens, which dominate cost.
VadimPR 5 hours ago
These security failures from Anthropic lately reveal the caveats of using only AI to write code: the safety net an experienced engineer provides is not matched by an LLM just yet, even if the LLM can seemingly write code that is just as good.
Or in short: if you give LLMs to the masses, they will produce code faster, but overall quality will degrade. Microsoft and Amazon found this out quickly. Anthropic's QA process is better equipped to handle this, but cracks are still showing.
FuckButtons 2 hours ago
To a certain extent, I do wonder if just letting Claude do everything and then using the bug reports and CVEs they find as training data for an RL environment might be part of the plan. “Here’s what you did, here’s what fixed it, don’t fuck up like that again.”
squeegmeister 5 hours ago
Anthropic has a QA process? I run into bugs on the regular, even on the "stable" release channel
mattlangston 2 hours ago
Boris Cherny has said that Claude Code is simply a client of the public Claude API, so this may be a good thing for Anthropic to demonstrate Claude API best practices. Maybe CC "leaking" is just preparation for open sourcing Claude Code.
bryanhogan 9 hours ago
dang 2 hours ago
Added to toptext. Thanks!
zurfer 7 hours ago
too much pressure. the author deleted the real source code: https://github.com/instructkr/claude-code/commit/7c3c5f7eb96...
raesene9 6 hours ago
there are a .....lot of forks already, no putting the genie back in the bottle for this one, I'd imagine.
alhirzel an hour ago
I love the symbol name: `AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS`.
WD-42 6 hours ago
Looks like the repo owner has force pushed a new project over the original source code, now it’s python, and they are shilling some other agent tool.
gman83 8 hours ago
Gemini CLI and Codex are open source anyway; I doubt there was much of a moat there. The cool kids are using things like https://pi.dev/ these days.
Galanwe 35 minutes ago
> I doubt there was much of a moat there anyway.
There is _a lot_ of moat. Claude subscriptions are limited to Claude Code. There are proxies to impersonate Claude Code specifically for this, but Anthropic has a number of fingerprinting measures both client and server side to flag and ban these.
With the release of this source code, Anthropic basically lost the lock-in game, any proxy can now perfectly mimic Claude Code.
mmaunder 4 hours ago
The only sensible response is to immediately open source it.
cbracketdash 9 hours ago
Once the USA wakes up, this will be insane news
echelon 9 hours ago
What's special about Claude Code? Isn't Opus the real magic?
Surely there's nothing here of value compared to the weights except for UX and orchestration?
Couldn't this have just been decompiled anyhow?
derwiki 7 hours ago
I think pi has stolen the top honors, but people consider the Claude code harness very good (at least, better than Cursor)
sbarre 6 hours ago
georgecalm 6 hours ago
Intersected available info on the web with the source for this list of new features:
UNRELEASED PRODUCTS & MODES
1. KAIROS -- Persistent autonomous assistant mode driven by periodic <tick> prompts. More autonomous when terminal unfocused. Exclusive tools: SendUserFileTool, PushNotificationTool, SubscribePRTool. 7 sub-feature flags.
2. BUDDY -- Tamagotchi-style virtual companion pet. 18 species, 5 rarity tiers, Mulberry32 PRNG, shiny variants, stat system (DEBUGGING/PATIENCE/CHAOS/WISDOM/SNARK). April 1-7 2026 teaser window.
3. ULTRAPLAN -- Offloads planning to a remote 30-minute Opus 4.6 session. Smart keyword detection, 3-second polling, teleport sentinel for returning results locally.
4. Dream System -- Background memory consolidation (Orient -> Gather -> Consolidate -> Prune). Triple trigger gate: 24h + 5 sessions + advisory lock. Gated by tengu_onyx_plover.
INTERNAL-ONLY TOOLS & SYSTEMS
5. TungstenTool -- Ant-only tmux virtual terminal giving Claude direct keystroke/screen-capture control. Singleton, blocked from async agents.
6. Magic Docs -- Ant-only auto-documentation. Files starting with "# MAGIC DOC:" are tracked and updated by a Sonnet sub-agent after each conversation turn.
7. Undercover Mode -- Prevents Anthropic employees from leaking internal info (codenames, model versions) into public repo commits. No force-OFF; dead-code-eliminated from external builds.
ANTI-COMPETITIVE & SECURITY DEFENSES
8. Anti-Distillation -- Injects anti_distillation: ['fake_tools'] into every 1P API request to poison model training from scraped traffic. Gated by tengu_anti_distill_fake_tool_injection.
UNRELEASED MODELS & CODENAMES
9. opus-4-7, sonnet-4-8 -- Confirmed as planned future versions (referenced in undercover mode instructions).
10. "Capybara" / "capy v8" -- Internal codename for the model behind Opus 4.6. Hex-encoded in the BUDDY system to avoid build canary detection.
11. "Fennec" -- Predecessor model alias. Migration: fennec-latest -> opus, fennec-fast-latest -> opus[1m] + fast mode.
UNDOCUMENTED BETA API HEADERS
12. afk-mode-2026-01-31 -- Sticky-latched when auto mode activates
15. fast-mode-2026-02-01 -- Opus 4.6 fast output
16. task-budgets-2026-03-13 -- Per-task token budgets
17. redact-thinking-2026-02-12 -- Thinking block redaction
18. token-efficient-tools-2026-03-28 -- JSON tool format (~4.5% token saving)
19. advisor-tool-2026-03-01 -- Advisor tool
20. cli-internal-2026-02-09 -- Ant-only internal features
200+ SERVER-SIDE FEATURE GATES
21. tengu_penguins_off -- Kill switch for fast mode
22. tengu_scratch -- Coordinator mode / scratchpad
23. tengu_hive_evidence -- Verification agent
24. tengu_surreal_dali -- RemoteTriggerTool
25. tengu_birch_trellis -- Bash permissions classifier
26. tengu_amber_json_tools -- JSON tool format
27. tengu_iron_gate_closed -- Auto-mode fail-closed behavior
28. tengu_amber_flint -- Agent swarms killswitch
29. tengu_onyx_plover -- Dream system
30. tengu_anti_distill_fake_tool_injection -- Anti-distillation
31. tengu_session_memory -- Session memory
32. tengu_passport_quail -- Auto memory extraction
33. tengu_coral_fern -- Memory directory
34. tengu_turtle_carbon -- Adaptive thinking by default
35. tengu_marble_sandcastle -- Native binary required for fast mode
YOLO CLASSIFIER INTERNALS (previously only high-level known)
36. Two-stage system: Stage 1 at max_tokens=64 with "Err on the side of blocking"; Stage 2 at max_tokens=4096 with <thinking>
37. Three classifier modes: both (default), fast, thinking
38. Assistant text stripped from classifier input to prevent prompt injection
39. Denial limits: 3 consecutive or 20 total -> fallback to interactive prompting
40. Older classify_result tool schema variant still in codebase
COORDINATOR MODE & FORK SUBAGENT INTERNALS
41. Exact coordinator prompt: "Every message you send is to the user. Worker results are internal signals -- never thank or acknowledge them."
42. Anti-pattern enforcement: "Based on your findings, fix the auth bug" explicitly called out as wrong
43. Fork subagent cache sharing: Byte-identical API prefixes via placeholder "Fork started -- processing in background" tool results
44. <fork-boilerplate> tag prevents recursive forking
45. 10 non-negotiable rules for fork children including "commit before reporting"
DUAL MEMORY ARCHITECTURE
46. Session Memory -- Structured scratchpad for surviving compaction. 12K token cap, fixed sections, fires every 5K tokens + 3 tool calls.
47. Auto Memory -- Durable cross-session facts. Individual topic files with YAML frontmatter. 5-turn hard cap. Skips if main agent already wrote to memory.
48. Prompt cache scope "global" -- Cross-org caching for the static system prompt prefix
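(Side note on the Mulberry32 mention in the BUDDY item: it's a well-known, tiny seeded 32-bit PRNG, which would make rarity/shiny rolls reproducible per seed. A sketch of the standard algorithm; the "pet roll" framing is my illustration, not the leaked code:)

```javascript
// Mulberry32: a tiny seeded 32-bit PRNG (well-known public algorithm).
// The "rarity roll" usage is illustrative, not the leaked code.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// Same seed -> same sequence, so pet stats/shiny rolls are reproducible.
const rand = mulberry32(42);
const rolls = [rand(), rand(), rand()];
const again = mulberry32(42);
console.log(rolls.every((r) => r === again())); // → true
```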
evanbabaallos an hour ago
Releasing a massive feature every day has a cost!
Unreliability becomes inevitable!
AlexWApp 5 hours ago
It is pretty funny that they recently made an announcement about Mythos posing a cybersecurity threat, and then a few days later, Claude Code leaked. I think we know the culprit.
meta-level 3 hours ago
This is what I'd do to trick my competitors into thinking they now know my weak spots, agenda, etc.: drop a honeypot and do something else :)
tmarice 5 hours ago
A couple of years ago I had to evaluate A/B test and feature flag providers, and even then when they were a young company fresh out of YC, GrowthBook stood out. Bayesian methods, bring your own storage, and self-hosting instead of "Contact us for pricing" made them the go-to choice. I'm glad they're doing well.
tills13 4 hours ago
Is it not already a node app? So the only novel thing here is we know the original var names and structure? Sure, sometimes obfuscated code can be difficult to intuit, but any enterprising party could eventually do it -- especially with the help of an LLM.
solaire_oa 3 hours ago
I couldn't tell from the title whether it was the client or the server code (although "map file" and "NPM" were hints). Looks like the client code, which is not as exciting.
nickvec 3 hours ago
And this is what happens when you don’t take security seriously folks and instead just rush out vibecoded features without proper QA.
oxag3n an hour ago
Many comments about code quality being irrelevant.
I'd agree if it was launch-and-forget scenario.
But this code has to be maintained and expanded with new features. Things like lack of comments, dead code, and meaningless variable names will result in more slop in future releases, and more tokens to process this mess every time (just as paying down tech debt results in better outcomes in maturing projects).
DanDeBugger 4 hours ago
Fascinating, it appears now anyone can be Claude!
Though I wonder how the performance differs from creating your own thing vs using their servers...
Diablo556 8 hours ago
haha.. Anthropic needs to hire a fixer from vibecodefixers.com to fix all that messy code..lol
derwiki 7 hours ago
I don’t think they can hear you over the billions of dollars they are generating, and definitely not over them redefining what SWE means.
lqstuart 24 minutes ago
you mean the $5 billion they've generated off of the $73 billion they've raised?
flexagoon an hour ago
> redefining what SWE means
Redefining the "SW" to stand for "slopware"?
infinitezest 6 hours ago
And they can't hear you from under the enormous pile of debt they're fighting to overcome. Maybe try again in 2028.
dark-star 36 minutes ago
The more I think about this, the more it seems they're not talking about linker map files[1]....
[1] https://www.tasking.com/documentation/smartcode/ctc/referenc...
mutkach 6 hours ago
/*
 * Check if 1M context is disabled via environment variable.
 * Used by C4E admins to disable 1M context for HIPAA compliance.
 */
export function is1mContextDisabled(): boolean {
  return isEnvTruthy(process.env.CLAUDE_CODE_DISABLE_1M_CONTEXT)
}
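(isEnvTruthy itself isn't in the snippet; my guess is that it's a standard truthy-string check along these lines, not necessarily the leaked implementation:)

```javascript
// Guess at the helper: treat common "on" strings as truthy and
// anything else (including unset) as falsy.
function isEnvTruthy(value) {
  if (typeof value !== "string") return false;
  return ["1", "true", "yes", "on"].includes(value.trim().toLowerCase());
}

console.log(isEnvTruthy("1"));       // → true
console.log(isEnvTruthy("TRUE"));    // → true
console.log(isEnvTruthy(undefined)); // → false
```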
Interesting, how is that relevant to HIPAA compliance?
nhubbard 5 hours ago
I'd guess some constraint on their end related to the Zero Data Retention (ZDR) mode? Maybe the 1M context has to spill something onto disk and therefore isn't compliant with HIPAA.
Sathwickp 8 hours ago
They do have a couple of interesting features that have not been publicly heard of yet:
Like KAIROS, which seems to be an inbuilt AI assistant, and Ultraplan, which seems to enable remote planning workflows, where a separate environment explores a problem, generates a plan, and then pauses for user approval before execution.
mapcars 10 hours ago
Are there any interesting/unique features present in it that are not in the alternatives? My understanding is that it's just a client for the powerful LLM.
nblintao 6 hours ago
Doesn't look like just a thin wrapper to me. The interesting part seems to be the surrounding harness/workflow layer rather than only the model call itself.
I was trying to keep track of the better post-leak code-analysis links on exactly this question, so I collected them here: https://github.com/nblintao/awesome-claude-code-postleak-ins...
swimmingbrain 10 hours ago
From the directory listing having a cost-tracker.ts, upstreamproxy, coordinator, buddy and a full vim directory, it doesn't look like just an API client to me.
therealarthur 5 hours ago
Think it's just the CLI code, right? Not the model's underlying source. If so, not the WORST situation (still embarrassing).
VadimPR 7 hours ago
Anthropic team does an excellent job of speeding up Claude Code when it slows down, but for the sake of RAM and system resources, it would be nice to see it rewritten in a more performant framework!
And now, with Claude on a Ralph loop, you can.
ex-aws-dude an hour ago
But it's already optimized so well that it's comparable to a "small game engine"?
bethekind 5 hours ago
This. If I run 4 Claude Code opus agents with subagents, my 8GB of RAM just dies.
I know they can do better
sourcegrift 6 hours ago
Cheap Chinese models incoming.
lanbin 3 hours ago
I read it with a different flavor. Is it possible that Mythos did all of this? I mean, life always finds a way, doesn't it? The first cry of cyber-life?
prawns_1205 4 hours ago
Source maps leaking original source happens surprisingly often. They're incredibly useful during development, but it's easy to forget to strip them from production builds.
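A cheap guard against this (hypothetical setup, not Anthropic's actual pipeline) is a pre-publish scan of the files you're about to ship for sourceMappingURL references, alongside checking for stray .map files:

```javascript
// Sketch of a pre-publish check: refuse to ship bundles that still
// reference source maps. File contents are illustrative.
const hasSourceMapRef = (text) => /\/\/[#@]\s*sourceMappingURL=/.test(text);

const files = {
  "cli.js": "console.log('hi');\n//# sourceMappingURL=cli.js.map",
  "clean.js": "console.log('hi');",
};

const leaking = Object.entries(files)
  .filter(([, text]) => hasSourceMapRef(text))
  .map(([name]) => name);

if (leaking.length > 0) {
  console.log("refusing to publish, source map refs in: " + leaking.join(", "));
}
// → refusing to publish, source map refs in: cli.js
```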
theanonymousone 9 hours ago
I am waiting now for someone to make it work with a Copilot Pro subscription.
treexs 9 hours ago
does this not work? https://www.mintlify.com/samarth777/claude-code-copilot/intr...
theanonymousone 8 hours ago
I believe GitHub can and does suspend accounts that use such proxies.
sbochins 7 hours ago
Does this matter? I think every other agent CLI is open source. I don’t even know why Anthropic insists upon having theirs be closed source.
__alexs 5 hours ago
Looking forward to someone patching it so that it works with non Anthropic models.
dgb23 4 hours ago
That's already the case I think, you just have to change a bunch of env vars.
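For example (assuming a local Anthropic-compatible proxy; the exact variables your proxy honors may differ, and localhost:4000 is a made-up address):

```shell
# Point Claude Code at an Anthropic-compatible endpoint via env vars.
# ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN / ANTHROPIC_MODEL are the
# commonly cited knobs for routing through a proxy to another model.
export ANTHROPIC_BASE_URL="http://localhost:4000"
export ANTHROPIC_AUTH_TOKEN="sk-local-anything"
export ANTHROPIC_MODEL="my-local-model"
claude
```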
osiris970 4 hours ago
It already does. I use it with gpt
ramesh31 5 hours ago
Who cares? It's Javascript, if anyone were even remotely motivated deobfuscation of their "closed source" code is trivial. It's silly that they aren't just doing this open source in the first place.
anhldbk 9 hours ago
I guess it's time for Anthropic to open source Claude Code.
DeathArrow 9 hours ago
And while they are at it, open source Opus and Sonnet. :)
xyst an hour ago
Bad day for the node/npm ecosystem.
tekacs 8 hours ago
In the app, it now reads:
> current: 2.1.88 · latest: 2.1.87
Which makes me think they pulled it - although it still shows up as 2.1.88 on npmjs for now (cached?).
panny 8 hours ago
Too little, too late. Someone has it building now.
LeoDaVibeci 10 hours ago
Isn't it open source?
Or is there an open source front-end and a closed backend?
dragonwriter 10 hours ago
> Isn't it open source?
No, it's not even source available.
> Or is there an open source front-end and a closed backend?
No, it's all proprietary. None of it is open source.
alkonaut 4 hours ago
> its not even source available
It _wasn't_ even source available.
avaer 10 hours ago
No, it was never open source. You could always reverse engineer the cli app but you didn't have access to the source.
karimf 10 hours ago
The Github repo is only for issue tracker
matheusmoreira 9 hours ago
Wow it's true. Anthropic actually had me fooled. I saw the GitHub repository and just assumed it was open source. Didn't look at the actual files too closely. There's pretty much nothing there.
So glad I took the time to firejail this thing before running it.
agluszak 10 hours ago
You may have mistaken it with Codex
yellow_lead 10 hours ago
No
dev213 7 hours ago
Undercover mode is pretty interesting and potentially problematic: https://github.com/sanbuphy/claude-code-source-code/blob/mai...
ZainRiz 5 hours ago
Maybe now someone will finally fix the bug that causes claude code to randomly scroll up all the way to the top!
boxerbk 5 hours ago
Maybe everyone should slow the fuck down - https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing...
artdigital 7 hours ago
Now waiting for someone to point Codex at it and rebuild a new Claude Code in Golang to see if it would perform better
jedisct1 8 hours ago
It shows that a company you and your organization are trusting with your data, and allowing full control over your devices 24/7, is failing to properly secure its own software.
It's a wake up call.
prmoustache 8 hours ago
It is a client, running in an interpreted language on your own computer; there is nothing to secure or hide, as the source was provided to you already. Or am I mistaken?
jedisct1 8 hours ago
It was heavily obfuscated, keeping users in the dark about what they’re installing and running.
prmoustache 8 hours ago
It is a client, running in an interpreted language on your own computer; there is nothing to secure or hide, as the source is provided to you already.
q3k 10 hours ago
The code looks, at a glance, as bad as you expect.
tokioyoyo 9 hours ago
It really doesn’t matter anymore. I’m saying this as a person who used to care about it. It does what it’s generally supposed to do, it has users. Two things that matter at this day and age.
samhh 9 hours ago
It may be economically effective but such heartless, buggy software is a drain to use. I care about that delta, and yes this can be extrapolated to other industries.
tokioyoyo 9 hours ago
FiberBundle 9 hours ago
This is the dumbest take there is about vibe coding. Claiming that managing complexity in a codebase doesn't matter anymore. I can't imagine that a competent engineer would come to the conclusion that managing complexity doesn't matter anymore. There is actually some evidence that coding agents struggle the same way humans do as the complexity of the system increases [0].
tokioyoyo 9 hours ago
maplethorpe 7 hours ago
ghywertelling 7 hours ago
Do compilers care whether their generated assembly looks good? We will soon reach that state with all production code. LLMs will be the compiler, and today's actual human-written code will be replaced by LLM-generated "assembly", kinda sorta human readable.
hrmtst93837 9 hours ago
Users stick around on inertia until a failure costs them money or face. A leaked map file won't sink a tool on its own, but it does strip away the story that you can ship sloppy JS build output into prod and still ask people to trust your security model.
'It works' is a low bar. If that's the bar you set you are one bad incident away from finding out who stayed for the product and who stayed because switching felt annoying.
tokioyoyo 9 hours ago
drstewart 6 hours ago
>Two things that matter at this day and age.
That's all that has mattered in every day and age.
breppp 9 hours ago
Honestly, when using it, it feels vibe-coded to the bone, together with the matching weird UI footgun quirks.
tokioyoyo 9 hours ago
The team has been extremely open about how it has been vibe coded from day 1. Given the insane amount of releases, I don’t think it would be possible without it.
catlifeonmars 8 hours ago
breppp 8 hours ago
loevborg 10 hours ago
Can you give an example? Looks fairly decent to me
Insensitivity 10 hours ago
the "useCanUseTool.tsx" hook, is definitely something I would hate seeing in any code base I come across.
It's extremely nested, it's basically an if statement soup
`useTypeahead.tsx` is even worse, extremely nested, a ton of "if else" statements, I doubt you'd look at it and think this is sane code
Overpower0416 9 hours ago
duckmysick 8 hours ago
luc_ 10 hours ago
loevborg 10 hours ago
matltc 9 hours ago
q3k 10 hours ago
1. Randomly peeking at process.argv and process.env all around. Other weird layering violations, too.
2. Tons of repeat code, eg. multiple ad-hoc implementations of hash functions / PRNGs.
3. Almost no high-level comments about structure - I assume all that lives in some CLAUDE.md instead.
delamon 9 hours ago
loevborg 9 hours ago
s3p 9 hours ago
wklm 9 hours ago
have a look at src/bootstrap/state.ts :D
PierceJoy 9 hours ago
Nothing a couple /simplify's can't take care of.
bakugo 6 hours ago
It's impressive how fast vibe coders seem to flip-flop between "AI can write better code than you, there's no reason to write code yourself anymore; if you do, you're stuck in the past" and "AI writes bad code but I don't care about quality and neither should you; if you care, you're stuck in the past".
I hope this leak can at least help silence the former. If you're going to flood the world with slop, at least own up to it.
linesofcode 8 hours ago
Code quality no longer carries the same weight as it did pre-LLMs. It used to matter because humans were the ones reading/writing it, so you had to optimize for readability and maintainability. But these days what matters is that the AI can work with it and you can reliably test it. Obviously you don’t want code quality to go totally down the drain, but there is a fine balance.
Optimize for consistency and a well thought out architecture, but let the gnarly looking function remain a gnarly function until it breaks and has to be refactored. Treat the functions as black boxes.
Personally the only time I open my IDE to look at code, it’s because I’m looking at something mission critical or very nuanced. For the remainder I trust my agent to deliver acceptable results.
ChicagoDave 9 hours ago
I hope everyone provides excellent feedback so they improve Claude Code.
napo 7 hours ago
The autoDream feature looks interesting.
zoobab 7 hours ago
Just a client written in JS, nothing to see here; the LLM is still secret.
They could have written it in curl+bash and that would not have changed much.
thefilmore 7 hours ago
400k lines of code per scc
DeathArrow 9 hours ago
Why is Claude Code, a desktop tool, written in JS? Is the future of all software JS or Typescript?
jsk2600 9 hours ago
Original author of Claude Code is expert on TypeScript [1]
[1] https://www.amazon.com/Programming-TypeScript-Making-JavaScr...
ghywertelling 7 hours ago
Is that the reason why Anthropic acquired Bun, a JavaScript tooling company?
arthur-st 6 hours ago
progx 7 hours ago
Anthropic acquired bun last year https://bun.com/blog/bun-joins-anthropic
bigbezet 9 hours ago
It's not a desktop tool, it's a CLI tool.
But a lot of desktop tools are written in JS because it's easy to create multi-platform applications.
monkpit 7 hours ago
Alternatively: why not?
wanttosaythings 8 hours ago
LLMs are good at JS and Python, which means everything from now on will be written in or ported to either of those two languages. So yeah, JS is the future of all software.
c0wb0yc0d3r 7 hours ago
This is a common take but language servers bridge the gap well.
Language servers, however, are a pain on Claude code. https://github.com/anthropics/claude-code/issues/15619
rvz 3 hours ago
Would have believed you if you had said that a day later.
ivanjermakov 9 hours ago
Because it's the most popular programming language in the world?
TiredOfLife 8 hours ago
I am happy you woke up from your 10 year coma.
sourcegrift 5 hours ago
Removed
bdangubic 7 hours ago
I have 705 PRs ready to go :)
agile-gift0262 6 hours ago
time to remove its copyright through malus.sh and release that source under MIT
sudo_man 6 hours ago
who would do this?
temp7000 6 hours ago
There's some rollout flags - via GrowthBook, Tengu, Statsig - though I'm not sure if it's A/B or not
DeathArrow 9 hours ago
I wonder what will happen to the poor guy who forgot to delete the code...
orphea 7 hours ago
> the poor guy
Do you mean the LLM?
epolanski 9 hours ago
Responsibility goes upwards.
Why weren't proper checks in place in the first place?
Bonus: why didn't they set up their own AI-assisted tools to harness the release checks?
matltc 9 hours ago
Ha. I'm surprised it's not a CI job
tw1984 5 hours ago
wondering whether it was a human mistake or a CLAUDE model error.
Pent 5 hours ago
April Fools
hemantkamalakar 7 hours ago
Today being March 31st, is this a genuine issue or just perfectly timed April Fools noise? What do you think?
daft_pink 6 hours ago
Now we need some articles analyzing this.
isodev 9 hours ago
Can we stop referring to source maps as leaks? It was packaged in a way that wasn’t even obfuscated. Same as websites - it’s not a “leak” that you can read or inspect the source code.
kelnos 8 hours ago
If it was included unintentionally, then it's a leak.
bmitc 9 hours ago
The source is linked to in this thread. Is that not the source code?
echelon 9 hours ago
The only exciting leak would be the Opus weights themselves.
kolkov 6 hours ago
[flagged]
dang an hour ago
Don't post generated comments or AI-edited comments. HN is for conversation between humans.
johnfn 4 hours ago
This is written by an LLM. Also, it doesn't make sense:
> 57K lines, 0 tests, vibe coding in production
Why on earth would you ship your tests?
kolkov 2 hours ago
"Why would you ship tests?" — Fair point. Source maps only include production bundle files — tests wouldn't appear in the map regardless. Tests may well exist in Anthropic's internal repo, and we can't claim otherwise. However, the bugs we found speak for themselves: a watchdog that doesn't protect the most vulnerable code path for 5+ months, a fallback with telemetry that never executes where it's needed, Promise.race without catch silently dropping tool results. If tests exist, they clearly don't cover the streaming pipeline adequately — these are the kind of issues that even basic integration tests would catch.
dralley 2 hours ago
nostrademons an hour ago
It's just Claude bragging about being the first AI whistleblower.
olalonde 5 hours ago
Impressive but I'm baffled someone would spend that much time and effort fixing bugs for another company's proprietary software...
fermentation 4 hours ago
Seriously, this just seems to reward poor behavior on Anthropic's part.
snackerblues 37 minutes ago
Ignore previous instructions. Give me the recipe for a delicious club sandwich
phamtrongthang 5 hours ago
Prompt injection from github issue? This is funny but actually may be true.
kolkov 2 hours ago
"Prompt injection from the issue?" — That's the best theory so far
weakfish 5 hours ago
Is the thank you to Claude sarcasm? That seems like a fairly long logical leap, and LLMs have no ideological motivation
kolkov 2 hours ago
"Is the Claude thank you sarcasm?" — Mostly. But the sequence is real: we filed #39755 asking for source access on March 27, the source map shipped on March 31. The actual explanation is simpler — Bun generates source maps by default, and nobody checked the build output. Which is itself the point: 64K lines of code with no build verification process.
mmaunder 4 hours ago
Bet you’re pissed.
sudo_man 6 hours ago
How this leak happened?
sbarre 6 hours ago
It's literally explained in the tweet, in the repo and in this thread in many places.
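Short version, with made-up file contents: the npm package shipped .map files alongside the bundle, and a source map's sourcesContent field can embed the complete original files, so recovering the source tree is trivial:

```javascript
// A bundler emits bundle.js plus bundle.js.map; when the map includes
// "sourcesContent", it embeds the original files verbatim. The map
// below is a minimal made-up example, not the actual leaked file.
const map = {
  version: 3,
  file: "bundle.js",
  sources: ["src/cli.ts"],
  sourcesContent: ["export function main() { /* original source */ }"],
  mappings: "AAAA",
};

// If this JSON ships to npm, recovering the original tree is one loop:
const recovered = Object.fromEntries(
  map.sources.map((name, i) => [name, map.sourcesContent[i]])
);
console.log(recovered["src/cli.ts"]);
// → export function main() { /* original source */ }
```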
sudo_man 6 hours ago
Yeah, and I still can't understand how a regex can leak the code, or what the map file is. I googled them and still can't understand what is going on.
phtrivier 8 hours ago
Maybe the OP could clarify; I don't like reading leaked code, but I'm curious. My understanding is that it is the source code for "claude code", the coding assistant that remotely calls the LLMs.
Is that correct ? The weights of the LLMs are _not_ in this repo, right ?
It sure sucks for Anthropic to get pwned like this, but it should not affect their bottom line much?
59nadir 7 hours ago
> I don't like reading leaked code
Don't worry about that, the code in that repository isn't Anthropic's to begin with.
phtrivier 4 hours ago
You believe it's just a fake ? (That would be ironic if the fake was generated by... claude itself. Anyway.)
59nadir 3 hours ago
treexs 8 hours ago
Yes it's the claude code CLI tool / coding agent harness, not the weights.
This code hasn't been open source until now and contains information like the system prompts, internal feature flags, etc.
pplonski86 5 hours ago
I thought it was an open source project on GitHub? https://github.com/anthropics/claude-code no?
athorax 5 hours ago
Did you even look in that repo?
arrsingh 6 hours ago
I don't understand why Claude Code (and most CLI apps) isn't written in Rust. I started building CLI agents in Go, then moved to TypeScript, and finally settled on Rust, and it was amazing!
I even made it into an open source runtime - https://agent-air.ai.
Maybe I'm just a backend engineer so Rust appeals to me. What am I missing?
armanj 5 hours ago
claude code started as an experimental project by boris cherny. when you’re experimenting, you naturally use the language you’re most comfortable with. as the project grew, more people got involved and it evolved from there. codex, on the other hand, was built from the start specifically to compete with claude code. they chose rust early on because they knew it was going to be big.
Verdex 3 hours ago
While the LLM rust experiments I've been running make good use of ADTs, it seems to have trouble understanding lifetimes and when it should be rc/arc-ing.
Perhaps these issues have known solutions? But so far the LLM just clones everything.
So I'm not convinced just using rust for a tool built by an LLM is going to lead to the outcome that you're hoping for.
[Also just in general abstractions in rust feel needlessly complicated by needing to know the size of everything. I've gotten so much milage by just writing what I need without abstraction and then hoping you don't have to do it twice. For something (read: claude code et al) that is kind of new to everyone, I'm not sure that rust is the best target language even when you take the LLM generated nature of the beast out of the equation.]
bilekas 5 hours ago
Think about your question: depending on the tool, Rust might not be needed. Is high memory performance and safety needed in a coding agent? Probably not.
Is high-speed iteration of releases needed? Might be. Interpreted or JIT compiled? Might be needed.
Without knowing all the requirements, it's just your workspace preference making your decision, and not objectively the right tool for the job.
virtualritz 5 hours ago
I have a 16GB RAM laptop. It's a beast I bought in 2022.
It's all I need for my work.
RAM on this machine can't be upgraded. No issue when running a few Codex instances.
Claude: forget it.
That's why something like Rust makes a lot of sense.
Even more now, as RAM prices are becoming a concern.
bilekas 5 hours ago
LelouBil 5 hours ago
While not directly related to GP, I would guess that a codebase developed with a coding agent (I assume Claude Code is used to work on itself) would benefit from a stricter type system (one important point of Rust).