Hardening Firefox with Anthropic's Red Team (anthropic.com)

514 points by todsacerdoti 16 hours ago

tabbott 8 hours ago

I recommend that anyone responsible for the security of an open-source software project ask Claude Code to do a security audit of it. I imagine that might not work that well for Firefox without a lot of care, because it's a huge project.

But for most other projects, it probably only costs $3 worth of tokens. So you should assume the bad guys have already done it to your project looking for things they can exploit, and it no longer feels responsible to not have done such an audit yourself.

Something that I found useful when doing such audits for Zulip's key codebases is to ask the model to carefully self-review each finding; that removed the majority of the false positives. Most of the rest we addressed by adding comments that would help developers (or a model) casually reading the code understand what the intended security model is for that code path... And indeed, most of those did not show up on a second audit done afterwards.

SV_BubbleTime 2 hours ago

This is exactly how I would not recommend AI to be used.

“Do a thing that would take me a week” cannot actually be done in seconds. It will provide results that resemble reality superficially.

If you were to pass some module in and ask for finite checks on that, maybe.

Despite the claims of agents… treat it more like an intern and you won’t be disappointed.

Would you ask an intern to “do a security audit” of an entire massive program?

creatonez 2 hours ago

IMO the key behavior is that LLMs are really good at fuzz testing, because they are probabilistic monkeys on typewriters that are much more code-aware than a conventional fuzz tester. They cannot produce a comprehensive security audit or fix security issues in a reliable way without human oversight, but they sure can come up with dumb inputs that break the code.

The results of such AI fuzz testing should be treated as just a science experiment and not a replacement for the entire job of a security researcher.

Like conventional fuzz testing, you get the best results if you have a harness to guide it towards interesting behaviors, a good scientific filtering process to confirm something is really going wrong, a way to reduce it to a minimal test case suitable for inclusion in a test suite, and plenty of human followup to narrow in on what's going on and figure out what correctness even means in the particular domain the software is made for.
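As a toy sketch of what the filtering-and-reduction half can look like (Python; the target binary and a candidates.txt of model-proposed inputs are assumptions, not anything from the article):

    import subprocess

    def crashes(data: bytes, target: str = "./target_bin") -> bool:
        # a negative return code means the process died on a signal (SIGSEGV, SIGABRT, ...)
        try:
            proc = subprocess.run([target], input=data, capture_output=True, timeout=10)
        except subprocess.TimeoutExpired:
            return False
        return proc.returncode < 0

    def minimize(data: bytes) -> bytes:
        # greedy reduction: keep dropping chunks while the crash still reproduces
        chunk = len(data) // 2
        while chunk > 0:
            i = 0
            while i + chunk <= len(data):
                trimmed = data[:i] + data[i + chunk:]
                if trimmed and crashes(trimmed):
                    data = trimmed      # smaller input still crashes; keep it
                else:
                    i += chunk          # this chunk is load-bearing; move on
            chunk //= 2
        return data

    # one LLM-proposed input per line; generation itself is out of scope here
    for line in open("candidates.txt", "rb").read().splitlines():
        if crashes(line):
            print("reproduced; minimized to:", minimize(line))

The interesting work is all on the generation and harness side; the confirmation side really can be this mundane.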

padolsey 2 hours ago

My approach is "you may as well": hammer Claude and get it to brute-force-investigate your codebase. Worst case, you learn nothing and get a bunch of false-positive nonsense; best case, you get new visibility into issues. Of _course_ you should be doing your own in-depth audits, but the plain fact is that people do not have time, or do not care sufficiently. But you can set up a battery of agents to do this work for you. So... why not?

eli 27 minutes ago

It depends whether anyone was ever actually going to spend that week doing it the "hard" way. Having Claude do it in a few minutes beats doing nothing.

Put another way: I absolutely would have an intern work on a security audit. I would not have an intern replace a professional audit though.

It's otherwise a pretty low stakes use. I'd expect false positives to be pretty obvious to someone maintaining the code.

Analemma_ 7 hours ago

I'm curious: has someone done a lengthy write-up of best practices to get good results out of AI security audits? It seems like it can go very well (as it did here) or be totally useless (all the AI slop submitted to HackerOne), and I assume the difference comes down to the quality of your context engineering and testing harnesses.

This post did a little bit of that but I wish it had gone into more detail.

j-conn 4 hours ago

OpenAI just released “codex security”, worth trying (along with other suggestions) if your org has access https://openai.com/index/codex-security-now-in-research-prev...

simonw 6 hours ago

The HackerOne slop is because there's a financial incentive (bug bounties) involved, which means people who don't know what they are doing blindly submit anything that an LLM spots for them.

If you're running the security audit yourself you should be in a better position to understand and then confirm the issues that the coding agents highlight. Don't treat something as a security issue until you can confirm that it is indeed a vulnerability. Coding agents can help you put that together but shouldn't be treated as infallible oracles.

lmeyerov 7 hours ago

We split our work:

* Specification extraction. We have security.md and policy.md, often per module: threat model, mechanisms, etc. This is collaborative and gets checked in, for ourselves and the AI. Policy is often tricky & malleable product/business/UX decision stuff, while security is the technical layers, more independent of that, or the broader threat model.

* Bug mining. It is driven by the above. It is iterative: we keep running it to surface findings, adversarially analyze them, and prioritize them, repeating until diminishing returns wrt priority levels. This likely leads to policy & security spec refinements. We use this pattern not just for security, but for general bugs and other iterative quality & performance improvement flows - it's just a simple skill file with tweaks like parallel subagents to make it fast and reliable.

This lets the AI drive itself more easily, and in ways you explicitly care about, rather than generating noise.

ares623 7 hours ago

No mention of the quality of the engineers reviewing the result?

gzoo 2 hours ago

This resonates. I just open-sourced a project, and someone on Reddit ran a full security audit using Claude and found 15 issues across the codebase, including FTS injection, LIKE wildcard injection, missing API auth, and privacy enforcement gaps I'd missed entirely. What surprised me was how methodical it was. Not just "this looks unsafe": it categorized by severity, cited exact file paths and line numbers, and identified gaps between what the docs promised and what the code actually implemented. The "spec vs. reality" analysis was the most useful part.

Makes me think the biggest impact of LLM security auditing isn't finding novel zero-days; it's the mundane stuff that humans skip because it's tedious. Checking every error handler for information leakage, verifying that every documented security feature is actually implemented, scanning for injection points across hundreds of routes. That's exactly the kind of work that benefits from tireless pattern matching.

mmsc 14 hours ago

It's cool that Mozilla updated https://www.mozilla.org/en-US/security/advisories/mfsa2026-1... because we were all wondering who had found 22 vulnerabilities in a single release (their findings were originally not attributed to anybody).

himata4113 5 hours ago

Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free Use After Free.

I would be more satisfied if they gave a proper explanation of what these could have led to, rather than "well, maybe a 0.001% chance to exploit this". They did vaguely go over how "two" exploits managed to drop a file, but how impactful is that? Dropping a file with custom contents in some folder relative to the user profile is not that impactful, other than corrupting data, poisoning a cache, or injecting some JavaScript. Now, reading session data from other sites, that I would find interesting.

mccr8 26 minutes ago

You should generally assume that in a web browser any memory corruption bug can, when combined with enough other bugs and a lot of clever engineering, be turned into arbitrary code execution on your computer.

hedora 3 hours ago

If you can poison cache, you can probably use that as a stepping stone to read session data from other sites.

dmix 5 hours ago

Looks like a lot of the usual suspects

fcpk 15 hours ago

The fact that there is no mention of what the bugs were is a little odd. It'd really be nice to see whether these are "weird, never-happening edge cases" or actual issues. LLMs have an uncanny ability to identify failure patterns they have seen before, but those are not necessarily meaningful.

larodi 14 hours ago

The fact that some of the Claude-discovered bugs were quite severe is also a little more than something to brush off as "yeah, LLM, whatever". The list reads as quite meaningful to me, but I'm not a security expert anyway.

jandem 15 hours ago

Here's a write-up for one of the bugs they found: https://red.anthropic.com/2026/exploit/

deafpolygon 15 hours ago

I’m guessing it might be some of these: https://www.mozilla.org/en-US/security/advisories/mfsa2026-1...

muizelaar 15 hours ago

Yeah, the ones reported by Evyatar Ben Asher et al.

pjmlp 15 hours ago

Indeed; without that, it looks like a fluffy marketing piece.

tptacek 12 hours ago

And now that you know that it isn't, do you feel differently about the logic you used to write this comment?

152334H 3 hours ago

Impressive work. Few understand the absurd complexity implied by a browser pwn problem. Even the 'gruntwork' of promoting the most conveniently contrived UAF to wasm shellcode would take me days to work through manually.

The AI Cyber capabilities race still feels asleep/cold, at the moment. I think this state of affairs doesn't last through to the end of the year.

> When we say “Claude exploited this bug,” we really do mean that we just gave Claude a virtual machine and a task verifier, and asked it to create an exploit.

I've been doing this too! kctf-eval works very well for me, albeit with much less than 350 chances ...

> What’s quite interesting here is that the agent never “thinks” about creating this write primitive. The first test after noting “THIS IS MY READ PRIMITIVE!” included both the `struct.get` read and the `struct.set` write.

And this bit is a bit scary. I can read all the (summarized) CoT I want, but it's never quite clear to me what a model understands/feels innately, versus pure cheerleading for the sake of some unknown soft reward.

staticassertion 15 hours ago

I've had mixed results. I find that agents can be great for:

1. Producing new tests to increase coverage. Migrating you to property testing. Setting up fuzzing. Setting up more static analysis tooling. All of that would normally take "time" but now it's a background task.

2. They can find some vulnerabilities. They are "okay" at this, but if you are willing to burn tokens then it's fine.

3. They are absolutely wrong sometimes about something being safe. I have had Claude very explicitly state that a security boundary existed when it didn't. That is, it appeared to exist in the same way that a chroot appears to confine, and it was intended to be a security boundary, but it was not a sufficient boundary whatsoever (see the sketch after this list for the flavor of thing I mean). Multiple models not only identified the boundary and stated it exists, but referred to it as "extremely safe" or other such things. This has happened to me a number of times, and it required a lot of nudging for them to see the problems.

4. They often seem to do better with "local" bugs. Often something that has the very obvious pattern of an unsafe thing. Sort of like "that's a pointer deref" or "that's an array access" or "that's `unsafe {}`", etc. They do far, far worse the less "local" a vulnerability is. Product features that interact in unsafe ways when combined, that's something I have yet to have an AI pick up on. This is unsurprising: if we trivialize agents as "pattern matchers", then spotting some unsafe patterns and validating the known properties of those patterns is not so surprising, but "your product has multiple completely unrelated features, bugs, and deployment properties, which all combine into a vulnerability" is not something they'll notice easily.
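To make (3) concrete, the kind of "boundary" I mean looks something like this; a toy Python sketch, not the actual code I was auditing:

    import os

    BASE = "/srv/app/userdata"

    def resolve_naive(user_path: str) -> str:
        # looks like a confinement boundary; models happily call this sort of thing "safe"
        full = os.path.join(BASE, user_path)
        if not full.startswith(BASE):
            raise ValueError("path escape")
        return full

    # sails straight through the check, then resolves outside BASE:
    print(resolve_naive("../userdata-backup/secrets"))

    def resolve_better(user_path: str) -> str:
        # normalize first, then compare against the directory as a path, not a string prefix
        full = os.path.realpath(os.path.join(BASE, user_path))
        if os.path.commonpath([full, BASE]) != BASE:
            raise ValueError("path escape")
        return full

The check exists and was clearly intended as the boundary; it just doesn't confine anything.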

It's important to remain skeptical of safety claims by models. Finding vulns is huge, but you need to be able to spot the mistakes.

mozdeco 14 hours ago

[work at Mozilla]

I agree that LLMs are sometimes wrong, which is why this new method here is so valuable - it provides us with easily verifiable testcases rather than just some kind of analysis that could be right or wrong. Purely triaging through vulnerability reports that are static (i.e. no actual PoC) is very time consuming and false-positive prone (same issue with pure static analysis).

I can't really confirm the part about "local" bugs anymore, though; that might also be a model thing. When I ran experiments a while back, this was certainly true, especially for the "one-shot" approaches where you basically prompt it once with source code and want some analysis back. But this changed with agentic SDKs, where more context can be pulled together automatically.

kwanbix 5 hours ago

Please, implement "name window" natively in Firefox.

I have to use Chrome because of the lack of it.

nitwit005 7 hours ago

I've seen fairly poor results from people asking AI agents to fill in coverage holes. Too many tests that either don't make sense, or add coverage without meaningfully testing anything.

If you're already at a very high coverage, the remaining bits are presumably just inherently difficult.

rithdmc 13 hours ago

Security has had pattern matching in traditional static analysis for a while. It wasn't great.

I've personally used two AI-first static analysis security tools and found great results, including interesting business logic issues, across my employer's SaaS tech stack. We integrated one of the tools. I look forward to getting employer approval to say which, but that hasn't happened yet, sadly.

StilesCrisis 10 hours ago

This description is also pretty accurate for a lot of real-world SWEs, too. Local bugs are just easier to spot. Imperfect security boundaries often seem sufficient at first glance.

delaminator 7 hours ago

But you're not a member of Anthropic's Red Team, with access to a specialist version of Claude.

stuxf 15 hours ago

It's interesting that they counted these as security vulnerabilities (from the linked Anthropic article)

> “Crude” is an important caveat here. The exploits Claude wrote only worked on our testing environment, which intentionally removed some of the security features found in modern browsers. This includes, most importantly, the sandbox, the purpose of which is to reduce the impact of these types of vulnerabilities. Thus, Firefox’s “defense in depth” would have been effective at mitigating these particular exploits.

kingkilr 15 hours ago

[Work at Anthropic, used to work at Mozilla.]

Firefox has never required a full chain exploit in order to consider something a vulnerability. A large proportion of disclosed Firefox vulnerabilities are vulnerabilities in the sandboxed process.

If you look at Firefox's Security Severity Rating doc: https://wiki.mozilla.org/Security_Severity_Ratings/Client what you'll see is that vulnerabilities within the sandbox, and sandbox escapes, are both independently considered vulnerabilities. Chrome considers vulnerabilities in a similar manner.

stuxf 15 hours ago

Makes sense, thank you!

bell-cot 14 hours ago

If only this attitude was more common. All security is, ultimately, multi-ply Swiss cheese and unknown unknowns. In that environment, patching holes in your cheese layers is a critical part of statistical quality control.

lostmsu 6 hours ago

Semi-on topic. When will Anthropic make decisions on Claude Max for OSS maintainers? I would like to run this on my projects and some of my high-profile dependencies, but there was no update on the application.

halJordan 11 hours ago

I don't think it's appropriate to neg these vulnerabilities because another part of the system works. There are plenty of sandbox escapes. No one says "don't fix the sandbox, because an attacker will never get as far as the sandbox". Same here. Don't discount bugs just because a sandbox exists.

nottorp 6 hours ago

But doesn't this come from the company that said they had the "AI" write a compiler that can compile "linux" but couldn't compile a hello world in reality?

Analemma_ 13 hours ago

It's important to fix vulnerabilities even if they are blocked by the sandbox, because attackers stockpile partial 0-days in the hopes of using them in case a complementary exploit is found later. i.e. a sandbox escape doesn't help you on its own, but it's remotely possible someone was using one in combination with one of these fixed bugs and has now been thwarted. I consider this a straightforward success for security triage and fixing.

g947o 14 hours ago

> Firefox was not selected at random. It was chosen because it is a widely deployed and deeply scrutinized open source project — an ideal proving ground for a new class of defensive tools.

What I was thinking was, "Chromium team is definitely not going to collaborate with us because they have Gemini, while Safari belongs to a company that operates in a notoriously secretive way when it comes to product development."

jeffbee 6 hours ago

I would have started with Firefox, too. It is every bit as complex as Chromium, but as a project it has far fewer resources.

vorticalbox 14 hours ago

It's just a different attack surface. For Safari they would need to black-box attack the browser, which is much harder than what they did here.

rs_rs_rs_rs_rs 13 hours ago

What? The js engine in Safari is open source, they can put Claude to work on it any time they want.

est31 12 hours ago

I suppose eventually we'll see something like Google's OSS-Fuzz for core open source projects, maybe replacing bug bounty programs a bit. Anthropic already hands out Claude access for free to OSS maintainers.

LLMs made it harder to run bug bounty programs where anyone can submit stuff, and where a lot of people flooded them with seemingly well-written but ultimately wrong reports.

On the other hand, the newest generation of these LLMs (in their top configuration) finally understands the problem domain well enough to identify legitimate issues.

I think a lot of judging of LLMs happens on the free and cheaper tiers, and quality on those tiers is indeed bad. If you set up a bug bounty program, you'll necessarily get bad quality reports (as cost of submission is 0 usually).

On the other hand, if instead of a bug bounty program you have a "top-tier LLM bug-searching program", then the quality bar can be ensured, and maintainers will get high-quality reports.

Maybe one can save bug bounty programs by requiring a fee to be paid, idk, or by using LLM there, too.

mccr8 10 hours ago

Google already has an AI-powered security vulnerability project, called Big Sleep. It has reported a number of issues to open source projects: https://issuetracker.google.com/savedsearches/7155917?pli=1

sigmar 12 hours ago

>where a lot of people flooded them with seemingly well-written but ultimately wrong reports.

are there any projects to auto-verify submitted bug reports? perhaps by spinning up a VM and then having an agent attempt to reproduce the bug report? that would be neat.
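The sandbox half of that seems tractable today. A rough sketch, with a throwaway Docker container standing in for the VM (the agent-driven "adapt the repro until it runs" part is the harder half, and is not shown):

    import subprocess, sys

    def try_reproduce(repro_script: str, image: str = "ubuntu:24.04") -> bool:
        # run the reporter's repro in a network-less, disposable container;
        # docker reports 128+N when the contained process dies on signal N
        cmd = [
            "docker", "run", "--rm", "--network=none", "--memory=2g",
            "-v", f"{repro_script}:/repro.sh:ro",   # must be an absolute host path
            image, "bash", "/repro.sh",
        ]
        try:
            proc = subprocess.run(cmd, capture_output=True, timeout=300)
        except subprocess.TimeoutExpired:
            return False
        return proc.returncode >= 128

    if __name__ == "__main__":
        print("reproduced" if try_reproduce(sys.argv[1]) else "not reproduced")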

suddenlybananas 12 hours ago

> Anthropic already hands out Claude access for free to OSS maintainers.

Free for 6 months after which it auto-renews if I recall correctly.

mceachen 11 hours ago

No mention of auto renewal is made as far as I (and Claude) could determine.

Their OSS offer is first-hit-is-free.

tclancy 14 hours ago

Part of that caught my eye. As yet another person who's built a half-assed system of AI agents running overnight doing stuff, one thing I've tasked Claude with doing (in addition to writing tests, etc.) is using formal verification when possible to verify solutions. It reads like that may be what Anthropic is doing in part.

And this is a good reminder for me to add a prompt about property testing being preferred over straight unit tests and maybe to create a prompt for fuzz testing the code when we hit Ready state.
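For anyone unfamiliar, the kind of property test I mean looks like this (Hypothesis; encode/decode stand in for whatever pair of functions should round-trip):

    from hypothesis import given, strategies as st

    def encode(s: str) -> bytes:
        return s.encode("utf-8")

    def decode(b: bytes) -> str:
        return b.decode("utf-8")

    @given(st.text())
    def test_roundtrip(s):
        # one stated property replaces a pile of hand-picked example tests
        assert decode(encode(s)) == s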

devin 12 hours ago

Can you give me an example (real or imagined) where you're dipping into a bit of light formal verification?

I don't think the problems I work on require the weight of formal verification, but I'm open to being wrong.

tclancy 12 hours ago

To be clear, almost (all?) of mine do not either and it's partially due to the fact I have been really interested in formal methods thanks to Hillel Wayne, but I don't seem to have the math background for them. To the man who has seen a fancy new hammer but cannot afford it, every problem looks like a nail.

The origin of it is a hypothesis I can get better quality code out of agents by making them do the things I don't (or don't always). So rather than quitting at ~80% code coverage, I am asking it to cover closer to 95%. There's a code complexity gate that I require better grades on than I would for myself because I didn't write this code, so I can't say "Eh, I know how it works inside and out". And I keep adding little bits like that.

I think the agents have only used it 2 or 3 times. The one that springs to mind is a site I am "working" on where you can only post once a day. In addition, there's an exponential backoff system for bans to fight griefers. If you look at them at the same time, they're the same idea for different reasons: "User X should not be able to post again until [timestamp]". And there's a set of a dozen or so formal-method proofs done in z3 to check the work, which can be referenced (I think? god this all feels dumb and sloppy typed out) at checkpoints to ensure things have not broken the promises.
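The shape of one of those checks, heavily simplified down to the "max of two deadlines" core (z3py):

    from z3 import Int, If, Or, Solver, unsat

    last_post, ban_until, now = Int("last_post"), Int("ban_until"), Int("now")
    COOLDOWN = 86400  # one post per day

    cooldown_end = last_post + COOLDOWN
    next_allowed = If(cooldown_end > ban_until, cooldown_end, ban_until)  # max of the two

    s = Solver()
    # search for a counterexample: a moment when posting is allowed even though
    # the daily cooldown or an active ban is still in force
    s.add(now >= next_allowed, Or(now < cooldown_end, now < ban_until))
    assert s.check() == unsat  # none exists, so the promise holds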

hinkley 9 hours ago

At this point about 80% of my interaction with AI has been reacting to an AI code review tool. For better or worse it reviews all code moves and indentation changes, which means all the architecture work I'm doing is kicking asbestos dust everywhere. It's harping on a dozen misfeatures that look like bugs, but some needed either tickets or documentation, and that's been handled now. It's also found about half a dozen bugs I didn't notice, in part because the tests were written by an optimist, and I mean that as a dig.

That’s a different kind of productivity but equally valuable.

driverdan 14 hours ago

Anthropic's write-up[1] is how all AI companies should discuss their products: no hype, honest about what went well and what didn't. They highlighted areas for improvement too.

1: https://www.anthropic.com/news/mozilla-firefox-security

dang 7 hours ago

Thanks! Since it has more technical info, I switched the URL to that from https://blog.mozilla.org/en/firefox/hardening-firefox-anthro... and put the latter in the top text.

I couldn't bring myself to switch to the (even) more press-releasey title.

shevy-java 11 hours ago

Reads like a promo.

mentalgear 15 hours ago

That's one good use of LLMs: fuzz testing / attacking.

nz 14 hours ago

Not contradicting this (I am sure it's true), but why is using an LLM for this qualitatively better than using an actual fuzzer?

azakai 10 hours ago

1. This is a kind of fuzzer. In general it's just great to have many different fuzzers that work in different ways, to get more coverage.

2. I wouldn't say LLMs are "better" than other fuzzers. Someone would need to measure findings/cost for that. But many LLMs do work at a higher level than most fuzzers, as they can generate plausible-looking source code.

bvisness 2 hours ago

As someone on the SpiderMonkey team who had to evaluate some of Anthropic's bugs, I can say that Anthropic's test cases were definitely far easier to assess than those generated by traditional fuzzers. Instead of extremely random and mostly superfluous gibberish, we received test cases that actually resembled a coherent program.

saagarjha 14 hours ago

Presumably because people have used actual fuzzers and not found these bugs.

hrmtst93837 6 hours ago

Fuzzers and LLMs attack different corners of the problem space, so asking which is 'qualitatively better' misses the point. Fuzzers like AFL or libFuzzer with AddressSanitizer excel at coverage-driven, high-volume byte mutation and parsing-crash discovery, while an LLM can generate protocol-aware, stateful sequences, realistic JavaScript and HTTP payloads, and user-like misuse patterns that exercise logic and feature-interaction bugs a blind mutational fuzzer rarely reaches.

I think the practical move is to combine them: have an LLM produce multi-step flows or corpora and seed a fuzzer with them, or use the model to script Playwright or Puppeteer scenarios that reproduce deep state transitions, and then let coverage-guided fuzzing mutate around those seeds. Expect tradeoffs, though: LLM outputs hallucinate plausible but untriggerable exploit chains and generate a lot of noisy candidates, so you still need sanitizers, deterministic replay, and manual validation, while fuzzers demand instrumentation and long runs to actually reach complex stateful behavior.
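Concretely, the seeding half can be as mundane as this (a sketch; ./js_fuzz_target is a stand-in for any libFuzzer-style binary, and the seeds here are just illustrative):

    import hashlib, pathlib, subprocess

    corpus = pathlib.Path("corpus")
    corpus.mkdir(exist_ok=True)

    # however the seeds were produced (an LLM session, hand-written flows),
    # each one is just a file in the corpus directory
    seeds = [
        b"let a = new ArrayBuffer(8); let b = a.transfer(16);",
        b"for (let i = 0; i < 3; i++) { gc(); new WeakRef({}); }",
    ]
    for s in seeds:
        (corpus / hashlib.sha1(s).hexdigest()).write_bytes(s)

    # libFuzzer treats directory arguments as the corpus to load and mutate around
    subprocess.run(["./js_fuzz_target", str(corpus), "-max_total_time=3600"])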

utopiah 11 hours ago

I didn't even read the piece, but my bet is that fuzzers are typically limited to inputs, whereas relying on LLMs is also about finding text patterns in the codebase, a bit more loosely than before while still being statistically relevant.

mmis1000 9 hours ago

It's not really a question of better or worse, though. It's a more directed fuzzer than the rest: while it can craft a payload that triggers a flaw deep in a flow path, it could also miss some obvious pattern that normal people wouldn't think is a problem (which is what most fuzzers currently test).

amelius 13 hours ago

Perhaps I missed it but I don't see any false positives mentioned.

mozdeco 13 hours ago

[working for Mozilla]

That's because there were none. All bugs came with verifiable testcases (crash tests) that crashed the browser or the JS shell.

For the JS shell, similar to fuzzing, a small fraction of these bugs were bugs in the shell itself (i.e. testing only) - but according to our fuzzing guidelines, these are not false positives and they will also be fixed.

sfink 8 hours ago

> For the JS shell, similar to fuzzing, a small fraction of these bugs were bugs in the shell itself (i.e. testing only)

There's some nuance here. I fixed a couple of shell-only Anthropic issues. At least mine were cases where the shell-only testing functions created situations that are impossible to create in the browser. Or at least, after spending several days trying, I managed to prove to myself that it was just barely impossible. (And it had been possible until recently.)

We do still consider those bugs and fix them one way or the other -- if the bug really is unreachable, then the testing function can be weakened (and assertions added to make sure it doesn't become reachable in the future). For the actual cases here, it was easier and better to fix the bug and leave the testing function in place.

We love fuzz bugs, so we try to structure things to make invalid states as brittle as possible so the fuzzers can find them. Assertions are good for this, as are testing functions that expose complex or "dangerous" configurations that would otherwise be hard to set up just by spewing out bizarre JS code or whatever. It causes some level of false positives, but it greatly helps the fuzzers find not only the bugs that are there, but also the ones that will be there in the future.

(Apologies for amusing myself with the "not only X, but also Y" writing pattern.)
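For the non-fuzzing folks, a toy analogue of the testing-function-plus-assertion pattern (Python for brevity; obviously nothing like the real SpiderMonkey code):

    import os

    class Cache:
        def __init__(self):
            self._entries = {}
            self._frozen = False  # the rare, "dangerous" configuration

        def set(self, key, value):
            # the assertion makes the invalid state (mutating while frozen)
            # crash loudly so fuzzers can find it, now and in the future
            assert not self._frozen, "mutated a frozen cache"
            self._entries[key] = value

        def testing_only_freeze(self):
            # exposed only in test builds, so fuzzers can reach the rare
            # configuration cheaply instead of contriving it via normal APIs
            assert "TEST_BUILD" in os.environ
            self._frozen = True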

shevy-java 11 hours ago

I guess it is good when bugs are fixed, but are these real bugs or contrived ones? Is anyone doing quality assessment of the bugs here?

I think it was curl that closed its bug bounty program due to AI spam.

amelius 12 hours ago

Sounds good.

Did you also test on old source code, to see if it could find the vulnerabilities that were already discovered by humans?

anonnon 7 hours ago

Any particular reason why the number of vulnerabilities fixed in Feb. was so high? Even subtracting the count of Anthropic's submissions, from the graph in their blog post, that month still looks like an outlier.

pvillano 2 hours ago

It's like supercharged fuzzing.

nullbyte 4 hours ago

I always enjoy reading Anthropic's blog posts; they often have great articles.

cubefox 9 hours ago

Interesting end of the Anthropic report:

> Opus 4.6 is currently far better at identifying and fixing vulnerabilities than at exploiting them. This gives defenders the advantage. And with the recent release of Claude Code Security in limited research preview, we’re bringing vulnerability-discovery (and patching) capabilities directly to customers and open-source maintainers.

> But looking at the rate of progress, it is unlikely that the gap between frontier models’ vulnerability discovery and exploitation abilities will last very long. If and when future language models break through this exploitation barrier, we will need to consider additional safeguards or other actions to prevent our models from being misused by malicious actors.

> We urge developers to take advantage of this window to redouble their efforts to make their software more secure. For our part, we plan to significantly expand our cybersecurity efforts, including by working with developers to search for vulnerabilities (following the CVD process outlined above), developing tools to help maintainers triage bug reports, and directly proposing patches.

ilioscio 11 hours ago

Anthropic continues to pull ahead of the other AI companies in terms of 'trustworthiness'. If they want to really test their red team, I hope they look at CUPS.

LtWorf 10 hours ago

A bit of an easy target no?

sfink 8 hours ago

As someone who saw a bunch of these bugs come in (and fixed a few), I'd say that Anthropic's associated writeup at https://www.anthropic.com/news/mozilla-firefox-security undersells it a bit. They list the primary benefits as:

    1. Accompanying minimal test cases
    2. Detailed proofs-of-concept
    3. Candidate patches
This is most similar to fuzzing, and in fact could be considered another variant of fuzzing, so I'll compare to that. Good fuzzing also provides minimal test cases. The Anthropic ones were not only minimal but well-commented with a description of what it was up to and why. The detailed descriptions of what it thought the bug was were useful even though they were the typical AI-generated descriptions that were 80% right and 20% totally off base but plausible-sounding. Normally I don't pay a lot of attention to a bug filer's speculations as to what is going wrong, since they rarely have the context to make a good guess, but Claude's were useful and served as a better starting point than my usual "run it under a debugger and trace out what's happening" approach. As usual with AI, you have to be skeptical and not get suckered in by things that sound right but aren't, but that's not hard when you have a reproducible test case provided and you yourself can compare Claude's explanations with reality.

The candidate patches were kind of nice. I suspect they were more useful for validating and improving the bug reports (and these were very nice bug reports). As in, if you're making a patch based on the description of what's going wrong, then that description can't be too far off base if the patch fixes the observed problem. They didn't attempt to be any wider in scope than they needed to be for the reported bug, so I ended up writing my own. But I'd rather them not guess what the "right" fix was; that's just another place to go wrong.

I think the "proofs-of-concept" were the attempts to use the test case to get as close to an actual exploit as possible? I think those would be more useful to an organization that is doubtful of the importance of bugs. Particularly in SpiderMonkey, we take any crash or assertion failure very seriously, and we're all pretty experienced in seeing how seemingly innocuous problems can be exploited in mind-numbingly complicated ways.

The Anthropic bug reports were excellent, better even than our usual internal and external fuzzing bugs and those are already very good. I don't have a good sense for how much juice is left to squeeze -- any new fuzzer or static analysis starts out finding a pile of new bugs, but most tail off pretty quickly. Also, I highly doubt that you could easily achieve this level of quality by asking Claude "hey, go find some security bugs in Firefox". You'd likely just get AI slop bugs out of that. Claude is a powerful tool, but the Anthropic team also knew how to wield it well. (They're not the only ones, mind.)

chill_ai_guy 5 hours ago

Terrible day to be a Hackernews doomer who is still hanging on to "LLM bad code". AI will absolutely eat your lunch soon unless you get on the ship right now

lostmsu 7 hours ago

Missed a chance to take on Google by naming this effort Anthropic Project Zero

BloondAndDoom 9 hours ago

I wonder what the prompt and approach were; Anthropic's own blog doesn't really give any details. Was it just "here is the area to focus on, find vulnerabilities, make no mistakes"?

delaminator 7 hours ago

I thought Mozilla Foundation were protecting us from AI.

Turns out it's the other way around - AI is protecting the Mozilla Foundation from us.

semiquaver 11 hours ago

It’s just a stochastic parrot! Somehow all these vulnerabilities were in the training data! Nothing ever happens!

(/s if it’s not clear)

applfanboysbgon 6 hours ago

What an irritating comment. Identifying bugs in code is, in fact, exactly something a stochastic parrot could do. Vulnerability research is already a massively automated industry, and there's even a very well-established term -- "script kiddies" -- for malicious teenagers who run scripts that automatically find vulnerabilities in existing services without any knowledge of how they work. Having a new form of automation can certainly be a useful tool, but is still in no way an indication of "intelligence" or any deviation from the expected programming of next token prediction guided by statistical probability.

semiquaver 5 hours ago

Thank you very much for acting as a useful foil and proving my point.

lloydatkinson 15 hours ago

Anthropic feels like they are flailing around constantly trying to find something to do. A C compiler that didn't work, a browser that didn't work, and now solving bugs in Firefox.

gehsty 15 hours ago

This makes sense - they are demonstrating the capability of their core product by doing so. They don't make browsers or C compilers; they sell AI + dev tools.

jdiff 14 hours ago

Seems like a poor advertisement for their product if their shining example of utility is a broken compiler that doesn't function as the README indicates.

delfinom 14 hours ago

Capability of a product that makes non-working outputs at a premium?

I can hire an intern for that.

manbash 14 hours ago

I think it's a nice break from vibe-coding. It feels like a good direction in terms of use cases for LLM.

simonw 13 hours ago

What was Anthropic's "browser that didn't work"?

utopiah 12 hours ago

I think they meant Cursor, cf https://news.ycombinator.com/item?id=46646777

saagarjha 14 hours ago

Solving bugs in Firefox is quite impressive.

ferguess_k 12 hours ago

However, the shape is there. And no one knows how good the thing is going to be after X months. We are measuring in months here, not even years.

I believe there is a theoretical cap on the capability of LLMs. I'm wondering what it looks like.

mmis1000 9 hours ago

If it explores all these cases after a few months and makes the tool itself obsolete, that sounds like a total win to me?

That won't happen unless Firefox just stops developing, though. New code comes with new bugs, and there must be some person or some tool to find them.

Analemma_ 13 hours ago

I think OpenAI is flailing around too-- we're making an AI-generated shortform video app, we're rescinding restrictions on porn, we're making a... something... with Jony Ive-- but only Anthropic is flailing in a way beneficial to society instead of becoming a trillion dollar heroin dealer.

dartharva 9 hours ago

That's what people must have said about small upstarts like Google and Microsoft back when Silicon Valley was nascent.

shevy-java 11 hours ago

Mozilla betting on AI.

I am concerned.