First public macOS kernel memory corruption exploit on Apple M5 (blog.calif.io)

444 points by quadrige a day ago

MrWiffles 5 hours ago

Wait a minute - clearly I missed something here. Last I read, Mythos was only available to a handpicked list of megacorps under project glasswing. Did the hourly changing AI soap opera air yet another plot twist that I missed amidst my quest better known as “trying to find a job”?

If not, how’d a small time outfit get access to something the rest of us can’t have because we’re (apparently) not trustworthy enough?

No shade on these guys - I’m thinking it’s just another plot twist in “Hours of AI’s Lives”.

xuancanh 2 hours ago

They list Anthropic as one of their customers and appear to have conducted penetration testing for them previously, so they're likely one of those trusted organizations.

tempaccount420 4 hours ago

I'm guessing they have some friends at Anthropic.

unaut 9 hours ago

Well, this was a fun read. Discovering such a high-severity critical exploit within a week by coupling experts with frontier models is an amazing new journey we're about to embark on.

teiferer 8 hours ago

> amazing new journey we're about to embark on.

Is it? It's an arms race between the "good guys / defenders" and "bad guys / attackers". Assuming both sides have access to the same tools, how is this going to make any difference? Their relative strength will stay the same.

What is actually different is that 1. anybody without tool access is out of the game, which includes security professionals from poorer backgrounds (for them it's not too amazing of a journey) and 2. the AI vendors get a constant stream of what's essentially an AI tax from everybody - so yeah, for them it's gonna be an amazing ride.

Tuna-Fish 5 hours ago

Meh. A world where the defenders and attackers are both omniscient is a world without exploits. Steps towards that world are steps towards more security. The reason the exploits are being found so rapidly is that they are mining all the bugs left in these projects from decades of coding. Eventually they will run out.

asdff 2 hours ago

Pretty soon people will just airgap their stuff and that will eliminate most of the attack surface short of a sort of Mission Impossible style operation.

I mean really, given how AI is being marketed, what is the point of the internet going forward when all its contents are going to be AI slop anyhow? Just disconnect and run local models if all you are getting is slop anyway. The original purpose of the internet is now dead. Real people communicating and sharing information with each other? Ha! That is no longer valued.

In fact, actual innovation might no longer be valued. What is innovation but opening Pandora's box and potentially seeing a disruptive competitor take your slice of the Federal Reserve's money printer? Better to nip that in the bud and control all the devils we already know, from a purely sociopathic profit-making standpoint, which seems to be a very dominant viewpoint among the people in charge of the world's power structures right now.

andai 13 hours ago

So like ... I thought Mythos was just a bunch of hype? Or maybe the researchers are having their skills boosted due to using a model with such a cool name?

I jest, but I did notice having more confidence to take on more ambitious work lately. We're all centaurs now.

traceroute66 11 hours ago

> I thought Mythos was just a bunch of hype?

My opinion is that it is over-hyped because like any LLM, it requires a suitable human in the loop to keep the LLM on the straight and narrow, and then to weed through the inevitable false-positives and hallucinations.

Nicholas Carlini, for example, whose name is on many of the recent high-profile Mythos findings, is not just some random dude with a Claude sub on his credit card ... he's an experienced security researcher.

Random inexperienced people thinking Mythos can replace the need for experienced pen-testers, auditors etc. are likely to be sorely disappointed if/when they get their hands on Mythos.

alwillis 2 hours ago

> Nicholas Carlini, for example, whose name is on many of the recent high-profile Mythos findings is not just some random dude with a Claude sub on his credit card .... he's an experienced security researcher.

I don’t think Mythos is hype for all kinds of reasons.

Anthropic is a young company but their track record is solid; they don’t seem to hype things just for the sake of hyping things. Sam Altman at OpenAI? We already know his track record…

I’m going Occam’s razor here: the simplest explanation is usually the correct one.

Anthropic had an “oh shit” moment when they realized what Mythos can do. They decided to do the responsible thing: give the industry a heads-up and an opportunity to use the preview to identify and fix the most dangerous zero-day vulnerabilities.

Since the FAANG companies have billions of users, it makes sense to start with them.

There are still going to be major issues for users of systems too old to get patches or updates, or for IT organizations who think Mythos is a replay of Y2K, where, compared to the warnings, not a lot happened.

The bottom line is someone with Mythos won’t need to be an experienced security expert to cause real problems. That’s kind of the point.

asdff 2 hours ago

Over time that will change. Technology has proven time and time again that as we add a new layer of abstraction over the fundamental functionality, knowledge of the previous layer quickly becomes vestigial. This is true not just of software but of absolutely all technology there is, going back to the first fire, atlatl, or rock sling.

heresie-dabord 5 hours ago

> it is over-hyped because like any LLM, it requires a suitable human in the loop to keep the LLM on the straight and narrow, and then to weed through the inevitable false-positives and hallucinations.

"Suitable human" is a dry phrase indeed. ^_^

The hype is "gosh look at all the bad things this brilliant almost conscious tool found!"

The reality: an insecure toolchain for an insecure language with an insecure compiler produced a runnable but insecure binary for an insecure OS. We couldn't be arsed to address any of this before, but now we're being billed the full price of our laziness.

wslh 10 hours ago

I think it's worth looking at the recent XBOW benchmark: https://xbow.com/blog/mythos-offensive-security-xbow-evaluat... They found that ChatGPT 5.5 works better, so the secret is in the architecture (including humans in the loop).

smallnix 11 hours ago

> likely to be sorely disappointed if/when they get their hands on Mythos.

At first they will be delighted. So much money and time saved. When their adversaries get their hands on their system (with or without Mythos), then they'll be sorely disappointed.

yieldcrv 3 hours ago

Anthropic just doesn't (didn't) have compute, so they made up a bunch of PR to delay it.

They got all that compute with the SpaceX partnership, but now the PR has taken on a life of its own, so they might as well keep hyping up Mythos and artificial scarcity while they have an asset people want now.

Just roll with it

OpenAI has a history of doing the same thing, and it's the same people. GPT-5 was supposed to be AGI at one point, remember.

yellow_lead 12 hours ago

Did Mythos have access to Apple's source code?

> Apple spent five years building it. Probably billions of dollars too.

This seems higher than I'd expect.

Someone 6 hours ago

I wouldn’t know whether that’s true, but this is partly hardware. That tends to make things more expensive, as, at some point, design bugs get really expensive both in time and in dollars, and chances are there were design bugs late in the process (e.g. because somebody published a new way of attacking a system that the latest design didn’t anticipate)

Also, Apple claims it was an effort spanning half a decade (https://security.apple.com/blog/memory-integrity-enforcement...), so depending on what you consider part of this (for example, do you include time spent on their secure memory allocator, or on designing/implementing the ARM Memory Tagging Extension and the Enhanced Memory Tagging Extension, in the costs of this feature?), the total could plausibly be very large.

dindresto 7 hours ago

The macOS kernel (XNU), as well as the base system underneath macOS (Darwin), is open source.

https://github.com/apple-oss-distributions/xnu

djwatson24 3 hours ago

> We wanted to check out the infamous Infinite Loop too, but were afraid it could take a long time.

Haha! Nerd jokes are the best jokes

yieldcrv a day ago

from what they demonstrated, this seems to only be a $100,000 exploit in Apple's bug bounty platform, but if they package it right, it could be a $1.5 million exploit

They simply have to show it against a beta version of macOS, frame it as unauthorized access, and maybe demonstrate it from Lockdown Mode if possible.

vsgherzi a day ago

This is an LPE; I believe what you’re describing is a zero-click RCE.

yieldcrv a day ago

how much do you think it is worth in the bug bounty program

fguerraz 10 hours ago

This is incredibly light in details, no verifiable claim as far as I can tell.

(I’m sure they’re not lying, but we’re not learning anything here)

fg137 10 hours ago

It reads more like a PR piece than a technical article.

alwillis an hour ago

They can’t disclose the technical details yet. They did say a detailed write-up is coming.

dgellow a day ago

The world is so not ready for the impact of LLMs on security issues. If true, congrats to the Calif team. It’s likely too technical for me to understand in detail, but I’m looking forward to reading the 55-page report.

runlevel1 19 hours ago

> The world is so not ready for the impact of LLMs on security issues.

I agree, but it's the people I'm worried about.

I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.

What's worse is a lot of this behavior is being driven by leaders, whether directly (e.g. unrealistic velocity goals, promoting people based on hand-wavy "use AI" initiatives, etc) or indirectly (e.g. layoffs overloading remaining devs, putting inexperienced devs in senior roles, etc).

The world's gone mad and large swaths of the industry seem hellbent on rediscovering the security basics the hard way.

adrianN 18 hours ago

The gamble is that you can cruise on the senior engineer’s diminishing understanding for a few years until models become good enough that you don’t need any humans in the loop and you can fire all those expensive seniors.

marysol5 9 hours ago

>I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.

No anecdotes needed, it's entirely happening.

But it's also devs being devs.

8note 19 hours ago

is this exciting?

juniors have been writing code forever that is imperfect and not memorized by the people reviewing it

isn't the important thing the mechanisms for maintaining the code?

alwillis 15 hours ago

> I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.

I don’t think so.

An LLM can produce higher-quality documentation than most humans. If it's not already happening, when a new developer joins a team, they're going to have an LLM produce any documentation a new developer needs, including why certain decisions were made.

It could also summarize years of email threads and code reviews that, let's face it, a new person wouldn’t be able to ingest anyway; it's not like a new developer gets to take a week off to get caught up on everything that happened before they got there. English not their first language? Well, the LLM can present the information in virtually any language required.

As the models continue to improve, they'll spot patterns in the code that a human wouldn’t be able to see.

iqihs a day ago

you're assuming that blue teams and engineers are sitting around twiddling their thumbs

nvr219 a day ago

Most companies in the world do not have “blue teams”. They barely have any kind of security employee.

dgellow a day ago

Not at all. I’m considering that the amount of vulnerable software in the wild is very, very large, with most organizations not managing their systems properly. Imagine all the small-to-medium-sized companies that do not have budgets for a dedicated, talented security team. And all the software that will never be patched. We are at the beginning of the exponential.

jp0001 18 hours ago

LLMs are going to produce amazing Rube Goldberg-style vulnerabilities for years to come. It's already starting; this instance isn't an example of it, but it's happening.

shpx 14 hours ago

Maybe it's physically impossible to build a theoretically secure system, just as it's (presumably) impossible to have a cell that isn't susceptible to any virus. Maybe this whole time we've been getting away with a type of security by obscurity, where the obscurity is just no one having the time and focus to actually analyze the code.

JacobKfromIRC 13 hours ago

Suppose the following:

1. Any given system has a finite number of findable vulnerabilities.

2. All findable vulnerabilities are fixable (if not in software then with a new hardware revision).

3. Fixing a vulnerability while keeping the same intended functionality introduces on average less than 1 other findable vulnerability.

4. It is possible to cease adding new features to a system and from that point forward only focus on fixing vulnerabilities.

If all 4 are true, then perfect security seems possible, in some sense. I think some vulnerabilities might not be fixable, if you include things like the idea that users can be tricked into revealing their passwords. If you restrict the definition of vulnerability to some narrower meaning that still captures most of what people mean when they say computer vulnerability, then I think those 4 statements are probably true.

Perfect security might be near impossible in practice because vulnerabilities will get more difficult to find and fix over time, but I think we should expect the discovery of vulnerabilities to eventually become arbitrarily slow in a hypothetical system that prioritized security above all else.
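
To make premise 3 concrete (a back-of-the-envelope sketch, not part of the original argument): if a system starts with N findable vulnerabilities and each fix introduces on average r < 1 new findable ones, the expected total number of fixes is a convergent geometric series:

    \text{expected total fixes} \;=\; N + Nr + Nr^2 + \cdots \;=\; \frac{N}{1-r} \;<\; \infty, \qquad 0 \le r < 1

If r is 1 or greater the series diverges and the pool of findable vulnerabilities never drains, so premise 3 is what makes "eventually they will run out" possible.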

lowdude 13 hours ago

I would rather claim that building a theoretically secure system is prohibitively expensive. At the end of the day, Mythos et al. are just better tools for finding vulnerabilities that will eventually be available to both offensive and defensive actors.

If you imagine you had a vulnerability scanner as fast and convenient as a linter, it would be much cheaper to write secure code right away. Probably not perfectly secure, but still secure enough to make sure finding exploits stays expensive.

txhwind 14 hours ago

Another kind of "obscurity": I'm not valuable enough to be attacked, compared with the cost. But what if that cost has been reduced a lot?

fsflover 8 hours ago

It's probably impossible to achieve security through correctness, but security through compartmentalization can work. See: https://qubes-os.org.

nashadelic 8 hours ago

Hyperbolic, but it might be safe to assume any local data on a connected device is going to be accessible.

iknowSFR 8 hours ago

Genuine question as I’m far less technical than the crowd here. Has this not always been the case?

tweakimp 15 hours ago

Do you mean by vibecoding these vulnerabilities into the kernel or by finding them?

vsgherzi a day ago

Unfortunately a little light on the details. I'm very curious how the bug got past MTE.

dorianmariecom a day ago

Memory Tagging Extension

Arm published the Memory Tagging Extension (MTE) specification in 2019 as a tool for hardware to help find memory corruption bugs. MTE is a memory tagging and tag-checking system, where every memory allocation is tagged with a secret. The hardware guarantees that later requests to access memory are granted only if the request contains the correct secret. If the secrets don’t match, the app crashes, and the event is logged. This allows developers to identify memory corruption bugs immediately as they occur.

https://support.apple.com/guide/security/operating-system-in...
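
To make the mechanism above concrete, here is a toy model of tag-check-on-access in plain C. It is a sketch of the idea, not real MTE (real MTE uses 4-bit hardware tags on 16-byte granules, carried in the pointer's top byte and checked by the hardware on every load and store); the simulation just keeps a shadow tag per granule and compares it against the tag embedded in the pointer.

    /* Toy model of MTE-style tag checking -- a sketch of the idea, not real MTE. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define GRANULE   16
    #define HEAP_SIZE 1024
    #define TAG_SHIFT 56                        /* tag lives in the pointer's top byte */

    static uint8_t heap[HEAP_SIZE];
    static uint8_t granule_tag[HEAP_SIZE / GRANULE];    /* the "memory" tags */

    /* Embed an allocation's tag in the unused top bits of the pointer. */
    static uintptr_t tag_ptr(void *p, uint8_t tag) {
        return (uintptr_t)p | ((uintptr_t)tag << TAG_SHIFT);
    }

    /* Every access checks that the pointer tag matches the memory tag. */
    static uint8_t *checked_access(uintptr_t tagged) {
        uint8_t  ptr_tag = (uint8_t)(tagged >> TAG_SHIFT) & 0xF;
        uint8_t *raw     = (uint8_t *)(tagged & (((uintptr_t)1 << TAG_SHIFT) - 1));
        size_t   idx     = (size_t)(raw - heap) / GRANULE;
        if (granule_tag[idx] != ptr_tag) {      /* secrets don't match: crash and log */
            fprintf(stderr, "tag check fault\n");
            abort();
        }
        return raw;
    }

    int main(void) {
        granule_tag[0] = 0x3;                   /* "allocate" granule 0 with tag 0x3 */
        uintptr_t p = tag_ptr(&heap[0], 0x3);   /* hand out a matching tagged pointer */
        *checked_access(p) = 42;                /* tags match: access granted */

        granule_tag[0] = 0x9;                   /* "free" and retag the granule */
        *checked_access(p) = 7;                 /* stale pointer: tag mismatch, aborts */
        return 0;
    }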

sillysaurusx a day ago

Thank you. I was about to ask.

vsgherzi a day ago

Upon further reading on data-only attacks

(https://www.usenix.org/publications/loginonline/data-only-at...)

This makes more sense. You don't trigger MTE because you're not doing anything that forces MTE to take action; the program's control flow isn't actually being changed.

My other question would be: why didn't Apple use -fbounds-safety here? They've been applying it aggressively everywhere else.

MTE plus -fbounds-safety everywhere should lead to an extremely hardened OS.
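
A hedged illustration of the "data-only" point (my own example, not the bug from the post): tag checking only faults when a pointer's tag disagrees with the memory's tag, so an attacker-influenced write that stays inside one correctly tagged allocation, but lands on security-relevant data next to the buffer, never produces a mismatch.

    /* Sketch of data-only corruption that tag checking would not catch.
     * The write stays inside a single allocation, so pointer tag and
     * memory tag still agree; only the adjacent data field is clobbered.
     * (Deliberate overflow -- undefined behavior, for illustration only.) */
    #include <stdio.h>
    #include <string.h>

    struct session {
        char name[32];
        int  is_admin;              /* security-relevant data after the buffer */
    };

    int main(void) {
        struct session s = { .name = "guest", .is_admin = 0 };

        volatile size_t attacker_len = 36;          /* attacker-influenced, > sizeof(s.name) */
        memset(s.name, 'A', attacker_len);          /* in-allocation overflow, no tag mismatch */

        printf("is_admin = %d\n", s.is_admin);      /* flag is likely nonzero now */
        return 0;
    }

The point is not this exact pattern; it is that every access obeys its tag, so from MTE's point of view no memory-safety violation ever happened, even though the program's data is now attacker-controlled.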

pjmlp a day ago

Quite strange indeed, given that this was one of the main points at their security conference a few months ago.

aiscoming a day ago

could be a different type of data-only attack, one which doesn't overrun any bounds

landr0id a day ago

GPU memory/shaders/etc. isn't protected by MTE or PAC. They said "data-only", so I guess GPU commands could fit into this description.

LoganDark a day ago

IIRC, the GPU is behind a memory controller, so I doubt corrupting GPU memory alone could lead to an LPE. But I suppose it would give you someplace to store stuff if you can make something else read from it.

traceroute66 a day ago

> I'm very curious how the bug survived through MTE

It's not the first time bugs have gotten past MTE; it happened with the Google Pixel last year ... https://github.blog/security/vulnerability-research/bypassin...

Riany 15 hours ago

I had the same question, and if this is a data-only attack, the lesson may be that MIE cuts off many attack paths but does not remove every useful corruption primitive.

isodev 16 hours ago

I’m surprised Apple is still not dogfooding their allegedly safe language, Swift. Or was the whole exercise of Swift 6 mostly marketing?

pjmlp 15 hours ago

They certainly are; one of the reasons behind Embedded Swift is to replace the iBoot firmware, currently written in a C dialect similar in spirit to Fil-C, with something better.

However, it is no different from the Linux kernel: just because Rust is now allowed, the world hasn't been rewritten, and no sane person is going to do a Claude rewrite of the kernel.

vsgherzi 16 hours ago

Swift is definitely being used at Apple. It was most recently added as a CSS parser in Safari, and it runs embedded in some of the Secure Enclave components. I know there was talk, going as far back as Strange Loop, of getting it into the kernel, but I'm not sure how far that has gone. That being said, they've been huge proponents of -fbounds-safety in Clang, which can achieve a small (but important!) portion of what memory-safe languages can do. I'd also like to see more Swift or alternative adoptions; I think they have potential, and more competition in the safe-language space is always welcome.
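
On the -fbounds-safety point, a small sketch of the kind of annotation that work enables (assumed syntax; the extension is still experimental and the exact spelling and headers depend on the toolchain). The idea is that the pointer's bound travels with it in the type, so the compiler can insert a trapping check before each access instead of letting an out-of-bounds index silently corrupt memory.

    /* Sketch of a __counted_by-style annotation as used with Clang's
     * experimental -fbounds-safety (assumed spelling; the fallback macro
     * lets the file compile as plain C where the extension is absent). */
    #include <stddef.h>
    #include <stdint.h>

    #ifndef __counted_by
    #define __counted_by(n)     /* no-op when bounds safety is unavailable */
    #endif

    struct packet {
        size_t   count;
        uint8_t *__counted_by(count) data;   /* the bound is part of the type */
    };

    uint8_t read_byte(const struct packet *p, size_t idx) {
        /* With bounds safety enabled, the compiler emits a check that
         * idx is below p->count here and traps on violation. */
        return p->data[idx];
    }

That complements MTE: a bounds check stops the out-of-bounds access at the source, while tagging catches (probabilistically) whatever still reaches a differently tagged granule.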

nielsbot 16 hours ago

You might be interested in the Strict Memory Safety option

https://docs.swift.org/compiler/documentation/diagnostics/st...

commandersaki a day ago

I bought the M5 specifically because of MIE. Now I feel dumb.

vsgherzi a day ago

You shouldn’t; MTE blocks a large chunk of vulnerabilities and makes things like ROP and JOP very difficult, if not impossible, now.

commandersaki a day ago

I should've added /s.

aiscoming a day ago

you should worry about npm/pypi malware, not memory corruption bugs

bredren a day ago

Did the article get edited? There is not much description of the field trip.