The Vercel breach: OAuth attack exposes risk in platform environment variables (trendmicro.com)

313 points by queenelvis 17 hours ago

Vercel April 2026 security incident - https://news.ycombinator.com/item?id=47824463 - April 2026 (485 comments)

A Roblox cheat and one AI tool brought down Vercel's platform - https://news.ycombinator.com/item?id=47844431 - April 2026 (145 comments)

westont5 16 hours ago

I'm not sure I've seen it mentioned yet that when Vercel rolled out their environment variable UI, there was no "sensitive" option https://github.com/vercel/vercel/discussions/4558#discussion.... It took ~2 years or more until it was introduced https://vercel.com/changelog/sensitive-environment-variables...

nopointttt 13 hours ago

A sensitive flag at the UI layer doesn't actually change runtime. Once it's in process.env during a build, any dep that decides to grep it can. The real problem isn't a missing checkbox, it's that we still stuff every secret into one env bag and hand the build tools the whole bag. Cloudflare scoped bindings and Fly already split it up, other platforms are just slower.
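The split nopointttt is pointing at can be shown in a few lines. This is a hedged sketch, not anything Vercel-specific; the env contents and the filter are made up for illustration:

```javascript
// Once secrets are in the process environment during a build, any dependency
// that executes (postinstall script, build plugin) sees the same bag; a UI
// "sensitive" flag changes nothing at this layer.
function likelySecrets(env) {
  // A malicious dep can simply filter for plausible secret names.
  return Object.keys(env).filter((k) => /KEY|SECRET|TOKEN|PASSWORD/i.test(k));
}

// Hypothetical build environment, purely for illustration:
const env = {
  NODE_ENV: 'production',
  NEXT_PUBLIC_APP_NAME: 'demo',      // meant to be public
  DATABASE_PASSWORD: 'hunter2',      // "sensitive" in the UI
  STRIPE_SECRET_KEY: 'sk_live_xxx',  // also "sensitive"
};

console.log(likelySecrets(env)); // both "sensitive" values are reachable
```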

_pdp_ 16 hours ago

Sensitive does not mean it is not readable. It is simply not exposed through the UI. It can still be easily leaked if you return a few too many props from your action functions or routes.

The only way to defend against these types of issues is to encrypt your environment with your own keys, with secrets possibly baked into source as there are no other facilities to separate them. An attacker would need to not only read the environments but also download the compiled functions and find the decryption keys.

It is not ideal but it could work as a workaround.

harikb 16 hours ago

> with secrets possibly baked into source

Please don't suggest this. The right way is to have the creds fetched from a vault, which is programmed to release the creds auth-free to your VM (with machine-level identity managed by the parent platform).

This is how Google Secret Manager or AWS Secrets Manager work.
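The vault pattern harikb describes looks roughly like this in application code. A hedged sketch only: `fetchSecret` stands in for a real SDK call (e.g. a Secret Manager access method), and all names below are made up:

```javascript
// The app ships with no long-lived creds; at boot it asks the secrets service,
// which authenticates the machine identity (not a baked-in key) before
// releasing anything. Secrets live only in process memory afterwards.
async function loadConfig(fetchSecret) {
  const [dbPassword, apiKey] = await Promise.all([
    fetchSecret('db-password'),
    fetchSecret('api-key'),
  ]);
  return { dbPassword, apiKey };
}

// Stand-in backend for demonstration; a real vault would verify the caller's
// platform-managed identity before answering.
const fakeVault = { 'db-password': 'hunter2', 'api-key': 'abc123' };
const fetchSecret = async (name) => fakeVault[name];
```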

eecc 3 hours ago

Looks like how GitLab does it.

As far as I’m concerned, the only sane way is to dump credentials in a well-known path and let the environment decide what to bind them with at runtime (which is how Kubernetes does it, at least the EKS version I’ve had to work with).

IOW, JEE variable binding (JNDI) did it right 20 years or so ago.

It might be worth it for architecture designers to look back at that engineering monument (in all its possible meanings; it felt complicated at times) and study its solutions before coming up with a different solution to a problem it already solved.

awestroke 15 hours ago

The better way to defend against these types of issues is to avoid Vercel and similar providers

thundergolfer 15 hours ago

> AI-accelerated tradecraft. The CEO publicly attributed the attacker's unusual velocity to AI augmentation — an early, high-profile data point in the 2026 discourse around AI-accelerated adversary tradecraft.

Attributed without evidence from what I could tell. So it doesn't reveal much at all.

12_throw_away 14 hours ago

Seems like AI is really disrupting the markets for nonsensical excuses the media will repeat uncritically!

mday27 14 hours ago

It's like we're back in 2009 with "did social media cause this?"

krautsauer 9 hours ago

I for one was getting bored of hearing about APTs.

trollbridge 8 hours ago

To be fair, vibe coded solutions tend to recommend Vercel (just like they tended to recommend the Axios library).

tom1337 16 hours ago

I still don't get how this exactly worked. Is the OAuth token they talk about the one that you get when a user uses "Sign in with Google"? Aren't they then bound to the client ID and client secret of that specific Google app the user signed in to? How were the attackers able to go from that to a control plane? Because even if the attacker knows the user's OAuth token, the client ID, and the client secret, they can access the Google Drive etc. (which is bad, I get that), but I simply do not understand how they could log in to any Vercel systems from that point. Did they find the credentials in the Google Drive?

gizzlon 14 hours ago

They don't really say. My guess would be something embarrassing, and that's why they are keeping it to themselves. Maybe passwords in Drive or Gmail. Or just passwordless login links (like sibling said).

_pdp_ 16 hours ago

Once you have a session token, which is what you get after you complete the OAuth dance, you can issue requests to the API. It is as simple as that. The minted token most likely had permission to access the victim's inbox, which the attacker leveraged to read email and obtain one-time passwords, magic links, and other forms of juicy information.

progbits 14 hours ago

If they had SSO sign in to their admin panel (trusted device checks notwithstanding) the oauth access would be useless.

Vercel is understandably trying to shift all the blame on the third party but the fact their admin panel can be accessed with gmail/drive/whatever oauth scopes is irresponsible.

kyle-rb 13 hours ago

I guess what's unusual is that the scope includes inbox access.

IMO it's probably a bad idea to have an LLM/agent managing your email inbox. Even if it's readonly and the LLM behaves perfectly, supply chain attacks have an especially large blast radius (even more so if it's your work email).

chatmasta 14 hours ago

I’m not clear on it either. Was the Context.ai OAuth application compromised? So the threat actor essentially had the same visibility into every Context.ai customer’s workspace that Context.ai has? And why is a single employee being blamed? Did this Vercel employee authorize Context.ai to read the whole Vercel workspace?

datadrivenangel 16 hours ago

"Effective defense requires architectural change: treating OAuth apps as third‑party vendors, eliminating long‑lived platform secrets, and designing for the assumption of provider‑side compromise."

Designing for provider-side compromise is very hard because that's the whole point of trust...

losvedir 16 hours ago

As someone trying to think about OAuth apps at our SaaS, it certainly is very hard.

Do any marketplaces have a good approach here? I know Cloudflare, after their similar Salesloft issue, has proposed proxying all 3rd party OAuth and API traffic through them. But that feels a little bit like trading one threat vector for another.

Other than standard good practices like narrow scopes, shorter expirations, maybe OAuth Client secret rotation, etc, I'm not sure what else can be done. Maybe allowlisting IP addresses that the requests associated with a given client can come from?

mooreds 16 hours ago

This was probably partly a Google refresh token theft (given the length of the access). No inside info, just looking at how the attack occurred.

OAuth 2.1[0] (an RFC that has been around longer than I've been at my employer) recommends some protections around refresh tokens, either making them sender constrained (tied to the client application by public/private key cryptography) or one-time use with revocation if it is used multiple times.

This is recommended for public clients, but I think makes sense for all clients.

The first option is more difficult to implement, but is similar to the IP address solution you suggest. More robust though.

The second option would have made this attack more difficult because the refresh token held by the legit client, context.ai, would have stopped working, presumably triggering someone to look into why and wonder if the tokens had been stolen.

0: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-v2-1

wouldbecouldbe 16 hours ago

I mean, the admin account had visibility into clients' env vars; that's maybe not really great in the first place.

nyc_data_geek1 16 hours ago

Corroborates that zero-trust until now has been largely marketing gibberish. Security by design means incorporating concepts like these, i.e. not assuming that your upstream providers won't be utterly owned in a supply chain attack.

LudwigNagasena 10 hours ago

I don't understand Stage 2. Did the Context.ai app ask for access to Google mail, drive, calendar, etc? That's crazy. I can't believe any company bigger than a mom-and-pop shop would agree to run that outside of their own environment.

EDIT: the writeup from context.ai themselves seems quite informative: https://context.ai/security-update, it seems like it was a personal choice of one of the Vercel employees to grant full access to their Google workspace.

tosser12344321 13 hours ago

There are going to be a lot more like this as the IT-enabled economy at large catches up to the risk debt of broad-based experimentation with AI tools from large and small vendors.

It's "AI-enabled tradecraft" as in: let's take a guess at Vercel leadership's pressure to install and test AI across the company, regardless of vendor risk? Speed speed speed.

This is an extremely vanilla exploit that every company operating without a strictly enforceable AI install allowlist is exposed to. How many AI tools like Context are installed across your scope of local and SaaS AI? Odds are, quite a few; ask your IT guy/gal for estimates.

These tools have access to... everything! And with a security vendor and RBAC mechanism space that'll exist in about... 18-24 months.

Vercel is the canary. It's going to get interesting here; no way in heck that Context is the only target. This is a well-established, much-discussed and much-ignored threat vector, and when one breaks open, the others start too.

This implies a very challenging 6 months ahead if these exploits are kicking off, as everyone is auditing their AI installs now (or should be), and TAs will fire off with the access they have before it is cut.

Source - am a head of sec in tech

chasd00 10 hours ago

I’ve said for a while now that there’s going to be a lot of bizarre security incidents in the news over the next few years.

peterldowns 10 hours ago

I see it the same way. Interesting times…

afunk 8 hours ago

Such an embarrassing way to get caught.

"The attacker compromised this OAuth application — the compromise has since been traced to a Lumma Stealer malware infection of a Context.ai employee in approximately February 2026, reportedly after the employee downloaded Roblox game exploit scripts"

saadn92 16 hours ago

What bites people: rotating a vercel env variable doesn't invalidate old deployments, because previous deploys keep running with the old credential until you redeploy or delete them. So if you rotated your keys after the bulletin but didn't redeploy everything, then the compromised value is still live.

Also worth checking your Google Workspace OAuth authorizations. Admin Console > Security > API Controls > Third-party app access. Guarantee there are apps in there you authorized for a demo two years ago that are still sitting with full email/drive access.

quentindanjou 16 hours ago

Usually rotating a credential means that you invalidate the previous one. Never heard of rotating credentials that would only create new ones and keep the old ones active.

simlevesque 15 hours ago

But then every rotation would break production, wouldn't it ?

oasisbob 13 hours ago

> What bites people: rotating a vercel env variable doesn't invalidate old deployments, because previous deploys keep running with the old credential until you redeploy or delete them. So if you rotated your keys after the bulletin but didn't redeploy everything, then the compromised value is still live.

That statement in the report really confuses me; feels illogical and LLM generated.

An old deployment using an older env var doesn't do anything to control whether or not the credential is still valid. This is a footgun which affects availability, not confidentiality like implied.

Another section in the report is confusing, "Environment variable enumeration (Stage 4)". The described mechanics of env var access are bizarre to me -

> Pay particular attention to any environment variable access originating from user accounts rather than service accounts, or from accounts that do not normally interact with the projects being accessed.

Are people really reading credentials out of vercel env vars for use in other systems?

trollbridge 8 hours ago

We have multiple Google accounts for this very reason. Of course, a lot of orgs don’t do this due to the Google Workspace per user “tax”. I tried and failed at a past employer to get some account other than my primary for doing OAuth grants like this.

nulltrace 11 hours ago

Preview deploys are even worse. Every PR spins one up with the same env vars and nobody ever cleans them up. You rotate the key, redeploy prod, and there are still like 200 zombie previews sitting there with the old value.

wouldbecouldbe 16 hours ago

When you rotate them, you're supposed to expire your old vars.

kevinqi 16 hours ago

yeah not redeploying on credential changes seems like a design flaw. Render redeploys on env var changes, for instance.

treexs 14 hours ago

Vercel very clearly highlights that you need to redeploy once you make a credential change

_the_inflator 3 hours ago

Vercel did a great job with NextJS and supports quite a few OSS projects.

But even before AI they had some serious struggles, according to long-time users.

With the introduction of the deployment platform, NextJS appeared to have advantages when deployed there.

What I can say is that Next has some weird things going on under the hood, which most senior coders know as "it works, no one knows why, don't touch these 1,000 LoC here".

Build and runtime settings are a mess. Pre-building a Docker image on a local machine and deploying it on another turned out to be its Achilles' heel. Weird settings take priority in ways that aren't documented; different settings in one area lead to changes in default settings somewhere else. ReactJS server components played a role.

In other words: I sense that while incredibly useful, there might be more to come.

It ain't easy for them. V16 was a rewrite which was supposedly API stable; I am not sure about that.

_pdp_ 16 hours ago

> OAuth trust relationship cascaded into a platform-wide exposure

> The CEO publicly attributed the attacker's unusual velocity to AI

> questions about detection-to-disclosure latency in platform breaches

Typical! The main failures in my mind are:

1. A user account with far too many privileges - possibly many others like it

2. No or limited 2FA, nor any form of zero-trust architecture

3. Bad cybersecurity hygiene

JauntyHatAngle 16 hours ago

Blaming AI is gonna be the security-breach equivalent of blaming DDoS when your website breaks, isn't it?

progbits 14 hours ago

It's the new sophisticated nation state.

ekropotin 10 hours ago

The idea of blaming something you can choose not to use is quite strange.

anematode 15 hours ago

That part of his tweet made me laugh out loud. I don't understand who it's directed toward.

xienze 14 hours ago

I think there’s a lot of truth to “the AI did it” though. We’re encouraging the same people who get tricked by “attached is your invoice” emails to run agent harnesses that have control of your desktop. I think there’s gonna be a lot of AI-powered exploits in the future.

oasisbob 13 hours ago

Some of the details in this report, like the timeline beginning in 2024-2025, haven't been widely reported?

Anyone know where these dates are being sourced from? eg,

> Late 2024 – Early 2025: Attacker pivots from Context.ai OAuth access to a Vercel employee's Google Workspace account -- CONFIRMED — Rauch statement

> Early - mid-2025: Internal Vercel systems accessed; customer environment variable enumeration begins -- CONFIRMED — Vercel bulletin

captn3m0 13 hours ago

These are all made up and likely hallucinated.

hungryhobbit 15 hours ago

Why is this same story repeated over and over here?

I get it, it's a big story ... but that doesn't mean it needs N different articles describing the same thing (where N > 1).

jackconsidine 15 hours ago

New information here -- I had no idea, for example, that env enumeration was happening MONTHS before the disclosure, and that's part of why I come to HN.

Would guess that double digit percent of readers have some level of skin in the game with Vercel

thisisauserid 15 hours ago

Maybe this flood is a response to the constant flood of:

"Why do people use Vercel?"

"Because it's cheap* and easy."

*expensive

The_Blade 15 hours ago

i didn't know it was OAuth related. when did that hit the front page here?

in fact, the sparse details had Barbara warming up her vocal cords

pier25 15 hours ago

Funny how the headline tries to spin this as an env vars issue.

By far the biggest issue is being able to access the production environment of millions of customers from a Google Workspace. Only a handful of Vercel employees should be able to do that with 2FA if not 3FA.

jwpapi 14 hours ago

No one should be. Why aren't the environment variables themselves encrypted, with the encryption key stored with your OAuth provider?

progbits 14 hours ago

Vercel's runtime must be able to access the values (so customers' apps can use them). But nobody else should ever be able to. This is typical amateur-hour security, but on the other hand, who was naive enough to expect any better from Vercel?

greenmilk 15 hours ago

To me the biggest (but not only) issue is that blindly connecting sensitive tools to 3rd party services has been normalized. Every time I hear the word "claw" I cringe...

krooj 16 hours ago

Interesting - I wonder if this isn't a case of theft on a refresh token that was minted by a non-confidential 3LO flow w/PKCE. That would explain how a leaked refresh token could then be used to obtain access, but does the Vercel A/S not implement any refresh token reuse detection? i.e.: you see the same R/T more than once, you nuke the entire session b/c it's assumed the R/T was compromised.

pier25 15 hours ago

> The CEO publicly attributed the attacker's unusual velocity to AI

Unusual velocity? Didn't the attacker have the oauth keys for months?

steve1977 15 hours ago

But they got it via Context.ai, so there you have it, it's even in the name!

jdiaz97 15 hours ago

He's just lying tbh, this sounds cool and makes you sound less incompetent

ubershmekel 13 hours ago

I'm building something that isn't necessarily more secure than vercel, but it is self hosted. I think in the future personal vps family clouds are going to be a lot more common because of these cloud-level attacks and costs.

jwpapi 14 hours ago

What are these non-sensitive variables? Could they only be the NEXT_PUBLIC ones? Otherwise I haven't seen any difference.

Or is it the "sensitive" flag they ask you about in the CLI? That would be crazy. That would mean that if you decide not to mark them as sensitive, they don't store them encrypted???

donglong 14 hours ago

those are environment variables that the frontend can consume, hence the public prefix

IshKebab 3 hours ago

Environment variables are one of the most misused features of modern Unix. Storing secrets in them is insane, despite what the 12 factor people think.

throwaway27448 16 hours ago

Do any services use vercel?

drusepth 16 hours ago

It's a really common platform for vibe coded sites, as I understand it.

raw_anon_1111 14 hours ago

I used v0 for a vibe coded internal admin app.

*BUT* I downloaded the source code from Vercel’s site, built and deployed in a Docker container (I never download random npm packages to my local computer), deployed the Docker container to Lambda (choose your Docker deployment platform. They are a dime a dozen), had a tightly scoped IAM role attached to the Lambda and my secrets were in Secret Manager.

My deployment also had a placeholder for the secrets when it was deployed and they were never in my repo and purposefully had to be manually configured.

I would never trust something like Vercel for hosting. I’m not saying go all in on a major cloud provider. Get your own cheap VPS if that’s all you need and take responsibility for your own security posture the best you can.

jdw64 16 hours ago

First of all, it is often used in Korea.

antonvs 16 hours ago

Small startups often use it but typically outgrow it quickly unless they remain small and simple.

vaguemit 17 hours ago

I recently went to BreachForums and the space was filled with this

akanet 15 hours ago

This article is just overly wordy (probably AI) restatements of essentially what Vercel has already publicly disclosed.

phoenixranger 15 hours ago

sad state of all sorts of media lately

joemazerino 7 hours ago

How did the Roblox cheat pass EDR?

semiquaver 15 hours ago

I’m sure this has been said before but the new part of me is that the initial breach happened 22 months ago and has been sitting undetected that whole time. This really looks quite bad for vercel’s security posture.

rvz 9 hours ago

If I were to interview someone and I see that they use Vercel, I'd immediately reject them.

OAuth is another flawed standard, as I said before, and this attack clearly shows that.

pphysch 17 hours ago

Security-by-obfuscation is ridiculed but I'm a firm believer that preventing yourself from getting owned when someone is able to type 3 letters `env` is a worthy layer of defense. Even if those same secrets are unencrypted somewhere else on the same system, at least make them spend a bunch of time crawling through files and such.

Quarrelsome 16 hours ago

It's ridiculed because it's no protection on its own when an attacker is motivated. It's fine to add as an additional layer, though, if you want to make your space mildly custom to protect against broader attacks.

I don't see how it's necessarily relevant to this attack, though. These guys were storing creds in the clear and assuming actors within their network were "safe", weren't they?

pphysch 16 hours ago

TFA cites "env var enumeration", likely implying someone got somewhere they shouldn't and typed 3 characters, as the critical attack that led to customers getting compromised.

My point is sensitive secrets should literally never be exported into the process environment, they should be pulled directly into application memory from a file or secrets manager.

It would still be a bad compromise either way, but you have a fighting chance of limiting the blast radius if you aren't serving secrets to attackers on an env platter, which could be the first three characters they type once establishing access.

forrestthewoods 12 hours ago

I hate environment variables. I hate them so so so much. I can’t think of a single time I would prefer an envvar to a config file.

They’re somewhat necessary when dealing with Docker. But I also hate Docker. So it’s not surprising when one bad design pattern leads to another.

I suppose maybe envvars make sense when dealing with secrets? I'm not sure. I don't do any webdev, so I'm not sure what the least-bad solution is there.