Security · Supply Chain · AI · April 2026

The Vercel Breach: On Trust, Velocity, and the Tools We Let In

A Roblox cheat download at a third-party AI company triggered a $2M ransom demand against Vercel. Here’s the full attack chain and what it teaches us about how fast we’re granting access to AI tools. Updated April 26, 2026 with new investigation findings.

$2M ransom demanded · 3 hops in the chain · 1 game cheat download
Update: April 26, 2026

Further reporting and Vercel’s continued investigation have added significant context since this post was first published. The scope is larger than initially disclosed, the ShinyHunters attribution is now disputed, and new details have emerged about a second OAuth grant at Context.ai. Jump to the update section.

The story of how Vercel got breached in April 2026 starts, improbably, with a Roblox cheat.

Somewhere around February of this year, an employee at Context.ai — a company that makes an AI productivity suite — was looking for a game exploit script online. They found something, downloaded it, ran it. What they actually ran was Lumma Stealer, a well-documented infostealer that quietly harvested every credential on the machine: Google Workspace passwords, API keys, Supabase tokens, Datadog credentials, and more.

From that single download, a chain of events unfolded that eventually reached Vercel’s customer environments and ended with a $2 million ransom demand on BreachForums.

What actually happened

By March 2026, the attacker — using credentials stolen from the Context.ai employee — had gotten into Context.ai’s AWS environment. That gave them access to OAuth tokens belonging to Context.ai’s users. Among those users: a Vercel employee who had connected their enterprise Google Workspace account to Context.ai’s browser extension and granted it “Allow All” permissions.

That’s all it took. With that OAuth token, the attacker had legitimate, authenticated access to the Vercel employee’s Google Workspace account. And from there, they moved laterally into Vercel’s internal environments.

Vercel moved quickly. They engaged Mandiant, worked with GitHub, Microsoft, npm, and Socket to confirm that no packages had been tampered with. The npm supply chain remained clean. Sensitive-marked environment variables were not accessed.

The chain of trust problem

None of the individual steps in this attack chain look particularly exotic. And yet, together, they bypassed the defenses of a sophisticated, well-resourced company.

That’s because security isn’t just about the strength of individual links. It’s about the length of the chain.

Jaime Blasco, CTO of Nudge Security, put it simply: “OAuth is the new lateral movement. Until the industry treats OAuth tokens as high-value credentials, we’re going to keep reading the same breach writeup with the vendor names swapped out.”

“Be more careful” doesn’t scale

The speed at which we’re granting access to AI tools has outrun the organizational processes we have for governing that access. Most companies don’t have an OAuth inventory. Most don’t have a process for reviewing new AI tool integrations. They have a security team that writes policies and an engineering team that moves fast, and the gap between those two things is exactly where attacks like this one live.
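What an OAuth inventory could look like is worth making concrete. The sketch below is a minimal, illustrative audit over a hand-built list of grants — the `OAuthGrant` record, the app names, and the `flag_risky_grants` helper are all hypothetical, not any vendor's API. The scope URIs are real Google OAuth scopes; the idea is simply that broad scopes ("Allow All"-style grants) and long-unused grants are the ones worth reviewing first.

```python
from dataclasses import dataclass
from datetime import date

# Scopes that effectively hand over a whole Google Workspace account.
# Illustrative, not exhaustive — these three are real Google OAuth scope URIs.
BROAD_SCOPES = {
    "https://mail.google.com/",                   # full Gmail access
    "https://www.googleapis.com/auth/drive",      # full Drive access
    "https://www.googleapis.com/auth/calendar",   # full Calendar access
}

@dataclass
class OAuthGrant:
    app: str
    scopes: set[str]
    last_used: date

def flag_risky_grants(grants, stale_after_days=90, today=None):
    """Return (app, broad_scopes, is_stale) for grants that carry broad
    scopes or haven't been used within the staleness window."""
    today = today or date.today()
    flagged = []
    for g in grants:
        broad = g.scopes & BROAD_SCOPES
        stale = (today - g.last_used).days > stale_after_days
        if broad or stale:
            flagged.append((g.app, sorted(broad), stale))
    return flagged

# Hypothetical inventory for illustration.
grants = [
    OAuthGrant("browser-extension", {"https://mail.google.com/",
                                     "https://www.googleapis.com/auth/drive"},
               date(2026, 2, 1)),
    OAuthGrant("calendar-widget",
               {"https://www.googleapis.com/auth/calendar.readonly"},
               date(2026, 4, 20)),
]

for app, broad, stale in flag_risky_grants(grants, today=date(2026, 4, 26)):
    print(f"{app}: broad scopes={broad}, stale={stale}")
```

Even a spreadsheet-grade version of this — who granted what, to which app, with which scopes, last used when — would have made the "Allow All" grant in this incident visible to someone before an attacker found it.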

What you can actually do

  • Know what OAuth access you’ve granted. Most identity providers have a page somewhere that shows every third-party app with access to your account. Review it. Revoke anything you don’t actively use or recognize.
  • Mark secrets as sensitive. Vercel’s own platform distinguished between sensitive and non-sensitive environment variables. The ones marked sensitive weren’t exposed.
  • Think about blast radius, not just prevention. Credentials scoped tightly to what they need, rotation policies that limit the lifetime of any given secret — these don’t stop the breach, but they narrow its consequences.
  • Treat AI tool permissions like production credentials. An AI tool with access to your enterprise Google Workspace has access to your email, your Drive, your calendar.
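The rotation point in the list above is the easiest to automate. Here's a minimal sketch, assuming a hand-maintained map of secret names to their last rotation time — the `SECRETS` inventory and `overdue_for_rotation` helper are hypothetical, and in practice you'd pull rotation timestamps from your secrets manager rather than hardcode them:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: secret name -> when it was last rotated.
SECRETS = {
    "SUPABASE_SERVICE_KEY": datetime(2025, 11, 3, tzinfo=timezone.utc),
    "DATADOG_API_KEY":      datetime(2026, 4, 1, tzinfo=timezone.utc),
}

def overdue_for_rotation(secrets, max_age=timedelta(days=90), now=None):
    """Return names of secrets older than the rotation window.
    A stolen credential that dies in 90 days is a smaller prize
    than one that lives forever — that's the blast-radius argument."""
    now = now or datetime.now(timezone.utc)
    return sorted(name for name, rotated in secrets.items()
                  if now - rotated > max_age)

print(overdue_for_rotation(SECRETS,
                           now=datetime(2026, 4, 26, tzinfo=timezone.utc)))
```

None of this prevents an infostealer from grabbing a live credential. It just guarantees that whatever was grabbed stops working on a schedule you chose, not one the attacker chose.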

The thing about moving fast

Vercel wasn’t compromised because they were careless. They were compromised because they were part of a long trust chain, and one link in that chain — a game cheat download, on a machine that had nothing to do with Vercel — turned out to be load-bearing.

Update: April 26, 2026

Vercel’s security bulletin was updated six times between April 19 and 23 as the investigation progressed, and further reporting has surfaced details that weren’t in the initial disclosure.

Scope expanded during the investigation. Vercel identified a “small number of additional accounts” beyond those contacted in the first notification. A separate small group was also found to have compromise indicators that investigators determined were unconnected to the April incident. Both groups have been contacted. (Vercel security bulletin)

The ShinyHunters attribution is disputed. Most early coverage named ShinyHunters as the threat actor. Google Threat Intelligence assessed the group claiming that name as “likely an imposter attempting to use an established name.” The actual actor remains publicly unattributed. Vercel described the attacker as demonstrating “operational velocity and in-depth understanding of Vercel’s product API surface” (Vercel security bulletin), language that points toward something more targeted than opportunistic credential dumping.

A second OAuth grant surfaced at Context.ai. On March 27, Google removed Context.ai’s Chrome extension after discovering it had a second embedded grant for Google Drive files, separate from the Workspace access already known. (r/sysadmin thread) The removal happened roughly four weeks after the initial Lumma Stealer infection and several weeks before Vercel went public, suggesting the scope of the access was still being mapped at that point.

The product response went further than a single settings change. Beyond defaulting environment variables to sensitive, Vercel shipped team-wide variable management, security overview tools, and enhanced audit logging. Worth knowing if you’re evaluating whether the platform’s security posture has materially changed. (Vercel security bulletin)

Vercel’s CEO attributed attacker speed to AI-assisted tooling. Guillermo Rauch said this publicly. It is hard to verify independently, and CEOs have reasons to frame breaches in ways that point outward. But it tracks with what security researchers are observing more broadly: the same productivity acceleration that defenders use is increasingly being applied on the offensive side.

The core story hasn’t changed. The entry point is still a game cheat download, and the chain-of-trust problem is the same. But the disputed attribution and the expanded scope are worth tracking if you are following this incident.


We think about these questions a lot at VM Farms — not because we have them solved, but because managed infrastructure is, in a lot of ways, a bet on where trust is best placed. If you’re working through your own threat model or want to compare notes on what we’ve learned, we’re easy to reach.