In February 2026, a Lumma Stealer malware infection at Context.ai, a third-party AI analytics vendor, set off a supply chain compromise that ultimately reached Vercel, one of the most widely used cloud deployment platforms for front-end and serverless applications. By April 2026, the attacker had leveraged stolen OAuth tokens to access Vercel’s internal systems and enumerate customer environment variables, potentially exposing credentials for database connections, cloud services, payment processors, and AI APIs.
This incident is not isolated. Within the same three-week window spanning March–April 2026, separate supply chain attacks targeting LiteLLM (via PyPI) and Axios (via npm) converged on the same target: developer-stored credentials. Together, these incidents point to a structural vulnerability in the modern software delivery ecosystem.
Severity: Critical
Incident At A Glance
- Incident Start: ~February 2026 (Lumma Stealer infection at Context.ai)
- Disclosed: April 19, 2026 (Vercel security bulletin + CEO thread on X)
- Dwell Time: ~2 months
- Initial Vector: Lumma Stealer malware via Roblox exploit script download
- Pivot Method: Compromised Context.ai OAuth token → Vercel employee Google Workspace
- Assets Exposed: Non-sensitive environment variables across a limited subset of customer projects
- Affected Platform: Vercel (cloud deployment/hosting for front-end and serverless apps)
- Unverified Claim: ShinyHunters-affiliated actor alleges possession of Vercel data on BreachForums
- Status: Under active investigation; customer notifications & credential rotation guidance issued
Attack Chain
Stage 1 – Third-Party OAuth Compromise (T1199)
- A Context.ai employee downloaded Roblox game exploit scripts in approximately February 2026. These scripts delivered Lumma Stealer malware, which exfiltrated corporate credentials, session tokens, and OAuth tokens – including tokens for users of Context AI Office Suite, a self-serve consumer product launched in June 2025.
- The attacker then accessed Context.ai’s AWS environment and extracted OAuth tokens associated with Vercel employees who had previously authorized Context.ai’s Google Workspace OAuth application.
Stage 2 – Workspace Account Takeover (T1550.001)
- Using the compromised OAuth application’s access, the attacker pivoted into a Vercel employee’s Google Workspace account. This provided access to email, Google Drive (internal documents, runbooks, infrastructure notes), calendar data, and other OAuth-connected services downstream.
Stage 3 – Internal System Access (T1078)
- From the compromised Workspace account, the attacker escalated into Vercel’s internal systems. Vercel CEO Guillermo Rauch described this as “a series of maneuvers” from the compromised colleague’s account.
Stage 4 – Environment Variable Enumeration (T1552.001)
- With internal access established, the attacker enumerated customer project environment variables. Vercel’s environment variable model at the time of the breach stored non-sensitive variables without additional encryption at rest, making them readable via internal API access.
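As a defensive illustration of this exposure model, the sketch below audits a project’s environment variable records and flags credential-looking variables that are not stored with the sensitive type. The record shape (`key` and `type` fields, with protected variables carrying `type: "sensitive"`) is an assumption modeled on Vercel’s project env API response, not a verified schema:

```python
import re

# Heuristic: env var names that typically hold credentials (not exhaustive).
SECRET_KEY_PATTERN = re.compile(
    r"(KEY|TOKEN|SECRET|PASSWORD|PASSWD|CREDENTIAL|DSN|DATABASE_URL)",
    re.IGNORECASE,
)

def flag_unprotected_secrets(env_vars):
    """Given a list of env var records (e.g. parsed from a project env API
    response), return the keys that look like credentials but are not
    stored with the assumed 'sensitive' type."""
    flagged = []
    for var in env_vars:
        looks_secret = bool(SECRET_KEY_PATTERN.search(var.get("key", "")))
        is_sensitive = var.get("type") == "sensitive"  # assumed type name
        if looks_secret and not is_sensitive:
            flagged.append(var["key"])
    return flagged
```

Run against every project, anything this flags is a candidate for the sensitive-flag migration described in the recommendations below.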
Stage 5 – Downstream Credential Abuse (T1078.004)
- At least one exposed credential, an OpenAI API key, was detected in the wild via OpenAI’s automated secret scanning system. A Vercel customer, Andrey Zagoruiko, reported receiving a leaked-key notification from OpenAI on April 10, 2026, nine days before Vercel’s public disclosure.
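Secret scanning systems like the one that caught this key work by matching well-known credential formats. A minimal sketch of the idea, using publicly documented prefixes (`sk-` for OpenAI keys, `AKIA` for AWS access key IDs, `sk_live_` for Stripe live keys); the exact patterns providers use internally are not public, so treat these regexes as illustrative:

```python
import re

# Well-known credential prefixes; extend with formats relevant to your stack.
SECRET_FORMATS = {
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe_live_key": re.compile(r"\bsk_live_[A-Za-z0-9]{16,}\b"),
}

def scan_for_secrets(text):
    """Return (format_name, match) pairs for every known key format found."""
    hits = []
    for name, pattern in SECRET_FORMATS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits
```

Running a scanner like this over your own repositories, build logs, and env var exports can surface exposed keys before a third party does.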
Indicators Of Compromise
- OAuth App:
- 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
Recommendations
- Rotate all Vercel environment variables not marked as sensitive, regardless of whether you believe they were accessed.
- Enable the sensitive flag on all environment variables containing credentials, tokens, keys, or secrets. Audit every project.
- Search Google Workspace Admin → Reports → Audit and Investigation → OAuth Log Events for the IOC client ID. Date range: February 2026 to present. Any hit warrants immediate revocation and incident investigation.
- Transition from platform-stored environment variables to dedicated secrets managers (e.g., HashiCorp Vault, AWS Secrets Manager).
- Query AWS CloudTrail (focusing on sts, iam, and s3 event sources) for API calls using known Vercel-stored access keys from unexpected IP ranges or with suspicious user agents such as python-requests, curl, or Go-http-client.
- Query GCP Audit Logs for protoPayload.authenticationInfo.principalEmail for service accounts whose keys were stored in Vercel. Filter protoPayload.requestMetadata.callerIp against your known ranges. Look for protoPayload.methodName containing storage.objects.get, compute.instances.list, or iam.serviceAccountKeys.create from unexpected sources.
- Query Azure Activity Logs and filter on caller matching any application ID or service principal whose credentials were in Vercel env vars. Flag callerIpAddress outside expected ranges.
- Treat OAuth grants as a vendor risk function rather than a developer self-service task. Perform periodic reviews and re-authorizations of all granted applications.
- Check dashboards for services like Stripe, OpenAI, or SendGrid for key usage from unrecognized IPs or during windows when your application was inactive.
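The Workspace OAuth hunt above can also be run offline against an export of OAuth log events. The sketch below assumes the nested `actor`/`events`/`parameters` shape of the Admin SDK Reports API token activity; adjust field names to match your export format:

```python
IOC_CLIENT_ID = "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"

def find_ioc_grants(events, ioc_client_id=IOC_CLIENT_ID):
    """Scan exported Workspace OAuth log events (list of dicts) for grants
    to the IOC client ID and return the affected actor emails."""
    hits = []
    for event in events:
        for record in event.get("events", []):
            for param in record.get("parameters", []):
                if param.get("name") == "client_id" and param.get("value") == ioc_client_id:
                    hits.append(event.get("actor", {}).get("email", "<unknown>"))
    return hits
```

Every email this returns corresponds to an account whose OAuth grant should be revoked and whose activity since February 2026 should be reviewed.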
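The CloudTrail hunt above can be sketched as a local filter over exported event records. Field names follow CloudTrail’s documented JSON schema; the IP prefix allowlist is a placeholder you would replace with your own egress ranges:

```python
SUSPICIOUS_USER_AGENTS = ("python-requests", "curl", "go-http-client")
KNOWN_IP_PREFIXES = ("203.0.113.",)  # placeholder: substitute your egress ranges

def flag_cloudtrail_records(records, compromised_key_ids):
    """Flag CloudTrail records made with a potentially exposed access key
    from an unexpected source IP or with a scripted user agent."""
    flagged = []
    for r in records:
        key_id = r.get("userIdentity", {}).get("accessKeyId")
        if key_id not in compromised_key_ids:
            continue
        ip = r.get("sourceIPAddress", "")
        ua = r.get("userAgent", "").lower()
        unexpected_ip = not ip.startswith(KNOWN_IP_PREFIXES)
        scripted_ua = any(s in ua for s in SUSPICIOUS_USER_AGENTS)
        if unexpected_ip or scripted_ua:
            flagged.append((key_id, r.get("eventName"), ip, r.get("userAgent")))
    return flagged
```

The same key-ID/IP/user-agent triage logic carries over to the GCP Audit Log and Azure Activity Log queries above, with the field names swapped for each provider’s schema.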
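As a sketch of the secrets-manager migration, the helper below fetches a credential at runtime via AWS Secrets Manager’s `get_secret_value` call instead of reading it from a platform env var. The client is injected as a parameter so the function can be exercised without live AWS credentials; in production you would pass `boto3.client("secretsmanager")`:

```python
import json

def get_secret(client, secret_id):
    """Fetch and parse a JSON secret at runtime from AWS Secrets Manager.

    `client` is expected to expose the boto3 secretsmanager interface;
    injecting it keeps the function testable with a stub."""
    resp = client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])
```

Because the secret never lands in a stored env var, rotating it requires no redeploy, and a platform-level enumeration like the one in this incident has nothing to read.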
Sources:
- https://www.trendmicro.com/en_us/research/26/d/vercel-breach-oauth-supply-chain.html
- https://vercel.com/kb/bulletin/vercel-april-2026-security-incident