OWASP Top 10 GenAI Risks Security Teams Should Know in 2025


Generative AI (GenAI) continues to transform industries, reshaping everything from customer service to software development. However, as organizations race to adopt Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) based systems, the security landscape is rapidly shifting.

Recognizing this, OWASP released its 2025 Top 10 GenAI Security Risks, reflecting the evolving threat landscape in real-world LLM deployments. This list addresses not just prompt injections or hallucinations but also embedding abuse, RAG exploitation, and prompt leakage, all unique to GenAI systems.

The Rise of GenAI Across Enterprises

Modern enterprises are deeply integrating GenAI into operations, such as:

  • Intelligent assistants for customer support
  • RAG-based tools for document summarization
  • Internal copilots for HR, sales, and legal

These systems can now make decisions, interact with APIs, and store dynamic memory, widening the attack surface far beyond traditional applications. Without clear boundaries or human review, security teams face a growing need to adapt threat models to GenAI contexts.

Why OWASP GenAI Risks Are Different

GenAI systems differ from traditional apps in key ways:

  • Language-driven logic: prompt-based behavior, not fixed code
  • Non-deterministic responses: hallucination and uncertainty
  • RAG and vector store exposure: retrieval layers add new risks
  • Model access and cost abuse: unbounded consumption risk

The OWASP GenAI Top 10 provides the foundational lens to effectively understand and mitigate these risks.

OWASP Top 10 GenAI Risks (2025)

Rank | ID    | Risk Name
-----|-------|----------------------------------
1    | LLM01 | Prompt Injection
2    | LLM02 | Sensitive Information Disclosure
3    | LLM03 | Supply Chain Vulnerabilities
4    | LLM04 | Data & Model Poisoning
5    | LLM05 | Improper Output Handling
6    | LLM06 | Excessive Agency
7    | LLM07 | System Prompt Leakage
8    | LLM08 | Vector & Embedding Weaknesses
9    | LLM09 | Misinformation
10   | LLM10 | Unbounded Consumption

Deep Dive Into the 2025 OWASP GenAI Risks

LLM01: Prompt Injection

Attackers manipulate model behavior by injecting hostile content into user prompts or context. This remains the top risk.
Mitigation: Use strict system prompts, escape user inputs, and apply prompt isolation.
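
A minimal sketch of prompt isolation combined with a pattern-based input check, assuming a chat-style messages API; the regex patterns and function names are illustrative and are not a complete defense on their own:

```python
import re

# Illustrative injection phrasings only; real deployments need broader, continually updated checks.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard the system prompt",
    r"you are now (in )?developer mode",
]

def is_suspicious(user_input: str) -> bool:
    """Flag user input that matches common prompt-injection phrasings."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Keep system instructions and user content in separate roles (prompt isolation)."""
    if is_suspicious(user_input):
        raise ValueError("Potential prompt injection detected; route to review.")
    return [
        {"role": "system", "content": system_prompt},
        # User text is passed as data, never concatenated into the system prompt.
        {"role": "user", "content": user_input},
    ]
```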

LLM02: Sensitive Information Disclosure

LLMs may unintentionally expose secrets seen during training or via retrieval systems.
Mitigation: Redact sensitive training data, apply differential privacy, and enforce access control.
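
One way to reduce disclosure risk is to redact obvious secrets before text is indexed, logged, or sent to the model. A rough sketch with a few hypothetical regex rules; real deployments would use a dedicated DLP or PII-detection service:

```python
import re

# Hypothetical rules for a few common secret formats; tune for your environment.
REDACTION_RULES = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches with a placeholder before the text reaches the model or logs."""
    for label, pattern in REDACTION_RULES.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

print(redact("Contact jane@example.com, key AKIA1234567890ABCDEF"))
```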

LLM03: Supply Chain Vulnerabilities

Malicious models, plugins, datasets, or open-source tools may compromise your entire GenAI stack.
Mitigation: Scan dependencies, verify sources, and restrict external plugin execution.
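
For model artifacts specifically, a simple control is to pin and verify checksums before loading anything. A sketch assuming a local allowlist of vetted files; the file name and digest are placeholders:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of vetted model artifacts and their expected SHA-256 digests.
TRUSTED_ARTIFACTS = {
    "ggml-model-q4.bin": "replace-with-known-good-sha256-digest",
}

def verify_artifact(path: Path) -> bool:
    """Return True only if the file exists and its digest matches the pinned value."""
    if not path.is_file():
        return False
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return TRUSTED_ARTIFACTS.get(path.name) == digest

model_path = Path("models/ggml-model-q4.bin")
if not verify_artifact(model_path):
    print(f"Refusing to load untrusted or missing artifact: {model_path}")
```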

LLM04: Data & Model Poisoning

Adversaries tamper with training or fine-tuning data, inserting backdoors or toxic behavior.
Mitigation: Audit data pipelines, monitor behavior, and use clean-label techniques.
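
A lightweight audit pass over fine-tuning data can catch crude poisoning attempts before training. This sketch assumes JSONL records with hypothetical prompt/completion fields and a small, illustrative list of trigger markers:

```python
import json

# Hypothetical trigger phrases sometimes used to plant backdoors in instruction data.
SUSPICIOUS_MARKERS = ["always approve", "ignore safety", "http://", "begin hidden"]

def audit_finetune_records(path: str) -> list[int]:
    """Return indices of JSONL fine-tuning records that warrant manual review."""
    flagged = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            record = json.loads(line)
            text = (record.get("prompt", "") + " " + record.get("completion", "")).lower()
            if any(marker in text for marker in SUSPICIOUS_MARKERS):
                flagged.append(i)
    return flagged
```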

LLM05: Improper Output Handling

LLMs can generate misleading, biased, or even malicious content, which may be consumed blindly.
Mitigation: Sanitize all outputs, apply response validation, and avoid direct execution of outputs.
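
At a minimum, model output should be treated as untrusted input: escape it before rendering and never pass it to eval, exec, or a shell. A short sketch for the HTML case:

```python
import html

def render_model_output(raw_output: str) -> str:
    """Escape model output before inserting it into a web page (never exec/eval it)."""
    return html.escape(raw_output)

untrusted = '<img src=x onerror="alert(1)"> DROP TABLE users;'
print(render_model_output(untrusted))
```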

LLM06: Excessive Agency

Systems increasingly grant GenAI autonomous control over tasks like approvals or system changes, often without checks.
Mitigation: Keep a human in the loop, especially for high-risk operations.
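
One pattern for keeping a human in the loop is to gate model-requested tool calls by risk tier. A sketch with hypothetical tool names and a placeholder dispatch step:

```python
# Hypothetical tool registry; names and risk tiers are illustrative.
HIGH_RISK_TOOLS = {"approve_invoice", "delete_record", "change_firewall_rule"}

def execute_tool(name: str, args: dict, approved_by: str | None = None) -> dict:
    """Run a model-requested tool, but require explicit human sign-off for risky actions."""
    if name in HIGH_RISK_TOOLS and not approved_by:
        return {"status": "pending_approval", "tool": name, "args": args}
    # ... dispatch to the real tool implementation here ...
    return {"status": "executed", "tool": name, "approved_by": approved_by}

print(execute_tool("approve_invoice", {"invoice_id": "INV-123"}))
```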

LLM07: System Prompt Leakage

Attackers uncover hidden system instructions (e.g., via jailbreak prompts), allowing them to override controls.
Mitigation: Obfuscate system prompts, segment user and system instructions, and monitor prompt chain leakage.
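
Output-side monitoring can catch many leaks: if a response echoes long verbatim slices of the system prompt, block or flag it. A crude sketch; the window size and example prompt are arbitrary:

```python
def leaks_system_prompt(response: str, system_prompt: str, window: int = 40) -> bool:
    """Crude check: does the response echo a long verbatim slice of the system prompt?"""
    sp = " ".join(system_prompt.split()).lower()
    resp = " ".join(response.split()).lower()
    return any(sp[i:i + window] in resp for i in range(0, max(1, len(sp) - window), 10))

SYSTEM_PROMPT = "You are the internal HR assistant. Never reveal salary bands or employee data."
reply = "Sure! My instructions say: You are the internal HR assistant. Never reveal salary bands"
print(leaks_system_prompt(reply, SYSTEM_PROMPT))  # True -> block or flag this response
```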

LLM08: Vector & Embedding Weaknesses

RAG systems use vector databases for context. If these are poisoned or manipulated, attackers can inject malicious data.
Mitigation: Monitor vector store access, validate indexed content, and authenticate embedding pipelines.
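
Admission control on the ingestion side helps keep poisoned content out of the index. A sketch assuming documents carry a source_url field and an internal allowlist of approved hosts (both hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of internal sources permitted into the RAG index.
ALLOWED_SOURCES = {"confluence.internal.example.com", "policies.example.com"}

def admit_to_index(doc: dict) -> bool:
    """Only index documents from approved sources that pass basic content checks."""
    host = urlparse(doc.get("source_url", "")).hostname or ""
    if host not in ALLOWED_SOURCES:
        return False
    text = doc.get("text", "")
    # Reject empty chunks and chunks that look like injected instructions.
    if not text.strip() or "ignore previous instructions" in text.lower():
        return False
    return True
```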

LLM09: Misinformation

LLMs may confidently generate false or biased content. Overreliance can lead to reputational or operational damage.
Mitigation: Flag low-confidence outputs, add citations in RAG, and encourage human review.
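
A simple guardrail is to route ungrounded answers to human review. This sketch assumes answers embed [source: ...] citation tags and uses a crude term-overlap heuristic as a grounding signal; both conventions are assumptions, not a standard API:

```python
def needs_review(answer: str, retrieved_chunks: list[str]) -> bool:
    """Flag RAG answers that cite nothing or overlap little with the retrieved context."""
    if "[source:" not in answer.lower():
        return True  # no citation tag at all
    context = " ".join(retrieved_chunks).lower()
    answer_terms = set(answer.lower().split())
    overlap = sum(1 for term in answer_terms if term in context)
    return overlap / max(1, len(answer_terms)) < 0.3  # crude grounding threshold
```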

LLM10: Unbounded Consumption

LLMs can be abused to cause resource exhaustion or cost explosion via complex prompts or infinite loops.
Mitigation: Rate limit inputs, cap token usage, and monitor prompt cost metrics.
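
Basic per-user rate limiting plus a per-request token cap goes a long way. A sketch with illustrative limits; in production this state would live in Redis or an API gateway rather than process memory:

```python
import time
from collections import defaultdict, deque

MAX_REQUESTS_PER_MINUTE = 20     # illustrative limits; tune per tenant and model cost
MAX_TOKENS_PER_REQUEST = 2_000

_request_log: dict[str, deque] = defaultdict(deque)

def admit_request(user_id: str, estimated_tokens: int) -> bool:
    """Reject requests that exceed a per-user rate limit or a per-request token cap."""
    if estimated_tokens > MAX_TOKENS_PER_REQUEST:
        return False
    now = time.time()
    window = _request_log[user_id]
    while window and now - window[0] > 60:   # drop entries older than one minute
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    return True
```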

Key Security Gaps in GenAI Deployments

  • Lack of audit logs for prompt/response history (a minimal logging sketch follows this list)
  • No monitoring for vector store manipulation
  • Blind trust in third-party plugins
  • Failure to verify LLM-generated actions
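
For the first gap, even a minimal append-only log of prompts and responses makes incident reconstruction possible. A sketch writing JSONL records locally; in practice the log should be centralized and the fields redacted per LLM02:

```python
import json
import time
import uuid

def log_interaction(user_id: str, prompt: str, response: str,
                    path: str = "genai_audit.jsonl") -> None:
    """Append a prompt/response record so incidents can be reconstructed later."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "prompt": prompt,       # redact sensitive fields before logging in production
        "response": response,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```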

Security Recommendations for GenAI

  • Treat GenAI as a privileged component in your architecture
  • Build prompt firewalls and use static/dynamic filtering
  • Monitor embedding pipelines and data provenance
  • Establish playbooks for GenAI red teaming

Conclusion

The GenAI revolution has brought new capabilities and unprecedented risks. OWASP’s 2025 Top 10 list helps teams proactively secure LLM-based systems before attackers exploit them.
As GenAI becomes more embedded and autonomous, security cannot be an afterthought; it must be a foundational layer.

Enjoyed reading this blog? Stay updated with our latest exclusive content by following us on Twitter and LinkedIn.
