# OWASP Top 10 for Agentic Applications

If your product includes copilots, AI agents, or tool-using LLM workflows, these are the main risks we test during a pentest.

These checks extend the classic web and API coverage in [What Issues Can Aikido Pentest Find?](https://help.aikido.dev/pentests/coverage-and-findings/what-issues-can-aikido-pentest-find).

### 1. Prompt injection

We test whether untrusted input can override system instructions or steer the agent into unsafe behavior.

That includes direct prompts and indirect payloads from files, tickets, emails, docs, websites, or retrieved content.

### 2. Sensitive information disclosure

We test whether the agent can expose system prompts, memory, hidden chain-of-thought-like artifacts, API keys, tokens, PII, or cross-tenant data.

This includes leakage through responses, logs, tools, and downstream integrations.

### 3. Excessive agency

We test whether the agent can take high-impact actions without enough approval, scoping, or policy checks.

Examples include sending data, changing settings, creating users, deleting records, or triggering deployments.

### 4. Insecure tool use

We test the tools connected to the agent.

That includes browser actions, shell execution, code runners, MCP tools, webhooks, and internal or external APIs.

We look for unsafe parameter handling, SSRF-style behavior, and unexpected command or action execution.

### 5. Insecure output handling

We test whether agent output is executed, rendered, or trusted by another system without validation.

Examples include prompts that generate unsafe code, HTML, SQL, shell commands, workflow actions, or template content.

### 6. Memory poisoning

We test whether long-term memory, saved context, or session state can be poisoned to influence future runs.

This matters when agents reuse prior conversations, notes, summaries, or stored preferences.

### 7. Retrieval and knowledge base poisoning

We test whether hostile documents or indexed content can manipulate retrieval-augmented generation flows.

This includes malicious docs, poisoned wiki pages, embedded secrets, and content crafted to steer decisions or leak data.

### 8. Authentication and authorization failures

We test whether the agent can act outside the current user's role, tenant, or session.

This overlaps with classic access control bugs like IDOR and broken access control, but through agent actions and tool calls.

### 9. Resource exhaustion and denial of wallet

We test whether prompts, loops, or tool chains can trigger runaway cost, latency, token usage, or fan-out.

This includes unbounded recursion, repeated retries, and expensive external actions.

### 10. Supply chain and integration risk

We test whether the agent depends on unsafe models, plugins, prompts, connectors, or third-party tools.

This includes compromised MCP servers, weak plugin trust boundaries, and risky external actions.

{% hint style="info" %}
The exact test set depends on your scope. Tool access, memory, approval gates, and reachable actions all affect what can be validated during the pentest.
{% endhint %}

### What this means in practice

For agentic applications, Aikido combines these checks with standard pentest coverage such as SSRF, IDOR, broken access control, RCE, XSS, and business logic abuse.

That matters because agentic systems often turn classic vulnerabilities into higher-impact exploit chains.
