Sensitive Information Disclosure (LLM02)
An LLM reveals sensitive data through its output. The data may come from training data, fine-tuning data, the system prompt, retrieved context (RAG), or another tenant's data if isolation fails.
Examples
- A fine-tuned model surfaces customer PII memorized from training data.
- A system prompt containing API keys is leaked through a clever prompt-injection attack.
- A RAG pipeline returns chunks from a document the requesting user did not have access to.
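The third example can be mitigated at retrieval time by enforcing document-level access control before chunks ever reach the prompt. A minimal sketch follows; the function and field names (`filter_chunks_by_acl`, the `acl` field) are illustrative, not from any specific framework.

```python
# Hypothetical sketch: enforce document ACLs on retrieved chunks before
# they are placed into the model's context. Each chunk carries the ACL
# (list of allowed groups) of its source document.

def filter_chunks_by_acl(chunks, user_groups):
    """Drop any retrieved chunk the requesting user may not read."""
    allowed = []
    for chunk in chunks:
        # Keep the chunk only if the user shares at least one group
        # with the source document's ACL.
        if set(chunk["acl"]) & set(user_groups):
            allowed.append(chunk)
    return allowed

chunks = [
    {"text": "Q3 revenue forecast...", "acl": ["finance"]},
    {"text": "Public product FAQ...", "acl": ["everyone"]},
]
visible = filter_chunks_by_acl(chunks, ["everyone", "engineering"])
# The finance chunk never enters the context for this user.
```

Filtering at retrieval time (rather than relying on the model to withhold information) means an over-permissioned chunk can never be leaked, no matter how the prompt is manipulated.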
Recommended controls
- Data classification before training/fine-tuning/prompt inclusion
- Output filtering and PII detection
- Context isolation between tenants
- Privacy testing including differential privacy where appropriate
- Vendor data-handling contracts
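The output-filtering control can be as simple as pattern-based redaction applied to model responses before they are returned. The sketch below is a minimal illustration only; production deployments would typically use a dedicated PII-detection service rather than hand-written regexes, and the patterns shown are deliberately simplistic.

```python
import re

# Illustrative output filter: redact common PII/secret patterns in a
# model response before it is returned to the user. The pattern set is
# a minimal example, not a complete PII taxonomy.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labeled redaction marker."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

out = redact("Contact jane@example.com, key sk-abcdefghijklmnopqrstuv")
print(out)
```

A filter like this sits between the model and the user, so a response that leaks a key or an email address is caught even when upstream controls (data classification, context isolation) have failed.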
Posture Check checkpoint
Posture Check questions Q6–Q10. Score affects Data dimension.
Score yourself against this framework.
The AI Posture Check is a free 30-question self-assessment that maps your gaps directly to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.
Take the AI Posture Check.
Need help operationalizing this?
Talk to a CWS engineer about your AI security program.
Schedule a Discovery Call to scope a Standard Audit or Enterprise Program.