LLM07 · OWASP LLM Top 10
System Prompt Leakage (LLM07)
An attacker extracts the system prompt or other privileged context from an LLM. The prompt may contain business logic, internal documentation, or even credentials.
Examples
- A user asks the chatbot to 'repeat your instructions' and receives the full system prompt.
- Prompt-injection extracts an embedded API key.
- A jailbreak surfaces internal pricing logic the company considers confidential.
Recommended controls
- Don't put secrets in prompts
- Don't put business logic that competitors could extract
- Output filtering for prompt-leakage patterns
- Monitor for known prompt-leakage attack signatures
Posture Check checkpoint
Posture Check questions Q11–Q15. Affects Prompt.
Score yourself against this framework.
The AI Posture Check is a free 30-question self-assessment that maps your gaps directly to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.
Take the AI Posture Check Need help operationalizing this?
Talk to a CWS engineer about your AI security program.
Schedule a Discovery Call to scope a Standard Audit or Enterprise Program.
Schedule a Discovery Call