Glossary

Hallucination (LLM)

An LLM's generation of plausible-sounding but factually incorrect or fabricated content, stated with the same confidence as accurate output.

Context and detail

Why it happens: LLMs are trained to predict likely next tokens, not to verify facts, so fluent but unsupported claims emerge naturally, especially on topics thin in the training data. When it's a security risk: hallucinations are most dangerous where outputs carry legal or medical weight, or where downstream automation consumes model output without human review. Mitigation approaches include grounding answers in retrieved sources (retrieval-augmented generation), verifying citations against those sources, constraining output formats, and keeping a human in the loop for high-stakes decisions.
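As a minimal sketch of the grounding idea, the toy function below flags answer sentences whose word overlap with every retrieved source falls below a threshold. The function name, the split-on-periods sentence logic, and the 0.5 threshold are all illustrative assumptions, not a production hallucination detector.

```python
# Hypothetical sketch: flag sentences in a model answer that share too few
# words with any retrieved source. Names and threshold are illustrative.

def ungrounded_sentences(answer: str, sources: list[str],
                         min_overlap: float = 0.5) -> list[str]:
    """Return answer sentences whose word overlap with every source is below min_overlap."""
    flagged = []
    for sentence in (s.strip() for s in answer.split(".") if s.strip()):
        words = set(sentence.lower().split())
        # A sentence counts as grounded if some source shares enough of its words.
        grounded = any(
            len(words & set(src.lower().split())) / len(words) >= min_overlap
            for src in sources
        )
        if not grounded:
            flagged.append(sentence)
    return flagged

sources = ["The contract was signed on 3 May 2021 by both parties."]
answer = "The contract was signed on 3 May 2021. It was later voided by the court."
print(ungrounded_sentences(answer, sources))  # → ['It was later voided by the court']
```

Real systems use semantic similarity or entailment models rather than word overlap, but the shape is the same: every claim in the output must be traceable to a source, and anything untraceable is surfaced for review.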

Related terms

  • Sycophancy (LLM) — An LLM's tendency to agree with the user's stated position or assumption rather than provide accurate analysis.
  • OWASP LLM Top 10 — OWASP's catalog of the top 10 risks for LLM applications. Updated periodically, and among the most widely cited LLM security frameworks.

See how hallucination (LLM) maps to your AI posture.

The free AI Posture Check produces a per-dimension score and maps your gaps to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.

Take the AI Posture Check