Glossary

AI Red-Teaming

Adversarial testing of AI systems to identify safety, security, and robustness failures before production.

Context and detail

Methodology. Tools. When to engage external red teams.

Related terms

  • Adversarial Testing (AI) — Systematic testing of an AI system against attacks, edge cases, and failure modes.
  • OWASP LLM Top 10 — OWASP's catalog of the top 10 risks for LLM applications. Updated annually. The most-cited LLM security framework.

See how ai red-teaming maps to your AI posture.

The free AI Posture Check produces a per-dimension score and maps your gaps to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.

Take the AI Posture Check