Glossary

Model Inversion

An attack that recovers training data or sensitive features by repeatedly querying a model and analyzing its outputs, such as predicted labels and confidence scores.

Context and detail

Model inversion is distinct from membership inference: membership inference asks whether a specific record was part of the training set, while inversion attempts to reconstruct the training records or sensitive attributes themselves. The risk is most concrete for small models trained on narrow, sensitive datasets (for example, a classifier with few examples per class, where outputs leak class-representative features); for large models trained on broad corpora, the attack is often more theoretical, though memorization of rare records remains a concern.
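To make the idea concrete, here is a minimal sketch of a Fredrikson-style inversion against a toy logistic-regression model. All data and parameters are synthetic assumptions for illustration: the "attacker" only uses the model's confidence gradient to optimize an input toward high confidence for the target class, recovering a class-representative point that leaks where that class's training data lives in feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensitive" training data (illustrative assumption):
# class 0 centred at (-2, -2), class 1 at (+2, +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Train a tiny logistic regression by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Inversion: starting from a blank input, gradient-ascend
# log P(class 1 | x) to synthesize a class-representative input.
x = np.zeros(2)
for _ in range(200):
    p = 1 / (1 + np.exp(-(x @ w + b)))
    x += 0.1 * (1 - p) * w  # gradient of log-confidence w.r.t. x

# x now sits in the region of the class-1 training data,
# exposing the (synthetic) sensitive feature values.
confidence = 1 / (1 + np.exp(-(x @ w + b)))
print(x, confidence)
```

In a real attack the gradient would be estimated from black-box confidence queries rather than read off the weights, but the mechanism is the same: high-confidence outputs act as an objective that pulls the reconstructed input toward the training distribution.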

Related terms

  • Training Data Poisoning — Adversarial manipulation of training, fine-tuning, or RAG-corpus data to alter model behavior.

See how model inversion maps to your AI posture.

The free AI Posture Check produces a per-dimension score and maps your gaps to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.

Take the AI Posture Check