Glossary

Differential Privacy

A mathematical framework for measuring and bounding the privacy loss incurred when statistics or models derived from sensitive data are released. Informally, a mechanism is ε-differentially private if adding or removing any single individual's record changes the probability of any output by at most a factor of e^ε.

Context and detail

For AI training, differential privacy is most often applied through DP-SGD, which clips per-example gradients and adds calibrated noise during training so that the resulting model reveals only a bounded amount about any single training example; this directly mitigates memorization and model-inversion risks. Its main limitations are the privacy-utility trade-off (stronger guarantees, i.e. smaller ε, mean noisier outputs and lower accuracy, especially on small datasets) and the difficulty of choosing and communicating a meaningful privacy budget. In practice it has seen real adoption, for example in the US Census Bureau's 2020 disclosure-avoidance system and in telemetry collection at Apple and Google.
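As a minimal sketch of the core idea (not a production implementation, and the function name is illustrative), the classic Laplace mechanism releases a query answer with noise scaled to the query's sensitivity divided by the privacy budget ε:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise calibrated for epsilon-DP.

    sensitivity: the most true_value can change when one individual's
    record is added or removed (e.g. 1 for a counting query).
    Smaller epsilon -> stronger privacy -> larger noise scale.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query ("how many records match X?") with budget eps=1.
rng = np.random.default_rng(0)
noisy_count = laplace_mechanism(42, sensitivity=1, epsilon=1.0, rng=rng)
```

DP-SGD applies the same principle to model training, but with clipped gradients as the bounded-sensitivity quantity and Gaussian rather than Laplace noise.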

Related terms

  • Training Data Poisoning — Adversarial manipulation of training, fine-tuning, or RAG-corpus data to alter model behavior.
  • Model Inversion — An attack that recovers training data or sensitive features by querying a model and analyzing outputs.

See how differential privacy maps to your AI posture.

The free AI Posture Check produces a per-dimension score and maps your gaps to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.

Take the AI Posture Check