Open-Source Tools and Frameworks

Notable open-source projects and reference frameworks used by enterprises and consultancies to harden AI deployments.

LLM Guard

Open-source security toolkit for LLM-powered applications.

About LLM Guard

LLM Guard provides input scanners (anonymization, prompt-injection detection, banned substrings) and output scanners (sensitive data, bias, factual consistency). It drops into Python LLM applications with minimal code changes and is maintained by Protect AI.
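Below is a minimal sketch of how those scanners are typically wired into a Python application, based on LLM Guard's documented scan_prompt/scan_output pattern. Exact scanner names, constructor arguments, and return shapes should be confirmed against the version you install; the banned substring and prompt text are illustrative.

from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, BanSubstrings, PromptInjection
from llm_guard.output_scanners import Deanonymize, Sensitive
from llm_guard.vault import Vault

vault = Vault()  # stores anonymized values so they can be restored in the response

# Input side: redact PII, block a banned term, and detect prompt injection.
# (Constructor arguments are illustrative; check the scanner docs for your version.)
input_scanners = [
    Anonymize(vault),
    BanSubstrings(substrings=["internal-codename"]),
    PromptInjection(),
]

# Output side: restore redacted values and flag sensitive data in the completion.
output_scanners = [Deanonymize(vault), Sensitive()]

prompt = "Summarize the contract we signed with jane.doe@example.com"
sanitized_prompt, input_valid, input_scores = scan_prompt(input_scanners, prompt)
if not all(input_valid.values()):
    raise ValueError(f"Prompt rejected by input scanners: {input_scores}")

response_text = "..."  # replace with your LLM call on sanitized_prompt
sanitized_response, output_valid, output_scores = scan_output(
    output_scanners, sanitized_prompt, response_text
)

The vault is what ties the two sides together: values that Anonymize strips from the prompt are reinserted by Deanonymize in the model's response.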

Products

LLM Guard products and platform components

Direct links to the vendor's product pages. Last reviewed 2026-05-07.

LLM Guard

Comprehensive input/output scanners for LLM apps. MIT license.

CWS engagement

How CWS works with LLM Guard

CWS helps customers evaluate, deploy, and operate LLM Guard as part of an AI security program. Engagements span vendor selection, proof-of-concept design, integration with existing controls, day-2 operations, and exit planning if the fit changes over time.

CWS does not resell LLM Guard. The recommendation is honest, evidence-based, and tied to the customer's posture gaps rather than channel economics.

Engage CWS on LLM Guard

Not sure if LLM Guard fits your gaps?

The free AI Posture Check scores your AI security across six dimensions in about 10 minutes. Use the result to shortlist vendors that fit your actual posture, not the loudest demo.

Take the AI Posture Check