LLM Guard
Comprehensive input/output scanners for LLM apps. MIT license.
Open-source security toolkit for LLM-powered applications.
LLM Guard provides input scanners (anonymization, prompt injection detection, banned substrings) and output scanners (sensitive content, bias, factual consistency). A drop-in for Python LLM applications. Maintained by Protect AI.
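For readers sizing up the integration effort, a minimal sketch of that drop-in pattern follows. It assumes LLM Guard's documented Python API (`scan_prompt`, `scan_output`, `Vault`, and scanner classes such as `Anonymize` and `PromptInjection`); exact class names, parameters, and return shapes may vary by version, and `call_llm` is a hypothetical stand-in for your model client. Verify against the current LLM Guard docs before relying on it.

```python
# A minimal sketch of the drop-in pattern: screen the prompt, call the model,
# then screen the response. Scanner names and call signatures follow the
# project's documented Python API but may differ across llm-guard versions.
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, BanSubstrings, PromptInjection
from llm_guard.output_scanners import Deanonymize, Sensitive
from llm_guard.vault import Vault

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for your actual model client (OpenAI, local, etc.).
    return "Model response goes here."

vault = Vault()  # holds anonymization placeholders so Deanonymize can restore them

input_scanners = [
    Anonymize(vault),                                 # redact PII before it leaves your boundary
    BanSubstrings(substrings=["INTERNAL-CODENAME"]),  # block known-sensitive strings
    PromptInjection(),                                # flag likely injection attempts
]
output_scanners = [
    Deanonymize(vault),  # restore redacted values in the response
    Sensitive(),         # catch PII the model may have produced
]

prompt = "Summarize the ticket filed by jane.doe@example.com."
sanitized_prompt, valid, scores = scan_prompt(input_scanners, prompt)
if not all(valid.values()):
    raise ValueError(f"Prompt rejected: {scores}")

sanitized_response, valid, scores = scan_output(
    output_scanners, sanitized_prompt, call_llm(sanitized_prompt)
)
if not all(valid.values()):
    raise ValueError(f"Response rejected: {scores}")

print(sanitized_response)
```

Each scan call returns the sanitized text plus per-scanner validity flags and risk scores, so a wrapper like this can fail closed on any single scanner without changing the application's call sites.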
Notable open-source projects and reference frameworks used by enterprises and consultancies to harden AI deployments. Links go directly to each project's official pages. Last reviewed 2026-05-07.
CWS helps customers evaluate, deploy, and operate LLM Guard products as part of an AI security program. Engagements span vendor selection, proof-of-concept design, integration with existing controls, day-2 operations, and exit planning if the fit changes over time.
CWS does not resell LLM Guard. The recommendation is honest, evidence-based, and tied to the customer's posture gaps — not to channel economics.
Engage CWS on LLM Guard

Related profiles:
- Open-source toolkit for adding programmable guardrails to LLM apps.
- Open-source LLM vulnerability scanner.
- Open-source LLM evaluation, red teaming, and security testing.
- Microsoft's open-source Python Risk Identification Toolkit for GenAI.
- Open-source LLM testing framework with hosted hub.
The free AI Posture Check scores your security across six dimensions in 10 minutes. Use the result to shortlist vendors that fit your actual posture, not the loudest demo.
Take the AI Posture Check