Giskard Open Source
Open-source LLM testing framework with hosted hub.
Giskard offers an open-source library for ML and LLM testing plus a hosted hub for collaborative evaluation. Strong on bias, fairness, and robustness testing. EU-headquartered, popular in regulated European deployments.
Notable open-source projects and reference frameworks used by enterprises and consultancies to harden AI deployments.
Direct links to the vendor's product pages. Last reviewed 2026-05-07.
Giskard Open Source: open-source ML/LLM testing library. Apache 2.0.
Giskard Hub: hosted collaborative testing for AI teams.
CWS helps customers evaluate, deploy, and operate Giskard products as part of an AI security program. Engagements span vendor selection, proof-of-concept design, integration with existing controls, day-2 operations, and exit planning if the fit changes over time.
CWS does not resell Giskard. The recommendation is honest, evidence-based, and tied to the customer's posture gaps — not to channel economics.
Engage CWS on Giskard
Open-source toolkit for adding programmable guardrails to LLM apps.
Open-source LLM vulnerability scanner.
Open-source LLM evaluation, red teaming, and security testing.
Microsoft's open-source Python Risk Identification Toolkit for GenAI.
Open-source security toolkit for LLM-powered applications.
The free AI Posture Check scores your security across six dimensions in 10 minutes. Use the result to shortlist vendors that fit your actual posture — not the loudest demo.
Take the AI Posture Check