How the AI Posture Check scores your security.
Thirty questions. Six dimensions. Four-point answers. One tier from Foundation to Leading. Here is exactly how it works — no black box.
Five questions per dimension. Each is multiple choice with four answers.
Governance, Data, Prompt, Model, Runtime, Vendor.
Each question scores 0 to 3. Five questions per dimension. Six dimensions. 5 × 6 × 3 = 90.
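The arithmetic above can be sketched in a few lines; the function name and the encoding of answers as an array of 30 integers are illustrative assumptions, not the tool's actual source:

```typescript
// Sketch only: answers encoded as 30 integers, each 0-3
// (absent = 0, aware = 1, in progress = 2, operational = 3).
function totalScore(answers: number[]): number {
  // Sum all 30 answers; the maximum is 30 questions x 3 points = 90.
  return answers.reduce((sum, a) => sum + a, 0);
}

// A perfect assessment: every answer "operational" (3) gives 5 x 6 x 3 = 90.
const maxScore = totalScore(new Array(30).fill(3)); // 90
```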
About ten minutes to complete on average. Pick the answer closest to your reality.
Every question has the same four possible answers.
Yes/no loses nuance. Five points adds noise. Four forces a decision: are you absent, aware, in progress, or operational?
Coverage without overlap.
Every OWASP LLM Top 10 risk and every NIST AI RMF function lands inside one of these six dimensions.
Governance
AI policy, accountable owner, inventory, framework alignment.
Data
Classification, controls, logging, vendor data terms, subject rights.
Prompt
Injection testing, input validation, output filtering, OWASP LLM mapping, red-teaming.
Model
Selection, version control, hallucination testing, retirement, theft prevention.
Runtime
Rate limiting, monitoring, isolation, incident response, audit logging.
Vendor
Due diligence, contracts, attestations, onboarding, continuous monitoring.
Your total score lands in one of four tiers.
Tiers are calibrated against real organizations, and each signals a different recommended next step.
Foundation (0–22). Significant gaps. AI risk is largely unmanaged. Discovery call recommended.
Developing (23–45). Foundational controls in place, with gaps in specific dimensions. Standard Audit recommended.
Mature (46–67). Mature posture with optimization opportunities. Standard Audit recommended for tuning.
Leading (68–90). Leading posture. Targeted review of your weakest dimension recommended.
Results render in your browser. Nothing leaves it.
- Total + per-dimension scores compute client-side. Your 30 answers sum to a total (0–90); each dimension's five answers sum to a dimension score (0–15).
- You see your tier and where you sit on each dimension. Color-coded banner, horizontal bars per dimension, top two strengths, bottom two gaps with the specific question numbers that scored 0 or 1.
- You get prioritized recommendations. Three actions per gap dimension, drawn from a static rules table keyed on dimension and score band. Recommendations cite OWASP LLM Top 10 risk numbers and NIST AI RMF subcategories where applicable.
- You decide what to do next. Two CTA buttons. Foundation/Developing tier sees Discovery Call. Mature/Leading sees Standard Audit. Both deep-link to wearecws.com/contact with your score, tier, and gaps as URL parameters so the CWS team starts the conversation already informed.
- Print or walk away. A print stylesheet handles the PDF path natively. Or close the tab. Your answers were never captured.
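The per-dimension sums and gap detection described above can be sketched as follows; the helper names and the assumption that answers are grouped five per dimension in display order are illustrative, not the tool's actual code:

```typescript
// Dimensions in the order the assessment presents them (assumed grouping).
const DIMENSIONS = ["Governance", "Data", "Prompt", "Model", "Runtime", "Vendor"];

// answers: 30 values (0-3), five consecutive answers per dimension.
function dimensionScores(answers: number[]): Record<string, number> {
  const scores: Record<string, number> = {};
  DIMENSIONS.forEach((dim, i) => {
    // Each dimension's five answers sum to a 0-15 score.
    scores[dim] = answers.slice(i * 5, i * 5 + 5).reduce((s, a) => s + a, 0);
  });
  return scores;
}

// Questions scoring 0 or 1 surface as gaps (1-indexed question numbers).
function gapQuestions(answers: number[]): number[] {
  return answers.flatMap((a, i) => (a <= 1 ? [i + 1] : []));
}
```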
The questions we get most.
Why these six dimensions?
They map to the way AI security risk actually surfaces: who governs it, what data feeds it, what prompts it receives, what model runs it, how it operates in production, and which third-party vendors deliver it. Every OWASP LLM Top 10 risk and every NIST AI RMF function lands inside one of these six. Coverage without overlap.
Why a 0–3 scale per question instead of 1–5 or yes/no?
Yes/no loses the difference between "we know we need this" and "we have it documented and operational." Five points adds noise without signal. Four points forces a decision: are you absent, aware, in progress, or operational?
Are the tier thresholds calibrated against real organizations?
Yes. Foundation 0–22 reflects organizations using AI with no formal program. Developing 23–45 is typical for organizations with a draft policy but uneven controls. Mature 46–67 indicates documented operational controls across most dimensions. Leading 68–90 reflects programs with continuous monitoring, red-teaming, and external attestation.
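Using the thresholds above, tier lookup reduces to a simple band check; `tierFor` is an illustrative helper, not the tool's source:

```typescript
// Tier bands as published: Foundation 0-22, Developing 23-45,
// Mature 46-67, Leading 68-90.
function tierFor(total: number): string {
  if (total <= 22) return "Foundation"; // no formal program
  if (total <= 45) return "Developing"; // draft policy, uneven controls
  if (total <= 67) return "Mature";     // documented operational controls
  return "Leading";                     // continuous monitoring, red-teaming
}
```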
Does the score change as my AI footprint grows?
It should. The Posture Check is a snapshot, not a certificate. Re-take it quarterly or after any material AI deployment change. Score trajectory matters more than any single result.
Why do recommendations differ by score band?
A Foundation-tier organization needs to stand up a policy and an inventory. A Mature-tier organization needs continuous monitoring, red-teaming, and vendor attestation review. The same dimension produces different recommendations at different bands.
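A static rules table keyed on dimension and score band might look like the sketch below. The band cutoffs (0–5, 6–10, 11–15 per dimension), the recommendation strings, and all names are assumptions for illustration; the actual table and its bands are not published:

```typescript
type Band = "low" | "mid" | "high";

// Assumed per-dimension cutoffs; the real bands may differ.
function bandFor(dimensionScore: number): Band {
  if (dimensionScore <= 5) return "low";
  if (dimensionScore <= 10) return "mid";
  return "high";
}

// Hypothetical rules table: three actions per dimension-and-band key.
const RULES: Record<string, Record<Band, string[]>> = {
  Governance: {
    low: ["Publish an AI use policy", "Name an accountable owner", "Build an AI inventory"],
    mid: ["Map the program to a framework", "Close inventory gaps", "Review policy quarterly"],
    high: ["Add continuous monitoring", "Schedule external attestation", "Report posture to leadership"],
  },
  // ...one entry per remaining dimension in the real table
};
```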
What happens with my answers?
They never leave your browser. The tool runs entirely client-side using sessionStorage. There is no email gate, no PDF service, no server. When you click a CTA, your tier and gaps travel as URL parameters to wearecws.com/contact so the CWS team has context — but only if you choose to click.
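The CTA deep link can be sketched as below; the parameter names (`score`, `tier`, `gaps`) are assumptions, since the actual URL format is not documented here:

```typescript
// Hypothetical sketch of the contact deep link built on CTA click.
// Parameter names are assumed; only clicking the CTA sends anything anywhere.
function ctaUrl(total: number, tier: string, gaps: string[]): string {
  const params = [
    `score=${total}`,
    `tier=${encodeURIComponent(tier)}`,
    `gaps=${encodeURIComponent(gaps.join(","))}`,
  ].join("&");
  return `https://wearecws.com/contact?${params}`;
}
```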
Ready to find out where you stand?
Free. Ten minutes. No email. Instant results.
Take the AI Posture Check