NIST AI RMF in Operation
The NIST AI Risk Management Framework (AI RMF 1.0, published January 2023) defines four core functions for AI risk: Govern, Map, Measure, and Manage. The AI RMF is voluntary in the US, but it is increasingly cited by federal agencies and state regulators, and referenced in contracts.
The four core functions
- Govern
  - What. Establish AI risk-management culture, accountability, and policy.
  - Controls. Named AI risk owner. Approved AI policy. Inventory of AI systems. Mapping to broader enterprise risk.
- Map
  - What. Understand the AI system, its context, and its potential impacts.
  - Controls. AI system documentation. Stakeholder analysis. Risk and benefit analysis. Use-case classification.
- Measure
  - What. Assess AI risks using qualitative and quantitative methods.
  - Controls. Bias and fairness testing. Robustness testing. Performance metrics. Adversarial testing.
- Manage
  - What. Allocate resources to address identified risks, proportionate to projected impact.
  - Controls. Risk treatment plans. Incident response. Continuous monitoring. Documentation of decisions.
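The Govern-function controls above start with an inventory of AI systems. As a minimal sketch (the field names and `AISystemRecord` class are hypothetical illustrations; the AI RMF does not prescribe an inventory schema), a single inventory entry could also carry artifacts from the Map, Measure, and Manage functions:

```python
from dataclasses import dataclass, field

# Hypothetical inventory record -- the AI RMF defines functions and
# outcomes, not a data model, so this schema is illustrative only.
@dataclass
class AISystemRecord:
    name: str
    risk_owner: str                                   # Govern: named AI risk owner
    use_case_class: str                               # Map: use-case classification
    metrics: dict = field(default_factory=dict)       # Measure: e.g. fairness scores
    open_risks: list = field(default_factory=list)    # Manage: risks awaiting treatment

    def needs_review(self) -> bool:
        # Flag systems with untreated risks for the risk owner.
        return len(self.open_risks) > 0

# Example entry for a hypothetical customer-facing chatbot
record = AISystemRecord(
    name="support-chatbot",
    risk_owner="ciso@example.com",
    use_case_class="customer-facing / limited risk",
    metrics={"demographic_parity_gap": 0.03},
    open_risks=["prompt-injection exposure"],
)
print(record.needs_review())  # True: one open risk awaits treatment
```

Even a lightweight record like this gives the Govern function something concrete to audit: every deployed system has a named owner, a classification, and a visible list of untreated risks.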
Posture Check checkpoint
The Posture Check governance dimension (Q1–Q5) maps directly to Govern; the remaining dimensions map to Map, Measure, and Manage.
Score yourself against this framework.
The AI Posture Check is a free 30-question self-assessment that maps your gaps directly to OWASP LLM Top 10, NIST AI RMF, and ISO 42001.
Take the AI Posture Check
Need help operationalizing this?
Talk to a CWS engineer about your AI security program.
Schedule a Discovery Call to scope a Standard Audit or Enterprise Program.
Schedule a Discovery Call