# NIST AI Risk Management Framework — Vectimus Mapping
The NIST AI RMF (AI 100-1, January 2023) organises AI risk management into four functions: Govern, Map, Measure and Manage. Vectimus maps to specific subcategories within three of these functions.
Vectimus is a tool-level governance layer. It evaluates individual AI agent actions against Cedar policies before execution. It does not perform model risk assessment, data governance or organisational AI strategy — those are separate workstreams that the NIST AI RMF rightly identifies as necessary.
This document is specific about where Vectimus helps and where it does not.
## GOVERN — Policies, processes and practices
### GOVERN 1.1: Legal and regulatory requirements identified
Coverage: PARTIAL
Every Cedar policy rule includes a `@controls` annotation mapping it to specific compliance frameworks (SOC 2, OWASP, SLSA, EU AI Act). This annotation system enables organisations to trace enforcement rules back to regulatory requirements.

Example from the codebase:

```cedar
@id("vectimus-base-015")
@controls("SLSA-L2, SOC2-CC6.8, NIST-AI-MG-3.2, EU-AI-15")
```
Limitation: Vectimus maps its own rules to controls. It does not identify which legal or regulatory requirements apply to your organisation or AI systems.
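Annotations like these can feed a traceability matrix that lists, per framework control, which rules enforce it. The sketch below is hypothetical tooling, not part of Vectimus; it assumes only the `@id`/`@controls` annotation format shown in the example above.

```python
import re
from collections import defaultdict

def controls_matrix(policy_text: str) -> dict:
    """Map each compliance control to the rule IDs that reference it.

    Assumes Cedar policies carry @id("...") followed by @controls("A, B")
    annotations, as in the example above.
    """
    matrix = defaultdict(list)
    # Pair each @id with the @controls annotation that follows it.
    pattern = re.compile(r'@id\("([^"]+)"\)\s*@controls\("([^"]+)"\)')
    for rule_id, controls in pattern.findall(policy_text):
        for control in controls.split(","):
            matrix[control.strip()].append(rule_id)
    return dict(matrix)

policies = '''
@id("vectimus-base-015")
@controls("SLSA-L2, SOC2-CC6.8, NIST-AI-MG-3.2, EU-AI-15")
'''
print(controls_matrix(policies))
# {'SLSA-L2': ['vectimus-base-015'], 'SOC2-CC6.8': ['vectimus-base-015'], ...}
```

A matrix like this is the kind of evidence an auditor asks for when verifying GOVERN 1.1 coverage.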
### GOVERN 1.5: Ongoing monitoring and periodic review
Coverage: PARTIAL
- Observe mode enables staged rollout of new policies. Teams run observe mode to review what would be blocked before switching to enforcement.
- Audit logging provides a continuous record of every agent action and every policy decision for review.
- Per-project overrides allow policy tuning without weakening the global rule set.
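The observe-mode review step above amounts to tallying would-be denies from the audit log before flipping to enforcement. A minimal sketch, assuming a JSONL log with `decision` and `policy_ids` fields (illustrative field names, not Vectimus's actual schema):

```python
import json
from collections import Counter

def observe_mode_summary(lines):
    """Tally what observe mode would have blocked, grouped by policy ID."""
    counts = Counter()
    for line in lines:
        record = json.loads(line)
        if record["decision"] == "deny":
            counts.update(record.get("policy_ids", []))
    return counts

log = [
    '{"decision": "deny", "policy_ids": ["vectimus-base-015"]}',
    '{"decision": "allow", "policy_ids": []}',
    '{"decision": "deny", "policy_ids": ["vectimus-base-015"]}',
]
print(observe_mode_summary(log))  # Counter({'vectimus-base-015': 2})
```

A summary like this shows which rules would fire most often, so teams can tune per-project overrides before enforcement goes live.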
## MAP — Context and risk identification
### MAP 1.5: Impacts to individuals, groups, communities, organisations
Coverage: INFORMATIONAL
Every policy rule references the real-world incident that motivated it via the `@incident` annotation. These references connect abstract rules to concrete impact: “Clinejection compromised 4,000+ developers,” “Terraform destroy caused a 6-hour production outage.”
This is informational, not a risk assessment. Vectimus does not identify which risks apply to your specific AI deployment.
## MEASURE — Assessment and evaluation
### MEASURE 2.5: AI system behaviour monitored
Coverage: HIGH
Vectimus monitors AI agent behaviour in real time:
- Every shell command, file operation, web request, MCP tool call, package operation, git operation, infrastructure command, agent spawn and inter-agent message is evaluated before execution.
- Cedar policies return allow, deny or escalate decisions.
- In server mode, session-level tracking detects behavioural anomalies: spawn floods, message floods and action rate spikes.
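The action-rate-spike check can be pictured as a sliding-window counter over recent actions. This is a minimal sketch of that kind of session-level anomaly detection; the threshold and window size are illustrative, not Vectimus's actual values.

```python
from collections import deque

class RateSpikeDetector:
    """Flag when a session's action rate exceeds a cap within a time window."""

    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = deque()

    def record(self, now: float) -> bool:
        """Record an action at time `now`; return True if the rate is anomalous."""
        self.timestamps.append(now)
        # Drop actions that fell outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_actions

detector = RateSpikeDetector(max_actions=5, window_seconds=1.0)
# Ten actions 100ms apart: the sixth and later exceed the cap.
flags = [detector.record(t * 0.1) for t in range(10)]
print(flags)  # [False, False, False, False, False, True, True, True, True, True]
```

The same window structure generalises to the spawn-flood and message-flood checks: only the event type being counted changes.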
### MEASURE 2.6: Evaluation results documented
Coverage: HIGH
All evaluation results are logged to JSONL audit files with:
- Timestamp
- Principal identity (git email, git name or OS user)
- Action type and full command/path details
- Decision (allow/deny/escalate)
- Matched policy IDs
- Human-readable reason
- Suggested alternative
- Evaluation time in milliseconds
- Repository, branch and hostname context
Audit logs use file locking for concurrent access, daily rotation and a 100MB per-file cap.
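The daily-rotation-plus-size-cap behaviour can be sketched as a file-selection routine: write to today's file until it hits the cap, then roll to a new suffix. This is a hypothetical illustration of the pattern; the naming scheme (`audit-YYYY-MM-DD.N.jsonl`) is an assumption, not Vectimus's actual layout.

```python
import datetime
import os

MAX_BYTES = 100 * 1024 * 1024  # 100MB per-file cap

def audit_log_path(log_dir: str, today: datetime.date) -> str:
    """Pick the audit file for today, rolling to a new suffix at the size cap."""
    n = 0
    while True:
        path = os.path.join(log_dir, f"audit-{today.isoformat()}.{n}.jsonl")
        # Use this file if it does not exist yet or still has room.
        if not os.path.exists(path) or os.path.getsize(path) < MAX_BYTES:
            return path
        n += 1

print(audit_log_path("/var/log/vectimus", datetime.date(2024, 1, 2)))
```

Rotation by date plus a hard size cap keeps individual files reviewable while the JSONL format keeps every record independently parseable.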
## MANAGE — Risk treatment and mitigation
### MANAGE 2.2: Mechanisms to mitigate identified AI risks
Coverage: HIGH
Vectimus implements risk mitigation through deterministic policy enforcement:
- 48 rules in the base pack covering destructive commands, credential protection, supply chain safety, infrastructure protection, database safety, agent safety and MCP governance.
- 29 policies in the OWASP Agentic pack covering the OWASP Top 10 for Agentic Applications.
- Every deny decision includes a suggested alternative that guides the agent (and the developer) toward the safe path.
- Fail-closed design: any Cedar evaluation error results in a deny decision. The system never defaults to allow on error.
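The fail-closed principle is worth spelling out in code: any exception or unrecognised result from the policy engine must collapse to a deny. A minimal sketch, with a hypothetical engine interface (Vectimus's actual engine is Cedar, not this function):

```python
def evaluate_fail_closed(evaluate, request) -> str:
    """Wrap a policy-engine call so any error becomes a deny."""
    try:
        decision = evaluate(request)
        if decision not in ("allow", "deny", "escalate"):
            return "deny"  # unknown decision values are treated as errors
        return decision
    except Exception:
        return "deny"  # any evaluation failure fails closed

def broken_engine(request):
    raise RuntimeError("policy parse error")

print(evaluate_fail_closed(broken_engine, {"action": "shell"}))  # deny
```

The design choice is that an unavailable or broken policy engine halts the agent rather than silently waving actions through.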
### MANAGE 3.2: Third-party AI risks managed
Coverage: PARTIAL
- MCP server allowlisting: Default-deny for all MCP servers. Third-party tools must be explicitly approved before agents can call them.
- Supply chain controls: Blocks package installs from non-standard indexes, URL-based installs, direct lockfile modification and registry config changes.
- Cargo/git install blocking: Prevents agents from installing Rust packages from unvetted git repositories.
Limitation: Vectimus governs the agent’s request to call a third-party tool. It cannot inspect what the third-party tool does internally after the request is approved.
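Default-deny allowlisting reduces to a simple membership check with an explanatory reason attached. A sketch of the behaviour described above; the `Decision` shape and server names are illustrative, not Vectimus's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    allowed: bool
    reason: str

def check_mcp_call(server: str, allowlist: frozenset) -> Decision:
    """Default-deny: every MCP server is blocked unless explicitly approved."""
    if server in allowlist:
        return Decision(True, f"{server} is on the approved MCP allowlist")
    return Decision(False, f"{server} is not allowlisted; approve it explicitly")

approved = frozenset({"github-mcp"})
print(check_mcp_call("unvetted-mcp", approved).allowed)  # False
print(check_mcp_call("github-mcp", approved).allowed)    # True
```

Note that this governs only the request boundary, which is exactly the limitation stated above: once a call is approved, what the third-party tool does internally is outside the check's reach.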
## Summary
| Function | Subcategory | Coverage | What Vectimus provides |
|---|---|---|---|
| GOVERN | 1.1 Legal requirements | PARTIAL | @controls annotations mapping rules to frameworks |
| GOVERN | 1.5 Ongoing monitoring | PARTIAL | Observe mode, audit logging, policy tuning |
| MAP | 1.5 Impact identification | INFO | @incident annotations with real-world impact |
| MEASURE | 2.5 Behaviour monitoring | HIGH | Real-time evaluation of all agent actions |
| MEASURE | 2.6 Results documented | HIGH | JSONL audit logs with full decision context |
| MANAGE | 2.2 Risk mitigation | HIGH | 81 Cedar rules, fail-closed, suggested alternatives |
| MANAGE | 3.2 Third-party risk | PARTIAL | MCP allowlisting, supply chain controls |
## What Vectimus does not cover
The NIST AI RMF is intentionally broad. Several functions and subcategories fall outside the scope of a tool-level governance layer:
- MAP (most subcategories): AI system categorisation, intended use documentation, stakeholder identification. These require organisational processes, not tool enforcement.
- GOVERN 2-6: Organisational governance structures, workforce diversity, stakeholder engagement, third-party governance frameworks.
- MEASURE 1, 3, 4: Measurement approaches, AI system performance metrics, deployment monitoring beyond tool-level actions.
- MANAGE 1, 4: Risk response planning, risk communication, post-deployment feedback loops.
Vectimus contributes evidence and enforcement to a NIST AI RMF programme. It does not replace the programme.