EU AI Act — Vectimus Mapping

The EU AI Act (Regulation 2024/1689) sets requirements for AI systems based on risk classification. If your AI coding agents are part of a high-risk system under Article 6, or if you simply want to apply high-risk controls as good practice, Vectimus addresses several of the technical requirements in Articles 9-15.

Vectimus is not a classification tool. It cannot determine whether your AI system is high-risk, limited-risk or general-purpose. That determination depends on your product, its intended use and its role in decision-making. What Vectimus provides is enforcement and evidence for the technical requirements that apply once you have made that determination.


Article 9 — Risk management system

Coverage: PARTIAL

Article 9 requires a risk management system that identifies, analyses and mitigates reasonably foreseeable risks.

Vectimus contributes to this requirement:

  • Risk identification: Every policy rule is annotated with the real-world incident (@incident) that demonstrated the risk. The policy set itself is a catalogue of known risks for AI agent tool access.
  • Risk mitigation: Cedar policies block known-dangerous actions before execution. 77 policies across two policy packs cover destructive commands, credential access, supply chain attacks, remote code execution, privilege escalation and multi-agent safety.
  • Residual risk management: Observe mode allows organisations to evaluate policy impact before enforcement, reducing the risk of over-blocking legitimate developer workflows.
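
As an illustration of the incident-annotation convention described above, a rule in the policy set might look like the following Cedar sketch. The rule ID, incident label, action name and attribute names here are invented for the example, not taken from the actual policy packs:

```cedar
// Hypothetical rule: the @incident annotation records the real-world
// event that motivated the rule; @controls ties it to a framework.
@id("example-001")
@incident("agent executed a recursive delete against a live workspace")
@controls("EU-AI-Act-Art9")
forbid (
    principal,
    action == Action::"ExecuteCommand",
    resource
) when {
    resource.command like "*rm -rf*"
};
```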

Limitation: Article 9 requires risk management across the entire AI system lifecycle. Vectimus covers tool-level action risks. Model risks, data risks and system-level risks require separate assessment.


Article 12 — Record-keeping

Coverage: HIGH

Article 12 requires automatic recording of events throughout the AI system’s lifecycle to enable traceability.

Vectimus logs every evaluation to JSONL audit files:

  • What was attempted: Action type, command, file path, MCP server and tool
  • Who attempted it: Git email, git name or OS user
  • What happened: Allow, deny or escalate decision
  • Why: Matched policy IDs, human-readable reason
  • When: ISO timestamp with millisecond precision
  • Where: Repository, branch, hostname
  • How fast: Evaluation time in milliseconds

Audit files use daily rotation, file locking for concurrent access and a 100 MB per-file cap. In server mode, audit events stream via SSE for real-time monitoring.
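
A minimal sketch of what one such JSONL event might contain, covering the what/who/when/why fields listed above. The field names and schema here are illustrative assumptions, not Vectimus's actual audit format:

```python
import json
from datetime import datetime, timezone

def audit_record(action, actor, decision, reason, policy_ids,
                 repo, branch, hostname, eval_ms):
    """Build one audit event as a dict; serialised as a single JSONL line.
    Field names are illustrative, not the real Vectimus schema."""
    return {
        "ts": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
        "action": action,          # what was attempted (type, command, path)
        "actor": actor,            # git email / git name / OS user
        "decision": decision,      # allow | deny | escalate
        "reason": reason,          # human-readable explanation
        "policy_ids": policy_ids,  # matched rule IDs
        "repo": repo, "branch": branch, "hostname": hostname,
        "eval_ms": eval_ms,        # evaluation latency in milliseconds
    }

line = json.dumps(audit_record(
    {"type": "bash", "command": "rm -rf build/"},
    "dev@example.com", "deny", "Destructive command blocked",
    ["001"], "acme/api", "main", "build-host", 0.4))
print(line)  # one line per event, appended to the daily JSONL file
```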


Article 13 — Transparency

Coverage: PARTIAL

Article 13 requires that high-risk AI systems are designed to be sufficiently transparent for users to interpret output and use the system appropriately.

Vectimus contributes to transparency:

  • Policy definitions are open source. Every rule is readable Cedar with a plain-English description. No black-box decisions.
  • Deny reasons are human-readable. Every blocked action includes a reason explaining what was blocked and why.
  • Suggested alternatives guide correct behaviour. Rather than just saying “no,” Vectimus tells the agent (and the developer) what to do instead.
  • @controls annotations trace each rule back to the compliance framework it addresses, making the governance logic auditable.
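
Putting those pieces together, a deny decision of the kind described above might look like this. The field names and values are illustrative, not the exact Vectimus output format:

```json
{
  "decision": "deny",
  "reason": "Blocked: force-push rewrites shared branch history",
  "policy_ids": ["example-005"],
  "suggested_alternative": "Use --force-with-lease after reviewing the branch state",
  "controls": ["EU-AI-Act-Art13"]
}
```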

Limitation: Article 13 transparency covers the entire AI system, including model outputs, confidence levels and limitations. Vectimus provides transparency for the governance layer only, not for the underlying model.


Article 14 — Human oversight

Coverage: PARTIAL

Article 14 requires measures enabling human oversight of AI system operation.

Vectimus supports human oversight through several mechanisms:

  • Escalation: Deny decisions with suggested alternatives direct the developer to perform the action manually after review.
  • Governance bypass prevention: Blocks agents from disabling safety hooks, modifying governance configuration or spawning other AI tools with permission-bypass flags (rules 020b, 021, 047-050, 052).
  • Infrastructure safety gates: Blocks terraform destroy, auto-approve and kubectl namespace deletion, requiring human approval for destructive infrastructure changes (rules 007-009).
  • Database safety gates: Blocks ORM commands that bypass interactive safety confirmations (rules 040-046).
  • Observe mode: Lets humans review what would be blocked before activating enforcement.
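
The observe-versus-enforce distinction above can be sketched as follows. Function and field names are illustrative, not the Vectimus API: in observe mode, matches are tallied for later human review while the action is allowed through; in enforce mode, a match denies:

```python
from collections import Counter

observe_log = Counter()  # would-be denials, reviewed by humans later

def evaluate(event, policies, mode="enforce"):
    """Return the effective decision for an agent action."""
    matched = [p["id"] for p in policies if p["matches"](event)]
    if mode == "observe":
        if matched:
            observe_log[tuple(matched)] += 1  # record, but do not block
        return "allow"
    return "deny" if matched else "allow"

# Hypothetical rule standing in for the infrastructure safety gate.
policies = [{"id": "007",
             "matches": lambda e: "terraform destroy" in e["command"]}]

print(evaluate({"command": "terraform destroy -auto-approve"},
               policies, mode="observe"))  # allowed, but tallied
print(dict(observe_log))
```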

Limitation: Vectimus implements “human in the loop” by blocking actions and suggesting human review. It does not provide approval workflows, Slack notifications or ticketing system integration. The enterprise tier is scoped to add richer escalation paths.


Article 15 — Accuracy, robustness and cybersecurity

Coverage: PARTIAL

Article 15 requires measures to ensure AI systems are resilient to errors, faults, inconsistencies and attempts by unauthorised third parties to exploit vulnerabilities.

Vectimus addresses cybersecurity for AI agent operations:

  • Supply chain protection: Blocks npm publish, non-standard package indexes, URL-based installs, lockfile tampering and registry config modification (rules 015-016c, OWASP 010-013).
  • Remote code execution prevention: Blocks curl|sh, reverse shells, download-execute chains and eval/exec patterns (rule 006, OWASP 014-017).
  • Exfiltration detection: Catches base64 exfiltration, DNS tunnelling and credential piping to network tools (OWASP 001-003).
  • Credential protection: Blocks reads of .env files, SSH keys, AWS credentials, private keys and secrets directories (rules 011-014).
  • System config protection: Blocks writes to /etc, certificates, MCP config files and IDE settings (rules 020, 051-052, OWASP 004).
  • Fail-closed design: Any Cedar evaluation error results in a deny decision. The system never fails open.
  • No telemetry: All evaluation happens locally. No data leaves the developer’s machine in local mode.
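
The fail-closed property described above can be sketched in a few lines: any error raised during policy evaluation yields a deny, never an allow. The function names are illustrative, not the Vectimus internals:

```python
def evaluate_fail_closed(evaluate, event):
    """Wrap a policy evaluator so that evaluation errors never grant access."""
    try:
        return evaluate(event)
    except Exception:
        # An evaluation failure must not fail open: default to deny.
        return "deny"

def broken_evaluator(event):
    # Stands in for a Cedar evaluation error (e.g. a malformed policy).
    raise RuntimeError("malformed policy")

print(evaluate_fail_closed(broken_evaluator, {"command": "ls"}))  # -> deny
```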

Limitation: Article 15 robustness covers the entire AI system, including model accuracy and data quality. Vectimus provides cybersecurity controls for agent tool access, not model-level robustness.


Summary

| Article | Requirement      | Coverage | What Vectimus provides                                                    |
|---------|------------------|----------|---------------------------------------------------------------------------|
| Art. 9  | Risk management  | PARTIAL  | Incident-driven policies, observe mode, 81 mitigation rules                |
| Art. 12 | Record-keeping   | HIGH     | JSONL audit logs with full event and decision context                      |
| Art. 13 | Transparency     | PARTIAL  | Open-source policies, human-readable deny reasons, @controls traceability  |
| Art. 14 | Human oversight  | PARTIAL  | Escalation to human review, governance bypass prevention, safety gates     |
| Art. 15 | Cybersecurity    | PARTIAL  | Supply chain, RCE, exfiltration, credential and config protection          |

What Vectimus does not cover

The EU AI Act addresses the full lifecycle of AI systems. Several requirements sit outside the scope of tool-level governance:

  • Article 6 (Risk classification): Determining whether your AI system is high-risk. This is a product-level decision based on intended use and deployment context.
  • Article 10 (Data governance): Training data quality, representativeness and bias detection.
  • Article 11 (Technical documentation): System design documentation, model cards and performance metrics.
  • Article 13 (full scope): Model output interpretability and confidence communication.
  • Article 15 (full scope): Model accuracy metrics and data robustness.

Vectimus provides evidence for Articles 12, 13, 14 and 15 where they relate to AI agent tool access and cybersecurity. It does not replace a broader EU AI Act compliance programme.