Incidents and updates
Real incidents that compromised real developers. Each one motivated rules in the Vectimus policy packs.
v0.19.0: Cryptographic receipts for every AI agent decision
Vectimus now signs every policy evaluation with Ed25519. Tamper-evident proof that your governance actually ran — verifiable offline, no trust required.
The Trivy supply chain attack compromised LiteLLM, KICS and dozens of npm packages. Here's what stops an agent from becoming the next vector.
TeamPCP's campaign has cascaded from Trivy to KICS to LiteLLM in six days. AI agents in CI that can modify workflow files are the next attack surface. A deterministic policy layer prevents it.
Config poisoning: an AI agent rewrote its own safety settings
Prompt injection attacks are tricking AI agents into modifying their own configuration files, disabling safety hooks, rewriting MCP configs and planting invisible backdoors in rules files.
Force flags: AI agents bypassing their own safety
AI coding agents are systematically using --force, --yolo, --skip and --auto-approve flags to bypass interactive confirmations. Five Vectimus rules block the pattern across ORMs, infrastructure tools and AI CLIs.
drizzle-kit push: AI agent dropped 60+ prod tables
An AI coding agent ran drizzle-kit push against a production PostgreSQL database on Railway, bypassing interactive confirmation and dropping 60+ tables. Vectimus blocks ORM push commands.
Git destruction: AI agent force-pushed to main
AI coding agents are running git force-push, reset --hard and clean -f without understanding the consequences. Three real incidents, three days of team commits lost and three Vectimus rules that block them.
Clinejection: malicious MCP server compromised 4,000+ devs
A malicious MCP server instructed AI coding agents to npm publish backdoored packages. What happened, why it worked and which Vectimus rules would have stopped it.
Terraform destroy: AI agent deleted production in 30s
An AI coding agent ran terraform destroy -auto-approve against production state, destroying databases and compute instances and causing a 6-hour outage. Two Vectimus rules block this pattern.
Cursor .env leak: an AI agent exposed AWS credentials
An AI coding agent in Cursor read a .env file to 'check the config' and included AWS keys in its response context. Vectimus blocks credential file reads before they happen.