63% of Organizations Cannot Stop Their Own AI Agents. The Kill Switch Problem Is an Identity Problem.

Source: DEV Community
The Kiteworks 2026 Data Security and Compliance Risk Forecast Report dropped a number that should alarm anyone deploying AI agents: 63% of organizations cannot enforce purpose limitations on what their agents are authorized to do. And 60% cannot terminate a misbehaving agent.

Every organization surveyed — 225 security, IT, and risk leaders across 10 industries — has agentic AI on its roadmap. More than half already have agents in production. A third are planning autonomous workflow agents that act without human approval. The deployment is outrunning the governance.

This is not news. What is news is why the governance gap persists.

Model-Level Guardrails Are Not Compliance Controls

Kiteworks makes a distinction that most vendors blur: system prompts, fine-tuning, and safety filters are not compliance controls. They can be bypassed by prompt injection, model updates, or indirect manipulation. The February 2026 "Agents of Chaos" red-team study — conducted by 20 researchers from Harvard, M
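To make the distinction concrete, here is a minimal sketch of what an identity-layer control looks like, as opposed to a system prompt. All names (`AgentIdentity`, `PolicyGateway`) are hypothetical illustrations, not part of any vendor's product: the point is that purpose limits and the kill switch live in infrastructure the model cannot talk its way around.

```python
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """A revocable identity with an explicit purpose allow-list."""
    agent_id: str
    allowed_purposes: set[str]
    revoked: bool = False


class PolicyGateway:
    """Mediates every agent action. Enforcement sits outside the model,
    so prompt injection or a model update cannot bypass it."""

    def __init__(self) -> None:
        self._registry: dict[str, AgentIdentity] = {}

    def register(self, identity: AgentIdentity) -> None:
        self._registry[identity.agent_id] = identity

    def revoke(self, agent_id: str) -> None:
        # The kill switch: revoking the identity halts all future actions,
        # regardless of what the agent "wants" to do.
        if agent_id in self._registry:
            self._registry[agent_id].revoked = True

    def authorize(self, agent_id: str, purpose: str) -> bool:
        # Purpose limitation: every tool call is checked against the
        # identity's allow-list before it is permitted to execute.
        ident = self._registry.get(agent_id)
        if ident is None or ident.revoked:
            return False
        return purpose in ident.allowed_purposes


gateway = PolicyGateway()
gateway.register(AgentIdentity("agent-1", allowed_purposes={"summarize_docs"}))

print(gateway.authorize("agent-1", "summarize_docs"))   # permitted purpose
print(gateway.authorize("agent-1", "exfiltrate_data"))  # outside the allow-list
gateway.revoke("agent-1")
print(gateway.authorize("agent-1", "summarize_docs"))   # killed: nothing runs
```

In this framing, "63% cannot enforce purpose limitations" means the `authorize` check does not exist in their stack, and "60% cannot terminate a misbehaving agent" means there is no equivalent of `revoke` — which is why the kill switch problem is an identity problem.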