A new category of risks that arise from autonomy, tool access, multi-step reasoning, and inter-agent communication.
Traditional AI risk management focuses on model risk: bias, drift, accuracy degradation, and data quality. Agentic AI introduces an entirely new category of risks that arise from the combination of autonomy, tool access, multi-step reasoning, and inter-agent communication. These risks do not exist in conventional ML deployments and are not adequately addressed by existing model risk management frameworks like SR 11-7.
Risks from agents accumulating and exercising permissions in unpredictable ways. Cumulative operational authority means an agent's effective permissions are a dynamic composite of its own entitlements, the authority delegated to it by users, the tools available to it, and its operational context. This creates privilege escalation paths, delegation chain risks, and blast radius expansion that static access controls cannot prevent.
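The composite nature of agent authority can be made concrete with a minimal sketch. All class and permission names below are illustrative, not drawn from any particular framework: the point is that the union of separately-granted permissions can exceed what any single static access review anticipated.

```python
from dataclasses import dataclass

@dataclass
class AgentContext:
    """Hypothetical view of the sources feeding an agent's authority."""
    own_entitlements: set[str]       # granted to the agent directly
    delegated_from_user: set[str]    # inherited from the invoking user
    tool_permissions: set[str]       # implied by the tools it can call

    def effective_permissions(self) -> set[str]:
        # Effective authority is the composite, not any single grant.
        return self.own_entitlements | self.delegated_from_user | self.tool_permissions

ctx = AgentContext(
    own_entitlements={"read:ledger"},
    delegated_from_user={"write:payments"},
    tool_permissions={"exec:shell"},
)
# No single source granted all three, but the composite holds them all.
assert ctx.effective_permissions() == {"read:ledger", "write:payments", "exec:shell"}
```

A static review of any one grant in isolation would pass; the risk lives in the union.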
Risks from the fact that agents reason, and that reasoning can fail differently from statistical model errors. Epistemic drift occurs when an agent's premises become stale while its logic remains valid: the agent acts on a version of reality that no longer exists. This category also includes hallucination, reasoning chain corruption, and causal dependency failures.
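One simple control against epistemic drift is to timestamp the premises an agent reasons from and flag any that have not been revalidated within a freshness window. This is a minimal sketch, assuming a hypothetical `Premise` record; field names and the 30-day window are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Premise:
    statement: str
    last_validated: datetime  # when this belief was last checked against reality

def stale_premises(premises: list[Premise], max_age: timedelta) -> list[Premise]:
    """Return premises older than the freshness window."""
    now = datetime.now(timezone.utc)
    return [p for p in premises if now - p.last_validated > max_age]

premises = [
    Premise("counterparty limit is 10M", datetime.now(timezone.utc) - timedelta(days=90)),
    Premise("market is open", datetime.now(timezone.utc)),
]
flagged = stale_premises(premises, max_age=timedelta(days=30))
# The agent's logic may be sound, but this premise describes an old world.
assert [p.statement for p in flagged] == ["counterparty limit is 10M"]
```

The check does not validate the premises themselves; it only surfaces which beliefs are overdue for revalidation before the agent acts on them.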
Risks emerging when multiple agents interact, creating system-level behaviours no individual agent was designed to produce. Compounding uncertainty is multiplicative, not additive. Conflicting objectives between agents can create deadlocks or adversarial dynamics. Emergent coordination failures are unpredictable from individual agent analysis.
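The multiplicative nature of compounding uncertainty is easy to underestimate. The sketch below computes end-to-end reliability of a chain of agents as the product of per-agent reliabilities; the 95% figure is purely illustrative.

```python
from math import prod

def chain_reliability(per_agent: list[float]) -> float:
    """End-to-end reliability when every agent in the chain must succeed."""
    return prod(per_agent)

# Five agents, each 95% reliable, leave the chain under 78% reliable.
r = chain_reliability([0.95] * 5)
assert abs(r - 0.7737809375) < 1e-9
```

An additive intuition (5 × 5% ≈ 25% risk) already understates how quickly longer chains degrade, and it assumes independent failures; correlated failures or adversarial dynamics between agents make the picture worse, not better.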
Risks from operational characteristics of autonomous systems interacting with external services, tools, and infrastructure. API dependency risks, tool misuse, environmental manipulation, and the expanded attack surface created by agent-to-service integrations.
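One basic mitigation for the expanded attack surface is to route every agent-to-service call through a single gate that enforces an allowlist. This is a deliberately minimal sketch; the tool names and gate function are hypothetical, not part of any real agent framework.

```python
# Illustrative allowlist of tools this agent may invoke.
ALLOWED_TOOLS = {"search_docs", "get_balance"}

def gate_tool_call(tool_name: str, args: dict) -> dict:
    """Single choke point for agent-to-service calls."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not allowlisted")
    # In a real system this is where rate limits, argument validation,
    # and audit logging would also live.
    return {"tool": tool_name, "args": args}

assert gate_tool_call("get_balance", {"account": "A1"})["tool"] == "get_balance"

blocked = False
try:
    gate_tool_call("transfer_funds", {"to": "X", "amount": 1_000_000})
except PermissionError:
    blocked = True
assert blocked
```

Concentrating calls in one gate does not remove API dependency risk, but it makes the agent-to-service surface enumerable, which static network controls alone do not.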
Risks threatening the institution's ability to explain, justify, and take responsibility for agent-driven outcomes. Attribution gaps when decisions traverse multiple agents. Reasoning opacity when audit trails capture actions but not reasoning. Explainability failures when regulatory requirements demand human-understandable justification.
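The gap between logging actions and logging reasoning can be illustrated with a sketch of an audit record that carries both, plus the delegation chain needed for attribution across agents. The schema and field names are assumptions for illustration, not a standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    agent_id: str
    action: str
    reasoning: str               # why the agent chose this action
    upstream_agents: list[str]   # delegation chain, for attribution

record = AuditRecord(
    agent_id="credit-review-agent",
    action="escalate_application",
    reasoning="income documents conflict with the declared employer",
    upstream_agents=["intake-agent"],
)
line = json.dumps(asdict(record))
# The record preserves both the decision and the chain it traversed.
assert json.loads(line)["upstream_agents"] == ["intake-agent"]
```

A trail that dropped the `reasoning` and `upstream_agents` fields would still replay what happened, but could not answer a regulator's "why" or attribute the outcome across the chain.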
Risks from deploying agents into organisations whose structures are simultaneously being transformed. The AI Productivity Trap: automating existing workflows without restructuring for the agent-native operating model. Coordination tax collapse: when AI handles execution, coordination overhead evaporates faster than governance can adapt. Knowledge at rest: institutional expertise remains undocumented and unavailable to agents.
Understand the risks your existing frameworks miss and build the controls your regulator expects.