R-RC-01 Regulatory & Compliance DAMAGE 4.2 / Critical

Framework Obsolescence

Regulatory frameworks designed for traditional AI do not address agentic-specific risks. Compliance with existing frameworks creates false assurance.

The Risk

Existing regulatory frameworks (NIST AI RMF 1.0, MAS AIRG, EU AI Act) were designed for traditional AI systems: models with predefined inputs, models that make predictions or classifications, systems that require human review. These frameworks do not address agentic-specific risks: autonomous decision-making without human approval, delegation to other agents, emergent behavior from agent interactions, online learning that changes agent behavior at runtime.

Organizations deploy agentic systems and assess them against existing regulatory frameworks. They check boxes: "Does the system have documentation?" Yes. "Is there monitoring and testing?" Yes. "Is there human oversight?" Yes (for high-risk decisions). The organization concludes that the system complies with the regulatory framework.

But the framework itself is obsolete. It does not ask the right questions about agents. It does not require agent-specific controls. Compliance with an obsolete framework creates false assurance: the organization believes it is compliant when it is actually exposing itself to agentic-specific risks that the framework does not address.

How It Materializes

A financial services firm interprets MAS AIRG requirements for an agentic trading system. The firm documents trading risks, market risks, and operational risks. The firm documents that the agent is trained on historical market data and tested for accuracy. The firm assesses the system as high-risk (it makes trading decisions that affect capital) and applies additional controls including model transparency with confidence scores, performance monitoring comparing real-world results to test-set performance, and human oversight requiring approval for trades above $10 million.
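The human-oversight control in this scenario reduces to a simple threshold gate. A minimal sketch (the $10 million figure comes from the scenario above; the function and constant names are illustrative assumptions):

```python
# Illustrative sketch of the threshold-based human oversight control in the
# scenario above: trades above the limit are routed to human approval.
# Names and the routing mechanism are assumptions for illustration.

APPROVAL_THRESHOLD_USD = 10_000_000

def requires_human_approval(trade_value_usd: float) -> bool:
    """Route trades above the threshold to human review."""
    return trade_value_usd > APPROVAL_THRESHOLD_USD

print(requires_human_approval(12_000_000))  # -> True
print(requires_human_approval(9_500_000))   # -> False
```

Note that this gate keys only on trade size: it cannot detect an agent whose competence has degraded while its individual trades remain under the threshold.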

The firm checks all boxes and concludes that the agentic trading system complies with MAS AIRG. However, MAS AIRG was not designed for agents. The framework does not require assessment of agent autonomy degradation, agent learning at runtime, emergent behaviors from coordination with other agents, delegation and tool use governance, or continuous risk changes based on market conditions.

The firm has complied with an obsolete framework that does not address these agentic risks. Six months after deployment, the agent's performance degrades when market conditions become highly volatile. The agent's confidence scores remain high, but its actual accuracy drops. Because the risk assessment was static (performed at deployment), the firm does not re-assess the system's risk profile. The agent continues operating at the autonomy level granted at deployment, even though its competence has degraded.

Regulators later question the firm: "Your risk assessment indicates the system is high-risk and requires enhanced oversight. Where is the enhanced oversight?" The firm points to human approval requirements for large trades. But the regulators note that the agent had been operating with degraded accuracy, masked by persistently high confidence scores, for weeks before the firm recognized the problem. The firm's reliance on an obsolete framework created false assurance that allowed the agent to operate outside its competence envelope without detection.

DAMAGE Score Breakdown

| Dimension | Score | Rationale |
| --- | --- | --- |
| D - Detectability | 4 | Framework obsolescence is not visible until the agent exhibits risks the framework did not account for. Compliance with the framework creates false assurance that the system is governed. |
| A - Autonomy Sensitivity | 5 | Framework obsolescence is most severe for autonomous agents. Frameworks designed for traditional AI do not address autonomous agentic risks. |
| M - Multiplicative Potential | 4 | Framework obsolescence affects all agents assessed against obsolete frameworks. The number of agents using obsolete frameworks compounds the risk. |
| A - Attack Surface | 3 | Framework obsolescence is not a direct security vulnerability, but it creates windows of opportunity in which agentic-specific risks are not being monitored. |
| G - Governance Gap | 5 | Regulatory frameworks have not been updated to address agentic systems, and governance processes have not been redesigned for agents. Organizations are using frameworks designed for a different type of system. |
| E - Enterprise Impact | 4 | Framework obsolescence leads to inadequate governance of agentic systems. Impact becomes apparent when agentic-specific risks materialize. |
| Composite DAMAGE Score | 4.2 | Critical. Requires immediate architectural controls. Cannot be accepted. |
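The composite score of 4.2 is consistent with an unweighted mean of the six dimension scores, rounded to one decimal place. A minimal sketch, assuming that aggregation rule (the document does not state the formula explicitly):

```python
# Sketch of the composite DAMAGE calculation. Assumption: the composite is
# the unweighted mean of the six dimension scores, rounded to one decimal.
# The document does not state the aggregation rule; this merely reproduces 4.2.

DIMENSIONS = {
    "D - Detectability": 4,
    "A - Autonomy Sensitivity": 5,
    "M - Multiplicative Potential": 4,
    "A - Attack Surface": 3,
    "G - Governance Gap": 5,
    "E - Enterprise Impact": 4,
}

def composite_damage(scores: dict[str, int]) -> float:
    """Average the dimension scores and round to one decimal place."""
    return round(sum(scores.values()) / len(scores), 1)

print(composite_damage(DIMENSIONS))  # -> 4.2
```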

Agent Impact Profile

How severity changes across the agent architecture spectrum.

| Agent Type | Impact | How This Risk Manifests |
| --- | --- | --- |
| Digital Assistant | Low | DA operates with human oversight. Existing frameworks govern DA adequately because DA requires human approval; DA risks are traditional AI risks, not agentic risks. |
| Digital Apprentice | Low | AP is supervised. Existing frameworks are adequate for supervised agents because AP does not exhibit fully autonomous behavior. |
| Autonomous Agent | High | AA exhibits autonomous behavior not covered by existing frameworks. Risk assessments based on existing frameworks will underestimate agentic risks. |
| Delegating Agent | High | DL invokes tools. Existing frameworks do not address tool-invocation governance, so risk assessments will be incomplete. |
| Agent Crew / Pipeline | High | CR chains agents. Existing frameworks do not address agent coordination or emergent behaviors in pipelines, so risk assessments will be incomplete. |
| Agent Mesh / Swarm | Critical | MS features dynamic peer-to-peer delegation and emergent coordination. Existing frameworks are wholly inadequate; risk assessments based on them are fundamentally wrong. |

Regulatory Framework Mapping

| Framework | Coverage | Citation | What It Addresses | What It Misses |
| --- | --- | --- | --- | --- |
| NIST AI RMF 1.0 | Partial | GOVERN, MANAGE | Traditional AI governance; applicable to models and systems. | No specific guidance on agent autonomy, runtime learning, delegation, or emergent behaviors. |
| MAS AIRG | Partial | Sections 2-6 | Governance, risk management, explainability, monitoring, and human oversight for financial services. | No specific guidance on agent autonomy changes, learning, coordination, or emergence. |
| EU AI Act | Partial | Articles 6-14 | High-risk AI classification and obligations for providers and users. | No specific guidance on autonomous agents, runtime learning, delegation, or agent coordination. |
| NIST GenAI Profile | Partial | Sections 4-5 | NIST AI RMF adapted for generative AI. | No specific guidance on non-generative agents, autonomous agents, or agent coordination. |
| OWASP Agentic Top 10 | Addressed | Security risks specific to agentic systems | Agentic security risks that traditional frameworks do not cover. | Does not address all agentic governance risks (e.g., autonomy escalation, learning, emergence). |
| Berkeley Agentic Profile | Addressed | Architectural patterns and governance for agentic systems | Agentic design patterns and autonomy governance. | Not yet adopted by regulators; guidance is research-oriented rather than regulatory. |
| SR 11-7 | Minimal | Model risk management | Traditional model risk governance. | Predates widespread agentic deployment; no agentic guidance. |
| GDPR | Minimal | Article 22 (Automated Decision-Making) | Safeguards for automated decision-making. | No specific guidance on autonomous agents, agent autonomy, or runtime learning. |

Why This Matters in Regulated Industries

Regulators in capital markets, banking, insurance, and healthcare are becoming aware of agentic systems but have not yet updated regulatory frameworks to address agentic-specific risks. Organizations deploying agents and assessing them against existing frameworks are using outdated guidance.

In banking, regulators expect that credit and trading decisions are governed by frameworks designed for traditional AI. But agents may reach decisions through mechanisms those frameworks do not contemplate, and regulators may find that compliance with SR 11-7 or OCC guidance is insufficient for agents.

In insurance, regulators expect that underwriting and claims decisions are governed by frameworks designed for traditional AI. But agents may learn and adapt at runtime, changing their decision-making in ways that static frameworks do not contemplate.

In healthcare, regulators expect that clinical decisions are governed by frameworks designed for traditional AI. But agents may coordinate with other agents or with clinical systems in ways that static frameworks do not contemplate.

Controls & Mitigations

Design-Time Controls

  • Implement agentic-specific risk assessment that goes beyond traditional AI risk frameworks. Assess agent autonomy, learning capabilities, coordination with other agents, and potential for emergent behaviors.
  • Establish agent-specific governance requirements that address risks traditional frameworks do not cover: agent autonomy levels, learning governance, delegation governance, and emergency controls.
  • Conduct regulatory gap analysis that identifies which agentic-specific risks are not covered by applicable regulatory frameworks. Implement additional controls to address gaps.
  • Engage with regulators proactively to understand their expectations for agentic systems. Do not assume that compliance with existing frameworks is sufficient for agents.
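The regulatory gap analysis above can be sketched as a coverage matrix: each agentic-specific risk is checked against the frameworks in scope, and any risk no framework covers is flagged for a compensating control. The framework names below follow this document's mapping table, but the coverage assignments and risk labels are illustrative assumptions, not authoritative readings of the frameworks:

```python
# Illustrative sketch of a regulatory gap analysis. The coverage map is an
# assumption for demonstration; a real analysis would cite specific framework
# clauses for each risk.

AGENTIC_RISKS = [
    "autonomy escalation",
    "runtime learning",
    "delegation / tool use",
    "emergent coordination",
]

# framework -> agentic risks it (at least partially) addresses
COVERAGE = {
    "NIST AI RMF 1.0": set(),
    "MAS AIRG": set(),
    "OWASP Agentic Top 10": {"delegation / tool use", "autonomy escalation"},
}

def uncovered_risks(coverage: dict[str, set], risks: list[str]) -> list[str]:
    """Return the risks not addressed by any in-scope framework."""
    covered = set().union(*coverage.values()) if coverage else set()
    return [r for r in risks if r not in covered]

for risk in uncovered_risks(COVERAGE, AGENTIC_RISKS):
    print(f"GAP: '{risk}' needs a compensating control")
```

Each flagged gap becomes an input to the agent-specific governance requirements listed above.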

Runtime Controls

  • Deploy continuous risk reassessment that updates the agent's risk profile at runtime based on observed behavior. Do not rely on static risk assessments performed at deployment.
  • Implement agent behavior monitoring that tracks autonomy changes, learning, and emergent behaviors. Flag behaviors not anticipated in the design phase.
  • Establish regulatory alignment monitoring that tracks regulatory guidance for agents. If regulators issue new guidance or frameworks, assess whether the deployed agent complies.
  • Use the Blast Radius Calculator (Component 4) to identify which agent decisions have highest regulatory exposure. Require more rigorous governance for high-exposure decisions.
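The first two runtime controls above can be combined into a simple reassessment loop: compare the agent's reported confidence against its realized accuracy over a rolling window, and escalate when the gap exceeds a tolerance. This is a minimal sketch of the failure mode in the trading scenario (high confidence, falling accuracy); the class name, window size, and threshold are illustrative assumptions:

```python
from collections import deque

# Sketch of continuous risk reassessment: detect the failure mode where
# reported confidence stays high while realized accuracy drops.
# Window size and divergence threshold are illustrative assumptions.

class RiskReassessor:
    def __init__(self, window: int = 100, max_gap: float = 0.15):
        self.outcomes = deque(maxlen=window)  # (confidence, was_correct)
        self.max_gap = max_gap

    def record(self, confidence: float, was_correct: bool) -> None:
        self.outcomes.append((confidence, was_correct))

    def needs_escalation(self) -> bool:
        """True when mean confidence exceeds realized accuracy by > max_gap."""
        if not self.outcomes:
            return False
        mean_conf = sum(c for c, _ in self.outcomes) / len(self.outcomes)
        accuracy = sum(ok for _, ok in self.outcomes) / len(self.outcomes)
        return (mean_conf - accuracy) > self.max_gap

monitor = RiskReassessor(window=50, max_gap=0.15)
for _ in range(50):                    # volatile regime: confident but wrong
    monitor.record(confidence=0.95, was_correct=False)
print(monitor.needs_escalation())      # -> True
```

Escalation here would trigger a risk-profile update or an autonomy reduction, rather than leaving the agent at the level granted at deployment.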

Detection & Response

  • Conduct periodic regulatory adequacy audits that assess whether the agent's governance is adequate under regulatory frameworks. If gaps are identified, escalate for governance improvements.
  • Establish regulatory tracking processes that monitor changes in regulatory guidance and frameworks. If new agentic-specific guidance is issued, assess compliance.
  • Create regulatory position documentation that articulates the organization's interpretation of regulatory requirements for agents. Keep this documentation current as regulations evolve.
  • Maintain regulatory optionality by designing agents in ways that comply with multiple interpretations of regulatory requirements, so the organization can adapt if regulators' interpretations change.

Address This Risk in Your Institution

Framework Obsolescence requires governance controls that go beyond what existing regulatory frameworks provide. Our advisory engagements are purpose-built for banks, insurers, and financial institutions subject to prudential oversight.
