The decision traverses multiple agents; no single agent "made" it. Each agent contributed a fragment of the reasoning, so responsibility cannot be assigned.
In regulated industries, accountability for consequential decisions is non-negotiable. Financial institutions are required to trace decisions that affect customers, capital allocation, or risk posture back to an identifiable decision-maker. However, when an agentic system decomposes a decision across multiple specialized agents, the structure of accountability collapses. The loan underwriting agent proposes a decision based on credit analysis; the compliance agent flags regulatory concerns and modifies recommendations; the portfolio risk agent adds constraints on exposure; the orchestrating agent synthesizes these inputs and returns a final decision.
The critical problem is not technical but legal and structural: who is accountable for the decision? Each agent made a partial contribution. None of them, individually, "made the decision" in the sense that regulators and auditors understand it. The orchestrating agent selected among options presented by specialized agents, but selection is not the same as determination. An agent that merely weights options from other agents and chooses the highest-scoring one is not exercising judgment; it is executing an algorithm that may be opaque to review, or whose weighting criteria were never subjected to governance approval.
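To make the point concrete, consider a minimal sketch of such an orchestrator. All names, signatures, and weights here are hypothetical, not drawn from any production system; the sketch exists only to show what is structurally absent:

```python
from dataclasses import dataclass

@dataclass
class AgentOpinion:
    agent: str    # which specialized agent produced this opinion
    option: str   # the outcome it favors, e.g. "approve" or "deny"
    score: float  # the agent's confidence in that outcome

# Hypothetical weights, hard-coded by a developer and never
# submitted to any governance or model-risk approval process.
AGENT_WEIGHTS = {"credit": 0.5, "compliance": 0.3, "portfolio": 0.2}

def orchestrate(opinions: list[AgentOpinion]) -> str:
    """Return the option with the highest weighted score.

    Note what is absent: no decision-maker identity, no policy
    reference, no recorded rationale -- only an argmax.
    """
    totals: dict[str, float] = {}
    for op in opinions:
        totals[op.option] = totals.get(op.option, 0.0) + AGENT_WEIGHTS[op.agent] * op.score
    return max(totals, key=totals.get)
```

Every output of this function is a regulated outcome with no author: the weights, not any accountable entity, determined it.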
Regulators in capital markets, banking, and insurance expect to identify the entity (human or, increasingly, an AI system) responsible for a decision and to verify that entity's authority, competence, and compliance with relevant policies. When decisions are distributed across a mesh of agents, this attribution becomes impossible. Auditors cannot point to a decision-maker. Compliance reviews cannot verify that the decision-maker followed policy. Enforcement against the firm becomes difficult because the causal path from policy violation to regulated outcome is obscured by the agent mesh.
A major U.S. bank implements an agentic system for credit decisioning in its retail lending platform. The system comprises: an Application Agent that normalizes borrower data, a Credit Agent that produces a credit score and risk assessment, a Compliance Agent that checks regulatory constraints (CRA, redlining rules, FCRA), a Portfolio Agent that optimizes for concentration risk and capital efficiency, and an Orchestrator Agent that synthesizes these inputs and produces a decision.
A borrower applies for a $250,000 mortgage. The Application Agent extracts income, assets, and credit history. The Credit Agent produces a score of 680 and assesses loan-to-value risk. The Compliance Agent flags that the borrower's census tract has elevated density of prior denials, triggering heightened scrutiny. The Portfolio Agent determines that adding this loan would exceed approved concentration in the geographic region and recommends denial. The Orchestrator evaluates these inputs and produces a denial decision.
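A hypothetical rendering of what this system's logs might contain for that application (field names illustrative) shows the gap directly. Every entry records what an agent produced; no entry records who is accountable:

```python
# Hypothetical log entries for the $250,000 mortgage application,
# as an orchestrator of this design might record them.
decision_log = [
    {"agent": "ApplicationAgent", "output": {"income_verified": True, "ltv": 0.85}},
    {"agent": "CreditAgent",      "output": {"score": 680, "risk": "moderate"}},
    {"agent": "ComplianceAgent",  "output": {"flag": "census_tract_heightened_scrutiny"}},
    {"agent": "PortfolioAgent",   "output": {"recommendation": "deny",
                                             "reason": "regional_concentration_limit"}},
    {"agent": "Orchestrator",     "output": {"decision": "deny"}},
]

# Each entry records *what* an agent produced; none records *who* is
# accountable, under what authority, or against which policy version.
accountable = [entry for entry in decision_log if "accountable_owner" in entry]
assert accountable == []  # the attribution gap, made concrete
```

The logs reconstruct the mechanics of the decision perfectly and its ownership not at all.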
The borrower files a complaint with the Consumer Financial Protection Bureau (CFPB), alleging discrimination based on zip code (a protected-class proxy). The bank must respond with a written explanation of the decision. But the system architecture defeats this requirement: the Credit Agent produced the score (680), but the decision was not based solely on credit score; the Portfolio Agent's concentration-risk concern was decisive, yet portfolio decisions are not disclosed to applicants. The Orchestrator Agent selected the denial option, but it executed a selection algorithm that has no identity in the bank's governance structure. No single agent "made the decision" in the sense that a human loan officer would own a decision.
The CFPB examines the decision records. The bank produces logs showing agent outputs and orchestration logic. But the CFPB is looking for evidence that the decision was made by an authorized decision-maker, that the decision-maker applied policy correctly, and that the decision was not discriminatory. The distributed-agency structure defeats all three. The bank cannot produce a decision-maker; it can only produce an algorithm. Enforcement risk escalates: the bank faces potential orders to review thousands of decisions, re-decide them under human oversight, and remediate harmed borrowers.
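One architectural control, sketched below under assumed field names, is to make a decision unrecordable unless it is bound to an identifiable, governance-approved decision-maker, an approved policy version, and disclosable reasons. This is an illustrative schema, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AttributableDecision:
    """A decision record that cannot exist without an accountable owner."""
    decision_id: str
    outcome: str                         # e.g. "deny"
    accountable_owner: str               # named human role or chartered system identity
    delegated_authority: str             # governance artifact granting that authority
    policy_version: str                  # approved policy the decision applied
    principal_reasons: list[str]         # disclosable reasons for adverse-action notices
    agent_contributions: dict[str, str]  # per-agent inputs, for forensic reconstruction
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def __post_init__(self) -> None:
        # Refuse to record a decision with no accountable owner or no
        # disclosable rationale: the gap surfaces at write time, not
        # later under regulatory examination.
        if not self.accountable_owner or not self.principal_reasons:
            raise ValueError("decision lacks an accountable owner or disclosable reasons")
```

Under a schema like this, the mortgage denial above could not have been logged as-is: no owner had been designated, and the decisive concentration-risk rationale is not a disclosable principal reason, so the design flaw would surface before the CFPB complaint rather than after it.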
| Dimension | Score | Rationale |
|---|---|---|
| D - Detectability | 4 | Attribution failures are visible only under regulatory review or litigation. They become apparent when auditors request decision justification and cannot obtain it. Real-time detection of attribution gaps is difficult without forensic tooling. |
| A - Autonomy Sensitivity | 5 | As agent autonomy increases (agents make decisions independently rather than generating recommendations), attribution gaps become more severe. The more distributed the decision-making, the more severe the accountability void. |
| M - Multiplicative Potential | 4 | Each decision by the agentic system carries attribution risk. In high-frequency decision contexts (credit decisioning, transactions, claims), the number of decisions compounds exposure. One unattributable decision is a regulatory concern; thousands are a systemic violation. |
| A - Attack Surface | 3 | Attribution gaps arise naturally from multi-agent architectures. An adversary could deliberately design an agentic system to obscure decision pathways, making attribution impossible; distinguishing accidental from intentional obfuscation is difficult. |
| G - Governance Gap | 4 | Most organizations have not yet built governance structures that account for distributed agency. Traditional accountability frameworks assume a single decision-maker; multi-agent systems break that assumption. |
| E - Enterprise Impact | 4 | Regulatory findings on attribution failures can trigger consent orders, civil penalties, and litigation. Remediation is expensive and time-consuming. Reputational damage if the market learns that the bank cannot explain its lending decisions. |
| Composite DAMAGE Score | 4.1 | Critical. Requires immediate architectural controls. Cannot be accepted. |
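For readers who want to reproduce the composite above, the sketch below treats it as a weighted mean of the six dimension scores. The weights are hypothetical (the methodology does not publish them); they are chosen only to show that a 4.1 composite implies non-uniform weighting, since the unweighted mean of these scores is 4.0:

```python
# DAMAGE dimension scores from the table above.
scores = {"D": 4, "A_autonomy": 5, "M": 4, "A_attack": 3, "G": 4, "E": 4}

# Hypothetical weights (must sum to 1.0); not the published methodology.
weights = {"D": 0.15, "A_autonomy": 0.25, "M": 0.15, "A_attack": 0.15, "G": 0.15, "E": 0.15}

assert abs(sum(weights.values()) - 1.0) < 1e-9
composite = sum(scores[k] * weights[k] for k in scores)
print(f"{composite:.1f}")  # 4.1 with these illustrative weights; 4.0 if unweighted
```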
The severity of this risk varies across the agent architecture spectrum:
| Agent Type | Impact | How This Risk Manifests |
|---|---|---|
| Digital Assistant | Minimal | DA operates with human-in-the-loop approval at every step. The human retains accountability. No attribution gap because the human decision-maker is identifiable. |
| Digital Apprentice | Low | AP builds recommendations progressively under supervision. At each stage, the supervising human reviews and approves. Attribution remains with the human. |
| Autonomous Agent | High | AA operates independently within boundaries. Accountability for the AA's decisions must be assigned either to the agent itself or to the organization that deployed it, and attribution becomes ambiguous if the AA was not designed with a clear accountability structure. |
| Delegating Agent | High | DL invokes multiple downstream tools/APIs via function calling. Each tool invocation adds a potential attribution layer. If tools are other agents, the attribution problem compounds. |
| Agent Crew / Pipeline | Critical | CR chains multiple agents in sequence or parallel. Each agent in the pipeline contributes to the final outcome. Attribution becomes impossible because no agent in the pipeline is solely responsible. This is the highest-risk architecture for attribution gaps. |
| Agent Mesh / Swarm | Critical | MS features dynamic peer-to-peer delegation and emergent coordination. No clear pipeline. Attribution is nearly impossible. Accountability dissolves into the mesh. This architecture should not be deployed for regulated decisions without explicit attribution redesign. |
| Framework | Coverage | Citation | What It Addresses | What It Misses |
|---|---|---|---|---|
| NIST AI RMF 1.0 | Partial | GOVERN, MEASURE, MANAGE | Requires documented AI governance and decision accountability. | No specific guidance on agent mesh accountability or distributed decision-making structures. |
| EU AI Act | Partial | Article 12, Article 13 | Requires that high-risk AI systems maintain records enabling post-hoc determination of decisions. | No specific obligation to ensure single-agent accountability or to redesign multi-agent systems for attribution. |
| MAS AIRG | Partial | Section 4 (Accountability and Governance) | Requires firms to maintain clear accountability for AI-driven decisions. Accountability must be assigned to an identifiable entity. | No guidance on distributed decision-making or agent mesh structures. |
| SR 11-7 / MRM | Minimal | Model risk management | Provides a framework for model risk but treats AI as a component, not an agentic system. | Does not address agent attribution, multi-agent orchestration, or distributed accountability. |
| GDPR | Minimal | Article 22 | Grants individuals the right not to be subject to solely automated decisions with legal or similarly significant effects, including a right to human intervention. | Does not address multi-agent systems or the inability to assign accountability. |
| OWASP Agentic Top 10 | High | A1, A5 | Addresses agent-specific risks with the implicit assumption that an agent is a discrete entity. | Does not provide guidance on maintaining attribution across multi-agent systems. |
In banking, credit decisions that affect customers' financial futures must be traceable and defensible. The Community Reinvestment Act (CRA) requires banks to document lending patterns and demonstrate non-discrimination. The CFPB has broad authority to examine lending decisions for fair lending violations. A bank cannot defend a lending decision by pointing to a system; it must explain why the decision was made, by whom, and in accordance with what policy. When decisions are distributed across agents, the bank cannot meet this requirement.
In capital markets and trading, regulatory frameworks require that trades be traceable to a responsible party. Markets regulators (SEC, CFTC, FCA) require firms to maintain audit trails sufficient to reconstruct market-moving decisions. If a trading agent made a decision, the firm must be able to identify which agent made it, on what basis, with what constraints, and whether it complied with policy. When trading logic is distributed across a mesh of agents, this traceability is lost.
In insurance, underwriting decisions affect policyholders' ability to obtain coverage. Insurance regulators require that underwriting decisions be based on approved underwriting guidelines and that those decisions be documentable. An insurer using an agentic system for underwriting must be able to explain why a particular application for coverage was approved or denied. If the decision emerged from a multi-agent process, the insurer cannot explain it, and regulators will question whether the decision actually complied with underwriting guidelines.
Closing the Attribution Gap requires architectural controls that go beyond what existing frameworks provide. Our advisory engagements are purpose-built for banks, insurers, and financial institutions subject to prudential oversight.
Schedule a Briefing