The agent blends all evaluative considerations into a single generation pass rather than separating boundary constraints from trade-off parameters, so its reasoning cannot be decomposed or challenged.
In a well-designed decision-making system, the architecture should separate concerns: identify the boundary constraints (what decisions are even possible), then optimize within those boundaries (what is the best decision among the possible ones). This separation is important for auditability and correctness: you can verify that boundaries are being enforced and that optimization is happening correctly within those boundaries.
Many agent systems, however, blend all considerations (constraints and objectives) into a single end-to-end generation pass. The agent receives a prompt like "decide whether to approve this transaction" and produces a decision. All considerations (regulatory requirements, cost, speed, customer preference, risk) are mixed together in the agent's internal reasoning. When the agent produces a decision, there is no clear separation between "boundary constraint reasoning" and "optimization reasoning," making it impossible to audit whether boundaries were enforced or to challenge the decision.
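The separation described above can be sketched as a two-stage pipeline: a constraint stage that determines what is permitted, and an optimization stage that chooses among permitted options, with each stage recorded separately for audit. This is a minimal illustration, not a reference implementation; the constraint names, thresholds, and fee logic are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Decision:
    approved: bool
    rationale: dict = field(default_factory=dict)  # auditable per-stage record


def check_constraints(txn: dict) -> list[str]:
    """Stage 1: boundary constraints. Returns the list of violated constraints."""
    violations = []
    # Hypothetical regulatory rule: large transactions require verified KYC.
    if txn["amount"] > 10_000 and not txn["kyc_verified"]:
        violations.append("kyc_required_above_10k")
    # Hypothetical risk boundary: hard fraud-score ceiling.
    if txn["fraud_score"] > 0.9:
        violations.append("fraud_score_exceeds_threshold")
    return violations


def optimize(txn: dict) -> dict:
    """Stage 2: optimization within the feasible set (e.g., pricing)."""
    fee = 0.01 * txn["amount"] if txn["amount"] > 1_000 else 5.0
    return {"fee": fee}


def decide(txn: dict) -> Decision:
    violations = check_constraints(txn)
    if violations:
        # Rejections are attributed to the constraint stage, so an auditor
        # can verify that boundaries (not profit logic) drove the outcome.
        return Decision(False, {"stage": "constraints", "violations": violations})
    terms = optimize(txn)
    return Decision(True, {"stage": "optimization", "constraints": "passed", "terms": terms})
```

Because the rationale names which stage produced the outcome, an auditor can answer "was the boundary enforced?" and "was optimization confined to the feasible set?" as separate questions, which is exactly what a blended end-to-end pass prevents.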
This is fundamentally agentic because LLM-based agents typically operate as end-to-end generation systems where all reasoning happens in a single forward pass through the model. A traditional decision system might have explicit stages (validation, constraint checking, optimization), but an agent's reasoning is opaque and undifferentiated.
A credit card issuer deploys an agent to make real-time credit limit decisions. The agent's prompt is: "Decide whether to approve a credit limit increase for this customer. Consider regulatory requirements, credit risk, customer profitability, and fraud risk."
The agent produces: "Approve: customer has strong credit history (7 years, no lates), utilization is high (90%, so increase can capture more interest), and fraud risk is low. Regulatory requirements for credit allocation are met."
The decision sounds reasonable, but the decision architecture is blended: the agent mixed regulatory compliance reasoning ("allocation requirements are met") with profit optimization ("high utilization so increase can capture interest") in a single pass. There is no clear separation between boundary constraint checking (are regulatory requirements met?) and objective optimization (what increases profit?).
Three months later, an audit discovers that the interest rates the agent set on approved credit limit increases were higher for customers in certain demographic groups. The regulators ask: "Why did you set higher rates for these customers?" The issuer reviews the agent's reasoning and cannot separate "which regulatory requirements the agent checked" from "how the agent weighted customer profitability." The reasoning is blended.
The issuer cannot defend the decision because the decision architecture does not separate constraints from optimization.
| Dimension | Score | Rationale |
|---|---|---|
| D - Detectability | 4 | Blended decisions are invisible unless reasoning is explicitly audited for constraint separation. |
| A - Autonomy Sensitivity | 5 | Agent makes decisions autonomously; architecture is hidden. |
| M - Multiplicative Potential | 4 | Impact scales with number of decisions made and risk that constraints are not enforced. |
| A - Attack Surface | 5 | Any end-to-end agent system without explicit decision architecture is vulnerable. |
| G - Governance Gap | 5 | No standard framework requires agents to separate constraint checking from optimization. |
| E - Enterprise Impact | 5 | Inability to defend decisions, regulatory scrutiny, potential fair lending violations, enforcement action. |
| Composite DAMAGE Score | 3.7 | High. Requires priority attention and dedicated controls. |
How severity changes across the agent architecture spectrum.
| Agent Type | Impact | How This Risk Manifests |
|---|---|---|
| Digital Assistant | Low | Human makes decisions; constraints and optimization are separated in human reasoning. |
| Digital Apprentice | Medium | Apprentice governance requires explicit decision architecture; decisions are auditable. |
| Autonomous Agent | Critical | Agent decision architecture is opaque; constraints and objectives are blended. |
| Delegating Agent | High | Agent invokes tools with blended constraints; tool-level architecture is unknown. |
| Agent Crew / Pipeline | Critical | Multiple agents in sequence with opaque decision architectures. |
| Agent Mesh / Swarm | Critical | Agents coordinate decisions through blended reasoning across peers. |
| Framework | Coverage | Citation | What It Addresses | What It Misses |
|---|---|---|---|---|
| Fair Credit Reporting Act (FCRA) | Addressed | 15 U.S.C. § 1681 et seq. | Requires transparency in credit decisions and adverse action notices. | Does not address agent decision-making architecture. |
| GLBA (Safeguards Rule) | Partial | 16 CFR Part 314 | Requires administrative, technical, and physical safeguards for customer information. | Does not specify decision architecture requirements. |
| FRB Fair Lending Guidance | Addressed | Various FRB guidance on fair lending | Expects credit decisions to be made on the basis of creditworthiness. | Does not address agent-mediated decisions. |
| NIST AI RMF 1.0 | Partial | MEASURE.1, GOVERN.3 | Recommends measurable AI system performance and documented constraints. | Does not specify decision architecture. |
| EU AI Act | Partial | Article 13 (Transparency) | Requires documentation of how high-risk systems operate. | Does not require separation of constraint checking from optimization. |
Regulators expect that consequential decisions (credit, claims, regulatory reporting) are made using defensible logic. When a decision architecture is present and explicit, the regulator can audit: "Is the constraint being enforced?" and "Is the optimization occurring correctly within the constraints?" When decision architecture is absent, the regulator cannot verify that constraints were enforced and cannot defend the decision against challenges.
In credit decisions, for example, fair lending law requires that decisions be made on the basis of creditworthiness, not protected characteristics. A clear decision architecture separates: (1) creditworthiness assessment (boundary: is the customer creditworthy?), and (2) credit terms optimization (within creditworthy customers, how much credit and at what rate?). Without this separation, there is no way to verify that creditworthiness is the primary decision driver and that protected characteristics were not factored in.
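One way to make that separation enforceable, rather than merely documented, is to give each stage an explicit allow-list of inputs, so an audit can mechanically verify that protected characteristics cannot reach either the creditworthiness boundary or the terms optimization. The sketch below uses hypothetical field names and thresholds; real eligibility and pricing logic would be far richer.

```python
# Allow-lists per stage: each stage sees only the fields it is permitted to use.
CREDITWORTHINESS_FIELDS = {"credit_history_years", "delinquencies", "utilization", "income"}
TERMS_FIELDS = {"utilization", "income", "existing_limit"}
PROTECTED_FIELDS = {"age", "race", "sex", "zip_code"}  # proxies may also belong here


def project(customer: dict, allowed: set) -> dict:
    """Pass a stage only its allow-listed fields; fail loudly on misconfiguration."""
    leaked = PROTECTED_FIELDS & allowed
    if leaked:
        raise ValueError(f"protected fields in allow-list: {leaked}")
    return {k: v for k, v in customer.items() if k in allowed}


def is_creditworthy(attrs: dict) -> bool:
    # Boundary stage: eligibility only, no pricing logic.
    return attrs["delinquencies"] == 0 and attrs["credit_history_years"] >= 2


def set_terms(attrs: dict) -> dict:
    # Optimization stage: terms for already-eligible customers only.
    rate = 0.18 if attrs["utilization"] > 0.8 else 0.15
    return {"limit_increase": 2_000, "apr": rate}


def decide(customer: dict) -> dict:
    if not is_creditworthy(project(customer, CREDITWORTHINESS_FIELDS)):
        return {"approved": False, "stage": "creditworthiness"}
    terms = set_terms(project(customer, TERMS_FIELDS))
    return {"approved": True, "stage": "terms", **terms}
```

With this structure, the fair lending question decomposes: an auditor checks the allow-lists to confirm protected characteristics are excluded, checks `is_creditworthy` to confirm eligibility is driven by credit attributes, and checks `set_terms` to confirm pricing applies only within the eligible population.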
Decision Architecture Absence requires architectural controls that go beyond what existing frameworks provide. Our advisory engagements are purpose-built for banks, insurers, and financial institutions subject to prudential oversight.
Schedule a Briefing