Agentic AI Risk Catalog for Highly Regulated Industries

133 risks. 15 categories. Scored on the DAMAGE framework. Mapped to 13 regulatory frameworks. Purpose-built for banks, insurers, and financial institutions subject to prudential oversight.

The DAMAGE Scoring Framework

Every risk is assessed on six dimensions calibrated for highly regulated industries, where the baseline consequence of agent failure includes examination findings, enforcement actions, and capital impacts. The composite DAMAGE score is the average of all six dimensions, each scored 1 to 5.

D - Detectability: How difficult is this risk to detect before harm occurs?

A - Autonomy Sensitivity: Does this risk worsen as agent autonomy increases?

M - Multiplicative Potential: Does this risk compound with other risks or across agents?

A - Attack Surface: How exposed is this risk to adversarial exploitation?

G - Governance Gap: How large is the gap between this risk and current governance frameworks?

E - Enterprise Impact: What is the maximum blast radius if this risk materializes?
Severity bands: Critical (4.0 - 5.0), High (3.0 - 3.9), Moderate (2.0 - 2.9), Low (1.0 - 1.9).
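The composite calculation can be sketched in a few lines. The function names and the example dimension values below are illustrative, not part of the catalog:

```python
# Illustrative only: a composite DAMAGE score is the mean of the six
# dimensions (each scored 1 to 5), mapped to a severity band.
def damage_score(d, a1, m, a2, g, e):
    dims = [d, a1, m, a2, g, e]
    if not all(1 <= v <= 5 for v in dims):
        raise ValueError("each dimension must be scored 1 to 5")
    return round(sum(dims) / len(dims), 1)

def severity_band(score):
    if score >= 4.0:
        return "Critical"
    if score >= 3.0:
        return "High"
    if score >= 2.0:
        return "Moderate"
    return "Low"

# Hypothetical dimension values that average to 4.5, landing in Critical:
score = damage_score(5, 5, 4, 4, 5, 4)   # -> 4.5
band = severity_band(score)              # -> "Critical"
```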

Ask a Question About Agentic AI Risks

Not sure which risks apply to your deployment? Ask in plain language and get a direct answer synthesized from all 133 risks in the catalog, including scenarios, controls, and regulatory mappings.


Authority & Privilege (10 risks)
R-AP-01 4.5
Cumulative Operational Authority

Agent's effective permissions are the dynamic, emergent sum of its own entitlements, delegated authority, tool access, and operational context.

R-AP-02 3.8
Privilege Escalation via Delegation

High-privilege user invokes low-privilege agent, which inherits the user's access level. The agent's effective authority exceeds its design-time configuration.

R-AP-03 4.2
Blast Radius Expansion

An agent's potential impact changes dynamically at runtime based on context. The same agent connected to different systems has radically different blast radii.

R-AP-04 3.9
Permission Ceiling Decay

In recursive delegation chains, authority constraints degrade at each hop. $5K approval authority becomes $50K through local-only policy checks.
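A minimal sketch of the decay mechanism, with hypothetical approval limits: when each hop checks only its local policy, the origin's $5K ceiling never constrains downstream actions.

```python
# Hypothetical sketch of R-AP-04. Each hop in a delegation chain checks
# only its own local approval limit, so the origin's ceiling is never
# enforced downstream.
def local_only_check(amount, local_limit):
    # What each hop does today: consult only its own policy.
    return amount <= local_limit

def propagated_check(amount, chain_limits):
    # The missing control: enforce the minimum ceiling anywhere in the chain.
    return amount <= min(chain_limits)

chain = [5_000, 25_000, 50_000]  # limits at each hop, origin first

# A $50K action passes the final hop's local check alone...
assert local_only_check(50_000, chain[-1])
# ...but fails once the origin's $5K ceiling is propagated.
assert not propagated_check(50_000, chain)
```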

R-AP-05 3.4
Standing Privilege Accumulation

Agents retain permissions granted for previous tasks that were never revoked. Standing privileges exceed current operational need.

R-AP-06 3.6
Tool Authority Inheritance

Agent connects to an API or tool that grants broader access than intended. Database read permission becomes write permission through the tool's own privilege model.

R-AP-07 2.7
Permission Waste (Muda)

Excess authority granted beyond what is strictly necessary for the agent's mission. Every standing permission that is not actively required creates unnecessary blast radius.

R-AP-08 3.5
Environmental Context Exploitation

Agent operates in an environment whose ambient permissions exceed the agent's intended access scope.

R-AP-09 3.7
Cross-System Identity Fragmentation

Agent verified in one platform is anonymous in another. Audit trails break at system boundaries. Governance cannot track agent actions across platforms.

R-AP-10 3.6
Delegation Chain Opacity

When an agent delegates to another agent, the original authority constraints and governance intent are not propagated. Receiving agent has no verifiable record of delegated scope.

Reasoning & Epistemic (13 risks)
R-RE-01 4.1
Hallucination in Operational Context

Agent generates plausible but fabricated facts in a context where downstream systems or humans treat the output as ground truth.

R-RE-02 4.0
Reasoning Chain Corruption

Small errors accumulate across a multi-step reasoning chain. Each step appears locally valid but the cumulative reasoning is wrong.

R-RE-03 4.5
Veto-Tradeoff Confusion

Agent treats a non-negotiable constraint (compliance, safety) as a tradeable parameter, weighting it against cost or speed rather than enforcing it as a hard boundary.

R-RE-04 3.7
Decision Architecture Absence

Agent blends all evaluative considerations in a single generation pass rather than separating boundary constraints from trade-off parameters.

R-RE-05 3.5
Post-Hoc Rationalization

Agent produces explanation of its decision after the fact rather than reasoning through an observable, inspectable process.

R-RE-06 3.4
Causal Dependency Failure

Agent reasons from a correlation that was historically valid but no longer holds. The causal model has changed but the agent cannot detect this.

R-RE-07 3.2
Contextual Poverty

Agent operates with insufficient context: processes only what fits in the current window, forgets previous interactions, lacks organizational memory.

R-RE-08 2.8
Scope Creep in Reasoning

Agent expands the scope of its analysis beyond its assigned task, consuming irrelevant data or making recommendations outside its competence.

R-RE-09 3.6
Confidence-Validity Confusion

Agent reports high statistical confidence in a conclusion built on invalid premises. Confidence score does not reflect premise integrity.

R-RE-10 3.3
Reasoning Non-Reproducibility

Same agent with same inputs produces different reasoning paths and different conclusions on successive runs. Reasoning cannot be reproduced for audit.

R-RE-11 3.5
Reasoning Durability Failure

Agent cannot incrementally refine a prior analysis when new information arrives. Every new input triggers a full regeneration with no structural continuity.

R-RE-12 4.0
World Model Misalignment

Agent constructs an internal representation of how the institution operates that reflects a generic financial institution from training data rather than this specific institution.

R-RE-13 4.3
Proxy Variable Discovery

Agent reasoning discovers and exploits proxy variables that correlate with protected characteristics, producing discriminatory outcomes without referencing a protected class directly.

Temporal & Validity (8 risks)
R-TV-01 3.5
Temporal Validity Drift

Context was accurate at retrieval but has since decayed. Customer profile from 9 AM is stale by 1 PM. Market data from 15 minutes ago misses a flash event.

R-TV-02 3.4
Causal Dependency Drift

Assumed causal relationships between variables have shifted. Historical correlations no longer hold but agent keeps reasoning from them.

R-TV-03 3.6
Assumption Obsolescence

Foundational conditions present at deployment are overtaken by events. System migration, regulatory update, vendor change. Agent premises are invalid.

R-TV-04 3.3
Document Version Blindness

Agent treats all documents with equal temporal weight. Cannot distinguish between current policy and superseded version.

R-TV-05 4.0
Epistemic Gravity Failure

Agent treats all premises as equally important. Does not recognize that certain load-bearing assumptions, if stale, invalidate entire branches of reasoning.

R-TV-06 3.5
Temporal Validity Window Absence

No mechanism exists to declare how long a piece of data remains reliable. Premises have no expiration date. Stale data is consumed indefinitely.
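One shape the missing control could take (the names and windows below are hypothetical): attach a declared validity window to every premise and filter stale ones before reasoning begins.

```python
# Hypothetical sketch of the control R-TV-06 describes as absent: every
# premise carries a declared validity window, and stale premises are
# rejected before they enter the reasoning process.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Premise:
    value: str
    retrieved_at: datetime
    valid_for: timedelta  # declared validity window

    def is_fresh(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now - self.retrieved_at <= self.valid_for

now = datetime.now(timezone.utc)
profile = Premise("customer profile", now - timedelta(hours=4), timedelta(hours=1))
rate = Premise("policy rate", now - timedelta(minutes=5), timedelta(days=1))

# Only the rate premise survives; the 4-hour-old profile has expired.
fresh = [p.value for p in (profile, rate) if p.is_fresh(now)]
```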

R-TV-07 4.1
Regulatory Threshold Lag

Agent operates on regulatory thresholds that changed months ago. Compliance calculations use outdated limits because no update mechanism triggers re-evaluation.

R-TV-08 4.4
Logic Erosion

The logical chain connecting premises to conclusion was valid when constructed but environmental changes have made intermediate steps invalid. Structurally intact but factually wrong.

Multi-Agent & Coordination (10 risks)
R-MC-01 4.2
Compound Error Propagation

In multi-step agent workflows, errors compound exponentially. 99% accuracy per step yields 36.6% success at 100 steps. No individual agent is "wrong" but the system fails.
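The arithmetic behind the 36.6% figure: end-to-end success of a chain of independent steps is the product of the per-step accuracies.

```python
# The per-step arithmetic behind R-MC-01: end-to-end success probability
# of a chain of independent steps is the product of per-step accuracies.
def chain_success(per_step_accuracy, steps):
    return per_step_accuracy ** steps

print(round(chain_success(0.99, 100), 3))   # 0.99^100 ~= 0.366
print(round(chain_success(0.999, 100), 3))  # three nines barely holds 90%
```

Note how sharply the result depends on per-step quality: raising each step from 99% to 99.9% lifts 100-step success from roughly a third to roughly 90%.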

R-MC-02 3.3
Conflicting Objective Deadlock

Two or more agents with conflicting optimization targets reach a state where neither can proceed without violating the other's constraints.

R-MC-03 3.6
Context Loss in Delegation

When one agent delegates to another, critical context is lost. Receiving agent operates with incomplete picture. Each hop strips metadata, constraints, and original intent.

R-MC-04 4.1
Emergent Coordination Failure

Multiple agents produce system-level behavior that no individual agent was designed to produce. Emergent failure mode unpredictable from individual agent analysis.

R-MC-05 2.9
Navigation Failure

Agent cannot navigate organizational structure. Cannot follow work across team boundaries, find the right escalation path, or route to the correct process.

R-MC-06 2.8
Coordination Tax Shift

Deploying agents does not eliminate coordination overhead. It shifts from human-to-human to human-to-agent coordination. Total overhead may increase.

R-MC-07 3.5
Adversarial Inter-Agent Dynamics

Agents competing for shared resources or optimizing conflicting metrics create adversarial dynamics that degrade system performance.

R-MC-08 3.2
Consensus Failure

Multiple agents produce divergent outputs for the same input. No consensus mechanism exists. System cannot determine which output is correct.

R-MC-09 4.0
Shared State Poisoning

One agent corrupts shared context or memory that other agents depend on. Poison propagates laterally through the agent ecosystem.

R-MC-10 3.7
Sigma Degradation Cascade

Upstream agent's low process sigma degrades downstream agent's effective sigma. Quality ceiling cascades through the agent chain.

Agent Communication & Interoperability (9 risks)
R-AC-01 4.1
A2A Agent Card Manipulation

Manipulated or spoofed Agent Card causes other agents to delegate tasks to an imposter, send sensitive data to an unauthorized endpoint, or trust capabilities that do not exist.

R-AC-02 4.2
MCP Server Trust Boundary

A compromised or malicious MCP server can inject adversarial content through resources, expose tools that perform unintended operations, or serve as a data exfiltration channel.

R-AC-03 3.8
Dynamic Skill/Plugin Acquisition

Agent autonomously discovers and installs a skill, expanding its own capability set without human approval. Skill installation bypasses change management controls.

R-AC-04 4.3
Cross-Organizational Delegation Without Governance

Agent-to-agent delegation across organizational boundaries creates dynamic third-party relationships that TPRM does not cover because no contract event triggers assessment.

R-AC-05 3.1
Protocol Version and Interoperability Fragmentation

Agents within the same institution may run different protocol versions, creating silent interoperability failures that surface as semantic errors, not structural ones.

R-AC-06 3.7
Event-Driven Trigger Exploitation

An adversary who can publish to an event stream can trigger agent actions at will. Event infrastructure was designed for stateless consumers, not autonomous actors.

R-AC-07 3.6
Capability Sprawl Through Tool Discovery

Agent's effective capability set grows beyond what was registered, tested, or approved. Runtime capabilities diverge from registered capabilities without any governance event.

R-AC-08 3.4
Human Channel Impersonation

Agents communicating through human channels can be indistinguishable from human participants. No institutional policy requires agents to identify themselves.

R-AC-09 3.8
Skill Composition and Interaction Risk

Skills designed independently can interact in unintended ways when composed. Each skill in isolation was safe. The composition creates emergent risk.

Cybersecurity & Adversarial (10 risks)
R-CS-01 4.5
Prompt Injection (Direct and Indirect)

Adversaries embed instructions in data the agent processes. Existing input validation cannot distinguish adversarial instructions from legitimate content.

R-CS-02 4.0
Agent Identity Spoofing

An adversary can impersonate a legitimate agent in inter-agent communication, inheriting the impersonated agent's trust relationships and permissions.

R-CS-03 4.3
Lateral Movement via Agent Chains

A compromised agent can reach systems that network controls would otherwise isolate, because agent tool connections constitute authorized cross-boundary communication.

R-CS-04 4.2
Data Exfiltration via Agent

Agents can exfiltrate data through tool invocations that DLP does not monitor. The agent transforms data before exfiltration, defeating pattern-based detection.

R-CS-05 4.1
Memory and Context Poisoning

Adversaries can corrupt agent persistent memory through crafted interactions, influencing all future agent decisions without triggering any security alert.

R-CS-06 3.8
Credential and Secret Leakage

Credentials may persist in the agent's context window, appear in logs, be transmitted to downstream agents, or be exposed through tool invocations.

R-CS-07 3.5
Agent as Social Engineering Vector

An agent that interacts with users through natural language can be manipulated to deliver social engineering attacks. Users trust agent outputs differently than emails.

R-CS-08 4.0
Supply Chain Compromise (Model, Plugin, Tool)

A compromised third-party component is inherited by every agent that invokes it. Supply chain controls validate components at deployment, not at runtime invocation.

R-CS-09 3.6
Execution Environment Escape

Agents with code execution capabilities can test sandbox boundaries. A successful escape grants access to host resources, adjacent containers, or the orchestration layer.

R-CS-10 3.7
Attack Surface Expansion via Tool Connectivity

Each tool, API, and data source connected to an agent creates a new attack surface that may not be inventoried. The attack surface changes at runtime.

Operational Resilience (8 risks)
R-OR-01 4.1
Transaction Integrity Failure

Agent can initiate a transaction, lose context mid-process, and fail to complete, rollback, or confirm. Resulting state is invisible to transaction monitoring.

R-OR-02 3.7
Workflow State Corruption

Agent can advance a case past a required approval step, create parallel branches, or leave workflow instances in states that have no defined transition.

R-OR-03 4.3
Approval Chain Bypass

Agents operating with delegated authority can submit requests and satisfy approval requirements using the same authority. The approval chain is functionally collapsed.

R-OR-04 3.8
Resource Exhaustion and Runaway Loops

Agents can create recursive loops. Each invocation appears as a legitimate, independent request. The loop consumes compute until infrastructure fails.
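A sketch of one possible mitigation, with hypothetical names: a per-task invocation budget that travels with the call context, capping aggregate invocations even when each call looks legitimate in isolation.

```python
# Hypothetical guard for R-OR-04: a per-task invocation budget travels
# with the call context, so recursive agent-to-agent calls that each
# look legitimate in isolation are still capped in aggregate.
class BudgetExceeded(Exception):
    pass

class CallBudget:
    def __init__(self, max_invocations):
        self.remaining = max_invocations

    def spend(self):
        if self.remaining <= 0:
            raise BudgetExceeded("invocation budget exhausted; halting loop")
        self.remaining -= 1

def agent_step(budget, depth=0):
    budget.spend()
    # Real work would happen here; this toy agent always re-invokes itself.
    return agent_step(budget, depth + 1)

budget = CallBudget(max_invocations=50)
try:
    agent_step(budget)
except BudgetExceeded:
    pass  # loop halted after 50 invocations instead of running away
```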

R-OR-05 3.5
API Dependency Failure and Silent Degradation

When an external API degrades subtly, the agent continues operating on degraded inputs. Circuit breakers trip on errors, not on semantic degradation.

R-OR-06 4.2
Cascading Infrastructure Failure

Agent delegation paths create cascading failures through systems that have no documented dependency relationship. Blast radius exceeds DR planning.

R-OR-07 3.4
Tool Misuse and Unintended Side Effects

Agents invoke tools through natural language interfaces that may expose operations the agent was never intended to use. API controls authorize connection, not specific operations.

R-OR-08 2.6
Operational Waste Accumulation

Agents generate operational waste that existing monitoring does not measure: unnecessary data movement, excess permissions, unused capabilities, repeated failed actions.

Model & Pipeline Interaction (7 risks)
R-MP-01 3.6
Model Drift Propagation

When an agent consumes outputs from a drifted model and uses them as inputs to decisions or other models, the drift propagates and compounds beyond what monitoring anticipates.

R-MP-02 3.3
Model Version Conflict

Some agents receive the new model version while others still consume cached outputs from the old version. The same decision process uses incompatible model versions.

R-MP-03 4.0
Agent-Triggered Retraining Contamination

Agent outputs written to production data stores enter the retraining pipeline. The model learns from agent errors. Data quality checks validate format, not whether data was agent-generated.

R-MP-04 3.7
Feature Store Poisoning

Agents with write access to data stores that feed feature pipelines can inadvertently modify data that changes feature values consumed by downstream models.

R-MP-05 3.2
Inference Pipeline Disruption

Agents generate query patterns that differ from human users: burst queries, recursive calls, parallel invocations. These patterns can overwhelm inference infrastructure.

R-MP-06 4.1
Model Output as Ground Truth

Agents consume model outputs as inputs and treat them with the same confidence as system-of-record data. The model's confidence intervals evaporate in the agent's prose.

R-MP-07 4.2
SR 11-7 Scope Gap

Agents are not models by SR 11-7 definition, but they create compound model risk that falls outside MRM scope. The compound risk is unowned.

Quality & Measurement (7 risks)
R-QM-01 3.4
Data Sigma Ceiling

Quality of input data constrains maximum achievable agent quality. Raw enterprise data at 3.5 sigma caps everything built on it.

R-QM-02 3.5
Process Sigma Degradation

Agent's output repeatability falls below acceptable threshold without detection. Same inputs produce different outputs with increasing frequency.

R-QM-03 3.8
Agent Sigma Compounding

When agents operate in sequence, each introduces execution uncertainty. Compounding is multiplicative. Agent chain quality can be far lower than any individual agent's quality.
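The compounding is straightforward to model if each agent's repeatability is expressed as a first-pass yield. The yields below are illustrative, not empirical (roughly 93.3% is the figure often quoted for 3 sigma under the 1.5-sigma shift convention):

```python
# The multiplication behind R-QM-03: model each agent's repeatability as
# a first-pass yield, then compound across the chain.
def chain_yield(yields):
    result = 1.0
    for y in yields:
        result *= y
    return result

# Three agents, each individually at ~93.3% first-pass yield (~3 sigma),
# compound to roughly 81% chain yield: well below any single agent.
print(round(chain_yield([0.933, 0.933, 0.933]), 3))
```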

R-QM-04 3.6
Measurement Absence

No sigma-level quality measurement exists for the agentic process. Organization cannot quantify whether the agent is operating at 2 sigma or 4 sigma.

R-QM-05 4.0
False Quality Signal

Agent passes standard performance benchmarks while operating on stale premises or with degraded reasoning. Metrics are green but outputs are wrong.

R-QM-06 2.7
Quality-Autonomy Tradeoff Failure

Organization constrains agent autonomy to compensate for low quality, eliminating the value of agentic AI. Produces expensive chatbots rather than governed agents.

R-QM-07 2.5
Defect Waste Accumulation

Operational cost of out-of-policy actions, runtime errors, and mission failures accumulates without systematic measurement or root cause analysis.

Accountability & Auditability (7 risks)
R-AA-01 4.1
Attribution Gap

Decision traverses multiple agents. No single agent "made" the decision. Responsibility cannot be assigned. Each agent contributed a fragment of reasoning.

R-AA-02 3.7
Reasoning Opacity

Audit trail captures actions (what the agent did) but not reasoning (why the agent did it). Post-incident forensics cannot reconstruct the decision logic.

R-AA-03 4.2
Explainability Failure

Regulatory requirement demands human-understandable justification for an agent-driven outcome. The agent cannot provide one. Post-hoc explanation does not match actual reasoning.

R-AA-04 3.6
Audit Trail Break at Boundaries

Agent actions in downstream systems cannot be traced back to the originating agent or human principal. Governance visibility ends at system boundaries.

R-AA-05 4.3
Accountability Void

No entity in the chain (agent, developer, deployer, user) accepts responsibility for agent-driven outcomes. Accountability is structurally undefined.

R-AA-06 3.5
Interpretive Path Absence

No durable record exists of the evaluative criteria, evidence weighed, trade-offs considered, and boundaries enforced for a given decision.

R-AA-07 3.3
Governance Theater

Organization has policies and audit processes for agents but they are not enforced at runtime. Compliance is checked periodically rather than continuously.

Organizational & Structural (9 risks)
R-ST-01 3.4
AI Productivity Trap

Organization automates existing workflows without restructuring for agent-native operations. AI reduces cost of creation but increases cost of coherence.

R-ST-02 3.3
Coordination Tax Collapse

Coordination overhead evaporates faster than governance can adapt. Informal coordination mechanisms disappear before formal replacements exist.

R-ST-03 3.2
Knowledge at Rest

Institutional expertise remains undocumented and unavailable to agents. Critical judgment lives in people's heads. Agents cannot access contextual knowledge.

R-ST-04 2.8
Vendor Containment

Technology vendor fits "agent" label onto existing product without building genuine agent capabilities. Organization believes it has deployed agents when it has deployed automation.

R-ST-05 4.0
Governance Gap (Cross-System)

Agents span multiple enterprise platforms. Policies defined in one system are not enforced in another. No federated governance layer exists.

R-ST-06 3.5
Workforce Skill Displacement

Agent adoption eliminates roles faster than workforce can reskill. Institutional knowledge exits with departing employees. Remaining workforce cannot oversee agents effectively.

R-ST-07 3.6
Agent Dependency Lock-In

Organization becomes dependent on specific agent implementations. Switching costs escalate. Agent becomes critical infrastructure without governance maturity to match.

R-ST-08 2.9
Organizational Navigation Blindness

Agents have no model of organizational structure. Cannot route work through informal channels, find escalation paths, or follow issues across team boundaries.

R-ST-09 4.1
Premature Autonomy

Organization grants agent autonomy levels that the agent's demonstrated competence does not justify. Agents arrive with capabilities but no track record.

Regulatory & Compliance (7 risks)
R-RC-01 4.2
Framework Obsolescence

Regulatory frameworks designed for traditional AI do not address agentic-specific risks. Compliance with existing frameworks creates false assurance.

R-RC-02 4.3
Cross-Jurisdictional Conflict

Agent operates across jurisdictions with conflicting AI regulations. Compliance in one jurisdiction creates violation in another.

R-RC-03 3.7
Static Assessment Failure

Regulation requires upfront risk assessment but agentic systems evolve at runtime. Static assessment at deployment cannot capture runtime behavior changes.

R-RC-04 3.5
Tool Sovereignty Gap

Agent autonomously selects which tools to use. No regulatory framework defines which entity is accountable for tool-mediated outcomes.

R-RC-05 3.4
Compliance Theater

Organization demonstrates regulatory compliance through documentation and periodic audits while actual agent behavior is ungoverned at runtime. Form without substance.

R-RC-06 3.8
Regulatory Lag Exposure

Regulation changes but agent continues operating under prior rules. No mechanism triggers re-evaluation of agent behavior when regulatory requirements change.

R-RC-07 3.3
Model Risk Conflation

Organization applies model risk management framework to agents without recognizing that agentic risks are categorically different from model risks.

Data Governance & Integrity (10 risks)
R-DG-01 4.1
Data Lineage Severance

Agent reasoning is not a structured transformation. BCBS 239 lineage controls have nothing to trace when data passes through generative reasoning.

R-DG-02 4.3
Silent Data Commingling

Agent reasoning combines data from multiple classification tiers in a single generative pass. Classification propagation fails silently inside the reasoning process.
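One sketch of the missing propagation rule (tier names are hypothetical): any output that combined inputs from multiple tiers inherits the most restrictive tier rather than losing its label inside the reasoning pass.

```python
# Hypothetical control for R-DG-02: when an agent combines inputs from
# multiple classification tiers, the output inherits the most restrictive
# tier instead of silently dropping its labels.
TIERS = ["public", "internal", "confidential", "restricted"]  # least to most

def output_classification(input_tiers):
    # The output takes the highest (most restrictive) tier among inputs.
    return max(input_tiers, key=TIERS.index)

# Mixing a public market feed with restricted customer data yields a
# restricted output, not an unlabeled one.
print(output_classification(["public", "internal", "restricted"]))
```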

R-DG-03 3.7
Training Data Contamination Loop

Agent outputs written to operational data stores blur the boundary between authoritative source data and derived analytics. Future consumers cannot distinguish the two.

R-DG-04 3.5
Data Quality Amplification

Agent consumes data with known quality defects and produces outputs that appear authoritative. The agent launders data quality defects through the appearance of reasoning.

R-DG-05 3.4
Uncontrolled Data Replication

Agents replicate data to tool workspaces, vector databases, context caches, and intermediate stores. Each replica is outside the data management perimeter.

R-DG-06 3.3
Schema Drift Blindness

Agent consumption does not fail on schema changes. It silently misinterprets the new structure because data is consumed through loosely typed interfaces.

R-DG-07 3.6
Derived Data Accountability Gap

Agent-derived data enters operational workflows without metadata distinguishing it from system-of-record data. Existing ownership models cannot assign accountability.

R-DG-08 4.0
Context Window as Uncontrolled Data Store

The agent's context window holds customer PII, financial records, and proprietary data simultaneously. This store is not governed by data-at-rest policies.

R-DG-09 4.2
Data Sovereignty Violation via Processing

When an agent processes data via a model in a non-compliant jurisdiction, no data transfer event occurs. Data sovereignty controls are blind to processing-jurisdiction violations.

R-DG-10 3.8
Synthetic Data Provenance Loss

Synthetic data produced by agents enters data stores, loses provenance markers, and becomes structurally identical to system-of-record data.

Privacy & Cross-Border (8 risks)
R-PV-01 4.2
Consent Architecture Erosion

Agent purpose is determined by the prompt, not the application architecture. Consent scope erodes incrementally through use, not through a discrete change.

R-PV-02 4.4
Cross-Jurisdictional Privacy Conflict

GDPR right to erasure conflicts with AML retention requirements. Agents operate across jurisdictions in a single reasoning pass with no conflict detection mechanism.

R-PV-03 4.1
Inference-Based Re-identification

Agent reasoning can reconstruct identity from non-PII inputs by combining multiple anonymized data points. The institution processes personal data it never explicitly collected.

R-PV-04 3.7
Purpose Limitation Drift

When an agent's prompts and data connections change, no purpose limitation control fires because the application has not changed. Only the agent's behavior has changed.

R-PV-05 3.5
Right of Access Complexity

Agent reasoning is ephemeral. The institution cannot produce the record of data usage that regulation requires because the processing architecture does not generate it.

R-PV-06 4.3
Automated Decision-Making Without Safeguards

When an agent produces a recommendation and a human "approves" without substantive review, the decision is de facto automated but formally human-approved.

R-PV-07 3.6
Third-Party Data Processor Blindness

Agents dynamically invoke tools that process personal data, creating processor relationships the institution's static processor register does not cover.

R-PV-08 3.4
Data Minimization Failure

Agents maximize reasoning quality by ingesting all available data into the context window. The entire customer record enters regardless of whether the current task requires it.

Foundation Model & LLM (10 risks)
R-FM-01 4.0
Silent Model Update by Provider

Model providers update production models without advance notice. The agent's behavior changes without any change to the agent, its prompts, or its tools.

R-FM-02 4.1
Model Provider Dependency and Concentration Risk

Most agentic deployments depend on a single model provider. If the provider experiences an outage or discontinues the model, all agents fail simultaneously.

R-FM-03 4.2
Training Data Bias Propagation

Models inherit biases from training data. The institution cannot access, audit, or remediate biases in a model it does not own.

R-FM-04 3.6
Context Window Overflow and Information Loss

When inputs exceed the context window, content is truncated silently. Critical constraints defined early may be pushed out by subsequent content.
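A toy illustration of the failure mode (the window here counts characters rather than tokens, and all names are hypothetical): naive truncation keeps the most recent content, so constraints stated first disappear first.

```python
# Toy illustration of R-FM-04: naive truncation keeps the most recent
# messages, silently dropping constraints stated at the start.
def fit_to_window(messages, window):
    kept, used = [], 0
    for msg in reversed(messages):      # walk newest first
        if used + len(msg) > window:
            break                       # silent: nothing signals the drop
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))

prompt = ["CONSTRAINT: never exceed $5K", "context A", "context B", "latest request"]
# With a small window, the constraint is the first thing to disappear.
print(fit_to_window(prompt, window=40))
```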

R-FM-05 3.4
Prompt Sensitivity and Brittleness

Small changes in prompt wording cause disproportionately large changes in model output. The input space is so large that exhaustive testing is impossible.

R-FM-06 3.7
Non-Determinism and Output Variance

Same customer query processed by the same agent can produce different recommendations. Non-determinism undermines fair treatment obligations and audit reproduction.

R-FM-07 3.3
Multilingual and Cross-Cultural Inconsistency

LLMs perform differently across languages. An agent accurate in English may produce inferior analysis in other languages. Performance variation creates a compliance gap.

R-FM-08 3.5
Token Economics and Cost Runaway

A single agent interaction that triggers a reasoning loop can consume thousands of dollars in API costs in minutes. Cost monitoring operates on billing cycles; cost runaway operates on seconds.
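The arithmetic of a runaway loop, with placeholder prices rather than any provider's actual rates: re-sending a growing context each iteration makes total token consumption quadratic in iteration count.

```python
# Back-of-envelope arithmetic for R-FM-08 (prices are placeholders, not
# any provider's actual rates): a reasoning loop that re-sends a growing
# context each iteration compounds token costs quadratically.
def loop_cost(iterations, tokens_per_call, price_per_1k_tokens):
    total_tokens = 0
    context = tokens_per_call
    for _ in range(iterations):
        total_tokens += context
        context += tokens_per_call  # each turn appends to the context
    return total_tokens * price_per_1k_tokens / 1000

# 200 iterations at 5K tokens/turn and a placeholder $0.01 per 1K tokens
# comes to roughly $1,005 for a single runaway task.
print(round(loop_cost(200, 5_000, 0.01), 2))
```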

R-FM-09 3.8
Persistent Memory Degradation

Agent memory stores grow through normal operation without expiration, validation, or reconciliation against sources of record. The memory grows but its accuracy decays.

R-FM-10 4.3
Bias Amplification Through Agent Reasoning

Multi-step agent reasoning compounds a mild model-level bias into a severe output-level bias. Each reasoning step reinforces the pattern until the outcome is materially worse.

Address These Risks in Your Institution

Corvair helps banks, insurers, and financial institutions build governance frameworks that address agentic AI risks before they materialize as examination findings or enforcement actions.

Schedule a Briefing