Model & Pipeline Interaction Risks

7 Risks

Risks from agents consuming, invoking, and producing ML model outputs in regulated model risk management contexts. How the agent-model boundary breaks SR 11-7 assumptions and creates compound risk that neither model validation nor agent governance currently addresses.

Category Overview

SR 11-7, MAS AIRG Domain 6, and institutional model risk management (MRM) frameworks govern models used in regulated decisions. These risks document the specific ways agents interact with ML models, inference pipelines, and feature stores in ways that fall outside MRM scope. The agent-model boundary creates compound risk that neither model validation teams nor agent governance teams currently own.

What makes these risks specifically agentic is the agent's role as an intermediary between models and decisions. An agent that consumes a model's risk score, reasons over it in natural language, and produces a recommendation strips the model's confidence intervals, uncertainty bounds, and validation context. The model's output becomes an unqualified input to the agent's reasoning. Downstream humans read the agent's analysis with no visibility into which claims are model-derived versus factual.
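The loss described above can be made concrete with a small sketch. All names here (`ModelOutput`, `to_agent_context`) are hypothetical illustrations, not any institution's actual interface: the typical hand-off serializes only the point estimate into the agent's context, while a qualified hand-off would carry the interval and model provenance along.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    """Hypothetical risk-model response with its validation context."""
    score: float
    ci_low: float
    ci_high: float
    model_id: str
    model_version: str

def to_agent_context(output: ModelOutput) -> str:
    """The common lossy hand-off: only the point estimate survives
    into the agent's natural-language context."""
    return f"The applicant's risk score is {output.score}."

def to_qualified_context(output: ModelOutput) -> str:
    """An alternative hand-off that preserves uncertainty and provenance,
    so downstream readers can tell model-derived claims from facts."""
    return (
        f"Model {output.model_id} v{output.model_version} estimates a risk "
        f"score of {output.score} (95% CI {output.ci_low}-{output.ci_high})."
    )

out = ModelOutput(score=0.72, ci_low=0.61, ci_high=0.83,
                  model_id="credit-risk", model_version="2.4.1")
print(to_agent_context(out))
print(to_qualified_context(out))
```

Once the agent reasons over the first form, no downstream reader can recover the interval or the model identity; the second form keeps both claims auditable.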

Who should care

Model risk management teams, model validation teams, data science leadership, compliance officers overseeing SR 11-7 or MAS AIRG programs, and any institution deploying agents that consume or produce ML model outputs.

Aggregate DAMAGE Profile

Average DAMAGE score: 3.7
Highest score: 4.2 (R-MP-07 SR 11-7 Scope Gap)
Critical-tier risks: 3
Distribution by tier: 3 Critical, 4 High, 0 Moderate, 0 Low

All Model & Pipeline Interaction Risks

R-MP-01 Model Drift Propagation (DAMAGE 3.6)

When an agent consumes outputs from a drifted model and uses them as inputs to decisions, the drift propagates and compounds beyond what monitoring anticipates.
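One reason monitoring misses this: standard drift checks score a single model against its own baseline. A minimal Population Stability Index sketch (illustrative only; the equal-width binning here is simplified) shows what such a check measures, and by construction it says nothing about how a drifted score compounds once an agent feeds it into further decisions.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline score distribution
    and a live one. Rule of thumb in practice: > 0.25 signals material
    drift for the model itself, but not for agent-mediated downstream use."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def frac(xs: list[float], b: int) -> float:
        left, right = lo + b * width, lo + (b + 1) * width
        if b == bins - 1:  # close the last bin so `hi` is counted
            count = sum(1 for x in xs if left <= x <= right)
        else:
            count = sum(1 for x in xs if left <= x < right)
        return max(count / len(xs), 1e-6)  # floor to avoid log(0)

    return sum(
        (frac(actual, b) - frac(expected, b))
        * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )

baseline = [i / 100 for i in range(100)]
drifted = [min(x + 0.5, 1.0) for x in baseline]
```

The check flags the drifted distribution, but nothing in it accounts for the agent consuming the drifted scores and propagating them into decisions the monitor never sees.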

R-MP-02 Model Version Conflict (DAMAGE 3.3)

After a model update, some agents receive outputs from the new version while others still consume cached outputs from the old one. The same decision process mixes incompatible model versions.
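One common root cause is a cache key that omits the model version, so a cached output from the old version is indistinguishable from a fresh one. A minimal sketch (the function name and feature names are hypothetical):

```python
import hashlib
from typing import Optional

def cache_key(features: dict, model_version: Optional[str] = None) -> str:
    """Build a cache key for an inference result. Omitting the model
    version (the common failure mode) lets cached outputs from an old
    version be served alongside fresh outputs from a new one."""
    payload = repr(sorted(features.items()))
    if model_version is not None:
        payload += f"|{model_version}"
    return hashlib.sha256(payload.encode()).hexdigest()

features = {"income": 85_000, "utilization": 0.42}

# Version-blind keys collide across model versions:
assert cache_key(features) == cache_key(features)

# Version-aware keys keep v1 and v2 outputs distinct:
assert cache_key(features, "v1") != cache_key(features, "v2")
```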

R-MP-03 Agent-Triggered Retraining Contamination (DAMAGE 4.0)

Agent outputs written to production data stores enter the retraining pipeline. The model learns from agent errors.
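A minimal mitigation sketch, assuming each row carries a `written_by` provenance tag (an assumption: many production stores record no writer provenance at all). Note that untagged rows pass the filter, which is exactly the gap through which agent outputs reach retraining.

```python
def select_training_rows(rows: list[dict]) -> list[dict]:
    """Exclude agent-written rows from the retraining set.
    Rows with no provenance tag slip through unchallenged."""
    return [r for r in rows if r.get("written_by") != "agent"]

rows = [
    {"id": 1, "label": 0, "written_by": "system_of_record"},
    {"id": 2, "label": 1, "written_by": "agent"},  # agent-derived, excluded
    {"id": 3, "label": 1},                         # no provenance, included
]
clean = select_training_rows(rows)
```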

R-MP-04 Feature Store Poisoning (DAMAGE 3.7)

Agents with write access to data stores that feed feature pipelines can inadvertently modify data that changes feature values consumed by downstream models.

R-MP-05 Inference Pipeline Disruption (DAMAGE 3.2)

Agents generate query patterns that differ from those of human users: burst queries, recursive calls, parallel invocations. These patterns can overwhelm inference infrastructure.
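A standard control is to rate-limit agent-originated inference calls separately from human traffic, for example with a token bucket. A minimal sketch (the parameters are illustrative, not recommendations; real deployments would size them from observed baseline traffic):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter for agent-originated inference calls."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # burst ceiling
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10)
# A 25-call agent burst: the bucket absorbs the burst ceiling, then throttles.
results = [bucket.allow() for _ in range(25)]
```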

R-MP-06 Model Output as Ground Truth (DAMAGE 4.1)

Agents consume model outputs and treat them with the same confidence as system-of-record data. The model's confidence intervals evaporate in the agent's prose.

R-MP-07 SR 11-7 Scope Gap (DAMAGE 4.2)

Agents are not models by SR 11-7 definition, but they create compound model risk that falls outside MRM scope. The compound risk is unowned.


Address Model & Pipeline Risks

The agent-model boundary creates compound risk that SR 11-7 was not designed to govern. Our advisory engagements help institutions extend MRM frameworks to cover agent-mediated model consumption and compound model risk.

Schedule a Briefing