
SR 11-7 Model Risk Management Guide for Banks

The foundational federal framework for model governance in US banking, now directly applicable to AI systems.

What Is SR 11-7?

Federal Reserve Supervisory Letter SR 11-7, issued in 2011 jointly with Office of the Comptroller of the Currency (OCC) Bulletin 2011-12, established the foundational framework for model governance in US banking. Despite being well over a decade old, SR 11-7 remains the most direct regulatory guidance on what constitutes adequate model risk management, and it applies to all banks and bank holding companies, from the largest global institutions to community banks. The letter emerged after the 2008 financial crisis exposed widespread failures in model oversight: credit risk models that systematically underestimated defaults, value-at-risk models that proved dangerously incomplete during market stress, and pricing models that had drifted from reality without anyone noticing. SR 11-7 set the expectation that regulators would examine how banks develop, validate, and govern models, and would not accept "we didn't know" as an excuse.

The Three Pillars

SR 11-7 organizes model risk management around three mutually reinforcing pillars: Model Development and Implementation, Model Validation, and Model Governance. These are not separate silos but an integrated cycle.

Model Development and Implementation covers how models are designed, trained, tested, and deployed. Banks must clearly document the intended use of each model (whether it's credit loss estimation, fraud detection, or interest rate forecasting) and ensure that the data, methodology, and parameters are appropriate for that use. For a credit risk model, this means defining which borrowers the model applies to, what historical data it was trained on, what assumptions about default rates are embedded in it, and what input data it requires. Development documentation should be rigorous enough that a qualified independent reviewer could understand and potentially replicate the model's logic. Banks must also establish formal sign-off procedures: the model owner, risk function, and IT must all agree it is ready before production deployment. This sounds bureaucratic but serves a critical purpose: it forces dialogue between business units and risk, often surfacing assumptions that no one had questioned.
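The sign-off step above is essentially a gate: deployment proceeds only when every required party has formally approved. A minimal sketch, assuming three illustrative parties (the names and structure are hypothetical, not prescribed by SR 11-7):

```python
# Illustrative pre-deployment sign-off gate. The party names below are
# hypothetical; SR 11-7 requires formal approval but does not name the roles.
REQUIRED_SIGNOFFS = {"model_owner", "risk_function", "it"}

def ready_for_production(signoffs):
    """Return (approved?, missing parties) for a set of recorded sign-offs."""
    missing = REQUIRED_SIGNOFFS - set(signoffs)
    return (len(missing) == 0, sorted(missing))

# The gate blocks deployment until all three parties have signed.
print(ready_for_production({"model_owner", "it"}))
print(ready_for_production({"model_owner", "risk_function", "it"}))
```

The point of encoding the gate, rather than tracking approvals informally, is that the missing party is surfaced explicitly, which is what forces the dialogue between business units and risk described above.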

Model Validation is independent testing to confirm a model works as intended and performs adequately in use. This is where SR 11-7 makes clear that validation must be independent: not performed by the same team that built the model. A bank might assign model validation to its independent risk management function, an internal model risk team, or an external consultant, but it cannot be self-validation by the model developer. Validation includes back-testing (does historical performance match what the model predicted?), stress-testing (how does the model behave under extreme scenarios?), and sensitivity analysis (if key inputs change, how does the output change?). Validation also probes for model limitations and biases. If a credit model was trained on data from 2005–2007, does it understand post-2008 credit dynamics? If a model has never seen a recession in its training data, will it underestimate defaults when a recession occurs? Regular revalidation is mandatory. Models do not get validated once and left alone. Initial validation might occur before deployment, then annual or biennial revalidation, and more frequent revalidation if the model environment changes materially.
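Two of the checks above, back-testing and sensitivity analysis, can be sketched concretely. This is an illustrative toy, not a production validation suite: the tolerance, the bump size, and the toy default-rate model are all assumptions chosen for the example.

```python
# Illustrative validation checks for a credit default model.
# Thresholds and the toy model are assumptions, not regulatory values.

def backtest(predicted_rates, observed_rates, tolerance=0.02):
    """Flag cohorts where realized default rates miss predictions by more
    than `tolerance` (absolute difference)."""
    return [
        (i, p, o)
        for i, (p, o) in enumerate(zip(predicted_rates, observed_rates))
        if abs(p - o) > tolerance
    ]

def sensitivity(model, inputs, key, bump=0.10):
    """Shift one input up and down by `bump` (relative) and report how the
    model output moves; large asymmetric swings warrant investigation."""
    base = model(inputs)
    up = model({**inputs, key: inputs[key] * (1 + bump)})
    down = model({**inputs, key: inputs[key] * (1 - bump)})
    return {"base": base, "up": up - base, "down": down - base}

# Toy model: default probability rises with debt-to-income ratio.
toy_model = lambda x: min(1.0, 0.05 + 0.4 * x["dti"])

print(backtest([0.03, 0.05, 0.08], [0.03, 0.06, 0.15]))
print(sensitivity(toy_model, {"dti": 0.35}, "dti"))
```

In the back-test, the third cohort is flagged because the realized rate far exceeds the prediction, exactly the recession-style blind spot described above: a model that has never seen stressed conditions will miss them in exactly this way.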

Model Governance establishes who is accountable for each model throughout its lifecycle and how the organization responds to emerging risks or deficiencies. Governance means documenting the model's owner (the business line accountable for its performance), the risk officer overseeing it, the technical maintainers, and escalation paths for problems. Governance also means tracking a model's status over time: is it in use, in testing, deprecated? If a model is no longer in use, is it properly decommissioned or could it still be influencing decisions unknowingly? SR 11-7 expects senior management and the board to understand what models are in use, what risks they carry, and what limits are placed on them. Regular model inventory reviews (by the board's risk committee, for instance) are part of good governance. When validation identifies a deficiency, governance determines the response: Can the model be fixed, or must its scope be limited until fixed? Does it need immediate retraining?
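The inventory and escalation ideas above amount to a record per model with named accountable parties, a lifecycle status, and open findings. A minimal sketch (field names and the escalation rule are illustrative assumptions, not SR 11-7 requirements):

```python
# Hypothetical model-inventory record; fields and the escalation rule
# are illustrative, not prescribed by SR 11-7.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    IN_DEVELOPMENT = "in_development"
    IN_VALIDATION = "in_validation"
    IN_USE = "in_use"
    DEPRECATED = "deprecated"
    DECOMMISSIONED = "decommissioned"

@dataclass
class ModelRecord:
    model_id: str
    owner: str                 # business line accountable for performance
    risk_officer: str          # independent oversight
    maintainer: str            # technical team
    status: Status
    last_validated: str = ""   # date of most recent validation, if any
    open_findings: list = field(default_factory=list)

    def needs_escalation(self):
        # Assumed rule: a model in production with unresolved validation
        # findings is escalated until the deficiency is addressed.
        return self.status is Status.IN_USE and bool(self.open_findings)

record = ModelRecord("cr-scorecard-v3", "Retail Credit", "Model Risk Mgmt",
                     "Quant Dev", Status.IN_USE, "2024-06-30",
                     ["performance drift in segment B"])
print(record.needs_escalation())
```

Keeping status explicit is what answers the decommissioning question above: a model marked DECOMMISSIONED with no consumers cannot silently keep influencing decisions, and a board-level inventory review is a query over these records rather than a document hunt.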

The Challenge: AI Doesn't Fit the Old Model

SR 11-7 was written for statistical models with defined inputs and outputs: logistic regression for credit scoring, value-at-risk using variance-covariance methods, or linear models for rate forecasting. These models have clear parameters, their behavior is mathematically explainable, and their limitations are relatively well understood. Artificial intelligence systems, particularly large language models and agentic systems, don't fit neatly into the SR 11-7 framework. A language model is not really a "model" in the statistical sense: it's a neural network with billions of parameters, trained on vast amounts of text, and its internal behavior is not easily interpretable. An agentic AI system that can plan actions, make decisions, and interact with other systems introduces a layer of autonomy that traditional model governance does not contemplate.

Despite this mismatch, federal banking regulators have made clear that SR 11-7 principles apply to AI systems, and that the bar for AI is, if anything, higher. Because AI systems are less explainable and their failure modes are less well understood, banks must apply deeper skepticism. If a 1990s credit model behaved unexpectedly, a quantitative analyst could usually debug it. If a neural network trained on 100 billion tokens of text produces biased lending decisions, debugging is far harder. Regulators expect banks to acknowledge this opacity and manage it conservatively: more extensive testing before deployment, more frequent monitoring in production, lower tolerance for unexplained behavior, and strict limits on autonomous decision-making until confidence is high.

Third-Party AI Risk (2023 Joint Guidance)

The Federal Reserve, OCC, and FDIC issued joint guidance on third-party relationships in 2023, making clear that banks cannot outsource model risk management. If a bank contracts with a cloud vendor for AI services (for example, using a GenAI API for document processing or a third-party fraud detection model), the bank remains responsible for validating the model's performance on its own data, understanding its limitations, and monitoring its behavior. Many banks have tried to pass this responsibility to vendors: "It's your model; you manage it." Regulators have rejected this. A bank's customers, regulators, and the financial system rely on the bank to ensure its systems are working properly, regardless of who built them. This means banks must have technical capability to audit and validate third-party AI, or they must engage qualified external validators. This significantly raises the cost and complexity of using third-party AI, which is why many larger banks are building in-house AI model validation expertise.

Enforcement: Real Consequences

SR 11-7 violations have resulted in real enforcement actions and financial penalties. The Federal Reserve and OCC have issued supervisory findings and matters requiring attention regarding inadequate model governance, leading to consent orders requiring banks to hire external consultants, overhaul model validation processes, and report quarterly to regulators on remediation. In some cases, these consent orders have included restrictions on bank activities until model risk is addressed. A bank subject to a consent order for model risk deficiencies may be prohibited from deploying new models, expanding certain business lines, or completing planned mergers until regulators are satisfied the governance gaps are closed. Beyond consent orders, the banking system has seen settlements with substantial civil penalties tied in part to model risk failures. TD Bank's $3 billion penalty in 2024 included criticism of anti-money laundering models that failed to detect suspicious activity. City National Bank paid $65 million partly for deficient model risk management. These cases show that regulators view model failures not as technical issues but as systemic risks and potential violations of banking laws.

What This Means for Banks

Every AI system deployed in a regulated bank is now subject to SR 11-7 principles, whether it's a 25-year-old statistical model or a cutting-edge language model. This means documentation is non-negotiable: model owners must be able to articulate what the model does, what data it uses, what assumptions it makes, and what can go wrong. Independent validation must happen before production deployment and on an ongoing basis. Governance structures must be in place so that when problems emerge, there is clear accountability and a decision process for addressing them. For many banks, this requires building or acquiring model risk management expertise that has not historically been a priority. Small regional banks may find this especially challenging, which is why many are exploring consortiums, shared validation services, or outsourcing to specialized firms.

The other implication is urgency. As AI becomes more prevalent in banking (in lending decisions, fraud detection, customer service, and internal process automation), the regulatory focus on AI governance will only intensify. Banks that have invested in model documentation and governance now will be better positioned for regulatory exams and less vulnerable to enforcement action if something goes wrong.

How Corvair Helps

Corvair operationalizes SR 11-7 compliance by providing a centralized platform for model inventory, development documentation, independent validation workflows, and governance tracking. Rather than model risk management living in spreadsheets, email, and scattered systems, Corvair creates a single source of truth that auditors and regulators can examine, reducing the friction and cost of demonstrating compliance.


Related Regulations

NIST AI RMF

The voluntary AI risk management framework that complements SR 11-7 for comprehensive AI governance.


Treasury FS AI RMF

Treasury's AI risk management framework tailored for financial services institutions.


US Frameworks Comparison

Side-by-side analysis of US federal and state AI governance frameworks.
