
US Federal Agency AI Guidance for Financial Services

SEC, FTC, CFPB, and EEOC are applying existing law to AI systems — no new legislation required and no AI exemptions granted.

Overview

While the United States has not enacted a comprehensive federal AI law comparable to the European Union's AI Act, financial institutions face a more complex and arguably more consequential reality. Multiple federal agencies with statutory authority over financial services are actively applying existing law to AI systems. The SEC regulates securities markets and investment advisers. The FTC enforces consumer protection law. The CFPB oversees consumer lending and fair lending. The EEOC polices employment discrimination. None of these agencies have waited for Congress to write new AI-specific statutes before extending their oversight to algorithmic decision-making. The practical effect is that existing rules apply whether decisions are made by a human loan officer, a statistical model, or a neural network.

This enforcement approach matters because it resolves a common misconception in financial services: the notion that complexity, automation, or algorithmic opacity provides legal cover. Banks cannot invoke AI's black-box nature as an excuse for failing to explain decisions, detect discrimination, or manage conflicts of interest. The burden falls squarely on financial institutions to ensure that AI systems comply with laws written before AI existed, which requires active interpretation and careful implementation.

Securities and Exchange Commission (SEC)

The SEC's jurisdiction over AI derives from its existing authority over securities transactions, investment advisers, and market integrity. The agency is actively applying these authorities to AI-driven activities, with particular focus on three areas: conflicts of interest in AI-driven trading and advisory services, systemic risk and market stability concerns, and disclosure obligations to investors and clients.

In December 2025, the SEC's Investor Advisory Committee issued recommendations on AI governance and disclosure, noting that investors cannot assess risks if they don't understand how AI systems influence their investments. The SEC's 2026 examination priorities explicitly call out AI governance, bias, and conflicts of interest as areas where examination specialists will probe deeply. The agency has signaled through enforcement actions and guidance documents that algorithmic trading systems, robo-advisers, and AI-driven portfolio construction tools must be held to the same fiduciary standards as human traders and advisers. There is no AI carve-out from SEC rules. If a human investment adviser would be required to manage a conflict of interest, an AI system performing the same function must do so as well.

Banks offering investment services or advisory products powered by AI should expect SEC examiners to request documentation of how the bank identified and managed conflicts of interest embedded in the AI system. For example, if an AI-driven advisory system recommends products that earn the bank higher margins, can the bank demonstrate that it selected the recommendation threshold to prioritize client interests over its own revenue? The SEC's position is that algorithmic recommendations do not exempt advisers from fiduciary duties; they intensify the obligation to demonstrate that the system was designed and deployed with client interests paramount.
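The kind of evidence examiners look for can be sketched in miniature: a recommendation routine that ranks candidates strictly on projected client outcome, while still logging the bank's margin so the conflict can be reviewed later. The `Product` fields, scoring rule, and log format below are illustrative assumptions, not an SEC-prescribed method.

```python
# Hedged sketch of a conflict-aware recommendation step: candidates are ranked
# strictly on projected client return, while the bank's margin is logged for
# examiner review but never enters the score. All fields are illustrative.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    expected_client_return: float   # projected annual return net of fees
    bank_margin: float              # revenue to the bank; logged, not scored

def recommend(products: list, audit_log: list) -> Product:
    # Rank on client outcome only; bank_margin never influences the ordering.
    ranked = sorted(products, key=lambda p: p.expected_client_return, reverse=True)
    top = ranked[0]
    audit_log.append({                       # contemporaneous record for exams
        "recommended": top.name,
        "client_return": top.expected_client_return,
        "bank_margin": top.bank_margin,
    })
    return top

log: list = []
pick = recommend(
    [Product("ThirdPartyFund", 0.052, 0.004),
     Product("HouseFund", 0.047, 0.012)],   # higher margin, lower client return
    log,
)
```

Here the higher-margin house fund loses to the better-performing third-party fund, and the log preserves both figures so the choice can be audited afterward.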

Federal Trade Commission (FTC)

The FTC's authority over AI derives from Section 5 of the FTC Act, which prohibits unfair or deceptive practices in commerce. In March 2026, the FTC issued a formal policy statement clarifying its approach to AI, reiterating that the agency views "unfair or deceptive AI" as a category of violation that merits enforcement action. The agency focuses particularly on algorithmic discrimination that harms protected classes, deceptive claims about AI capabilities or accuracy, privacy violations connected to AI systems, and competition-reducing uses of AI.

The FTC has been explicit in stating that there is no "AI exemption" to existing law. If a company makes a claim about what an AI system does (say, that it can accurately predict credit risk) and that claim is not substantiated, the FTC may pursue false advertising enforcement. If an AI system produces discriminatory outcomes that the company failed to test for or remediate, the FTC may characterize that as an unfair practice. The agency has shown increasing willingness to bring enforcement actions against AI systems in consumer finance, with particular scrutiny directed toward companies making broad claims about AI safety and accuracy without adequate testing.

For banks, this means that any AI system used in consumer-facing contexts should be subject to stringent accuracy testing, bias testing, and truthful claims review before launch. Marketing materials about AI-driven products or services should contain no claims that cannot be substantiated. If an AI system exhibits discrimination against protected classes, the bank should remediate it promptly and document the corrective action. The FTC is not primarily interested in punishing innocent mistakes but rather in addressing negligence or deception.

Consumer Financial Protection Bureau (CFPB)

The CFPB is the federal agency with the most direct and consequential jurisdiction over AI in consumer lending, and its enforcement approach has become increasingly aggressive as AI systems have proliferated in credit decisions. The CFPB's authority derives from two statutory foundations: the Equal Credit Opportunity Act and the Fair Credit Reporting Act, which require lenders to provide adverse action notices explaining why credit was denied, and the Dodd-Frank Act's grant of authority to regulate unfair, deceptive, or abusive acts or practices (UDAAP).

A critical CFPB position, articulated in Circular 2022-03 and reinforced in Circular 2023-03, holds that the complexity of an AI system does not excuse a lender from providing meaningful explanations to consumers who are denied credit. The CFPB rejects the notion that a bank can deny a consumer credit and then claim it cannot explain why because the decision was made by an inscrutable machine-learning model. Instead, the CFPB requires adverse action notices that explain the decision in language the consumer can understand. In practice, this may require banks to use interpretability tools, sensitivity analysis, or model-agnostic explanation techniques to generate understandable reasons for credit decisions.
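One way such reasons might be generated, sketched here for a simple linear scoring model: rank features by how far each pushed the applicant's score downward and map the worst offenders to plain-language reason text. The feature names, weights, and notice wording below are hypothetical, and real adverse action reason codes would require legal review.

```python
# Hedged sketch: deriving plain-language adverse action reasons from a linear
# credit-scoring model. Weights, feature names, and reason wording are
# hypothetical illustrations, not CFPB-mandated language.

MODEL_WEIGHTS = {              # logistic-regression coefficients (illustrative)
    "utilization": -2.1,       # higher revolving utilization lowers the score
    "delinquencies": -1.5,
    "income": 0.8,
    "account_age_years": 0.6,
}

REASON_TEXT = {
    "utilization": "Proportion of revolving credit in use is too high",
    "delinquencies": "Number of past-due payments on your accounts",
    "income": "Income relative to requested credit amount",
    "account_age_years": "Limited length of credit history",
}

def adverse_action_reasons(z_scores: dict, top_n: int = 2) -> list:
    """Rank features by how much the applicant's standardized value pushed
    the score downward; return plain-language text for the top reasons."""
    contributions = {f: MODEL_WEIGHTS[f] * z_scores[f] for f in MODEL_WEIGHTS}
    negative = sorted(
        (f for f, c in contributions.items() if c < 0),
        key=lambda f: contributions[f],   # most negative contribution first
    )
    return [REASON_TEXT[f] for f in negative[:top_n]]
```

An applicant whose standardized utilization and delinquency values sit well above the population mean would receive those two reasons first, since they contribute the largest downward pull on the score.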

Equally important is the CFPB's clear position that adverse action notices must accurately attribute the decision to the lender, not to the AI system. A notice that reads "our AI system declined your application" fails CFPB expectations; the notice should read "we declined your application" while disclosing that the decision was influenced by algorithmic analysis. This distinction emphasizes that the bank remains accountable for the decision, regardless of who or what made the recommendation.

The CFPB's authority under the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA) means the agency oversees fair lending compliance for AI lending systems. The FCRA requires that adverse actions based on credit bureau information be disclosed; the ECOA requires that credit decisions not discriminate on the basis of protected characteristics. When a bank uses AI to make credit decisions, the FCRA notice requirement applies just as it would for human underwriters, and the ECOA's fair lending obligations apply with equal force. The CFPB has examined banks' AI systems for compliance with these requirements and has taken enforcement action when it found discrimination.

A March 2025 CFPB enforcement action against Cleo AI, a company offering AI-driven financial management tools, resulted in a $17 million settlement for allegedly providing incomplete or deceptive AI advice to consumers. The settlement demonstrated the CFPB's willingness to pursue AI companies aggressively, even when the product is designed to help rather than harm consumers. For banks, the lesson is clear: the CFPB is scrutinizing AI systems in financial services, and it will enforce existing law against AI-driven violations with the same vigor it applies to human underwriters or agents.

Equal Employment Opportunity Commission (EEOC)

The EEOC's authority over AI derives from Title VII of the Civil Rights Act, which prohibits employment discrimination on the basis of race, color, religion, sex, or national origin. When banks use AI systems to make hiring, promotion, compensation, or performance evaluation decisions, those systems are covered by Title VII just as human hiring decisions are. The EEOC applies the four-fifths rule: a rule of thumb under which a selection process is generally regarded as having disparate impact if any protected group is selected at a rate less than 80 percent of the rate for the highest-selected group.

For example, if a bank uses an AI-driven resume screening tool and the tool passes white applicants at an 80 percent rate but only 60 percent of Black applicants, the tool has triggered the four-fifths rule and creates presumptive evidence of disparate impact. The bank would then bear the burden of proving that the screening process is job-related and consistent with business necessity. This can be an expensive and time-consuming defense, particularly if the bank has not conducted validation studies or bias audits of the AI system before deployment.
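The arithmetic behind this example can be made explicit. The sketch below, using the pass rates from the example above, divides each group's selection rate by the highest group's rate and flags ratios below the four-fifths (0.8) threshold; the group labels and rates are illustrative.

```python
# Worked version of the resume-screening example: compute each group's
# selection rate relative to the highest group's rate and flag ratios
# below the four-fifths (0.8) threshold.

def impact_ratios(selection_rates: dict) -> dict:
    """Divide each group's selection rate by the highest group's rate."""
    highest = max(selection_rates.values())
    return {group: rate / highest for group, rate in selection_rates.items()}

rates = {"white": 0.80, "black": 0.60}   # pass rates from the example above
for group, ratio in impact_ratios(rates).items():
    status = "below four-fifths threshold" if ratio < 0.8 else "within threshold"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

Dividing 0.60 by 0.80 yields an impact ratio of 0.75, below 0.8, so the tool triggers the presumption of disparate impact described above.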

The EEOC has made clear that employers are liable for discrimination produced by third-party AI tools, even if the employer did not build the tool itself. If a bank licenses a vendor's AI hiring tool and that tool discriminates, the bank (not the vendor) faces EEOC enforcement. This underscores the importance of due diligence on AI vendors, pre-deployment testing for disparate impact, and ongoing monitoring of AI system performance across demographic groups.

Penalties

The penalties imposed by federal agencies for AI-related violations are substantial and diverse. The FTC can seek civil penalties, injunctions, and in egregious cases, monetary redress to consumers. The SEC can impose fines on firms and individuals, suspend trading privileges, and bar individuals from serving as investment advisers. The CFPB can issue enforcement orders mandating corrective action, restitution to harmed consumers, and civil penalties. The EEOC can pursue injunctive relief and damages, including back pay, compensatory damages, and punitive damages. For a large bank facing multi-million-dollar settlements like the Cleo AI case or the earlier $200 million CFPB settlements against lenders for fair lending violations, the financial stakes of AI enforcement are clear.

Beyond financial penalties, enforcement actions result in public damage to reputation, increased regulatory scrutiny in future examinations, and demands for expensive remediation and governance changes. A bank facing CFPB, SEC, or FTC enforcement over AI must typically undergo external audits, implement new governance structures, and demonstrate sustained compliance over years. The reputational cost — particularly in an era where social media amplifies enforcement stories — can be substantial.

What This Means for Banks

The coherent message across all federal agencies is straightforward: existing law is the enforcement mechanism. Banks should not wait for Congress to enact new AI statutes or for agencies to publish comprehensive AI guidance. The rules are already in place. Investment advisers already have fiduciary duties; AI advisory systems must respect them. Lenders already must explain credit decisions and avoid discrimination; AI lending systems must do so as well. Employers already must avoid discrimination; AI hiring systems must avoid it as well. Deceptive practices are already prohibited; AI systems cannot deceive.

For banking institutions, this means several concrete actions. First, conduct a comprehensive audit of all AI systems currently deployed or under development in consumer-facing or employment contexts. For each system, identify which federal agency has jurisdiction and what specific statutory obligations apply. Second, implement testing protocols to identify discriminatory outcomes and deceptive performance before deployment. Third, ensure that governance documentation is complete and contemporaneous. Regulators expect to see evidence of deliberation, testing, and risk assessment, not retroactive justifications of decisions already made. Fourth, establish monitoring systems that track AI system performance over time and trigger remediation if bias or degradation emerges.
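The fourth step, ongoing monitoring, might look like the following sketch: compute the approval-rate impact ratio for each monitoring window and flag any window that breaches a threshold. The 0.8 threshold and field names are illustrative assumptions; a real program would set thresholds and windows with compliance counsel.

```python
# Hedged sketch of an ongoing fairness monitor: compute the approval-rate
# impact ratio per monitoring window and flag windows breaching an assumed
# 0.8 threshold. Field names and the threshold are illustrative.
from collections import defaultdict

def window_impact_ratio(decisions) -> float:
    """decisions: iterable of (group, approved) pairs for one window.
    Returns the lowest group approval rate divided by the highest."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return min(r / best for r in rates.values())

def needs_remediation(decisions, threshold: float = 0.8) -> bool:
    """True when the window's worst impact ratio falls below the threshold."""
    return window_impact_ratio(decisions) < threshold
```

A window where one group is approved at 80 percent and another at 60 percent produces a ratio of 0.75 and would trigger the remediation flag; equal approval rates produce a ratio of 1.0 and would not.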

Federal agencies are applying existing law to AI with increasing sophistication and confidence. Banks that treat AI systems as exempt from regulation or as inherently less transparent than human judgment will face enforcement risk. Institutions that implement AI systems with the same rigor, testing, and governance applied to other consequential business decisions will find themselves well-positioned to demonstrate compliance if questions arise.

How Corvair Helps

Corvair.ai provides banks with the governance infrastructure needed to comply with federal agency expectations. By automating testing for bias and disparate impact, maintaining documentation of AI system performance, integrating regulatory requirements into model development workflows, and producing the kind of contemporaneous records that federal examiners expect to see, Corvair helps financial institutions demonstrate that they have implemented existing law requirements for AI systems. For institutions subject to CFPB, SEC, FTC, or EEOC oversight, Corvair reduces the compliance burden while strengthening the quality of AI governance.


Related Regulations

US Executive Orders

How presidential executive orders on AI affect the federal regulatory environment for banks.


Colorado AI Act & State Laws

The first comprehensive US state AI law targeting high-risk systems in credit and financial services.


NIST AI RMF

The voluntary AI risk management framework referenced by federal agencies as a compliance baseline.
