
US Executive Orders on AI for Financial Services

Federal AI policy has shifted dramatically between administrations — but banks' core compliance obligations under existing law remain unchanged.

The Policy Pendulum

The regulatory environment for AI in the United States has swung between competing visions of how government should approach technological innovation. In October 2023, President Biden issued Executive Order 14110, a comprehensive, precautionary approach to AI governance that mandated safety testing, bias assessments, and extensive federal oversight. On January 20, 2025, President Trump revoked that order on the first day of his administration, signaling a fundamentally different stance: innovation should proceed with minimal regulatory friction, and federal agencies should not impose unnecessary burdens on AI development. By December 2025, the Trump administration had escalated its deregulatory agenda, issuing an executive order directing federal agencies to challenge state AI laws and preempt them wherever possible.

This policy shift matters for financial institutions because it shapes the regulatory environment and federal enforcement posture. However, it is crucial to understand that the pendulum at the federal level does not directly overturn sectoral financial regulation. The SEC, CFPB, FTC, EEOC, and banking regulators retain their statutory authorities regardless of whether a sitting president favors innovation or precaution. The executive orders affect the probability of new comprehensive AI regulation, not the applicability of existing law.

Biden Executive Order 14110 (October 2023, Revoked January 2025)

Executive Order 14110, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," represented the Biden administration's attempt to establish comprehensive, government-wide AI governance without waiting for congressional action. The order created directives for over 50 federal entities and initiated more than 100 specific actions across federal agencies. It required agencies to conduct safety testing and bias assessments for AI systems used in consequential domains, established standards for AI transparency and explainability, required agencies to incorporate AI safety into procurement decisions, and directed the National Institute of Standards and Technology to develop voluntary AI standards and best practices.

For financial institutions, Biden's EO 14110 was not directly binding: executive orders typically apply to federal agencies, not private companies. However, the order created momentum for federal agencies to interpret their existing authorities expansively, to view AI governance as a priority, and to issue guidance that filtered down to regulated institutions. The CFPB, SEC, and EEOC all cited the executive order's principles when issuing AI guidance in 2024 and early 2025. The order signaled that agencies should be proactive in AI oversight and that companies could expect heightened scrutiny.

The revocation of Biden's EO 14110 on January 20, 2025, eliminated the government-wide mandate for AI safety initiatives and removed the policy signal that AI governance was a presidential priority. However, the revocation was symbolic rather than operational: NIST continued to develop standards (now explicitly framed as "voluntary" rather than "baseline"), and individual agencies continued to apply their existing authorities to AI systems based on their statutory mandates.

Trump Executive Order: Removing Barriers (January 2025)

Within days of taking office, President Trump issued an executive order titled "Removing Barriers to American Leadership in Artificial Intelligence." The order directed federal agencies not to impose new requirements or restrictions on AI development unless compelling evidence of genuine risk justified the burden. It explicitly stated that agencies should prioritize economic competitiveness and innovation, avoid duplicative regulations, and work with industry on AI governance standards rather than imposing top-down mandates.

The practical effect of this order was to eliminate any appetite within the federal executive for proactive new AI regulation. Agencies were directed not to issue new guidance that would burden AI companies, not to use procurement authority to impose requirements, and not to support regulations that went beyond existing statutory authority. For financial institutions, the order meant that the federal government would not be the source of new, comprehensive AI regulation in the near term. The order also signaled that existing agencies should interpret their authorities narrowly rather than expansively.

However, the order did not eliminate the need to comply with existing law. Banking regulators retained their authority to oversee AI under existing statutes. The CFPB retained its authority to enforce fair lending and consumer protection law. The SEC retained its authority over investment advisers and market integrity. The executive order constrained the federal government's ability to create new burdens but did not provide cover for violating laws already on the books.

Trump Executive Order: National AI Policy Framework (December 2025)

By December 2025, the Trump administration's approach to AI governance had intensified beyond deregulation into an active campaign against what it characterized as state regulatory overreach. The second executive order, titled "Ensuring a National Policy Framework for Artificial Intelligence," went further than the January order by directing the Department of Justice to establish an AI Litigation Task Force with explicit authority to challenge state AI laws in federal court, arguing that they were preempted by federal authority or constituted unlawful obstacles to interstate commerce.

The order simultaneously directed the Commerce Department to identify which state AI laws it viewed as "onerous," and it directed federal agencies to prepare legislative proposals for Congress that would establish a unified, innovation-oriented federal AI policy framework. A March 2026 legislative blueprint released by the administration explicitly urged Congress to adopt a federal AI law that would preempt state laws and establish a baseline that prioritized innovation over precaution.

The December order and March 2026 blueprint represent the administration's bet that it can eliminate the emerging patchwork of state AI laws by establishing a federal floor that is more innovation-friendly than existing state law. This strategy is controversial; 36 state Attorneys General have formally opposed federal preemption, and there is no clear consensus in Congress on whether a preemptive federal AI law will advance. However, the strategy signals the administration's strong commitment to limiting state AI regulation.

What This Actually Means for Financial Services

The critical point that banks must understand is this: federal executive branch deregulatory action does not eliminate or substantially alter the regulatory regime for financial services. Financial services are largely insulated from the deregulatory agenda because they are subject to the Dodd-Frank Act's broad grant of authority to the CFPB, to banking regulators under the Bank Holding Company Act, to the SEC under securities law, and to the FTC under consumer protection law. These statutes cannot be amended by executive order or narrowed by administration policy.

The Office of the Comptroller of the Currency, the Federal Reserve, and the FDIC all issued guidance on AI governance in 2023 and 2024, with particular emphasis on large banks' obligations to manage AI risk through appropriate governance, testing, and monitoring. These agencies retained this stance even after the Trump administration's January 2025 executive order. In March 2026, the Federal Reserve issued updated guidance on model risk management that explicitly covers AI systems, reinforcing that banking regulators continue to view AI governance as a priority.

The Gramm-Leach-Bliley Act's requirements for financial institutions to protect customer information remain in full effect. The Fair Credit Reporting Act's requirements for credit reporting, adverse action notices, and dispute resolution remain in full effect. The Fair Housing Act and Equal Credit Opportunity Act's requirements for lending practices remain in full effect. None of these statutes can be modified or narrowed by executive order. What changes with administrations is the intensity of federal enforcement and the likelihood of new regulations, not the baseline legal obligations themselves.

For state AI laws like Colorado's SB 24-205, the Trump administration's preference for federal preemption creates uncertainty about the long-term viability of those laws. If Congress enacts a federal AI law that includes preemption language, state laws could be displaced. However, Congress has shown no appetite for rushing to preempt state AI law, and there is substantial uncertainty about whether a preemptive federal AI statute will ever be enacted. As of March 2026, banks should assume that state AI laws like Colorado's remain binding and should continue planning for multi-state compliance.

Banks operating in high-risk lending (mortgages, auto loans, unsecured consumer credit) or investment advisory services should particularly note that sectoral regulation has intensified rather than relaxed. The CFPB, despite the Trump administration's deregulatory posture, continues to examine banks' fair lending practices and has brought enforcement actions against lenders for AI-driven discrimination. The SEC continues to examine investment advisers' AI systems for conflicts of interest and has cited AI governance in its examination priorities. The deregulatory executive orders have created space for AI companies to develop systems with less federal oversight, but they have not created space for banks to evade fair lending, consumer protection, or investment adviser requirements.

How Corvair Helps

Corvair.ai helps banks navigate this uncertain and divided regulatory environment by providing governance infrastructure that satisfies multiple standards simultaneously. By implementing a platform that covers Colorado SB 24-205, NIST AI RMF, and federal agency expectations from the SEC, CFPB, FTC, and EEOC, banks can ensure compliance across jurisdictions and regulators regardless of which administration's policies prevail. Corvair's approach anticipates that the regulatory landscape will continue to evolve, with states advancing AI oversight even if federal policy favors innovation.
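The "one control, many regimes" idea behind this approach can be sketched as a simple control-mapping structure. This is a hypothetical illustration only, not Corvair's actual implementation; the use case, framework names, and obligation summaries are assumptions drawn from the frameworks discussed in this guide:

```python
# Hypothetical sketch: map one AI use case to its obligations across
# overlapping frameworks, so a single governance control can be checked
# against several regimes at once. All entries are illustrative.

from dataclasses import dataclass, field

@dataclass
class Obligation:
    framework: str    # e.g. "Colorado SB 24-205", "NIST AI RMF"
    requirement: str  # plain-language summary of the duty

@dataclass
class AIUseCase:
    name: str
    obligations: list = field(default_factory=list)

    def frameworks(self):
        """Distinct frameworks this use case must satisfy."""
        return sorted({o.framework for o in self.obligations})

# Example: consumer credit underwriting touches state AI law, the
# voluntary NIST framework, and sectoral federal statutes simultaneously.
underwriting = AIUseCase("consumer credit underwriting")
underwriting.obligations += [
    Obligation("Colorado SB 24-205", "impact assessment for high-risk AI"),
    Obligation("NIST AI RMF", "map, measure, and manage model risk"),
    Obligation("ECOA / Reg B", "adverse action notices with specific reasons"),
    Obligation("FCRA", "accuracy and dispute resolution for credit data"),
]

print(underwriting.frameworks())
```

A structure like this makes the guide's central point concrete: changing the federal enforcement posture might lower scrutiny on one row, but the other rows remain binding regardless of which administration's policies prevail.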

Schedule a Briefing

Related Regulations

US Federal Agency Guidance

How SEC, FTC, CFPB, and EEOC apply existing law to AI in financial services regardless of executive policy.

Read guide

NIST AI RMF

The voluntary AI risk management framework that remains relevant under both regulatory approaches.

Read guide

Treasury FS AI RMF

Treasury's sector-specific AI risk management framework for financial institutions.

Read guide