
MAS AIRG Guide for Banks & Financial Services

The Monetary Authority of Singapore's comprehensive Guidelines on AI Risk Management — one of the most prescriptive and detailed AI governance frameworks issued by any financial regulator in the world.

What Is the MAS AIRG?

The Monetary Authority of Singapore (MAS) has developed the Guidelines on Artificial Intelligence Risk Management (AIRG): a comprehensive regulatory framework designed specifically for financial institutions under MAS supervision, including banks, insurers, capital markets operators, and payment institutions. The AIRG represents one of the most detailed and prescriptive approaches to AI governance that any financial regulator has published globally.

MAS released the AIRG as a consultation paper in November 2025, closing the feedback period in January 2026. The final guidelines are expected to be published in mid-2026, with a 12-month compliance window for regulated entities. This timeline is intentionally generous: MAS recognizes that most institutions will need substantial time to assess their current AI footprint, redesign governance structures, and implement the nine domains of risk management that the framework requires.

Unlike some regulatory approaches that remain abstract or principle-based, the AIRG is prescriptive and detailed. It tells institutions not just what outcomes to achieve, but the functional components they need to build into their AI systems and governance processes. For financial institutions with operations in Singapore or aspirations to expand there, the AIRG is no longer optional guidance: it is the regulatory baseline.

Why This Matters Now

Singapore's financial regulator is positioning the city-state as a global leader in responsible AI governance. While other regulators have issued AI principles or general guidance, MAS is the first major financial authority to publish a comprehensive, sector-specific framework that addresses the full lifecycle of AI systems: from identification and risk assessment through monitoring, incident response, and decommissioning.

This matters for two reasons. First, the AIRG will become the de facto expectation not just for MAS-regulated entities, but for any global financial institution operating in Singapore or seeking to partner with Singapore-based firms. Regulatory expectations tend to diffuse across borders, especially when they come from a leading financial hub. Second, MAS's approach, particularly its emphasis on proportionality, human oversight, and transparency, provides a template that other regulators are likely to adopt or adapt in the coming years. Being ahead of the curve on AIRG compliance puts institutions in a stronger position globally.

For banks operating in Singapore or seeking to do so, non-compliance with the AIRG will carry real consequences. MAS does not rely on statutory penalties for supervisory breaches; instead, it uses more subtle but equally damaging tools: enhanced reporting, thematic reviews, restrictions on new AI deployments, and escalated supervisory action. For a bank, friction with MAS translates directly into operational and reputational costs that no compliance team can absorb indefinitely.

The Nine Domains of AI Risk Management

The AIRG organizes its requirements into nine distinct domains, each addressing a critical phase or aspect of the AI lifecycle. Understanding these domains is essential because they form the architecture of any compliant AI governance program.

Governance & Oversight establishes the foundational accountability structure. MAS expects the board of directors to understand AI risk at a strategic level, with clear delegation to senior management for day-to-day oversight. This is not a box-ticking requirement; MAS expects institutions to establish dedicated AI risk committees with cross-functional membership, pulling in representatives from technology, risk, compliance, legal, and business units. These committees should meet regularly, escalate issues appropriately, and have authority to restrict or halt AI deployments that pose unacceptable risks. The message is clear: AI governance is a business governance problem, not a technical problem delegated to data science teams.

AI Identification & Inventory requires institutions to catalog every AI system in use, including shadow AI that business units may have deployed without formal approval and third-party AI systems embedded in vendor solutions. This sounds straightforward but is often the hardest domain to implement because most institutions lack visibility into their full AI footprint. MAS expects a living inventory that tracks the purpose, owner, data inputs, risk materiality tier, and current compliance status of each system. Without this inventory, an institution cannot begin to assess its true AI risk exposure.
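A living inventory can start as a structured record per system, with shadow and vendor AI flagged in the same catalog. A minimal sketch; the field names here are illustrative choices, not fields prescribed by the AIRG:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the institution's AI inventory (illustrative fields)."""
    system_id: str
    purpose: str
    owner: str                    # accountable business owner
    data_inputs: list[str]
    materiality_tier: str         # e.g. "low", "medium", "high"
    compliance_status: str        # e.g. "assessed", "remediating", "approved"
    third_party: bool = False     # vendor-supplied or embedded AI
    last_reviewed: date = field(default_factory=date.today)

inventory = [
    AISystemRecord(
        system_id="fraud-scoring-v3",
        purpose="Real-time card fraud detection",
        owner="Head of Fraud Operations",
        data_inputs=["transactions", "device telemetry"],
        materiality_tier="high",
        compliance_status="assessed",
    ),
]

# Shadow or vendor AI surfaces in the same inventory, flagged for follow-up.
unapproved = [r for r in inventory if r.compliance_status != "approved"]
```

Keeping the inventory as structured data, rather than a spreadsheet of free text, is what makes it "living": compliance status and materiality tier can be queried, aggregated, and reported to MAS on demand.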

Risk Materiality Assessment introduces a three-dimensional framework for determining which AI systems require enhanced controls. The three dimensions are Impact (financial, reputational, or operational harm if the system fails or performs poorly), Complexity (the technical sophistication and opacity of the model), and Reliance (the degree to which decisions depend on the system's output, and how readily human operators can verify or override its recommendations). Materiality is not binary: MAS uses tiers to enable proportionality. A low-materiality system might face lighter requirements, while a high-materiality system used for credit decisions or fraud detection will face rigorous controls across all nine domains. This is where many institutions fail: they either over-regulate low-risk AI or under-regulate high-risk systems because they lack a structured materiality framework.
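A structured materiality framework can be as simple as a documented rule combining scored dimensions. A hypothetical sketch: the 1-to-5 scales and thresholds below are an institution's own calibration choices, not values mandated by MAS:

```python
def materiality_tier(impact: int, complexity: int, reliance: int) -> str:
    """Map 1-5 scores on the three dimensions to a materiality tier.

    Thresholds are illustrative; an institution would calibrate and
    document its own, and be prepared to defend them in a review.
    """
    for score in (impact, complexity, reliance):
        if not 1 <= score <= 5:
            raise ValueError("scores must be on a 1-5 scale")
    # High impact dominates: a system that can cause severe harm is
    # high-materiality even if it is simple and lightly relied upon.
    if impact >= 4:
        return "high"
    total = impact + complexity + reliance
    if total >= 10:
        return "high"
    if total >= 6:
        return "medium"
    return "low"

# A credit-decisioning model: severe customer impact, opaque, heavily relied on.
assert materiality_tier(impact=5, complexity=4, reliance=5) == "high"
# An internal document-routing classifier.
assert materiality_tier(impact=1, complexity=2, reliance=2) == "low"
```

The point of encoding the rule is defensibility: when a regulator asks why a system is classified low-materiality, the answer is a documented score on each dimension, not a qualitative judgment made after the fact.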

Data Management addresses the quality and governance of training, validation, and operational data. MAS expects institutions to implement data quality checks, ensure that datasets are representative of the populations the system will serve in production, encrypt sensitive data both at rest and in transit, and maintain an auditable record of how data flows through the system. The AIRG recognizes that many AI failures are fundamentally data failures: biased training data, missing features in validation sets, or data drift in production. Robust data governance is therefore non-negotiable.

Fairness & Transparency requires institutions to move beyond a vague commitment to "responsible AI" and instead implement measurable fairness outcomes, bias detection mechanisms, and explainability standards. MAS expects institutions to define what fairness means in the context of their business (for example, parity in approval rates across protected characteristics), measure whether their systems achieve it, and take corrective action when they do not. Transparency goes hand-in-hand with fairness: institutions must be able to explain how an AI system made a decision in language that a customer or regulator can understand. In many cases, MAS expects institutions to notify customers that AI was used in a decision affecting them.
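The approval-rate parity example above can be made measurable. A minimal sketch using a ratio test in the style of the four-fifths rule; the 0.8 threshold is a common convention from fair-lending practice, not a number the AIRG specifies:

```python
def approval_rate(decisions: list[bool]) -> float:
    """Fraction of decisions that were approvals."""
    return sum(decisions) / len(decisions)

def parity_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower group's approval rate to the higher one.

    1.0 means perfect parity; values well below 1.0 flag potential
    disparate impact and should trigger investigation and correction.
    """
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    low, high = min(ra, rb), max(ra, rb)
    return low / high if high > 0 else 1.0

# Illustrative decision logs for two demographic groups.
group_a = [True] * 80 + [False] * 20   # 80% approved
group_b = [True] * 56 + [False] * 44   # 56% approved

ratio = parity_ratio(group_a, group_b)
if ratio < 0.8:  # four-fifths convention; the threshold is a policy choice
    print(f"parity ratio {ratio:.2f} below threshold: review for bias")
```

Whatever metric an institution chooses, the AIRG's expectation is the same: define it, measure it in production, and document the corrective action taken when it is breached.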

Evaluation & Testing mandates rigorous pre-deployment testing that includes adversarial scenarios, stress testing under distribution shift, and validation across demographic subgroups. MAS recognizes that traditional cross-validation practices often miss edge cases and adversarial inputs that motivated attackers might exploit. This domain requires institutions to think like attackers: What inputs could cause the system to fail or produce biased outputs? What happens if the system is used in ways its designers did not anticipate? Institutions that have not built adversarial testing into their development pipeline will need to do so.
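Validation across demographic subgroups means gating deployment on worst-case, not just average, performance. A hypothetical sketch of the reporting step:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted, actual) tuples.

    Returns per-subgroup accuracy so the worst-performing subgroup is
    visible instead of being averaged away in one headline metric.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for subgroup, predicted, actual in records:
        totals[subgroup] += 1
        hits[subgroup] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative validation results for two subgroups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 0, 0),
]
per_group = subgroup_accuracy(records)
worst = min(per_group, key=per_group.get)
# Gate deployment on the weakest subgroup, not the overall average.
```

Adversarial and distribution-shift testing build on the same principle: the release criterion is performance under the worst conditions the testing team can construct.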

Human Oversight & Control establishes clear requirements for kill switches, human-in-the-loop thresholds, and override mechanisms. This domain acknowledges a critical reality: fully autonomous AI in banking is not yet a responsible practice. MAS expects humans to remain meaningfully engaged in high-stakes decisions. This means defining the confidence thresholds below which human review is mandatory, ensuring that humans have the authority and information to override system recommendations, and implementing technical safeguards (kill switches) that allow humans to stop a system in real time if something goes wrong. The emphasis on human oversight is particularly strong in the context of agentic AI.
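The confidence-threshold and kill-switch requirements translate naturally into a decision gate in front of any automated action. A minimal sketch; the class, names, and 0.90 threshold are illustrative, not part of the AIRG:

```python
import threading

class OversightGate:
    """Routes model outputs to automated action or mandatory human
    review, with a kill switch that halts automation immediately."""

    def __init__(self, review_threshold: float):
        self.review_threshold = review_threshold
        self._halted = threading.Event()  # kill-switch state

    def kill(self) -> None:
        """Human-triggered: stop all autonomous action in real time."""
        self._halted.set()

    def route(self, confidence: float) -> str:
        if self._halted.is_set():
            return "halted"         # a human has stopped the system
        if confidence < self.review_threshold:
            return "human_review"   # below threshold: a human decides
        return "auto"               # above threshold: may proceed

gate = OversightGate(review_threshold=0.90)
assert gate.route(0.97) == "auto"
assert gate.route(0.72) == "human_review"
gate.kill()
assert gate.route(0.99) == "halted"  # the kill switch wins regardless
```

The design point is that the halt check comes first: no confidence score, however high, can route around a human's decision to stop the system.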

Third-Party AI Risk addresses the supply chain dimension. Most institutions do not build all their AI systems in-house. They rely on vendors, cloud providers, and third-party model providers. MAS expects institutions to conduct rigorous due diligence on vendors, require contractual transparency about how third-party AI systems work, and retain audit and testing rights. This is a domain where many institutions will struggle because vendor contracts are often heavily weighted in the vendor's favor. Renegotiating contracts to include AI transparency and audit clauses will require business-level engagement.

Monitoring & Incident Response requires institutions to continuously monitor AI systems in production for drift (performance degradation over time), unexpected behavior, or security incidents. MAS expects institutions to establish incident reporting protocols, define escalation paths, and have a clear process for decommissioning systems that can no longer be trusted. This is not a one-time compliance effort; it is an ongoing operational commitment.
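Drift monitoring can start with a population stability index (PSI) comparing live score distributions to the validation-time baseline; a common rule of thumb flags PSI above 0.2. A sketch under those assumptions; both the metric choice and the threshold are conventions, not AIRG requirements:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline score distribution
    (e.g. captured at validation) and live production scores."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        # Small epsilon avoids log(0) for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                 # scores at validation
live = [min(i / 100 + 0.3, 0.999) for i in range(100)]   # scores shifted upward

if psi(baseline, live) > 0.2:  # rule-of-thumb threshold for significant drift
    print("score drift detected: escalate per incident protocol")
```

The metric itself is the easy part; the AIRG's expectation is the operational wrapper around it, with defined escalation paths and a documented decision about when a drifting system is retrained, restricted, or decommissioned.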

Agentic AI: The AIRG's Hardest Challenge

The AIRG explicitly addresses a category of AI systems that most institutions have minimal experience governing: agentic AI. An agentic AI system is one that operates with a degree of autonomy: it can use tools, delegate tasks to other systems, iterate on problems, and take actions in the world without human approval for each individual decision. Chatbots that can book meetings, trading systems that can execute large orders in tranches, or risk analysis systems that can alert humans and automatically trigger hedges are all forms of agentic AI.

The AIRG requires enhanced controls for agentic systems because autonomous behavior introduces failure modes that traditional AI systems do not present. An agentic system might take a chain of actions that, in isolation, each seem reasonable but collectively produce an unintended outcome. It might drift in capability over time, leading to escalating autonomy creep. It might be manipulated through prompt injection or other adversarial attacks. It might develop unexpected emergent behaviors that the system designers did not anticipate.

To comply with the AIRG's requirements on agentic AI, institutions must document failure modes, conduct adversarial testing specifically designed for agents (including prompt injection, goal specification gaming, and tool misuse), implement robust human oversight with clear thresholds for autonomous action, and maintain the ability to shut down an agent immediately if it enters an unsafe state. Most institutions have not developed this capability in-house, and most vendors have not yet built agentic systems with the level of auditability and control that MAS expects.

How the AIRG Relates to Other Singapore Frameworks

The AIRG does not exist in isolation. It works in concert with two other significant regulatory and guidance frameworks in Singapore. The Infocomm Media Development Authority (IMDA) published the Agentic AI Framework, which provides cross-sector guidance on governing autonomous AI systems: guidance that applies to any organization deploying agents, not just financial institutions. The Personal Data Protection Act (PDPA), Singapore's data protection law, governs how personal data can be collected, used, and retained, with specific implications for AI systems that process personal data.

The relationship is hierarchical and complementary. The AIRG is the financial sector-specific framework that banks and insurers must follow. Where the AIRG is silent, institutions may refer to IMDA's Agentic AI Framework for best practices on agent governance. And where both frameworks touch on personal data, the PDPA is the baseline legal requirement that no institution can fall below. A compliant bank will satisfy all three frameworks simultaneously, but the AIRG is the binding constraint for financial institutions.

Enforcement and Consequences

MAS does not enforce the AIRG through statutory penalties or fines. Rather, enforcement operates through supervisory action and discretion. If MAS determines that an institution is not compliant with the AIRG, the regulator can require enhanced reporting on AI governance, conduct thematic reviews focused on AI risk, restrict the institution's ability to deploy new AI systems, or escalate the matter to supervisory action. In practice, these consequences are more damaging than a fine because they constrain the institution's ability to innovate and compete.

For a bank, regulatory friction with MAS has downstream effects. Market participants notice when a bank comes under regulatory scrutiny. Investors worry about compliance risk. Counterparties may reduce credit lines. These reputational and operational costs are real. This is why forward-thinking institutions are preparing for AIRG compliance now, even though the final guidelines are not yet published. The final guidelines are very likely to resemble the consultation draft, and institutions that have already begun implementing the nine domains will have a smoother transition when compliance becomes mandatory.

Key Dates

November 2025: AIRG consultation paper released.
January 2026: Consultation feedback period closes.
Mid-2026: Final guidelines expected to be published.
From final publication: 12-month compliance window for regulated entities.

What This Means for Banks

The AIRG's most important feature is its proportionality principle. Institutions are not required to apply the same level of rigor to all AI systems. A small bank with a handful of AI use cases will face lighter compliance requirements than a universal bank running hundreds of models across lending, trading, and operations. The framework scales: impact, complexity, and reliance determine the materiality tier, and materiality determines the intensity of required controls.

This principle is both liberating and demanding. It is liberating because it means institutions do not need to boil the ocean to become compliant. They can focus their resources on high-materiality systems. It is demanding because institutions must be able to articulate, with clear evidence, why a particular system is low-materiality. Regulators will expect institutions to make this judgment soundly and be prepared to defend it in a review.

For most banks, implementation will require three simultaneous workstreams. The first is governance: establishing the board-level and senior management oversight structures that the AIRG requires. The second is technical: assessing current AI systems against the nine domains, identifying gaps, and building the testing, monitoring, and control infrastructure needed to close them. The third is contractual: renegotiating vendor agreements to include the transparency, audit, and liability provisions that the AIRG implies. All three workstreams are necessary; none can be delegated entirely to a single team.

How Corvair Helps

Corvair's methodology is purpose-built for the AIRG's most challenging requirements. The framework demands that institutions implement explicit controls over agentic AI: systems that are novel, complex, and difficult to govern using traditional risk frameworks. Corvair provides structured approaches to agentic AI identity, capability quantification, and oversight that make it possible to demonstrate compliance with the AIRG's human oversight and control domain.

Beyond agentic AI, Corvair's blast radius calculation and impact modeling provide the quantitative foundation that the risk materiality assessment domain requires. Rather than relying on qualitative judgment to classify AI systems as high, medium, or low risk, institutions can use Corvair's methodology to measure impact systematically. This creates defensible, audit-ready materiality classifications that regulators will accept.

Finally, Corvair's continuous monitoring and drift detection approach integrates with the AIRG's monitoring and incident response requirements, giving institutions the visibility and responsiveness that MAS expects.

Schedule a Briefing

Related Regulations

Singapore PDPA

Singapore's Personal Data Protection Act governs collection, use, and retention of personal data — with direct implications for AI systems processing customer information.

Read guide

IMDA Agentic AI Framework

Singapore's cross-sector governance framework for autonomous AI systems — the detailed operational playbook for AIRG's agentic AI requirements.

Read guide

Global Framework Comparison

How MAS AIRG compares to the EU AI Act, NIST AI RMF, UAE governance, and other major jurisdictions — for multi-market compliance planning.

Read guide