
IMDA Agentic AI Framework Guide

In January 2026, Singapore's IMDA published the world's first governance framework purpose-built for agentic AI systems — setting the global baseline for responsible autonomous AI deployment in financial services and beyond.

What Is This Framework?

In January 2026, Singapore's Infocomm Media Development Authority (IMDA) published the first governance framework specifically designed for agentic AI systems. Released under the stewardship of Minister Josephine Teo and unveiled at the World Economic Forum, this framework represents a watershed moment in AI regulation: the first time any jurisdiction has created governance guidance purpose-built for AI systems that act autonomously rather than merely advise or predict. The framework is voluntary, not statutory, so it does not carry legal penalties on its own. However, it establishes clear expectations and best practices that regulators, enterprises, and industry bodies worldwide are beginning to reference as the baseline for responsible agentic AI deployment.

Why Agentic AI Needs Its Own Framework

Traditional AI systems are fundamentally different from agentic AI, and this difference demands regulatory separation. A traditional predictive model ingests data and produces a score or classification (a loan approval probability, a fraud risk rating, or a customer churn forecast). A human decision-maker then receives that output and decides what to do. Agentic AI operates on an entirely different premise. These systems can autonomously browse the web, execute transactions, compose and send communications, call application programming interfaces, delegate tasks to other agents, and iterate their own behavior based on outcomes. An agentic AI system deployed in a banking environment might autonomously monitor market conditions, rebalance a customer's portfolio within preset parameters, and execute trades (all without human intervention at each step).

This autonomy introduces categories of risk that traditional AI governance frameworks do not adequately address. The question is no longer just "Is the model accurate?" but "What can the system do? Who authorized those actions? What happens when the system makes an irreversible decision?" An agentic AI system can lock in harm at machine speed, so by the time a human reviews the outcome, the damage may be operational, financial, or reputational. The IMDA framework was designed precisely to address this gap: to provide guardrails for autonomous agency before deployment, not merely auditing of decisions after the fact.

The Four Core Dimensions

The IMDA framework organizes agentic AI governance around four interdependent dimensions, each reflecting a different layer of control and accountability.

Risk Assessment and Bounding represents the foundational layer. Before deploying any agentic system, organizations must conduct a rigorous assessment of the risks that system could create: What harm could occur if the agent acts beyond its intended scope? What financial, operational, or compliance damage is possible? This assessment drives the creation of explicit boundaries around the agent's autonomy. Bounding means defining the agent's tool access (which APIs can it call?), data access (what customer information can it view or use?), and decision authority (what actions can it take without human approval?). A wealth management agent might be bounded to execute trades only within a defined dollar range and asset class, never to liquidate positions beyond predefined thresholds, and to escalate any decision that crosses a risk boundary to a human advisor.
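The bounding idea above can be sketched in code. The sketch below is illustrative, not drawn from the framework text: the `AgentBounds` type, field names, and dollar threshold are all assumptions chosen to mirror the wealth-management example, showing how tool access, asset scope, and decision authority become explicit, machine-checkable limits with escalation as the default.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentBounds:
    """Explicit limits on a trading agent's autonomy (illustrative names)."""
    allowed_asset_classes: frozenset  # scope of data/market access
    max_trade_usd: float              # largest single trade without escalation
    allowed_actions: frozenset        # decision authority, e.g. buy/sell only

def check_action(bounds: AgentBounds, action: str,
                 asset_class: str, amount_usd: float) -> str:
    """Return 'execute' if the action is inside bounds, else 'escalate'.

    Any request that crosses a risk boundary is routed to a human
    advisor rather than refused silently or executed anyway.
    """
    if action not in bounds.allowed_actions:
        return "escalate"
    if asset_class not in bounds.allowed_asset_classes:
        return "escalate"
    if amount_usd > bounds.max_trade_usd:
        return "escalate"
    return "execute"

bounds = AgentBounds(
    allowed_asset_classes=frozenset({"equities", "bonds"}),
    max_trade_usd=50_000.0,
    allowed_actions=frozenset({"buy", "sell"}),
)
check_action(bounds, "sell", "equities", 10_000.0)   # within bounds: execute
check_action(bounds, "sell", "equities", 250_000.0)  # over the limit: escalate
```

The point of the design is that the boundary lives in code the agent cannot rewrite, and that every out-of-bounds request escalates rather than fails silently.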

Human Accountability ensures that agentic AI does not become a black box of autonomous decision-making divorced from human responsibility. The framework requires organizations to maintain explicit checkpoints: gates in the agent's workflow where a human must review and approve the system's proposed action before it executes. These checkpoints are not optional oversight; they are structural safeguards against automation bias, the dangerous tendency for human decision-makers to defer uncritically to algorithmic recommendations. In a KYC (Know Your Customer) context, this might mean an agent can flag a suspicious transaction pattern and recommend escalation. However, a compliance officer must explicitly approve any action that blocks or reports that transaction. Accountability also means the chain of decision-making is traceable: every agent action must be loggable and attributable to either the agent's programmed logic or a specific human approver.

Technical Controls and Processes translate the above principles into concrete safeguards. This dimension covers baseline testing protocols (the agent must be stress-tested against edge cases and adversarial scenarios before deployment), whitelisting of external services (the agent can only call approved, pre-vetted APIs and data sources), and lifecycle controls (mechanisms to update, pause, or decommission agents without disrupting the broader system). Technical controls also address agent-to-agent coordination: if one agent can invoke another agent, what governance prevents cascading failures? This dimension is where operational risk meets technical architecture.
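The whitelisting control described above can be sketched as a single chokepoint through which all outbound calls pass. The endpoint URLs and error type below are invented for illustration; the real request logic is stubbed out. The technique is simply that an un-vetted destination raises an error instead of being contacted.

```python
# Pre-vetted external services the agent may call (illustrative URLs).
APPROVED_SERVICES = {
    "https://rates.example-bank.internal",
    "https://sanctions.example-screening.com",
}

class ServiceNotWhitelistedError(Exception):
    """Raised when an agent attempts to reach an un-vetted endpoint."""

def call_external_service(url: str, payload: dict) -> dict:
    """Refuse any outbound call whose base URL is not on the approved list."""
    base = url.split("?", 1)[0].rstrip("/")
    if base not in APPROVED_SERVICES:
        raise ServiceNotWhitelistedError(f"blocked un-vetted call: {url}")
    # ... perform the real request here; stubbed out in this sketch.
    return {"url": base, "status": "ok"}

call_external_service("https://rates.example-bank.internal/", {"pair": "SGD/USD"})
```

Routing every call through one audited function also gives the lifecycle controls a natural seam: pausing or decommissioning an agent can be enforced at the same chokepoint.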

End-User Responsibility recognizes that transparency is a prerequisite for trust. End users (whether customers whose transactions an agent processes or employees who rely on agent recommendations) must understand that they are interacting with an AI system and what capabilities and limitations that system has. This is not merely a disclosure requirement but an educational imperative. Customers should understand whether their wealth advisor is a human, a human aided by AI, or an autonomous agent. Employees should understand what decisions an agent can make on their behalf and what their recourse is if that agent errs. End-user responsibility also encompasses the right to contest or override agent decisions and the human support available when automated processes fail.

How It Relates to MAS AIRG

The Monetary Authority of Singapore (MAS) published its AI Risk Governance (AIRG) framework in September 2025, establishing expectations for AI governance across the entire Singapore financial sector. For MAS-regulated institutions, the IMDA agentic AI framework and MAS AIRG are complementary, not competing. MAS AIRG provides the financial-sector-specific requirements: governance structures, risk appetite frameworks, model risk management, and fairness and bias controls tailored to banking. The IMDA framework, by contrast, goes deeper into the specific hazards and control architectures of systems that act autonomously.

In practice, a bank regulated by MAS should treat the IMDA framework as the detailed playbook for implementing AIRG's guidance on agentic AI. Where MAS AIRG might require institutions to "establish controls proportionate to the autonomy and impact of deployed AI systems," the IMDA framework supplies the vocabulary and methodology for what those controls should look like. An institution working toward MAS AIRG compliance for its agentic AI deployments would systematically work through the IMDA framework's four dimensions to operationalize AIRG's expectations. The two frameworks are designed to work together, with IMDA providing the technical and structural depth that AIRG references but does not elaborate.

Penalties

Because the IMDA framework is voluntary, there are no direct statutory penalties for non-compliance. An institution that ignores the IMDA framework will not face a fine from IMDA itself. However, this does not mean the framework is toothless. Organizations remain fully legally accountable for their agents' behavior under existing laws. If an agentic AI system violates the Personal Data Protection Act by misusing customer data, the fact that the framework is voluntary provides no legal defense. Similarly, if an agent executes a transaction that breaches MAS AIRG requirements or violates a customer's contractual rights, the organization bears liability. The voluntary nature of the framework should be read as an invitation to engage proactively, not as a safe harbor from legal accountability.

What This Means for Banks

For banking institutions, the IMDA framework transforms agentic AI from an exploratory technology to a governable operational tool. The framework validates the business case for agentic AI (autonomous customer service agents, intelligent trade execution systems, continuous compliance monitoring) by providing a structured path to deployment that does not require avoiding risk but rather managing it explicitly.

The practical implications are substantial. Banks must invest in governance infrastructure before deployment, not after incidents occur. This means risk assessment methodologies specific to autonomous systems, explicit bounding architectures in code, and human approval workflows that integrate into existing decision-making cultures. It also means that a bank's competitive advantage in agentic AI will increasingly flow from governance maturity, not merely from technical capability. An institution with robust agentic AI controls and clear accountability chains can move faster and innovate more safely than a competitor that rushes to deployment without this foundational work.

For compliance and risk functions, the framework provides a common language. Rather than treating each agentic AI project as a governance one-off, the four dimensions give compliance teams a structured framework to apply consistently across the institution. This standardization reduces both governance overhead and the risk of missed hazards.

How Corvair Helps

Corvair's agentic AI platform is architected from the ground up to align with the IMDA framework's four dimensions. Built-in risk assessment and bounding enable institutions to define agent autonomy explicitly before deployment. Logging and accountability features ensure every agent action is traceable and human-approvable. Corvair's integration with MAS AIRG and institutional governance workflows means that the IMDA framework is not an external compliance overlay; it is embedded in how agents operate.

Schedule a Briefing

Related Regulations

MAS AIRG

Singapore's comprehensive AI risk governance framework for financial institutions — the binding sector-specific framework that IMDA's agentic AI guidance operationalizes.

Read guide

Singapore PDPA

Singapore's Personal Data Protection Act — the baseline legal requirement for all AI systems processing personal data of Singapore residents.

Read guide

UAE AI Governance

The UAE's multi-authority AI governance landscape — including DIFC Regulation 10 on autonomous systems and the Federal PDPL.

Read guide