The AIRG requirement your team doesn't know how to handle.
A focused 2-day workshop for AI teams, risk teams, and technology leadership on the specific risks introduced by AI agents — autonomous AI systems with tool access and decision-making authority — and the controls MAS expects. It addresses the part of the AIRG that most financial institutions are least prepared for.
The AIRG explicitly identifies AI agents as introducing heightened risk, recognising that agents with tool access can autonomously execute actions with real-world consequences beyond those of traditional predictive models.
What makes AI agents fundamentally different from traditional AI/ML models.
Why an agent's effective permissions are emergent, not static.
Why agents can execute valid logic on invalid premises.
Current agentic processes typically operate at 1–1.5 sigma (a defect rate of roughly 50–69% under the standard Six Sigma convention), far below the quality threshold for any mission-critical process.
MAS AIRG specific requirements for AI agents: failure modes, enhanced testing, human oversight.
Trading agents, customer service agents, underwriting agents — real-world failure scenarios examined.
How to register and profile autonomous agents using Corvair's ten-layer governance model.
Just-in-time privilege, zero standing access, emergency kill switches.
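To make the access-control pattern above concrete, here is a minimal Python sketch (hypothetical class and method names, not the workshop's reference implementation): the agent holds no standing permissions, each grant is scoped and time-boxed, and a kill switch revokes everything at once.

```python
import time
from dataclasses import dataclass


@dataclass
class Grant:
    scope: str          # e.g. "payments:read"
    expires_at: float   # epoch seconds; every grant is time-boxed


class AgentSession:
    """Zero standing access: every permission is granted just-in-time."""

    def __init__(self) -> None:
        self._grants: list[Grant] = []
        self._killed = False

    def grant(self, scope: str, ttl_seconds: float) -> None:
        # Just-in-time privilege: issued per task, never permanent.
        self._grants.append(Grant(scope, time.time() + ttl_seconds))

    def kill(self) -> None:
        # Emergency kill switch: revoke all grants immediately.
        self._killed = True
        self._grants.clear()

    def allowed(self, scope: str) -> bool:
        if self._killed:
            return False
        now = time.time()
        # Expired grants confer nothing, so effective permissions
        # shrink over time rather than accumulate.
        return any(g.scope == scope and g.expires_at > now
                   for g in self._grants)
```

The point of the sketch is the default-deny posture: an agent that has not been explicitly granted a scope within its time window can do nothing, which is the inverse of the standing service-account credentials most deployments use today.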
How to audit whether an agent's reasoning chain is valid — using composable lenses, decision validity warrants, and the SCAR scoring rubric.
Defining sigma targets for agentic workflows, measuring data sigma and process sigma, applying DMAIC.
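As an illustration of the measurement step above (not taken from the workshop materials), a minimal Python sketch converting observed defect counts into a process sigma level, using the conventional 1.5-sigma long-term shift from Six Sigma practice:

```python
from statistics import NormalDist


def dpmo(defects: int, opportunities: int) -> float:
    """Defects per million opportunities."""
    return defects / opportunities * 1_000_000


def process_sigma(defects: int, opportunities: int,
                  shift: float = 1.5) -> float:
    """Short-term sigma level implied by an observed defect rate.

    The inverse normal CDF of the yield gives the long-term Z-score;
    adding the conventional 1.5-sigma shift gives the short-term level.
    """
    yield_rate = 1 - defects / opportunities
    return NormalDist().inv_cdf(yield_rate) + shift


# Example: an agentic workflow failing 310 of 1,000 task executions
# sits near 2 sigma; a mission-critical target of 4.5 sigma allows
# only about 1,350 defects per million.
sigma = process_sigma(310, 1000)
```

The same arithmetic drives the Measure phase of DMAIC: you establish the baseline sigma of the agentic workflow, then track whether controls move it toward the target.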
Adversarial testing approaches, red teaming, failure scenario design.
In-the-loop vs. on-the-loop vs. out-of-the-loop — when each applies.
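One way to make the three oversight modes concrete is a routing rule from action characteristics to mode. The mapping below is illustrative only (it is not prescribed by the AIRG or the workshop): high-materiality actions get a human in the loop, irreversible ones get a human on the loop, and reversible low-stakes actions run with after-the-fact review.

```python
from enum import Enum


class Oversight(Enum):
    IN_THE_LOOP = "human approves each action before execution"
    ON_THE_LOOP = "human monitors and can intervene in real time"
    OUT_OF_THE_LOOP = "human reviews outcomes after the fact"


def oversight_for(reversible: bool, materiality: str) -> Oversight:
    """Illustrative mapping from action traits to an oversight mode."""
    if materiality == "high":
        return Oversight.IN_THE_LOOP      # e.g. loan approvals, large trades
    if not reversible:
        return Oversight.ON_THE_LOOP      # irreversible but lower-stakes
    return Oversight.OUT_OF_THE_LOOP      # reversible, routine actions
```

In practice an institution would key this off its own risk taxonomy, but the structural point holds: the oversight mode should be a deliberate, auditable function of the action, not a property of the agent as a whole.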
Participants leave with an action plan tailored to their institution's agentic AI deployments.
No other consultant in Singapore can walk into a bank and teach agentic AI governance from their own original methodology. Cumulative operational authority, epistemic drift detection, and composable auditable reasoning are the curriculum — not synthesised from other people's research, but built from first principles and protected by patent filings.
Give your AI, risk, and technology teams the knowledge they need to govern autonomous agents — taught by the practitioner who developed the methodology.
Schedule a Briefing

A structured evaluation of your institution's current AI governance posture against MAS AIRG requirements.

A tailored governance framework that addresses your institution's specific regulatory and operational requirements.

Ongoing advisory support for institutions that need continuous access to agentic AI governance expertise.