Singapore's new framework offers guidance for organizations on deploying autonomous AI systems responsibly.
Singapore’s Infocomm Media Development Authority (IMDA) released the Model AI Governance Framework for Agentic AI on January 22, 2026, providing structured, non-binding guidance for organizations to deploy agentic AI systems responsibly. The framework was announced by Singapore’s Minister for Digital Development and Information, Mrs Josephine Teo, at the World Economic Forum in Davos, Switzerland.
The framework surveys the risks associated with agentic AI and emerging best practices for managing them. The guidance is intended for organizations planning to deploy agentic AI, whether built in-house or sourced from external providers.
The framework outlines four governance dimensions organizations should consider. First, it calls for assessing and bounding risks up front, including selecting appropriate use cases and placing limits on an agent’s autonomy, tools, and data access. Second, organizations should make humans meaningfully accountable for agentic AI by defining key checkpoints where human approval is required. Third, it recommends implementing technical controls and processes throughout the agent lifecycle, such as baseline testing and access controls. Fourth, the guidance encourages enabling end-user responsibility by providing transparency into agent actions and user training.
The framework explicitly emphasizes that humans remain ultimately accountable for the behavior and impact of agentic AI systems. The document builds on Singapore's earlier Model AI Governance Framework, introduced in 2020, and reflects input from government agencies and the private sector.
“As the first authoritative resource addressing the specific risks of agentic AI, the MGF fills a critical gap in policy guidance for agentic AI,” said April Chin, Co-Chief Executive Officer of Resaro, in a corresponding press release.
IMDA noted the framework is a living document, welcoming feedback and case studies to refine the guidance and help demonstrate responsible deployment of agentic AI. The authority is also developing additional guidelines on testing agentic AI applications, building on its existing starter kit for testing large language model-based systems.