At a glance
Who: Singapore's IMDA.
What: Singapore's IMDA has launched the Model AI Governance Framework for Agentic AI to provide guidance to organisations on how to deploy agents responsibly, recommending technical and non-technical measures to mitigate risks, while emphasising that humans are ultimately accountable.
Why: To support the responsible development, deployment and use of AI, so that its benefits can be enjoyed by all in a trusted and safe manner. This aligns with Singapore's practical and balanced approach to AI governance, where guardrails are put in place while leaving room for innovation.
When: The framework was announced at the World Economic Forum Annual Meeting in Davos last week. IMDA intends it to be a living document and welcomes feedback from all parties to refine the framework.
Singapore has launched a model artificial intelligence (AI) governance guide to help enterprises deploy agentic AI responsibly. It announced the model at the World Economic Forum Annual Meeting in Davos last week.
Developed by the Infocomm Media Development Authority (IMDA), the framework for reliable and safe agentic AI deployment seeks to build on the governance foundations of the Model Governance Framework (MGF) for AI, which was launched in 2020.
Humans are ultimately accountable
The Model AI Governance Framework for Agentic AI provides guidance to organisations on how to deploy agents responsibly, recommending technical and non-technical measures to mitigate risks, while emphasising that humans are ultimately accountable.
According to IMDA, initiatives such as the MGF for Agentic AI support the responsible development, deployment and use of AI, so that its benefits can be enjoyed by all in a trusted and safe manner. This is in line with Singapore's practical and balanced approach to AI governance, where guardrails are put in place while leaving room for innovation.
Unlike traditional and generative AI, AI agents claim to be able to reason and take actions to complete tasks on behalf of users. This allows organisations to automate repetitive tasks, such as those related to customer service and enterprise productivity, and to drive sectoral transformation by freeing up employees' time for higher-value activities.
"As the first authoritative resource addressing the specific risks of agentic AI, the MGF fills a critical gap in policy guidance for agentic AI"
However, as AI agents may have access to sensitive data and the ability to make changes to their environment, such as updating a customer database or making a payment, their use introduces potential new risks, for example unauthorised or inaccurate actions.
IMDA says the MGF for Agentic AI offers a structured overview of the risks of agentic AI and of emerging best practices for managing those risks. It is targeted at organisations looking to deploy agentic AI, whether by developing AI agents in-house or by using third-party agentic solutions.