As many countries rush to centralise the regulation of AI, South Africa is taking a different approach, spreading responsibility across existing agencies and favouring coordination over top-down control.
On April 2, 2026, the South African Cabinet released a draft version of the policy, dated October 24, 2024, for public comment. Spearheaded by the Department of Communications and Digital Technologies (DCDT), the policy is expected to be fully implemented in the 2027/2028 financial year.
“The AI policy aims to ensure that both the benefits and risks presented by AI are evenly distributed across society and generations,” the South African Cabinet said in a statement.
No super-regulator
In countries like Nigeria and Kenya, policymakers are moving toward centralised AI oversight. Dedicated agencies, commissioners, and top-down structures are becoming the standard. Nigeria’s proposed National Digital Economy and E-Governance Bill follows a prescriptive, risk-based approach inspired by the EU AI Act. High-risk AI systems, especially in surveillance, finance, and public administration, would need licensing, audits, and annual impact assessments.
Kenya’s 2026 AI bill takes a similar risk-based path but adds a strong political dimension. With elections approaching, it targets synthetic media and AI-driven manipulation, imposing criminal penalties for non-consensual deepfakes. At the same time, it maintains flexibility for innovation through regulatory sandboxes, allowing startups to test new AI products under lighter oversight.
South Africa is doing the opposite.
Instead of creating a new regulator, South Africa’s AI policy leans on institutions already embedded within these sectors. The Financial Sector Conduct Authority (FSCA) and the South African Reserve Bank will oversee financial AI systems. The South African Health Products Regulatory Authority (SAHPRA) will handle AI in medical diagnostics. The Information Regulator retains its role as the primary enforcer of data privacy under the Protection of Personal Information Act (POPIA).
The logic is that the regulators closest to a problem are best positioned to manage it. A mining regulator understands mining risks. A financial regulator understands financial systems. Why build a new bureaucracy when the expertise already exists?
Regulating by risk
The backbone of South Africa’s AI framework is risk-tiered regulation. Not all AI systems are treated equally. Instead, they are grouped into four categories: unacceptable, high, limited, and minimal risk.
At the top end, certain applications, such as manipulative behavioural systems or forms of mass surveillance, are banned outright. High-risk systems, such as those used in hiring, lending, or healthcare, face stricter scrutiny, including audits, impact assessments, and requirements for human oversight. Lower-risk applications operate under lighter-touch rules.
The idea is to focus regulatory firepower where it matters most. Rather than imposing blanket restrictions, the system sends a clear signal: the higher the potential harm, the heavier the compliance burden.
In theory, this creates space for innovation while maintaining safeguards. In practice, it depends heavily on execution.
To hold the system together, the policy proposes a web of coordinating bodies. A National AI Coordination Office would guide implementation and set standards. Inter-departmental forums would align ministries. Advisory panels and multi-stakeholder groups would feed in technical and ethical expertise.
At the centre sits an AI Advisory Council, a non-executive body bringing together researchers, industry leaders, legal experts, and civil society. Its role is to advise, not enforce.
And that is the crux of the approach: none of these bodies has binding powers. They can guide, recommend, and coordinate, but they cannot compel action.
The enforcement gap
This design, however, introduces a fundamental tension. Distributed oversight offers flexibility and sector-specific insight, but it also risks fragmentation.
The framework is clear on what must be done: classify risk, conduct audits, ensure transparency. It is less clear on who ultimately ensures compliance. Enforcement is left to existing regulators, each with different capacities, priorities, and levels of technical expertise.
The result could be uneven oversight. Financial regulators, often well-resourced, may enforce rules rigorously. Other sectors might lag. Gaps and overlaps may emerge. Companies, in turn, may learn to navigate these inconsistencies, exploiting the weaker links in the system.
Capacity is another constraint. Risk-tiered regulation is technically demanding. It requires the ability to assess evolving AI systems, monitor real-world performance, and adapt rules as technologies change. Many regulators are already stretched. Building these capabilities will take time, and money.
Even the act of classification is not straightforward. AI systems evolve. A chatbot that starts as a low-risk tool can become a high-stakes decision engine as it scales or integrates new data. Determining risk levels requires constant reassessment, raising the potential for inconsistent rulings across sectors.
For businesses, that creates uncertainty. A product deemed compliant today might face stricter rules tomorrow.
Beyond governance, the framework is also an industrial strategy. It emphasises the need for local datasets, African language processing, and the integration of indigenous knowledge systems.
The goal is to make AI systems more relevant and less biased. Models trained on foreign data often fail to capture local realities, reinforcing exclusion rather than solving it. By investing in local data infrastructure, South Africa hopes to build a more inclusive AI ecosystem.
But this ambition adds another layer of complexity. Data governance, privacy, and data-sharing frameworks must now be coordinated across the same fragmented system that governs AI itself.