
The CIO's case for South Africa's AI governance model

With mature regulators in every major sector, SA chose to govern AI through institutions that already exist, have teeth and understand the industries they oversee.
By Bramley Maetsa, IT digital and innovation enablement lead, Sasol.
Johannesburg, 22 Apr 2026
Bramley Maetsa, IT digital and innovation enablement lead at Sasol.

As OpenAI calls for centralised industrial policy and the world looks to Brussels for a blueprint, South Africa has quietly made a smarter choice: governing AI where value is actually created.

Two documents reveal the contrast. One is OpenAI's Industrial Policy for the Intelligence Age. The other is South Africa's Draft National AI Policy. On the surface, they come from very different worlds. Read together, however, they raise a more important question: what kind of AI actually works, and for whom?

OpenAI's vision is sweeping and centralised. South Africa has chosen something more textured. Its draft policy is built around a defining structural choice: there will be no single AI regulator. Governance will instead be distributed across existing sector authorities, embedded in institutions that already understand where AI risk is real and where value is created.

As a technology executive, I believe this is not just a pragmatic compromise. It is the right model for our context.

Breaking from Brussels

For decades, South Africa's regulatory instinct was to look to Brussels. POPIA carries the unmistakable DNA of Europe's GDPR. When AI governance arrived on the agenda, many expected a similar pattern. The department did benchmark extensively, studying the Netherlands, Chile, Thailand, Norway, Rwanda and the EU AI Act itself. But something different happened.

Deputy director-general Alfred Mmoto told Parliament that the department shared concerns about the EU model and was instead pursuing a middle-of-the-road approach. Parliament heard the tension directly: India treats heavy regulation as a brake on innovation, while the EU leans restrictive. South Africa landed deliberately in the middle.


That matters. The EU AI Act created new enforcement bodies, including an AI Board and an AI Office, along with sweeping cross-sector obligations. For South Africa to import that architecture wholesale would mean building a new regulatory institution with no enforcement track record, limited sector depth and heavy resource demands.

Instead, South Africa chose to govern AI through institutions that already exist, already have teeth and already understand the industries they oversee.

South Africa studied Brussels and then deliberately chose a different path, not out of defiance but out of design thinking.

Governing where value is created

The foundational argument is simple: AI is not a sector. It is a general-purpose capability that will disrupt every existing industry differently, at different speeds and with different consequences. That distinction has profound implications for governance.

If AI were an industry, a single regulator would make sense. But AI in a coal gasification plant is not the same phenomenon as AI in a retail bank or an emergency room. It inherits the risk profile, the ethical obligations and the safety requirements of the sector it enters. Centralised governance produces rules that fit no industry particularly well.

South Africa already has mature regulators in every major sector; mining, chemicals, energy, finance and health all show why those bodies are the better governance home for AI.

A single AI deployment can activate multiple regulatory relationships at once. An AI system optimising combustion in an energy facility touches NERSA's energy mandate, the DFFE's air-quality enforcement and occupational safety obligations simultaneously.

An algorithm flagging workers for medical surveillance on the basis of chemical exposure readings must answer to hazardous-chemical regulations, occupational health law and POPIA in the same breath. A diagnostic model used in healthcare cannot be governed as if it were merely another software feature; it sits inside an existing clinical and regulatory environment.

That is the real point. When AI enters these environments, it does not create entirely new regulatory relationships. It activates existing ones.

The Mine Health and Safety Inspectorate already understands that an AI seismic monitoring system failing silently underground carries a categorically different consequence from a product recommendation engine getting a suggestion wrong. No centralised AI regulator could hold that contextual depth across all sectors. Accountability flows from expertise, and expertise lives in the sector.

What this means for CIOs right now

The policy is targeted for finalisation in 2026/27, with sector-specific regulations expected to follow thereafter, though Parliament has already questioned whether those timelines are achievable. The 60-day public comment period is therefore not a formality. It is an invitation. Technology executives who engage now can still shape how sector strategies are framed on explainability, algorithmic auditing and supervisory oversight.

In the immediate term, CIOs should map their AI deployments against the legal and regulatory landscape that already exists. POPIA's provisions on automated decision-making, the Cybercrimes Act's relevance to AI-enabled threats and the Employment Equity Act's implications for algorithmic hiring are all live today.

Your primary regulatory interface is not hypothetical. It is already determined by your sector. Know your regulator, and build your AI governance so that it can speak that language.
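To make the mapping exercise concrete, here is a minimal sketch of the kind of deployment-to-regulator inventory a CIO might maintain. Every name, system, sector mapping and statute list below is a hypothetical illustration, not a definitive compliance matrix; real obligations must be confirmed with legal counsel and the relevant authority.

```python
from dataclasses import dataclass, field

# Hypothetical sector-to-regulator map for illustration only.
SECTOR_REGULATORS = {
    "energy": ["NERSA", "DFFE"],
    "mining": ["Mine Health and Safety Inspectorate"],
    "finance": ["FSCA", "Prudential Authority"],
}

# A statute that applies to AI systems regardless of sector (example).
ALWAYS_APPLICABLE = ["Cybercrimes Act"]


@dataclass
class AIDeployment:
    """One AI system in the organisation's inventory."""
    name: str
    sector: str
    processes_personal_data: bool = False
    extra_statutes: list = field(default_factory=list)


def regulatory_interfaces(d: AIDeployment) -> dict:
    """Return the sector regulators and statutes a deployment must answer to."""
    statutes = set(ALWAYS_APPLICABLE) | set(d.extra_statutes)
    if d.processes_personal_data:
        statutes.add("POPIA")  # automated decision-making provisions apply
    return {
        "regulators": SECTOR_REGULATORS.get(d.sector, []),
        "statutes": sorted(statutes),
    }
```

Even a simple inventory like this forces the key question the policy poses: which existing regulator already owns the risk this system creates?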

There is also a strategic opportunity here. South Africa’s approach creates room for innovation sandboxes, as demonstrated in fintech, and for early movers to help shape the standards that will later govern their industries. The organisations that engage proactively will not only be better prepared. They will also have disproportionate influence over what responsible AI comes to mean in their own domains.

One risk deserves honest acknowledgment. Multi-regulator models can fragment. Without strong coordination, sector regulators may develop conflicting standards, creating compliance confusion for organisations operating across industries.

The coherence of this model will depend heavily on how well the AI Office and its coordinating structures function. CIOs should watch this carefully and advocate for clear regulatory interfaces and published governance roadmaps.

A model built for where we are

South Africa is not the EU. It cannot set up a comprehensive new AI regulator, draft novel cross-sector legislation and enforce it coherently within the timelines that AI deployment demands. What it does have are sector regulators with real expertise, a legal framework already governing significant AI risk and a policy that is relatively honest about the country's constraints.

The multi-regulator model is not a second-best option. It is a governance architecture that places accountability where expertise lives: at the edges where AI value and AI harm are both produced. Every major South African industry, from banking to aviation to agriculture, already carries a dense regulatory footprint built over decades.

As IT leaders, we should welcome this model and engage with it. The policy window is open. The question is whether the technology leadership community will walk through it.
