
Sovereignty AI: The real test of SA’s AI strategy

The draft AI policy is more than a procedural embarrassment – it’s a warning that SA must secure enough control over the AI stack to make that policy enforceable.
By Bramley Maetsa, IT digital and innovation enablement lead, Sasol.
Johannesburg, 07 May 2026

South Africa’s first draft AI policy did more than stumble over a credibility problem. Even before it was withdrawn, it exposed a deeper national question: can a country govern AI responsibly if it does not have sufficient control over the layers on which scaled AI depends – energy, chips, compute, data centres, foundation models, data and applications?

The withdrawal of the draft, after concerns about fictitious or unverifiable references, should not be treated as a procedural embarrassment alone. It is a warning.

AI governance cannot rest on good intentions, ethical language or policy documents that have not withstood scrutiny. It requires disciplined drafting, independent checking and human oversight.

The same is true of AI sovereignty.

A country can have responsible AI principles on paper and still remain strategically dependent in practice. It can regulate AI, but still rely on foreign-owned clouds, foreign compute, foreign foundation models and foreign commercial priorities.

That is the real test now facing South Africa: not only whether it can write a better AI policy, but whether it can secure enough control over the AI stack to make that policy enforceable.

South Africa was right to pursue an AI governance model. But governance is only half the job. The harder question is whether the country controls enough of the AI stack to enforce its policy choices and ensure the economic, social and public sector benefits of AI are not captured elsewhere.

The global sovereign AI debate has moved past theory. Countries are asking which parts of the AI stack are strategic, which can be sourced externally, and which must remain under national control.


AI is built across layers: energy, chips, compute, data centres, foundation models, data, applications and public sector services. If a country does not control the critical layers, it becomes a consumer in someone else’s industrial strategy.

South Africa is not alone in facing this choice. The World Economic Forum and Bain have argued that the US and China account for around 65% of aggregate global AI investment. That is the world in which smaller and resource-constrained economies must make AI policy: not from unlimited resources, but through practical choices about where to partner, where to regulate, where to build local capability and where national control is non-negotiable.

For South Africa, the decisive issue is not ownership alone. It is control rights. Who owns the infrastructure? Whose law applies? Who holds the encryption keys? Who can access the data? Who can audit the models? Who controls pricing? Who can move a strategic workload if commercial or geopolitical conditions change?

President Cyril Ramaphosa has already placed digital infrastructure in the national growth story, pointing to 55 data centres already built and more than R50 billion of expected investment over three years. That is positive. But investment alone does not create sovereignty.

A country can host infrastructure without controlling the terms on which it is used. If the infrastructure supporting critical public and strategic workloads is foreign-owned, foreign-operated and subject to foreign legal exposure, South Africa may have local capacity without real control.

This does not mean South Africa must try to build everything alone. That would be unrealistic. The country will still need global cloud providers, international partnerships, open-source communities, foreign capital and access to leading technologies.

Sovereignty is not isolation; it is strategic interdependence: knowing what to build, what to buy, where to partner and where to insist on national control.

That matters most in the public sector. The policy should classify AI workloads by risk. Low-risk workloads can use ordinary commercial cloud controls. High-risk workloads, such as identity, justice, health, tax, social protection, policing, defence and critical infrastructure, require a higher standard. They should run only where South African jurisdiction, auditability, data access, encryption control and operational resilience are enforceable.
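The risk-tier logic described above can be expressed as policy-as-code. The sketch below is purely illustrative – the workload categories come from this article, but the control names, function names and tier rules are invented assumptions, not part of any actual South African policy instrument.

```python
# Illustrative sketch: tiering public sector AI workloads by risk and
# checking whether a hosting environment satisfies the controls that
# tier requires. All control names here are hypothetical.

HIGH_RISK = {"identity", "justice", "health", "tax", "social_protection",
             "policing", "defence", "critical_infrastructure"}

# Sovereign-grade conditions the article argues must be enforceable
# before a high-risk workload may run in a given environment.
SOVEREIGN_CONTROLS = {"sa_jurisdiction", "auditability", "data_access",
                      "encryption_control", "operational_resilience"}

def risk_tier(workload: str) -> str:
    """Return 'high' for strategic public sector workloads, else 'low'."""
    return "high" if workload in HIGH_RISK else "low"

def may_deploy(workload: str, environment_controls: set[str]) -> bool:
    """Low-risk workloads may use ordinary commercial cloud controls;
    high-risk workloads require every sovereign-grade control."""
    if risk_tier(workload) == "low":
        return True
    return SOVEREIGN_CONTROLS <= environment_controls

# A commercial cloud offering audit and data-access rights, but not
# local jurisdiction or key control, fails for a health workload.
commercial = {"auditability", "data_access"}
print(may_deploy("health", commercial))        # False
print(may_deploy("weather_chatbot", commercial))  # True
```

The point of writing the rule this way is that it makes the policy testable: an environment either satisfies the full control set for a high-risk workload or the deployment is refused, with no discretionary middle ground.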

POPIA can govern data handling. It cannot create sovereign capacity where none exists. Cyber security rules can protect systems. They cannot guarantee strategic independence if the underlying compute, models and operational control sit elsewhere. A serious AI strategy must say plainly that high-risk public-sector AI workloads need sovereign-grade operating conditions.

Fortunately, South Africa does not need to build from scratch. The local ecosystem is not empty. Lelapa AI, Wits MIND, UCT’s African Compute Initiative and UP’s AfriDSAI point to credible foundations across local model development, African-language AI, research talent and compute capacity. These examples should be recognised carefully, without turning the policy into a list of favoured institutions.

Supporting local capability is not an academic courtesy. It is industrial policy. It is how South Africa reduces long-term dependency on foreign platforms and builds genuine capability in the technologies that will shape public services, competitiveness and national resilience.

The revised AI strategy must therefore pass five policy tests:

1. Define AI sovereignty as enforceable control, not symbolic ambition.
2. Set minimum sovereign terms for strategic AI infrastructure investment, including jurisdiction, audit rights, resilience, skills transfer and exit rights.
3. Classify public sector AI workloads by risk and require sovereign-grade controls for high-risk use cases.
4. Create sustained support for local research institutions, model builders and African-language AI capability.
5. Align AI policy with energy and transmission planning, because compute without reliable power is not a strategy.

Private projects such as Teraco’s 120MW solar plant show that the market is already moving, but private energy projects cannot substitute for national energy and transmission planning. AI sovereignty at the infrastructure layer needs policy coordination, not only corporate problem-solving.

These are not extreme positions. They are practical choices for a country with limited resources operating in an AI economy shaped by global power, compute concentration and platform dependency.

South Africa cannot control every layer of the AI stack. But it must distinguish between three categories: layers it must control directly; layers it can govern through sovereign terms such as jurisdiction, audit rights, key control, data residency and exit rights; and layers it can safely access through trusted partnerships.

Other countries are not waiting for perfect policy before acting. They are funding national models, building sovereign compute, strengthening domestic AI ecosystems and aligning AI with industrial strategy. South Africa does not need to copy those models, but it must make its own choice: remain a downstream user, or become a country that shapes its own AI future.

The first AI policy draft failed a credibility test. The revised policy must not fail the sovereignty test.

South Africa does not need to build everything itself. But it must be clear-eyed about the world now taking shape. AI capability will be heavily influenced by countries and companies with far greater compute, capital and model-building power. That makes sovereignty more important, not less.

The task is not to pretend that South Africa can match the superpowers layer for layer. The task is to make deliberate choices: which parts of the AI stack it must control directly; which parts it must govern through law, procurement, audit rights, data controls, key custody and exit rights; and which parts it can safely access through trusted partnerships. Anything outside those choices becomes unmanaged dependency, not strategy.

Without sovereignty, adoption becomes dependency. And dependency is not an AI strategy.
