South Africa is poised to adopt a sector-specific, risk-based approach to AI policy and regulation. Instead of adopting a prescriptive “regulator” approach like the EU did, or a permissive “innovator” approach as adopted by India, Cabinet is considering the integration of AI controls into existing legislation and regulatory frameworks while simultaneously relying on established international risk management standards from NIST, ISO, OECD and UNESCO. This “middle road” approach may be more suitable given South Africa’s current social priorities, but there are risks given our economic circumstances and regulatory capacity.
The Department of Communications and Digital Technologies (DCDT) released its Artificial Intelligence Planning Discussion Document in October 2023, followed by the National AI Policy Framework (NAPF) in August 2024. The NAPF received mixed reviews, ranging from strategic optimism to significant practical concerns about its implementation, concerns that are amplified by the current global state of AI in 2026.
The NAPF is lauded for a methodology that explicitly weighs South Africa’s historical inequalities alongside its ambitious vision of using AI to transform South Africa into a future state where poverty, unemployment and inequality are overcome. The legal community welcomed the NAPF’s commitment to human-centred AI and transparency, its harmonisation with global standards, and its positioning of South Africa as a potential regional leader. The 12 strategic pillars contained in the NAPF are seen as a unified roadmap that should focus the national response on talent, infrastructure, ethics and R&D.
Critical observations regarding the NAPF include the fear of regulatory overlap under the proposed multi-regulator model, which may lead to a lack of accountability for compliance enforcement across sectors. The NAPF also fails to address talent retention, overlooking the migration of local AI expertise to international markets. Crucially, the NAPF was released in late 2024, yet there has been no further movement towards a dedicated AI Act since then. As a result, the NAPF has not kept pace with global AI developments in several key areas, according to Deloitte’s 2026 report on the state of AI in the enterprise:
- Agentic AI – AI agents acting autonomously are now the global standard. The NAPF is largely focused on weak AI and “human-in-the-loop” oversight; it needs to be revised to include guardrails for autonomous agents, which requires governance by design instead of retrospective auditing.
- Work redesign – talent strategies need to build AI fluency in employees and (more importantly) rearchitect roles, workflows and career paths around AI. The NAPF identifies capacity building as important but falls short of mandating job re-architecture.
- Sovereign AI – where AI is built matters as much as what it can do and who owns the technology. What matters in the boardroom is strategic independence: by building on infrastructure you control, using your own data, models, talent and ecosystem, you can innovate securely and responsibly. The NAPF includes a digital infrastructure pillar but accepts reliance on foreign models, omitting the need for strategic independence entirely.
- Physical AI – a class of AI systems situated at the intersection of AI, machine learning, sensors, controls and robotics. Globally, physical AI is already embedded into operations, and the trend is growing rapidly. The NAPF mentions safety concerns in industrial environments but does not fully grasp the expanding role of physical AI in the mining and manufacturing industries.
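The “governance by design” guardrails called for in the agentic AI point above can be made concrete with a minimal sketch. The action names, categories and decision rules below are purely illustrative assumptions, not drawn from the NAPF or any standard; the point is that an autonomous agent’s actions are gated by policy before execution, rather than audited after the fact.

```python
from dataclasses import dataclass, field

# Hypothetical policy tiers: which agent actions may run autonomously,
# and which must be escalated to a human approver. Anything unlisted
# is denied by default ("governance by design").
AUTONOMOUS_ALLOWLIST = {"read_record", "summarise_document"}
HUMAN_APPROVAL_REQUIRED = {"transfer_funds", "delete_record"}

@dataclass
class AgentAction:
    name: str
    payload: dict = field(default_factory=dict)

def governance_gate(action: AgentAction) -> str:
    """Decide, before execution, how a proposed agent action is handled."""
    if action.name in AUTONOMOUS_ALLOWLIST:
        return "allow"      # low-risk: the agent may proceed
    if action.name in HUMAN_APPROVAL_REQUIRED:
        return "escalate"   # high-risk: route to a human approver
    return "block"          # unknown actions are denied by default

print(governance_gate(AgentAction("summarise_document")))             # allow
print(governance_gate(AgentAction("transfer_funds", {"amount": 100})))  # escalate
print(governance_gate(AgentAction("self_replicate")))                 # block
```

The design choice worth noting is the default-deny stance: an agent gains capabilities only by explicit policy, which is the inverse of retrospective auditing, where everything runs and exceptions are investigated later.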
Several societal risks are yet to be addressed locally by the NAPF. Globally, there is concern about crossing the chasm from AI experimentation (access) to AI adoption (activation) in the enterprise. Leaders who treat AI pilots as stepping stones to production, rather than as isolated experiments, are more likely to reach durable impact much faster. The NAPF talks at length about access in the context of digital inclusion amid concerns about poor rural infrastructure and high data costs. However, if the access-activation gap is not addressed locally, the digital divide will widen.
Given the concerns raised about the NAPF, investing now in AI GRC is the only immediate defence against AI-related liability while we wait for policy to be codified. If the “middle road” AI policy is approved, the private sector can anticipate a demanding compliance landscape. Existing legislative obligations will acquire AI-related overlays and be viewed through a sectoral lens for risk-differentiated AI use cases. A precursor and parallel example of what this might look like is the recent set of FSCA/PA joint standards on IT governance and risk management and on cyber security and cyber resilience, which are mandatory compliance instruments for the entire financial sector. The implication for AI in the private sector is increased exposure to significant liability for causing harm and for legislative non-compliance, which would result in business disruption and reputational damage.
Identifying and understanding AI risks
Traditional GRC is a mature discipline that is being disrupted by increasing levels of AI adoption. Before modern generative and autonomous AI systems, the focus was on controls found within frameworks that assumed threats to be largely predictable and static.
In South Africa, international standards from NIST (SP 800-30 and 800-37) and ISO (27005 and 31000) are the cornerstone for King V governance, POPIA and sector-specific overlays such as the joint standards for the finance sector. Under these conditions, traditional GRC has been a matter of identifying threats and vulnerabilities, performing manual scanning and threat modelling, and evaluating likelihood versus impact. Risk mitigation is achieved by applying predefined controls, insuring against residual risk and signing off on accountability.
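The likelihood-versus-impact evaluation at the heart of this traditional model can be sketched in a few lines. The scales, thresholds and treatment responses below are illustrative assumptions in the spirit of an SP 800-30-style assessment, not values prescribed by any of the standards named above.

```python
# Illustrative ordinal scales for a traditional risk assessment.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}
IMPACT = {"minor": 1, "moderate": 2, "major": 3, "severe": 4}

def risk_rating(likelihood: str, impact: str) -> str:
    """Rate a risk as low/medium/high from its likelihood-impact product."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 9:
        return "high"    # treat, or transfer (e.g. insure against it)
    if score >= 4:
        return "medium"  # apply predefined controls
    return "low"         # accept, with accountability signed off

print(risk_rating("likely", "severe"))   # high
print(risk_rating("rare", "moderate"))   # low
```

The limitation the article goes on to describe is visible here: the whole calculation presumes the threat landscape is enumerable and stable enough to score in advance.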
AI is disrupting this model of risk management because generative and autonomous AI systems are fundamentally different from traditional IT systems, which are deterministic and developed using imperative software design. AI models, by contrast, are inherently stochastic: they do not produce the same output every time, which makes their behaviour non-deterministic.
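One way an AI GRC process might confront this non-determinism is to sample a system repeatedly and flag disagreement between runs. The sketch below uses a toy stand-in for a stochastic model; the function names and the repeat-and-compare approach are illustrative assumptions, not a prescribed control.

```python
import random

def stochastic_model(prompt: str, rng: random.Random) -> str:
    """Toy stand-in for a generative model: same input, varying output."""
    return rng.choice(["Answer A", "Answer A", "Answer B"])

def consistency_check(prompt: str, runs: int = 5, seed: int = 0) -> bool:
    """Sample the model several times and report whether all runs agree.

    A deterministic system would always return True here; a stochastic
    one may not, which is exactly the property a GRC control can monitor.
    """
    rng = random.Random(seed)
    outputs = {stochastic_model(prompt, rng) for _ in range(runs)}
    return len(outputs) == 1
```

In practice such a check would feed a risk register rather than block outputs outright: repeated disagreement on the same input is a signal that the use case needs tighter controls or human review.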
The emergent and non-deterministic behaviour of AI systems increases the risks associated with their use, and AI GRC must adapt accordingly. New harms are inherent to generative and autonomous systems (eg, hallucinations, bias, large-scale disinformation, vulnerability to adversarial attacks), and additional methods of managing and mitigating these risks are needed, including applying AI itself to the discipline of AI GRC.
It is time to gear up for a journey down South Africa’s “middle road” towards AI GRC. Start by engaging with an expert GRC team with proven AI experience to assist in preparing AI overlays for your existing GRC processes and get ahead of the bow wave of change. South Africa’s AI future is being written now – make your voice count.
References
- Deloitte Development LLC (2026). State of AI in the Enterprise: The Untapped Edge. [Cited for agentic AI, work redesign and the access-activation gap].
- Department of Communications and Digital Technologies (DCDT) (2024). National Artificial Intelligence Policy Framework (NAPF). Republic of South Africa.
- Department of Communications and Digital Technologies (DCDT) (2023). Artificial Intelligence Planning Discussion Document. Republic of South Africa.
- International Organization for Standardization (ISO). ISO/IEC 27005:2022 – Information security risk management and ISO 31000:2018 – Risk management guidelines.
- Michalsons (2024). The South African AI Act: Latest Developments. [Cited for the legal community’s reception of the NAPF].
- National Institute of Standards and Technology (NIST) (2023). AI Risk Management Framework (AI RMF 1.0).
- National Institute of Standards and Technology (NIST). NIST SP 800-30 Rev. 1: Guide for Conducting Risk Assessments and SP 800-37 Rev. 2: Risk Management Framework for Information Systems and Organizations.
- Republic of South Africa (2013). Protection of Personal Information Act (POPIA), No. 4 of 2013.
- Institute of Directors South Africa (IoDSA) (2024). King V Report on Corporate Governance for South Africa. [Cited as the cornerstone for local IT governance].
The Cyber Security Institute (CSI)
The Cyber Security Institute (CSI) is a well-established information security company which specialises in information security governance, risk and compliance consulting and cyber security training. Its highly regarded security consultancy provides expert leadership in ISO 27001, the NIST CSF, the JS1 and JS2 joint standards and data privacy regulations to the public and private sectors, including AI-related overlays to its existing offerings.
The CSI Academy offers fully accredited, bespoke cyber security training programmes in partnership with universities, and a full range of PECB certifications, including AI risk management, auditing and system implementation certifications.