
Responsible AI poses tough test for regulators

By Christopher Tredger, Technology Portals editor, ITWeb
Johannesburg, 12 Mar 2026
Donovan Byrne, director for Africa at LexisNexis Risk Solutions.

Responsible and transparent deployment of artificial intelligence (AI) is under increasing scrutiny as organisations adopt end-to-end AI decisioning to improve efficiency. Regulators are focused on how decisions are made and whether AI-driven processes can be defended.

This is according to LexisNexis Risk Solutions, which has examined whether businesses in SA are prepared to manage AI deployment and regulatory risks, particularly as work continues on the country’s Draft National AI Policy Framework.

Donovan Byrne, director for Africa at LexisNexis Risk Solutions, says AI is rapidly transforming compliance and fraud prevention, bringing new regulatory considerations.

“AI is finally breaking down the walls between fraud, anti-money laundering and cyber risk. Traditionally, these were separate functions each with their own budgets and databases, which allowed some threats to slip through the cracks. A unified AI platform can monitor for all threats simultaneously across the entire customer journey. This means compliance becomes a growth engine, not a cost centre, and businesses can onboard customers faster because they have more confidence in their own data,” says Byrne.

Technology and cyber security professionals stress that responsible AI deployment must start with governance rather than technology.

Byrne adds: “While AI can spot patterns and reduce false positives, it can’t operate as a black box. Organisations need systems and decision logic that they can explain, audit and, most importantly, retain control of. This is especially true when AI supports high-stakes decisions that could materially impact a customer’s life. Ultimately, those companies that can prove they have proper control over their AI decisioning are the ones that will win the trust of both regulators and the public.”

LexisNexis Risk Solutions warns that deploying unexplainable or autonomous AI models in compliance processes can expose companies to increased regulatory scrutiny. Businesses that fail to implement AI responsibly could face legal penalties, stricter regulation and reputational damage, highlighting the need for trust, accountability and effective risk management.

Byrne notes that governance responsibility ultimately lies with company leadership.

“Under frameworks like King IV, ultimate accountability for AI failure rests with the board of directors, which is legally responsible for the governance of technology and can’t outsource that accountability. While a chief information officer or risk officer might manage the day-to-day operations, the board has a fiduciary duty to oversee the risks. Regulators such as the Financial Sector Conduct Authority and Prudential Authority have clear expectations: boards must provide an effective challenge to these systems. If an algorithm causes harm or shows bias, leaders can’t claim that 'the computer said so'. This isn't a legal defence, it’s a failure of leadership.”

SA’s Draft AI Policy Framework

Meanwhile, the Department of Communications and Digital Technologies (DCDT) continues developing SA’s Draft AI Policy Framework and preparing it for public comment.

Cyber security firm Check Point Software Technologies has called for stronger security integration within the framework, particularly through a “prevention-first” approach to protect the country’s digital environment.

Hendrik de Bruin, head of security consulting for SADC at Check Point Software Technologies, says the government’s 14-pillar framework correctly prioritises ethics, skills development and infrastructure. However, he argues it should also include a risk-based classification system for AI applications, as well as clearer restrictions on how such technologies are deployed and secured.

“The EU AI Act, for example, has four key risk categories, each uniquely dictating the level of regulation, risk and required security. Without these, there is a danger that citizens would be exposed to unforeseen consequences as AI is adopted across a spectrum of applications,” De Bruin says.

Check Point has also proposed several measures to strengthen the framework. These include mandating “prevention-first” security architecture, so AI systems are built with security-by-design to stop complex attacks before they reach critical infrastructure. The company also warns that rapid AI adoption is creating new vulnerabilities, such as prompt injection and “zero-click” exploits, and says organisations should maintain visibility over all AI agents to avoid “shadow AI” risks.

The company further emphasises the importance of data sovereignty and integrity. While the draft framework encourages the use of local datasets to reduce bias, De Bruin says this information must be protected using advanced encryption and zero-trust models to prevent data poisoning or leakage.

In addition, Check Point believes the proposed national AI skills list should include specialised cyber security training. According to De Bruin, general AI literacy alone is insufficient; SA needs professionals capable of defending against AI-driven threats such as phishing and identity theft.

In February 2025, speaking at the ITWeb AI Summit, Dumisani Sondlo, AI policy and governance lead at the DCDT, outlined both the opportunities and governance challenges linked to AI adoption. He noted that regulation struggles to keep pace with technological advances, while also having to contend with social demand, social equity, sustainable development, the digital divide, historical inequities and global leadership.

“If you don’t work out how to govern AI today, you are then playing by other people’s rules. Africa’s voice cannot be ignored when it comes to AI,” said Sondlo.

Byrne believes responsible AI deployment is key to delivering practical benefits beyond industry hype.

“AI should never be a black box. We build our systems around the principles of explainability, auditability and human oversight. AI must be able to explain its decisions to operate effectively in high-stakes environments. By prioritising ethical deployment and bias mitigation, companies can maintain their most valuable asset – the trust of their customers and regulators.”
