Artificial intelligence (AI) is entering a new era. The rise of AI agents is changing how we use AI, from generating content and answering questions, to planning tasks, coordinating workflows and making decisions with minimal human involvement.
This shift from passive assistance to autonomous action expands what businesses can achieve, and business leaders are paying attention: 62% of companies in a recent global survey said they’re at least experimenting with AI agents.
This shift does raise fundamental questions about trust, accountability and risk, questions that are especially pressing in African markets, where regulatory clarity around AI agents is still emerging. While organisations experiment with AI agents, the frameworks needed to govern them are still being built.
Getting governance right at this stage is essential: not only to ensure safety and compliance, but to sustain public trust and encourage responsible innovation.
AI governance state of play
Globally, China, the European Union and the United States each represent distinct approaches to AI governance. China’s framework is state-driven and rooted in national sovereignty. The EU’s AI Act centres on fundamental rights and a risk-based regulatory system. The US model prioritises innovation and global standards, relying on flexible, voluntary and sector-specific mechanisms.
Africa has made significant progress over the past two years, with the announcement of the African Union’s Continental AI Strategy. Across Africa, the use of AI in finance, security, public services and digital platforms is growing faster than the development of formal governance systems.
Yet, countries such as South Africa, Rwanda, Kenya, Mauritius, Egypt, Nigeria and Ghana are taking important steps by drafting national strategies and frameworks that draw lessons from these global models, each taking a slightly different path.
South Africa’s forthcoming National AI Policy Framework sets out strong principles around transparency, ethics and human rights, and mirrors elements of the European Union’s AI Act. Kenya is taking a more adaptive approach through regulatory sandboxes, allowing innovators to test AI systems under supervision while broader laws are developed. Nigeria has drafted a comprehensive governance framework anchored in responsible and ethical AI development.
Despite this momentum, the reality is that AI use is growing faster than governance. Traditional governance approaches were designed around relatively static systems: fixed rules, predictable outputs and human decision-makers firmly in control. AI agents change that in a few important ways.
Firstly, they can plan and execute multi-step tasks across multiple systems without continuous human supervision. Their decisions compound over time, so small errors early in a workflow can trigger large downstream effects. And they are often built on general-purpose models trained on data that may not reflect African languages, cultures or socio-economic realities.
This all means that organisations can’t simply rely on generic AI policies or informal oversight. Instead, they need specific mechanisms that acknowledge how agents behave in real environments and how those behaviours can drift over time.
Fundamentals of AI agent governance
African organisations do not need to wait for finalised national legislation to act. Handled well, governance can accelerate AI adoption, not slow it down, and give boards, regulators, customers and employees confidence that AI agents are being used thoughtfully and responsibly.
A practical governance framework for AI agents should help leaders answer four foundational questions clearly and consistently:
- Who owns outcomes?
- How are decisions audited?
- How is fairness protected across diverse populations?
- Where are the boundaries of autonomy?
These questions are simple to state but demanding to implement and require that organisations move beyond one-off ethics statements into concrete design choices, process changes and role definitions.
For example, ‘owning outcomes’ means every agent should have a named human owner who is responsible for approving its use, monitoring its behaviour and responding when things go wrong.
Agents must leave an audit trail. Organisations need logs that show what actions were taken, what data was used, and what recommendations or decisions were produced. This allows internal teams, regulators and affected stakeholders to reconstruct how a decision was made and challenge it when necessary.
Protecting fairness includes testing for bias across gender, race, language and geography as a standard practice, especially when agents affect access to jobs, credit, healthcare or public services. Systems trained on non-African data need additional safeguards and local validation.
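One simple form such testing can take is checking whether favourable outcomes are distributed evenly across groups. The sketch below computes a demographic-parity gap over a list of (group, approved) decisions; the function name and the idea of a single summary gap are illustrative, and any acceptable threshold is a policy choice, not a technical one:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Return the gap between the highest and lowest favourable-outcome
    rates across groups, plus the per-group rates.

    `decisions` is a list of (group, approved) pairs, e.g. from a batch
    of agent-made credit or hiring recommendations. A gap near 0 suggests
    parity on this one metric; it does not rule out other forms of bias.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates
```

Running this routinely over an agent's logged decisions, split by gender, language or region, turns "testing for bias" from a one-off ethics exercise into a standard monitoring practice.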
Balancing innovation and control
Organisations should also acknowledge that not every decision should be fully automated. High-impact decisions may require human approval before execution; medium-risk ones may be monitored via dashboards and alerts; low-risk tasks may run autonomously.
It’s critical that the level of human oversight is explicitly defined for each use case, reviewed regularly and adjusted as risk profiles change.
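The tiering described above can be made concrete in code. The sketch below routes an agent action according to an assumed three-tier policy: high-risk actions are held until a named human approves, medium-risk actions execute but raise an alert, and low-risk actions run autonomously. The tier names and return values are illustrative, not a standard:

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # runs autonomously
    MEDIUM = "medium"  # runs, but is flagged for dashboard monitoring
    HIGH = "high"      # blocked until a human approves

def route_action(tier, action, approved_by=None):
    """Decide how an agent action proceeds under a tiered-oversight policy.

    `approved_by` names the human who signed off a high-risk action;
    without it, the action is held rather than executed.
    """
    if tier is RiskTier.HIGH and approved_by is None:
        return "held_for_approval"
    if tier is RiskTier.MEDIUM:
        return "executed_with_alert"
    return "executed"
```

The key design point is that the tier is assigned per use case and reviewed over time, so the same mechanism can tighten or relax oversight as an agent's risk profile changes.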
Here, technology providers have a crucial role to play. Trustworthy AI is not just a matter of policy. Trust must be embedded into the architecture of the systems that organisations use every day.
That means building products with clear ethical principles, structured review processes for high-risk use cases, and technical safeguards such as role-based access controls, logging, monitoring and red-teaming of models and agents. It means designing interfaces that surface explanations, not just outputs, and that make it simple for users to override or correct AI recommendations.
Enterprise platforms are moving in this direction. SAP, for example, has introduced comprehensive AI ethics policies, dedicated governance bodies and technical controls across its AI portfolio. Its Joule assistant is evolving from a copilot into an orchestrator of multiple agents, but always within defined guardrails and with human oversight at the centre. The principle is simple: humans set objectives and policies; agents help execute them, not replace them.
For African organisations, this combination of governance-by-design in the technology stack and governance-by-policy inside the business is key. It reduces the burden on individual teams and ensures decisions about ethics, compliance and risk are reflected directly in how systems behave.
The 2025 G20 Leaders’ Summit in Johannesburg elevated AI to a top-tier priority in the Leaders’ Declaration, committing G20 members to cooperate on safe, inclusive and human-centric AI for sustainable development, support capacity-building in the Global South, and explore interoperable governance approaches rather than a single global code.
Closing Africa’s governance-innovation gap in this context will require translating these G20 commitments into concrete regional and national actions. Critical to this will be the scaling of regulatory sandboxes and institutional capacity so that regulators can learn alongside innovators; embedding Ubuntu-informed impact assessments and community participation into AI lifecycle governance; and ensuring African voices shape emerging G20 and multilateral AI norms, rather than merely adapting external templates.
This integrated approach can position Africa not only as a responsible adopter of AI, but also as a global leader in human-centred, cooperative technology governance.
By asking the right governance questions early, balancing innovation with control, and demanding that trust mechanisms be built into the tools they deploy, African organisations can shape an AI future that works on their terms. But this emerging technology is evolving exponentially, and governance needs to catch up.