MIT’s State of AI in Business 2025 report paints a sobering picture. Despite global enterprise investment of between $30 billion and $40 billion in generative AI, around 95% of organisations have yet to see measurable returns. Only about 5% of pilot projects have progressed to production and achieved tangible profit and loss (P&L) impact.
Crucially, the divide is not driven primarily by model quality or regulation – it is about approach.
MIT identifies the same points of friction that South African employers encounter when AI systems begin to manage people: rigid workflows that fail to reflect daily operations, tools that do not learn from context and limited integration that makes results difficult to explain or audit.
The report highlights four key patterns:
- Limited disruption across most sectors, except for technology and media, where industry-level disruption has been significant.
- An enterprise paradox, where large organisations lead in pilot projects but report the lowest conversion rates from pilot to scale.
- An investment bias towards visible, customer-facing projects with easily measured outcomes aligned to board-level KPIs – such as marketing and sales – rather than to back-office functions such as legal, procurement or finance, where the efficiency gains are harder to quantify but often more enduring.
- An implementation advantage for external service providers. Projects executed through external partnerships are roughly twice as likely to reach full deployment as internally developed tools, and they achieve higher employee adoption.
Put simply, if an AI tool cannot adapt, retain context and fit seamlessly into real workflows, it is likely to fail commercially, and even more likely to fail legally when applied to workforce management.
The South African legal context
South Africa’s constitutional and statutory framework places those implementation choices under careful scrutiny. The Constitution protects fair labour practices, privacy and dignity – all of which shape the permissible boundaries of algorithmic monitoring and scoring.
Under the Protection of Personal Information Act, 2013 (POPIA), employers must process employee data lawfully, adhering to the principles of purpose limitation, data minimisation, accuracy and security. They must also provide employees with meaningful avenues to exercise their data subject rights, an element too often overlooked when automated systems generate unfavourable outcomes.
Section 71 of POPIA is particularly relevant. Where an automated decision has legal or similarly significant effects – such as automated shortlisting that excludes candidates, performance scores that trigger sanctions or risk flags that feed into disciplinary processes – the decision may not be based solely on automated processing. Safeguards must exist, including the opportunity for employees to make representations and to obtain sufficient information about the methodologies used in the automated decision-making process to make those representations meaningful.
There are limited exceptions:
- Where the processing produces a favourable outcome for the data subject and appropriate measures protect their legitimate interests.
- Where the automated decision-making is required by law or a code of conduct that itself provides such safeguards.
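To illustrate how the section 71 safeguard might be operationalised, consider the minimal sketch below. It is one possible design under stated assumptions – the class and field names are hypothetical, not a prescribed compliance mechanism. It blocks a solely automated adverse outcome until a named human has reviewed it, and retains the plain-language methodology summary an employee would need to make meaningful representations.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical names and fields throughout: one possible design,
# not a prescribed POPIA section 71 compliance mechanism.

@dataclass
class AutomatedDecision:
    subject_id: str            # pseudonymised employee or candidate reference
    outcome: str               # e.g. "excluded_from_shortlist", "risk_flag"
    adverse: bool              # does the outcome disadvantage the person?
    methodology_summary: str   # plain-language basis for the score or flag,
                               # disclosed so representations can be meaningful
    reviewed_by: str | None = None  # named human reviewer, once assigned

def apply_decision(decision: AutomatedDecision) -> None:
    """Refuse to act on a solely automated adverse outcome."""
    if decision.adverse and decision.reviewed_by is None:
        raise PermissionError(
            "Adverse automated outcome requires documented human review "
            "before it takes effect (cf. POPIA s 71 safeguards)."
        )
    timestamp = datetime.now(timezone.utc).isoformat()
    print(f"{timestamp} applied outcome: {decision.outcome}")
```

The design choice matters: the gate sits before the outcome takes effect, mirroring the requirement that significant decisions not rest solely on automated processing.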
If a tool cannot be explained or challenged, it risks breaching POPIA. As the MIT report indirectly illustrates, opaque systems rarely gain trust, are poorly used and ultimately fail to scale.
Monitoring and interception
Alongside data protection concerns, the Regulation of Interception of Communications and Provision of Communication-Related Information Act, 2002 (RICA) introduces another area of risk. AI-enhanced oversight – such as CCTV analytics, keystroke logging, e-mail scanning or productivity dashboards – must not cross the threshold into unlawful interception.
Compliance requires clear notice to employees, strict purpose limitation, proper system-owner authorisation and secure handling of any derivative analytics, even when the underlying objective (eg, fraud or data-loss prevention) is legitimate.
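One practical way to evidence these requirements is a monitoring register that records, for each tool, the notice given, the documented purpose, the authorising system owner and the retention rules. The sketch below shows what such an entry might look like in code; the field names are illustrative assumptions, not a statutory template.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative field names only: a way of evidencing the elements listed
# above (notice, purpose limitation, authorisation, secure handling).

@dataclass(frozen=True)
class MonitoringRegisterEntry:
    tool: str                      # e.g. "e-mail scanner", "CCTV analytics"
    purpose: str                   # narrow, documented purpose
    notice_given: date             # when employees received clear notice
    authorised_by: str             # system owner who approved the monitoring
    retention_days: int            # how long derivative analytics are kept
    access_roles: tuple[str, ...]  # who may view the outputs

entry = MonitoringRegisterEntry(
    tool="productivity dashboard",
    purpose="data-loss prevention on company systems",
    notice_given=date(2025, 1, 15),
    authorised_by="Head of IT (system owner)",
    retention_days=90,
    access_roles=("IT security", "HR, on escalation only"),
)
```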
Where value is emerging
The MIT findings also indicate where AI is delivering practical value: in narrow, process-specific tasks embedded within daily operations and capable of learning from feedback.
In technology, media and telecommunications environments, early success has emerged in service operations (eg, call summarisation, ticket routing and quality-assurance checks), in document-heavy workflows (such as clause comparison and drafting support with human sign-off), and in back-office controls (including reconciliations and policy checks).
These applications produce returns sooner because they complement rather than replace human judgement – precisely the stance required under POPIA. By contrast, attempts at “end-to-end” HR automation or extensive customisation with weak explainability tend to fail or deliver limited value.
A strategy for algorithmic management implementation
A practical implementation strategy combines MIT’s operational insights with South Africa’s privacy and monitoring laws.
Begin by developing a human-readable AI and monitoring policy that:
- Identifies approved tools.
- Prohibits risky behaviour (for example, uploading client or HR data to personal accounts).
- Sets out clearly when human review is mandatory.
Before deployment, conduct task-level impact assessments classifying each step as:
- Assistive.
- Quality-control.
- Consequential for people.
For consequential tasks, ensure documented human oversight and provide employees with a clear route to challenge outcomes.
Select tools that retain feedback, adapt to organisational workflows and maintain auditable logs – all of which support compliance with POPIA and RICA.
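A simple way to make both steps concrete is to keep the task-level assessments themselves in structured, auditable form. The following sketch – in which the class names, log format and validation rule are illustrative assumptions rather than a prescribed standard – classifies each task as assistive, quality-control or consequential, refuses to record a consequential task without a named reviewer and a challenge route, and appends a timestamped entry to an audit log.

```python
import json
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

# Class names, log format and validation rule are illustrative
# assumptions, not a prescribed standard.

class Impact(Enum):
    ASSISTIVE = "assistive"
    QUALITY_CONTROL = "quality-control"
    CONSEQUENTIAL = "consequential"  # affects people: hiring, pay, discipline

@dataclass
class TaskAssessment:
    task: str                    # e.g. "CV pre-screening", "ticket routing"
    impact: Impact
    human_reviewer: str | None   # mandatory for CONSEQUENTIAL tasks
    challenge_route: str | None  # how an employee contests the outcome

def record(assessment: TaskAssessment, log_path: str = "ai_audit.log") -> None:
    """Validate the assessment, then append a timestamped audit entry."""
    if assessment.impact is Impact.CONSEQUENTIAL and not (
        assessment.human_reviewer and assessment.challenge_route
    ):
        raise ValueError(
            "Consequential tasks require a named reviewer and a challenge route."
        )
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "task": assessment.task,
            "impact": assessment.impact.value,
            "reviewer": assessment.human_reviewer,
            "challenge_route": assessment.challenge_route,
        }) + "\n")
```

An append-only log of this kind is exactly the sort of auditable record that helps demonstrate human oversight under POPIA and lawful, purpose-limited monitoring under RICA.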
Finally, align internal governance, particularly board-level AI committees or steering groups, with the pillars of South Africa’s National AI Policy Framework: explainability, fairness, privacy and data governance, security and human oversight. This ensures that managers have a consistent, defensible vocabulary for describing how algorithmic management is conducted.
Conclusion
The same attributes that drive return on investment – learning, integration and explainability – are the qualities that make algorithmic management lawful, transparent and trusted within the South African legal environment.