MIT’s State of AI in Business 2025 report should temper any rush to restructure workforces around artificial intelligence. Headcount reductions based on assumptions of immediate, sweeping efficiency gains reflect poor risk management. Only around 5% of enterprise AI pilots globally progress to production and deliver measurable profit and loss (P&L) impact, with value typically clustering in narrow, incremental use cases that improve over time.
MIT’s findings reveal:
- Large enterprises often lead in pilot projects but lag in scaling successful ones.
- Externally developed solutions are twice as likely to be deployed as internal builds.
- The most consistent early savings occur in back-office functions – through reduced outsourcing or agency spend – rather than in the customer-facing areas where bold efficiency claims are usually made.
While workforce impact is visible in fields such as customer support, software engineering and administrative services, the transition is gradual and selective, not abrupt.
Legal framework: Adaptation and fairness
South African labour law accommodates technology-driven change while maintaining a strong emphasis on fairness. Section 185 of the Labour Relations Act, 1995 (LRA) protects employees from unfair dismissal, and sections 189 and 189A establish the process to be followed for dismissals based on operational requirements.
The Constitutional Court’s decision in NUMSA v Aveng Trident Steel (2021) captures the balance between competitiveness and fairness:
“In an ever-changing economic climate characterised by increasing global competition… generally, businesses that adapt quickly will survive and prosper. Those that do not will decline and fail.”
The Court confirmed that “operational requirements” encompass not only downsizing but also restructuring the way existing work is performed. Any such restructuring must comply with both the procedural and substantive fairness obligations of section 189.
Once automation or restructuring that may result in dismissals is contemplated, the section 189 process is triggered. This requires, among other things, early and genuine consultation with employees and unions, disclosure of relevant information (including the automation plan and supporting evidence), serious exploration of alternatives such as redeployment, reskilling or natural attrition, and fair, objective selection criteria where retrenchment proves unavoidable.
The 2025 Code of Good Practice: Dismissal reinforces that dismissal is a measure of last resort and that consultation must be context-sensitive when technological change drives the process.
Staging implementation: Pilot | Prove | Scale
MIT’s findings support a staged approach that aligns neatly with the LRA’s consultation framework: pilot, prove, then scale.
Before relying on automation to justify structural change, employers should demonstrate that a tool delivers measurable business benefit – such as improved quality, consistency or production rates – within a live workflow.
AI systems should be integrated into existing processes, capable of learning from user feedback and retaining contextual memory. Where an employer cannot substantiate that a system provides genuine structural, operational or financial value, it will be difficult to justify any resulting job losses, and the risk of an unfair dismissal finding rises accordingly.
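To make that evidential threshold concrete, the sketch below shows the kind of before-and-after comparison a pilot record might capture. All metrics, figures and variable names here are hypothetical, intended only to illustrate the sort of evidence an employer would need to table during consultation:

```python
# Minimal sketch with hypothetical data: comparing a back-office process's
# weekly error rates before and during an AI pilot, so that any claimed
# efficiency gain is documented rather than assumed.
from statistics import mean, stdev

baseline_error_pct = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]  # six weeks pre-pilot
pilot_error_pct = [3.2, 2.9, 3.1, 3.4, 3.0, 2.8]      # six weeks with AI assistance

improvement = mean(baseline_error_pct) - mean(pilot_error_pct)
print(f"Mean error rate: {mean(baseline_error_pct):.2f}% -> "
      f"{mean(pilot_error_pct):.2f}% ({improvement:.2f} point improvement)")

# Variability matters: a noisy improvement is weak evidence of structural value.
print(f"Std dev: baseline {stdev(baseline_error_pct):.2f}, "
      f"pilot {stdev(pilot_error_pct):.2f}")
```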
Dismissal should always remain the last option. Where efficiencies do arise, employers should explore how freed capacity or potentially redundant roles might be repurposed. This creates scope for redeployment discussions and alternative proposals during consultation.
New or transformed positions often emerge from technological change – for example, in AI oversight, data governance or digital services. Employers should identify such developments early to inform a proactive upskilling and reskilling plan. Retraining existing employees, who already understand the business context, may offer a credible and cost-effective alternative to retrenchment, while maintaining institutional knowledge and trust.
A practical three-track programme
A structured implementation plan can run along three parallel tracks:
1. Evidence: Select narrow, high-volume processes. Prioritise solutions that learn from feedback and integrate seamlessly. External partnerships may deploy faster and achieve higher adoption rates, as MIT’s data suggests. Measure performance before and after deployment and scale only what demonstrably works.
2. Governance (policy-aligned): Align internal governance with South Africa’s National AI Policy Framework – particularly its pillars of explainability, fairness, privacy and data governance, security and human oversight. Map where section 71 of POPIA requires a human decision-maker to remain “in the loop” for consequential employment actions such as promotions, remuneration adjustments or disciplinary outcomes (see the sketch after this list).
3. People (LRA-compliant): Initiate section 189 consultations early. Disclose the underlying business case and pilot data. Table redeployment, reskilling and attrition options. Record how selection criteria link to the stated operational rationale and maintain clear minutes and decision trails throughout.
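On the governance track, the human-in-the-loop mapping can be made operational in systems that route AI recommendations. The sketch below is illustrative only: the decision categories, the finalise function and the default of requiring review for any unmapped decision type are assumptions for demonstration, and the actual scope of section 71 of POPIA in a given business is a matter for legal review.

```python
# Illustrative sketch (hypothetical categories): a guard that refuses to
# finalise consequential employment decisions without a named human reviewer.
from dataclasses import dataclass
from typing import Optional

# Assumed mapping for demonstration; the real mapping should come from a
# legal review of POPIA section 71 against the organisation's own processes.
HUMAN_REVIEW_REQUIRED = {
    "promotion": True,
    "remuneration_adjustment": True,
    "disciplinary_outcome": True,
    "roster_suggestion": False,  # advisory only, low consequence
}

@dataclass
class Decision:
    decision_type: str
    ai_recommendation: str
    human_reviewer: Optional[str] = None  # set when a person signs off

def finalise(decision: Decision) -> str:
    """Block consequential decisions that lack human sign-off; unmapped
    decision types default to requiring review."""
    if (HUMAN_REVIEW_REQUIRED.get(decision.decision_type, True)
            and decision.human_reviewer is None):
        raise PermissionError(
            f"'{decision.decision_type}' requires a human reviewer on record "
            "before it takes effect."
        )
    return f"{decision.decision_type}: {decision.ai_recommendation} (final)"

# Usage: an automated promotion recommendation only becomes final once a
# reviewer is recorded on the decision trail.
print(finalise(Decision("promotion", "recommend promotion",
                        human_reviewer="J. Naidoo")))
```

Defaulting unmapped decision types to human review keeps the failure mode conservative, which mirrors the minutes-and-decision-trail discipline the people track requires.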
Conclusion
Handled correctly, automation is neither a licence for upheaval nor a reason to resist change. The Aveng principle affirms the legal space for adaptation; the LRA prescribes the process for doing so fairly; MIT’s data helps sequence implementation sensibly; and South Africa’s National AI Policy Framework offers a guiding public-policy compass that supports both innovation and employee trust.
Organisations that align these four dimensions will modernise more effectively – and with far fewer legal and reputational scars – than those that rely on brittle tools and rushed restructuring.