
Measurable AI returns are anchored in clarity and realistic outcomes

Enterprise AI is at a crossroads. While investment is increasing, measurable returns remain elusive.
Johannesburg, 15 Apr 2026
The real issue behind AI’s limited impact lies in its execution. (Image source: 123RF)

Over the past two years, a significant share of AI and generative AI initiatives have either failed to reach production or have not delivered a measurable return on investment (ROI). Despite high enterprise interest (62%) and proof points that clearly demonstrate AI's value, McKinsey has found that only 39% of companies are showing EBIT impact.[1] Companies are integrating the technology and prioritising use cases, but AI is still not delivering the required level of functionality across workflows and processes.

Deloitte’s research agrees: even though more than 91% of companies plan to increase their AI investment in 2026, the returns to date have been uneven.[2] Companies need a better way of extracting value from their AI investments while avoiding the most common pitfalls of AI initiatives – unclear objectives, poor data quality, limited buy-in across the company and poorly defined success metrics.

The real issue behind AI’s limited impact lies in its execution. Many AI projects start without a clearly defined problem statement or a deep understanding of what the business wants to achieve. Business leaders articulate an ambition, often one focused on efficiency or growth, but it isn’t anchored in specific operational constraints. This ambition runs in parallel with IT teams assessing tools and feasibility and, adding to the fragmentation, considerations around risk and compliance are often introduced only further down the line.

Adding to the complexity, the AI market is saturated with overlapping tools and solutions, all making very similar promises, leaving companies unsure where to start.

The key is to approach AI differently. First, define the business result that AI needs to achieve. This includes identifying the exact process or decision point that requires improvement, then establishing measurable success criteria at the outset. This clarity minimises the risk of AI amplifying weaknesses in existing workflows or embedding existing problems even deeper into processes. AI doesn’t resolve structural ambiguity; it tends to make it worse.

Process readiness is a prerequisite as well. Decision points need to be documented, data inputs and outputs clarified, exceptions understood and the right foundations put in place. This is the only way to ensure that AI meaningfully improves performance rather than becoming another underperforming technology that costs the business money.

Another aspect of the process is delivery, which can be challenging given how traditional IT programmes operate. They tend to assume that requirements are stable and outcomes are predictable, but AI introduces additional variables that must be factored in from the outset, such as data quality, integration constraints and user interaction patterns.

Mint Group has developed a way to separate the distinct phases of AI into clear, methodical steps to ensure its implementation delivers meaningful value. Mint’s AI Enablement Framework moves organisations from ambition to accountability by structuring AI delivery across five deliberate phases: Listen, Assess, Apply, Execute and Mature. It starts by grounding every initiative in a clearly defined business outcome, then builds an honest view of data quality, ownership and governance before any development begins.

From there, use cases are prioritised against measurable impact, and delivery follows a staged life cycle that separates proof of concept from proof of value and proof of technology to ensure feasibility, operational improvement and scalability are tested in the right order. Integration into live workflows is treated as essential, not optional, with governance, cost control and security embedded from the outset. By creating a single system of record for AI requests and aligning business, technology and risk teams around shared metrics, Mint ensures that AI is not deployed as an isolated toolset but implemented as an operational capability that delivers traceable, repeatable returns.

When organisations anchor AI in defined business outcomes, strengthen their data and process foundations, and scale only once value is proven inside real workflows, returns become both visible and sustainable.

[1] https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

[2] https://www.deloitte.com/global/en/issues/generative-ai/ai-roi-the-paradox-of-rising-investment-and-elusive-returns.html
