Bad decisions cost global businesses an estimated $112 billion a year in lost market value, and artificial intelligence (AI) on its own will not change that reality unless organisations fundamentally rethink how they make choices.
That was the central message from Dr Mark Nasila, chief data and analytics officer in the chief risk office at FNB South Africa, speaking at the ITWeb Data Insights Summit held last week in Johannesburg.
Nasila warned that while AI has moved from hype to reality at breathtaking speed, many organisations are failing to extract real value because they focus on technology rather than decision-making. “Good decisions help companies succeed. Bad decisions are extremely costly,” he said.
Nasila said organisations are operating in an increasingly complex environment shaped by geopolitical uncertainty, cyber risk, climate risk, new regulation and rapidly rising digital customer expectations.
“Technology has made customers far more demanding,” he noted. “It’s no longer enough to digitise processes. You have to understand people, context and expectations, while still being efficient and profitable.”
At the same time, investors are asking tough questions about the return on massive AI investments. “They’re saying: show me the money,” Nasila said, adding that despite billions being spent on generative AI, evidence of value creation remains limited.
He cited research showing that only a small percentage of organisations have demonstrated measurable value from AI, either through revenue growth or cost reduction.
According to Nasila, the low success rate has little to do with the capabilities of AI itself.
“Decisions about creating value from AI do not depend on technology alone. The biggest gaps are change management, process redesign and how people work with AI,” he said.
He cautioned against what he described as a “Dunning-Kruger effect” that followed the launch of generative AI tools, with enthusiasm quickly turning non-experts into self-proclaimed AI specialists.
“Generative AI predicts the next word very well, but it does not magically fix broken processes. That’s why decision intelligence as a discipline has become critical,” added Nasila.
He positioned decision intelligence as the framework that brings together managerial science, decision theory, data science and social science to guide better choices.
He also challenged the popular phrase “data-driven”.
“I don’t like the term data-driven. People and strategy drive organisations. Data should inform those strategies, not replace judgment,” he said.
He argued that leaders should move beyond reporting on what happened in the past and focus on understanding why things happen and how future outcomes can be shaped.
Nasila warned that leaving AI unchecked could lead to serious social and economic harm, particularly in a country like SA.
“If AI falls into the wrong hands, the damage can be significant. There has to be a balance between financial performance and what happens to people.”
He stressed that AI is not inherently biased. “Technology exposes our weaknesses. If you want to fix what goes wrong with AI, you need to fix the human decisions behind it.”
On jobs, Nasila said the debate should not focus on whether AI will replace people, but on how work is redesigned. Routine and optimisable tasks are increasingly automated, while human roles shift towards empathy, creativity, judgment and complex decision-making.
Nasila concluded by urging organisations to stop forcing AI into old ways of working.
“Real AI value is realised when processes change, not when old processes are simply digitised. You either iterate slowly, or you break things down to first principles and redesign them for a new reality.”
In a world where most future data may be AI-generated and traditional models no longer reflect lived realities, Nasila said organisations that rethink how they make decisions – ethically, intelligently and with people at the centre – will be the ones that survive.