It should come as no surprise that businesses are investing a lot of money in GenAI products and projects. According to a new report from Gartner, global GenAI spend is expected to reach $644bn this year, nearly 77% more than in 2024.

As excited as organisations are to deploy this tech at an enterprise scale, GenAI is still in its infancy. From hallucinations to reasoning challenges, contextual understanding and computational requirements, GenAI is limited, which is why many ambitious projects end up costing more than they should, or simply fail. Another Gartner survey found that, on average, only 48% of AI projects make it into production and at least 30% of GenAI projects are abandoned. One of the main reasons for this shortfall is “unclear” business value. Going from potential to profit isn’t clear-cut, even when 75% of executives rank GenAI as a top-three strategic priority, as seen in BCG’s 2025 AI Radar report.
Data readiness
So what exactly does successful GenAI deployment look like? First, it comes down to data hygiene. McKinsey & Company put it plainly in a recent report: if your data isn’t ready for GenAI, your business isn’t ready for it either. One reason for this is that GenAI doesn’t just look at traditional, structured database systems. It can process unstructured information such as emails, video files, images, audio recordings and social media posts. “[It’s] information that doesn’t follow a fixed format,” says Doug Woolley, general manager and vice president, Dell Technologies South Africa. While GenAI can work with this unstructured information, it still requires proper management and, unfortunately, many organisations only discover they have data challenges after implementing GenAI solutions. “You need to clean up your data before moving forward with any projects,” says Woolley.
Keep AI accountable
“If you’re planning to implement GenAI tools, data needs to be well prepared, easy to manage and accessible in a range of different environments.”
Data quality is a top priority to avoid bottlenecks that can lead to AI project failures, he adds. Woolley’s recommendation is to build robust data infrastructure such as data pipelines, secure storage and tools to integrate data from diverse sources. Dorit Dor, Check Point’s chief technology officer, has similar advice: for an organisation’s data to be useful, “you have to have a data pipeline with semantic understanding”.
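Woolley’s and Dor’s advice can be illustrated with a minimal, entirely hypothetical ingestion step (the names and schema below are this sketch’s assumptions, not anything from Dell or Check Point): normalise records from diverse sources into one consistent shape before any GenAI workload touches them.

```python
# Hypothetical sketch of the "clean up your data first" step: records from
# different unstructured sources are normalised into one schema so that
# downstream stages (embedding, indexing, retrieval) see consistent input.
from dataclasses import dataclass


@dataclass
class Record:
    source: str
    text: str


def normalise(raw: dict, source: str) -> Record:
    # Collapse whitespace and reject empty bodies so bad records fail
    # loudly at ingestion, not silently inside an AI pipeline.
    text = " ".join(str(raw.get("body", "")).split())
    if not text:
        raise ValueError(f"empty record from {source}")
    return Record(source=source, text=text)


email = normalise({"body": "  Quarterly numbers attached.\n"}, "email")
post = normalise({"body": "New branch opening soon!"}, "social")
print(email, post, sep="\n")
```

In a real pipeline each source would have its own parser feeding this common schema; the point is only that validation happens before, not after, the GenAI project starts.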
While building internal GenAI capabilities offers the greatest measure of control, many organisations are turning to external GenAI services for faster implementation and reduced technical overhead.
“Do you choose a giant LLM and be dependent on that company, or do you want to choose an open-source LLM and be dependent on the criteria of open-source companies?” asks Dor. Big players such as OpenAI, Anthropic and Google all offer enterprise-grade versions of their consumer-facing tools, but this convenience comes at a cost. “The more information you give it, the better the analysis you get,” says Ofir Israel, Check Point’s vice president of threat prevention and AI products. There are many useful business applications for GenAI and the more you use it, the more tempted you may be to share more information. “I might trust OpenAI, but who can guarantee that 100% of people [in the workplace] are actually using a trusted GenAI application?”
This is the number one topic businesses speak to him about, he says. They all want to use GenAI applications, but are worried about security. Solving the trust problem requires a combination of visibility, enforcement and user awareness, he adds. Without these things, organisations have two options, he says. “One is to block all GenAI applications…but then you miss out on tremendous sales opportunities. Or allow everything and pray that nothing leaks.”
New risks
For GenAI to be effective in a business environment, leaders need complete observability into how it is being used. “Context matters,” says Israel. Is an employee using GenAI to debug code, or to write an email containing sensitive company information? And what happens when internal source code is pasted into an application like ChatGPT? The traditional application security risks, such as SQL injection, broken access control and supply chain vulnerabilities, are well known, but GenAI introduces new ones, such as LLM jailbreaking and prompt injection. And now there are AI agents that go beyond text generation and give LLMs the power to run API commands, access specific data sources and use tools to complete multi-step tasks. Connecting AI language capabilities to actual workflows and systems could deliver business value, but there’s a catch: each new permission or integration creates potential security risk. “This freedom allows the magic of AI that we all know and want, but wherever there is freedom, an attacker sees a potential attack surface,” says Israel. “New risks need new protection solutions.”
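One common mitigation for the agent risk Israel describes is a deny-by-default tool allowlist. The sketch below is a generic illustration of that idea (the agent and tool names are invented; this is not Check Point’s product or any specific framework): every tool an agent may call is an explicit, auditable entry, so each new permission is a deliberate decision rather than a default.

```python
# Hypothetical deny-by-default gate for agent tool calls. Anything not
# explicitly allowed for a given agent is treated as attack surface and
# refused before it reaches a real system.
ALLOWED_TOOLS = {
    "support_agent": {"search_kb", "draft_reply"},   # no data export, no code exec
    "dev_agent": {"run_tests", "read_repo"},
}


def invoke_tool(agent: str, tool: str, payload: str) -> str:
    allowed = ALLOWED_TOOLS.get(agent, set())
    if tool not in allowed:
        # Unlisted tools fail closed; the denial can also be logged for audit.
        raise PermissionError(f"{agent} may not call {tool}")
    return f"{tool}({payload!r}) executed for {agent}"


print(invoke_tool("support_agent", "search_kb", "password reset steps"))
```

The same check point is where the visibility Israel mentions lives: every call, allowed or denied, passes through one observable chokepoint.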
Is it worth it?
After addressing the “can we do it safely?” question, organisations must tackle the equally complex “is it worth it?” calculus. Is it really possible to measure GenAI’s true business impact? “It depends,” says Hans Zachar, Nutun’s CIO, “on what is the most appropriate use of AI.” Looking at a narrow use case like contact centres, the value is easier to quantify. When GenAI handles simple customer interactions, businesses can directly compare costs between human agents and the AI systems used to automate low-complexity tasks.
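For the narrow contact-centre case Zachar describes, the bot-versus-people sum really is simple arithmetic. The figures below are entirely made up for illustration (the article gives no numbers); only the structure of the comparison matters.

```python
# Hypothetical per-interaction cost comparison for low-complexity
# customer interactions. All numbers are illustrative assumptions.
interactions_per_month = 20_000

human_cost_per_interaction = 2.50   # assumed fully loaded agent cost
bot_cost_per_interaction = 0.40     # assumed API + infrastructure cost

human_total = interactions_per_month * human_cost_per_interaction
bot_total = interactions_per_month * bot_cost_per_interaction

print(f"Humans: {human_total:.2f}, Bot: {bot_total:.2f}, "
      f"Saving: {human_total - bot_total:.2f}")
```

As the next section notes, this one-for-one framing only holds while the use case stays narrow; broad rollouts resist this kind of direct subtraction.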
“There’s a direct one-for-one replacement in that world,” says Zachar. “We have a human being freed up by a bot to do a certain activity. You can calculate what you're spending on bots versus people and see the difference.” But it gets trickier when companies roll out GenAI tools more broadly. How do you measure ROI when employees in different departments use ChatGPT or Copilot? Zachar believes you shouldn’t try to quantify everything and suggests a more holistic approach. “You almost have to push the tools into the environment and let the creativity of the people show you the value.” He adds that the synergy between human intelligence and AI capabilities delivers benefits that don’t fit neatly into traditional measurement frameworks. Sometimes the most valuable outcomes are the hardest to quantify with conventional metrics; it’s a challenge business leaders will need to embrace as GenAI becomes increasingly woven into everyday work.
AT WHAT COST?
Looking beyond the buzz, it's important to ask what the real price tag on GenAI is. As a technology, the promise is that GenAI will revolutionise industries, but you need to carefully evaluate potentially unexpected costs. “Building up AI capabilities is the tip of the iceberg. Ninety percent of the actual work that needs to be done is below the surface,” says Nutun's CIO, Hans Zachar.
Infrastructure demands, for example, are a substantial investment and computational resources can eat into budgets. "One major concern is the computational cost of training and running LLMs, which require significant amounts of hardware acceleration and, ultimately, power," says Johan Robinson, EMEA AI platform lead at Red Hat. What many don't realise is how quickly these costs can spiral: because the transformer architecture behind most of today's LLMs scales quadratically with input length, a tenfold increase in the length of the input can mean a roughly hundredfold increase in the compute cost of processing it. This mathematical reality affects every organisation implementing GenAI, regardless of size or sector. There are also expenses around data acquisition and storage that might not appear in initial calculations. High-quality training data often requires licensing or creation costs that quickly mount up, especially when dealing with specialised industry information.
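The quadratic scaling Robinson’s concern rests on is easy to see in miniature. Self-attention compares every token with every other token, so its cost grows with the square of the input length; the sketch below is a back-of-the-envelope model, not a real FLOP count for any particular LLM.

```python
# Self-attention compares every token against every other token, so the
# work for the attention step alone grows as n squared. These are relative
# units, not real FLOPs.
def attention_cost_units(n_tokens: int) -> int:
    return n_tokens * n_tokens


cost_short = attention_cost_units(1_000)    # e.g. a short prompt
cost_long = attention_cost_units(10_000)    # 10x more input tokens

print(cost_long // cost_short)  # 100 -> 10x the tokens, 100x the attention cost
```

Real systems soften this with techniques such as caching and more efficient attention variants, but the underlying n-squared growth is why long-context workloads are disproportionately expensive.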
Talent is another challenge. "Organisations struggle with competing for and retaining data scientists and AI engineers," says Robinson. With global shortages of AI specialists, businesses should expect to pay premium rates that can be double, or triple, standard IT salaries. “You can look at a direct replacement of salary cost versus AI cost, but then you miss all the capabilities around it that are new and different,” says Zachar, alluding to the fact that there are ongoing operational expenses after implementation which often get forgotten.

System integration isn't a one-time cost – model maintenance and addressing data drift require continued investment as market conditions and data patterns evolve. A report by Deloitte found that for many AI and machine learning systems, maintenance costs over a five-year period can equal or exceed the initial development costs, particularly for systems that require frequent retraining or that operate in dynamic environments. Security and compliance add yet another layer of costs, with regulatory requirements often demanding additional software, regular audits and industry-specific certifications.

The reality is that many companies are now looking at smaller, more focused language models as a practical and cost-effective alternative. Microsoft, for example, recently launched its Phi-3 family of open models that it says can outperform models of the same size and next size up across a number of benchmarks. These small language models don't try to know everything, but excel at specific business tasks while using far less computing power. Organisations are also increasingly adopting a project-based approach to AI implementation. Starting with narrowly focused use cases allows for clearer measurement of returns and better cost management.
* Article first published on brainstorm.itweb.co.za