Karl Fischer, CTO of Obsidian Systems.
There is a pattern I keep seeing in boardrooms and project meetings. The question is framed as: “How much will AI cost us?” The better question is: “What will happen if we do not control how we use it?”
AI adoption across South African organisations is accelerating. Teams are experimenting with agents, copilots, assistants and embedded AI features across productivity tools, development environments and service platforms. The enthusiasm is understandable. The productivity gains can be real. But the hidden costs surface quickly once experimentation turns into operational reality.
The first surprise is onboarding. AI is not a single tool with a single interface. Different agents interact differently. Some plug into existing platforms. Others require API integrations, policy reviews or governance approvals. People often underestimate the time required to integrate AI safely into production environments. There are access decisions to make. What data can the AI reach? What systems can it query? Who is accountable for its output?
Those are not licensing questions but operational ones.
Beyond the initial roll-out, costs can multiply quickly. A team procures one agent for documentation. Another adopts a separate tool for code generation. A third introduces a specialised AI for analytics. Suddenly, one person is using multiple agents, each with its own subscription model, usage limits and integration footprint. Usage thresholds are exceeded. API limits are triggered. Infrastructure costs rise because automated queries are running continuously in the background.
That is where AI sprawl becomes a financial risk.
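The sprawl described above can be made concrete with back-of-the-envelope arithmetic. The figures below are hypothetical illustrations, not vendor quotes:

```python
# Back-of-the-envelope sketch of per-user AI sprawl costs.
# All prices and usage figures are hypothetical assumptions.

MONTHLY_SEATS = {
    "documentation_agent": 30.0,   # assumed USD per user per month
    "code_generation": 25.0,
    "analytics_agent": 40.0,
}

API_OVERAGE = 18.0  # assumed monthly overage once usage limits are exceeded

def monthly_cost_per_user(seats: dict[str, float], overage: float) -> float:
    """Sum of overlapping subscriptions plus usage overage for one person."""
    return sum(seats.values()) + overage

per_user = monthly_cost_per_user(MONTHLY_SEATS, API_OVERAGE)
print(f"Per user, per month: ${per_user:.2f}")
print(f"For a 50-person team, per year: ${per_user * 50 * 12:,.2f}")
```

Even with modest assumed prices, three overlapping agents plus overage charges compound quickly once multiplied across a team and a year.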
The most common budgeting mistake I see is assuming AI is a fixed line item. It is not. AI is a moving target. New capabilities appear. Vendors merge features into core platforms. Standalone tools become redundant. In some cases, costs fall as functionality consolidates. In others, they increase because teams layer multiple agents across the organisation without a unifying strategy.
The result is unpredictability. Finance teams struggle to model the long-term cost profile. Technology teams struggle to justify spending when outcomes are vaguely defined.
This leads directly to the next problem: unclear return on investment.
Many AI initiatives begin with the language of transformation. Very few begin with defined outcomes. If the objective is “add AI to our service offering”, that is not measurable. If the objective is “reduce proposal turnaround time by 30%”, that is.
Some projects fail because the promised feature is not mature enough for production use. Others fall victim to scope creep: AI can do something close to what you need, but not exactly what you need, so the project drifts. The only way to prevent that drift is to invest time upfront in defining what success looks like.
When AI works, it shows up in productivity and confidence. Engineers deliver faster. Support teams resolve tickets more consistently. Analysts handle more complex scenarios with fewer errors. But there is an important qualifier. AI must augment expertise, not replace it. If users become entirely dependent on AI outputs without subject-matter validation, the risk increases. The strongest outcomes occur when AI responses are reinforced by someone who deeply understands the domain.
Control is essential. One practical mechanism is to ring-fence AI access through a central gateway, such as an MCP gateway. Think of it as a toll gate. All agents pass through it. You can see which agent is connecting to which system, and who initiated that request. Without that visibility, you cannot manage cost or risk effectively.
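The toll-gate idea can be sketched in a few lines: every agent request passes through one choke point that records which agent is calling which system, and who initiated it. This is an illustrative sketch only, not a real MCP gateway implementation; all names and fields are assumptions.

```python
# Minimal sketch of a "toll gate" audit layer for agent traffic.
# Every crossing is recorded before the request is forwarded, so cost
# and risk questions ("which agent touched which system, and for whom?")
# become answerable from one log.

import datetime

class AgentGateway:
    def __init__(self) -> None:
        self.audit_log: list[dict] = []

    def forward(self, agent: str, target_system: str, initiated_by: str) -> str:
        # Record the crossing before anything reaches the target system.
        self.audit_log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent,
            "target": target_system,
            "initiated_by": initiated_by,
        })
        # A real deployment would dispatch to the target system here;
        # this sketch only acknowledges the routed request.
        return f"routed {agent} -> {target_system}"

gateway = AgentGateway()
gateway.forward("docs-agent", "crm", "k.fischer")
gateway.forward("code-agent", "git", "dev-team")

# Every crossing is now attributable: agent, system, and initiator.
for entry in gateway.audit_log:
    print(entry["agent"], "->", entry["target"], "by", entry["initiated_by"])
```

The design point is that attribution lives in one place: without the central log, each agent's activity is only visible inside its own tool.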
Data quality is another overlooked factor. Organisations assume AI will magically fix fragmented data landscapes. It will not. If your data is inconsistent, duplicated or poorly structured, AI will reflect those weaknesses in its output. Clean data is not optional. It is foundational.
So, what does AI that pays for itself look like? In practice, it often means augmenting existing teams rather than expanding headcount. I have seen engineering teams deliver at a higher throughput without increasing staff. That said, AI does not automatically recoup its cost. It requires deep integration into daily workflows. If it is used only 5% or 10% of the time, the numbers will not justify the spend.
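The break-even intuition behind that 5% or 10% figure can be sketched as simple arithmetic. Every number below is a hypothetical assumption chosen for illustration:

```python
# Hedged break-even sketch: at what utilisation does an AI seat pay
# for itself? All figures are hypothetical assumptions, not benchmarks.

SEAT_COST = 150.0            # assumed monthly AI tooling cost per person
HOURLY_RATE = 50.0           # assumed loaded cost of an hour of staff time
HOURS_PER_MONTH = 160.0
TIME_SAVED_WHEN_USED = 0.25  # assumed: AI saves 25% of time on tasks where it is used

def monthly_saving(utilisation: float) -> float:
    """Value of time saved, given the fraction of work where AI is actually used."""
    return HOURS_PER_MONTH * utilisation * TIME_SAVED_WHEN_USED * HOURLY_RATE

# Under these assumptions, low utilisation loses money;
# deeper integration into daily workflows flips the sign.
for utilisation in (0.05, 0.10, 0.50):
    net = monthly_saving(utilisation) - SEAT_COST
    print(f"utilisation {utilisation:.0%}: net {net:+.2f} per month")
```

The exact numbers matter less than the shape of the curve: the spend is fixed, the saving scales with how often the tool is actually in the workflow.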
AI is not inherently expensive. Lack of strategy is. Organisations that treat AI as a governed capability, define measurable outcomes and integrate it fully into operational processes are the ones that will see sustained returns.
The rest will see a rising subscription bill and very little to show for it.