Artificial intelligence has captured the imagination of businesses worldwide, but building real-world AI applications is far more complex than simply plugging in a model. That’s because the difficult parts of getting it done are unsexy and frequently overlooked.
That’s according to Cape Town-based TrueMark Senior Solutions Architect Jonathan Leibbrandt, who says attention to the foundations of security, compliance and architecture is an investment in future success. “AI has buzzed since the release of OpenAI’s models, and the ease with which it can be used delivers something of a paradox,” he says. “It appears so easy to set up a new AI solution that we see plenty of examples of rushing in without a clear purpose.”
Given the promises of productivity gains and the elimination of ‘busy work’, that’s hardly surprising. But while AI is shiny and new (in its current incarnation, at least; the technology goes back a good 50 years), some of its problems are familiar ones for the technology industry. Which is to say: in some cases, AI is a solution looking for a problem.
“Businesses want AI to elevate their operations but often have zero idea of where or how to integrate it,” Leibbrandt confirms. “And the bottom line is that if it isn’t solving a defined and quantified problem, then it’s not necessary.”
More than that, he points out that in most cases, AI isn’t the answer but rather a portion of the solution. “When you’re dealing with enterprise environments, change and adoption of any technology is often the bigger part of the challenge. Easily established prototypes created with a few clever prompts are often starkly different when it comes to production and operational environments.”
This is where brass tacks matter: is your AI trustworthy in the real world? Is it secure? Compliant with industry and government regulations? “Prompt injection exploits, compliance requirements like data residency and retention, and the inability to trace your AI end-to-end and understand what it is doing and why, make it unusable in regulated industries,” Leibbrandt stresses.
What seems like an AI feature – the ability to generate useful outputs – quickly starts looking like a bug. “It’s powerful, but unpredictable, and that necessitates careful control.”
Continuing, Leibbrandt warns that many underestimate the risks of exposing company data to AI models. “Often those asking for AI aren’t the ones who understand the risks. Before invoking a model, controls need to be in place. Guardrails, like blocking user input so the model doesn’t see sensitive data, must be baked in, architected from the start. Yes, it slows down the delivery pipeline, and time to value, but a few extra months now is a no-brainer compared to the risks.”
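The guardrail Leibbrandt describes can be as simple as screening user input before it ever reaches a model. The sketch below is a hypothetical illustration only: the patterns, placeholder labels and the `guarded_prompt` boundary are assumptions for demonstration, not any specific product’s API.

```python
import re

# Hypothetical pre-invocation guardrail: redact obviously sensitive
# patterns from user input so the model never sees the raw values.
# The patterns below are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SA_ID": re.compile(r"\b\d{13}\b"),  # South African ID number format
}

def redact(text: str) -> str:
    """Replace sensitive matches with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def guarded_prompt(user_input: str) -> str:
    # In a real system this would wrap the actual model call;
    # here it simply returns what the model would be allowed to see.
    return redact(user_input)

print(guarded_prompt("My ID is 8001015009087, mail jane@example.com"))
```

In production, such checks would sit alongside output filtering, audit logging and policy controls; the point is that they are architected in before the model is invoked, not bolted on afterwards.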
He notes that different vertical industries impose unique constraints that shape the architecture more than the AI model itself.
“Industry-specific knowledge is essential. Finance demands traceability and risk management. Public sector requires transparency. Healthcare must protect data. Retail is latency-sensitive and often requires substantial scalability to cope with uneven demand. These constraints and factors shape the architecture more than the AI model and it has to be built around the context.”
He says the pace of AI advancement combines with the rate of change within businesses and industry to make today’s solution potentially unworkable or financially unviable tomorrow. Appropriately designed and configured AI systems should therefore have optimisation as part of the deal. “Context windows grow, accuracy weakens and costs rise. You need to reduce unnecessary token usage, tighten context windows and use intelligent caching. Not everything has to go back to the model; for example, thanking the AI for its help uses up tokens and drives costs. Do that for 500 000 users and it gets expensive.”
He highlights the importance of retrieval tuning, monitoring accuracy with humans in the loop and tracking ROI. “Spending R200 000 a month on AI without knowing the results and the value it delivers means flying blind. Know what the AI is doing, what it is delivering, how it is saving time or money.”
Ultimately, Leibbrandt frames AI as a tool for solving problems, not a magic solution. “When a customer wants to use AI, the first question is always ‘what problem are we trying to solve?’ Sometimes automation is a better answer. AI doesn’t have to be the centrepiece, but it can be part of the solution. Don’t chase hype, chase the best tool for the job.”