What’s the true cost of chasing the latest AI hype? More often than not, it’s a solution in search of a problem.
In this sharp, pragmatic new episode of Tech Unboxed, BBD software engineers Riselle Rawthee and Hyla Fourie pull back the curtain on the tension between flashy AI solutions and the right tool for the job. They challenge the pervasive belief that large language models (LLMs), retrieval-augmented generation (RAG) and agentic systems are always the answer, urging teams to start with the simplest viable path – which might be a clear prompt, a smaller model or even a non-AI approach.
The 'garbage in, garbage out' reality
The conversation drills down into the most critical factor for reliable AI: data realism.
The engineers argue that better data beats more data. RAG, while powerful, doesn't repeal the "garbage in, garbage out" law; it amplifies it. Poorly structured, outdated or noisy data will simply yield wrong answers delivered with extra confidence.
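To make that concrete, below is a minimal sketch of a hygiene pass run before documents ever reach a retriever. The Doc fields, the one-year staleness cut-off and the minimum-length heuristic are illustrative assumptions for this sketch, not rules prescribed in the episode.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Doc:
    doc_id: str
    text: str
    last_updated: date

# Illustrative thresholds, all assumptions: tune them to your own corpus.
MAX_AGE = timedelta(days=365)   # treat anything older than a year as stale
MIN_LENGTH = 40                 # drop fragments too short to be meaningful

def clean_corpus(docs: list[Doc], today: date) -> list[Doc]:
    """Keep only documents that are reasonably fresh and substantive.

    The specific rules matter less than the ordering: filter the corpus
    *before* it is indexed, because retrieval will happily surface stale
    or noisy text and the model will answer from it with confidence.
    """
    kept = []
    for doc in docs:
        if today - doc.last_updated > MAX_AGE:
            continue        # stale: likely to produce outdated answers
        if len(doc.text.strip()) < MIN_LENGTH:
            continue        # noise: stubs, headers, boilerplate fragments
        kept.append(doc)
    return kept

if __name__ == "__main__":
    corpus = [
        Doc("pricing-2021", "Old pricing table, superseded long ago ...", date(2021, 3, 1)),
        Doc("pricing-2025", "Current pricing schedule, effective January 2025 ...", date(2025, 1, 15)),
        Doc("stub", "TODO", date(2025, 2, 1)),
    ]
    for doc in clean_corpus(corpus, today=date(2025, 6, 1)):
        print(doc.doc_id)   # only "pricing-2025" survives the hygiene pass
```

The point of the sketch is the ordering, not the thresholds: cleaning happens upstream of the retriever, so bad data never gets the chance to be confidently cited.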
The speed vs stability trade-off
The episode also tackles the evolving role of the developer in the age of generative AI. Assistive coding tools accelerate learning and compress research cycles – a way of working the guests call “vibe coding”. While this speeds onboarding to new stacks, it poses a risk to long-term maintainability when engineers don't understand the generated code.
The future favours adaptable problem-solvers who can own end-to-end systems. This means:
- Knowing how to constrain a model with retrieval (a minimal sketch follows this list).
- Understanding when to escalate from a prompt to a full pipeline.
- Possessing the discernment to say no to AI entirely when a simpler method (like a database or search index) suffices.
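As a concrete example of the first point, the sketch below constrains a model with retrieval: a toy keyword-overlap retriever stands in for a real embedding search, and the assembled prompt instructs the model to answer only from the retrieved context or admit it doesn't know. The scoring function, the prompt wording and the sample knowledge base are all assumptions made for illustration.

```python
def score(query: str, passage: str) -> int:
    """Toy relevance score: shared lowercase words (a stand-in for embeddings)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k passages that overlap most with the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Constrain the model: answer only from the retrieved context, or say so."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply 'I don't know.'\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    kb = [
        "Support is available weekdays from 08:00 to 17:00 SAST.",
        "The API rate limit is 100 requests per minute per key.",
        "Invoices are issued on the first business day of each month.",
    ]
    prompt = build_grounded_prompt("What is the API rate limit?", kb)
    print(prompt)   # this string is what would be sent to whichever model you use
```

A production system would swap the keyword overlap for proper embedding search and add evaluation, but the constraint itself is just careful prompt construction over trusted context.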
Agentic AI: Autonomy with a warning label
The episode concludes with a sober look at agentic AI. These systems promise smarter reasoning and autonomy by co-ordinating specialised agents (eg, routing math tasks to a calculator), but they come with a high risk of over-engineering and operational cost.
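To picture what a clear tool handoff can look like, here is a minimal routing sketch in the spirit of that calculator example: arithmetic expressions go to a deterministic, auditable calculator tool, and anything else falls back to a stubbed model call. The routing rule and the call_llm stub are assumptions for illustration, not the architecture discussed in the episode.

```python
import ast
import operator

# Deterministic "calculator" tool: evaluates +, -, *, / on numeric literals only.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> float:
    """Safely evaluate a basic arithmetic expression via the AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval"))

def call_llm(prompt: str) -> str:
    # Stub for whichever model or API you actually use; an assumption here.
    return f"[LLM would answer: {prompt!r}]"

def route(task: str) -> str:
    """Hand arithmetic to the calculator; send everything else to the model."""
    try:
        return str(calculator(task))   # clear, auditable tool handoff
    except (ValueError, SyntaxError):
        return call_llm(task)          # fall back to the general model

if __name__ == "__main__":
    print(route("12 * (3 + 4)"))       # -> 84, no model call needed
    print(route("Summarise last week's incident report."))
```

Even in this toy form, the handoff is legible: you can see exactly which tasks bypass the model, what the tool costs to run (effectively nothing) and where a failure would surface.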
The guidance is clear: judge agentic systems by measurable outcomes, operational costs and the clarity of tool handoffs, not by their marketing allure. The shared takeaway is optimistic but grounded: engineers must stay in the driver's seat. AI is a powerful accelerant for good engineering practice, but we must resist the urge to treat it like magic. The path to reliable AI starts with clarity on the objective, data hygiene and a commitment to scaling complexity only as justified by value.
Interested in more insights?
Watch Tech Unboxed: Is your AI hallucinating? From RAG to vibe coding and everything in between – with BBD’s Riselle Rawthee and Hyla Fourie, available now.