About
Subscribe
  • Home
  • /
  • TechForum
  • /
  • KRS warns against ‘vibe coding’ as AI development trend gains traction

KRS warns against ‘vibe coding’ as AI development trend gains traction

Cape Town, South Africa, 23 Mar 2026
Industry experts warn that human oversight remains critical to software quality and security. (Image: KRS)
Industry experts warn that human oversight remains critical to software quality and security. (Image: KRS)

As the South African tech industry continues to embrace AI coding tools and integrate them into everyday workflows, a new term has emerged: vibe coding.

Vibe coding refers to the practice of prompting AI tools to generate large volumes of code and deploying it with little to no human review or understanding.

While it promises a faster, simpler way to build software, KRS, a Cape Town-based custom software development house, is urging businesses and developers to take a more critical view.

Lorraine Steyn, CEO and Founder of KRS, says embracing AI-augmented development has given the company clear insight into what works and what doesn’t.

“It’s tempting to lean into the speed AI promises, but vibe coding doesn’t deliver sustainable results in a professional software environment. The real value comes from using these tools with intention and accountability.”

The rise – and risk – of ‘vibe coding’

While vibe coding can create the illusion of speed and efficiency, it introduces significant technical, operational and business risks.

Recent global developments reinforce this concern. In early 2026, reports indicated that engineers at Amazon reviewed their use of generative AI coding tools following service outages that affected thousands of customers.

Internal discussions examined whether AI-assisted code deployments may have contributed to incidents with a “high blast radius”, where failures propagated across multiple systems.

“Even highly mature engineering teams are now questioning how AI-generated code behaves at scale,” says Steyn. “The risk isn’t the tool itself. It’s how it’s used.”

The takeaway is not that AI tools are inherently dangerous, but that automated code generation without strong engineering oversight can introduce risks that scale rapidly in complex systems.

“AI is an incredibly powerful tool, but treating it as a replacement for engineering discipline is where things start to break down,” she adds.

The hidden risks behind the hype

KRS highlights several critical issues associated with vibe coding:

1. The productivity illusion

Developers may feel more productive using AI tools, but research suggests otherwise. One study found developers believed they were 20% faster, when in reality they were 19% slower, a nearly 40% gap between perception and actual output.

“You’re seeing more code being produced, so it feels like progress,” says Steyn. “But if that code isn’t maintainable or correct, you’re actually moving backwards.”

2. Unpredictable errors

AI-generated code has an error rate estimated at 10%-20%, largely due to the non-deterministic nature of these tools (they don’t always produce the same output from the same input).

Common issues include:

  • Mocking functionality instead of implementing it.
  • Fixing a bug in one commit, then undoing it in the next.
  • Randomly changing constants, especially version numbers.
  • Adding duplicate fields or functions.

AI often follows the beginning and end of instructions but skips critical details in between. “These aren’t complex edge-case failures. They’re basic, avoidable mistakes,” Steyn explains. “That’s exactly why human review can’t be optional.”

She continues: “The solution is simple but non-negotiable: review every single line of AI-generated code. No exceptions. If you're not willing to read and understand the code, you shouldn't be deploying it.”

3. Over-engineering and complexity

AI coding tools tend to generate more complex solutions than necessary. They handle edge cases that will never occur, introduce unnecessary abstractions and duplicate code rather than refactor it properly.

This happens because AI models have limited context. They optimise for producing a working answer, not the simplest or most appropriate one for a specific system. Without full visibility of the codebase, business constraints or long-term maintenance needs, the result is often overly complex and fragile.

“AI doesn’t understand your business context or long-term goals,” says Steyn. “It optimises for output, not for simplicity or sustainability, and that’s where problems start to compound.”

4. Security and compliance risks

Perhaps the most concerning aspect of vibe coding is security and data protection. AI tools do not understand your threat model, production environment or legal obligations, and they routinely generate patterns that would fail a proper security review.

They may also recommend packages without verifying whether they meet industry security standards.

Without experienced engineers reviewing against recognised frameworks such as OWASP, teams risk introducing:

  • Broken access control
  • Insecure authentication flows
  • Unsafe data handling
  • Silent leakage of sensitive information

AI-generated code may also violate data protection regulations, including South Africa’s POPIA, by over-collecting data, logging confidential information or ignoring consent and retention principles.

For businesses, this highlights the importance of working with development partners who prioritise security, governance, and long-term maintainability.

“The danger isn’t that the code doesn’t work. It’s that it works while quietly creating regulatory exposure, customer risk and long-term security debt,” Steyn warns.

A long-term career concern

Beyond immediate technical risks, vibe coding use highlights a broader issue: developer skill erosion.

Relying heavily on AI without understanding the underlying code can reduce problem-solving ability and weaken engineering fundamentals over time. This poses a long-term risk to both individuals and teams.

“If developers stop thinking critically about the code they’re shipping, they’re limiting their own growth,” says Steyn.

A better approach: AI-augmented development

Rather than rejecting AI tools, KRS advocates for a more disciplined model: AI-augmented development.

This approach keeps humans firmly in control, using AI to enhance productivity without compromising quality or accountability.

Best practices include:

  • Review every line – go beyond skimming and fully understand what’s been generated.
  • Own critical paths – areas like payment processing, authentication and validation should be handled by experienced engineers.
  • Use AI for repetitive tasks – such as testing, documentation and refactoring.
  • Maintain context awareness – quickly recognise when AI output deviates from the intended outcome.
  • Invest in testing – increased AI use should result in stronger, not weaker, test coverage.

When applied correctly, AI tools can drive meaningful productivity gains. These typically come from faster boilerplate generation, reduced context switching when working with unfamiliar libraries and the ability to prototype and explore ideas more quickly. Teams can also benefit from automated refactoring and code transformations, as well as clearer, more accessible documentation.

Practical guidelines for South African development teams

Based on its experience, KRS outlines the following recommendations:

  • Establish code review standards – AI-generated code should face the same scrutiny as human-written code, if not more. This includes clear commit messages, meaningful tests and documented assumptions.
  • Scale tool usage to task complexity – Match AI involvement to the complexity of the task. Simple tasks can be AI-led with quick review, while complex work should be human-designed and AI-assisted.
  • Implement a three-strike rule – If AI fails to resolve an issue after three attempts, switch to manual debugging. This prevents wasted time and repeated cycles of ineffective outputs.
  • Maintain skills through deliberate practice – Developers should regularly write code without AI support. This is especially important early in their careers to build strong fundamentals.
  • Own the mental model of the code – Developers must maintain a clear understanding of how systems work. AI should support that understanding, not replace it.
  • Document your context – Provide clear guidance for AI tools, including coding standards, architectural decisions, security requirements (especially POPIA) and approved technologies.

AI needs discipline, not blind trust

Vibe coding promises rapid development, increased productivity and reduced effort, but these claims do not hold up in practice.

Even leading engineering teams are reassessing how AI-generated code performs in complex systems. What does work is AI-augmented development: using AI to enhance human capability while maintaining engineering rigour.

For businesses, this makes one thing clear: adopting AI successfully isn’t just about the tools. It’s about the approach. Asking the right questions of development partners is critical to ensuring long-term quality, security and scalability.

“The future of software development isn’t humans or AI. It’s humans with AI,” says Steyn. “But responsibility must always sit with the human.”

KRS has embraced AI tools with a clear understanding of both their potential and limitations – improving productivity without compromising quality.

“The choice isn’t whether to use AI,” Steyn concludes. “It’s how to use it responsibly.”

Share

KRS

KRS is a Cape Town-based software development house specialising in AI-augmented development. The company helps South African businesses build secure, scalable, and maintainable software by combining modern tools with strong engineering practices.

Editorial contacts

Ayesha Bagus
HR Director
ayesha.bagus@krs.co.za