
AI-driven identity fraud supercharges emergency scam threat

By Christopher Tredger, Technology Portals editor, ITWeb
Johannesburg, 16 Jan 2026
Richard Ford, group CTO, Integrity360.

AI-driven identity fraud is not a futuristic sci-fi plot – it is a current risk management issue that belongs in mainstream compliance discussions alongside POPIA and FICA, says Richard Ford, group CTO at Integrity360.

Ford says fraud is no longer just about financial theft; it’s about identity theft in the most visceral sense.

“Emergency scams are not new. Criminals have phoned victims for decades, usually targeting older individuals, pretending to be a grandchild in trouble. They rely on fear and urgency to bypass critical thinking. The difference now is realism. Scammers no longer need to be vague. They can scrape audio from a TikTok video, an Instagram story or a Facebook clip and use inexpensive – often free – AI tools to generate a clone of a voice. They use this to invent accidents, arrests or hijackings,” says Ford.

Herein lies a glaring security paradox, he adds. “For years, the security industry has pushed for a move from passwords to biometrics. We trust our faces and voices to unlock our banking apps, verify our identity with SARS and secure our phones. Biometrics are safer than passwords because you cannot forget them and they are unique to you.

“But what happens when the 'key' to your digital life can be copied? If an attacker can clone one or more of the signals you use to prove you are you, the foundation of trust can begin to crumble.”

Risk to business

Ford says while the family emergency scam grabs headlines, the risk to South African businesses is arguably higher.

“Business leaders are highly visible. We appear in webinars, speak on podcasts and post video updates on LinkedIn. This provides hours of high-quality training data for an attacker. Consider a finance administrator at a mid-sized logistics firm. They receive a WhatsApp voice note from the financial director. It sounds exactly like them – the same cadence, the same tone. The message asks for an urgent payment to a new supplier to secure stock before the weekend.

“It is not a request that triggers a cyber security protocol; it triggers a subservient reflex. The employee wants to be helpful. They recognise the boss’s voice. They make the payment. Many of us are familiar with the recent extreme example from Hong Kong, where an employee was tricked into paying over R400 million to fraudsters after attending a video call in which every other participant was a deepfake recreation of their colleagues. But South African SMEs do not need to lose millions to be crippled; a diversion of R50 000 is enough to ruin cashflow for the month.”

Tech alone is not enough

The Integrity360 executive warns that technology cannot be the only defence; the most effective control is often completely non-technical: the ‘pause and verify’ rule.

“For families, this means agreeing on a protocol while everyone is safe and calm. Agree on a 'safe word' or a specific question that only a real family member would know the answer to. If a panicked call comes in, ask the question. If the voice on the other end cannot answer, hang up and call them back on their saved number.

“For organisations, the principle is identical. No financial transaction should ever be approved based solely on a single channel of communication. If the instruction comes via WhatsApp voice note, verify it via a phone call or an e-mail. If it comes via e-mail, verify it via a call to a known internal extension. The cost of pausing to verify an urgent request will almost always be lower than the cost of acting on invented, fraudulent urgency.”
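The dual-channel rule Ford describes reduces to a simple invariant: no payment instruction is actionable until it has been confirmed on at least two independent channels. The short Python sketch below is purely illustrative; the class, method and channel names are hypothetical assumptions, not any vendor's API.

    # Minimal sketch of the 'pause and verify' rule for payments.
    # All names are hypothetical illustrations, not a product API.
    from dataclasses import dataclass, field

    @dataclass
    class PaymentInstruction:
        beneficiary: str
        amount_zar: float
        confirmed_channels: set[str] = field(default_factory=set)

        def confirm(self, channel: str) -> None:
            """Record a confirmation received on an independent channel."""
            self.confirmed_channels.add(channel)

        def is_approvable(self) -> bool:
            # Never approve on a single channel of communication.
            return len(self.confirmed_channels) >= 2

    instruction = PaymentInstruction("New Supplier (Pty) Ltd", 50_000.00)
    instruction.confirm("whatsapp_voice_note")
    assert not instruction.is_approvable()  # a lone voice note is never enough
    instruction.confirm("callback_to_known_extension")
    assert instruction.is_approvable()      # verified out of band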

Ford asserts that AI-driven identity fraud needs to be included in the mainstream compliance discussion involving regulatory adherence and risk mitigation.

“If we accept that seeing is not believing and hearing is not enough, we can adapt. By normalising the act of verifying – by pausing before we pay – we make life significantly harder for criminals who rely on our reflex to trust what we hear.”

Highly relevant

AI expert and founder of AIforBusiness.net Johan Steyn says the issue is highly relevant and accelerating quickly.

“AI-driven identity fraud has increased exponentially over the past 12-18 months – not because fraudsters have become 'smarter', but because the tools have become cheap, accessible and convincing. Voice cloning, face swaps, synthetic identity profiles and 'emergency' social engineering scripts now allow criminals to scale impersonation in ways that are much harder for staff and customers to detect.”

Steyn agrees that from a FICA and POPIA perspective, this issue can no longer sit on the sidelines as a “fraud problem” alone – it belongs in mainstream compliance and governance conversations.

“It directly impacts customer due diligence, consent and data integrity, and the duty to safeguard personal information. In practice, it means that 'knowing your customer' must now include protecting against synthetic and manipulated identity signals.”

Steyn advises business leaders to treat identity as a risk system, not a once-off verification step; a short illustrative sketch of this idea follows the list below. This includes:

  • Strengthening step-up verification for higher-risk actions.
  • Improving liveness and anti-spoofing controls.
  • Using device, behavioural and transaction signals to detect anomalies.
  • Introducing out-of-band confirmations for sensitive changes (eg, beneficiary changes, account resets, high-value payments).
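Taken together, these controls amount to one policy: escalate verification as the sensitivity of the action and the riskiness of the signals rise. The Python sketch below shows the shape of such a policy; the signal names, weights and thresholds are assumptions for illustration only, not a reference implementation.

    # Illustrative sketch of 'identity as a risk system'.
    # Signal names, weights and thresholds are assumed for illustration.
    HIGH_RISK_ACTIONS = {"beneficiary_change", "account_reset", "high_value_payment"}

    def risk_score(signals: dict[str, bool]) -> int:
        """Combine device, behavioural and liveness signals into one score."""
        weights = {
            "new_device": 2,              # device signal
            "unusual_hour": 1,            # behavioural signal
            "liveness_check_failed": 3,   # anti-spoofing signal
        }
        return sum(weight for name, weight in weights.items() if signals.get(name))

    def required_verification(action: str, signals: dict[str, bool]) -> str:
        """Step up verification as action sensitivity and risk rise."""
        if action in HIGH_RISK_ACTIONS or risk_score(signals) >= 3:
            return "out_of_band_confirmation"  # eg, callback on a saved number
        if risk_score(signals) >= 1:
            return "step_up_challenge"         # eg, re-authenticate with MFA
        return "standard_session"

    print(required_verification("beneficiary_change", {"new_device": True}))
    # -> out_of_band_confirmation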

“Just as importantly, train frontline teams on deepfake and voice clone red flags, test controls against synthetic attacks and ensure third-party vendors (especially onboarding and contact centre tech) are held to clear security and audit expectations.”
