Nine seconds. That's how little audio is needed to generate a convincing cloned voice. Copy the voice of a CEO, CFO or some other authority, and you have a definite edge, a way to misdirect anyone subordinate to those people.
Yet, it doesn't even require a synthetic voice. Sometimes, a WhatsApp conversation is enough. This was the unfortunate experience for an employee of The Foschini Group, who was manipulated into supporting the transfer of R22 million to fraudsters.
Blame the game, not the victim
To some observers, the employee was gullible. But Tony Christodoulou sees it differently. We underestimate how easy and common this kind of impersonation attack is.
"I refer to this as a synthetic threat. It's a form of simulated authenticity designed to hijack your trust by default. It's as simple as a WhatsApp photo. With the benefit of hindsight, we look at it and think, 'How could you fall for this?' But at the time, the context created by the impersonator is everything."
Christodoulou, founder of Cyber Dexterity and an expert in cyber psychology, provides a more detailed analysis of the attack. He identifies several potential contributing factors.
First is authority bias, triggered when a subordinate is approached by a senior figure. Attackers also exploit channel intimacy, using communication channels that employees rely on every day, such as WhatsApp. If an organisation uses channels that blur the professional and personal realms, it creates an opening for impersonators.
He also points to how attackers created a sense of urgency and secrecy, even requiring the victim to sign a non-disclosure agreement that further cemented a sense of validity and authority.
"They not only pretended to be the CFO on WhatsApp. They came up with signing an NDA. This is not a shotgun approach. The fact that the victim got a call from someone pretending to be the lawyer, and getting her to sign an NDA – that shows targeted intent and design. The NDA was a smart misdirection to further legitimise the process."
This also suggests that the attackers compiled dossiers on employees, likely with the help of AI, and selected their target deliberately: someone with enough authority to influence a money transfer, but low enough on the organisational chart not to question orders from above.
"I want to add that she later suspected something was wrong. That's how they uncovered the fraud. But in the moment, she was being manipulated at a psychological level that didn't undo her instincts and training. It completely sidestepped them."
A new front in cyber crime
These attributes define what Christodoulou calls synthetic threats, which pose a challenge distinct from other cyber risks. In cyber security, training and awareness are key: make sure people know what the threats are, can spot phishing links and grasp the tactics of online criminals.
But the era of digital convenience and AI-generated synthetic personas creates threats that knowledge doesn't automatically prevent and standard policy design isn't prepared to handle.
"This is one of the most important paradigm shifts in cyber security right now. Traditional wisdom and training focus on more obvious deception. Now we are reasonably good at catching those cues when trained. But evolving cyber attack tactics fundamentally undermine this model because the manipulation is now playing to our perceptual systems as humans that we rely on and trust."
Humans rely on heuristics to make quick decisions, and evolution has endowed us with an innate ability to make snap judgments about voices, faces and relationships. Deepfakes exploit exactly that hardwiring, then erode the instincts we rely on through fear and a sense of urgency.
If a company's culture does not tolerate challenging those above you, you are more likely to follow orders than question them. If policies are designed as rules to follow, not strategic responses to risks, they are easy to subvert. When we believe our gut instincts are above reproach, we are primed for manipulation.
How to counter deepfake attacks
Most security training emphasises knowledge. While that's effective against many cyber attack tactics, it's of little use against deepfake attacks.
Instead, look at behavioural training.
"These types of impersonation don't just fool judgment. They bypass our judgment entirely by satisfying our perceptual trust mechanisms before conscious analysis even begins. Preparation must go further than that by focusing on the behavioural aspect," says Christodoulou.
- Verification should be automatic: Questioning is not insubordination.
- Create a safe-to-challenge culture: No request, however senior its source, should bypass established processes.
- Shift beyond awareness: Knowledge is important, but training should include the behavioural choices during an event and look to develop the “flex” in response when it matters.
- Exposure to threats: Expose people to simulated deepfake attacks (a form of red teaming but to develop psychological resilience) to build their perception and alertness.
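The first two points above amount to a simple rule: verification should be triggered by the request itself, never waived because of who appears to be asking. As a thought experiment, that rule can be sketched in code. Everything here is illustrative: the threshold, channel names and field names are assumptions, not The Foschini Group's actual process.

```python
# Hypothetical sketch of "verification should be automatic": any payment
# request that is large, or that arrives outside an approved workflow, is
# blocked until confirmed out-of-band (e.g. a call to a number from the
# company directory, never the channel the request arrived on).
from dataclasses import dataclass

HIGH_RISK_THRESHOLD = 50_000          # illustrative policy value, in rands
TRUSTED_CHANNELS = {"erp_workflow"}   # requests arriving elsewhere are unverified


@dataclass
class PaymentRequest:
    amount: float
    channel: str                      # e.g. "whatsapp", "email", "erp_workflow"
    requester: str
    callback_confirmed: bool = False  # confirmed via a directory number?


def requires_callback(req: PaymentRequest) -> bool:
    """Note what is absent: seniority. The rule is identical for everyone."""
    return req.channel not in TRUSTED_CHANNELS or req.amount >= HIGH_RISK_THRESHOLD


def approve(req: PaymentRequest) -> bool:
    """Block until out-of-band confirmation, regardless of who is asking."""
    if requires_callback(req) and not req.callback_confirmed:
        return False
    return True
```

In this sketch, a R22 million transfer requested over WhatsApp by "the CFO" is refused automatically; the employee never has to weigh up challenging a superior, because the process makes the challenge for them.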
Deepfake technology is evolving by the day, and there are already thousands of such tools available at very low prices. Where traditional cyber security trains the mind, cyber psychology trains behaviour. By understanding how people respond under pressure and where those responses can be exploited, it then builds the kind of instinctive, automatic defences that hold even when conscious judgment is being bypassed.
"Implement a cyber culture programme at a strategic level," says Christodoulou. "We can't fight this with a policy. You've got to use strategy focused on behaviour. That's what makes the difference. Organisations that invest in this approach don't just have more aware employees; they have a culture that is structurally harder to manipulate."