A few years ago, 'verifying the sender' largely meant checking an e-mail address. Today, attackers can join a virtual call with a familiar face and a recognisable voice, and your brain will do what it's primed to do: trust by default, decide quickly with imperfect information and respond to unorthodox requests under pressure.
That’s the real shift. Deepfakes are not just a new type of content; they’re a new 'trust by default' delivery mechanism. While most conversations still focus on detection technology, liveness checks, model forensics and virtual meeting acceptance codes, the operational failures we’re seeing in the field are overwhelmingly psychological in nature: exploited trust heuristics, persuasion, coercion and human decision-making shortcuts.
Deepfake-enabled fraud is increasingly being described as operating on an 'industrial' scale, with bespoke scams becoming easier to produce and distribute. This matters because it changes the economics: criminals only need to fool the right person at the right moment with the right triggers.
We are not asking the right questions
Most organisations are still asking: “Can people spot a deepfake?” The better question is: “Can people resist their natural human tendencies when those tendencies are deliberately exploited through simulated authenticity tactics?”
Because the attack path isn’t “fool the eyes”. It’s more insidious than that. It:
1. Triggers truth-default (the brain assumes honesty in normal communication methods). For example, an invitation to a virtual meeting sent from a legitimate e-mail account (which may have been compromised) is inherently accepted and trusted as genuine.
2. Exploits trust heuristics (mental shortcuts that enable individuals to evaluate the credibility of an engagement quickly without the need for extensive, deliberate analysis). Deepfakes amplify the most powerful trust heuristics:
- Familiarity: “I recognise that face/voice.”
- Authority: “This person outranks me; compliance is the safe path.”
- Social proof: “Others on the call seem aligned, so it must be real.”
- Consistency: “This fits the narrative of what I expect to be happening.”
What makes deepfakes so dangerous is that they seem to provide unimpeachable authority. A voice note from your CFO doesn't register as a 'claim'; it registers as 'reality'. This keeps people in what deception researchers call a 'truth-default state', which is our tendency to presume honesty unless something clearly disrupts that assumption.[1]
3. Adds time pressure (urgency)
4. Isolates the target (confidentiality)
5. Forces an irreversible action (transfer, credential reset, malware execution)
Deception works best when it keeps people inside the “truth-default”, where suspicion doesn’t naturally arise until something breaks the frame.[2]
Deepfakes are powerful because they minimise "frame breaks" (small inconsistencies or “out-of-place” cues that disrupt a believable situation and make your brain pause to reassess whether what you’re seeing or hearing is authentic). When the voice sounds right, the face looks right and the meeting feels right, your brain stops searching for alternatives.
From “Can you detect it?” to “Can they move you?”
Most people judge deepfakes by their visual and auditory quality. However, deepfake attacks are rarely won on the absence of pixel imperfections alone. They are won by the narrative that is created and by the intentional design of the desired outcome. The aim is to get a target to take a high-risk action while bypassing normal controls.
Consider the pattern behind several high-profile deepfake incidents:
- Two weeks ago, the Foschini Retail Group (TFG) was scammed by an impostor posing as the CFO, resulting in payments totalling more than R22 million being authorised to fraudulent bank accounts.[3]
- In early 2024, fraudsters used deepfake participants in a video meeting to convince an employee to make multiple transfers totalling about US$25 million, later linked publicly to Arup.[4]
- In 2025, Singapore Police Force described a case where a finance director was drawn into a “senior leadership” video call and instructed to transfer a large sum (nearly US$500 000), with cross-border co-ordination later helping recover funds.[5]
- In February 2026, Google Cloud threat intelligence described North Korea-linked actors using fake video meetings and AI-enabled lures (including alleged deepfakes) to push targets into executing “troubleshooting” commands, turning a trust event into a malware event.[6]
Different sectors, different end goals, same psychology.
Deepfakes are being used to:
- Borrow legitimacy (the face/voice becomes the “credential”)
- Accelerate compliance (urgency + authority)
- Collapse verification (“don’t slow us down with process”)
- Redirect the channel (move you off the safest path)
- Convert intent into action (wire the money, share the data, run the command)
This is why awareness alone is ineffective. Even if people know that deepfakes exist, they can still comply, because cognition under pressure is shaped by emotion, hierarchy and perceived consequences. The real focus therefore needs to be on strengthening resilience in the moment.
What should we be asking instead?
If “deepfake awareness” is the headline, these are the questions that actually change outcomes:
- Where are our trust decisions made at speed? (finance approvals, vendor changes, payroll updates, access grants, incident response)
- Which roles are most exposed to authority pressure? (exec assistants, finance ops, service desk, HR, customer support)
- What actions can be triggered by a single person in a single moment?
- Where do we rely on sensory cues (voice/face) as proof, instead of protocol as proof?
- Which processes have “override culture” baked in? (“Just do it; we’ll fix later.”)
- Do we train people to pause, or do we train them to comply faster?
This shifts the conversation from “synthetic media detection” to trust governance, which is increasingly where agencies like Europol say organised crime is heading as AI lowers the cost and raises the scale of impersonation and scams.[7]
Building behavioural immunity: The future skill is applying trust governance
The goal is not to turn every employee into a deepfake analyst. The goal is to build a behavioural reflex: verify before taking a high-risk action even when the stimulus feels real.
That requires three design shifts:
1. Move verification from optional to automatic
If a request triggers money movement, credential changes or sensitive data release, verification should be default, not discretionary. The policy must be designed so that slowing down is normal, not insubordination.
2. Create a “safe-to-challenge” culture
Deepfakes exploit hierarchy. Countering them requires leaders to normalise challenge:
“If I ever ask for something urgent and sensitive, I expect you to verify me.”
Culture is the permission layer that makes verification socially possible.
3. Train for emotion, not information. Is your measure of success compliance achieved, or behavioural change applied?
Most training teaches recognition (“here’s what a deepfake is”). Deepfake resilience is a performance skill under stress. It needs scenario-based practice that simulates authority, urgency and ambiguity so people learn what it feels like to hold the line without escalating panic or damaging relationships. Socialisation learning is the key!
This matters because we’re already seeing signals of scale: Entrust’s reporting, for example, highlights both the rapid growth in deepfake activity and the frequency of attempts in 2024. The FBI has noted thousands of AI-related complaints reaching IC3 in 2025, spanning multiple scam categories, including AI-enabled impersonation scams (fake profiles plus voice/video clones), pressure-to-pay scams (pushing victims to send money via gift cards or crypto-currency) and online-contact scams (someone the victim has only met online or over the phone trying to obtain sensitive personal information).[8]
The solution: Practical behavioural controls that can be enabled by trust governance
- Two-person integrity for high-risk actions: No single individual can execute certain transfers or changes without a second approver who verifies independently.
- Out-of-band verification rituals: A short, standard protocol (call-back to known numbers, verified corporate channel, pre-agreed executive code phrases for high-risk requests).
- Friction by design: Deliberate “speed bumps” in the processes attackers try to rush through (cool-off timers, mandatory checklists, confirmation prompts).
- Language that breaks the spell: Teach scripts that pause without confrontation: “I’m going to verify this through our standard channel and come right back.”
- Red-Team the psychology: Test not only whether people click links, but whether they’ll bypass process when pressured by authority and realism.
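To make this concrete, the sketch below (in Python, purely for illustration) shows one way the first three controls could be encoded as a gate in a payment workflow. Everything in it is an assumption made for the example: the currency threshold, the 30-minute cool-off, the requirement for an independent second approver and a completed call-back on a known number. The point is not the specific checks but the design principle: the path of least resistance already includes verification, so pausing becomes the system's behaviour rather than an individual act of courage.

```python
# Minimal sketch of a "friction by design" gate for high-risk requests.
# All names, thresholds and rules are illustrative assumptions,
# not a reference implementation of any specific product or policy.

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

HIGH_RISK_THRESHOLD = 50_000          # assumed threshold for "high risk"
COOL_OFF = timedelta(minutes=30)      # assumed mandatory cool-off before execution


@dataclass
class PaymentRequest:
    requester: str                    # who asked for the transfer (e.g. "the CFO" on a call)
    amount: float
    created_at: datetime
    out_of_band_verified: bool = False                 # call-back on a known number completed?
    approvals: set[str] = field(default_factory=set)   # identities of approvers


def can_execute(req: PaymentRequest, executor: str, now: datetime) -> tuple[bool, str]:
    """Return (allowed, reason). A single person can never push a
    high-risk payment through on the strength of a voice or face alone."""
    if req.amount < HIGH_RISK_THRESHOLD:
        return True, "below high-risk threshold"

    # Two-person integrity: at least one independent approver,
    # who is neither the executor nor the original requester.
    independent = {a for a in req.approvals if a not in (executor, req.requester)}
    if not independent:
        return False, "needs an independent second approver"

    # Out-of-band verification ritual: the channel is never the credential.
    if not req.out_of_band_verified:
        return False, "needs call-back verification on a known number"

    # Friction by design: a deliberate speed bump against urgency pressure.
    if now - req.created_at < COOL_OFF:
        return False, "cool-off period has not elapsed"

    return True, "all controls satisfied"


if __name__ == "__main__":
    req = PaymentRequest(requester="cfo@example.com", amount=2_000_000,
                         created_at=datetime.now(timezone.utc))
    print(can_execute(req, executor="finance.ops@example.com",
                      now=datetime.now(timezone.utc)))
    # -> (False, 'needs an independent second approver')
```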
Conclusion
Deepfakes are not a technology problem with a psychology side-effect. They are a psychology problem delivered through technology.
This is why the strategic response must involve investing in cyber culture and trust governance. Culture is what makes it socially acceptable to pause, challenge and verify without fear of being seen as insubordinate. Trust governance turns verification from an optional 'best practice' into a default operating model where identity is never assumed, channels are never treated as credentials and high-risk actions cannot be completed by one person under pressure.
If we continue to treat deepfakes as a detection challenge, we’ll continue to measure the wrong outcomes, such as how many people can identify a fake clip. The simpler metric that matters is how many people can resist a manipulated moment and follow the existing process without deviation.
In an era of synthetic trust, organisations that build behavioural immunity will be the most resilient. This involves establishing verification habits that withstand pressure, fostering cultures that encourage challenge and designing processes that make the safe choice the easy choice.
Author: Antonios (Tony) Christodoulou
Founder and CEO, Cyber Dexterity | Adjunct Faculty, GIBS Business School (Gordon Institute of Business Science) | PhD candidate in Cyberpsychology at Capitol Technology University, US | Former CIO for a global Fortune 500 company, American Tower Corporation.
[1] Levine, ‘Truth-Default Theory (TDT)’.
[2] Levine, ‘Truth-Default Theory (TDT)’.
[8] FBI, ‘FBI Warns of Increasing Threat of Cyber Criminals Utilizing Artificial Intelligence’.