
"As AI will enable the art of possibility, so will the dark side of its exploitation," explains Tony Christodoulou, Founder and CIO/CISO at Cyber Dexterity. This view is not speculative. Cyber criminals are using mainstream artificial intelligence (AI) tools to scale and evolve their attacks, for example by fooling people into making damaging mistakes.
IBM's new Cost of a Data Breach Report 2025 reveals that one in six breaches involved AI-driven attacks, and 13% of organisations reported security incidents involving AI models or applications. Attendees at the recent Black Hat conference, the world's premier cyber security event, expressed similar concerns about the increasing use of AI in all types of attacks.
The deepfake danger
While many attacks now incorporate AI, those that target people raise the biggest concerns.
Deepfakes exemplify this darker side, simulating authenticity to manipulate trust, the bedrock of human interaction. People are already regarded as the softest targets for cyber criminals. Stanford University's research from 2024 reveals that human error accounts for 88% of cyber security breaches. Findings from Cyber Talk (2024) state that 90% of data breaches originate from phishing attacks.
It's no surprise that cyber criminals increasingly use AI-powered deepfakes to target people, says Christodoulou.
"Deepfakes exploit inherent human cognitive biases. Our brains naturally rely on instinctive shortcuts, known as heuristics, to quickly process the vast amount of information we encounter daily. While these mental shortcuts are efficient, they are also easily exploited. Cyber criminals who use deepfakes exploit the psychological principle of 'truth-default theory', whereby people tend to believe rather than doubt. Therefore, the more realistic the deepfake, the greater its credibility, which dramatically amplifies the risks to individuals and organisations."
Security training lags
Even though organisations diligently train their staff to spot attack attempts, there is an urgent need for more psychologically informed cyber security strategies, he adds.
"Traditional training programmes are predominantly compliance-orientated and knowledge-based. They often fail to address the deeper behavioural and emotional triggers that underpin decision-making in real threat scenarios. Deepfakes target those triggers, so we need to revisit how we prepare people for them."
He calls this approach "cyber dexterity": human alertness and perception developed through experiential, immersive learning engagements that integrate cognitive, emotional and social elements. These engagements build the intuitive, behaviourally embedded capacity to detect, interpret and respond to dynamic cyber threats.
Cyber dexterity encompasses several important skills, such as emotional intelligence, self-awareness and the psychological agility required to recognise and resist manipulation. These areas relate closely to cyber psychology, the study of how technology affects the human mind. Effective modern cyber security training draws on that field, offering immersive experiences that integrate storytelling, emotional engagement and realistic simulations of cyber threats.
"The goal is to help employees respond adaptively and intuitively. AI threats are particularly insidious because they abuse trust at the most fundamental levels. Employees must cultivate continuous self-awareness of their susceptibility to cognitive biases, emotional triggers and stress-related vulnerabilities."
Trust, not punishment
Cyber dexterity also moves away from the shame and punitive actions sometimes associated with cyber security training. Organisations must prioritise psychological safety by empowering their employees to openly discuss cyber security concerns and report suspicious activities without fear of repercussions.
All these points lead to the same conclusion: modern cyber security habits need to become intuitive, built through training that strengthens people's mindset and instincts. It's no longer just a matter of spotting a strange-looking e-mail. Deepfakes and other AI-enabled attacks are designed to fool our deepest safeguards.
"We instinctively trust in authority figures. If the interaction seems authentic, it can override our scepticism," says Christodoulou. The most infamous example is that of a UK-based engineering company which lost $25 million when cyber criminals used live deepfake technology to impersonate its executives during a virtual meeting. But more attacks are happening, with a deepfake attack attempt every five minutes, according to the Entrust 2025 Identity Fraud report.
Some of the answers to these challenges are technological. However, humans are still the first line of defence. When they receive the right training, their suspicion and scepticism can be more powerful than any digital safeguard.
"Organisations can transform their human firewall from a vulnerability into their greatest asset. In the face of evolving cyber threats, true resilience is achieved not merely through compliance but through fostering a culture of instinctive, empowered and psychologically agile defenders."