AI and generative AI have been transformative for the cyber crime industry, giving attackers easy access to a wealth of tools that enable them to move faster and more effectively. At the same time, legitimate organisations using generative AI could be making themselves vulnerable and putting their sensitive data at risk.
This is the warning from Henry Denner, Managing Director at SecureConekt, a partner of CipherWave.
Denner says the best-known example of how AI is changing the cyber security landscape is its effective use in social engineering. “AI-composed phishing e-mails and deepfake photos, video and voice clones have been responsible for a number of high-profile social engineering and business e-mail compromise attacks, which resulted in serious losses,” he says. “AI is becoming better and better at drafting convincing, targeted phishing content that is virtually impossible for recipients to discern as fake.”
Trevane Paul, CEO of Conekt Holdings, says: “We’re seeing a new kind of threat emerge – one that scales at machine speed and adapts on the fly. Traditional perimeter defences aren’t enough. You can’t firewall your way out of an AI-led threat landscape.”
But AI isn’t just writing e-mails and creating deepfakes – it’s also being deployed to seek out potential victims and scan targets’ online presence to make phishing e-mails more believable, he says. “AI can also find and exploit vulnerabilities autonomously, and is being used to code malware and build hacking tools. Attackers with limited technical skills can now develop sophisticated malware with the help of dark web AI that is not restricted from committing crime the way ChatGPT is.”
In addition to being weaponised by attackers, the use of AI also brings risk into the organisation, Denner says. “The biggest risk is where sensitive data is fed into AI systems without proper governance and security in place,” he says. “Less common, but also a risk, is prompt injections that result in data exposure or deliberately incorrect AI outputs.”
Paul explains: “AI is not just a cyber weapon. It’s a mirror. It reflects the strength – or weakness – of your organisation’s internal controls, data discipline and security maturity. If you don’t know what data is being used, by whom, and where it’s going, AI will expose you faster than any hacker can.”
Denner says traditional cyber security measures are struggling to keep pace with the speed and adaptability of AI-driven attacks.
“The non-deterministic nature of AI models complicates prediction and prevention efforts. Time-to-exploit is collapsing: guardrail-free AI can now compress weeks of reconnaissance and malicious software coding into hours, giving security teams far less breathing space to patch or mitigate,” he says. “Equally concerning is that for the price of a business lunch, criminals with minimal skills can obtain expert, step-by-step instructions on how to commit cyber crime – complete with the tools to do it.”
Denner emphasises that defensive AI must match offensive AI. “Organisations must invest in AI-driven monitoring, code review and threat-hunting tools. Their AI-based defensive systems must match the sophistication and agility of offensive AI tools.”
Paul concludes: “This is no longer about big IT budgets or enterprise-grade tools. It’s about mindset. We need to stop reacting to breaches and start building intelligent, responsive security ecosystems that evolve as fast as the threat actors do. That’s the mission we’re on at Conekt and SecureConekt.”
This is the third in a series of press releases by CipherWave and SecureConekt on AI and cyber security. The next press release will cover how AI is taking on AI in a next-gen cyber security battlefield.