
How cyber attackers are using AI and how businesses should respond

By Joanne Carew, ITWeb Cape-based contributor.
Johannesburg, 02 Jun 2025
Dmitry Berezin, global security solutions expert at Kaspersky.

Just a few years ago, it was fairly easy to spot a deepfake video or an AI-generated voice clip. Think unnatural eye movements, mismatched lighting, blurry features, and stilted rhythm and intonation. But now, detecting AI-generated audio and video isn’t so simple.

Just ask the employee of a multinational firm who was duped into paying $25.6 million to cyber scammers. This individual joined a video conference call where a recreated version of the company’s CFO instructed her to make 15 transactions to five local bank accounts. Everyone on the video call, except for the victim, was fake. The scammers used deepfake technology to turn publicly available video footage into convincing versions of real people.

This news story was shared by Dmitry Berezin, global security solutions expert at Kaspersky, at the ITWeb Security Summit at the Cape Town International Convention Centre (CTICC) on Wednesday. “Today, these videos are so convincing that some of the biggest players in the AI space are talking about putting digital watermarks on everything generated by AI so that it’s possible to detect when something isn’t real. But, as we all know, there are always ways to work around this.”

Berezin gave another example to outline just how much AI has changed the threat landscape. Five years ago, if a hacker compromised the account of a single, low-priority user, it would have taken them several months to move laterally across the business and access more sensitive, proprietary information. But when AI is used, the process is much, much faster. Referencing a recent incident, he explained that cyber criminals used a single compromised account to send around 3 000 highly personalised and convincing emails across the company. And they did this within just 10 minutes.

Given these risks and realities, cybersecurity professionals need to up their game. “In order to fight AI-powered attacks, we develop security solutions that are also enabled by AI,” he said.

This means using AI in cyber triage, to filter false positives, to assist analysts and boost their productivity, and to reduce the time spent on detection and response, among other things. Instead of sifting through threat intelligence reports manually, analysts can use AI to produce a short summary of the main points and highlight where there might be gaps in their environment. This allows businesses to be more proactive and less reactive when it comes to cybersecurity.
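For readers who want a concrete sense of what AI-assisted threat intelligence summarisation might look like in practice, the sketch below is a minimal illustration, not Kaspersky's implementation. It assumes an OpenAI-compatible API and the `openai` Python package; the model name, prompt wording and helper function are hypothetical placeholders.

```python
# Illustrative sketch of AI-assisted triage: feed a threat intelligence report to a
# large language model and ask for a summary plus possible detection gaps.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment


def summarise_threat_report(report_text: str, environment_notes: str) -> str:
    """Return a short summary of a threat report and flag potential gaps
    relative to a description of the defender's environment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a security analyst assistant. Summarise threat "
                        "intelligence concisely and point out detection gaps."},
            {"role": "user",
             "content": f"Report:\n{report_text}\n\nOur environment:\n{environment_notes}\n\n"
                        "Give the main points in five bullet lines, then list any "
                        "techniques we may not currently detect."},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarise_threat_report(
        report_text="Example: phishing campaign abusing OAuth consent grants...",
        environment_notes="Microsoft 365 tenant, EDR on endpoints, no CASB.",
    ))
```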

Today, using AI to enhance your cyber defences isn’t a nice-to-have; it’s a must, he concluded. “Using AI-powered security tools is the only way we can be on the same page as the attackers and, hopefully, even be a few pages ahead of them.”
