
No limits: Dark AI on the offensive in cyber security wars

By Tracy Burrows, ITWeb contributor.
Johannesburg, 07 May 2025

Threat actors are mastering AI tools that have no built-in limitations, using them to identify vulnerabilities faster and launch more effective attacks. To stay ahead of dark AI, organisations need to change certain operational processes, expand awareness programmes and training, and harness advanced AI for defence.

This is according to Dmitry Berezin, global security solutions expert at Kaspersky, who was speaking ahead of the annual ITWeb Security Summit.

Berezin says: “Attackers have mastered AI. According to some research, mentions of AI on the dark web doubled last year, and our Digital Footprint Intelligence division is also seeing an increase. In 2023, the Kaspersky Digital Footprint Intelligence service discovered nearly 3 000 posts on the dark web discussing the use of ChatGPT for illegal purposes or tools that rely on AI technologies, plus a further 3 000 posts about stolen ChatGPT accounts and services offering their automated creation.”

Cyber criminals are using solutions such as WormGPT, XXXGPT and FraudGPT – language models that lack ChatGPT’s restrictions and offer additional functionality, Kaspersky says.

“Attackers are looking for these solutions to automate their activities and improve their tool stack. For the development of backdoors or ransomware, for example, they need more sophisticated tools that evade typical cyber security solutions and don’t trigger any alerts or incidents. That’s why they also take an AI co-development approach, using compromised AIs sold on dark web markets, which have no limitations. This AI can be used to create ransomware or other malware,” he says.

Dmitry Berezin

With AI, criminals can now carry out phishing attacks at scale, sending personalised e-mails to thousands of employees. Even if only 10% fall victim to the social engineering, hundreds of employees could be compromised.

Threat actors are also using AI to generate images, audio and video to commit crimes. “Some of them are stealing open source AI models to carry out fake kidnappings for ransom, creating realistic audio or video of people being held hostage to convince their families to pay,” he says.

Berezin adds: “AI is so powerful these days that nobody knows how it will evolve over the next couple of years. Just a few years ago, pictures and videos generated by AI were easily recognisable as fake, but now they are very convincing. Imagine your ‘boss’ calls you and, during the video call, asks you to make a transaction or to send confidential data to a partner. It appears to be his face and his voice – how will you know you are not talking to the real person?”

Berezin says mitigating the risks posed by AI will require more stringent security protocols. “For example, in the financial industry, organisations might consider new processes for authorising transactions. SMS and voice are no longer reliable sources of verification, so we need to extend security by asking for specific keywords, or for something only that person knows – perhaps random numbers assigned when the business account was created. Two-factor authentication with physical tokens also helps, because a physical device remains difficult to circumvent. Awareness programmes are also crucial for all employees.

“In IT, we need to use AI-powered cyber security solutions to fight AI-powered intruders. We should use AI in our cyber security tools to identify anomalies in the behaviour of people or of information security systems, and to identify vulnerabilities. Kaspersky Managed Detection and Response uses AI extensively for the initial analysis, prioritisation and classification of incidents, to decrease the workload on our team members. Almost one third of incidents are addressed automatically by AI, without human intervention. We keep an eye on a huge number of endpoints globally, so we need to use ML and AI as much as possible.”
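As an illustration of the kind of transaction check Berezin describes, the sketch below combines a pre-agreed keyword with a time-based one-time code (computed in software here as a stand-in for a physical token). It is a minimal, hypothetical example: the secret, the keyword and the function names are assumptions, not Kaspersky's or any bank's actual process.

```python
# Minimal sketch (illustrative assumption, not a real product's implementation):
# a transaction is approved only if the requester supplies both the pre-agreed
# keyword and a valid RFC 6238 one-time code.
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    counter = int(time.time() // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def authorise_transaction(keyword: str, otp: str, *,
                          expected_keyword: str, token_secret: bytes) -> bool:
    """Approve only if the shared keyword and the one-time code both match."""
    keyword_ok = hmac.compare_digest(keyword.strip().lower(),
                                     expected_keyword.strip().lower())
    otp_ok = hmac.compare_digest(otp, totp(token_secret))
    return keyword_ok and otp_ok


if __name__ == "__main__":
    SECRET = b"per-account-secret-provisioned-at-onboarding"  # assumption
    EXPECTED_KEYWORD = "blue heron 4417"                      # assumption
    print(authorise_transaction("blue heron 4417", totp(SECRET),
                                expected_keyword=EXPECTED_KEYWORD,
                                token_secret=SECRET))         # True
```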
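The anomaly-driven triage Berezin mentions can be pictured with a toy example: score each event against a per-user baseline, close clearly benign ones automatically and escalate the rest to analysts. This is a deliberately simplified sketch, not Kaspersky MDR's actual logic; the feature (hourly login counts) and the threshold are assumptions.

```python
# Toy anomaly-based triage: z-score an observation against a user's baseline,
# auto-close low scores, escalate high scores to a human analyst.
from statistics import mean, stdev


def anomaly_score(history: list[float], observed: float) -> float:
    """Z-score of the observed value against the recent baseline."""
    mu, sigma = mean(history), stdev(history)
    return 0.0 if sigma == 0 else abs(observed - mu) / sigma


def triage(history: list[float], observed: float, threshold: float = 3.0) -> str:
    """Auto-close low-score events; hand high-score ones to an analyst."""
    return ("escalate-to-analyst"
            if anomaly_score(history, observed) > threshold
            else "auto-close")


if __name__ == "__main__":
    logins_per_hour = [2, 3, 2, 4, 3, 2, 3]   # a user's normal behaviour
    print(triage(logins_per_hour, 3))          # auto-close
    print(triage(logins_per_hour, 40))         # escalate-to-analyst
```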

Berezin notes: “Our Kaspersky Threat Intelligence Portal is also powered by AI: you can simply insert an IP address, click, and it shows all the information about that IP address. Is it related to an APT group? If yes, which APT group is behind it? What indicators of compromise do they leave as traces that you can double-check within your infrastructure? It provides full security reports with action steps, and it’s all fully automated. We’re constantly improving this and other AI-powered tools, such as our XDR solution, which offers accelerated threat detection, automated response and real-time visibility.”
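The workflow he outlines – submit an IP address, get back any APT association and indicators of compromise to check in your own environment – looks roughly like the sketch below. The endpoint, authentication header and response fields are hypothetical placeholders, not the real Kaspersky Threat Intelligence Portal API.

```python
# Hypothetical threat-intelligence lookup: the URL, header and JSON fields are
# illustrative assumptions, not a real vendor API.
import json
import urllib.parse
import urllib.request

API_URL = "https://ti.example.com/api/v1/ip"   # hypothetical endpoint
API_KEY = "your-api-key"                       # hypothetical credential


def lookup_ip(ip: str) -> dict:
    """Ask the (hypothetical) service what it knows about an IP address."""
    req = urllib.request.Request(
        f"{API_URL}?{urllib.parse.urlencode({'ip': ip})}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    report = lookup_ip("203.0.113.7")          # TEST-NET address, for illustration
    print("APT group:", report.get("apt_group", "none known"))
    for ioc in report.get("indicators_of_compromise", []):
        print("check in your infrastructure:", ioc)
```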

Berezin will speak on how cyber attackers are using AI and how businesses should respond at the ITWeb Security Summit 2025, in Cape Town, on 28 May, and at the Sandton Convention Centre, in Johannesburg, on 3 June. Kaspersky will also demonstrate its AI-enabled threat intelligence and XDR solutions, AI protection for industrial networks and AI-powered automobile security solutions.

For more information and to register, go to https://www.itweb.co.za/event/itweb-security-summit-cpt-2025/ or https://www.itweb.co.za/event/itweb-security-summit-2025/.
