Beware of deepfakes as scammers exploit AI tools

By Staff Writer, ITWeb
Johannesburg, 09 Apr 2024
Malicious use of deepfake technology represents a growing threat to companies.

It’s becoming increasingly easy for threat actors to use deepfakes in their attacks due to widespread availability of AI tools, warns cyber security firm Kaspersky.

Bethwel Opil, enterprise client lead at Kaspersky in Africa, says, “While the time and effort to create these attacks often outweigh their potential ‘rewards’, Kaspersky warns that companies and consumers across Africa must still be aware that deepfakes will likely become more of a concern in the future. The potential for malicious use when it comes to deepfakes is clear.

“From blackmailing to perpetrating financial fraud and spreading misinformation via social media, the potential knock-on effects may be significant. Invariably, cyber criminals are still looking for cheaper and quicker methods to propagate their campaigns. However, we anticipate an increase in targeted attacks using deepfakes, especially against influential people and high net worth individuals or organisations in the coming years that will justify the time and effort it takes attackers to create deepfakes.”

On the darknet

Kaspersky research has found deepfake creation tools and services available on darknet marketplaces. These services offer GenAI video creation for a variety of purposes, including fraud, blackmail and the theft of confidential data. Kaspersky experts estimate the price of one minute of deepfake video at around $300.

Human behaviour, a lack of digital literacy and the inability to differentiate fake from legitimate material all add pressure to an already difficult situation.

According to the 2023 Kaspersky Business Digitisation Survey, which gathered input from 2 000 respondents across the Middle East, Turkey, and Africa region, 51% of employees believed they could differentiate between a deepfake and a real image. However, in a test, only 25% were actually able to distinguish a real image from an AI-generated one.

This puts organisations at risk given how employees are often the primary targets of phishing and other social engineering attacks, says Kaspersky.


For example, cyber criminals can create a fake video of a CEO requesting a wire transfer or authorising a payment, which can be used to steal corporate funds. They can also create compromising videos or images of individuals and use them to extort money or information.

Opil adds: “Despite the technology for creating high-quality deepfakes not being widely available yet, one of the most likely use cases that will come from this is generating voices in real-time to impersonate someone. For example, a finance worker at a multinational firm was recently tricked into transferring $25 million to fraudsters after deepfake technology was used to pose as the company’s chief financial officer in a video conference call. Africa is not immune to this threat, and it’s important to remember that deepfakes are a threat not only to businesses but also to individual users: they spread misinformation, are used for scams or to impersonate someone without consent, and are a growing cyber threat to be protected from.”

The company recommends strengthening the human firewall by educating employees about deepfakes, how they work, and the risks they pose. This involves ongoing awareness and education efforts to teach employees how to identify deepfakes.