
Unmasking the threat: Navigating the landscape of deepfake audio, video scams


Johannesburg, 28 Nov 2023
Detecting deepfakes remains challenging.

In today's increasingly digital landscape, the rise of deepfake audio and video scams poses a significant threat to individuals, businesses and society at large. These scams leverage generative artificial intelligence, employing deep learning algorithms to create convincingly realistic but entirely fabricated content. The danger lies in the psychological impact of these deepfakes, exploiting a phenomenon known as 'processing fluency', wherein our brains accept the manipulated material as genuine, leading to potential disinformation, manipulation and scams.

Notable incidents of deepfake scams have been on the rise, showcasing the evolution and growing sophistication of these attacks. The 2019 UK deepfake voice attack (source: WSJ - https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402), the 2023 Retool incident (source: KnowBe4 - https://blog.knowbe4.com/deepfake-defenses) and the Tom Hanks deepfake selling a dental plan (source: The Guardian - https://www.theguardian.com/film/2023/oct/02/tom-hanks-dental-ad-ai-version-fake) underscore the urgency for organisations to address the vulnerabilities exposed by deepfakes.

Deepfake scams often augment existing attack forms, such as business e-mail compromise (BEC), by incorporating manipulated audio or video to enhance the credibility of the attack. Deepfake audio is produced with deep learning techniques such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs). Deepfake video typically combines deep neural networks, auto-encoders, generative adversarial networks (GANs), facial recognition, face tracking and computer vision technology.
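To make the auto-encoder idea concrete: many face-swap deepfakes train a shared encoder with one decoder per identity, so both faces are compressed into a common latent space, and the swap amounts to decoding person A's latent code with person B's decoder. The following is a toy numpy sketch of that structure using linear layers on synthetic vectors; all names, dimensions and data are illustrative stand-ins, not a real deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 16, 4, 200                 # input dim, latent dim, samples per identity
face_a = rng.normal(size=(n, d))     # stand-ins for face crops of person A
face_b = rng.normal(size=(n, d))     # stand-ins for face crops of person B

W_enc = rng.normal(scale=0.3, size=(d, k))    # shared encoder
W_dec_a = rng.normal(scale=0.3, size=(k, d))  # decoder specialised to identity A
W_dec_b = rng.normal(scale=0.3, size=(k, d))  # decoder specialised to identity B

def recon_loss(X, W_dec):
    """Mean squared reconstruction error through encoder + one decoder."""
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

lr = 0.05
initial = recon_loss(face_a, W_dec_a) + recon_loss(face_b, W_dec_b)

for _ in range(1000):
    # Alternate gradient steps: each identity trains its own decoder,
    # while both identities train the shared encoder.
    for X, W_dec in ((face_a, W_dec_a), (face_b, W_dec_b)):
        Z = X @ W_enc                          # shared latent representation
        err = Z @ W_dec - X                    # reconstruction error
        grad_dec = Z.T @ err / n
        grad_enc = X.T @ (err @ W_dec.T) / n
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

final = recon_loss(face_a, W_dec_a) + recon_loss(face_b, W_dec_b)

# The "swap": encode A's faces, then decode them with B's decoder,
# producing A's expressions rendered as identity B.
swapped = face_a @ W_enc @ W_dec_b
```

Real systems replace the linear maps with deep convolutional networks and often add a GAN discriminator to sharpen the output, but the swap mechanism is the same: one shared latent space, two identity-specific decoders.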

Recent advancements in generative AI have primarily focused on improving the quality of deepfake content. Accessible technology now enables the creation of realistic video and audio, amplifying the potential impact of deepfake scams.

The realistic nature of deepfakes poses risks of reputational, financial and psychological damage to individuals and businesses. The spread of misinformation through manipulated content can have profound consequences on public opinion, and the technology can be exploited for fraudulent activities such as blackmail and eliciting illicit behaviour.

Deepfake scams can be leveraged to manipulate public and personal opinions, blackmail individuals and even gain unauthorised access to confidential information through real-time injection into video calls.

Detecting deepfakes remains challenging, with current technology lacking real-time capabilities. Preventive measures include implementing safe words during video calls, securing video calls with passwords and educating individuals on recognising potential signs of deepfake content.

The future of deepfake audio and video scams suggests an increasing reliance on this technology by malicious actors. Organisations should invest in anti-deepfake measures, including security awareness training and eventually adopting mitigative technologies as standard practice.

Strategies for combating evolving threats:

▪ Understand the impact of generative AI on your business.

▪ Prepare your organisation against malicious deepfakes today.

▪ Stay informed about rapid developments in deepfake technology.

▪ Allocate budget for anti-deepfake measures, treating it as a standard security investment.

In the face of deepfake audio and video scams, awareness, preparedness and proactive measures are essential for safeguarding against the evolving threat landscape. By understanding the technology, learning from past incidents and staying ahead of emerging trends, individuals and organisations can navigate the challenges posed by deepfake scams and mitigate potential risks.

This press release acknowledges the contribution of Wiering in shedding light on the evolving landscape of deepfake audio and video scams.
