Deepfakes: Africa hoodwinked by ‘weapons of mass disruption’

By Staff Writer, ITWeb
Johannesburg, 03 Mar 2023

Awareness of deepfakes and how they work is very low in Africa, putting users at risk, says KnowBe4 Africa.

The Top Risks Report 2023 by the Eurasia Group defined advances in deepfakes and the rapid rise of misinformation as ‘weapons of mass disruption’, notes KnowBe4.

The company says advances in AI and powerful facial recognition and voice synthesis technologies have shifted the boundaries of reality, while the recent explosion of AI-powered tools like ChatGPT and Stable Diffusion has made it harder than ever to distinguish between the work of a human and that of a machine.

A recent survey undertaken by KnowBe4, involving 800 employees in Botswana, Egypt, Mauritius, Kenya and South Africa, showed that 74% of respondents believed a communication via e-mail or direct message, or a photo or video, was true when, in fact, it was a deepfake.

Just over 50% of respondents to the survey said they were aware of deepfakes, while 48% were unsure or had little understanding of what they were. However, most respondents (72%) said they did not believe that every photo or video they saw was genuine, a step in the right direction, even though nearly 30% believed that the camera never lies.

Anna Collard, SVP of content strategy and evangelist at KnowBe4 Africa, says: “It is also important to note that nearly 67% of respondents would trust a message from a friend or legitimate contact on WhatsApp or a direct message, while 43% would trust a video, 42% an e-mail and 39% a voice note. Any one of these could be a fake that the trusted contact did not recognise, or could come from an account that has been hacked.”

Interestingly, when asked if they would believe a video showing an acquaintance in a compromising position, even if this was out of character, most were hesitant to do so, and nearly half (49%) said they would speak to the acquaintance to get to the bottom of it. However, nearly 21% said they would believe it and 17% believed a video was impossible to fake.

The response was similar when respondents were asked the same question about a video showing a high-profile person in a compromising situation, with 50% saying they would give the person the benefit of the doubt and 36% saying they would believe it.

“Apart from abusing these platforms with online bullying, shaming or sexual harassment, such as fake revenge porn, these tools can be used to increase the effectiveness of phishing and business e-mail compromise (BEC) attacks,” says Collard. “These deepfake platforms are capable of creating civil and societal unrest when used to spread mis- or dis-information in political and election campaigns, and remain a dangerous element in modern digital society. This is cause for concern and asks for more awareness and understanding among the public and policymakers.”

Loss to company

“Another concern, other than reputational damage, is loss to the company,” says Collard. “Most respondents would be cautious if they got a voice message or an e-mail asking them to carry out a task they would not normally do (57%), but 20% would follow the instructions without question.”

When people were asked to select clues that they thought would give away a fake, most said that the language, spelling and expressions used would not be in the person’s usual style (72%), or that the request was out of the ordinary or unexpected (63%).

If it was an audio or video file, they believed they could identify a fake based on the words, tone and accent sounding unlike the person being emulated (75%), while 54% said the speech would not flow naturally.

When asked ‘What clues do you think would give away a deepfake in a video?’, respondents selected ‘Their mouth movements do not sync with the audio’ (73%), ‘The request or the message is out of the ordinary; alarm signals should go off’ (49%), ‘Their head movements seem odd’ (49%), ‘The person doesn’t blink’ (46%) and ‘The person’s skin colour looks unnatural’ (44%).

Collard adds, “The problem is deepfake technology has become so sophisticated that most people would find it challenging to spot a fake. Training and awareness have become critical. These are the only tools that will help users to understand the risks and recognise the red flags when it comes to faked photo and video content.”

KnowBe4 advises businesses to invest in training to ensure that employees do not believe everything they see and do not act on unusual instructions without first confirming they are legitimate.
