Tackling the scourge of deepfakes

All technologies have a good and a bad side, but the manner in which deepfakes are being abused to sow discord and anger makes it more vital than ever to apply analytics to identify them early.

Johannesburg, 06 Jul 2020
Ye Liu, senior machine learning developer, SAS

The modern era is undoubtedly the information age, with knowledge essentially available on demand, at any time and from anywhere. At the same time, however, it is also the age of information disorder, with false information and fake news found at every turn.

This has been exacerbated in 2020, with the COVID-19 pandemic acting as a catalyst for a surge of fake news, made even worse by modern tools that are capable of generating hyper-realistic data, or deepfakes. For the non-expert, these deepfakes can be indistinguishable from real data, so fighting this misinformation has never been more crucial.

According to Ye Liu, senior machine learning developer at SAS, a deepfake is generally some form of synthetic media where a person in an existing image or video is replaced with someone else, in much the same way as people have used Photoshop to create fake images. The main difference, she explains, is that the deepfake process is automated and the results are significantly better.

“One of the key technologies here is known as generative adversarial networks (GANs), which have huge potential for both good and evil. Because GANs can be trained to mimic any distribution of data, in the wrong hands they can be used for fraud. However, focused correctly, GANs can also help us to create worlds in any domain – from images to music and speech,” she says.
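To make the point about mimicking a distribution concrete, below is a deliberately tiny, hypothetical sketch of the adversarial training loop behind GANs. It fits a one-dimensional Gaussian rather than images, uses an affine generator and a logistic discriminator, and estimates gradients by finite differences; real GANs use deep networks and automatic differentiation, so none of these model choices reflect any actual SAS implementation.

```python
# Toy adversarial training loop: a generator maps noise to samples that try
# to mimic a target distribution (N(3, 1)), while a discriminator learns to
# tell real samples from generated ones. All models here are hypothetical
# stand-ins for the deep networks a real GAN would use.
import math
import random

random.seed(0)

def generator(z, theta):
    # Affine map from noise to a sample: g(z) = a*z + b.
    a, b = theta
    return a * z + b

def discriminator(x, phi):
    # Logistic score: estimated probability that x is "real".
    w, c = phi
    return 1.0 / (1.0 + math.exp(-(w * x + c)))

def d_loss(phi, theta, reals, noise):
    # Discriminator wants high scores on real data, low scores on fakes.
    fakes = [generator(z, theta) for z in noise]
    return -(sum(math.log(discriminator(x, phi) + 1e-9) for x in reals)
             + sum(math.log(1 - discriminator(x, phi) + 1e-9) for x in fakes)) / len(reals)

def g_loss(theta, phi, noise):
    # Generator wants its samples to be scored as real (non-saturating loss).
    fakes = [generator(z, theta) for z in noise]
    return -sum(math.log(discriminator(x, phi) + 1e-9) for x in fakes) / len(noise)

def step(loss, params, *args, lr=0.05, eps=1e-4):
    # One finite-difference gradient-descent step on a parameter list.
    grads = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        grads.append((loss(bumped, *args) - loss(params, *args)) / eps)
    return [p - lr * g for p, g in zip(params, grads)]

# Alternate discriminator and generator updates.
theta, phi = [1.0, 0.0], [0.1, 0.0]
for _ in range(300):
    reals = [random.gauss(3, 1) for _ in range(64)]
    noise = [random.gauss(0, 1) for _ in range(64)]
    phi = step(d_loss, phi, theta, reals, noise)   # train discriminator
    theta = step(g_loss, theta, phi, noise)        # train generator

fakes = [generator(random.gauss(0, 1), theta) for _ in range(1000)]
print(sum(fakes) / len(fakes))  # mean of generated samples
```

The generated samples start centred at zero and are pushed toward the real distribution purely by the adversarial pressure of the discriminator, which is the mechanism Liu describes.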

Liu adds that even the principle behind deepfakes themselves can be put to good use, helping to anonymise people in videos, while keeping their personality intact. She explains that in such an instance, the deepfake can provide an avatar – instead of merely digitally blocking out the person’s face – that maintains facial expressions, allowing these to be properly seen, while still keeping their identity secret.

Fijoy Vadakkumpadan, R&D manager at SAS, adds that in the wrong hands, however, deepfakes can help create fake news at an entirely new level. Nonetheless, he points out, if you know what to look for, there are telltale signs that indicate if something is actually a deepfake.

“Remember that most often, deepfakes are used in order to engender strong emotions within the reader or viewer, such as anger or fear. Analytical tools are able to search for the signs that indicate something is amiss, and as these indicators are uncovered, these can be presented to the readers as they consume the news.

“Ideally, these analytics will inspire the readers to filter this information through their own critical thinking, or to at least investigate the subject further before posting it on social media, thereby helping to curb the spread of disinformation.”

A simple example of how this technology is applied can be seen with less reputable online retailers, where sellers sometimes use existing videos of famous people to generate fake testimonials, in order to help them sell more products.

“However, it appears that the most likely approach with deepfakes is for these sources of misinformation to use stock photo images to draw the reader in. By taking the images in a given news article and performing a reverse search for them in a large database of photos, the artificial intelligence (AI) is quickly able to tell whether the image is a stock photo. These analytics are equally good at identifying titles that do not match the content well, quickly flagging them as clickbait headlines and thus more likely to be fake.”
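The reverse-search idea described above can be sketched with perceptual hashing: reduce each image to a tiny fingerprint and compare fingerprints by Hamming distance. Everything here — the hash function, the database, the threshold and the 8x8 "images" — is a hypothetical illustration of the general technique, not the actual pipeline the article refers to; production systems use far more robust image features.

```python
# Minimal reverse-image-search sketch using an "average hash": each image is
# reduced to a 64-bit fingerprint, and near-duplicates are found by comparing
# fingerprints bit by bit. Images are modelled as 8x8 grids of grayscale values.

def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grid of grayscale values."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image's mean.
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def find_stock_match(image, database, threshold=10):
    """Return the name of the closest stock photo, or None if nothing is near."""
    h = average_hash(image)
    name, stored = min(database.items(), key=lambda kv: hamming(h, kv[1]))
    return name if hamming(h, stored) <= threshold else None

# Hypothetical stock-photo "database" of one 8x8 gradient image.
stock = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
database = {"stock_photo_001": average_hash(stock)}

# A lightly brightened copy of the stock image should still match.
altered = [[min(255, p + 3) for p in row] for row in stock]
print(find_stock_match(altered, database))  # → stock_photo_001
```

The key property is that small edits to an image barely change its fingerprint, so a slightly cropped or recoloured stock photo still lands within the Hamming-distance threshold of the original.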

These various indicators, when presented to the readers, should inspire them to filter any strong emotional responses through their critical thinking faculties, states Vadakkumpadan. This will be a major step towards curbing the spread of these fake news articles and their inaccurate claims.

“It is quite clear that disinformation is already a massive social problem facing the world today, and is something that is being driven to all new heights by deepfake technology. However, at the same time, it should be remembered that it is also technology – in the form of image and text analytics and AI – that is shaping up to be the answer to defeating this ongoing concern,” he concludes.