Surviving a deepfake world

Numerous publicly available tools deliver deepfake audio and video capabilities to hundreds of thousands of cyber criminals.

It is a pity to have to use terms like sophisticated, highly evolved, complex and productive when referring to the 2020 cyber attack landscape, but any attempt to downplay the resources of the enemy would be nothing short of disingenuous.

However, it should be noted that far less knowledgeable criminal players are also emerging, successfully turning the latest technology advances to their advantage.

Just peruse the business technology press headlines from 2019 and you will see an array of reports on ransomware, malware and remote desktop protocol (RDP) attacks.

Cyber criminals are increasing the complexity and volume of their attacks and campaigns, and are forever seeking ways to stay one step ahead of cyber security practices. To add insult to injury, they turn the world’s latest technology advances to their own ends.

Artificial intelligence (AI) and machine learning have led to invaluable technological gains for businesses and society as a whole, but never make the mistake of thinking only the good guys are clever enough to deploy these advances – cyber criminals utilise them to the full.

Rise of two-stage extortion modus operandi

As we head into 2020, experts such as McAfee Labs foresee more cyber criminals targeting corporate networks to exfiltrate corporate information in two-stage ransomware campaigns.


It is expected this year will see increased penetration of corporate networks via two-stage extortion attacks. In the first stage, cyber criminals will deliver a crippling ransomware attack, extorting victims to get their files back.

In the second stage, criminals will target the recovering ransomware victims again with an extortion attack, but this time they will threaten to disclose the sensitive data stolen before the ransomware attack.

2019 showed how cyber criminals partnered to boost their threat capabilities. Ransomware groups used pre-infected machines from other malware campaigns, or used RDP as an initial launch point for their campaign. These types of attacks required collaboration between groups.

This partnership drove efficient, targeted attacks which increased profitability and caused more economic damage. In fact, Europol’s Internet Organised Crime Threat Assessment named ransomware the top threat that companies, consumers and the public sector faced in 2019.

Based on what McAfee Advanced Threat Research is seeing in the underground, it expects criminals to exploit their extortion victims even more moving forward. The rise of targeted ransomware has created a growing demand for compromised corporate networks. This demand is met by criminals who specialise in penetrating corporate networks and selling complete network access in one go.

Cloud considerations

With more and more enterprises adopting cloud services to accelerate their businesses and promote collaboration, the need for cloud security is greater than ever.

As a result, the number of organisations prioritising the adoption of container technologies will likely continue to increase in 2020.

The increased adoption of robotic process automation and the growing importance of securing system accounts used for automation raise security concerns tied to application programming interfaces and their wealth of personal data.

The incredible potential of AI

AI has extended the capability to produce convincing deepfake videos. Like any technology, this has positive applications, such as creating digital voices for people who have lost theirs, or updating film footage instead of reshooting it when actors fluff their lines.

However, as the technology is increasingly refined, so too is its use for malicious purposes.

Researchers at the University of Washington in the US created a deepfake of former US president Barack Obama and circulated it on the Internet, making it clear how such technology can be abused: they were able to make the video say whatever they wanted it to say.

The prospect of nefarious actors presenting a deepfake video of a world leader as a real communication is alarming and could be a threat to global security.

Enter stage left: Less technically experienced cyber criminals

There is nothing new about the ability to create manipulated content. As far back as World War II, images were used in campaigns by all protagonists, designed to make people believe things that weren’t true.

What’s changed with the advances in artificial intelligence is that you can now build a very convincing deepfake without being an expert in technology. There are Web sites where you can upload a video and receive a deepfake video in return.

There are very compelling capabilities in the public domain that deliver both deepfake audio and video abilities to hundreds of thousands of potential cyber criminals who possess only basic skills, but enough to create persuasive spurious content.

Deepfake video or text can be weaponised to enhance information warfare. Freely available video of public comments can be used to train a machine-learning model that can develop a deepfake video depicting one person’s words coming out of another’s mouth.
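The face-swap approach popularised by consumer tools rests on a shared-encoder idea: a single encoder learns features common to both faces, while a separate decoder is trained per identity; feeding person A's frames through person B's decoder produces the swap. The toy sketch below illustrates only that architectural idea, using linear maps and random stand-in data – all dimensions and variable names are invented for illustration, and real systems use deep convolutional autoencoders or GANs trained on thousands of frames:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for flattened face crops of two people.
DIM, LATENT, N = 64, 8, 200
faces_a = rng.normal(size=(N, DIM))
faces_b = rng.normal(size=(N, DIM))

# One shared encoder, plus one decoder per identity.
enc = rng.normal(scale=0.1, size=(DIM, LATENT))
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))

def train_step(x, enc, dec, lr=1e-3):
    """One gradient step on the reconstruction loss ||x @ enc @ dec - x||^2."""
    z = x @ enc                       # shared latent face representation
    err = z @ dec - x                 # reconstruction error
    dec -= lr * (z.T @ err) / len(x)  # in-place parameter updates
    enc -= lr * (x.T @ (err @ dec.T)) / len(x)
    return float((err ** 2).mean())

# Alternate training, so the encoder learns features common to both faces.
losses = []
for _ in range(500):
    losses.append(train_step(faces_a, enc, dec_a))
    train_step(faces_b, enc, dec_b)

# The "deepfake" trick: encode A's frames, decode with B's decoder,
# yielding B's face wearing A's pose and expression.
fake = faces_a @ enc @ dec_b
print(fake.shape)  # (200, 64)
```

The point of the sketch is the asymmetry at the end: training teaches the shared encoder a generic face representation, so swapping which decoder renders it is all it takes to put one person's expressions on another's face.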

Attackers can now create automated, targeted content to increase the probability that an individual or group will believe in a campaign. In this malevolent way, AI and machine learning can be combined to create massive chaos.

Global experts believe that, in general, adversaries will use the best technology available to accomplish their goals. So if we think of people in positions of power at national level, but with a set of scruples bent on manipulating, say, an election, deepfake video is the route they will take.

There are many scenarios one can paint when considering how business adversaries with the wrong set of corporate ethics might produce a deepfake of a CEO making what appears to be a compelling statement regarding earnings, or a product flaw requiring a massive recall in, for example, the motor industry.

The negative impact on a brand’s share price and business confidence would be immense, to say the least.

Is it all gloom and doom, with experts like McAfee predicting 2020 will give an untrained class of cyber criminal the ability to create deepfakes, in turn increasing the quantity and spread of misinformation?

Certainly not. Security teams will have their work cut out this year, but they can remain well ahead. Organisations implementing good governance, security compliance measures and overall security of cloud environments will have less to worry about.

MJ Strydom

MD, DRS, a Cyber1 company

Strydom joined DRS in 2006 as part of the finance team, worked his way up through the company, and was appointed managing director in 2017. He boasts a wealth of experience in finance and business, and oversees the smooth daily running of the company, which has over 85 employees and in excess of 130 clients. Strydom took a Bachelor of Social Sciences degree at Rhodes University and, upon graduation, went to London, where he spent the next four years in financial management roles.
