How BPM can help counter deepfake fraud


Johannesburg, 01 Oct 2019
Denis Bensch, CIO, FlowCentric Technologies.

There’s a new kid on the corporate cyber fraud block – one that poses a level of threat never encountered before. It’s a threat that seems to come directly from a cheesy sci-fi flick, except it’s so real that a UK energy company, which recently fell victim to it, was defrauded of a whopping €220 000 in a single transaction. The new and growing threat has been labelled ‘deepfake’ fraud.

The word deepfake is a portmanteau of ‘deep learning’ and ‘fake’. A deepfake utilises AI-based technology to synthesise human images or voices with the express goal of creating fake media that’s indistinguishable from real media.

A US congresswoman, Yvette Clarke, who is calling for new legislation in an attempt to counter this phenomenon, stated: “Deepfakes are one of the most alarming trends I have witnessed as a congresswoman to date.” She goes so far as to suggest it poses a terrifying threat to democracy itself.

According to Denis Bensch, CIO of FlowCentric Technologies, deepfake fraud attempts are virtually impossible to prevent, and exceptionally difficult to detect.

“There is nothing in the current conventional IT security arsenal to counter deepfakes. However, a business process management system, if properly implemented and maintained, offers the best and most effective strategy to deal with this growing new threat,” he says.

Among the most important characteristics of deepfake fraud is that it exploits our innate psychological conditioning. Humans are hardwired to believe what we see with our own eyes and hear with our own ears – and to make decisions based on that information. It’s the same human condition that illusionists and magicians exploit: one part of us knows that what we are seeing isn’t real, but because we are tricked into seeing what the performer wants us to see, our brains tell us that it is real.

A deepfake fraudster utilises almost exactly the same modus operandi – but with an additional advantage over the illusionist in that we don’t know that what we are seeing or hearing is, in fact, an illusion.

Deepfake fraud defined

Deepfake fraud is the use of artificial intelligence and other modern technologies to manipulate video and/or audio to produce falsified results. For example, it has been used to produce videos of celebrities, politicians and others saying or doing things they never actually said or did.

Prominent examples that made headlines recently included a video of former US President Barack Obama, and one of Facebook CEO Mark Zuckerberg seemingly boasting about how the social media platform abuses data collected from Facebook users. While the makers of these videos claimed to have produced them to demonstrate the power of AI-enhanced video tampering, deepfakes have been used to create fake celebrity pornographic videos, revenge porn, malicious hoaxes and fake news – including prominent politicians seemingly slurring their words as if drunk or making inappropriate statements.

And now deepfakes are making their way into the business arena.

Deepfake threat to business

“Deepfakes pose two clear threats to business – financial and reputational,” Bensch says.

Imagine the damage to a company’s reputation if a video of the CEO making racist and/or sexist remarks about employees, customers or suppliers were to be ‘leaked’ to social media. Imagine the fallout if the company does a great deal of business with government, and the disparaging remarks are directed at the government, or even the president. The backlash could be catastrophic.

“In that scenario, the deepfake fraudster’s motive could be purely malicious, or aimed at undermining a rival. In cases of financially motivated deepfakes, the reward for the perpetrator is immediate – such as the €220 000 fraud perpetrated against the UK energy company,” Bensch explains.

In that instance, the fraudster managed to replicate the voice of the CEO of the German parent company, and convinced the UK CEO to urgently transfer the money to a Hungarian supplier. It is believed commercially available voice-generating software was used to perpetrate the scam. The UK CEO only became suspicious when his ‘boss’ called again and requested another transfer – but by that time, the €220 000 had disappeared into the ether.

This was not the first such case. A recent report by the BBC quotes Symantec as stating that it had seen three cases in which deepfaked audio of a CEO’s voice was used to trick senior financial controllers into transferring funds to fake accounts.

Fraud that relies on human gullibility is not new. According to insurance company AIG, business e-mail compromise (BEC) has overtaken ransomware and data breaches caused by both hackers and employee negligence as the reason for cyber-insurance claims. In its April 2019 report, the US Federal Bureau of Investigation (FBI) said losses from BEC scams had doubled in 2018 from 2017 to top $1.3 billion.

“Deepfake fraud is taking BEC-type attacks to a whole new level,” Bensch says.

Countering deepfake scams

At present, there is little that can be done to counter reputational deepfake attacks until reliable technology is developed that can detect such fakes. Facebook recently announced it was investing $10 million into a deepfake detection project.

However, Bensch says, businesses can use BPM to reduce the risk of financial deepfake attacks.

Properly implemented, BPM removes the human gullibility factor from the deepfake fraudster’s arsenal, making it considerably more difficult for the fraud to be perpetrated.

“Every business should have watertight processes in place for the authorisation of any transaction. There also has to be a separation of duties, so that no single employee can control an asset (money), authorise the use of that asset, and keep the record of that asset,” Bensch explains.

“It is essential that checks and balances are built into the payment process and that no one, not even the CEO, can override these without some kind of additional verification process. This could include something like a call back, or demanding that the person issuing the instruction utilise a frequently changed password.

“This might be frustrating, but any person who is concerned about preventing fraud in their organisation would understand the need to protect the business against increasingly sophisticated fraudulent attacks,” Bensch concludes.
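The separation-of-duties and verification rules Bensch describes can be sketched in code. The following is a minimal, hypothetical illustration only – the names, threshold and `authorise` function are assumptions for the sake of the example, not FlowCentric’s actual BPM system or API:

```python
from dataclasses import dataclass
from typing import Optional

# Assumed policy: transfers at or above this amount require out-of-band
# verification (e.g. a call-back), regardless of who issued the instruction.
CALLBACK_THRESHOLD = 10_000


@dataclass
class PaymentRequest:
    amount: float
    initiator: str                      # who issued the payment instruction
    approver: Optional[str] = None      # second person who must sign off
    verified_by_callback: bool = False  # out-of-band check completed?


def authorise(request: PaymentRequest) -> bool:
    """Return True only if the process rules are satisfied."""
    # Rule 1 (separation of duties): someone other than the initiator
    # must approve the transaction -- no single employee controls it end to end.
    if request.approver is None or request.approver == request.initiator:
        return False
    # Rule 2: large transfers need additional verification, and no one --
    # not even the CEO -- can override this step.
    if request.amount >= CALLBACK_THRESHOLD and not request.verified_by_callback:
        return False
    return True
```

Under rules like these, a single deepfaked ‘urgent’ phone call from the boss is not enough on its own: the €220 000 transfer in the UK case would have been blocked until a second employee approved it and the call-back verification was completed.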
