Rise in digital customer interactions increases risk of cyber fraud
In the 18 months since the onset of the COVID-19 pandemic, even technophobic consumers have turned to digital channels for almost every aspect of their lives, from banking and shopping to ordering takeaway meals. There has even been increased uptake of online medical consultations.
And while most aspects of our lives are slowly starting to return to “pre-pandemic normal”, international research has found that this increased level of digital interaction is likely to continue.
This finding was echoed in an October 2020 McKinsey analysis, which found that the pandemic had accelerated digital transformation by several years, with consumers moving dramatically towards online channels.
While companies have responded by innovating and launching a broad range of intelligent, digital-first offerings, Sebastian Reeve, Director of International Go-To-Market, Intelligent Engagement at Nuance Communications, warns that despite the unprecedented level of convenience delivered by digital channels, there are also vulnerabilities in the digital realm that need to be addressed.
“Consumers' expectations have risen – they demand faster, simpler service that prioritises their needs, time and, importantly, their safety,” he adds.
Many consumers – slightly more than half of those quizzed in OnePoll’s survey of 10 000 people – still prefer to deal with a human when they have a problem or a complex question.
According to Reeve, the key to giving consumers the superior experience they expect is to utilise technology – particularly artificial intelligence – judiciously. For example, this involves combining AI-powered digital customer experiences with effective human-assisted service when necessary.
“Having AI and human agents work together as one team has never been more important now that customer service agents are mostly working from home, yet are still expected to deal with complex customer queries,” he says.
“This will help to build stronger, more valuable customer relationships and deliver significant competitive advantage.”
Reeve points out that having an AI co-worker ensures help is always at hand for human agents who would otherwise be left to trawl through multiple knowledge bases in search of answers while an increasingly frustrated customer waits.
“Thanks to modern, conversational AI, agent interactions can be monitored and real-time support provided. This could include relevant insights into customer history, advice on best practices and next best actions, and targeted product and offer recommendations – whatever insights and tools are needed to reach resolutions faster and handle customer enquiries and complaints confidently,” he says.
But while the increased adoption of digital channels and AI is good news for companies wanting to improve customer experience, Brett Beranek, Vice-President and General Manager, Security & Biometrics Line of Business at Nuance, warns that the rise in the number of digital interactions increases the vulnerability of companies and their customers to ever-more sophisticated cyber crime attacks.
Of particular concern is the growing use and sophistication of deepfake technology – the manipulation of video and/or audio to make (usually high-profile) individuals appear to say or do things they never did.
Research conducted in 2020 found there were already 100 million deepfake videos on the internet – a 6 820-fold increase from 14 678 in 2019. While most of these fakes are not particularly sophisticated, nor likely to fool the majority of viewers (the British sovereign’s 2020 alternative Christmas message is an example), deepfake technology is improving rapidly, enabling more realistic deepfakes to be produced more readily.
This is especially true when it comes to voice cloning, which could have serious repercussions for businesses, both financial and in terms of reputation.
One of the first reported cases of deepfake fraud involved cloning the voice of an energy company’s CEO to con the company out of almost a quarter of a million dollars.
“It’s a relatively small step from cyber criminals posing as a senior executive to gain access to confidential information, to pretending to be a customer withdrawing a significant sum of money,” Beranek says.
“Businesses need to act today and get the tools and strategies in place to defend themselves and their customers against this next chapter in fraud.”
But how? When criminals are able to use technology to effectively mimic an individual’s accent and style of speaking, separating the real from the fake can be extremely difficult, especially for the human ear.
Beranek maintains the only way to address the problem is for businesses to utilise biometric technologies that analyse voices and detect anomalies.
“Human voices are as unique as fingerprints, but voice recognition technology alone may not be enough to detect that a voice has been cloned. Conversational biometric technology, on the other hand, goes further. It uses sophisticated algorithms to analyse more than 1 000 voice characteristics, including vocabulary, grammar and sentence structure, to validate a caller’s identity in the first few seconds of an interaction. There’s also technology that can determine whether a voice is real (human) or synthetic (fake),” he says.
Another protective layer on top of voice biometrics is behavioural biometrics, which measures how someone interacts with a device. Analysing how they type, tap, swipe or even hold the phone can help determine whether they are who they say they are.
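To give a sense of how such a behavioural check might work – this is an illustrative sketch only, not Nuance’s actual system, and every function name, timing value and threshold here is hypothetical – one simple form of behavioural biometrics compares the typing rhythm of a new session against a profile enrolled from the genuine user’s earlier sessions:

```python
# Hypothetical sketch of keystroke-dynamics matching: compare the
# inter-key timing rhythm of a login attempt against an enrolled profile.
from statistics import mean, stdev

def enroll(samples):
    """Build a profile (mean, stdev) per inter-key interval from
    several typing sessions, each a list of inter-key times in ms."""
    intervals = list(zip(*samples))  # group the i-th interval across sessions
    return [(mean(col), stdev(col)) for col in intervals]

def match_score(profile, attempt):
    """Average z-score distance of an attempt from the profile;
    lower means the typing rhythm is closer to the enrolled user's."""
    scores = [abs(t - mu) / (sigma or 1.0)
              for (mu, sigma), t in zip(profile, attempt)]
    return mean(scores)

# Enrol with three sessions of the same passphrase (timings in ms).
profile = enroll([[110, 95, 140], [105, 100, 150], [115, 90, 145]])

genuine  = match_score(profile, [108, 96, 142])  # rhythm like the owner's
imposter = match_score(profile, [60, 200, 80])   # very different rhythm

assert genuine < imposter
```

Real systems combine many more signals (swipe pressure, device angle, navigation habits) and learn thresholds statistically, but the principle is the same: the behaviour itself, not a secret the user knows, is what gets checked.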
“Because these technologies can’t be compromised in the same way as knowledge-based security methods such as passwords and PINs, and because they help to secure true identities and prevent fraudsters from conning both customers and employees, they are increasingly being used by savvy companies as a successful authentication tool in this era of ever-increasing digital interactions,” Beranek concludes.