Using technology to fight friendly fraud
Fraud thrives in times of social disruption, and there has been no greater source of disruption in recent times than the COVID-19 pandemic.
In April, Bruce Dorris, president and CEO of the Association of Certified Fraud Examiners (ACFE), the world’s largest anti-fraud organisation, warned that the coronavirus pandemic presented all the elements of “a perfect storm” for fraud: the pressure of a sharp and sudden economic downturn coupled with the increased opportunity created by a hastily revised working environment.
This is particularly true of contact centre agents who have experienced a dramatic change in working patterns and locations, which has placed them under unprecedented pressure, says Simon Marchand, Chief Fraud Prevention Officer at Nuance.
“Disruption drives people to seek answers from organisations they rely on, sign up for emergency schemes, reactivate old accounts, question refund policies, check stock levels and so on. However, while demand for agent interaction and responses rise, the emergency measures taken, such as remote working, may significantly reduce agent capacity,” he says.
According to Marchand, professional fraudsters were quick to take advantage of the pandemic by playing on public fears, exploiting overstretched systems, and taking advantage of the unusual times to camouflage suspicious behaviours.
As a result, there was a 400% increase in attempted fraud cases experienced by just one retail bank in the US during the first few months of the outbreak. And in its COVID-19 and Stimulus report for the period 1 January to 14 October 2020, the US Federal Trade Commission recorded fraud reports valued at over $160 million.
However, it’s not only career criminals who have exploited the situation for illegal gain. Under unexpected financial pressure and presented with fresh opportunities, formerly trusted employees and customers – “friendly fraudsters” – have got in on the act too. During the pandemic, one financial institution reported a doubling in the number of fraud attempts by legitimate account holders.
Together with the professionals, friendly fraudsters are able to take advantage of three key weaknesses that often arise in over-extended contact centres:
- Cracks appear in ID and authentication processes, as agents who are under pressure and overwhelmed by demand try to deal with as many calls as possible, rather than apply identity verification measures as rigorously and consistently as they should.
- Remote working, without support and supervision – and facing new types of questions, products and procedures occasioned by the pandemic – may force agents to guess at the correct course of action. This could make them more susceptible to social engineering by fraudsters seeking to steal personally identifiable information (PII), data that could potentially identify a specific individual. Protecting this information is essential, as, with just a few bits of an individual’s personal information, thieves can create false accounts, run up debt, create a false ID or passport, or sell that individual’s ID to criminals.
- With “normal” behaviour having changed as a result of the pandemic, and legitimate customers acting in unusual ways, fraudulent behaviour becomes harder to spot. This, combined with higher work volumes for fraud management teams, means more fraud is likely to go undetected for longer.
“It is absolutely essential that organisations reinforce their internal controls. Biometrics and artificial intelligence are available to assist with this,” Marchand says.
As customer passwords and PII have become easier to purchase on the dark web, biometric authentication has become increasingly popular. Instead of an agent asking a customer for PII or a password, customers are identified using a characteristic unique to them – their voice or the way that they type.
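In outline, voice biometric verification typically works by reducing a speech sample to a fixed-length embedding (a voiceprint) and comparing it against the enrolled one. The sketch below is a hypothetical illustration of that comparison step only – the embedding model, function names and threshold value are assumptions for the example, not part of any vendor's actual product.

```python
import math

# Hypothetical sketch: a voiceprint is modelled as a fixed-length
# embedding vector produced by some upstream speech model (not shown).
# Verification compares the live caller's embedding to the enrolled one.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_caller(enrolled_voiceprint, live_voiceprint, threshold=0.85):
    """Accept the caller if the live voiceprint is close enough to the
    enrolled one. The threshold (an assumed value here) trades false
    accepts against false rejects."""
    return cosine_similarity(enrolled_voiceprint, live_voiceprint) >= threshold
```

Because the check is a similarity score rather than an exact match, the threshold becomes a tunable risk control rather than a pass/fail secret the agent has to administer.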
Biometrics also helps to ease the time pressure on agents by no longer requiring them to ask a series of knowledge-based authentication questions of legitimate, but often stressed, customers.
Another advantage of biometrics is that it removes the need for agents to have access to customers’ PII – reducing the opportunities for fraudsters to extract that information, and for formerly trusted employees to sell it for gain. The latter is known as occupational fraud. In addition to external fraud and friendly (or opportunistic) fraud, there is a risk of occupational fraud escalating in times of social disruption, as was the case after the subprime crisis: a poor socio-economic situation can increase the financial pressure on employees. Paired with an opportunity, such as working from home unsupervised, this creates a situation where they might be tempted to steal customer information to resell or use for their own benefit.
Biometrics can also actively identify known fraudsters by their voice or behaviour. Cross-referencing calls against a database containing the voiceprints of known fraudsters ensures that suspicious calls are flagged for further security checks. Clustering analysis can also trigger alerts when a unique voice is heard multiple times in a given period, which could indicate a new fraudster trying to socially engineer agents, or testing security processes.
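The clustering alert described above can be reduced to a simple idea: if the same (unenrolled) voice cluster turns up against many different accounts in a short window, raise an alert. The sketch below assumes upstream clustering has already assigned each call a voice-cluster ID; the function names and the threshold are illustrative, not a real product API.

```python
from collections import defaultdict

# Hypothetical sketch: each call in the monitoring window arrives as a
# (voice_cluster_id, account_id) pair, where the cluster id comes from
# an assumed upstream voiceprint-clustering step.

def flag_repeated_voices(call_records, max_accounts=3):
    """Return the voice-cluster ids heard against more distinct accounts
    than the threshold allows - a possible sign of one fraudster probing
    many accounts or testing security processes."""
    accounts_by_voice = defaultdict(set)
    for cluster_id, account_id in call_records:
        accounts_by_voice[cluster_id].add(account_id)
    return [cid for cid, accounts in accounts_by_voice.items()
            if len(accounts) > max_accounts]
```

Counting distinct accounts rather than raw calls avoids flagging a legitimate customer who simply rings back several times about their own account.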
However, Marchand points out that while biometric authentication can be effective in preventing trusted employees from committing fraud and in quickly identifying legitimate customers, a different approach is needed to identify customers who might have been driven to break the law.
This is where “credibility authentication” comes to the fore.
While biometric authentication confirms that the customer is who they say they are, a credibility authentication system also analyses the customer’s voice to evaluate whether the customer is speaking normally and is not showing signs of stress or of trying to deceive the agent.
If the credibility authentication system does find reason to believe the customer is acting dishonestly, the call can quickly be flagged for further investigation.
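Combining the two checks amounts to a small routing decision: fail closed if the identity doesn't match, flag for investigation if the credibility signal is high, otherwise proceed. The sketch below is a hypothetical decision layer; the score scale, threshold and outcome labels are assumptions for illustration.

```python
# Hypothetical sketch of the decision layer combining both systems.
# biometric_match: True if the voiceprint matched the enrolled customer.
# credibility_score: assumed 0.0 (no signs of stress/deception) to 1.0
# (strong signs), produced by an upstream credibility system (not shown).

def route_call(biometric_match, credibility_score, risk_threshold=0.7):
    """Decide how a call is handled after both checks have run."""
    if not biometric_match:
        return "fail_authentication"      # identity not confirmed
    if credibility_score >= risk_threshold:
        return "flag_for_investigation"   # identity confirmed, but suspicious
    return "proceed"                      # identity confirmed, call looks normal
```

Because both inputs are produced automatically, this decision can run on every call without adding work for the agent – which is the scalability point Marchand makes below.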
“Both biometric and credibility authentication systems depend on AI rather than human resources and expertise. As a result, they can be scaled up and out quickly in times of social disruption, to help mitigate the increased incidence of attempted fraud,” Marchand concludes.