Cybercrime in the AI era

With GenAI-enhanced phishing emails, socially engineered deepfakes and automated malware, cybercriminals are also part of the AI revolution.
By Tiana Cline, Contributor
Johannesburg, 16 May 2024
Amin Hasbini, Kaspersky

At the beginning of February, a finance worker joined a multi-person video call with colleagues he knew – even the company’s UK-based chief financial officer was on the call. The worker had been initially sceptical when the CFO asked him, first by email, to perform a secret transaction, but any doubts he had were dismissed after the video call. After all, he had been face to face with the CFO and several other senior staff members.

Two hundred million Hong Kong dollars later, it turned out to be a highly sophisticated social engineering scam that used deepfake AI to create convincing images, audio and video – in this case, of the CFO and other colleagues. The original email was a phishing attempt, and the finance worker was duped by his colleagues’ deepfake replicas.

“In the future, will cybercriminals completely depend on AI bots to create these attacks, or will they still need some manual work to be done by a human? Everyone is afraid of losing their jobs to AI and probably some cybercriminals too.”

Amin Hasbini, Kaspersky

From Apple to Accenture, Spotify, Samsung and JPMorgan Chase, the number of prominent companies banning generative AI (GenAI) tools is growing. According to Cisco’s 2024 Data Privacy Benchmark Study, organisations are limiting the use of GenAI over data privacy and security concerns. While some businesses are embracing AI, others fear security breaches and worry about how these third-party tools store sensitive corporate data. Cisco’s study found that nearly 50% of GenAI users have entered information into AI tools that could be problematic, including employee information or internal company information.

Smarter scams

Banning (or limiting) GenAI in the workplace won’t stop cybercriminals from going after data and using AI. As more people adopt AI and become familiar with it, cybercriminals are also adopting AI for malicious purposes. “Attackers are actually engaging with AI continuously. They’re adopting AI in their work and using it,” says Amin Hasbini, Kaspersky’s head of global research and analysis for the META region. “I worry about the future. How will people discern reality from virtuality? Even those people who believe they are very smart are easily manipulated as technology blurs the separation between reality and the virtual world.”

Phishing scams, which were once easy to spot with their obvious spelling errors and impersonal content, are now more convincing than ever. Generic, mass phishing emails can become more engaging and persuasive with a bit of help from generative AI. Inexperienced cybercriminals can create trojans on demand, using an LLM like ChatGPT to write code, with no programming skills required. “Using AI for phishing, cybercriminals are increasing the sophistication of emails, localising on an industry, country, language or even religious level, bringing every context possible to be very specific – and more convincing – to the victim,” Hasbini says. Attackers are using AI to work more efficiently, and GenAI in particular has given them the ability to spread their attacks faster and at a larger scale.

Universities use AI content detection engines like Turnitin, Grammarly and QuillBot to fight plagiarism (and check originality). Hasbini says a similar scientific approach can be used to analyse the text of social engineering scams like phishing. “If it’s too correct, it’s considered a parameter that an email could be generic,” he says, emphasising that accuracy is only one measurable parameter in phishing emails – there are signals like perplexity and burstiness to consider.

EXPLAINER

Perplexity: Perplexity is a measure used in AI content detection of how unpredictable a piece of text is. It analyses how complex and varied the structures within the content are. Low perplexity – simple, highly predictable text – suggests the content was more likely generated by an AI system rather than a human, as human-written text tends to be more sophisticated and varied.
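Very loosely, a detector scores how probable each token is under a language model and averages the result. The sketch below illustrates the arithmetic only, using a hypothetical toy unigram model in place of the large neural language models real detectors rely on.

```python
# Minimal sketch of a perplexity score, assuming a toy unigram language model
# built from a reference corpus. Real detectors use large neural models, but the
# arithmetic is the same: perplexity = exp(average negative log-probability).
import math
from collections import Counter

def train_unigram_model(corpus_tokens):
    """Estimate token probabilities from a reference corpus (toy model)."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def perplexity(tokens, model, floor=1e-6):
    """Lower perplexity = more predictable text, one signal of AI generation."""
    log_prob_sum = 0.0
    for tok in tokens:
        p = model.get(tok, floor)  # unseen tokens get a small floor probability
        log_prob_sum += math.log(p)
    avg_neg_log_prob = -log_prob_sum / len(tokens)
    return math.exp(avg_neg_log_prob)

reference = "the quick brown fox jumps over the lazy dog the fox".split()
model = train_unigram_model(reference)
print(perplexity("the fox jumps over the dog".split(), model))
```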

Burstiness: Burstiness looks at the length and structure of sentences. Uniformly short sentences with little variation in length and structure can indicate AI-generated text, as AI often struggles to produce the natural variation of human writing. Analysing the flow and variation of sentences helps detect text potentially generated by an AI system rather than a human.
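As one illustration, burstiness is often approximated as the variation in sentence length relative to the average. The sketch below assumes that simple proxy; real detectors combine it with perplexity and other signals.

```python
# Hedged sketch of a burstiness-style measure: the spread of sentence lengths
# relative to their mean. Low variation suggests uniform, possibly AI-written text.
import re
import statistics

def sentence_lengths(text):
    """Split text into sentences and count the words in each."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    """Coefficient of variation of sentence length (needs at least two sentences)."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

print(burstiness("Short one. Then a much longer, rambling sentence follows here. Tiny."))
```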

AI IN THE CYBER WORKPLACE

Long before GenAI put large language models (LLMs) in the spotlight, AI was already part of cybersecurity. Most security vendors have AI integrated into their solutions to enhance their threat-prevention capabilities. But now, AI is filling another gap – the cybersecurity skills shortage. According to Splunk’s latest predictions report, 86% of CISOs believe GenAI will alleviate the skills gaps and talent shortages on their security teams. “While educational or academic programmes are in place to train more cyber talent, it seems the global demand for talent keeps outpacing the available supply,” says Michael Brink, CTO, CA Southern Africa.

There’s no question that AI can deal with large volumes of data – it is faster and more efficient than humans at making predictions with a reasonable degree of accuracy. By training on attack patterns, AI can learn to recognise suspicious activity across vast quantities of data in real time. For containment, it can assist by automating response procedures. “These capabilities, coupled with advances in automation technology, make AI an ideal and practical solution to help bridge the talent gap,” says Brink.
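As a rough illustration of the kind of pattern-learning described here, the sketch below uses scikit-learn’s IsolationForest to flag events that deviate from learned normal traffic. The feature names, values and thresholds are assumptions for illustration only, not a description of any vendor’s product.

```python
# Hedged sketch of anomaly detection on security telemetry using an
# IsolationForest; all features and numbers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per event: [bytes_sent, failed_logins, requests_per_min]
normal_traffic = np.random.default_rng(0).normal(
    loc=[5_000, 0.2, 30], scale=[1_500, 0.5, 10], size=(1_000, 3)
)

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)  # learn what "routine" events look like

new_events = np.array([
    [5_200, 0, 28],       # resembles routine traffic
    [250_000, 14, 600],   # exfiltration-like burst with many failed logins
])
print(detector.predict(new_events))  # 1 = normal, -1 = flagged as anomalous
```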

From burnout to bridging the gap

Finding cybersecurity talent is a challenge for businesses everywhere. Cyber teams are often short-staffed and working with limited resources, which can lead to burnout. According to a recent Forrester study, 66% of cybersecurity professionals experienced extreme stress or burnout, with more than half requiring medication to manage their mental health. And when employees are under pressure, they’re more likely to make mistakes.

This is where GenAI comes in. A study from Kaspersky reveals that even though C-level executives are “unsure how it actually works”, they plan to use GenAI to cover critical skills shortages. The findings showed that despite concerns about the use of sensitive proprietary information to power GenAI tools, half of the executives surveyed plan to automate mundane tasks that are now carried out by employees. “GenAI offers a low-barrier way to complete in a matter of minutes what have been resource-intensive tasks that require skills and experience,” says David Emm, a principal security researcher at Kaspersky. There are already AI systems like Aidrian that can be operated by professionals who aren’t cybersecurity experts. “This makes it easier for businesses to protect themselves without needing additional specialist staff,” says Wilnes Goosen, a fraud consultant at Experian Africa. “With Aidrian, the increased level of accuracy in identifying fraudulent transactions considerably reduces the volume of manual reviews, reducing pressure on clients’ fraud teams.”

Emm says that GenAI is infiltrating and spreading through businesses like wildfire before policies have been implemented. “You can end up with a situation where IT teams are playing catch-up with their security, just as we saw with the BYOD trend a decade ago,” he says. “Once Pandora’s Box is open, it’s tough to close, and once data has been innocently entrusted to AI platforms, no amount of retrofit AI policy is going to restore that IP.”

* Article first published on brainstorm.itweb.co.za