
Assessing generative AI’s impact on cyber risk

By David Hoelzer, Fellow at The SANS Institute

Johannesburg, 07 Aug 2023
AI chatbots are easily influenced by text prompts embedded on web pages.

The rise of ChatGPT and generative AI has ushered in an extensive range of new opportunities seemingly overnight. With the ability to automate such a wide range of tasks, these tools have garnered the attention of the masses. From streamlining copywriting and generating complex code to solving maths equations and producing movie scripts, these new AI tools have a lot to offer people from all walks of life.

As the dust settles, however, inevitable questions about what these tools mean for the future of cyber security have come to the forefront. In our always-on digital world, opportunistic threat actors are known to leverage new technologies to deploy cyber attacks. ChatGPT has been on their radar since it was released in November 2022 and is undoubtedly among the newest tools in their arsenal. It’s important for organisations to be cognisant of these risks and to have protocols in place to help mitigate them.

How hackers could exploit generative AI

While ChatGPT has game-changing capabilities, it is also amplifying threat actors’ toolkits, with criminals finding ways to use, abuse and trick the system into doing their dirty work. With 450 000 new pieces of malware detected every day and a staggering 3.4 billion phishing e-mails entering inboxes daily, attacks of this nature have become so commonplace and sophisticated that they are harder than ever to detect. Global agencies have already issued warnings about chatbot creators storing personal data. And as with any change to the way we work and behave, the buzz brings with it the prospect of new security threats, as cyber criminals look to exploit these tools and expand their hacker toolkits.

The easiest and most commonplace application of AI chatbots for cyber criminals will be generating sophisticated and persuasive phishing e-mails. Phishing attacks have increased by 61% year on year and are only rising in volume and velocity, with no signs of slowing down. Typos are a tell-tale sign of phishing for human readers, yet deliberately misspelling key words is also a common hacker tactic for slipping past e-mail filters. Threat actors can instruct a chatbot to be imperfect by prompting it to include a few typos in the body of an otherwise well-written e-mail, allowing phishing campaigns to reach their intended targets at a higher rate.

Furthermore, a human could piece together content crafted by ChatGPT to arrive at a polymorphic piece of malicious code. Research has also revealed that AI chatbots are easily influenced by text prompts embedded on web pages. Cyber criminals can exploit this through ‘indirect prompt injection’, secretly embedding instructions within a web page; if a user unknowingly asks a chatbot to ingest that page, the planted prompt is activated. Researchers even found that Bing’s chatbot can read content from other tabs open in the user’s browser, which means cyber criminals can embed instructions on any open web page and then manipulate victims into handing over sensitive personally identifiable information (PII).
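One practical mitigation is to sanitise web content before a chatbot ever sees it. The sketch below, assuming the third-party Python packages requests and beautifulsoup4 (version 4.9 or later), strips scripts, HTML comments and invisible elements, which are the usual hiding places for injected instructions, so that only the text a human reader would see is passed to the model. The URL and the "hidden" style patterns are illustrative only, not an exhaustive defence.

```python
import re

import requests
from bs4 import BeautifulSoup, Comment

# Inline styles commonly used to hide injected text from human readers (illustrative list).
HIDDEN_STYLE = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden|font-size\s*:\s*0", re.I
)


def visible_text(url: str) -> str:
    """Fetch a page and return only the text a human reader would plausibly see."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Drop script/style blocks and HTML comments outright.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    for comment in soup.find_all(string=lambda s: isinstance(s, Comment)):
        comment.extract()

    # Drop elements styled to be invisible or explicitly marked hidden.
    for tag in soup.find_all(True):
        if tag.decomposed:  # already destroyed along with a hidden parent
            continue
        if tag.has_attr("hidden") or HIDDEN_STYLE.search(tag.get("style", "")):
            tag.decompose()

    return soup.get_text(separator=" ", strip=True)


if __name__ == "__main__":
    # Only this filtered text would then be handed to the chatbot for summarisation.
    print(visible_text("https://example.com")[:500])
```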

A new range of privacy concerns

Generative AI’s technological advancements come with various risks in the form of bias, misinformation, privacy concerns, automated attacks and malicious use. Search engines already represent a well-known privacy risk, since anything they scrape can potentially be indexed and made searchable. To some extent, this has been mitigated over the years as search engine companies have recognised certain patterns that are particularly damaging and actively decline to index them, or at least do not allow public searches for that information. Social security numbers are one example.

On the other hand, generative AI tools trained on something like CommonCrawl or The Pile, which are large, curated scrapes of the internet, represent a less familiar set of threats. With sophisticated large language models like ChatGPT, threat actors can potentially sift through this internet “snapshot” for the personal data of large numbers of ordinary people through careful prompt engineering. However, since the responses are generated from probabilities rather than recalled verbatim from the scraped data, it is much less likely that everything the model produces will be accurate, especially for things like social security numbers or phone numbers.

It’s also important to remember that ChatGPT is not learning in real-time; it is simply making predictions based on its training data and the reinforcement tuning performed by humans scoring its responses. It cannot currently be directed to automate ransomware attacks. It is a research tool created to show the world what is possible, to see how people use it and to explore potential commercial uses. Contrary to popular assumption, we are not retraining AI chatbots in real-time every time we use them.

A force for good

ChatGPT can help cyber defenders just as much as it aids bad actors. Threat hunters and security teams should be actively working out how generative AI tools can be scaled within their everyday operations. For example, the technology can be turned into a force for good against phishing: by prompting a model with examples of the language internal employees typically use, defenders can have it flag incoming messages that deviate from that style and may have been written by outside threat actors, as sketched below.
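A minimal sketch of that idea follows, assuming the OpenAI Python client (openai version 1.x) and an API key in the OPENAI_API_KEY environment variable. The model name, prompt wording and sample e-mails are illustrative assumptions rather than a production-ready detector.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A snippet of genuine internal e-mail used as a style reference (illustrative only).
KNOWN_INTERNAL_STYLE = (
    "Hi team, quick reminder that the quarterly numbers are due Friday. "
    "Ping me on Slack if you need the template. Thanks, Sarah"
)


def flag_if_suspicious(message: str) -> str:
    """Ask the model whether a new e-mail deviates from the known internal writing style."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model could be substituted
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security assistant. Compare the NEW MESSAGE against the "
                    "KNOWN INTERNAL STYLE and reply with 'OK' or 'SUSPICIOUS', followed "
                    "by a one-sentence reason."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"KNOWN INTERNAL STYLE:\n{KNOWN_INTERNAL_STYLE}\n\n"
                    f"NEW MESSAGE:\n{message}"
                ),
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    suspect = (
        "Dear valued employee, your mailbox quota has been exceeded. "
        "Kindly verify your password here immediately."
    )
    print(flag_if_suspicious(suspect))
```

In practice, a team would feed in a larger corpus of genuine internal mail and route any “SUSPICIOUS” verdicts into existing triage workflows rather than acting on the model’s answer alone.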

The possibilities generative AI brings for the future are exciting and transformative. However, it’s important not to lose sight of the threats that come alongside it. As with any shift in the way we do things online, AI chatbots also open up many new possibilities for the cyber criminals who use them. Raising awareness of the specific threats at play is critical to avoiding attacks.
