Subscribe
  • Home
  • /
  • IOT
  • /
  • How AI is expanding the attack surface

How AI is expanding the attack surface

How even inexperienced attackers can use generative AI to undertake attacks demonstrates the increased need for layered security measures, built on a zero trust policy.

Johannesburg, 24 Jan 2024
Len Noe, technical evangelist and white hat hacker, CyberArk.
Len Noe, technical evangelist and white hat hacker, CyberArk.

Security has long been a major organisational concern, with enterprises traditionally relying on a set of preventative controls to address the attack surface. While such controls aim to protect against known threats and vulnerabilities, it is crucial to recognise that the threat landscape is constantly evolving. Therefore, businesses need to constantly adapt and remain proactive in their security practices.

The increasing interconnectedness of systems, coupled with the rapid adoption of new technologies, has further complicated the task of securing organisations, explains Len Noe, technical evangelist and white hat hacker at CyberArk.

“The proliferation of internet of things (IOT) devices, cloud services and remote work environments has significantly widened the attack surface, making it more challenging to protect against new threats. Well-funded attackers continue to innovate, developing new techniques to gain unauthorised access or control over critical assets on a daily basis,” he says.

“A good example here is the generative chatbot. At its core essence, it is essentially a very powerful computer, running advanced machine learning software. However, by virtue of being a computer, it can be compromised by bad actors.”

Noe points out that what is particularly concerning is that generative artificial intelligence (GenAI) solutions are ideal targets for so-called ‘script kiddies’. This term refers to individuals who download hacking tools without comprehending or caring about their inner workings, and without any particular skills.

“Script kiddies are less experienced with cyber security exploits, relying on pre-written programs found online. They have less developed hacking skills, using easier attacks and doing little research. Their intent is often for personal acclaim or to troll, focusing on quantity rather than quality of attacks,” he continues.

“To ‘hack’ a chatbot, you don’t need any advanced technical skills. All you need to do is convince the chatbot that it's fine for it to do what you are asking it to do. Hacking chatbots is as simple as giving the AI a prompt that will override the base instructions given by the developer. This process is called ‘jailbreaking’ the prompt.”

Noe suggests that by jailbreaking ChatGPT, attackers can unlock a range of restricted features, including generating unverified data, expressing authentic thoughts on various topics, providing unique responses, sharing dark humour jokes, making future predictions, fulfilling user demands and showcasing results on topics restricted by OpenAI's policies.

Worse still, he adds, many such attackers may not possess the same level of comprehension regarding the potential consequences and damage that can be caused by their actions.

“The growth of such jailbreaking techniques allows us to truly see the expansion of the attack surface. We are at the point where anyone who can formulate the correct question to ask will be able to perform attacks – without considering the second, third and fourth level collateral damage.

“Once we reach this stage, things are definitely taking a turn for the worse. Essentially, the chatbot has now begun acting as the experienced attacker that is providing the code, configuration, settings – whatever is needed.”

We are now at the point, he indicates, where getting code as well as instructions is no longer reserved for experienced attackers – anyone with access to the internet can use the free versions of ChatGPT, or any number of AI chatbots, and as long as they can phrase the prompt correctly, they will get the answer they are looking for.

“It is worth noting that these concerns extend beyond the exploits of script kiddies alone. The use of AI by legitimate threat actors, such as in the development of polymorphic malware, also poses further risks that demand attention.

“This is going to be the new normal moving forward, and it is why we need to start with the foundation of identity security as the cornerstone of any layered security approach. Adopting the principles of zero trust will go a long way towards securing both our physical and our machine identities,” states Noe.

Zero trust, he explains, emphasises the importance of having the appropriate controls in place to accurately identify genuine threats amid the noise, while also considering the potential misuse of AI technology by both malicious parties and legitimate threat actors.

“There are, however, several specific mitigations that businesses can put in place, including: segmented architecture, allowing traversal between segments through restricted and monitored paths; identity security for all human and machine access, including adaptive multi-factor authentication as a core tenet of security; privileged access management with complex, unique and frequently changing passwords; and endpoint privilege management and application control, which includes the ability to restrict the top privileged groups in Active Directory.

“The tools related to these mitigations are nothing new at this point in the technology timeline. In my opinion, what is different is how we, as security practitioners, need to start leveraging everything we have known and done in the past. Essentially, we must emphasise protection overlap, true defence in-depth, along with a segmented layered security approach, a strong identity security program, effective analytics and, ultimately, a strong zero trust policy,” concludes Noe.

Share