
AI’s potential for good in the cyber security space

Cyber security professionals can use generative AI to better defend against the attackers who are using the same AI tools to create attacks.
By Michael Brink, chief technology officer of CA Southern Africa
Johannesburg, 31 Oct 2023
Michael Brink, CTO of CA Southern Africa.

When people engage in the artificial intelligence (AI) debate, top of the list of concerns is often the intellectual property (IP) and copyright issue, especially regarding code generation or creative outputs from generative AI tools such as ChatGPT.

Think about a scenario where a developer creates code using ChatGPT – ask yourself: is that the organisation’s IP, or is it considered the IP of the internet? What if the code came from two sources? Is it a 50/50 attribution, or an appropriation of another developer’s IP?

AI’s capability to perform creative tasks leads to one of the most contentious issues: is what AI creates in any sense original, or is it always just a copy and, as such, fake? That question will continue to be debated for the foreseeable future.

On the darker side of the coin, AI is already being used to generate phishing sites, and its ability to help write spam, scam and phishing e-mails, and malware is increasing the volume and frequency of cyber attacks.

Fortunately, AI is not yet sophisticated enough to make these attacks much different from what already exists. What that means is that AI enables more variations on existing malicious code samples, but not more sophisticated versions of them.

For a generative AI program to come up with a more sophisticated attack, someone – of the human variety – must have already thought about it and published context for it to the internet.

So, AI cannot intentionally create a more sophisticated attack strategy on its own. Not yet anyway!

However, these considerations on IP and originality are in no way hampering the uptake of AI. In a Gartner survey, 34% of organisations reported that they are already using or implementing AI application security tools to mitigate the risks that accompany generative AI (GenAI).

Moreover, the global research house reveals that over half (56%) of respondents said they are also exploring such solutions. Gartner goes on to advise that IT, security and risk management leaders must, in addition to implementing security tools, consider supporting an enterprise-wide strategy for AI trust, risk and security management.

The survey highlights the top-of-mind risks associated with GenAI, which it notes are significant, continuous and constantly evolving, as the following statistics show:

  • 57% of respondents are concerned about leaked secrets in AI-generated code (a minimal illustration of this risk follows this list).
  • 58% of respondents are concerned about incorrect or biased outputs.
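
To make the first of these risks concrete, the sketch below shows the kind of check a team might run over AI-generated code before committing it. This is a minimal, hypothetical illustration in Python: the patterns are a small sample chosen here for demonstration, not any specific scanning product.

```python
import re

# Illustrative patterns only; real secret scanners use far larger
# rule sets plus entropy checks, and typically run inside CI pipelines.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{20,}['\"]"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_generated_code(code: str) -> list[str]:
    """Flag strings in generated code that look like hard-coded secrets."""
    findings = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(code):
            findings.append(f"{label} near offset {match.start()}")
    return findings

# Example: a credential pasted straight from a chat session.
snippet = 'api_key = "abcd1234abcd1234abcd1234"'
for finding in scan_generated_code(snippet):
    print("WARNING:", finding)
```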

Gartner cautions that organisations that don’t manage AI risk will see their models fail to perform as intended and, in worst-case scenarios, cause human or property damage, security failures, and financial and reputational loss.

But it is not all gloom and doom – there are benefits attached to the informed and cautious use of AI.

AI benefits

Yes, AI is being used to increase the volume and frequency of malware and social engineering attacks but, conversely, it also allows cyber security professionals to use ChatGPT or other AI tools to better defend against the very attackers who use them.

Enterprises can leverage the capabilities of these tools more broadly within their organisations, as they allow them to add to the expertise of their security operations centre (SOC) teams. They also vastly increase the speed at which these teams can respond to potential threats.
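
As a purely illustrative sketch of what that can look like in practice – assuming the official OpenAI Python client and an invented alert format, not any specific vendor workflow – a SOC team might ask a generative model for a first-pass triage summary of a raw alert:

```python
# Hypothetical sketch of LLM-assisted alert triage. Assumes the official
# OpenAI Python client (pip install openai) with an OPENAI_API_KEY set;
# the alert format and model choice are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_alert = (
    "2023-10-31T09:14:02Z host=web-01 proc=powershell.exe "
    "parent=winword.exe outbound=203.0.113.10:443 cmdline='-enc JABjA...'"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system", "content": (
            "You are a SOC triage assistant. Summarise the alert, rate its "
            "severity (low/medium/high) and suggest immediate next steps."
        )},
        {"role": "user", "content": raw_alert},
    ],
)

print(response.choices[0].message.content)
```

The gain here is speed: an analyst gets a structured first read in seconds, although model output of this kind still needs human verification before any action is taken.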

It’s important to state what should be obvious: AI and automation go hand in hand, for both good and bad. Automation is one of the most transformational use cases for AI, as it enables the automation of attack chains while, at the same time, automating the defences against those same attack chains. It is also a key aspect of adaptive AI, a form of AI that automatically adjusts to changing conditions; automation is what makes adaptive AI possible.
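
To ground that point, here is a hypothetical sketch of a defensive loop in which a model scores incoming events and high-confidence detections trigger containment automatically. Every name in it (Alert, score_event, block_ip) is invented for illustration; it is not a real product API.

```python
# Hypothetical automated-defence loop: an AI model scores events, and
# high-confidence detections are contained without waiting for a human.
# All names here (Alert, score_event, block_ip) are invented stand-ins.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    description: str

def score_event(alert: Alert) -> float:
    """Stand-in for a model that returns a threat score in [0, 1]."""
    return 0.97 if "beacon" in alert.description else 0.20

def block_ip(ip: str) -> None:
    """Stand-in for a firewall or EDR API call that contains the host."""
    print(f"Blocking {ip} at the firewall")

def handle(alert: Alert, threshold: float = 0.9) -> None:
    score = score_event(alert)
    if score >= threshold:
        block_ip(alert.source_ip)  # automated containment
    else:
        print(f"Queued for analyst review (score={score:.2f})")

handle(Alert("203.0.113.10", "periodic beacon to known C2 infrastructure"))
handle(Alert("10.0.0.5", "single failed login from office subnet"))
```

The threshold is where the adaptive element comes in: an adaptive system would retune both the model and the threshold as attacker behaviour changes, rather than relying on a fixed rule.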

Essentially, AI is intended to help simplify human tasks or perform tasks that a person would otherwise need to learn – and this requires automation. These could be jobs that make life easier, such as repetitive tasks, but they can also include creative tasks that AI may perform even better.

In a nutshell:

  • Generative AI enables valuable new tools for enterprise organisations.
  • Generative AI does not pose an existential threat if people control its uses and evolution.
  • The widespread adoption of generative AI may well be signalling the beginning of the fifth industrial revolution.
  • Constantly evolving new cyber security solutions – developed using AI – ensure it remains a force for good.

In conclusion, there is no doubt that generative AI systems provide a valuable and powerful new tool for enterprise organisations, as they enable security professionals to do their jobs even better.

They can help to address the shortage of skill sets in cyber security and other analytics-driven fields. They also shorten the time to actionable knowledge by reducing the time to detection, incident response and remediation.
