
Artificial intelligence creates 'self-learning attackers'

Companies should look past the myths and hype of artificial intelligence to discover its real potential, for good and bad.

By MJ Strydom, MD, DRS, a Cyber1 company
Johannesburg, 28 Mar 2019

It might come as a shock to some, but while artificial intelligence (AI) is changing our lives and the way we do business, in cyber security it is as much a tool for cyber attackers as it is for the security measures deployed against them, and for the vendors selling those measures.

Of late, the hype surrounding AI has been huge. Every time a new network security technology is introduced, you can be certain terms such as deep learning, machine learning, self-learning algorithms, cognitive analytics and neural networks are part of the pitch.

But the industry's focus on AI as a way to boost protection against threats ignores the larger problem, which is that of 'self-learning attackers'.

Business efficiency, running neck and neck with security, is a principal driver of the growth in demand for AI-based solutions. A Radware global industry survey revealed businesses had high expectations when implementing AI solutions and wanted multiple benefits from their investments.

Radware's "IoT Attack Handbook: A Field Guide to Understanding IoT Attacks from the Mirai Botnet to its Modern Variants" warns that bots are among the fastest-growing threats in the security landscape, adapting as they seek to cause harm.

We continue to see developments in the chess game of machines trying to deceive each other as they try to steal or protect information.

From a technology perspective these interactions are fascinating; unless, of course, your business is on the receiving end of an attack, in which case academic interest is negated by what can be serious damage to the organisation.

It can actually be horrifying because even though more robust solutions are available, most organisations continue to rely on older technologies and paradigms to defend against these evolving threats.

So there are two questions to ask here: Firstly, how can organisations cut through the hype around AI to understand the most important issues they should be addressing? Secondly, how can they incorporate AI into their current security strategies to take advantage of the technology's ability to detect and mitigate attacks that incorporate the same capabilities?

To answer these questions, it is important to understand what machine learning and deep learning mean.

Machine learning

By definition, an AI system improves and adapts to its environment. Most of today's AI-based security systems are built on machine learning, which consists of a vast collection of algorithms, including deep neural networks.

While those algorithms have the capability to improve the quality of their predictions over time, they still perform a single, specific task. The amount of data needed to be effective will depend on whether that system is based on traditional (non-deep) machine learning or deep learning.

Traditional machine learning has been used for many years with great success. It is able to detect and block many types of attacks through behavioural tracking and anomaly detection. Although very specific and limited to a particular task, it is effective and can provide near real-time protection from unknown attacks. It can also be used to detect behavioural anomalies in traffic patterns as an indicator for denial-of-service attacks.
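
To make that concrete, here is a minimal sketch of the kind of behavioural anomaly detection described above, using an isolation forest, a traditional (non-deep) algorithm from scikit-learn. The traffic features and numbers are illustrative assumptions, not any vendor's actual model.

```python
# Minimal sketch: flagging a traffic burst as a possible
# denial-of-service indicator with a traditional (non-deep) model.
# All feature names and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-minute traffic features: [requests, unique IPs, avg bytes].
normal_traffic = rng.normal(loc=[500, 80, 1200], scale=[50, 10, 150], size=(1000, 3))

# Train only on traffic assumed to be benign, so anything sufficiently
# different from the learned baseline is scored as anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A burst that could signal a denial-of-service attack:
# many requests, few source IPs, small payloads.
burst = np.array([[9000.0, 15.0, 300.0]])

print(model.predict(burst))               # -1 = anomaly
print(model.predict(normal_traffic[:3]))  # mostly 1 = normal
```

Training only on a benign baseline is what gives this approach its near real-time protection from unknown attacks: nothing about the attack itself has to be known in advance, only what normal looks like.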

Deep learning

More recently, deep learning technology has found its way into information security solutions, where it is used to detect complex attacks by correlating multiple individual indicators of malicious intent and behaviour into a single picture of criminal resolve.

These systems can detect complex sequences of events in huge amounts of data; events a human would never be able to notice. On the downside, they are prone to false positives and known to produce unexpected results, and their effectiveness depends primarily on huge amounts of good, carefully classified data.
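
As a rough illustration of such a system, and not any specific product's method, the sketch below defines a small recurrent network in PyTorch that scores sequences of security events for malicious intent. The event vocabulary, architecture and dimensions are all assumptions made for illustration, and the model is untrained.

```python
# Minimal sketch: a deep learning model that scores sequences of
# security events for malicious intent. Vocabulary size, dimensions
# and architecture are illustrative assumptions; the model is untrained.
import torch
import torch.nn as nn

NUM_EVENT_TYPES = 50   # hypothetical event vocabulary: login_fail, port_scan, ...
SEQ_LEN = 20           # events per sequence

class EventSequenceScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_EVENT_TYPES, 32)  # event id -> vector
        self.lstm = nn.LSTM(32, 64, batch_first=True)   # reads the event sequence
        self.head = nn.Linear(64, 1)                    # one malicious-intent score

    def forward(self, event_ids):
        x = self.embed(event_ids)                # (batch, seq, 32)
        _, (h, _) = self.lstm(x)                 # final hidden state: (1, batch, 64)
        return torch.sigmoid(self.head(h[-1]))   # scores in [0, 1]

model = EventSequenceScorer()
batch = torch.randint(0, NUM_EVENT_TYPES, (4, SEQ_LEN))  # four random event sequences
print(model(batch).squeeze(-1))  # untrained, so the scores are arbitrary
```

Before its scores mean anything, a model like this would need exactly the huge amount of good, carefully classified data noted above, which is where most deployments struggle.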

Other challenges are that deep learning systems are not transparent, are hard to reproduce, and struggle to learn in adversarial contexts.

Many companies are asking if the use of AI-based attacks by cyber criminals will drive adoption of AI-based mitigation solutions. My response to that is: yes, but not necessarily at the same pace.

There are three factors to consider: the attack vector, its speed and its evasion technique. For example, using AI for phishing does not change the attack vector from the victim's perspective, but it does increase the scale and the number of targets, compelling every organisation to improve its protection. That may or may not include AI-based systems.

On the other hand, as attacks become more automated, organisations will have to automate their security to ensure they keep on top of the rising number and accelerated speed of attacks.

When cyber criminals leverage new AI-based evasion techniques, this will ultimately drive the adoption of better AI-based detection systems.

So, what will the status of AI be in security going forward?

AI will become an important, if not the most important, component of future cyber security strategies. Organisations will not run or maintain the AI systems themselves, but will instead consume the results of cloud-based systems.

Initially, we will not see on-premises, black-box, fully autonomous AI systems that provide real-time protection.

AI, and deep learning specifically, is a modern cyber security capability that enables experts rather than replaces them. Budgets, testing, deployment and even final decisions, however, remain supervised by humans, who do not always keep pace with the technology.
