
Using AI, ML to prevent cyber crime

By Kirsten Doyle, ITWeb contributor.
Johannesburg, 11 May 2022

The need for machine learning (ML) and artificial intelligence (AI) in cyber security has been driven by sheer scale.



The volume and diversity of cyber security incidents have been on the rise for years, and this trend has exploded over the past few years due to the direct monetisation of cyber security attacks through crypto-currency.


This is according to Nimrod Partush, VP of data science at CYE Israel, who will be presenting on “The role of AI and ML in cyber security”, at the ITWeb Security Summit 2022, to be held at the Sandton Convention Centre from 31 May to 2 June.

He says reports reveal that the number of successful ransomware attacks grew by between 50% and 100% last year.

“This tsunami of attacks cannot be thwarted by traditional rule-based sensors and human analysis alone; they happen too fast and change too often,” he says. Creating tools that operate automatically, learn from past and current attacks, and apply advanced reasoning is the only way to address the threat landscape, and these tools can only be powered by AI.

Combating cyber crime

Speaking of how these technologies can be used to fight cyber crime, Partush says to date, the application of AI in various industries has prevented a slew of crimes.

“A great example for this is banking and credit card systems. These systems are met with multitudes of fraudulent transactions daily. For years, the banking industry has invested trillions in automated tools, and specifically ML models, in combating fraud.”

He says every transaction made is scrutinised by multiple layers of AI models, examining each aspect of the transaction, and advising whether it is safe or suspicious.
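The layered scoring Partush describes can be sketched in a few lines. This is a toy illustration, not any bank's actual system: each "layer" here is a hand-written rule standing in for a trained ML model, and all thresholds and field names are invented. The point is only the structure, where several models each examine one aspect of a transaction and their scores are combined into a safe/suspicious verdict.

```python
# Toy sketch of layered transaction scoring. In production, each layer
# would be a trained ML model; here, simple rules stand in for them.

def amount_score(tx):
    # Flag unusually large amounts (threshold is illustrative).
    return 1.0 if tx["amount"] > 5000 else 0.0

def location_score(tx):
    # Flag transactions made outside the card holder's home country.
    return 1.0 if tx["country"] != tx["home_country"] else 0.0

def velocity_score(tx):
    # Flag bursts of many transactions in a short window.
    return 1.0 if tx["tx_last_hour"] > 10 else 0.0

LAYERS = [amount_score, location_score, velocity_score]

def classify(tx, threshold=0.5):
    """Average the layer scores and compare against a decision threshold."""
    score = sum(layer(tx) for layer in LAYERS) / len(LAYERS)
    return "suspicious" if score >= threshold else "safe"

tx = {"amount": 9000, "country": "RU", "home_country": "ZA", "tx_last_hour": 2}
print(classify(tx))  # two of three layers fire -> "suspicious"
```

Swapping a rule for a real model only changes the body of one scoring function; the combining logic stays the same, which is why such systems scale to many independent detectors.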

One cannot imagine the sheer amount of money that would have been lost if it weren't for AI-based safeguards, says Partush.

"Another example are endpoint detection and response (EDR) tools, or AV software,” he adds. “Since malware is highly pervasive and mutative, a leading strategy for detecting new strains of malware is via application of AI. Models are fed with the flood of malware samples gathered daily, which allows them to generalise and learn how to identify new versions of malware automatically and immediately.”

Accuracy and explainability

Speaking of the pain points to avoid when implementing these technologies, he says there are two: accuracy and explainability.

“To reach high levels of accuracy, AI models require immense amounts of diverse data, and computing resources, to train on. Since these aren't always available, many of the resulting models must compromise on some aspect of accuracy. This usually means a high number of false positives, or false negatives.”

For users, a high false positive rate means sifting through many alerts, which can be an arduous and time-consuming task. A high false negative rate, on the other hand, means the organisation is compromising on some aspect of cyber security, which requires risk planning or supplementary measures.
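The tradeoff between the two error rates comes down to where the detection threshold is set. The sketch below uses made-up scores and labels to show the effect: lowering the threshold raises the false positive rate (more alerts to sift through), while raising it raises the false negative rate (more attacks missed).

```python
# Illustrative only: how a detection threshold trades false positives
# against false negatives. Scores and labels are invented data.

def rates(scores, labels, threshold):
    """Return (false_positive_rate, false_negative_rate) at a threshold.

    labels: 1 = truly malicious, 0 = truly benign.
    A sample is flagged when its score >= threshold.
    """
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp / labels.count(0), fn / labels.count(1)

# Model confidence scores (higher = more likely malicious) and true labels.
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.95]
labels = [0,   0,   0,    1,   0,   1,   1,   1]

print(rates(scores, labels, 0.3))   # lenient: (0.5, 0.0) - noisy, misses nothing
print(rates(scores, labels, 0.75))  # strict:  (0.0, 0.5) - quiet, misses half
```

Defining "acceptable accuracy goals", as Partush puts it, amounts to deciding which point on this curve the organisation can live with.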

“The second major challenge is understanding the results produced by AI. Explainability is one of the most challenging aspects of ML, as in many cases the resulting model, albeit accurate, does not give any reasoning as to why a particular prediction was made.”

He says this can result in reduced trust in the models, and users avoiding or double checking the predictions of the models.
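One simple way a tool can supply the "supplementary information" that backs a prediction is to use a model whose score decomposes by feature. The sketch below shows this for a linear scoring model, where each feature's weight times its value is that feature's contribution, so the prediction comes with ranked reasons. The weights and feature names here are invented for illustration; more complex models need dedicated explainability techniques.

```python
# Sketch of explainability for a linear scoring model: the prediction
# decomposes into per-feature contributions, which can be shown to the
# analyst as ranked reasons. Weights and features are illustrative.

WEIGHTS = {
    "failed_logins": 0.6,
    "off_hours_access": 0.3,
    "new_device": 0.1,
}

def score_with_explanation(features):
    """Return the total risk score and the per-feature reasons behind it."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    total = sum(contributions.values())
    # Sort reasons by how much each feature drove the score.
    reasons = sorted(contributions.items(), key=lambda kv: -kv[1])
    return total, reasons

total, reasons = score_with_explanation(
    {"failed_logins": 1.0, "off_hours_access": 0.0, "new_device": 1.0}
)
print(round(total, 2))  # 0.7
print(reasons[0][0])    # "failed_logins" is the top contributor
```

An alert that says "flagged mainly because of repeated failed logins" is far easier to trust, and to double-check, than a bare score.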

“Avoiding these pain points means firstly clearly defining acceptable accuracy goals for the tools, according to the organisation's strategy, and then seeking out tools that provide supplementary information which can back the predictions made by them.”

During his presentation, Partush will examine how AI and ML are being used to detect and prevent cyber attacks, and will unpack any new developments. He will also share how to incorporate AI/ML into existing security architecture, and explain the difference between application-based AI and ‘infusing’ AI into security.

In addition, Partush will cover investing internally in AI/ML skills to build bespoke security solutions, as well as how bad actors are using these technologies to boost and expand their attacks, and how the same technologies can be used to defend against this.
