AI now on cyber criminals' agenda

By Regina Pazvakavambwa, ITWeb portals journalist.
Johannesburg, 01 Mar 2018
Nicolas Reys, associate director and head of Control Risks' cyber threat intelligence team.

Artificial intelligence (AI) and its subsets will play a larger role in facilitating cyber attacks in the near future, say experts.

According to analysis from global risk consultancy firm Control Risks, the development of techniques that use artificial intelligence and related tools to enhance attack capabilities is increasingly on the agenda of cyber threat actors.

As more organisations begin to employ machine learning and artificial intelligence as part of their defences against cyber threats, hackers are recognising the need to advance their skills to keep up with this development, it says.

The use of AI is unlikely to become widespread soon, given the financial investment currently needed, says Nicolas Reys, associate director and head of Control Risks' cyber threat intelligence team. However, as more research is produced and AI technologies become more mature and more accessible to threat actors, this will evolve, he adds.

Similarly, Etienne Greeff, SecureData CTO, says there is no question AI will be used for cyber crime in the future.

The sad reality is that the AI process is more suited to adversarial uses than defensive uses, says Greeff. This is because, for AI and machine learning to work, a large number of data points is needed to train the algorithms, he explains.

"There is a lot of data of applications and Internet-facing infrastructure out there which provides a very rich training set together with vulnerability data. The number is in the order of millions. On the other hand, there isn't a lot of data to show how attackers work. This is in the order of thousands."

Meanwhile, Indi Siriniwasa, vice-president, Sub-Saharan Africa for Trend Micro, says that, like many advanced and innovative technological processes, machine learning can be leveraged both for beneficial enterprise purposes and for malicious activity.

"Sophisticated cyber criminals are continually on the lookout for the next big hacking strategy, and aren't shy about trying out new approaches to breach targets and infiltrate enterprises' IT assets and sensitive data."

Attack methods

Many Web sites and systems leverage CAPTCHA technology as a way to distinguish human users from bots or machine input, says Siriniwasa. However, in the age of machine learning, even these formerly tried-and-true access protections aren't impervious, he adds.

Alexey Malanov, malware expert at Kaspersky Lab, says it is very important to note that attackers can try to poison machine learning technology working on the side of defenders.

"Imagine that a machine learning model learns to distinguish malicious and benign files. Attackers can send a huge number of clean files very similar to his malicious one. The model will probably lose its generalisation ability."

Moreover, Control Risks says threat actors could use algorithms to generate spear-phishing campaigns in victims' native languages, expanding the reach of mass campaigns.

Similarly, larger amounts of data could be automatically gathered and analysed to improve social engineering techniques, and with it the effectiveness of spear-phishing campaigns, it adds.

Also, based on its assessment of the target environment, AI technology could tailor the actual malware or attack in order to be unique to each system it encounters along the way, says Control Risks.

This would enable threat actors to conduct vast numbers of attacks, each uniquely tailored to its victim, so that only bespoke mitigation or response would be effective for each infection, rendering traditional signature- or behaviour-based defence systems obsolete, it notes.

This isn't the first time machine learning has emerged as a way for hackers to break through, says Siriniwasa.

"In 2017, researchers used machine learning to support 98% accuracy to sidestep Google reCAPTCHA protections. This threat means enterprises will have to strengthen their security protections, particularly those that prevent botnet access on customer-facing systems."

He also points out that when hackers create malware, they don't just look to breach a business, but aim to remain within victims' systems for as long as possible. As a result, hackers can leverage machine learning to fly under the radar of security systems aimed at identifying and blocking cyber criminal activity.

"Organisations should be aware of the potential for these types of attacks to emerge in the course of 2018. Staying informed and being able to identify relevant emerging attacks, technologies and vulnerabilities is therefore just as important as being prepared in the event of an attack," says Reys.
