
What AI can (and can't) do for organisations' cyber resilience

By Brian Pinnock, cyber security expert at Mimecast

Johannesburg, 31 May 2022

Technologies such as artificial intelligence (AI), machine learning, the internet of things and quantum computing are expected to unlock unprecedented levels of computing power.

These so-called fourth industrial revolution (4IR) technologies will power the future economy and bring new levels of efficiency and automation to businesses and consumers.

AI in particular holds enormous promise for organisations battling a scourge of cyber attacks. Over the past few years, cyber attacks have been growing in volume and sophistication.

The latest data from Mimecast's State of Email Security 2022 report found that 94% of South African organisations were targeted by e-mail-borne phishing attacks in the past year, and six out of every 10 fell victim to a ransomware attack.

Companies seeing potential of AI

To protect against such attacks, companies are increasingly looking to unlock the benefits of new technologies. The market for AI tools for cyber security alone is expected to grow by $19 billion between 2021 and 2025.

Locally, adoption of AI as a cyber resilience tool is also growing. Nearly a third (32%) of South African respondents in Mimecast's latest State of Email Security 2022 report were already using AI or machine learning – or both – in their cyber resilience strategies. Only 9% said they currently have no plans to use AI.

But is AI a silver bullet for cyber security professionals looking for support with protecting their organisations?

Where AI shines – and where it doesn't

AI should be an essential component of any organisation's cyber security strategy. But it's not an answer to every cyber security challenge – at least not yet. The same efficiency and automation gains that organisations can get from AI are available to threat actors too: AI is a double-edged sword that can aid both organisations and the criminals attempting to breach their defences.

Used well, however, AI is a game-changer for cyber security. With the correct support from security teams, AI tools can be trained to help identify sophisticated phishing and social engineering attacks, and defend against the emerging threat of deepfake technology.

In recent years, AI has made significant advances in analysing video and audio to identify irregularities more quickly than humans can. For example, AI could help combat the rise in deepfake threats by quickly comparing a video or audio message against known original footage to detect whether the message was generated by splicing together and manipulating a number of clips.
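As a simplified illustration of this comparison idea – a generic sketch, not any product's actual method – a defender could compare perceptual hashes of an incoming clip's frames against frames extracted from the known original footage, and flag frames with no close match. The libraries (Pillow, imagehash), the pre-extracted PNG frames and the distance threshold here are all illustrative assumptions:

```python
# Illustrative sketch only: flag frames of an incoming clip that have no
# close perceptual-hash match in the known original footage. Pillow and
# imagehash, pre-extracted PNG frames and the threshold are assumptions.
from pathlib import Path

import imagehash                     # pip install imagehash pillow
from PIL import Image

HAMMING_THRESHOLD = 10               # assumed tolerance for "same frame"

def hash_frames(frame_dir: str) -> list[imagehash.ImageHash]:
    """Perceptual hash of every extracted frame in a directory."""
    return [imagehash.phash(Image.open(p))
            for p in sorted(Path(frame_dir).glob("*.png"))]

def suspicious_frames(clip_dir: str, reference_dir: str) -> list[str]:
    """Frames of the clip whose nearest reference frame is too far away."""
    reference = hash_frames(reference_dir)
    flagged = []
    for p in sorted(Path(clip_dir).glob("*.png")):
        h = imagehash.phash(Image.open(p))
        if min(h - ref for ref in reference) > HAMMING_THRESHOLD:
            flagged.append(p.name)   # likely spliced-in or manipulated
    return flagged
```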

AI is, however, susceptible to subversion by attackers – a drawback of the technology that security professionals must stay alert to. Since AI systems are designed to automatically 'learn' and adapt to changes in an organisation's threat landscape, attackers may employ novel tactics to manipulate the algorithm and undermine its ability to help protect against attack.
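One well-documented tactic of this kind is data poisoning: feeding the learning system mislabelled samples until it 'learns' to accept malicious content. The toy word-count filter below is a deliberately simplified, assumed example of the effect, not any real product's model:

```python
# Toy illustration (an assumed example, not a real filter) of data
# poisoning: after an attacker injects phishing-like text labelled as
# legitimate, a naive learning filter stops flagging that text.
from collections import Counter

def train(samples):
    """samples: list of (text, label) pairs; returns per-label word counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Score text by word overlap with each label's training data."""
    words = text.lower().split()
    spam = sum(counts["spam"][w] for w in words)
    ham = sum(counts["ham"][w] for w in words)
    return "spam" if spam > ham else "ham"

clean = [("verify your password now", "spam"),
         ("quarterly report attached", "ham")]
print(classify(train(clean), "verify your password"))     # -> spam

# The attacker repeatedly submits the same phrasing labelled "ham":
poisoned = clean + [("verify your password now", "ham")] * 5
print(classify(train(poisoned), "verify your password"))  # -> ham
```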

Shielding users from tracking by threat actors

A standout use of AI is its ability to shield users against location and activity tracking. Trackers are typically used by marketers to refine how they target customers, but threat actors also use them for nefarious purposes.

They embed trackers in e-mails or other software that reveal the user's IP address, location and engagement with e-mail content, as well as the device's operating system and browser version.
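To see how little an attacker needs, the sketch below shows how a generic tracking pixel works; the endpoint, port and token scheme are illustrative assumptions, not any real tracker's design:

```python
# Sketch of a generic e-mail tracking pixel (assumed endpoint and port,
# not any real tracker's design). When the mail client fetches the remote
# image, the server learns the recipient's IP address (and hence rough
# location), the open time, and the User-Agent (OS and browser version).
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF, the classic "invisible" tracking image.
PIXEL = bytes.fromhex(
    "47494638396101000100800000000000ffffff"
    "21f90401000000002c000000000100010000"
    "02024401003b")

class TrackerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Embedded in the mail as e.g.:
        #   <img src="http://tracker.example/open?id=recipient-123">
        print(f"{datetime.now(timezone.utc).isoformat()} open: "
              f"ip={self.client_address[0]} "
              f"ua={self.headers.get('User-Agent')} path={self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TrackerHandler).serve_forever()
```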

By combining this data with user data gained from data breaches – for example, a breach at a credit union or government department where the user's personal information was leaked – threat actors can develop hugely convincing attacks that could trick even the most cyber-aware users.

Tools such as Mimecast's newly released CyberGraph can protect users by limiting threat actors' intelligence gathering. The tool replaces trackers with proxies that shield a user's location and engagement levels. This keeps attackers from understanding whether they are targeting the correct user, and limits their ability to gather essential information that is later used in complex social engineering attacks.
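The sketch below shows the general shielding technique – rewriting remote image references so they load via a proxy – under assumed names; it is not Mimecast's implementation. A real gateway would also rewrite links and fetch content through a pool of addresses, but the principle is the same: the tracker learns about the proxy, not the person.

```python
# Sketch of the generic shielding technique (not Mimecast's actual
# implementation): rewrite remote image URLs in inbound HTML mail so they
# load via a proxy. The tracker then records the proxy's IP address and
# activity, not the recipient's. The proxy host and URL scheme are assumed.
import re
from urllib.parse import quote

PROXY = "https://imgproxy.example.com/fetch?url="   # assumed proxy endpoint

IMG_SRC = re.compile(r'(<img\b[^>]*\bsrc=")(https?://[^"]+)(")', re.IGNORECASE)

def rewrite_trackers(html: str) -> str:
    """Point every remote <img> in the message body at the proxy."""
    return IMG_SRC.sub(
        lambda m: m.group(1) + PROXY + quote(m.group(2), safe="") + m.group(3),
        html)

body = '<p>Hi</p><img src="http://tracker.evil.example/open?id=u-123">'
print(rewrite_trackers(body))
# <p>Hi</p><img src="https://imgproxy.example.com/fetch?url=http%3A%2F%2F...">
```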

For example, a criminal may want to break through a financial institution's cyber defences. They send an initial, innocuous e-mail to an employee – one with no real content – simply to confirm they are targeting the right person and to establish that person's location. The user thinks nothing of it and deletes the e-mail. But if that employee is travelling for work, the tracker reveals their destination, and the criminal can adapt the attack by mentioning the location to create an impression of authenticity.

Similar attacks could target hybrid workers, since many employees these days spend a lot of time away from the office. If a criminal can glean information from the trackers they deploy, they could develop highly convincing social engineering attacks that could trick employees into unsafe actions. AI tools provide much-needed defence against this form of exploitation.

Empowering end-users

Despite AI's power and potential, it is still vitally important that every employee within the organisation is trained to identify and avoid potential cyber risks.

Nine out of every 10 successful breaches involve some form of human error. More than 80% of respondents in the latest State of Email Security 2022 report also believe their company is at risk from inadvertent data leaks by careless or negligent employees.

AI solutions can guide users by flagging potentially suspicious e-mail addresses, based on factors such as whether anyone in the organisation has ever engaged with the sender or whether the domain is newly created. This helps employees make an informed decision on whether to act on an e-mail.
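The kind of signals involved can be sketched as simple checks. The function below is an assumed illustration – the feature names, the 30-day threshold and the example domains are invented – not any vendor's model:

```python
# Sketch of the kind of checks such warnings can rest on. The feature
# names, the 30-day threshold and the example domains are illustrative
# assumptions, not any vendor's model.
from datetime import date

def sender_risk_signals(sender_domain: str,
                        known_domains: set[str],
                        domain_created: date,
                        today: date) -> list[str]:
    """Return human-readable warnings to show the user before they act."""
    warnings = []
    if sender_domain not in known_domains:
        warnings.append("No one in your organisation has engaged "
                        "with this sender before.")
    if (today - domain_created).days < 30:
        warnings.append("The sender's domain was registered "
                        "less than 30 days ago.")
    return warnings

# A look-alike domain, never seen before and registered 11 days earlier:
print(sender_risk_signals("m1mecast-support.com", {"mimecast.com"},
                          date(2022, 5, 20), date(2022, 5, 31)))
```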

But AI relies on data and is not completely foolproof. Regular, effective cyber awareness training is still needed to give employees knowledge of and insight into common attack types, helping them identify potential threats, avoid risky behaviour and report suspicious messages so that other end-users don't fall victim to similar attacks.

However, fewer than a third of South African companies provide ongoing cyber awareness training, and one in five provide such training only once a year or less often.

To ensure AI – and every other cyber security tool – delivers on its promise to increase the organisation's cyber resilience, companies should prioritise regular and ongoing cyber awareness training.

Brian Pinnock will be discussing how AI and ML fit into an organisation's defensible cyber security strategy at this year's ITWeb Security Summit. IT decision-makers can learn how to ensure the implementation of security solutions is not just a tick-box exercise, but a defensible strategy that shows meaningful impact and lowers risk for the organisation. Visit the stand and chat to the team about Mimecast's newest offering, CyberGraph, which uses AI to protect against the most evasive and hard-to-detect e-mail threats, limiting attacker reconnaissance and mitigating human error.

Mimecast is the Urban Café sponsor of the annual ITWeb Security Summit 2022 to be held at Sandton Convention Centre in Sandton, Johannesburg on 31 May and 1 June 2022 and a Silver sponsor at Century City Conference Centre, Cape Town on 6 June 2022. Now in its 17th year, the summit will again bring together leading international and local industry experts, analysts and end-users to unpack the latest threats. Register today.
