Why it’s time to stop blaming staff for breaches

Security awareness training has been the industry's answer to human error for decades. But what if the industry has been asking the wrong question all this time?
By Tiana Cline, Contributor
Johannesburg, 23 Apr 2026

Security awareness training has become one of the most consistent line items in corporate budgets. Most companies run it quarterly; employees click through it, answer the questions and collect their certificates. And yet the breaches keep coming, and the post-incident reports point to the exact same thing: human error.

In a recent Forrester security survey, 97% of security decision-makers said their companies have a security awareness training programme, but it hasn’t delivered results. And according to Mimecast's ‘The State of Human Risk 2025’ report, 87% of organisations now train employees at least once a quarter, yet a third still name employee error as their top concern. Awareness training is definitely happening, but something else is wrong.

The usual response is more training – another campaign, another completion rate, another round of certificates. But, as Deryck Mitchelson, global CISO at Check Point Software Technologies, says, the industry has been asking the wrong question entirely. “I don't think humans are the weak link,” he says. “Technology needs to do a much better job of preventing 99.9% of phishing emails from ever reaching the inbox.” The problem is not that employees are failing the training; it’s that the training, and the technology behind it, are failing the employees.

One of the reasons Mitchelson holds this view relates to how much phishing has evolved. The emails arriving in inboxes today bear almost no resemblance to the poorly worded messages that defined the threat a decade ago. AI has changed all that: modern phishing campaigns are built on details scraped from social media profiles and public data. Today’s phishing attacks are personalised, professional and no longer sent by Nigerian princes looking for love.

Cybersecurity is not only a technical function. It’s a human one.

Aneka Botha, IPT

“The AI learns who you are,” Mitchelson says. “It's able to scrape the internet to see who you're connected with, what you've posted, what your hobbies are, and then the attacks become really personalised around things that are of interest to you. They look real.” Researchers from Harvard Kennedy School, using AI agents to automate spear phishing campaigns in 2024, found that personalised AI-crafted emails achieved a click-through rate of 54%, compared to 12% for standard phishing emails. “We've been able to prove that if you use AI-based, personalised phishing emails, one in two emails will be clicked. That's how effective personalisation is and that's what our training and simulations need to match,” says Mitchelson.

Aneka Botha, IPT

This is why expecting employees to reliably spot and avoid malicious emails is not a sound training strategy. At best, they become proficient at spotting simulated phishing mails. When attacks are this personalised, clicking on the wrong link is an inevitable outcome. Employees are also more likely to click a malicious link on their cellphones. “One click is all it takes,” says Mitchelson. When you’re working on a computer, there is space to pause, hover over a link and look more carefully. On a phone, the instinct is to scroll and tap, often in moments of distraction or pressure. A hospital consultant catching up on email between patient appointments, for example, is not in the right headspace to inspect a domain name. “They don't have time to stop and think,” says Mitchelson. “They need to rely on technology to step up to actually make it safe.”

If it's just generic training, it will always be a tick box. I don't like tick boxes. Cyber training has to be real and relevant to that company or industry, or people just won't buy into it.

Deryck Mitchelson, Check Point Software Technologies

Mimecast research estimates that 8% of employees are responsible for 80% of security incidents. A blanket programme that treats everyone the same dilutes its impact. Targeted, data-driven training built around who is actually clicking, on what and why, is the difference between a programme that changes behaviour and one that merely generates certificates. If threat actors are using AI to build targeted attacks, awareness training needs to match that level of specificity. A phishing simulation sent to a CFO should look different from the one sent to someone in HR. Threat actors already know this, but training programmes are still playing catch-up.

If technology is doing its job, security awareness training does not need to be the last line of defence. But most workplaces are still treating it as exactly that, and are delivering it in a way that is almost designed to fail. Forrester’s ‘Future of Work 2024’ research found that only 19% of employees say they have received formal AI training at work, yet 47% of leaders believe they are regularly providing it. “If it's just generic training, it will always be a tick box,” says Mitchelson. “And I don't like tick boxes.”

THE 8% PROBLEM

One of the more counterintuitive findings in Mimecast's ‘The State of Human Risk 2025’ report is that the risk is far more concentrated than most organisations assume. While businesses focus on training the entire workforce, an estimated 8% of employees are responsible for 80% of security incidents. These are not necessarily careless people; they tend to be in high-pressure roles with high email volumes, and are more exposed to the threat than most. A one-size-fits-all awareness programme is, almost by definition, spending most of its budget on the 92% of people who are not the problem. Identifying and supporting the highest-risk individuals, with tailored training, more targeted simulations and closer monitoring, is where the real return on security investment lies. “Cybersecurity is not only a technical function,” says Aneka Botha. “It’s a human one.”

“Cyber training has to be real and relevant to that company or industry, or people just won't buy into it,” he says. His approach is an 80/20 split. Around 80% of any training programme should cover the universal fundamentals, such as password hygiene, multifactor authentication and recognising suspicious activity. The remaining 20% is where the real work happens, with industry-specific, role-relevant content that connects the abstract concept of cyber risk to something an employee genuinely cares about. “When training fits naturally into the workday, people are more likely to engage with it,” says Aneka Botha, HR lead at cyber managed service provider IPT. “Cybersecurity training sticks when it becomes a habit rather than an event.” Her recommendation is short, consistent bursts of training every few weeks.

Mitchelson says one of the most counterproductive dynamics in a business’s security culture is the instinct to punish people who make mistakes. Accountability matters, but a punitive approach actively undermines the ‘human firewall’ culture organisations need to build. “Rather than reprimanding people for clicking on phishing links, we should be rewarding the good behaviour,” says Mitchelson. A person who clicks on a suspicious link and stays quiet, hoping nothing comes of it, is far more dangerous than someone who clicks, recognises what happened and flags it immediately with IT. In containing a breach, speed of detection is critical, and a culture of fear slows that down. “Companies I see doing the best are those that pivot from negativity to [where] cyber is something that's positive,” says Mitchelson. “And that message needs to come from the top.”

Deryck Mitchelson, Check Point Software Technologies

Training employees to behave securely only goes so far if the technology they use every day makes that secure behaviour harder, not easier. When security feels like an obstacle, people find ways around it. Cumbersome MFA processes get disabled or ignored and complex password policies result in passwords written on sticky notes. “If you bring in security that's complex and adds overhead, you actually weaken security,” says Mitchelson. “The best security is the kind that just works.” Biometrics are a good example of where usability and security can converge. A fingerprint scanner or facial recognition on a mobile device adds no meaningful friction to the working day. It’s faster than typing a password, requires no separate authentication app and is significantly harder to compromise.

“In 2026, why are businesses still using passwords when biometrics can be so much more secure?” asks Mitchelson. That said, not all biometrics are created equal. Voice recognition, which is still used in some banking applications for authentication, is no longer reliable thanks to deepfakes. “I’ve done talks where I actually sample my voice, and it speaks more eloquently than I do,” laughs Mitchelson. “We need to understand where biometrics work and where they've already been made weak.”

Getting the balance right is the challenge. Technology can filter threats and data can target training, but neither works without the right culture underneath. “Cybersecurity awareness should move beyond being seen as an IT issue,” says IPT’s Botha. “It is fundamentally about behaviour.” This is something Forrester calls human risk management, which it touts as a replacement for legacy awareness and training solutions that don’t instil the cybersecurity culture organisations desire. But behaviour does not exist in a vacuum, and is shaped by the tools people are given and the environment they work in. “We all make mistakes,” says Mitchelson. “But if you’ve got an open organisation that wants to help and improve, it's easy to own up to these things. That's the type of company I want to belong to, one where everyone thinks of themselves as a human firewall.”

* Article first published on www.itweb.co.za
