Security experts often refer to backups as the Achilles’ heel of cyber defence. They’re the safety net an organisation relies on during a crisis, yet they’re often misconfigured or overlooked, making them a prime target for cybercriminals.
In Rubrik Zero Labs’ ‘State of Data Security in 2025: A Distributed Crisis’ report, 74% of organisations had their backup or recovery systems compromised during attacks. And in over a third of cases, these systems were rendered unusable, leaving organisations without a way to recover. A separate study by Continuity looked at 10,000 storage and backup systems in 300 enterprise environments. On average, each environment had around 10 vulnerabilities affecting its storage and data protection infrastructure, half of which were high-risk and could be exploited to break in or cause serious damage.
Backups are the safety net, but recovery is the performance test.
Dean Wolson, Lenovo Infrastructure Group
When a weak spot is struck, even the strongest security posture can fall apart. That’s why backups aren’t just another layer of defence, like firewalls or endpoint protection; they can quickly become a single point of failure. At the same time, backups often have different retention requirements.
Some data might have to be stored for many years, due to data policy requirements, which makes it difficult to replace backup technologies. “You could almost say that backups are like marriage,” says Louis van der Westhuizen, solutions architect: DMS at Datacentrix. “It’s a long-term commitment and not something you would change regularly.” Backups shouldn’t be a tick-box exercise, or an afterthought. And, like any relationship, they require work. “The business’ recovery is only as good as the backups, and should they be compromised, corrupt or encrypted, this will have a huge impact on data recovery,” he says.
Managed service providers (MSPs) now offer Backup-as-a-Service (BaaS). According to Mordor Intelligence, the global BaaS market is projected to reach $33.18bn by 2030.
Companies need stronger protection without the overhead of managing it themselves. BaaS adds a level of control and consistency often missing from internal setups, and MSPs like Datacentrix build in safeguards for security, recoverability and data availability.
“Backups best practice helps ensure that your data is protected, recoverable and secure,” says Van der Westhuizen. And as cybercriminals focus more on backups, AI is being used to surface early warning signs. A sudden increase in backup size could point to large-scale file encryption. A drop in frequency might mean the system has stopped running. An unexpected policy change could indicate tampering. On their own, these events might be missed. Together, they point to something bigger – and AI will pick it up.
You’re handing it to the least experienced member. And in order to do the backups, they are a very powerful person. They have the ability to overwrite every file in the organisation.
W. Curtis Preston
In a backup environment, AI can track delays, flag unexpected changes and see patterns that point to possible compromise. “AI helps make sure backups don’t just run, but that they’re actually usable when something goes wrong,” says Van der Westhuizen.
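The kind of pattern-spotting Van der Westhuizen describes can be sketched as simple anomaly flags on backup job telemetry. This is a minimal illustration, not any vendor’s detection logic; the field names and thresholds are hypothetical:

```python
from statistics import mean, stdev

def flag_backup_anomalies(sizes_gb, expected_interval_h, gaps_h, threshold=3.0):
    """Flag simple early-warning signs in backup job history.

    sizes_gb: historical backup sizes in GB, oldest first (latest run last)
    expected_interval_h: scheduled hours between backup runs
    gaps_h: actual hours between consecutive runs
    Returns a list of human-readable warnings.
    """
    warnings = []
    history, latest = sizes_gb[:-1], sizes_gb[-1]
    # A sudden jump in backup size can indicate large-scale file encryption.
    if len(history) >= 2:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (latest - mu) / sigma > threshold:
            warnings.append(f"size spike: {latest}GB vs mean {mu:.1f}GB")
    # A gap far beyond the schedule may mean the system has stopped running.
    if gaps_h and max(gaps_h) > 2 * expected_interval_h:
        warnings.append("missed schedule: gap exceeds 2x expected interval")
    return warnings
```

Individually, each check is noisy; the value, as the article notes, comes from correlating several weak signals into one alert.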
It’s not automation for its own sake, but visibility that turns backups into part of the threat response. AI is also starting to change how recovery actually works. Many platforms now use AI to map out disaster recovery, deciding which systems come back first, when to trigger failover and how to test recovery under pressure. It should also decrease response time. “With the rise of AI and automation, we’re seeing a shift towards self-healing systems,” says Steve Porter, MD at Metrofile Cloud. “Backups must be designed with recovery in mind, not just storage. They’re your safety net, but unless they work when it matters, they’re just expensive archives.”
You could almost say that backups are like marriage. It’s a long-term commitment and not something you would change regularly.
Louis van der Westhuizen, Datacentrix
Confidence is also part of the problem. Many teams simply assume their backups are solid, but don’t regularly check if recovery points are intact or usable. “Backups are the safety net, but recovery is the performance test,” says Dean Wolson, general manager of Lenovo Infrastructure Group, Africa.
Veeam’s ‘Data Protection Trends Report 2024’ found that only 22% of organisations test their backups more than once a quarter. But as Wolson says, that’s starting to change. “Traditionally, backup and recovery were manual, time-consuming tasks. Today, AI and automation allow us to shift from reactive processes to proactive, intelligent operations.”
Backup options range from traditional on-premises systems to cloud-based services like Druva, Veeam or NetApp. What companies choose is based on their data volume and recovery needs. For instance, a small marketing business might back up 500GB of data daily to a cloud service, while a bank could manage petabytes of data with multiple daily backup cycles in different locations. The important considerations are backup frequency, storage location, and whether the backup is air-gapped or immutable to protect against cyber threats.
BACKUPS BEST PRACTICE
There’s a difference between having backups and knowing they’ll work when you need them. That’s why getting the basics right still matters. “The right backup strategy ensures business continuity, protects against ransomware and supports compliance,” says Porter. Here are his seven recommended best practices for backups:
1. Follow the 3-2-1 rule
Keep at least three copies of data on at least two types of storage media, with one copy stored off-site, such as in the cloud.
2. Automate and schedule backups
Automate backups to run regularly (daily or hourly depending on criticality) to eliminate human error and ensure consistent data protection. Use automated scheduling, with flexible policies tailored to the environment, including file-level, application and full system backups.
3. Encrypt data in-transit and at-rest
Use end-to-end encryption to protect sensitive data from interception or breach. All data handled must be encrypted during transfer and while stored, ensuring compliance with the PoPI Act and other regulatory frameworks.
4. Test and validate backups regularly
Conduct routine backup integrity checks and restore drills to ensure data can be recovered when needed. Automated backup validation and easy restore testing allow the business to quickly simulate disaster recovery scenarios.
5. Tier your backup strategy by criticality
Not all data needs the same level of protection. Define Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) for different data types. Customisable backup policies are best, as they let IT teams prioritise mission-critical workloads while optimising costs.
6. Ensure compliance and audit readiness
Maintain audit trails, retention policies and reporting to align with data governance and legal obligations. Consider solutions that include compliance-ready features, detailed logs and long-term archiving, ideal for industries with strict data regulations.
7. Use immutable backup where possible
Protect backups from ransomware by using immutable storage, or data that cannot be modified or deleted for a defined period. Immutable backup options are best to safeguard against cyberthreats and accidental deletion.
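The 3-2-1 rule from the list above is mechanical enough to verify in code. A minimal sketch, assuming a hypothetical record of where each backup copy lives:

```python
from collections import namedtuple

# Hypothetical description of one backup copy: where it is, what medium
# it sits on, and whether it is stored off-site.
BackupCopy = namedtuple("BackupCopy", ["location", "media_type", "offsite"])

def satisfies_3_2_1(copies):
    """Check the 3-2-1 rule: at least three copies of the data,
    on at least two types of storage media, with at least one off-site."""
    return (
        len(copies) >= 3
        and len({c.media_type for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

# Example inventory: primary disk, tape vault, and an off-site cloud copy.
inventory = [
    BackupCopy("primary NAS", "disk", False),
    BackupCopy("tape vault", "tape", False),
    BackupCopy("cloud bucket", "object storage", True),
]
```

Three disk copies in the same server room would fail this check, which is exactly the failure mode the rule is designed to prevent.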
THE IMMUTABILITY SPECTRUM
Not all backups are created equal and not all that claim to be immutable actually are. Immutability means a backup cannot be changed, deleted or encrypted during its retention period, but in practice, it’s a little more complicated. “Immutable should be a binary condition,” says author W. Curtis Preston, who has a long career designing data protection systems.
“It can’t be changed, it can’t be deleted, it can’t be encrypted, it can’t be modified, but there’s really a spectrum and that is the problem.” Some vendors advertise immutability while still allowing an admin to shorten retention windows or expire backups early. Others include options to reset protection flags with a privileged account or via backend access. Technically, the data is still there. Functionally, it is defenceless. The biggest shift is that attackers have learned exactly how and where to target backups. “They don’t have to go hit 20 different systems,” says Preston. “If they get to your backup system, all your data is there.” And because backups are designed for restoration, they also make for fast and convenient data theft. The goal is often double extortion.
Delete the backups, extract the data and leave the business with no way out. This is where true immutability matters. Some of the strongest protections are found in cloud environments that support compliance-grade object locking. These systems cannot be altered or deleted, even by an administrator, for the duration of the retention period. If that retention is set for 90 days, the backup will remain untouched for 90 days, no matter what. But that level of enforcement comes with trade-offs. “If it’s truly immutable,” says Preston, “you should not be able to blow it away. If you change your mind, you can’t change your mind.” Slightly less rigid models include hardened Linux repositories with immutability flags. These are common in platforms such as Veeam and require root access to remove protection manually. When implemented correctly, with automated patching and strict access control, they can offer strong protection against ransomware. If root access is compromised, however, that protection collapses. Append-only file systems are another option. They restrict changes to backup files, but may still allow deletion if permissions are misconfigured.
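The compliance-grade locking behaviour described above can be illustrated with a toy model: an object, once written, cannot be overwritten or deleted until its retention date passes, even by an administrator. This is a sketch of the semantics, not any vendor’s API:

```python
from datetime import datetime, timedelta, timezone

class ImmutableStore:
    """Toy model of compliance-mode object locking: no overwrite,
    no deletion and no shortening of retention until the lock expires."""

    def __init__(self):
        self._objects = {}  # key -> (data, retain_until)

    def put(self, key, data, retention_days):
        if key in self._objects:
            raise PermissionError("object is locked: overwrite denied")
        retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
        self._objects[key] = (data, retain_until)

    def delete(self, key, now=None):
        now = now or datetime.now(timezone.utc)
        _, retain_until = self._objects[key]
        if now < retain_until:
            # Even a privileged account cannot remove the object early.
            raise PermissionError("retention not expired: delete denied")
        del self._objects[key]
```

The trade-off Preston describes falls out of the model: if retention is set to 90 days, there is no code path that releases the data sooner, which is both the protection and the commitment.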
LITTLE ATTENTION
At the weakest end of the spectrum are backups stored on traditional file shares or mounted as drives with no restrictions. “It’s E:\backups,” says Preston, “which just screams, come delete me.” Despite years of guidance, these setups remain in use. Backup systems are still being deployed with default credentials, limited logging and poor isolation from production infrastructure. If an attacker gains access, there is little standing in the way. Preston also points to an ongoing problem with how backup is treated inside organisations. It is routinely neglected during patching, left out of security reviews and given little attention during infrastructure design. “It goes ignored from a cybersecurity perspective,” he says. “It doesn’t get put front of the line when we start talking about putting out patches, it literally just goes ignored.” When new platforms are rolled out, backup planning often comes last. And in many organisations, the job of managing backups is passed to whoever happens to be available. “You’re handing it to the least experienced member,” he says. “And in order to do the backups, they are a very powerful person. They have the ability to overwrite every file in the organisation.”
Every organisation should understand exactly how its backup system is protected and where those protections fall short. “Something is always better than nothing,” says Preston.
But when everything else has failed, the only thing that matters is whether your backup still exists.
* Article first published on brainstorm.itweb.co.za