Is your company prepared for large-scale data loss?

Backups, and business continuity more generally, have never been more vital; all the more reason to get them right.
By Byron Horn-Botha, business unit head, Arcserve Southern Africa.
Johannesburg, 05 Mar 2024

Up to 94% of companies that suffer a severe data loss never fully recover: half close within two years and 43% never reopen. The statistics are even bleaker for smaller businesses: almost 70% close within a year of losing a large amount of data.

Cyber crime is primarily to blame for these data losses, and South Africa has the dubious honour of being the most targeted African country when it comes to ransomware and e-mail attacks: more than half of organisations in the region suffered a ransomware attack in 2022.

Unsurprisingly, CSIR research indicates companies intend to increase their already significant cyber security spending by 22%.

All well and good, but most cyber security commentators warn that falling victim to some form of cyber crime is a matter not of "if" but "when". Given that cyber crime almost inevitably involves the loss of data, it is evident that recovering from an attack is even more important than protecting against it.

In short, data backups that can be quickly and reliably restored are vital to any organisation's ability to survive a significant data loss. In effect, backups should form part of a comprehensive business continuity and cyber security strategy.

Based on my interactions with clients across sectors, it's clear that many organisations are not adequately prepared for the almost inevitable cyber attack resulting in data loss.

As anyone who has lived through such an event can attest, a crisis is not the time to discover that backups are faulty or corrupt, or that there are no processes to bring critical servers back online promptly and reduce the business impact.

Like any mission-critical, complex process, recovering rapidly from data loss requires focus and a commitment to getting the basics right. Doing so turns a seemingly cumbersome task into one that is easy, repeatable and transparent to all involved. Here are some of the key points to think about:

Develop a business continuity plan: This is a blueprint for how the company can deal with any disaster, from a fire to a catastrophic ransomware attack. Ensuring that data is backed up, that the copies are proven reliable through continuous testing and that they can be used to bring critical systems back online rapidly is integral to the wider business continuity plan.

Many companies make the fundamental mistake of developing a business continuity plan and then filing it away. On the contrary, it needs to be a living document constantly being improved and widely disseminated across the company. Everybody needs to know what the plan is and their role in it.

Classify the data: Moving from the broad sweep of the business continuity plan to the narrower area of backups, it's vital to classify data into categories (gold, silver and bronze, for example). Storage is expensive, and not all data is of equal importance. For instance, some data needs to be retained for a longer period than other data.
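To make this concrete, here is a minimal sketch of how such a tiering scheme might be expressed in code. The tier names, backup intervals and retention periods below are purely illustrative, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class DataTier:
    name: str
    backup_interval_minutes: int  # how often this tier is backed up
    retention_days: int           # how long copies must be kept

# Illustrative values only; real intervals and retention periods must come
# from the business continuity plan and the applicable regulations.
TIERS = {
    "gold":   DataTier("gold",   backup_interval_minutes=15,   retention_days=2555),
    "silver": DataTier("silver", backup_interval_minutes=240,  retention_days=365),
    "bronze": DataTier("bronze", backup_interval_minutes=1440, retention_days=90),
}

def classify(dataset: str, tier: str) -> DataTier:
    """Attach a dataset to a tier; unknown tiers are rejected outright."""
    if tier not in TIERS:
        raise ValueError(f"unknown tier: {tier}")
    return TIERS[tier]

print(classify("customer-transactions", "gold"))   # hypothetical dataset names
print(classify("marketing-assets", "bronze"))
```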

Another important consideration is that some data is so critical that losing even 20 minutes of it is undesirable, which means it needs to be backed up frequently (every 15 minutes, for example), with the backups available quickly, or, in a high-availability scenario, kept on a standby system ready to spin up the mission-critical workload. In business continuity jargon, data classification is all about the recovery time objective (RTO) and the recovery point objective (RPO). These parameters need to be agreed upon and integrated into the business continuity and data classification plans.
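In practice, checking an RPO is simple arithmetic: how old can the newest backup be before a restore loses more data than the business agreed to tolerate? The sketch below illustrates this under assumed figures (a 20-minute RPO and hypothetical backup timestamps):

```python
from datetime import datetime, timedelta
from typing import Optional

def meets_rpo(last_backup: datetime, rpo_minutes: int,
              now: Optional[datetime] = None) -> bool:
    """True if restoring the newest backup would lose no more data
    than the agreed recovery point objective allows."""
    now = now or datetime.utcnow()
    worst_case_loss = now - last_backup
    return worst_case_loss <= timedelta(minutes=rpo_minutes)

# A backup taken 14 minutes ago comfortably meets a 20-minute RPO ...
print(meets_rpo(datetime.utcnow() - timedelta(minutes=14), rpo_minutes=20))  # True
# ... but one missed run later, 35 minutes of data is at risk and the RPO is breached.
print(meets_rpo(datetime.utcnow() - timedelta(minutes=35), rpo_minutes=20))  # False
```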

In many industries, such as financial services, there are regulations about how long certain types of data must be retained. Data classification must thus consider the applicable governance framework regarding retention and data loss.

It is important to note that once the framework has been established, any new services that are added must be categorised within it accordingly.

Design the backup plan based on the business continuity plan: Once the continuity plan is developed and the data classification completed, the backup solution itself must be designed. A key element to consider here is how much of the environment needs high availability.

High availability exists to act as a standby when a critical production system goes down: it serves as a conduit while production is brought back online. High-availability systems must also provide reliable crossover to backup systems, and failures must be detected as they occur so they can be rectified.

All of this adds to costs, which demonstrates why the basic task of classifying the data is so critical. Enabling e-mail alerting allows for close to real-time notification; if someone is on standby, they can action the alerts where necessary. Depending on how the backup environment is set up, this can be the most cost-effective method.
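As a rough illustration of that detect-and-alert loop, the sketch below polls a service and e-mails a standby operator on failure. The host name, port and SMTP relay are hypothetical placeholders; a production setup would normally rely on the backup product's own monitoring:

```python
import smtplib
import socket
from email.message import EmailMessage

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Crude health check: can we open a TCP connection to the service?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def send_alert(subject: str, body: str) -> None:
    """E-mail the person on standby so the alert can be actioned promptly."""
    msg = EmailMessage()
    msg["From"] = "backup-monitor@example.com"    # hypothetical addresses
    msg["To"] = "standby-team@example.com"
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical SMTP relay
        smtp.send_message(msg)

# Check a (hypothetical) production database; alert as soon as a failure is seen.
if not is_reachable("prod-db.internal", 5432):
    send_alert("Production database unreachable",
               "prod-db.internal:5432 failed its health check; "
               "consider failing over to the standby system.")
```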

Ensure the company is following best practices: Business continuity and backup/disaster recovery are well documented, with a large body of best practices available. There is no need to reinvent the wheel.

For example, when it comes to backups, the so-called 3-2-1 rule is relevant, but its newer refinement, the 3-2-1-1-0 rule, is even more so. It calls for three copies of the data on two different media, with one copy offsite and one copy air-gapped (on immutable storage, tape or in the cloud), while the zero refers to data consistency: backups verified to contain zero errors.
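One way to keep the rule honest is to check the backup inventory against it programmatically. The sketch below is a minimal, hypothetical validator; the inventory records are invented for illustration:

```python
def satisfies_3_2_1_1_0(copies: list) -> bool:
    """Check an inventory of backup copies against the 3-2-1-1-0 rule:
    >= 3 copies, >= 2 media types, >= 1 offsite, >= 1 air-gapped/immutable,
    and 0 verification errors across all copies."""
    enough_copies = len(copies) >= 3
    enough_media  = len({c["media"] for c in copies}) >= 2
    offsite       = any(c["offsite"] for c in copies)
    air_gapped    = any(c["air_gapped"] for c in copies)
    zero_errors   = all(c["verify_errors"] == 0 for c in copies)
    return all([enough_copies, enough_media, offsite, air_gapped, zero_errors])

# A hypothetical inventory: local disk, offsite cloud, and an immutable tape copy.
inventory = [
    {"media": "disk",  "offsite": False, "air_gapped": False, "verify_errors": 0},
    {"media": "cloud", "offsite": True,  "air_gapped": False, "verify_errors": 0},
    {"media": "tape",  "offsite": True,  "air_gapped": True,  "verify_errors": 0},
]
print(satisfies_3_2_1_1_0(inventory))  # True
```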

A word about offline storage such as tape: the key point here is that the backup is not accessible via any network and is therefore physically impervious to ransomware attacks. Similarly, if cloud immutability or on-premises immutable storage is used, the copy cannot be altered or deleted.

Automated testing is a must: Any business continuity professional will tell you that the key is to regularly test the organisation's ability to recover from a disaster, including its data and systems.

Testing will ensure not only that the plan itself works but also that everybody knows the role they must play. Automated testing, such as full-system recovery tests, is recommended at larger scale and enables the company to significantly scale down manual testing.
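A skeletal automated restore test might look like the sketch below: restore the latest backup into an isolated sandbox and verify a checksum. The restore step is simulated here with a file copy; a real test would drive the backup product's own restore API or CLI:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum used to confirm the restored data matches what was backed up."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def test_restore(backup_file: Path, sandbox: Path, expected_sha256: str) -> bool:
    """Restore into an isolated sandbox and verify integrity.
    The copy below is a stand-in for a real restore operation."""
    sandbox.mkdir(parents=True, exist_ok=True)
    restored = sandbox / backup_file.name
    restored.write_bytes(backup_file.read_bytes())  # simulated restore
    ok = sha256_of(restored) == expected_sha256
    print(f"Restore test {'passed' if ok else 'FAILED'} for {backup_file.name}")
    return ok
```

Scheduled regularly (nightly, for instance), a test like this turns "are the backups good?" from a hope into a routinely answered question.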

As I've said, you never want to be blindsided during a crisis. In our digital world, it's vital that businesses can recover quickly from data loss and get back to being operational. Getting the basics right is the first and non-negotiable step.
