Unpacking the true cost of downtime

The critical need to protect against unplanned downtime from the full range of potential failures cannot be overstated.
By Byron Horn-Botha, Business unit head, Arcserve Southern Africa.
Johannesburg, 21 Nov 2019

The causes of downtime can range from single application failures and temporary Internet service interruptions, to the obvious risks of natural disasters, ransomware and other malware.

In the case of application failure, the impact on business continuity will depend on the type of business and the particular application. For example, a highly transactional online business that experiences unplanned downtime of its database or transaction software could lose immense sums of money in less than an hour, depending on the time of day or the day of the week.

An IDC report examines the true cost of downtime and infrastructure failure, as well as DevOps adoption, and the numbers revealed are staggering:

  • For the Fortune 1 000, the average total cost of unplanned application downtime per year is $1.25 billion to $2.5 billion.
  • The average cost of an infrastructure failure is $100 000 per hour.
  • The average cost of a critical application failure per hour is $500 000 to $1 million.

Computer system failure at high-profile/global brand enterprises often serves as a wake-up call. The critical need to protect against unplanned downtime from the full range of potential failures – not just major disasters – cannot be overstated.

South African business owners are all too aware of the impact and cost of even short-term power or Internet outages that end up bringing the organisation to a grinding halt.

The impact of unplanned downtime

If the company provides 24/7 services and unplanned downtime would cause significant client distress, it will face both immediate, obvious downtime costs and hidden costs that reveal themselves down the line, regardless of the cause. A system failure that shuts down the ability to perform core business functions can result in hefty customer remediation costs.

There’s also a danger that system failure could put clients themselves at risk. If a software company suffers a malware attack that makes its flagship product unavailable for even a few hours, the costs of downtime will ripple through the business all the way to its clients.

Moreover, a system outage of any size can cause reputational damage that can’t be fixed by reimbursing clients for their expenses. In a situation where the outage was preventable, customers may question whether the company is a trustworthy partner.

Internally, quite apart from the obvious loss of productivity during downtime, system outages can damage staff morale, especially where employees have to respond directly to a host of frustrated customers.

In the case of global organisations working around the clock in multiple locations, there’s no longer a built-in window of time when unplanned downtime is acceptable, or when maintenance can tackle an outage without disrupting the business.

Recovery for all critical systems, applications and data needs to be instantaneous, which is challenging. When it comes to business continuity, determining priorities will help the enterprise develop the most effective, cost-efficient strategy possible. And that begins with evaluating which risks the company wants to protect itself against most actively, and which ones it’s comfortable dealing with reactively.

It is crucial to determine the compliance obligations for the sector in which the company operates. The industry may have regulations dictating the level of business continuity that must be maintained. Even where there are no formal regulations, the industry standard can be a guide. If competitors have invested in business continuity, current and potential clients will expect the firm to do the same.

The company must also consider what the value of the brand is to clients and what kind of service is expected. If customers depend on it for services that must be accessible at all times, it will have more at stake when it comes to business continuity planning.

Calculate the risk factors

In other words, what’s the likelihood of the business suffering a malware or ransomware attack, a natural disaster, or any other cause of system outages and unplanned downtime? Only once that is clear can the cost of downtime be estimated.

Not all data is created equal. Some databases or files can go offline for a few days without severely impacting anyone in the business. Calculate how much downtime can be tolerated for each asset that needs to be protected.
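As a rough illustration of this exercise, the expected annual cost of downtime for an asset can be sketched as likelihood of an outage multiplied by expected outage duration and cost per hour, then compared against how much downtime the business can tolerate. The asset names, probabilities and dollar figures below are purely hypothetical examples, not data from any report or business.

```python
# Illustrative sketch only: all asset names and figures are hypothetical.
def expected_annual_cost(outage_prob_per_year, expected_outage_hours, hourly_cost):
    """Rough expected yearly downtime cost: likelihood x duration x cost/hour."""
    return outage_prob_per_year * expected_outage_hours * hourly_cost

assets = [
    # (asset, P(outage in a year), hours per outage, cost per hour, tolerable hours)
    ("transaction database", 0.30, 4, 500_000, 1),
    ("internal file share",  0.50, 8,   2_000, 48),
]

for name, prob, hours, cost, tolerable in assets:
    exposure = expected_annual_cost(prob, hours, cost)
    # If a likely outage would exceed the tolerable window, protect proactively.
    priority = "protect proactively" if hours > tolerable else "handle reactively"
    print(f"{name}: expected annual cost ${exposure:,.0f} -> {priority}")
```

Even a back-of-the-envelope calculation like this makes trade-offs visible: the database warrants proactive protection, while the file share may be acceptable to handle reactively.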

Having determined that the company needs to implement stronger continuity plans, there are a few issues to examine. Most businesses aren’t going to be able to afford the highest level of protection for every asset.

Trade-offs may need to be made, in which case the business will definitely need to be clear on what it regards as non-negotiable.

In assessing new data protection capabilities, vendors and the solutions they offer will need to be scrutinised. For example, is the in-house IT team capable of customising a solution, and if not, does the vendor have the necessary skills?

Not all vendors have implementation capabilities and it may be necessary to hire an expert in that area, or invest in a more turnkey or flexible solution.

Ultimately, to avoid the damaging cost of downtime, investing in a business continuity plan is crucial. While no solution is a panacea, taking a vigilant and proactive approach can greatly reduce the risk.