
Hyper-availability: Hype or mission-critical tactic?

Unpacking hyper-availability and what it really means to businesses, as downtime for even minutes could cause irreparable damage to revenue and productivity.
By Driaan Odendaal
Johannesburg, 08 Oct 2019

I think we are all worn out by the interminable buzz phrases for which the technology sector is notorious. However, when the term describes an essential operational goal for every modern business, it is another matter.

There is a fair amount of hype and coverage in the business technology press around hyper-availability, and with good reason: in a globalised world, businesses need to operate 24/7.

Every company has systems and applications that must remain always on, and for these, backup and recovery is no longer good enough. These are the applications and systems that store proprietary IP, keep e-commerce sites and air traffic control running, keep logistics and ERP tools working, and make financial transactions possible.

Downtime for even minutes could cause irreparable damage to revenue and productivity. 

Change your thinking

To protect these systems and applications, organisations must change their approach from backup to continuous data protection: moving from managing recovery time and recovery point objectives (RTOs/RPOs) to never needing to recover at all.

It’s important to understand the concept of availability; the term is often used, inappropriately, to describe technology that doesn’t deliver on the promise. So, what does true application and system availability mean?

Perhaps it’s easier to start by explaining what it is not.

For example, a phrase such as "continuous availability for the modern enterprise" may not deliver what the company thinks it has bought into.


Most solutions say they can support the modern IT environment with always-on availability or continuous uptime, but the reality is they’re often snapshot-based backups that aren’t intended to provide continuous availability. 

While this type of technology may be great for many systems and applications, the company is still forced to deal with RTOs and RPOs, and is still doing backup and recovery rather than preventing downtime in the first place.
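To put illustrative numbers on that: if snapshots are taken hourly and a server fails just before the next one runs, close to an hour of data is simply gone (the RPO), and the business still has to wait out the restore itself (the RTO) before work can resume.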

For critical operations that cannot be disrupted, look for a solution with a journal-based process that replicates data in real time at the file system level, covering files and folders, applications, and full physical or virtual systems. Combined with heartbeat-powered automatic failover, the organisation need never worry about recovery time or data loss again.
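As a rough illustration of the journal-based idea, and not any specific vendor's implementation, the Python sketch below records each file change in an append-only journal and immediately mirrors it to a replica directory. The file names and paths are hypothetical.

```python
# Minimal sketch of journal-based, real-time replication: every change to a
# protected file is written to an append-only journal and applied to a standby
# copy straight away, so the replica never lags by more than one write.
import json
import shutil
import time
from pathlib import Path

JOURNAL = Path("replica/journal.log")   # append-only change journal (assumed location)
REPLICA_DIR = Path("replica/files")     # continuously updated standby copy


def replicate_write(source_file: Path) -> None:
    """Journal a change to source_file and mirror it to the replica in real time."""
    REPLICA_DIR.mkdir(parents=True, exist_ok=True)
    JOURNAL.parent.mkdir(parents=True, exist_ok=True)

    # 1. Record the change in the journal first, so the replica can always be
    #    rebuilt by replaying the journal from the beginning.
    entry = {"ts": time.time(), "path": str(source_file), "size": source_file.stat().st_size}
    with JOURNAL.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

    # 2. Apply the change to the standby copy immediately: there is no snapshot
    #    window, so there is effectively nothing to "recover" later.
    shutil.copy2(source_file, REPLICA_DIR / source_file.name)


if __name__ == "__main__":
    demo = Path("orders.db")
    demo.write_text("order-1001\n")   # stand-in for a real application write
    replicate_write(demo)
    print("replica holds:", (REPLICA_DIR / demo.name).read_text().strip())
```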

Keep downtime to a minimum

Companies should know that such a solution may not cover every environment or support application-level replication. If proprietary systems or critical applications are running on physical servers, the company will need an alternative to ensure availability during a disruption. That may mean a duplicate production environment, which can increase costs immensely.

Some “high-availability” solutions require manual failover, increasing time from detection to mitigation. This means it’s not high-availability – it’s just replication with a short RPO. Moreover, some only support virtual environments with VM-level replication, so the organisation is left on its own to handle application and/or system-level replication for other environments.

This not only leaves the business exposed, but often adds complexity and significantly more cost.

The business would do better to look for an agnostic solution that can protect applications as well as physical and virtual servers, and which offers high availability locally or in remote locations, including the cloud.

Asynchronous replication technology is needed to guarantee the business remains fully operational during an outage. This needs to be combined with heartbeat-triggered automatic failover for continuous data protection of applications and systems on-premises, at remote sites and in the cloud.
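To make the heartbeat idea concrete, here is a simplified Python sketch that assumes a successful TCP connection to the primary's service port counts as a heartbeat. The host names, port and thresholds are hypothetical, and real products add fencing and split-brain safeguards that are omitted here.

```python
# Heartbeat-triggered automatic failover, sketched: probe the primary at a
# fixed interval and promote the standby once several consecutive beats are
# missed, with no operator in the loop.
import socket
import time


def heartbeat_ok(host: str, port: int, timeout_s: float = 1.0) -> bool:
    """Count a successful TCP connection to the service port as a heartbeat."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False


def promote_standby(standby: str) -> None:
    """Stand-in for the real promotion step (repointing DNS, a VIP or clients)."""
    print(f"Failover: {standby} is now the active node")


def monitor(primary: str, standby: str, port: int = 5432,
            interval_s: float = 2.0, max_missed: int = 3) -> None:
    missed = 0
    while True:
        if heartbeat_ok(primary, port):
            missed = 0                       # healthy beat, reset the counter
        else:
            missed += 1                      # another consecutive missed beat
            if missed >= max_missed:
                promote_standby(standby)     # automatic, no manual intervention
                return
        time.sleep(interval_s)


if __name__ == "__main__":
    monitor("primary.example.internal", "standby.example.internal")
```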

Beware of native replication tools

These tools are each focused on a specific application or environment. That sounds fine until an enterprise with 100 applications finds itself forced to use 100 different tools to manage them, while its proprietary applications are not covered at all.

There may also be restrictions depending on whether the environment is virtual or physical, not all the tools are user-friendly, and licence upgrades may break the budget.

When the environment needs to be tested, or production has to be moved temporarily for planned downtime, the process can be slow and painful, and application experts are needed to monitor the entire exercise of running multiple applications in the disaster recovery site.

Companies can eliminate the high cost and headaches with a completely agnostic solution that works across dissimilar hardware and supports one-to-one, many-to-one and one-to-many replication. It can be deployed and managed from a browser-based console with SLA reporting and real-time application and server monitoring, so performance can be examined at a glance.
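For illustration only, the short Python sketch below models those three topologies with hypothetical server names and prints the shape of each pairing; it is not a representation of any particular product's console.

```python
# Illustrative replication topologies: one-to-one, one-to-many and many-to-one.
from collections import defaultdict

# Each entry maps a protected source to a replica target that receives its journal.
replication_pairs = [
    ("erp-prod", "erp-dr"),                  # one-to-one: local standby
    ("ecommerce-db", "ecommerce-dr-cloud"),  # one-to-many: same source also goes...
    ("ecommerce-db", "ecommerce-dr-branch"), # ...to a second, remote target
    ("branch-a-files", "hq-consolidated"),   # many-to-one: several branches...
    ("branch-b-files", "hq-consolidated"),   # ...funnel into a single target
]

targets_per_source = defaultdict(list)
sources_per_target = defaultdict(list)
for source, target in replication_pairs:
    targets_per_source[source].append(target)
    sources_per_target[target].append(source)

for source, targets in targets_per_source.items():
    shape = "one-to-many" if len(targets) > 1 else "one-to-one"
    print(f"{source}: {shape} -> {', '.join(targets)}")
for target, sources in sources_per_target.items():
    if len(sources) > 1:
        print(f"{target}: many-to-one <- {', '.join(sources)}")
```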

A survey conducted by Arcserve and MayHill Strategies in September 2018 found that 93% of IT decision-makers could tolerate only minimal data loss from critical business applications, with 50% stating they have less than one hour to recover before it starts impacting revenue.

But the consequences of unplanned downtime don’t just hit the IT department. In most cases, the impact is far-reaching: damaged business reputation, lost customers and compliance issues arising from compromised operations. Frustrated employees produce less, and e-commerce grinds to a halt.

It’s clear that modern organisations need to change their approach to ensure applications and data are available all the time.
