Organisations are struggling to recover mission-critical data on virtual and cloud systems.
This is according to the global results of Symantec's sixth annual disaster recovery study released this week.
Brian Duthie, Symantec TSO manager, says organisations shouldn't cut corners when it comes to disaster recovery.
“While organisations are adopting new technologies such as virtualisation and the cloud to reduce costs and enhance disaster recovery efforts, they are currently adding more complexity to their environments and leaving mission-critical applications and data unprotected.”
The security company states that 44% of data stored on virtual systems is not regularly backed up and only one in five respondents use replication and failover technologies to protect virtual environments.
In addition, 60% of virtualised servers are not covered by organisations' current disaster recovery plans, up from 45% in last year's Symantec survey.
Protecting the cloud
In terms of cloud computing, organisations run approximately 50% of their mission-critical applications in the cloud. Two-thirds of survey respondents (66%) report that security is their main concern about putting applications in the cloud.
However, Symantec reveals that the biggest challenge companies face when implementing cloud computing and storage is the ability to control failovers and make resources highly available (55%).
To compound the problem, the research shows, around 82% of back-ups occur only weekly or less frequently, rather than daily.
Symantec indicates that resource constraints, lack of storage capacity and incomplete adoption of advanced protection methods restrict the deployment of disaster recovery strategies in virtual environments.
Downtime risks
According to the survey, 72% of organisations experienced downtime during system upgrades over the past 12 months, resulting in an average of 50.9 hours of downtime.
Meanwhile, 70% of organisations experienced downtime from power outages and failures, averaging 11.3 hours, and 63% attributed downtime to cyber attacks, averaging 52.7 hours over the same period.
Symantec says only 26% of respondents have conducted a power outage and failure impact assessment.
Duthie advises organisations to plan and automate systems to minimise downtime. “Data centre managers should simplify and standardise so they can focus on fundamental best practices that help reduce downtime.”
He adds: “Treat all environments the same: Ensure that mission-critical data and applications are treated the same across environments (virtual, cloud, physical) in terms of disaster recovery and planning.”
Managing complexity
Earlier this year, Symantec's report, the '2010 State of the Data Centre Study', revealed that one-third of enterprises had not evaluated their disaster recovery plans in the past 12 months.
Sheldon Hand, Symantec storage specialist, previously stated that data centres are becoming too complex to manage because of disparate server and storage systems. “Security, back-up, recovery and continuous data protection are the most important initiatives in 2010, ahead of virtualisation,” he said.
According to research firm Gartner, by 2015, 40% of the security controls used within enterprise data centres will be virtualised, up from less than 5% in 2010.