The growing challenge of managing disparate virtual, physical, and cloud environments is making it harder for data center managers to protect and recover mission-critical applications and data.
The study, conducted by market researcher Applied Research West, polled more than 1,700 enterprise IT managers worldwide in October 2010. Symantec then conducted follow-up focus groups by phone with a range of IT professionals to gain additional insight into their experiences.
This article looks at the top findings of the study along with some of the steps that data center managers can take now to help reduce downtime.
The study found that data center managers are struggling to “protect, control, and manage” the steadily growing volume of applications and data that resides in virtual and cloud environments. Even though more applications and data are in virtual environments than ever before, 60% of virtualized servers aren’t covered in current DR plans.
What’s more, Symantec survey respondents indicated that only slightly more than half (56%) of the data on virtual systems is regularly backed up. While a percentage of this unprotected data may be for test and development uses, data center managers should take steps to ensure that production applications and data are not inadvertently left without protection. Only 20% of mission-critical applications in virtual environments are protected by replication or failover technologies.
Respondents said that between one-fourth and one-third of all applications are in virtual environments. Eighty-four percent of respondents said that virtualization has led them to re-evaluate their DR plans.
A majority of the managers (58%) said that the use of multiple tools to manage and protect virtual and physical environments poses a challenge.
As one data center manager for a manufacturer in the automotive sector in California stated: “If I knew of a tool that would do everything for us, I’d be happy to take a look at it.”
Bottom line: The added complexities of virtual and cloud environments mean they’re not being protected as they should be.
Interestingly, the study also found that the actual time to recover from an outage is more than twice as long as respondents perceived it to be.
When asked how long it would take them to recover if a significant disaster were to occur that destroyed their main data center, respondents said it would take just over two hours to be up and running. However, respondents reported that in the past 12 months, the average amount of downtime per incident was five hours.
On average, organizations experienced four downtime incidents in the past year.
Asked to list the major causes of downtime over the past five years, the managers replied as follows, with the associated downtime shown in parentheses:
- 72% experienced downtime from system upgrades (50.9 hours)
- 70% experienced downtime from power outages and failures (11.3 hours)
- 63% experienced cyber attacks (52.7 hours)
Given how many organizations cited power outages and failures as a leading cause of downtime, it’s surprising that only 26% of organizations have conducted a power outage and failure impact assessment.
The study found that significant improvements have been made in terms of the frequency of disaster recovery testing; however, disruption to employees, sales, and revenue continues to be high.
Approximately 82% of organizations test their DR plans either once a year or more frequently. That’s a significant increase from the 66% in last year’s study.
Respondents said that more than one-fourth (26%) of the total annual IT budget goes to DR-related initiatives, which they view as part of the basic infrastructure of doing business. Organizations spent an average of $606,948 on DR testing in the past 12 months.
Reasons cited for not testing more frequently include: budget (60%); disruption to employees (59%); people’s time (26%); and disruption to customers/disruption to sales and the revenue stream (24%).
Now in its sixth year, the Symantec Disaster Recovery Study highlights business trends regarding disaster planning and preparedness. It also provides insight and understanding into some of the more complicated factors associated with DR.
In particular, Symantec believes data center managers should simplify and standardize so they can focus on fundamental best practices to reduce downtime. That includes the following steps:
- Treat and test the recovery of mission-critical information the same, regardless of where it lives: Ensure that mission-critical data and applications are treated uniformly across virtual, cloud, and physical environments in terms of DR assessments and planning.
- Use integrated tool sets that work across physical, virtual, and cloud environments: Using fewer tools that manage physical, virtual, and cloud environments will help organizations save time and training costs and help them to better automate processes.
- Simplify data protection processes: Embrace low-impact backup methods and deduplication to ensure that mission-critical data in virtual environments is backed up and efficiently replicated off-site.
- Plan and automate to minimize downtime: Prioritize planning activities and tools that automate processes to minimize downtime during system upgrades.
- Identify issues earlier: Implement solutions that detect problems sooner, reducing downtime and bringing actual recovery times more in line with expectations.
- Don’t cut corners: Implement the basic technologies and processes that protect against an outage, and avoid shortcuts that could have disastrous consequences.
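To make the deduplication recommendation above more concrete, here is a minimal sketch of the underlying idea: identical chunks of backup data are stored only once, keyed by a content hash, so repeated backups of largely unchanged data add little new storage. This is an illustrative toy, not any vendor's implementation; the class and parameter names are hypothetical.

```python
import hashlib

class DedupStore:
    """Toy content-addressed backup store: identical chunks are kept once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}     # sha256 digest -> chunk bytes (stored once)
        self.manifests = {}  # backup name -> ordered list of digests

    def backup(self, name, data):
        """Split data into fixed-size chunks; store only chunks not seen before."""
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # no-op if chunk already stored
            digests.append(digest)
        self.manifests[name] = digests

    def restore(self, name):
        """Reassemble a backup from its manifest of chunk digests."""
        return b"".join(self.chunks[d] for d in self.manifests[name])
```

Two successive backups that differ only slightly will share most of their chunks, so physical storage grows far more slowly than the logical amount of data backed up; that smaller footprint is also what makes efficient off-site replication practical.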