A recent high-profile cloud computing outage that temporarily knocked out a number of popular websites served as a reminder that, while cloud outages are rare, they can happen. Although service was restored for many of the sites later the same day, the incident “sent a chill” through the cloud community, according to one analyst. ¹
The outage also underscored many of the findings of the most recent Symantec Disaster Recovery Study, which found that the growing challenge of managing disparate physical, virtual, and cloud resources is adding complexity to organizations’ environments and leaving business-critical applications and data unprotected.
Continue reading to learn about the specific management challenges posed by virtualization and the cloud and the steps your organization can take to help reduce downtime.
The Symantec survey, which polled more than 1,700 IT managers in large organizations across 18 countries, provides ample evidence that virtual systems are not being properly protected. This comes at a time when respondents reported that one-quarter to one-third of all their applications run in virtual environments.
For example, the survey found that nearly half of the data on virtual systems is not regularly backed up, and only one in five respondents use replication and failover technologies to protect their virtual environments. Respondents also indicated that 60% of virtual servers are not covered in current disaster recovery plans. That’s up significantly from 45% reported by respondents in 2009.
Another key finding: Using multiple tools to manage and protect applications and data that reside in virtual environments causes major headaches for data center managers. In particular, nearly 60% of respondents who encountered problems protecting business-critical applications in physical and virtual environments said this was a major challenge for their organization. As one data center manager for an automotive company put it: “If I knew of a tool that would do everything for us, I’d be happy to take a look at it.”
Approximately two-thirds of the respondents said security was their main concern about putting applications in the cloud. However, the biggest challenge respondents face when implementing cloud computing or cloud storage is controlling failovers and making resources highly available.
Best practices to reduce downtime
Symantec believes data center managers should simplify and standardize as much as possible so they can focus on fundamental best practices that help protect critical applications and reduce downtime:
- Treat all environments the same. Ensure that business-critical data and applications are treated the same across environments (virtual, cloud, physical) in terms of DR assessments and planning.
- Use integrated tool sets. Using fewer tools to manage physical, virtual, and cloud environments will help organizations save time and training costs and help them to better automate processes.
- Simplify data protection processes. Embrace low-impact backup methods and deduplication to ensure that business-critical data in virtual environments is backed up and efficiently replicated off campus.
- Plan and automate to minimize downtime. Prioritize planning activities and tools that automate and perform processes that minimize downtime during system upgrades.
- Identify issues earlier. Implement solutions that detect issues sooner, reduce downtime, and speed recovery so that actual recovery times align with business expectations.
- Don’t cut corners. Implement the basic technologies and processes that protect data and applications during an outage, rather than taking shortcuts that could have disastrous consequences.
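The deduplication mentioned in the third practice can be illustrated with a minimal sketch (not any vendor’s implementation): split a backup stream into fixed-size blocks and store each unique block only once, keyed by its hash, so that repeated data is written and replicated a single time.

```python
import hashlib

def deduplicate(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, keeping one copy per unique block.

    Returns the deduplicated block store and the ordered 'recipe' of
    hashes needed to reconstruct the original stream.
    """
    store = {}   # sha256 digest -> block bytes (stored once)
    recipe = []  # ordered digests for reconstruction
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # only the first copy is kept
        recipe.append(digest)
    return store, recipe

def restore(store: dict, recipe: list) -> bytes:
    """Rebuild the original stream from the store and recipe."""
    return b"".join(store[h] for h in recipe)
```

With highly repetitive data (as is typical of virtual machine images), the store holds far fewer bytes than the original stream, which is what makes off-site replication of virtual environment backups efficient.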
When it comes to ensuring the high availability of business-critical applications, today’s IT organizations have little margin for error. Recent research illuminates the extremely tight parameters that businesses are working with. According to a report by the Enterprise Strategy Group, respondents said their organizations would suffer “significant revenue loss or other adverse business impact” if their business-critical applications were unavailable for an hour or less, and some could tolerate no downtime at all. ²
Of course, ensuring the availability of business-critical applications means more than just ensuring that the virtual machine is running. Just because the virtual machine is available doesn’t mean the application is running properly. While VMware HA provides a robust mechanism to detect failures of infrastructure components, there’s still the question of monitoring the health of an application running within a virtual machine.
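The distinction between VM availability and application health can be sketched as an application-level probe. The function below, a simplified illustration rather than any Symantec or VMware API, asks whether a service actually accepts connections instead of merely checking that its host is up; the host and port values are hypothetical.

```python
import socket

def app_is_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True only if the application accepts a TCP connection.

    A virtual machine can be powered on and reachable while the
    application inside it is hung or crashed; probing the service
    port exercises the application itself, not just the VM.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Production monitoring would go further (for example, issuing a real request and validating the response), but the design point is the same: availability must be measured at the application layer, not inferred from the infrastructure layer.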
Symantec has extensive experience monitoring an application’s state and reacting accordingly in the event of an application failure. ApplicationHA, Symantec’s high availability solution for VMware virtual environments, provides application visibility and control while monitoring the health of an application running within a virtual machine.
The latest release of ApplicationHA enables administrators to monitor the health of hundreds of applications at once across their VMware environment via a single dashboard.
At the same time, ApplicationHA’s deep integration with VMware vCenter Site Recovery Manager helps organizations address the challenges of traditional disaster recovery so that they can meet their Recovery Time Objectives, Recovery Point Objectives, and compliance requirements. With ApplicationHA and Site Recovery Manager, organizations can quickly manage failover from their production data centers to disaster recovery sites and ensure their applications are running in the event of a disaster.
As more and more IT organizations adopt new technologies such as virtualization and cloud computing to reduce costs and enhance disaster recovery efforts, they’re adding more complexity to their environments and leaving business-critical applications unprotected. These organizations should strongly consider adopting tools that provide a holistic solution across all environments. Data center managers could then focus on fundamental best practices to help reduce downtime.