Businesses emerging from the global recession are increasingly looking to virtualization to help fuel their recovery.
That’s according to a recent report¹ from research firm IDC, which goes on to observe that IT departments battered by the global recession last year are now adopting a “virtualize first” mentality. As a result, automation tools that help IT administrators more easily manage their virtualized environments will be a top priority going forward, IDC says.
But while virtualization gains momentum, many organizations nevertheless continue to put off plans to move their business-critical “Tier-1” applications into virtual machines. There have been several reasons for this. For example, with just VMware HA, organizations protect physical machines and prevent OS-level failures, but they lack visibility into the application’s availability; in many cases, issues must be resolved manually only after they are reported by people using the services that the application provides.
This article looks at why an effective high availability solution for business-critical applications requires physical to application level protection. It then examines how virtualization itself is maturing into a new way of managing IT that is more dynamic and responsive to business needs.
As pressure mounts on IT to keep critical systems continuously up and running, downtime is not an option. And with flat budgets, organizations are looking for ways to capture the benefits of virtualization, confidently virtualize business-critical applications, and efficiently manage their HA operations. Small wonder, then, that enterprise IT administrators in a recent IDC survey cited the need to ensure application availability as the biggest challenge in managing virtualized servers and storage.²
That’s because virtualization can increase availability risks by consolidating the points of failure on fewer servers. In addition, ensuring high availability for business-critical applications such as Exchange Server, SQL, SAP, and Oracle that are deployed on a combination of physical and virtual server nodes introduces further complexities. For example, an ERP application may have middleware components running in a virtual server, but the underlying database is on a physical server.
For the typical enterprise, the recovery time objective or RTO (the tolerable amount of application downtime) for business-critical applications is less than one hour. In some cases, no more than a few minutes can be tolerated because downtime can mean millions of dollars in lost revenue or worker productivity. For instance, if Microsoft Exchange hangs, then everyone using that application loses their central point of communication. For these applications, the typical recovery point objective or RPO (the amount of data loss that can be tolerated) is near zero.
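The connection between an availability target and the downtime it actually permits can be sketched with simple arithmetic. The figures below are illustrative only and are not drawn from the IDC report:

```python
# Illustrative sketch: convert an availability percentage into the annual
# downtime budget it allows, and compare that budget against an RTO.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

def incidents_within_rto(availability_pct: float, rto_minutes: float) -> float:
    """How many outages of RTO length the annual budget can absorb."""
    return annual_downtime_minutes(availability_pct) / rto_minutes

# "Three nines" (99.9%) allows roughly 526 minutes of downtime per year;
# with a one-hour RTO, that budget absorbs fewer than nine full-length outages.
print(round(annual_downtime_minutes(99.9)))      # minutes per year at 99.9%
print(round(incidents_within_rto(99.9, 60), 1))  # one-hour outages per year
```

At stricter targets the budget collapses quickly: 99.99% availability leaves only about 53 minutes per year, which is why an RTO of "a few minutes" effectively demands automated, rather than manual, recovery.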
Unfortunately, many server virtualization HA tools cannot monitor the health of the applications running in virtual servers and cannot remediate the problem when a failure occurs. Moreover, the introduction of a new toolset is likely to increase the chance of operator error. According to the Uptime Institute, a New York-based research and consulting organization that focuses on data-center performance, human error causes roughly 70% of the problems that plague data centers today.
The three chief areas of concern, then, are:
- Increased risk due to server consolidation. Businesses that “put all their eggs in one basket” to reduce costs and complexity create a virtualization challenge that needs to be addressed to avoid single points of failure that can disrupt business operations.
- Limited visibility. Virtualization encapsulates application components (operating system, database, middleware, drivers, network gateways, etc.) to make it easier to move application workloads between servers. But that same encapsulation also reduces visibility into the state of those components.
- Increased management complexity from multiple vendor-specific tools. That raises the likelihood of operator error.
Ensuring visibility and control

Symantec believes a production-class HA solution for business-critical applications should monitor the application and its resources, including the virtual machine, physical machine, storage components, and network components.
Moreover, such a solution should provide notification during a failure of any of those resources, as well as automated failover without any manual processes, either within the same data center or to a remote data center (if a business has more than one site to be virtualized).
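The monitor-notify-failover sequence described above follows a general pattern that can be sketched in outline. This is a hedged illustration of that pattern, not Symantec's implementation; the `probe`, `notify`, `restart`, and `failover` callables are hypothetical hooks that a real HA agent would supply:

```python
# Minimal sketch of an application-level HA monitor loop (illustrative only;
# NOT the ApplicationHA implementation). A real agent would plug in its own
# health probe, notification channel, and recovery actions.
import time
from typing import Callable

def monitor_loop(probe: Callable[[], bool],
                 notify: Callable[[str], None],
                 restart: Callable[[], None],
                 failover: Callable[[], None],
                 max_restarts: int = 3,
                 interval_s: float = 10.0) -> None:
    """Probe the application; restart it in place, then escalate to failover."""
    failures = 0
    while True:
        if probe():                       # application-level health check
            failures = 0                  # healthy: reset the failure counter
        else:
            failures += 1
            notify(f"health check failed ({failures}/{max_restarts})")
            if failures < max_restarts:
                restart()                 # try to recover on the same node first
            else:
                failover()                # escalate: move the app to another node
                return
        time.sleep(interval_s)
```

The point of the sketch is the escalation order: application-level probing catches failures that a hypervisor-level heartbeat misses, local restart is attempted before the more disruptive failover, and every step is reported rather than waiting for users to notice the outage.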
Symantec ApplicationHA is the result of a strong partnership between Symantec and VMware. ApplicationHA and VMware HA together protect against application failures, virtual machine failures, and physical host failures. ApplicationHA enables organizations to virtualize their business-critical applications with confidence and to provide SLAs that are comparable to physical environments.
Standardizing on Symantec’s high availability solution enables organizations to:
- Automate the failover of applications on both virtual and physical servers, mitigating risks and reducing downtime
- Increase visibility and control with centralized management
- Manage all operations of ApplicationHA through VMware vCenter, avoiding the need for additional tools and associated training
While many organizations were first attracted to virtualization as a way to reduce physical server sprawl, other areas of virtualization functionality also deliver business benefits. In fact, what started as an effective tool for server consolidation has matured, for many organizations, into a new way of managing IT that is significantly more dynamic and responsive to business needs. For example, storage, application, desktop, platform, and network virtualization should each be considered and implemented where they fit a business's virtualization strategy. Among the benefits:
- Reduced costs. Virtualization technology can reduce the capital expenditures required to acquire new server, storage, and endpoint hardware. The smaller footprint allows organizations to reallocate their staffing and resources to critical projects.
- Faster deployment. IT organizations can use virtualization to respond faster to changing computing demands.
- Easier management. Virtualization makes it easier to manage current and future business processes, promising higher employee productivity and significantly less downtime.
But remember: Without an effective strategy that addresses both physical and virtual environments, the benefits of virtualization can be offset by the challenges of new requirements that affect key service level processes, such as backup, high availability, storage management, and endpoint management, to name just a few. For example, as organizations scale their virtual environments, they may find themselves challenged to keep up with the increased storage consumption and performance issues that come with hundreds and even thousands of virtual machines.
Bottom line: Businesses are learning that it’s not worth investing in virtualization unless they also have the implementation and operational plans in place to make it successful.
Today, recovering from an application failure in most VMware environments requires manual intervention. Not surprisingly, that has prevented many organizations from putting their business-critical applications into virtual machines.
As a result of the partnership between Symantec and VMware, organizations now have an application-aware high availability solution that largely automates the process of detecting and recovering from application failures inside a virtual machine. With ApplicationHA, organizations are in a better position to deploy their business-critical applications in a VM with confidence.
- ¹ Worldwide Quarterly Server Virtualization Tracker, IDC, April 28, 2010
- ² Choosing Storage for Virtualized Servers, IDC, October 2009