System failure and human error remain the most common causes of downtime, but power outages, the increasing frequency of natural disasters like Hurricane Sandy, and escalating concerns over cyber threats are putting data center professionals on high alert when it comes to business continuity.
The average enterprise experienced 16 outages in 2011, racking up over $5 million in costs associated with downtime, according to Symantec's "State of the Data Center Survey." Outages on internal systems like email and ERP take a significant chunk out of employee productivity, while downtime on customer-facing Web sites and ecommerce systems raises the potential for lost sales and puts a firm's brand reputation at risk.
Fueling the complexity around business continuity are technologies like virtualization, which is moving to the heart of the data center and becoming the standard platform for mission-critical business applications. Data backup and data replication, both critical pillars of a business continuity strategy, are not enough to ensure continuity of service. Organizations need to address application failover as part of their overall disaster recovery plan, yet the practice is more complicated in a virtualized world.
Application failover is harder in a virtualized environment in part because native tools fall short in providing sufficient visibility, control, and high availability for multi-tier applications. Unlike the data center of the past, where business-critical applications ran as workloads on solitary physical servers, applications in a virtual world are delivered as multi-tier services spanning multiple physical and virtual servers that work together to get the job done. For example, a set of front-end Web servers might host the user interface components, linked to another set of virtualized servers that support the business logic, which in turn connect to a back-end database sitting on a Unix system.
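The practical consequence of that tiered design is that the service is only "up" when every tier is up. A minimal sketch of that dependency logic, with tier names and check results invented purely for illustration:

```python
# Hypothetical sketch: a multi-tier service is only healthy when every
# tier in its dependency chain is healthy. Tier names and statuses are
# illustrative; a real monitor would probe actual hosts and services.

TIERS = ["web-frontend", "business-logic", "backend-database"]

def service_status(tier_checks):
    """Walk tiers in dependency order; the first failed tier explains
    the outage even if the tiers in front of it look fine."""
    for tier in TIERS:
        if not tier_checks.get(tier, False):
            return f"DOWN (failed tier: {tier})"
    return "UP"

# Example: the Web servers respond, but the database tier is down,
# so the service as a whole must be treated as down.
print(service_status({
    "web-frontend": True,
    "business-logic": True,
    "backend-database": False,
}))  # DOWN (failed tier: backend-database)
```

The point of the sketch is that failover tooling has to reason about the whole chain, not any single server.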
Traditionally, IT has protected systems with data backup, which depends on time-consuming and tedious restoration from disk or tape to recover from a data or system loss. However, that kind of prolonged downtime is untenable for business-critical applications, which operate under stricter service-level agreements (SLAs).
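The gap between restore-based recovery and failover can be made concrete with back-of-the-envelope arithmetic; the figures below are hypothetical, not drawn from the article:

```python
# Hypothetical figures: restoring a 2 TB backup at a sustained
# 100 MB/s takes hours, while failing an application over to a
# standby node is typically measured in seconds to minutes.

def restore_hours(data_gb, throughput_mb_s):
    """Hours to restore a full backup at a sustained throughput."""
    seconds = (data_gb * 1024) / throughput_mb_s
    return seconds / 3600

print(f"{restore_hours(data_gb=2048, throughput_mb_s=100):.1f} hours")
# roughly 5.8 hours of downtime before the application is even back
```

Even with optimistic throughput, the restore window alone can blow through the SLA for a critical application.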
Moreover, it's difficult to achieve visibility and detect application failures when running within a virtualized environment. Native solutions provide high availability at the hypervisor and virtual machine level, but not for applications running on the VM, making it difficult for IT to adequately monitor and take corrective action in the event of a system failure. As a result, it's quite possible for IT to remain in the dark about a system failure until a business user calls to complain, impeding any ability to meet critical SLAs and jeopardizing IT's good standing with the business.
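The distinction matters because a VM can be powered on while the application inside it is dead. A hedged sketch of the two monitoring levels (the probe inputs are hypothetical; a real monitor would issue a TCP connect or HTTP health check):

```python
# Hypothetical sketch: hypervisor-level HA sees only whether the VM
# is running; application-aware monitoring also probes the service
# itself, catching the failure mode native tools miss.

def assess(vm_powered_on, app_responding):
    """Classify a failure the way an application-aware monitor would."""
    if not vm_powered_on:
        return "VM failure: restart the VM on another host"
    if not app_responding:
        return "Application failure: invisible to VM-level HA"
    return "Healthy"

# The blind spot described above: VM up, application down.
print(assess(vm_powered_on=True, app_responding=False))
# Application failure: invisible to VM-level HA
```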
To support the high availability requirements of multi-tier business services that are dependent on virtual and physical machines and to meet rigorous SLAs, IT organizations need to address business continuity at an entirely different level. What's required is a solution that protects the various components across a distributed virtual and physical infrastructure with intelligence into the dependencies and underlying infrastructure of the blended environment—an approach that enables automatic detection of failures and re-orchestration of applications, ensuring a recovery with minimal impact to the business.
Symantec's latest version of Veritas Cluster Server takes a holistic approach to business continuity. It delivers automated local and wide area recovery of applications running across both physical and virtual environments and hardware platforms. The high availability solution, which ensures fast recovery in response to a wide range of failures, can also aid in reducing planned downtime for routine maintenance tasks such as upgrades and patches.
Veritas Cluster Server advances business continuity in virtualized environments and includes the following benefits:
- Reduced recovery times. Veritas Cluster Server manages the application data attached to a specific virtual machine (VM), immediately reassigning the system to a standby server in the event of a failure. By doing so, VCS eliminates the need for VM reboots and server migration resulting in faster, nearly transparent failover for many applications.
- Multiple architectures for automated application recovery. In the event of a localized outage such as a server failure, VCS moves applications to another node within the data center. It also supports metropolitan clustering, which keeps an application available at a second data center within 60 miles should the first fail, as well as global clustering, which makes applications available at a remote facility over an unlimited distance.
- Integration with the VMware portfolio. Unlike other clustering products, Veritas Cluster Server integrates with VMware's vCenter management console, unifying management in a single view. It is also fully manageable from VMware vSphere clients and works with vMotion for live migration, VMware Distributed Resource Scheduler, and VMware Site Recovery Manager.
- Disaster recovery testing. Regular testing of disaster recovery plans is critical for ensuring a successful recovery. VCS includes Fire Drill, a tool that simulates disaster recovery tests non-disruptively, reducing the time and expense of testing.
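The three recovery architectures in the list above amount to a policy that escalates with the scope of the failure. A sketch of that escalation, with site labels and the ~60-mile metro threshold taken from the list and everything else hypothetical:

```python
# Hypothetical sketch of the escalation implied by the recovery
# architectures above: local cluster node -> metropolitan site
# (within ~60 miles) -> global site at unlimited distance.

def recovery_target(failure_scope):
    """Map a failure's scope to the recovery architecture that covers it."""
    targets = {
        "server": "standby node in the same data center (local cluster)",
        "datacenter": "second site within ~60 miles (metro cluster)",
        "region": "remote facility, unlimited distance (global cluster)",
    }
    return targets.get(failure_scope, "unknown scope: escalate to an operator")

print(recovery_target("server"))
print(recovery_target("region"))
```

The design choice here is that the cheapest recovery that survives the failure wins: a server loss never needs to trigger a cross-country failover.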
As virtualization goes the next mile in the data center and becomes the de facto platform for mission-critical enterprise applications, IT organizations need a fresh approach to disaster recovery and to ensuring continuity of critical business services. To find out how Symantec can help you when your only choices are "Up" and "Running," go to Symantec Business Continuity.