
Veritas Storage Foundation High Availability 6.0: An Enterprise Strategy Group Product Brief

Created: 30 Nov 2011 • Updated: 30 Nov 2011

Abstract

This release, the biggest announcement from Symantec’s Storage Availability & Management Group in five years, touches all of the major storage and availability products in Symantec’s portfolio. It is less about products and more about customers and how they think about their core competency: running the business at hand. As IT budgets largely remain flat, the demands of business continue to rise. IT needs to maximize efficiencies in existing infrastructure, and Symantec aims to help it do so. By focusing on business service availability, Symantec is helping IT organizations keep pace with the speed of business.

Overview

Raising the subject of availability is often met with conversations about high availability (HA) and replication products and the work involved in configuring and keeping “data” available for access. But business service availability (comprising all of the components required to access highly available data) is just as important as data availability. This is a challenge since IT staffing remains flat overall while the speed of business continues to escalate. In spite of this, IT needs to find a way to deliver on its goals and objectives: keeping the business running even in the event of an IT interruption. This paper examines Symantec’s latest announcement related to achieving business service availability and meeting the service level agreements (SLAs) in place today.

Business Service Availability

Without question, most business managers would say that business operations are critical and must remain available 24x7x365. If there were a catastrophic failure at the primary data center and all systems were created equal, then it would make sense that they could simply be recovered alphabetically. But honestly, how many would bring up the accounts payable system before the accounts receivable system? Who would recover customer relationship management (CRM) software after recovering the engineering applications? While all parts are important to the running of the business, some carry a higher value in the face of recovery. In order to maintain business continuity, the order of recovery is critical. Today’s environments are riddled with dependencies and cross-dependencies that may derail an otherwise successful recovery if not followed closely. Think of it in the same way as you would when you build a house: the roof is not the first component you add to the structure. The same is true for recovering business services. All parts are important, but the order in which recovery takes place is critical, and that order is driven by each service’s recovery objectives.

Recovery orchestration gets more complicated with a service-oriented architecture (SOA). Application architectures that consist of Web front-end servers and back-end application and database servers (potentially running on multiple operating systems and hypervisors) create more complexity due to the interdependencies of service components and the multiple management tools involved. An unplanned downtime event that necessitates a recovery of the service requires coordination between the administrators responsible for the application, database, storage, and servers. Without a streamlined, concerted effort, valuable time can be lost, potentially causing irreparable damage to a company’s reputation or financial assets.
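The ordering problem described above can be modeled as a dependency graph and resolved with a topological sort. The sketch below is purely illustrative; the component names and dependencies are hypothetical and do not describe any Symantec product.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical multi-tier business service: each component maps to the
# components that must already be online before it can start.
dependencies = {
    "database": [],                      # no prerequisites
    "app_server": ["database"],          # needs the database tier up first
    "web_frontend": ["app_server"],      # needs the application tier
    "load_balancer": ["web_frontend"],   # needs the Web tier
}

# static_order() yields a start sequence that respects every dependency,
# which is exactly the recovery order an orchestrator would follow.
recovery_order = list(TopologicalSorter(dependencies).static_order())
print(recovery_order)
# → ['database', 'app_server', 'web_frontend', 'load_balancer']
```

The same graph, walked in reverse, gives a safe shutdown order for planned downtime.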

Understanding recovery objectives here is key:

  • The Recovery Time Objective (RTO) is the duration of time and a service level within which a business process must be restored after a disaster (or disruption) to avoid unacceptable consequences associated with a break in business continuity. It specifies the amount of downtime the business can tolerate. For example, the RTO for a payroll function may be two days, whereas the RTO for sales order processing may be two hours.
  • The Recovery Point Objective (RPO) is the point in time (relative to the disaster) to which you plan to recover your data. Different business functions may have different recovery point objectives. RPO is expressed backward in time from the point of failure. Once defined, it specifies the minimum tolerance for data loss, and therefore, the frequency of making copies for recovery.
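As a back-of-the-envelope illustration of how these two objectives drive planning (the RPO figures below are hypothetical; the RTO figures echo the payroll and sales-order examples above): the RPO caps the maximum interval between recovery copies, while the RTO caps tolerable downtime.

```python
# Hypothetical service tiers with their recovery objectives, in minutes.
# RPO bounds how often recovery copies must be made; RTO bounds how long
# restoration of the service may take.
tiers = {
    "sales_order_processing": {"rpo_minutes": 15,   "rto_minutes": 120},
    "payroll":                {"rpo_minutes": 1440, "rto_minutes": 2880},
}

for name, slo in tiers.items():
    # A copy interval longer than the RPO would risk losing more data
    # than the business tolerates, so the RPO is the ceiling.
    max_copy_interval = slo["rpo_minutes"]
    print(f"{name}: copy at least every {max_copy_interval} min, "
          f"restore within {slo['rto_minutes']} min")
```

Running this prints one planning line per tier, e.g. `sales_order_processing: copy at least every 15 min, restore within 120 min`.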

To get some perspective on RTOs, ESG research1 explored downtime tolerances from the highest-valued data (tier-1) to the least valued (tier-3), and found that a staggering 74% of respondents indicated recovery requirements of three hours or less, while 53% can only tolerate one hour or less (see Figure 1). These very tight timelines for the highest-valued data apply even more pressure to IT organizations looking for solutions to help address these issues.

Symantec Targets Functional Operations

Symantec’s Storage Availability and Management Group 6.0 announcement is all about building resilient business services. A highlight of this announcement is a framework called Virtual Business Service (VBS), which enables automatic recovery of specific business services based on priorities. A finance business application, for example, may be made up of components such as Web services, an application interface, and finally the database itself. Supporting servers for those components have network, storage, IP addresses, etc.

To further complicate things, the Web servers may be running in virtual machines, the application server tier may be running on Linux, and the database might be sitting on a physical UNIX server. Symantec can provide visibility across the entire business service and orchestrate its recovery automatically. It can do this because application availability in VMware, Red Hat KVM, Solaris LDOM, and IBM AIX LPAR environments is protected with Symantec ApplicationHA, while applications and databases running on physical servers, along with the underlying operating system, server, and network resources, are protected with Veritas Cluster Server. With the 6.0 launch, Symantec can coordinate across operating systems and virtualization technologies, providing a single solution and a central management point that delivers high application availability across the different platforms in the data center.

A high availability architecture with automated business service recovery is key to minimizing planned and unplanned downtime. A high availability configuration is designed to eliminate single points of failure with no or minimal downtime. If an unplanned interruption of service does occur, recovery happens automatically, without coordination between functional teams or manual tasks. In a planned downtime event, unintended consequences are alleviated since all dependencies are known and coordinated. Services can easily be taken down or brought online as needed, without manual intervention or the risk of error. The bottom line is that less downtime in recovery situations saves administrators’ time, lowers costs, and minimizes risk.

While the 6.0 release helps IT organizations address stringent storage budgets with storage efficiency, increasingly strict SLAs with its high availability suite, growing data center complexity through its management software, and more, Symantec focused its announcement on building more resilient business services. With this latest announcement, Symantec is enabling organizations to keep pace with the speed of business.

The Bigger Truth

With this announcement, Symantec has further simplified the customer experience, as well as its messaging to current and prospective customers regarding enhancements to its continuity solutions. Symantec avoids speaking about individual products and instead takes a higher-level approach to its customers’ businesses. This gives Symantec the ability to have a business conversation at the highest level, while naturally still enabling the most technical conversations with its traditional IT audience.

Symantec has done a great job of pulling together the product groups and the leadership needed to drive toward a common goal: improve the customer experience as it transitions to a more dynamic data center—which may include leveraging cloud services. The simple truth is that IT organizations are having a hard time keeping up with data growth and increasing business needs, so they must identify clever ways to continue to support internal customers and provide much higher service level agreements. Solutions from Symantec may prove to be a very good way for customers to get more control over their environments. However, because this is such a massive announcement, Symantec made sure the key value points are not lost in the details of each product group. By keeping its message focused on the very basic things this announcement brings to the customers—keeping the lights on (resilient business services)—without diving too deeply into the technical depths of extraneous features, it has a great deal more impact than even Symantec may have anticipated.

 

By: Lauren Whitehouse, Senior Analyst, Enterprise Strategy Group


1 Source: ESG Research Report, 2010 Data Protection Trends, April 2010.

 

All trademark names are property of their respective companies. Information contained in this publication has been obtained by sources The Enterprise Strategy Group (ESG) considers to be reliable but is not warranted by ESG. This publication may contain opinions of ESG, which are subject to change from time to time. This publication is copyrighted by The Enterprise Strategy Group, Inc. Any reproduction or redistribution of this publication, in whole or in part, whether in hard-copy format, electronically, or otherwise to persons not authorized to receive it, without the express consent of the Enterprise Strategy Group, Inc., is in violation of U.S. copyright law and will be subject to an action for civil damages and, if applicable, criminal prosecution. Should you have any questions, please contact ESG Client Relations at (508) 482-0188.