When it comes to building out a recovery strategy, the fundamental challenge facing IT professionals is to align the IT budget with business requirements. The current economic crisis makes this challenge more daunting than ever.
On the one hand, IT budgets are expected to remain flat over the next year. On the other, organizations are increasing their reliance on IT for fundamental business processes, and service-level requirements are rising as well.
Faced with this dilemma, many organizations will likely respond either by overspending or by under-protecting. Organizations typically overspend when they back up data that doesn’t change, use multiple point tools that aren’t integrated (which adds complexity to already complex IT operations), and keep less important data on expensive storage when it should be archived. Organizations often find they’re under-protected when a disaster strikes and a mission-critical IT service can’t be recovered as quickly as is required by the business.
This article looks at how to find the optimal middle ground, where the right applications are matched with the right level of protection so that less-critical assets aren’t depleting resources that should be spent elsewhere.
To ensure that data and applications are protected, organizations should first conduct a business impact analysis of their IT services and classify them by their importance to the business. From this analysis, recovery service levels can be established and applications tiered accordingly.
- Tier 1 applications. These are the most critical to your business and can include databases, ERP and CRM applications, and transaction-oriented systems such as credit-card applications. For the typical large enterprise, the recovery time objective, or RTO (the amount of downtime that can be tolerated), for these applications is less than 1 hour. The typical recovery point objective, or RPO (the amount of data loss that can be tolerated), is near zero.
- Tier 2 applications. These applications are still important, but their RTOs are usually about 6 hours, and an RPO of hours is acceptable. Applications in this tier might include email and certain types of databases.
- Tier 3 applications. These applications are less business-critical; examples include a company intranet or virtualized environments. Here RTOs of less than 12 hours are acceptable, and a day’s worth of data loss can be tolerated.
- Tier 4 applications. These applications typically have an RTO of “when convenient” and days of data loss can be tolerated.
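The tiering above amounts to a simple lookup: given an application’s tolerated downtime (RTO) and data loss (RPO), pick the most demanding tier whose thresholds it satisfies. The sketch below uses the illustrative thresholds from the list; the near-zero RPO for Tier 1 is assumed here to be 15 minutes, and in practice the thresholds come from your own business impact analysis, not from code.

```python
# Illustrative sketch: map an application's RTO/RPO (in hours) to a recovery tier.
# Thresholds follow the article's examples; Tier 1's "near zero" RPO is assumed
# to be 0.25 hours for this sketch.

def classify_tier(rto_hours: float, rpo_hours: float) -> int:
    """Return the recovery tier (1-4) implied by an application's RTO and RPO."""
    tiers = [
        (1, 1, 0.25),   # Tier 1: < 1h downtime, near-zero data loss
        (2, 6, 6),      # Tier 2: ~6h downtime, hours of data loss
        (3, 12, 24),    # Tier 3: < 12h downtime, a day of data loss
    ]
    for tier, max_rto, max_rpo in tiers:
        if rto_hours <= max_rto and rpo_hours <= max_rpo:
            return tier
    return 4            # Tier 4: recover when convenient

print(classify_tier(0.5, 0))    # ERP system -> 1
print(classify_tier(4, 3))      # email -> 2
print(classify_tier(10, 20))    # intranet -> 3
print(classify_tier(72, 72))    # archived reports -> 4
```

An application inherits the stricter of its two requirements: a system with a lenient RTO but a tight RPO still lands in a higher tier.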
Once you have assigned applications to their appropriate tiers, Symantec recommends you follow three recovery strategies: ensure application recovery, optimize information recovery, and protect more but store less.
As businesses rely more heavily on IT for their critical operations, recovery times for critical applications continue to shrink. According to the latest Disaster Recovery Survey from Symantec, RTOs for mission-critical applications fell from 9 hours to 4 hours in the past year. At the same time, application environments are becoming increasingly complex, so recovering applications manually is cumbersome, unreliable, and costly.

Symantec recommends automating application recovery to provide local availability and global disaster recovery. With clustering, file systems run on parallel servers, eliminating the need to restart storage resources during a failover. Testing by Symantec has shown this can mean 90% faster recovery for heavy workloads.

Of course, to ensure that your disaster recovery solution works, you need to test it. Yet according to the Disaster Recovery Survey, 35% of respondents test their DR plans only once per year or less frequently. Symantec recommends a clustering solution that provides non-disruptive DR testing, allowing administrators to perform an actual failover of an application to the DR site without bringing down the production service.
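The idea behind non-disruptive DR testing can be sketched in a few lines: bring up a test instance of the service at the DR site (typically against a replicated snapshot) while production keeps running, verify it, then tear it down. The classes and function names below are hypothetical illustrations, not any vendor’s actual API.

```python
# Illustrative sketch of a DR "fire drill": start a test copy of the service
# at the DR site while production stays up, then tear the copy down.
# Service and Site are hypothetical names for this sketch only.

class Site:
    def __init__(self, name):
        self.name = name

class Service:
    def __init__(self, name):
        self.name = name
        self.running_at = None

    def start(self, site):
        self.running_at = site

def fire_drill(service, dr_site):
    """Verify the DR site can run the service without touching production."""
    production_site = service.running_at           # untouched throughout
    test_instance = Service(service.name + "-drtest")
    test_instance.start(dr_site)                   # would run from a replicated snapshot
    ok = test_instance.running_at is dr_site
    test_instance.running_at = None                # tear down the test copy
    return ok and service.running_at is production_site

prod = Site("primary"); dr = Site("dr")
app = Service("erp"); app.start(prod)
print(fire_drill(app, dr))   # True: DR verified, production undisturbed
```

The essential property is the last line of `fire_drill`: the drill only passes if production is still where it started.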
Applications may be essential to your business success, but they’re useless without information. One of the most effective ways to optimize information recovery is disk-based data protection. Using disk to augment or replace a tape-based environment not only increases backup and recovery success rates but also makes both operations faster: recovery in minutes compared with the hours (or even days) needed for tape. Disk-based data protection also allows for granular recovery, meaning you can back up and store applications once but have two types of recovery: the full application image for disaster recovery purposes as well as granular files, objects, or emails.

There is also e-discovery to consider. According to a September 2008 IRM survey by Applied Research, the cost of recovering and reviewing information is a stunning 1,400 times the cost of storing it. Typically, an e-discovery or investigation request is treated as a fire drill, diverting IT’s attention from more productive tasks and consuming resources better used elsewhere. An intelligent archiving solution optimizes the recovery of individual files and enables in-house counsel to control collection and preservation more effectively.
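The “back up once, recover two ways” idea can be sketched as one backup image plus an object-level catalog: the same stored data serves a full-image restore for disaster recovery and a single-item restore for granular requests. The structures below are a hypothetical simplification, not a real backup catalog format.

```python
# Sketch: one disk-based backup image with an object-level index, supporting
# both full-image restore (DR) and granular single-object restore.
# The dict-based layout is a hypothetical simplification.

def make_backup(objects):
    """Store one image plus a catalog of the individual objects in it."""
    image = dict(objects)                      # full copy for disaster recovery
    index = {name: name for name in objects}   # object-level catalog
    return {"image": image, "index": index}

def restore_full(backup):
    """Whole-image restore, e.g. after a site-wide disaster."""
    return dict(backup["image"])

def restore_object(backup, name):
    """Granular restore: pull one object out of the same image."""
    return backup["image"][backup["index"][name]]

mailbox = {"msg-001": "Q3 forecast", "msg-002": "Board minutes"}
bkp = make_backup(mailbox)
print(restore_object(bkp, "msg-002"))   # Board minutes
print(restore_full(bkp) == mailbox)     # True
```

The point of the catalog is that a granular request (one email, one file) never forces a full-image restore.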
According to a May 2009 report from the Enterprise Strategy Group, database data is growing 25% per year, with unstructured data (such as email) increasing at two to three times that rate. This explosion of data is driving enterprises to adopt technologies such as data deduplication, which reduces backup requirements by eliminating multiple copies of the same data.

Deduplication is proving to be versatile. In the data center, data can be deduplicated at the media server layer. Organizations are also finding that, by replacing tape with disk at remote offices, they can protect those locations directly from the data center; the Enterprise Strategy Group estimates that protecting data from a central location can cut remote-office operational expenses by as much as a factor of five. Finally, deduplication can eliminate redundant data within an application, particularly in collaborative applications such as email, allowing for faster backup and recovery.

Businesses can also gain savings through more efficient use of storage. Archiving lets you move less frequently used unstructured information off high-cost disk to lower-cost storage.
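At its core, deduplication stores each unique chunk of data once and keeps only a list of chunk references per backup. The sketch below uses fixed-size chunks and SHA-256 hashes to show the mechanism; production systems typically use variable-size chunking and purpose-built chunk stores, so treat this strictly as an illustration.

```python
import hashlib

# Minimal content-hash deduplication sketch: split data into fixed-size chunks,
# store each unique chunk once, and keep a per-backup "recipe" of chunk hashes.
# Real products use variable-size chunking; this is an illustration only.

CHUNK = 4  # tiny chunk size so the example is visible at a glance

def dedup_store(store, data, chunk_size=CHUNK):
    """Add data to the chunk store; return the recipe needed to rebuild it."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks are stored only once
        recipe.append(digest)
    return recipe

def restore(store, recipe):
    """Rebuild the original data from its recipe."""
    return b"".join(store[d] for d in recipe)

store = {}
r1 = dedup_store(store, b"AAAABBBBAAAA")   # 'AAAA' appears twice in this backup
r2 = dedup_store(store, b"AAAACCCC")       # 'AAAA' is already in the store
print(len(store))                          # 3 unique chunks stored, not 5
print(restore(store, r1) == b"AAAABBBBAAAA")  # True
```

The two backups reference five chunks in total but store only three, which is exactly where the backup-size and remote-office bandwidth savings come from.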
Recently, The Tolly Group, an independent test lab, conducted a series of backup and recovery tests comparing Symantec Backup Exec and Symantec Veritas NetBackup with EMC NetWorker and CommVault Simpana. The tests, sponsored by Symantec, measured backup and recovery performance in two business environments: Microsoft Exchange and VMware.
In tests using a range of Microsoft Exchange scenarios, Symantec backup and recovery solutions using granular recovery technology were able to recover individual email messages up to 17 times faster than CommVault Simpana and 220 times faster than EMC NetWorker. Symantec also recovered a 200MB mailbox up to 13 times faster than EMC NetWorker using EMC-recommended best practices.
In a series of tests run in a VMware ESX server environment, Symantec backup and recovery solutions backed up virtual environments up to 350% faster than EMC NetWorker by eliminating the need to perform file- and folder-level backups. Symantec solutions were also able to back up VMware environments more than twice as fast as CommVault’s new VMware agent.
Disaster recovery preparedness can no longer be viewed as an expensive insurance policy. In today’s global, 24x7 economy, it’s a competitive necessity. As a result, organizations are under increasing pressure to ensure they have the right level of protection at the right price.
Symantec provides market-leading solutions for data protection, archiving, high availability, and disaster recovery. Technologies include backup, deduplication, replication, continuous data protection, classification, retention, e-discovery, HA/DR clustering, storage management, and DR testing.