
Stop Buying Storage Best Practices: Veritas CommandCentral Storage and Veritas Storage Foundation

Created: 31 Aug 2009 • Updated: 31 Aug 2009

Author: Tom Harwood, Symantec Storage and Availability Management Group

Introduction

“Why are we spending so much money on storage, and what can we do about it?” These are questions that many companies are asking themselves. Answering them, however, is often difficult because most companies lack the tools that provide visibility into their storage and its cost to the business’s bottom line. If you can’t measure your entire storage stack, from the physical disk up to the application, it is going to be very difficult to manage not only how much you’re spending on storage but also storage consumption behavior. This document will shed light on how to gain visibility into your existing storage stack and optimize efficiency at each level, how to improve management practices along the stack, and how to encourage efficient consumption behavior within your organization.

The escalating cost of storage

One of the most perplexing challenges facing IT organizations today is the escalating cost of storage compared to the overall IT budget. According to Gartner, the storage budget for the average customer is growing at 7.7% while the overall IT budget is only growing at 2.5% (Passmore, 2008). This annual increase has drawn the attention of upper management, prompting them to ask why storage costs are growing at three times the rate of everything else, and to seek ways to reduce the costs associated with storing information while better understanding what information resources exist and why.

The first step in getting a handle on storage is to gain insight into what storage assets exist, where they are located, who is consuming them and how effectively they’re being utilized. The Head of Storage typically tasks the storage team with collecting this information and soon discovers that putting the big picture together is a manually intensive task. In the old days when applications and their data resided on a single system, knowing where information resided and how it was being managed was relatively simple. If an application was running on server A, then the data associated with that application resided on the disk drives attached to server A.

As environments evolved into distributed architectures where multiple servers and applications shared storage and network resources (as depicted in figure 1), understanding where information lived and how it was being managed became much more complex. In today’s storage environments it is difficult to determine how storage has been allocated across various arrays, which hosts are consuming that storage, how effectively the applications running on those hosts are utilizing that storage, and what type of data is stored there. Unfortunately, many of the tactical storage management solutions available today don't provide the level of holistic visibility needed to answer these questions.


Figure 1. Distributed Storage Environments

Additionally, as dedicated teams evolved to address the specialized areas of storage, networks, servers, applications and databases, the information needed to optimize storage all along the supply chain became disjointed. Silos of information began to form across the various teams responsible for managing storage assets along the stack.


Figure 2. Administrative Silos

Complicating the situation further, these groups do not communicate well with each other and are sometimes reluctant to share information. Between the burden of manual data collection and both the physical and political silos, gaining visibility into the entire storage supply chain is very difficult. Without the ability to measure what storage assets exist, it is nearly impossible to manage those assets. This is why the average utilization of storage assets in the industry is around 40% (Russell, 2008), and typically the more storage a company has, the lower its overall utilization becomes.

In order to help companies address the challenge of dealing with the compound annual growth of storage, estimated at 55% for the average company, and the inherent inefficiencies found along the average organization's storage supply chain, Symantec has developed best practices for reducing storage costs. By standardizing on a common set of management tools and providing end-to-end visibility from the physical disk to the logical data set, companies have found a better alternative for dealing with the challenge of managing their vast storage resources while controlling their associated costs.

Standardizing on a common set of management tools

The first step in enabling organizations to reduce storage costs is to provide end-to-end visibility into the storage stack. This requires a Storage Resource Management (SRM) solution that bridges the physical network and virtual resources with the logical data (applications and file data), providing a single, holistic view into the storage environment regardless of the hardware vendor. Veritas™ CommandCentral Storage allows you to discover and visualize your heterogeneous storage environment so that you can begin to manage its physical, virtual, logical, and business components.

Figure 3. CommandCentral supports heterogeneous systems and platforms in the data center

Through a combination of the SMI-S standard and API and CLI interfaces (depending on what the hardware vendor has enabled), CommandCentral has the ability to automatically discover and visualize all storage assets and to report against them. Specifically, it provides discovery of and visibility into arrays, array virtualization, disks, LUNs, controllers, Fibre Channel ports, switches, HBAs, hosts, virtual servers, applications and databases. By dynamically mapping physical devices to logical and virtual resources, CommandCentral can take a user from no visibility to a full supply-chain view, as seen in Figure 4.


Figure 4. CommandCentral Supply Chain View
 

Once the entire storage supply chain has been discovered, CommandCentral automates the collection and correlation of data, enabling users to move from manually tracking assets in spreadsheets to a capacity management model that lets them analyze their storage utilization. With canned reports you can address inefficiencies all along the storage stack and answer the following types of questions:

  • How is storage distributed among arrays and other devices?
  • How is storage replicated as mirrors and clones?
  • How is storage allocated among hosts?
  • How is storage being consumed by hosts, volumes, and filesystems?
  • How is storage being used by users, groups and domains?
  • How is storage growing for applications and databases?
  • How old are my files and directories and how often are they being accessed?
  • What are the most common file types in my environment? Which files and directories use the most storage?
  • How many duplicate files do I have?

Once you can measure the storage assets that you have, and answer these types of questions, you can begin to make adjustments all along the storage supply chain to optimize those assets and reduce storage costs.
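
As a simple illustration of the kind of file-level analysis such reports automate, the following Python sketch answers the last question above by grouping files under a directory tree by content hash. It is a hypothetical example, not CommandCentral's implementation:

import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Group files under `root` by SHA-256 of their contents."""
    by_digest = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                by_digest[h.hexdigest()].append(path)
            except OSError:
                continue  # skip unreadable files
    return {d: paths for d, paths in by_digest.items() if len(paths) > 1}

if __name__ == "__main__":
    dupes = find_duplicates("/data")  # hypothetical mount point
    wasted = sum((len(p) - 1) * os.path.getsize(p[0]) for p in dupes.values())
    print(f"{len(dupes)} duplicate groups, ~{wasted / 2**30:.1f} GB reclaimable")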

Reducing storage supply chain costs

Supply chain management is recognized as the management of key business processes across the network of organizations that comprise the business. This concept easily translates to storage management in today’s enterprise where the delivery of storage services involves a network of distributed resources and organizations. The goal in storage supply chain management is to reduce the excess or misused inventory, while improving the quality of service and reducing costs. By reporting against and analyzing the storage supply chain you can optimize every storage component that applications utilize and improve the responsiveness to the business.

Physical
Many organizations think they have utilization under control because, viewed through the OEM-provided tactical tools, the array appears to be fully utilized. While this tends to be one of the primary points of measurement for storage administrators, it is an inaccurate indicator of whether or not more storage needs to be purchased. Your hardware vendor will want you to focus on this number, but as you analyze the remaining components of the supply chain you will more than likely find many areas where storage can be reclaimed and purchases deferred. Some of the best practices that you can adopt at this elemental layer in order to reduce costs are the following:

Commoditizing hardware
To gain control of huge expenditures on disk, IT departments should be able to commoditize storage and be free to choose among vendors. Because it has no hardware agenda, Symantec provides solutions for organizations with heterogeneous environments, allowing them to select “best of breed” hardware at the most cost-effective price. With unmatched reporting capabilities and support for heterogeneous environments, CommandCentral Storage helps organizations increase price leverage with strategic storage hardware vendors through a standards-based, heterogeneous management solution that helps prevent vendor lock-in. Some of our customers have seen their hardware costs drop as much as 50% simply by bringing another vendor in to compete for their storage business.

While adding another vendor will help to keep pricing competitive, having too many vendors can create unnecessary burdens for storage administrators, which can reduce the cost benefits of a multi-vendor strategy. In order to avoid these challenges it is common practice to introduce no more than two or three storage vendors and to leverage a common management platform that will support a heterogeneous storage environment. Symantec addresses the heterogeneous management challenges with CommandCentral Storage and Veritas Storage Foundation by:

  • Providing discovery, visualization, monitoring and reporting capabilities across all major hardware vendors
  • Enabling heterogeneous storage management capability at the volume and file system level for all major operating systems
  • Allowing migrations and dynamic storage tiering across different hardware vendors


On demand procurement

One of the greatest cost saving benefits of supply chain management is only buying what is needed when it is needed. Having un-configured disks at the physical level waiting to be spun up has an associated carrying cost, just like any other inventory. By helping organizations get a better handle on supply chain management with trending and forecasting information, CommandCentral Storage can help them decrease those inventory carrying costs and begin to purchase storage just-in-time instead of just-in-case.

Take, for example, the average company that purchases storage once a year. If storage hardware costs are decreasing ~20% annually (Couture, 2008) and Symantec can help customers better manage their supply chain and procure storage every six months instead of every twelve months, they can save 10% of their storage budget simply through more efficient procurement. The key to this savings is having visibility into all of the storage assets and being able to measure not only what currently exists, but how effectively it is being used, what is needed to meet future demand, and when it will be needed.
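
A quick back-of-the-envelope sketch of that procurement math, treating the document's ~20% annual price decline as linear for simplicity (compounding would give slightly different figures):

ANNUAL_PRICE_DECLINE = 0.20  # the document's assumed hardware price trend

def deferred_cost(months_deferred):
    """Relative cost of a purchase deferred by `months_deferred`."""
    return 1 - ANNUAL_PRICE_DECLINE * (months_deferred / 12.0)

print(f"purchase deferred 6 months: {deferred_cost(6):.0%} of today's price")
# -> 90% of today's price, i.e. the ~10% saving on that purchase cited above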

Tiered storage

All data is not created equal, so the one-size-fits-all approach to storage no longer works. Storage requirements now vary by application and even by users within an application. Under the "all data is created equal" mentality, putting everything on Tier 1 storage, replicating it synchronously to a second site and having a third hot site for disaster recovery is very expensive. With CommandCentral Storage and Storage Foundation, Symantec can help customers classify their data and determine its value to the business and then map that value to the appropriate physical tier. This enables organizations to ensure that they have the right data, on the right tier, at the right price.

Common practice for tiered storage adoption is to establish and maintain 3-5 tiers. For example, Tier 1 storage might reside on 15k RPM Fibre Channel SAN drives, Tier 2 may reside on 7.5k RPM Fibre Channel SAN drives, Tier 3 might be SATA or NAS drives and Tier 4 may sit on CAS or VTL. Storage architects have proven that tiered storage infrastructures can effectively meet application and technical performance criteria, and according to the experts, by establishing multiple tiers of storage, organizations are recognizing cost savings of 10-15% with Tier 2 storage and 30-40% savings at Tier 3 ("Tiered Storage Tool Purchases Demand Careful Consideration", ComputerWeekly.com, 2007).
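
To see how those per-tier figures translate into a blended saving, here is a rough Python sketch; the tier discounts use the midpoints of the ranges cited above, and the data mix is an invented placement purely for illustration:

tier_discount = {1: 0.00, 2: 0.125, 3: 0.35}   # savings relative to Tier 1 cost
data_mix     = {1: 0.30, 2: 0.40, 3: 0.30}     # hypothetical share of data per tier

blended_cost = sum(share * (1 - tier_discount[t]) for t, share in data_mix.items())
print(f"blended cost vs. all-Tier-1: {blended_cost:.0%} "
      f"({1 - blended_cost:.0%} saved)")
# -> roughly 15% saved under this illustrative mix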

While there are significant cost benefits of implementing tiered storage, “... according to GlassHouse, 80% of large companies still keep data on a single tier” (Tucci, "Taking the tears out of tiered storage", TechTarget.com, 2006). The reason for this is that in most companies, the value of data to the business has not been defined and, as a result, most if not all storage requirements are mapped to the “high end” of the scale. Data classification can help to eliminate this problem by providing user accountability for high-end storage consumption and also laying the foundation for a cost model. Even a simple model of classifying data as having a “High”, “Medium” or “Low” value to the business will enable IT to align the storage infrastructure with the business value of the organization’s data.

Once the data has been classified, additional challenges relate to: porting data from one tier to another, moving data as its business value changes over time and making sure that the people who need the data still have access to it. Symantec addresses these challenges with CommandCentral Storage and Storage Foundation by:

  • Providing heterogeneous storage management visibility into the physical tiers
  • Providing business intelligence about the data to enable better decision-making on the appropriate class of storage
  • Accommodating different hardware tiers for data migration
  • Dynamically moving data from tier to tier (Dynamic Storage Tiering will be covered at the “logical” and “consumed” levels of the storage stack)

Tiered storage is an initiative that has been proven to reduce current and future capital costs, as well as operating costs, for medium and large storage enterprises. Knowing how storage assets are currently tiered and understanding how they could be tiered is extremely valuable given the significant cost differential that exists. Effective tiering is also vital as part of an overall data life-cycle management strategy.

Note: as we progress through this document, reclamation will be critical to optimization across the storage stack, especially at the physical array level, and Storage Foundation plays a key role in enabling organizations to simplify array migrations for tuning overall array allocations or moving data from one physical tier of storage to another.

Logical

Moving along the stack to the “Logical” level, we see the storage that has been configured and apportioned into LUNs. Again, this type of information can be collected from the array’s device manager, but the OEM management tools provide more of a tactical silo view than the strategic view required to optimize storage. With a view into optimization across your entire storage infrastructure, regardless of hardware vendor, some of the best practices that you can adopt at the logical layer are as follows:

Reduce overhead
The “Logical” level is where the usability of storage takes its first big hit. For example, you may purchase 10TB of storage and choose to configure it as RAID 1 so that it is mirrored. Between the capacity overhead of RAID 1 and the administrative overhead of the array, you will end up with less than half of the 10TB available for allocation. With such a large portion of your storage investment being spent on overhead, it is critical to ensure that the value of the data is matched with the proper physical tier of storage.
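
The arithmetic behind that claim can be sketched as follows; the RAID 1 factor is exact, while the array administrative overhead is an assumed figure for illustration:

RAW_TB = 10.0
RAID1_FACTOR = 0.5          # mirroring halves usable capacity
ADMIN_OVERHEAD = 0.10       # assumed array/metadata overhead fraction

usable = RAW_TB * RAID1_FACTOR * (1 - ADMIN_OVERHEAD)
print(f"{RAW_TB:.0f} TB raw -> {usable:.1f} TB available for allocation")
# -> 4.5 TB, i.e. less than half of the purchase, as noted above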

Just as we saw at the physical level, storage can also be tiered at the logical level. For instance, a business might have ERP data on RAID-10 storage with multiple mirrors and have home directories on RAID-4. Again, with CommandCentral and Storage Foundation, Symantec can help organizations classify data, determine its value to the business, and then map that value to the appropriate physical tier. According to the experts, Tier 2 storage should cost 20% to 30% less than Tier 1, while Tier 3 storage should cost 50% to 60% less ("Tiered Storage Tool Purchases Demand Careful Consideration", ComputerWeekly.com, 2007).

Reduce carrying costs
Similar to un-configured disks at the physical level, “available” and “unallocated” disks have associated carrying costs and should be kept to a minimum for proper supply chain management. This requires a consolidated, global view of storage resources and the ability to compare key forecasting and trending metrics over time. CommandCentral provides this level of visibility and the ability to report against those supply chain metrics in order to decrease inventory costs. Buying just in time instead of just in case decreases inventory costs: if storage hardware costs are declining 20% annually, delaying a purchase by six months saves 10% of the procurement cost.

Claimed

The “claimed” level best exemplifies the disconnect between administrative and political silos, because few organizations have holistic visibility into the entire storage supply chain and the ability to correlate the data sets. As often happens in fast-growing data centers, storage gets “lost” between the array and the host due to the silo views held by both the storage administrator and the server administrator. This problem has only been magnified by the recent adoption of virtualization at the host level. What appears to be an allocated LUN from the array view may not even show up on the server view. With a view into optimization across your entire storage infrastructure, regardless of hardware vendor, virtualization or operating system, some of the best practices that you can adopt are as follows:

Reclaim “unclaimed” orphaned storage
Orphaned storage is any data, file, table space, object, file system, LUN, physical volume or storage device that appears to be in use but has been abandoned or forgotten about. Orphaned storage can result from application or system errors that were not cleaned up after a restart, from system maintenance or upgrades, or from human error. For example, the storage may not have been de-allocated if the system administrator forgot to tell the storage administrator that the storage is no longer being used, or the documentation was not updated to indicate that the storage could be de-allocated and re-provisioned.

Correlate array and host views of storage
Since most organizations lack end-to-end visibility, identifying and reclaiming orphaned storage is a very difficult and time-consuming task. With its single integrated database and unique ability to correlate what the array sees with what the host sees, CommandCentral typically discovers that 7-10% of existing storage assets are orphaned. LUNs in this orphaned state are candidates for immediate reclamation, which simply involves unmasking the LUN and returning it to the storage pool. CommandCentral provides not only the visibility required to identify unclaimed storage but also the reclamation capabilities to get it back.
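
Conceptually, that correlation works like a set difference between the array view and the host view, as in this hypothetical Python sketch (the data structures and names are invented for illustration, not CommandCentral's API):

array_view = {          # LUN -> host it is masked to (from array discovery)
    "lun-001": "hostA",
    "lun-002": "hostA",
    "lun-003": "hostB",
}
host_view = {           # LUNs each host's OS actually sees and claims
    "hostA": {"lun-001"},
    "hostB": {"lun-003"},
}

claimed = set().union(*host_view.values())
orphaned = [lun for lun in array_view if lun not in claimed]
print("unclaimed (orphaned) LUNs:", orphaned)   # -> ['lun-002']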

Consumption

The “consumption” level also exemplifies the disconnect between administrative and political silos, and it helps identify another type of orphaned storage known as “unassigned”. Again, without end-to-end visibility and correlation capabilities along the storage supply chain, organizations cannot identify storage that has been allocated to a host, and claimed by that host, but never assigned to a DB, FS or VM. Additionally, the consumed level is the point in the supply chain where most over-provisioning occurs, helping create the poor utilization rates of 40% that the average company sees (Russell, 2008).

Reclaim “unassigned” orphaned storage
This second type of orphaned storage, known as “unassigned”, typically accounts for 2-5% of storage assets for organizations without an end-to-end view. LUNs in this orphaned state are also candidates for immediate reclamation, which simply involves removing the OS handle, unmasking the LUN and returning it to the storage pool. CommandCentral provides the visibility required to identify unassigned storage and reclaim those orphaned assets.

Implement Thin Provisioning
Over-provisioning is the result of the following typical scenario: the application owner asks for 300GB, the DBA asks for 325GB, the server admin asks for 350GB and the storage admin actually provisions 400GB. A year later the application is only using 100GB, but the company is paying for 400GB (25% utilization). Thin Provisioning is expected to dramatically decrease the costs of over-provisioning by allowing arrays to present LUNs to servers without dedicating the disk space required to support the LUN upfront. At LUN creation time, a thin provisioned LUN has no disk space behind it. Actual disk space is only allocated to the LUN when writes are performed. For instance, if 100GB of writes are performed on a 400GB thin provisioned LUN, there would only be 100GB of actual disk space behind that LUN, plus a small amount for overhead.
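
A minimal Python model of the allocate-on-write behavior described above (an illustration of the concept, not any vendor's implementation):

class ThinLUN:
    EXTENT = 1  # GB per allocation extent (illustrative granularity)

    def __init__(self, advertised_gb):
        self.advertised_gb = advertised_gb
        self.allocated = set()          # extents with real disk behind them

    def write(self, offset_gb, length_gb):
        for extent in range(int(offset_gb), int(offset_gb + length_gb)):
            self.allocated.add(extent)  # allocate on first write only

    @property
    def physical_gb(self):
        return len(self.allocated) * self.EXTENT

lun = ThinLUN(400)        # server sees a 400 GB LUN
lun.write(0, 100)         # application writes 100 GB
print(lun.physical_gb)    # -> 100 GB of actual disk consumed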

The challenge with Thin Provisioning is that when deploying it into an existing environment, ensuring that all host filesystems and volume managers are thin friendly can be a daunting task. In order to address this challenge, Storage Foundation complements the hardware vendor’s array-based thin storage by providing the only cross-platform host I/O stack that is thin friendly. Additionally, with SmartMove, Storage Foundation enables organizations to migrate online and reclaim unused space without requiring any steps other than enabling SmartMove on the host and mirroring the standard LUN to a thin LUN. This works by leveraging the host file system’s knowledge of which blocks actually carry data and which do not. This knowledge is then transferred to the host volume manager so that only the blocks that contain data are copied. By only writing blocks with data to the target LUN, all unused space is automatically reclaimed as part of the online migration. With Storage Foundation’s Thin Provisioning and SmartMove capabilities, utilization efficiency can be automatically improved, without heavy administrative overhead. This enables companies to purchase less storage capacity up front and defer capacity upgrades based on actual business usage.
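
The copy-only-used-blocks idea can be sketched as follows; this Python snippet illustrates the concept rather than SmartMove's actual mechanics:

def migrate_to_thin(source_blocks, allocation_bitmap):
    """Copy only the blocks the file system marks as in use."""
    target = {}
    for block_no, in_use in enumerate(allocation_bitmap):
        if in_use:
            target[block_no] = source_blocks[block_no]   # copy real data
        # never-written blocks are simply not copied, so the thin
        # target allocates no space for them
    return target

source = ["data"] * 100 + [None] * 300          # 100 of 400 blocks in use
bitmap = [b is not None for b in source]
thin_target = migrate_to_thin(source, bitmap)
print(f"copied {len(thin_target)} of {len(source)} blocks")  # -> 100 of 400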

Leading industry analysts expect Thin Provisioning to have a profound impact on how storage is managed in large datacenters. Thin Provisioning can typically drive utilization to 85% of capacity (Zaffos, 2007), and with “only 25% of the total physical disk allocation for a standard system typically being needed” (Williams, 2008), the savings can equate to an organization's entire storage budget for the next two years. With Storage Foundation Thin Provisioning and SmartMove capabilities, Symantec can deliver these savings today.

Host usage

The best way to optimize storage at the “host usage” level is by adopting capacity management and planning practices for the filesystem and volume manager. By gaining insight into the true utilization of the storage that has been allocated to the host, organizations can determine whether or not they have over-allocated storage. This is the level where companies discover that they have only utilized 35% (Gilmor, 2008) of allocated storage. Without a well-defined storage capacity management process in place, it has been nearly impossible to accurately determine storage utilization and, subsequently, true storage needs. This poor utilization has created an enormous opportunity to achieve greater value from existing storage assets while delivering extraordinary capital and operational savings year over year. For large companies, the compounded savings can quickly reach tens of millions of dollars.

Reclaim “underutilized” storage from the filesystem and volume
The first step in reclaiming underutilized storage at the host level is to identify the utilization of the filesystems and volumes. CommandCentral provides this level of visibility and enables organizations to identify inefficiencies at the host consumption level. For the next step, Symantec complements the solution with the Storage Foundation Volume Manager, which gives customers a mechanism to reclaim inefficiently used storage. With Storage Foundation, customers can dynamically resize inefficiently sized volumes to something more appropriate. For instance, in the previous over-provisioned example at the consumed level, if 400GB had been allocated but only 100GB was being used (25%), we could resize the volume to something more appropriate, say 150GB. This would increase the utilization rate from 25% to 67% (a 42-percentage-point improvement). Imagine reclaiming 42% of the existing storage assets that are on the floor. The savings on deferred storage costs would then be in direct correlation with the rate of data growth. For example, if the rate of data growth were 50% and 42% of existing assets were reclaimed, 92% of planned storage purchases could be deferred. One of Symantec’s largest customers adopted a zero-growth storage initiative based on this principle and nearly eliminated their need to purchase additional storage in 2008.
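
The resize arithmetic from the example above, worked through in a short Python sketch using the document's figures:

allocated, used, resized = 400.0, 100.0, 150.0   # GB, from the example above

before = used / allocated        # utilization before the resize
after = used / resized           # utilization after the resize
reclaimed = allocated - resized  # capacity returned to the pool

print(f"utilization: {before:.0%} -> {after:.0%}")          # 25% -> 67%
print(f"reclaimed: {reclaimed:.0f} GB from this volume")    # 250 GB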

Reclaiming unused capacity at the host level isn’t always a simple task and may have challenges associated with it. For instance, a large volume with a low utilization rate might need to be migrated to a more appropriately sized volume, and that migration may require downtime, which is difficult to come by in many production environments. Additionally, large migration efforts typically require a significant amount of planning and coordination in order to minimize risk and reduce any necessary downtime. In order to address these and other common challenges that organizations face when trying to reclaim “underutilized” storage at the host level, Storage Foundation leverages virtualization. Virtualization is used to decouple the volume from the physical storage location. The advantage of this is that volume management can be performed without any downtime or impact to applications. With Storage Foundation virtualization, LUNs can be pooled together into a single volume at the host level, bypassing the hardware limitations of managing volumes at the array level, even if LUNs reside on different physical arrays. Underutilized storage within these virtualized volumes can be dynamically redistributed to achieve a more efficient utilization rate, and LUNs can be removed from the volume group for reclamation. While virtualization may have only recently gained recognition, Symantec has been doing virtualization at the volume manager level for over 15 years. With this capability Symantec customers have been realizing the benefits of improved utilization and avoiding costly data migrations.

Profile data for a DLM/ILM strategy
The host level is where organizations can perform file level analytics and discover metrics about the data itself. With this type of information they can begin to classify the value of the data that they have and understand its access patterns (the age of the data, when it was last accessed, what type of data it is, etc.). This is the information that organizations need in order to determine what tier of storage data should reside on and begin to establish a data or information lifecycle management strategy.

In most companies, the value of data to the business has not been defined, and as a result, most if not all storage requirements are mapped to the “high-end” of the scale. This is an expensive practice and is common in many environments today, where everyone’s data is the most important and requires Tier-1 storage. Profiling data for classification will help to eliminate this problem by providing user accountability for high-end storage consumption and also laying the foundation for a cost model. Even a simple model of classifying data as having a “High”, “Medium” or “Low” value to the business will enable IT to align the storage infrastructure with the business value of the organization's data.

CommandCentral enables the profiling of data and supports the first step in establishing a lifecycle management strategy. With a clear understanding of the true value of data to the business, customers can ensure that the right data is on the right tier at the right time and at the right price. Since a significant price/GB cost differential exists between different primary and secondary tiers of storage, understanding how data assets are currently tiered and their value to the business can provide extraordinary savings.

Dynamically tier storage

Once data has been profiled and classified into appropriate business-value tiers, organizations can define a finite number of storage tiers and match the data’s value to the appropriate tier of storage. These data classification tiers can follow customer-defined patterns based on how data is profiled, used, aged and so on, and data matching a pattern can then be placed on the appropriate tier. Placing data on the appropriate tier of storage at creation establishes the foundation for a data lifecycle management strategy, but policies need to define not only where data should be created but also how long it should stay there. As the business value and utilization of that information change, organizations need to move data across the different tiers of storage to ensure that the value of the data matches the value of the storage.

The challenge with any lifecycle management strategy is the operational overhead associated with monitoring which data resides where and moving it from tier to tier. Symantec has addressed this challenge with Storage Foundation Dynamic Storage Tiering, which allows organizations to define policies for which tier of storage data should reside on and how long it should remain there, and then dynamically moves the data based on the defined policies.
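
A hypothetical Python sketch of such an age-based placement policy; the thresholds and tier names are invented, and this models the policy idea rather than Dynamic Storage Tiering's actual configuration syntax:

import time

POLICY = [                       # (max days since last access, tier)
    (30, "tier1-fc-15k"),
    (180, "tier2-sata"),
    (float("inf"), "tier3-archive"),
]

def place(last_access_epoch, now=None):
    """Return the tier a file belongs on under the policy."""
    now = now or time.time()
    idle_days = (now - last_access_epoch) / 86400
    for max_idle, tier in POLICY:
        if idle_days <= max_idle:
            return tier

now = time.time()
print(place(now - 5 * 86400, now))     # accessed this week -> tier1-fc-15k
print(place(now - 400 * 86400, now))   # idle over a year   -> tier3-archive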

As was discussed at the physical level, tiered storage offers cost benefits by shifting less valuable data to less expensive storage media and allows for higher storage capacities at a lower cost per GB. CommandCentral can help customers determine the value of their data in order to establish the ideal storage medium for maximum cost savings. Storage Foundation complements the solution by providing policies that place files on different tiers of storage according to their business value, both at the time of creation and as an ongoing policy. For instance, files that are seldom accessed could be placed on lower-end storage, while files that are frequently accessed by multiple users could be automatically placed on faster, more expensive storage. This capability is automatic, transparent to users and simple to manage with a policy-based system.

Combined, CommandCentral and Storage Foundation Dynamic Storage Tiering enable customers to:

  • Realize the cost savings of tiered storage. (Experts note that Tier 2 storage should cost 20% to 30% less than Tier 1, while Tier 3 storage should cost 50% to 60% less.) (Tiered Storage Tool Purchases Demand Careful Consideration, 2007)
  • Lower operational costs by automating data movement between storage tiers. (According to a Strategic Research Corp. study of 40 companies that had already implemented tiered storage, on average, these companies experienced a 40% reduction in storage operating costs.) (Hope, 10)
  • Reduce infrastructure costs by optimizing the use of existing devices through the alignment of data with appropriate storage resources.
  • Improve application performance by moving less frequently accessed data off high-end storage.
  • Provide a foundation for a Data / Information Lifecycle Management strategy.

Application usage

The final level in the storage supply chain that organizations require insight into is the application layer itself. Applications are the most critical consumers of storage resources and often where the greatest opportunities for improvement can be found. Similar to the host level, the best way to optimize storage at the application level is by adopting capacity management and planning practices for databases. By gaining insight into the true utilization of the storage that has been allocated to the database instance, tablespaces and redo logs, organizations can determine whether or not they have over-allocated storage.

Most organizations don’t have visibility at this level unless the application owners are working closely with the storage administrators, and database utilization falls prey to the sentiment that what is out of sight is out of mind. As a result, database utilization tends to be even lower than volume or file system utilization, typically ranging from 25-30%. Just as with unstructured data at the host level, the poor utilization of structured data has created an opportunity for exceptional capital and operational savings.

Reclaim “underutilized” storage from the application

By leveraging API and CLI integration, CommandCentral can provide visibility into Oracle, MS SQL, Sybase, DB2 and Exchange, enabling organizations to see across political boundaries and into the most underutilized storage within the datacenter.

While identification of the problem is the first step in correcting inefficiencies, the real value comes from being able to take action against those inefficiencies and reclaim underutilized storage. Similar to the Host Usage level, CommandCentral provides the visibility and Storage Foundation, in conjunction with the application’s resizing capabilities, provides the necessary tools to get the storage back. Imagine reclaiming 30%, 40% or even 50% of your existing assets by improving database utilization to 60%, 70% or 80%. For many organizations, the savings from deferred storage costs could eliminate the need to purchase storage this year.

Improving ROI

Superior storage analytics, and the management tools to execute against that information, are the foundation of efficiency within the storage infrastructure. Adopting the best practices identified in this guide will maximize your current storage infrastructure, providing an immediate ROI and significantly reducing future capital expenditures. Since ROI is a key concern for most customers, who are already looking at flat or declining IT budgets, it is critical to measure the ROI of adopting storage management in terms of real cost savings and increases in efficiency. Below is a summary of the combined ROI for optimization:

Physical

Commoditizing hardware
Some of our customers have seen their hardware costs drop as much as 50% simply by bringing another vendor in to compete for their storage business.

On demand procurement
If storage hardware costs are decreasing 20% annually and Symantec can help customers better manage their supply chain and procure storage every six months instead of every twelve months, they can save 10% of their storage budget simply through more efficient procurement.

Tiered storage
According to the experts, by establishing multiple tiers of storage, organizations are seeing savings of 10-15% with Tier 2 storage and 30-40% savings at Tier 3.

Logical

Reduce overhead
With such a large portion of the storage investment being spent on overhead it is critical to ensure that the value of the data is matched with the proper physical tier of storage. Experts note that Tier 2 storage should cost 20% to 30% less than Tier 1, while Tier 3 storage should cost 50% to 60% less.

Reduce carrying costs
Buying just in time instead of just in case can decrease inventory costs. If storage hardware costs are decreasing 20% annually delaying a purchase by 6 months can save 10% of the procurement cost.

Claimed
Reclaim “unclaimed” orphaned storage
“Unclaimed” storage typically accounts for 7-10% of storage assets and is available for immediate reclamation.

Consumption
Reclaim “unassigned” orphaned storage
“Unassigned” storage typically accounts for 2-5% of storage assets and is available for immediate reclamation.

Thin Provision storage
Thin Provisioning can typically drive utilization to 85%. If the current average utilization is 35% of capacity and data is growing at 50% compounded annually, a customer can, in some cases, avoid purchasing storage for the next 2 years.

Host Usage
Reclaim “underutilized” storage from the filesystem and volume
With the current average utilization being 35% of capacity and a data growth rate of 50% compounded annually, even a simple improvement of 18 percentage points in utilization (from 35% to 53%) can eliminate the need to purchase storage next year for the average customer.

Profile data for a DLM / ILM strategy
Classify the value of the data and understand its access patterns in order to determine what tier of storage data should reside.

Dynamically tier storage
Experts note that Tier 2 storage should cost 20% to 30% less than Tier 1, while Tier 3 storage should cost 50% to 60% less.

Application usage
Reclaim “underutilized” storage from the application
With utilization at the application level hovering around 30%, reclaiming 30%, 40% or even 50% of your existing assets by improving database utilization to 60%, 70% or 80% could eliminate the need to purchase storage this year.

As the ROI for optimization demonstrates, storage management software can be used as a financial tool to improve the utilization of storage resources. With enterprises focusing more and more on how to squeeze a return from existing assets, the numbers behind a comprehensive storage management solution have never looked better.

Summary

The recent economic downturn and the fact that storage is taking a bigger and bigger chunk of the IT budget have forced organizations to seek ways to reduce the costs associated with storing information, and to better understand what information resources exist and why. By gaining insight into current assets, where they are located, who is consuming them, how effectively they are being utilized and what the value of the data is, companies can gain a handle on part of the IT budget that has gone unchecked for far too long.

Having tools that provide not only visibility into the storage infrastructure, but also the mechanics to address the problems that are found is critical to reducing storage costs. The combined capabilities of CommandCentral and Storage Foundation uniquely enable Symantec customers to realize the value of storage management by reclaiming existing assets and efficiently managing data growth. For many organizations, the savings that have been realized from deferred storage costs have eliminated the need to purchase storage.

Download complete white paper attached.