
Data Center Consolidation: Reducing the Risks

Created: 10 Dec 2012 • Updated: 11 Jun 2014

The benefits of data center consolidation are apparent: it can save millions of dollars and improve the overall quality of service.  Too many data centers add unnecessary cost, chip away at manageability, increase complexity, and contribute to a range of operating inefficiencies.  Yet realizing the economic benefits of consolidation can be elusive; the challenge is to circumvent the pitfalls that complicate the transformation process.

Data center consolidations involve much more than just moving servers or data from one location to another.  Data centers have become a conglomeration of disparate technologies: a combination of virtual, physical, and clustered platforms that operate an assortment of systems and access a range of data tiers stored on multiple arrays from a whole host of hardware vendors.

In addition to this medley of technology, management practices and process maturity can vary widely from one data center to another.  Layer in the “green” environmental issues, such as floor space, power consumption, and cooling, that must also be addressed, and it is easy to see why data center consolidation is one of the thorniest projects any organization can undertake.

Identifying the proper data assets to split and migrate is essential to a successful consolidation.  The general lack of information about data ownership, data usage, and critical data interactions is a formidable challenge.  These basics are required to ensure that sensitive information is protected and that obsolete data is not migrated unnecessarily.
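
As a rough illustration of this inventory step (not any particular product’s method), the Python sketch below walks a file tree and records each file’s owner and last-access age, flagging long-untouched files as candidates for review before migration; the one-year threshold and the /data root are assumptions:

    import os
    import pwd      # POSIX-only: maps numeric uids to user names
    import time

    STALE_DAYS = 365  # assumption: data untouched for a year gets reviewed, not blindly migrated

    def inventory(root):
        """Yield (path, owner, age_in_days, stale_flag) for every file under root."""
        now = time.time()
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue  # broken symlink or permission problem; skip it
                try:
                    owner = pwd.getpwuid(st.st_uid).pw_name
                except KeyError:
                    owner = str(st.st_uid)  # orphaned uid: a data-ownership red flag in itself
                age_days = (now - st.st_atime) / 86400
                yield path, owner, age_days, age_days > STALE_DAYS

    for path, owner, age, stale in inventory("/data"):
        print(f"{'STALE' if stale else 'active':6} {owner:12} {age:7.0f}d {path}")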

Symantec’s Data Insight identifies who owns the data, whether it is used, how it is used, and who uses it.  After the proper data assets have been identified, migrating the data is a time-intensive process, and loss of access to critical data during a migration must be kept to a minimum.  Traditional backup/restore technologies, even with compression and de-duplication capabilities, do not address the time constraints of large data migrations.  Data replication is required to reduce the loss of access to critical data and to maintain time consistency.  Symantec’s Veritas Volume Replicator moves data from any storage array to any storage array and mitigates the loss of access to critical data.
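
Volume Replicator itself works at the storage-volume level; purely to illustrate why incremental replication shrinks the migration window compared with a one-shot full copy, here is a minimal file-level sketch in Python (the checksum-and-copy policy and the paths are assumptions for illustration, not the product’s mechanism):

    import hashlib
    import shutil
    from pathlib import Path

    def digest(path, chunk=1 << 20):
        """Hash a file in 1 MB chunks so large files do not exhaust memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def sync(src: Path, dst: Path):
        """Copy only files that are missing or changed on the target."""
        copied = skipped = 0
        for s in src.rglob("*"):
            if not s.is_file():
                continue
            d = dst / s.relative_to(src)
            if d.is_file() and digest(d) == digest(s):
                skipped += 1      # already current: no data moved, no access interrupted
                continue
            d.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(s, d)    # copy contents and timestamps
            copied += 1
        return copied, skipped

    copied, skipped = sync(Path("/data/source"), Path("/data/target"))
    print(f"copied {copied}, skipped {skipped}")

Repeated passes converge: the first pass moves everything, later passes move only the deltas, so the final freeze-and-cutover window stays short.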

In the disparate, heterogeneous world of data center consolidation, Symantec’s Storage Foundation HA provides interoperable, end-to-end connectivity from any operating system to any storage array, a major advantage in simplifying the migration effort.  Symantec’s Cluster File System provides simultaneous access to data so that mission-critical applications do not lose access to critical data sets.

To reduce the probability and the costly consequences of a fall-back contingency, Symantec’s Disaster Recovery Advisor identifies interdependencies and configuration drift to ensure that the target data center’s infrastructure has the proper configuration to accept the transitioned workload.
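
Conceptually, a drift check compares a configuration snapshot of the source environment against the target and reports every mismatch before cutover.  A minimal Python sketch of that idea follows; the setting names and values are hypothetical:

    def drift(source: dict, target: dict) -> dict:
        """Return settings that are missing or different on the target."""
        return {
            key: {"expected": expected, "actual": target.get(key)}
            for key, expected in source.items()
            if target.get(key) != expected
        }

    # Hypothetical snapshots of kernel and storage settings on each side.
    source = {"kernel.shmmax": "68719476736", "multipath": "enabled", "hba_queue_depth": "32"}
    target = {"kernel.shmmax": "33554432", "multipath": "enabled"}

    for key, gap in drift(source, target).items():
        print(f"DRIFT {key}: expected {gap['expected']}, found {gap['actual']}")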

Data center consolidations are complex, but Symantec helps organizations simplify the effort and manage its many variables by providing compatibility across a disparate data center world.

Blog Author:
Mr. Wenk is Principal Resiliency Architect for Symantec’s Storage and Availability Management Group. He has consulted worldwide with large Fortune 500 customers, generating demand for cloud infrastructures and architecting private cloud solutions for technology-intensive organizations in over 20 countries, tackling some very challenging, complex, and ambiguous problems. His experience includes developing architectures and strategies for highly available, resilient, and secure infrastructures in heterogeneous IT environments. He has performed quantitative operational risk assessments that were used to justify the significant investments required to build, transform, and maintain resilient infrastructures; he has performed technology assessments, developed IT consolidation and transition strategies, and developed site-selection criteria for complex heterogeneous technology consolidations. In addition, he has developed charging methodologies and performed capacity planning and performance evaluations in large, complex IT environments. Dennis has developed a number of risk-based services that quantify the return on technology investments that increase resiliency and improve continuity programs.

His background includes experience with EMC Consulting as Senior Cloud Architect; with Hitachi Data Systems as Principal Global Solution Architect for High Availability Solutions; with IBM Global Network as an Outsourcing Project Executive; with Comdisco as Western Director of Technology Consulting; with KPMG as Senior Manager and Group Leader for IT Operations and Transformations; and with Heller Financial as VP of Information Processing. Dennis Wenk earned an MBA in Accounting and Finance and a BS in Computer Science from Northern Illinois University. He is a Certified Information Systems Auditor (CISA), a Certified Data Processor (CDP), and a Certified Systems Professional (CSP), and is certified in ITIL Service Management. He was awarded Best Management Paper by the Computer Measurement Group, and he currently sits on the Advisory Board for Continuity Insights and serves as its Technology Chair. He has been the Cloud Special Interest Group Leader for the Outsourcing Institute and the Business Continuity Focus Expert for the Information Technology Infrastructure Management Group, and he is an advisor to the Business Continuity Services Group.

Dennis has written award-winning professional articles and white papers, and has been published in Information Week, Computer Performance Review, Trends and Topics, Continuity Insights, Infosystems, Computer Measurement Group, and DR Journal. He is a regular speaker at worldwide industry conferences. Current topics include: ‘3 Simple Complexities of Data Protection’, ‘Think About Never Failing, Not How To Recover’, ‘Focus On The Largest Source Of Risk: The Data Center’, ‘Risk Economics’, ‘Gaining Competitive Advantage: The Myth of the Resiliency Paradox’, ‘Eco-Friendly Data Center’, ‘Virtualization, a Resiliency Enabler’, ‘Economic Impact of Interruptions’, ‘Risk-based Business Continuity’, ‘High-Stakes Business Impact Analysis’, ‘A Risk-Based Approach to Internal Controls’, and ‘Resiliency: Clearing the Five Nines Hurdle’.