Storage & Clustering Community Blog

The Ultimate BCM Goal: ‘Get Out of the Data Center’?

Created: 24 Sep 2012 • Updated: 11 Jun 2014

Recently some have said that the ultimate goal for Business Continuity Management (BCM) practitioners is to get “business continuity management activities out from the computer room and into the business and boardroom”.  Most likely this is a remnant of a bygone era in business continuity: the notion that the computer room, or more appropriately the modern data center, should be relegated to disaster recovery (DR) activities, undeserving of serious attention from the business-oriented BCM practitioner.  Nothing could be further from the truth in today’s business world, and if our ‘best practices’ provide this type of misdirected guidance, then we have completely lost touch with reality.

Technology has transformed the way we do business, and that transformation puts the data center directly into the domain of the business-oriented practitioner, because the data center is the largest source of operational risk in any organization.  If BCM really wants to make a strong, significant contribution to mitigating risk, then the data center is precisely where the practitioner needs to place focus.  We need to put real emphasis on the data center, not disengage from it.  Any self-proclaimed ‘ultimate goal’ that excludes the data center would do more to disconnect from the business than connect with it.  The progressive practitioner should consider a goal taken from the Basel Accord on International Banking: to reduce “the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events.”

Traditional BCM ‘best practices’ have had a long-running bias that favors the ‘Precautionary Principle’, which asserts that it makes sense to take special precautions against worst-case scenarios.  This ‘better safe than sorry’ approach enjoys widespread support within the BCM community, leading to an emphasis on contingency planning, crisis management, and other preparedness activities.  The unintended consequence of the precautionary principle, however, is that the operational aspects of the business have been systematically neglected by BCM, and this might be our biggest blunder.  A wealth of experimental data from behavioral economics and cognitive psychology supports the contention that the precautionary-principle bias creates blinders, causing people to spend a disproportionate amount of time thinking about worst-case scenarios while downplaying or disregarding other risks.  So as we directed our attention to preparedness and worst-case events, the data center was rapidly expanding its influence over the business.

Nicholas G. Carr points out in his seminal Harvard Business Review article ‘IT Doesn’t Matter’ that “…no one would dispute that information technology has become the backbone of commerce.  It underpins the operations of individual companies, ties together far-flung supply chains, and, increasingly, links businesses to the customers they serve.  Hardly a dollar or euro changes hands anymore without the aid of computer systems”.  Technology is the fundamental infrastructure for the modern business.  Carr also points out that the capital investment in IT is significant: “nearly 50% of capital expenditures by American companies and more than $2 trillion a year globally are spent on IT”.  While a $2 trillion annual investment is truly a big deal to the business and to the boardroom, it is not the magnitude of the investment that justifies a practitioner’s attention; our attention should center on reducing the risk of loss to our organizations, not simply on the size of the investment.

Allowing technology to be used unabated generates risks that have evaded our diligent attention, overshadowed by our pursuit of worst-case scenarios.  Technology has accelerated throughout the organization, and the data center has inherited the organization’s characteristics.  The data center is now fully embedded in the operating fabric of the entire business.  The growing complexity and increased dependence on technology have introduced new risks and transformed some benign ones; what was once considered a ‘minor’ problem, like a software error, can now cause the same economic loss to a company as a fire.  One bad bit can ruin the entire day.  Lack of sufficient attention to these new risks has extreme consequences for our organizations.

Carr continues, “Today, an IT disruption can paralyze a company’s ability to make products, deliver its services, and connect with its customers, not to mention foul its reputation. …even a brief disruption in availability of technology can be devastating.”  A 2010 study by the Ponemon Institute estimates a whopping 2.84 million hours of annual data center downtime, with outages costing an estimated average of $300k per hour; the study puts the total loss at $426 billion a year.  Roger Sessions also attempts to quantify the problem in ‘The IT Complexity Crisis: Danger and Opportunity’, in which he calculates that “IT failures are costing businesses $6.18 trillion per year worldwide. …The cost of IT failure is paid year after year, with no end in sight. …If this trend continues, within another five years or so a total IT meltdown may be unavoidable”.  To substantiate Roger Sessions’ calculations, blogger Michael Krigsman invited two qualified experts, Mr. Gene Kim (the founder and former CTO of Tripwire, Inc.) and his colleague Mr. Mike Orzen (co-author of “Lean IT”), to re-assess Sessions’ work; these two experts calculated “the global impact of IT failure as being $3 trillion annually”.  While the numbers rest on a large number of calculations and extrapolations, the crucial point is that they demonstrate the staggering impact IT failures have on the business world.  It is important to note that these losses are not representations of “potential for loss” or even “expected loss”; they are actual losses realized from IT failures.  So whether it is $3 trillion, $6 trillion, or even a mere $426 billion annually, the losses due to IT failure are huge and they are real; we don’t have to wait for a pandemic or catastrophe to strike, because these losses are occurring now.
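
To make the arithmetic behind figures like these concrete, here is a minimal sketch of how a downtime-loss estimate is built up: hours of outage multiplied by an average cost per hour.  The input values are illustrative assumptions for a single organization, not figures taken from the Ponemon or Sessions studies.

# Back-of-the-envelope downtime-loss estimate: loss = downtime hours x cost per hour.
# All input values below are illustrative assumptions, not data from the cited studies.
outages_per_year = 4          # assumed number of significant outages in a year
avg_outage_hours = 3.5        # assumed average duration of each outage, in hours
cost_per_hour = 300_000       # assumed average cost of one hour of downtime, in USD

annual_downtime_hours = outages_per_year * avg_outage_hours
annual_loss = annual_downtime_hours * cost_per_hour

print(f"Estimated annual downtime: {annual_downtime_hours:.1f} hours")
print(f"Estimated annual loss:     ${annual_loss:,.0f}")

With these assumed inputs the estimate works out to 14 hours of downtime and about $4.2 million a year for one organization; the industry-wide totals quoted above come from the same kind of arithmetic extrapolated across thousands of data centers.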

The sheer size of the losses from IT failure should serve as a wake-up call for anyone involved in Business Continuity that either our target or our aim is considerably off.  We absolutely must start thinking differently, not only about where we are devoting our efforts but also about where we place our emphasis.  Genuine ‘best practices’ must make certain that the ‘real and serious’ dangers receive sufficient attention.  Understanding how to prevent the continuing spiral of IT failures will have substantial benefits for our companies.  In these difficult economic times, a great deal of good can be done if we address the real and serious risks.  Our duty must be to ensure that the risks are quantified, communicated, and managed clearly and proactively.  In the words of Nicholas G. Carr’s ‘New Rules for IT Management’: avoid risks and prepare our organizations for “technical glitches, outages, and security breaches, shifting their attention from opportunities to vulnerabilities”.

For Business Continuity ‘best practices’ that are intended to protect our organizations from crisis, losses of between $35 billion and $500 billion per month (the annual estimates above spread across twelve months, as sketched below) are not good results; some might even consider these ‘best practices’ to be dysfunctional.  It certainly would not be rational to promote ‘best practices’ that do not meet the real needs of today’s technologically rich organization.  The real crisis is happening in information technology right now, and if we want to reduce losses for our organizations, then we will need to adjust our focus.
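
The monthly range above follows directly from the annual estimates already cited; a quick sketch of the conversion (the source labels are mine, the dollar figures are the ones quoted earlier):

# Converting the cited annual loss estimates into approximate monthly figures.
annual_estimates_usd = {
    "Ponemon (2010)": 426e9,    # $426 billion per year
    "Kim & Orzen":    3.0e12,   # $3 trillion per year
    "Sessions":       6.18e12,  # $6.18 trillion per year
}
for source, annual_loss in annual_estimates_usd.items():
    monthly_billions = annual_loss / 12 / 1e9
    print(f"{source}: roughly ${monthly_billions:,.1f} billion per month")

Spread across twelve months, the three annual estimates work out to roughly $35 billion, $250 billion, and $515 billion, which is where the $35 billion to $500 billion monthly range comes from.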

Any standard, any best practice, any expert guidance, whether new or old, whether international or domestic, that moves BCM in another direction would be a terrible and costly mistake.  If you are not focused on the technology, you are not focused on the business.  Serious BCM practitioners don’t need to get out of the computer room; what they need to do is get to the very heart of it.

Blog Author:
Mr. Wenk is Principal Resiliency Architect for Symantec’s Storage and Availability Management Group. He has consulted worldwide with large Fortune 500 customers, generating demand for cloud infrastructures and architecting private cloud solutions for technology-intensive organizations in over 20 different countries, tackling some very challenging, complex, and ambiguous problems. His experience includes developing architectures and strategies for highly available, resilient and secure infrastructures in heterogeneous IT environments. He has performed quantitative operational risk assessments that were used to justify the significant investments required to build, transform and maintain resilient infrastructures; he has performed technology assessments, developed IT consolidation and transition strategies, and developed site-selection criteria for complex heterogeneous technology consolidations. In addition, he has developed charging methodologies and performed capacity planning and performance evaluations in large, complex IT environments. Dennis has developed a number of risk-based services that quantify the return on technology investments that increase resiliency and improve continuity programs. His background includes experience with EMC Consulting as Senior Cloud Architect; with Hitachi Data Systems as Principal Global Solution Architect for High Availability Solutions; with IBM Global Network as an Outsourcing Project Executive; with Comdisco, where he was Western Director of Technology Consulting; with KPMG, where he was Senior Manager, Group Leader for IT Operations and Transformations; and with Heller Financial, where he served as VP/Information Processing. Dennis Wenk earned an MBA in Accounting and Finance and a BS in Computer Science from Northern Illinois University. He is a Certified Information Systems Auditor (CISA), Certified Data Processor (CDP), and Certified Systems Professional (CSP), and is certified in ITIL Service Management. He was awarded Best Management Paper by the Computer Measurement Group, and he currently sits on the Advisory Board for Continuity Insights and serves as their Technology Chair. He has served as the Cloud Special Interest Group Leader for the Outsourcing Institute and as the Business Continuity Focus Expert for the Information Technology Infrastructure Management Group. He is an advisor to the Business Continuity Services Group. Dennis has written award-winning professional articles and white papers and has been published in Information Week, Computer Performance Review, Trends and Topics, Continuity Insights, Infosystems, Computer Measurement Group, and DR Journal. He is a regular speaker at industry conferences worldwide. Current areas of topical expertise include ‘3 Simple Complexities of Data Protection’, ‘Think About Never Failing, Not How To Recover’, ‘Focus On The Largest Source Of Risk: The Data Center’, ‘Risk Economics’, ‘Gaining Competitive Advantage: The Myth of the Resiliency Paradox’, ‘Eco-Friendly Data Center’, ‘Virtualization, a Resiliency Enabler’, ‘Economic Impact of Interruptions’, ‘Risk-based Business Continuity’, ‘High-Stakes Business Impact Analysis’, ‘A Risk-Based Approach to Internal Controls’, and ‘Resiliency: Clearing the Five Nines Hurdle’.