Storage & Clustering Community Blog

The PROBLEM with ‘Best Practices’ for Business Continuity

Created: 04 Oct 2012 • Updated: 11 Jun 2014 • 1 comment

“Best Practices” is a popular expression of the intent to manage business continuity prudently. Best Practices are often seen as a way to sidestep both the quantification of operational risks and the objective evaluation of the cost-benefit of any proposed mitigation actions. There are several reasons why Best Practices are not, in fact, best for business continuity purposes.

  • It is unreasonable to assume that a single best practice could optimally answer the business continuity questions of multiple organizations. Organizations differ widely in their maturity levels, the technologies they deploy, and their vulnerabilities.
  • Given the wide assortment of published ‘best practices’, which of them really is ‘best’ for any particular circumstance?
  • No organization could hope to implement all of the thousands of published best practices, and there is no way to know which of them are the most cost effective.
  • Organizations do not have unlimited resources, yet the risks they face seem unlimited. Best practices do not help identify which risks are the most serious.

The fundamental requirement for business continuity is to (1) measure and evaluate risks objectively, (2) determine which risks are the most serious, and (3) make a rational choice about how best to invest scarce resources to optimally reduce operational risk.
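
The post argues for quantification but does not prescribe a method. The sketch below is not from the original; it is a minimal illustration, with purely hypothetical risks, probabilities, and dollar figures, of one common way to make those three steps concrete: rank risks by annualized loss expectancy (likelihood times impact) and rank proposed mitigations by net benefit (expected loss avoided minus the cost of the control).

```python
# Minimal sketch of quantitative risk ranking; all figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    annual_probability: float   # expected occurrences per year
    impact_per_event: float     # loss per occurrence, in dollars

    @property
    def ale(self) -> float:
        """Annualized loss expectancy = frequency x impact."""
        return self.annual_probability * self.impact_per_event

@dataclass
class Mitigation:
    name: str
    risk: Risk
    annual_cost: float
    probability_reduction: float  # fraction of the risk frequency removed

    @property
    def net_benefit(self) -> float:
        """Expected annual loss avoided minus the cost of the control."""
        return self.risk.ale * self.probability_reduction - self.annual_cost

risks = [
    Risk("Data-center power failure",    0.20,   500_000),
    Risk("Ransomware outbreak",          0.10, 2_000_000),
    Risk("Single-server hardware fault", 2.00,    10_000),
]

mitigations = [
    Mitigation("Second power feed + UPS",   risks[0], 40_000, 0.80),
    Mitigation("Immutable offsite backups", risks[1], 60_000, 0.70),
    Mitigation("Cluster the application",   risks[2], 15_000, 0.90),
]

# Step (2): which risks are the most serious?
for r in sorted(risks, key=lambda r: r.ale, reverse=True):
    print(f"{r.name:35s} ALE = ${r.ale:>12,.0f}")

# Step (3): which investments return the most risk reduction per dollar spent?
for m in sorted(mitigations, key=lambda m: m.net_benefit, reverse=True):
    print(f"{m.name:35s} net benefit = ${m.net_benefit:>12,.0f}")
```

Ranking investments by net benefit, rather than by what a generic best-practice checklist prescribes, is the kind of rational, resource-constrained choice that step (3) calls for.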

Blog Author:
Mr. Wenk is Principal Resiliency Architect for Symantec’s Storage and Availability Management Group. He has consulted worldwide with large Fortune 500 customers in over 20 countries, generating demand for cloud infrastructures and architecting private cloud solutions for technology-intensive organizations, and tackling some very challenging, complex, and ambiguous problems. His experience includes developing architectures and strategies for highly available, resilient, and secure infrastructures in heterogeneous IT environments. He has performed quantitative operational risk assessments used to justify the significant investments required to build, transform, and maintain resilient infrastructures; conducted technology assessments, IT consolidation, and transition strategies; and developed site-selection criteria for complex heterogeneous technology consolidations. In addition, he has developed charging methodologies and performed capacity planning and performance evaluations in large, complex IT environments. Dennis has developed a number of risk-based services that quantify the return on technology investments that increase resiliency and improve continuity programs.

His background includes experience with EMC Consulting as Senior Cloud Architect; Hitachi Data Systems as Principal Global Solution Architect for High Availability Solutions; IBM Global Network as an Outsourcing Project Executive; Comdisco, where he was Western Director of Technology Consulting; KPMG, where he was Senior Manager and Group Leader for IT Operations and Transformations; and Heller Financial, where he served as VP/Information Processing. Dennis Wenk earned an MBA in Accounting and Finance and a BS in Computer Science from Northern Illinois University. He is a Certified Information Systems Auditor (CISA), Certified Data Processor (CDP), and Certified Systems Professional (CSP), and is certified in ITIL Service Management.

He was awarded Best Management Paper by the Computer Measurement Group, and he currently sits on the Advisory Board for Continuity Insights and serves as its Technology Chair. He has served as Cloud Special Interest Group Leader for the Outsourcing Institute and as Business Continuity Focus Expert for the Information Technology Infrastructure Management Group, and he is an advisor to the Business Continuity Services Group. Dennis has written award-winning professional articles and white papers, and has been published in Information Week, Computer Performance Review, Trends and Topics, Continuity Insights, Infosystems, Computer Measurement Group, and DR Journal. He is a regular speaker at worldwide industry conferences. Current topics of expertise include: ‘3 Simple Complexities of Data Protection’, ‘Think About Never Failing, Not How To Recover’, ‘Focus On The Largest Source Of Risk: The Data Center’, ‘Risk Economics’, ‘Gaining Competitive Advantage: The Myth of the Resiliency Paradox’, ‘Eco-Friendly Data Center’, ‘Virtualization, a Resiliency Enabler’, ‘Economic Impact of Interruptions’, ‘Risk-based Business Continuity’, ‘High-Stakes Business Impact Analysis’, ‘A Risk-Based Approach to Internal Controls’, and ‘Resiliency: Clearing the Five Nines Hurdle’.

Comments (1)

jimnelson:

Dennis, thank you for sharing some good, thought-provoking ideas. I would be interested in your views on the difference or applicability of requirements, regulations, best practices, "the way we do things around here", and standards that organizations can consider for their unique needs and requirements. (I recognize these would differ for the ICT organization, the DR team, the BCM folks, Risk Management, and of course the business owners.)
