Storage & Clustering Community Blog

The BIA Dilemma

Created: 07 Jan 2013 • Updated: 11 Jun 2014

IT delivers enormous efficiency gains, and the vast majority of business functions are now automated.  Today, businesses process more transactions, of greater value, faster than ever before.  This intense dependence on technology has also introduced new risks and vulnerabilities with large consequences.  A primary mission for any modern organization, therefore, is to manage the inherent risk within this complex infrastructure.  The only rational reason for spending money to reduce operational risk is the expectation that the benefits outweigh the costs.

Subjective measures such as risk tolerance or risk appetite can lead to serious errors of fact, in the form of excessive fear of small risks and neglect of large ones.  The stakes are too great for organizations to rely on intuitive judgments that are error-prone.  Creating infrastructures that increase resiliency requires methods that provide better guidance regarding the large number of competing choices that reduce risk.  Little guidance can be gained from measuring operational risk either on a subjective low/medium/high scale or on its economic loss potential alone.

Money is the appropriate yardstick for measuring operational risk and for prioritizing the myriad options for reducing it; traditional methods based on Loss Potential, as in a BIA, are fatally flawed.  Answering operational risk questions involves making a number of tradeoffs that go far beyond simple awareness, intuitive judgment, and best practices.  The correct economic metric is Expected Loss, which ensures more effective priority setting and proper resource allocation.
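To make the distinction concrete, here is a minimal sketch with entirely hypothetical scenarios and figures (none of these numbers come from the article): ranking by raw Loss Potential, as a BIA does, and ranking by Expected Loss (annual probability times loss potential) can produce opposite priorities.

```python
# Hypothetical scenarios: (name, annual probability, loss potential in $).
# A BIA ranks by the third column alone; Expected Loss weights it by likelihood.
scenarios = [
    ("Data center fire",         0.002, 50_000_000),
    ("Storage array failure",    0.20,   1_500_000),
    ("Accidental data deletion", 0.50,     400_000),
]

# Rank by expected loss (probability x impact), highest first.
for name, prob, loss in sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True):
    print(f"{name:25s} loss potential ${loss:>12,}  expected loss ${prob * loss:>10,.0f}")
```

Under these assumed numbers the $50M fire tops a loss-potential ranking, yet its expected loss ($100K/year) is the smallest of the three; the mundane storage failure ($300K/year) is where risk-reduction dollars do the most good.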

While this might seem like a difficult and complex process, it is fairly straightforward.  The first step is to build a Quantitative Operational Risk Model (QORM) that organizes the loss-potential, threat, and vulnerability data and performs the detailed and “what if” calculations.  This process allows us to collect, as well as fine-tune, the data in a timely fashion using the appropriate threat information.
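A “what if” calculation of this kind typically compares the expected loss before and after a candidate control against the control’s cost. The sketch below uses invented figures (a hypothetical replication investment; nothing here is from the article) to show the shape of the tradeoff:

```python
# Hypothetical "what if": is a $150K/year replication investment justified?
baseline_ale  = 0.20 * 1_500_000   # expected annual loss before the control
mitigated_ale = 0.02 * 1_500_000   # expected annual loss if failure rate drops 10x
control_cost  = 150_000            # annual cost of the proposed control

# Positive net benefit means the risk reduction is worth more than it costs.
net_benefit = (baseline_ale - mitigated_ale) - control_cost
print(f"Net annual benefit: ${net_benefit:,.0f}")
```

The same arithmetic, run across many threats and controls, is what lets the model rank competing investments on a single monetary scale.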

The second step is to build an economic database of loss potential, which involves interviews with several internal and external sources.  Once the data has been collected and entered into QORM, an initial verification step validates the data and provides for any necessary adjustments to the risk model.  After those adjustments are complete, QORM calculates the Annualized Loss Expectancy (ALE) and Single Occurrence Loss (SOL), and the detailed analysis begins with the information provided.
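The two figures relate in the standard way: SOL (often called single loss expectancy) is the loss from one occurrence of an event, and ALE weights it by how often the event is expected per year. A minimal sketch, with assumed example values:

```python
def single_occurrence_loss(asset_value, exposure_factor):
    # SOL: the loss from a single occurrence of the event,
    # as the fraction of the asset's value that the event destroys.
    return asset_value * exposure_factor

def annualized_loss_expectancy(sol, annual_rate_of_occurrence):
    # ALE: SOL weighted by the expected number of occurrences per year.
    return sol * annual_rate_of_occurrence

# Hypothetical example: an outage hitting a $2M revenue stream,
# losing 30% per event, expected roughly twice a year.
sol = single_occurrence_loss(2_000_000, 0.30)
ale = annualized_loss_expectancy(sol, 2.0)
print(f"SOL = ${sol:,.0f}, ALE = ${ale:,.0f}")
```

With these assumed inputs, SOL is $600,000 and ALE is $1,200,000; the ALE figure is what feeds the priority-setting comparison across risks.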

QORM makes the estimating process easy and economical, and it supports risk management decision-making by helping managers develop the information needed to make more informed decisions about the investments necessary to reduce operational risk and increase business resiliency.

Blog Author:
Mr. Wenk is Principal Resiliency Architect for Symantec’s Storage and Availability Management Group. He has consulted worldwide with large Fortune 500 customers, generating demand for cloud infrastructures and architecting private cloud solutions for technology-intensive organizations in over 20 countries, tackling some very challenging, complex, and ambiguous problems. His experience includes developing architectures and strategies for highly available, resilient, and secure infrastructures in heterogeneous IT environments. He has performed quantitative operational risk assessments that were used to justify the significant investments required to build, transform, and maintain resilient infrastructures; he has also performed technology assessments, IT consolidation and transition strategies, and developed site-selection criteria for complex heterogeneous technology consolidations. In addition, he has developed charging methodologies and performed capacity planning and performance evaluations in large, complex IT environments. Dennis has developed a number of risk-based services that quantify the return on technology investments that increase resiliency and improve continuity programs. His background includes experience with EMC Consulting as Senior Cloud Architect; Hitachi Data Systems as Principal Global Solution Architect for High Availability Solutions; IBM Global Network as an Outsourcing Project Executive; Comdisco as Western Director of Technology Consulting; KPMG as Senior Manager, Group Leader for IT Operations and Transformations; and Heller Financial as VP/Information Processing. Dennis Wenk earned an MBA in Accounting and Finance and a BS in Computer Science from Northern Illinois University. He is a Certified Information Systems Auditor (CISA), Certified Data Processor (CDP), and Certified Systems Professional (CSP), and is certified in ITIL Service Management.
He was awarded Best Management Paper by the Computer Measurement Group, and he currently sits on the Advisory Board for Continuity Insights and serves as their Technology Chair. He has served as Cloud Special Interest Group Leader for the Outsourcing Institute and as Business Continuity Focus Expert for the Information Technology Infrastructure Management Group. He is an advisor to the Business Continuity Services Group. Dennis has written award-winning professional articles and white papers and has been published in Information Week, Computer Performance Review, Trends and Topics, Continuity Insights, Infosystems, Computer Measurement Group, and DR Journal. He is a regular speaker at worldwide industry conferences. Current topics of expertise include: ‘3 Simple Complexities of Data Protection’, ‘Think About Never Failing, Not How To Recover’, ‘Focus On The Largest Source Of Risk: The Data Center’, ‘Risk Economics’, ‘Gaining Competitive Advantage: The Myth of the Resiliency Paradox’, ‘Eco-Friendly Data Center’, ‘Virtualization, a Resiliency Enabler’, ‘Economic Impact of Interruptions’, ‘Risk-based Business Continuity’, ‘High-Stakes Business Impact Analysis’, ‘A Risk-Based Approach to Internal Controls’, and ‘Resiliency: Clearing the Five Nines Hurdle’.