Storage & Clustering Community Blog
dennis_wenk | 04 Dec 2014 | 0 comments

Theoretically, a risk matrix would seem to be a reasonable way of prioritizing risk management actions. Unfortunately, there are three major problems with risk matrices as they are currently being used:

  1. The first is that neither the cost nor the effectiveness of proposed mitigation actions can be evaluated against the identified risks.
  2. The second is that the evaluation scale is inadequate to represent the real world and provides no rational decision-making intelligence: what does a 'High Risk' rating actually mean, and what qualifies as a 'Severe' impact?  These measurements are simply left to one's subjective judgment.
  3. Finally, the risk matrix does not take into account the fact that each risk may have different impacts on various business systems.  It is quite possible that a risk could have no impact on one system and a severe impact on another, as the sketch after this list illustrates.
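
A minimal sketch (all figures hypothetical) makes the point concrete: two risks that land in the very same matrix cell can carry very different expected losses, and quantifying probability and impact exposes the difference that the matrix hides.

```python
# Hypothetical figures: two risks a typical 5x5 matrix would place in the
# SAME cell (likelihood "High", impact "Severe"), yet whose expected
# annual losses differ by an order of magnitude.
risks = [
    {"name": "SAN controller failure", "annual_probability": 0.20, "impact_usd": 250_000},
    {"name": "Full datacenter outage",  "annual_probability": 0.20, "impact_usd": 2_500_000},
]

for risk in risks:
    # Expected annual loss = probability of occurrence x impact if it occurs.
    expected_loss = risk["annual_probability"] * risk["impact_usd"]
    print(f'{risk["name"]}: expected annual loss = ${expected_loss:,.0f}')

# SAN controller failure: expected annual loss = $50,000
# Full datacenter outage: expected annual loss = $500,000
```

The same calculation can be repeated per business system, which also addresses the third problem directly.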

...

dennis_wenk | 03 Dec 2014 | 0 comments

Few markets have grown as quickly or have caused as much disruption as cloud computing. According to IDC, the cloud computing market will surge by 25% in 2014.  The market will continue to expand rapidly as enterprise organizations realize the significant impact that embracing the cloud can bring in terms of productivity, agility, and competitiveness.

Security has often been cited as a reason NOT to adopt cloud services. In response to this concern, Cloud Service Providers (CSPs) are investing more in security; as a result, security is now becoming baked into the cloud service.  This fundamental shift caused by cloud computing continues to disrupt traditional security markets, and this disruptive force is different from the usual competitive rivalry.  To get a better understanding of why cloud computing is such a market disruptor, a quick “Five Forces” analysis is in order.

...

dennis_wenk | 03 Dec 2014 | 0 comments

A deduplicated backup copy of data will certainly save on storage costs, but does it provide actual protection, or just a false sense of security?

Most enterprise IT organizations are managing at least 100 TB of data, and some have even more these days.  So 100 TB seems a reasonable amount of data for a few calculations with regard to data transfer time and cost.

It takes 6 minutes to transfer 1 TB of data over Fibre Channel and over 2 hours (~133 minutes) to move 1 TB over GigE[1].  That means it takes 10 hours to transfer 100 TB over Fibre Channel and ~9 days (~222 hours) over GigE[2].

Deduplication of the data can reduce bandwidth and storage during the backup, and that is a good thing.  However, it takes an hour to rehydrate 1.2 TB of deduplicated data back to a usable format.  At that rehydration rate, it will take...
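
A quick check of the arithmetic, using only the per-TB rates quoted above (no new measurements assumed), shows where these figures come from:

```python
# Back-of-the-envelope timings for 100 TB, using the rates quoted above:
# 6 min/TB over Fibre Channel, ~133 min/TB over GigE, and a rehydration
# rate of 1.2 TB/hour for deduplicated data.
DATA_TB = 100

fc_minutes_per_tb = 6       # Fibre Channel
gige_minutes_per_tb = 133   # Gigabit Ethernet
rehydrate_tb_per_hour = 1.2

fc_hours = DATA_TB * fc_minutes_per_tb / 60
gige_hours = DATA_TB * gige_minutes_per_tb / 60
rehydrate_hours = DATA_TB / rehydrate_tb_per_hour

print(f"Fibre Channel transfer: {fc_hours:.0f} hours")
print(f"GigE transfer:          {gige_hours:.0f} hours (~{gige_hours/24:.1f} days)")
print(f"Rehydration:            {rehydrate_hours:.0f} hours (~{rehydrate_hours/24:.1f} days)")

# Fibre Channel transfer: 10 hours
# GigE transfer:          222 hours (~9.2 days)
# Rehydration:            83 hours (~3.5 days)
```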

dennis_wenk | 12 Mar 2013 | 0 comments

Information Technology (IT) is tightly integrated with the business; it has transformed the way we do business.  Nicholas G. Carr points out in his seminal Harvard Business Review article IT Doesn’t Matter that the capital investment in IT is significant: “nearly 50% of capital expenditures by American companies and more than $2 trillion a year globally are spent on IT. …no one would dispute that information technology has become the backbone of commerce.  It underpins the operations of individual companies, ties together far-flung supply chains, and, increasingly, links businesses to the customers they serve.  Hardly a dollar or euro changes hands anymore without the aid of computer systems”.  Technology is the fundamental infrastructure for the modern business.

Carr continues: “Today, an IT disruption can paralyze a company’s ability to make products, deliver its services...

dennis_wenk | 17 Jan 2013 | 0 comments

A principal challenge many enterprises face is identifying exposures within their complex IT infrastructures.  There are considerable business dependencies on this strategic resource, and weaknesses within the IT infrastructure may lead to serious business interruptions.  It is not enough, however, merely to identify weaknesses; the impact of those weaknesses must also be clearly understood and quantified.  This is an important point because it is difficult to know how much to invest to strengthen the infrastructure unless there is a sense of the size of the risk to the organization.

IT infrastructures can fail due to a wide range of events.  These events can be as simple as a process failure or as catastrophic as a full system crash.  It is unrealistic and costly to eliminate each and every harmful event; therefore, a priority ranking based on the consequence of the event is very useful.  Ranking the potentially damaging events based on...
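
As a hedged illustration of such a ranking (all events and figures hypothetical), ordering events by expected annual consequence, that is, frequency multiplied by cost per occurrence, gives a defensible priority list:

```python
# Hypothetical events: rank potentially damaging events by their expected
# annual consequence rather than treating every event as equally urgent.
events = [
    # (event, occurrences per year, cost per occurrence in USD)
    ("Single process failure", 12.0,     5_000),
    ("Storage array outage",    0.5,   400_000),
    ("Full system crash",       0.1, 3_000_000),
]

# Expected annual consequence = frequency x cost per occurrence.
ranked = sorted(
    ((name, freq * cost) for name, freq, cost in events),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, expected_cost in ranked:
    print(f"{name}: expected annual consequence = ${expected_cost:,.0f}")

# Full system crash: expected annual consequence = $300,000
# Storage array outage: expected annual consequence = $200,000
# Single process failure: expected annual consequence = $60,000
```

Note how the frequent-but-cheap event ranks last even though it happens 120 times as often as the crash.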

dennis_wenk | 17 Jan 2013 | 0 comments

Oh yes, you DO need to know probability! Many professionals and thought-leaders have said that ‘there is no reason to know probabilities to know that a big risk exists’ and that it should be intuitively obvious that losing a datacenter would be very bad.  But if IT-infrastructure risk is so intuitively obvious, then the value does not lie in identifying the most serious risks; those risks are self-evident.  The value lies in determining the optimal ‘investment’ to mitigate the most serious risks.  In this context, optimal means allocating the organization’s resources to those actions that will yield the best overall performance.

So even if these intuitive, gut-feeling assessments of the risks are right, they are not the most effective way to justify the appropriate level of investment.  In fact, this is the reason that many in IT find it difficult to provide a valid ROI for HA/DR solutions: they fail to understand the value that...
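
As a rough sketch of that investment logic (all figures hypothetical), the ROI of an HA/DR solution follows from the expected loss it removes versus what it costs, which is exactly the calculation intuition alone cannot supply:

```python
# Hypothetical figures for an HA/DR investment decision.
annual_outage_probability = 0.05    # assumed chance of a serious outage per year
loss_per_outage_usd = 4_000_000     # assumed business loss per outage
mitigation_effectiveness = 0.90     # assumed fraction of loss the solution avoids
annual_solution_cost_usd = 120_000  # assumed yearly cost of the HA/DR solution

# The benefit of mitigation is the expected loss it eliminates.
expected_loss = annual_outage_probability * loss_per_outage_usd
expected_benefit = expected_loss * mitigation_effectiveness
roi = (expected_benefit - annual_solution_cost_usd) / annual_solution_cost_usd

print(f"Expected annual loss without mitigation: ${expected_loss:,.0f}")
print(f"Expected annual benefit of HA/DR:        ${expected_benefit:,.0f}")
print(f"ROI: {roi:.0%}")

# Expected annual loss without mitigation: $200,000
# Expected annual benefit of HA/DR:        $180,000
# ROI: 50%
```

Without the probability term, neither the benefit nor the ROI can be computed at all.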

dennis_wenk | 09 Jan 2013 | 0 comments

A real crisis is happening now, and if we really want to reduce losses for our organizations then we need to adjust our focus.  We don’t have to wait for a pandemic or catastrophe to strike; organizations are already experiencing losses that range between $35 billion and $500 billion per month.  If these losses are the result of best practices that are intended to protect our organizations from crisis, then some might even consider these regulations and best practices to be gravely dysfunctional.

Compliance with federal, state, and international privacy and security laws and regulations often is more an interpretive art than an empirical science—and it is frequently a matter for negotiation.  When business metrics are applied to compliance, many companies decide to deploy as little technology or process as possible—or to ignore the governing laws and regulations completely. Every company weighs the cost of...

dennis_wenk | 07 Jan 2013 | 0 comments

There are many business benefits to the efficiencies that IT provides and the vast majority of functions have been automated.  Today, businesses do more transactions, of greater value, faster than ever before.  This intense dependence on technology has also introduced new risks and vulnerabilities that have large consequences.  One of the primary missions, therefore, for any modern organization is to manage the inherent risk within this complex infrastructure. The only rational reason for spending money to reduce operational risk is the expectation that the benefits outweigh the costs.  

Subjective measures such as risk-tolerance or risk appetite can lead to serious errors of fact, in the form of excessive fear of small risks and neglect of large ones.  The stakes are too great for organizations to rely on intuitive judgments that are error-prone.  Creating infrastructures that increase resiliency requires methods that provide better...

dennis_wenk | 18 Dec 2012 | 0 comments

Operational risk is everywhere in the business environment; every decision has its share of uncertainty.  Nothing is a sure thing, yet when we make important decisions we certainly want to “keep the odds in our favor”.  I have often heard terms like ‘risk appetite’, ‘risk tolerance’, or ‘risk aversion’ used in reference to making forward-looking choices about operational risk, as if we can rationally and effectively manage risk based on our subjective feelings.  These terms, however, provide little guidance and position risk management in the domain of oracles and soothsayers.  Business is not a game of chance based on our subjective ‘feelings’ regarding operational risk.

The stakes are too high relative to operational risk to leave it to subjective guesses or ‘gut’...

dennis_wenk | 05 Oct 2012 | 0 comments

In the Basel II Accord for international banking, operational risk is defined as the “risk of loss from inadequate or failed internal processes, people, and systems, or from external events”.  When processes, people, or systems fail, whether from internal or external events, the losses can be substantial.  As an example, the Ponemon Institute estimates that organizations worldwide are losing over $35 billion monthly from data center downtime.  Nicholas G. Carr points out in his seminal Harvard Business Review article IT Doesn’t Matter: “today, an IT disruption can paralyze a company’s ability to make products, deliver its services, and connect with its customers, not to mention foul its reputation … even a brief disruption in availability of technology can be devastating.”

There are two primary ways for an organization to increase value.  The first way is to...