"Only when the tide goes out do you discover who's been swimming naked."
The idea of risk management is in the news lately, given the turmoil in the financial markets. Working in data protection, we think long and hard about risk management. Our data protection products give an enterprise significant protection in the event of an actual disaster, man-made or otherwise. But disasters, while an important factor when planning data protection for an enterprise, are in reality low-probability/high-impact events. The 2007 Symantec State of the Data Center report shows that data center managers know that downtime is generally not caused by a disaster.
[Chart: Chief reasons for downtime]
As you can see, in the data center the "tide" that goes out is often just a human error or a hardware failure. Plenty of supporting evidence corroborates this. For instance, in a recent survey of NetBackup users, we found overwhelming evidence that restore requests are driven primarily by an individual user deleting files or directories. Nor can you protect yourself from human error simply by relying on hosted services. Even highly reliable "storage cloud" or hosted services can experience significant outages, and those outages can often be traced to human error. Do an Internet search on "cloud outage" if you have doubts.
When I mull this over, I come to two conclusions. First, when it comes to backup and recovery operations, you want the process to be as automated as possible. That requires a central catalog managing the process, so that no one has to remember how to restore the data correctly. You probably need to restore data because of a human error in the first place; why introduce the possibility of more human error during the restore? Second, when you decide on data protection architectures, both the strategies and the return-on-investment calculations have to factor in not only low-probability/high-impact events like a natural disaster, but also the high-probability daily events of human error.
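That second conclusion can be made concrete with a back-of-the-envelope annualized loss expectancy (ALE) calculation. The figures below are hypothetical, chosen purely to illustrate how a frequent small loss can outweigh a rare catastrophic one:

```python
def ale(annual_rate_of_occurrence: float, single_loss_expectancy: float) -> float:
    """Annualized loss expectancy: expected events per year times cost per event."""
    return annual_rate_of_occurrence * single_loss_expectancy

# Hypothetical figures, for illustration only:
# a site-wide disaster once every 50 years costing $2,000,000,
# vs. an accidental file deletion twice a week costing $500 in lost work.
disaster_ale = ale(1 / 50, 2_000_000)   # $40,000 per year
human_error_ale = ale(2 * 52, 500)      # $52,000 per year

print(f"Disaster ALE:    ${disaster_ale:,.0f}")
print(f"Human-error ALE: ${human_error_ale:,.0f}")
```

Even with the disaster costing four thousand times more per event, the mundane failure mode dominates the annual expectation in this sketch, which is exactly why the everyday risks belong in the ROI calculation alongside the dramatic ones.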
Technical Product Manager