Six Myths of Information Security (cont'd)
Myth #3 -- Information Classification is a necessary prerequisite to protecting your data
Not only does this myth receive strong implicit backing from many security textbooks; there's also a whole flotilla of startups and a phalanx of security analysts making the claim. To anyone who hasn't seen a Data Loss Prevention (DLP) solution at work in a large enterprise environment, the myth at least looks plausible.
What we see
This myth has received a pretty thorough rebuttal from Data Loss Prevention deployments. Projects that attempt to classify everything first, before moving on to remediation of their data exposure problems, invariably end in one of two ways: 1) the project runs out of time and money before all assets can be classified, or 2) a compliance or breach event tears the team away from classification and forces them to focus on a specific data exposure event.
In fact, at hundreds of deployments across G2000 accounts, we have yet to see a single successful example of the "classify everything first" approach.
So you may ask, "What's the alternative?" And I think you can see where I'm heading with this. Data Loss Prevention capabilities let enterprises see where large-scale exposure of their most confidential data is concentrated. Interviewing the key information asset owners about their top-ranked data loss risks is a quick process, and any decent DLP solution can easily translate those risks into policies that hunt for the exposure events behind them.
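To make the "find the worst exposure first" idea concrete, here is a minimal sketch in Python of what such a policy-driven hunt boils down to: match confidential-data patterns across a set of documents and rank the documents so the largest concentrations of exposure surface first. The patterns, file names, and `scan` function are hypothetical illustrations, not any vendor's actual API; real DLP products use far more sophisticated detection (checksum validators, proximity rules, document fingerprinting).

```python
import re
from collections import Counter

# Hypothetical detection patterns for two common confidential-data types.
# Illustrative regexes only; real DLP detection is much more rigorous.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b\d{16}\b"),
}

def scan(documents):
    """Count pattern matches per document and rank documents by
    total exposure, worst offenders first."""
    exposure = Counter()
    for name, text in documents.items():
        for pattern in PATTERNS.values():
            exposure[name] += len(pattern.findall(text))
    # Remediate the top of this list before classifying anything else.
    return exposure.most_common()

# Hypothetical sample corpus.
docs = {
    "hr_export.csv": "ssn,123-45-6789\nssn,987-65-4321",
    "payments.log": "card=4111111111111111",
    "notes.txt": "lunch at noon",
}
print(scan(docs))
```

The point of the ranking step is the whole argument in miniature: no item in the corpus was classified in advance, yet the scan immediately tells you which file to remediate first.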
An instructive metaphor might clarify things here. If your house is on fire, what's the first thing you do? Would it be logical to start running around trying to tag every item and identify its value? Probably not. In fact, if you tried, the house would burn to the ground before you finished tagging. Instead, you'd find the biggest flame and douse it any way you could. The same goes for protecting data. Instead of pursuing a time-consuming "tag everything" prequel to protecting your data, why not take a pragmatic approach: find the worst exposure events and remediate the sources of that exposure?
And just so it's clear, this is not abstract theory or speculation. Numerous Symantec DLP customers have produced hard, measurable results on this front, and have been doing so for years. I don't think it's too much to say that, within ten years or so, much of the textbook training on security will have to be rewritten around these new information-centric paradigms of risk management, while large segments of the chapters on information classification will be tossed out entirely.