As with my last blog, the topic this time is behavioral detection, and the various trade-offs involved. We already covered some of the issues in the use of virtual environments for the detection of threats, and this time we’ll cover some of the issues involved in classifying behavior and mitigating damage.
Whatever your approach to generating and tracking behavior, you need the ability to classify it. Tracking behavior has its challenges, but once you have a behavior profile, deciding what is malicious is the harder problem. Some security products solve this by handing the decision off to the user; most don't. The real difficulty in profiling is that "malicious" resists a clean definition. Is tracking your activity as you surf a web page malicious? If you say yes, what about the wonderful "suggest" features that use historical data? Is any program that downloads silently, with no GUI, malicious? What about Windows Update or Live Update? Something with no GUI that hooks the keyboard and mouse? Instant messaging software does this to determine whether you are idle. And classification would be hard even with a static definition of malicious, but the definition has also changed over time, continually redefining the problem.
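To make the ambiguity concrete, here's a toy sketch of the kind of weighted-heuristic scoring a behavioral classifier might apply to a recorded profile. Every name, weight, and threshold below is invented for illustration; no real product's engine works this simply.

```python
# Toy behavioral classifier: score a profile of observed behaviors
# against suspicious-behavior weights. All values are hypothetical.

SUSPICIOUS_WEIGHTS = {
    "hooks_keyboard": 2,      # keyloggers do this -- but so do IM idle timers
    "no_gui": 1,              # silent background process -- or Windows Update
    "downloads_silently": 2,  # dropper behavior -- or an auto-updater
    "tracks_browsing": 1,     # spyware -- or a "suggest" feature
    "modifies_autostart": 3,  # persistence -- or a legitimate installer
}

def score(profile):
    """Sum the weights of every suspicious behavior observed."""
    return sum(w for b, w in SUSPICIOUS_WEIGHTS.items() if b in profile)

def classify(profile, threshold=5):
    """Call the profile malicious once its score crosses the threshold."""
    return "malicious" if score(profile) >= threshold else "benign"

keylogger = {"hooks_keyboard", "no_gui", "downloads_silently",
             "modifies_autostart"}                          # score 8
silent_updater = {"no_gui", "downloads_silently",
                  "modifies_autostart"}                     # score 6
im_client = {"hooks_keyboard", "tracks_browsing"}           # score 3

print(classify(keylogger))       # malicious
print(classify(silent_updater))  # malicious -- a false positive
print(classify(im_client))       # benign
```

Note how the hypothetical silent updater trips the threshold with entirely legitimate behavior: every weight you raise to catch more malware drags benign software over the line with it.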
There are more hurdles than those discussed so far, but the point is that you make trade-offs at every step. So it's no wonder that false positives have become the black sheep of the behavioral detection business. That's not to say that you can't build an effective product with a low false-positive rate; it just takes a lot more time and patience than most people think.
To cut back on false positives while staying relevant to the ever-changing definition of "malicious program," you end up having to tune and update your profiler over time, much as you had to update that old-fashioned signature engine. This illustrates a key point that people pushing the "Next Big Thing" hypothesis of behavioral detection miss: just like in a signature-based system, you are tracking attributes of software and making a judgment call. Signature, behavioral, white-box, black-box: all are different ways of looking at the same problem, each with its own trade-offs. And like so many problems in software, this one has no silver bullet.
So what’s my point? Throw away that behavioral detection engine, and just keep updating those signatures? No, you should never throw away your copy of Symantec’s latest engines. That’s my paycheck! My point is that behavioral detection is one more tool. A tool to be used and appreciated, but with a realistic eye that acknowledges its shortcomings. The basics of security haven’t changed. Security is never perfect, is best in layers, and had better be usable, or it won’t be used. And finally, understanding information security requires understanding how we make and use computer systems, at every level, from the bits in our programs to the global network that can spread the bad just as fast as the good. And this is the kind of understanding every security company should try to cultivate and demonstrate.