Endpoint Protection

Another Big Thing, Part 2 

May 24, 2007 03:00 AM

As with my last blog, the topic this time is behavioral detection and the trade-offs involved. Last time we covered some of the issues in using virtual environments to detect threats; this time we'll cover some of the issues in classifying behavior and mitigating damage.

Whatever your approach to generating and tracking behavior, you need the ability to classify it. Tracking behavior has its challenges, but once you have a behavioral profile, determining what is malicious is a harder problem still. Some security products solve this by handing the problem off to the user; most don't. The real problem in profiling is that malicious behavior is hard to pin down. Is tracking your activity as you surf a web page malicious? If you say yes, what about the wonderful "suggest" features that use historical data? Is any program that downloads silently with no GUI malicious? What about Windows Update or LiveUpdate? Something with no GUI that hooks the keyboard and mouse? Instant messaging software does this to determine whether you are idle. So classification is hard even against a static definition of malicious, and that definition keeps changing, continually redefining the problem.
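
To make that concrete, here is a minimal sketch (in Python, with invented event names, weights, and threshold) of the kind of rule-based scoring a behavioral classifier might do. Notice that every individual behavior below is also performed by legitimate software; the verdict only emerges from combinations, which is exactly where the gray areas come from.

```python
# A minimal sketch of rule-based behavior scoring, assuming a hypothetical
# event feed. Event names, weights, and the threshold are invented for
# illustration; a real engine weighs far more context than this.

SUSPICION_WEIGHTS = {
    "hooks_keyboard": 2,      # also done by IM clients checking for idle
    "silent_download": 2,     # also done by Windows Update / LiveUpdate
    "tracks_browsing": 1,     # also done by "suggest" features
    "writes_autorun_key": 3,
    "kills_av_process": 5,
}

def score_profile(events):
    """Sum the weights of observed behaviors; no single one is damning."""
    return sum(SUSPICION_WEIGHTS.get(e, 0) for e in events)

def classify(events, threshold=7):
    """The verdict is a judgment call on a combination of behaviors."""
    return "malicious" if score_profile(events) >= threshold else "unknown"

# Legitimate software shares individual behaviors with malware:
print(classify(["hooks_keyboard", "tracks_browsing"]))                      # unknown
print(classify(["hooks_keyboard", "silent_download", "kills_av_process"]))  # malicious
```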

After detection, we need to mitigate the damage. This means identifying the source of the behavior and taking the appropriate steps to neutralize the threat. So imagine we're happily surfing the net, and our A/V notices something connecting to xxx.imahaxor.irc.net, creating remote threads in the System and IE processes, and attempting to kill A/V products. Our engine says "malicious". Now we need to fix our system. Fixing injected threads, file filters, and rootkits is beyond the scope of this article, but you still face the challenge of linking the behavior to an appropriate source. If you just watch processes, you may think that IE has suddenly turned into a malicious downloader (via injected threads), or that your IIS or SQL server has suddenly decided to try to take down the Internet (Code Red/Slammer). If you watch at the thread level, what happens when someone moves up the stack and uses JavaScript/ActiveX to create a threat? Is the thread implementing your JavaScript interpreter the source of the threat? The point is that "malware" does not necessarily live within its own process space, file, or even thread. Slammer was a single UDP packet that existed only on the wire or on top of another program's stack, yet it made history [1].
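
To illustrate the attribution problem, here is a hypothetical sketch of a behavioral event record that carries a provenance chain rather than just a process ID. The fields and names are invented for illustration; the point is that blame may land on an injected thread or a piece of shellcode, not on the host process.

```python
# A hypothetical sketch of why "which process did it?" is not enough.
# Each behavioral event carries a provenance chain, not just a PID,
# because the acting thread may have been injected, or may be running
# script on another program's behalf. All names here are invented.

from dataclasses import dataclass, field

@dataclass
class Event:
    action: str
    pid: int                                        # process it was observed in
    tid: int                                        # thread that performed it
    provenance: list = field(default_factory=list)  # how that code got there

# IE did not "decide" to connect to an IRC server; an injected thread did:
event = Event(
    action="connect xxx.imahaxor.irc.net",
    pid=1234,   # iexplore.exe
    tid=42,
    provenance=["CreateRemoteThread from pid 666", "shellcode on heap"],
)

def blame(event):
    """Attribute the behavior to the deepest link in the chain, if any."""
    return event.provenance[-1] if event.provenance else f"pid {event.pid}"

print(blame(event))  # "shellcode on heap", not iexplore.exe
```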

There are more hurdles than those discussed so far, but the point is that you make trade-offs at every step. So it's no wonder that false positives have become the black sheep of the behavioral detection business. That's not to say you can't build an effective product with a low false-positive rate; it just takes a lot more time and patience than most people think.

To cut back on false positives while staying relevant to the ever-changing definition of "malicious program", you end up having to track and update your profiler over time, in the same way you had to update that old-fashioned signature engine. This illustrates a key point that people pushing the "Next Big Thing" hypothesis of behavioral detection miss: just like in signature-based systems, you are tracking attributes of software and making a judgment call. Signature, behavioral, white-box, black-box: all are different ways of looking at the same problem, each with its own trade-offs. And like so many problems in software, it has no silver bullet.
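
As a toy illustration of that common shape, the sketch below (with made-up byte patterns and behavior rules) shows both detection styles reducing to the same thing: extract attributes of the software, apply a judgment rule, and keep both updated over time.

```python
# A toy sketch of the common shape, with made-up patterns and rules:
# both approaches extract attributes of software and apply a judgment
# rule, and both need updating as the definition of "malicious" shifts.

def signature_verdict(file_bytes, patterns):
    """Static attributes: match known byte sequences in the file."""
    return any(p in file_bytes for p in patterns)

def behavioral_verdict(events, rules):
    """Dynamic attributes: match known-bad combinations of behaviors."""
    return any(rule <= set(events) for rule in rules)

# Either way, detection = attributes + a judgment call, and both the
# patterns and the rules have to be maintained and shipped over time.
patterns = [b"\xde\xad\xbe\xef"]
rules = [{"hooks_keyboard", "kills_av_process"}]

print(signature_verdict(b"MZ...\xde\xad\xbe\xef...", patterns))           # True
print(behavioral_verdict(["hooks_keyboard", "kills_av_process"], rules))  # True
```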

So what’s my point? Throw away that behavioral detection engine, and just keep updating those signatures? No, you should never throw away your copy of Symantec’s latest engines. That’s my paycheck! My point is that behavioral detection is one more tool: a tool to be used and appreciated, but with a realistic eye that acknowledges its shortcomings. The basics of security haven’t changed. Security is never perfect, is best in layers, and had better be usable, or it won’t be used. And finally, understanding information security requires understanding how we make and use computer systems, at every level, from the bits in our programs to the global network that can spread the bad just as fast as the good. And this is the kind of understanding every security company should try to cultivate and demonstrate.

[1] http://www.caida.org/publications/papers/2003/sapphire/sapphire.html
