Endpoint Protection


Life After AV: If Anti-Virus is Obsolete, What Comes Next? 

Jul 08, 2002 02:00 AM

by Paul Schmehl


In a previous article, Past Its Prime: Is Anti-Virus Scanning Obsolete?, I discussed the reasons why I believe that anti-virus scanning as we now know it is obsolete and must be replaced. In this article, I will address what I believe will be its replacement - behavioral blocking - including what is currently available, and how behavioral blocking needs to function for it to successfully defeat malicious code.

Before briefly reviewing the available products, I will define what I mean by behavioral blocking. When I use the term, I am referring to a technology that has the ability to run suspect programs in multiple virtual operating systems, determine precisely what the code does and then, based upon a set of rules, decide what to do with that program. This is different from what some people call behavioral blocking, which generally refers to the use of a set of rules to decide what to do with a program based upon its attributes.

Attribute Blocking - Only Part of the Answer

Attribute blocking can only go so far. On our network, we bounce e-mail messages that have attachments based upon the extension the attachment uses. While this can be an effective screening device, it is manifestly crude. It doesn't matter whether the attachment is legitimate or dangerous: certain attachments are forbidden and others are not. Furthermore, to evade this type of blocking, all one has to do is rename the file to a known good extension.
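This kind of extension screening amounts to only a few lines of code, which also makes its weakness obvious: renaming the file defeats it entirely. A minimal sketch (the blocklist here is illustrative, not our actual configuration):

```python
# Crude attribute blocking: bounce any message whose attachment uses a
# forbidden extension. The set below is illustrative only.
BLOCKED_EXTENSIONS = {".exe", ".vbs", ".scr", ".pif", ".bat"}

def should_bounce(attachment_name: str) -> bool:
    """Return True if the attachment's extension is on the blocklist."""
    name = attachment_name.lower()
    return any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS)

# The check knows nothing about the file's actual content:
# "payload.exe" is bounced, but the same bytes renamed to
# "payload.txt" pass straight through.
```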

Another form of attribute blocking is integrity checking. This can take various forms, but essentially it consists of a mechanism to determine if the characteristics of a file or program are about to be (or have been) changed by another program. CRC and MD5 checks are one form of this technology; programs like Tripwire are another. Most of these, however, only report a file or program alteration and do nothing to prevent it.
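An MD5-based integrity check, the simplest form of this technology, can be sketched as follows. The baseline dictionary stands in for a protected database of known-good checksums:

```python
import hashlib

def md5_digest(path: str) -> str:
    """Compute the MD5 checksum of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(baseline: dict, path: str) -> bool:
    """Return True if the file still matches its recorded checksum."""
    return md5_digest(path) == baseline.get(path)
```

Note that, as the article observes, this only reports that a file changed; it does nothing to prevent the change.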

Firewalls, proxies and intrusion detection systems are also attribute blockers, blocking content based upon attributes such as the port the traffic arrives on, the URL of a Web site, the network the traffic comes from, or a known bad pattern or filename detected in the packets. Although attribute blocking is an important part of any security architecture (e.g. networking protocols such as NFS, CIFS and NetBIOS should always be blocked at the network edge), it's a crude technology that makes no judgment about "good" or "bad" content: if the content triggers a rule, it's blocked. Furthermore, attribute blocking can only be used when it's possible to simply block traffic without regard to its content. Our attachment blocking system doesn't stop macro viruses, for example, because the benefit of blocking every file type that could contain macros would be offset by the loss of functionality the blocking would cause. Behavioral blocking, on the other hand, is not concerned with the characteristics or attributes of a program but rather with its behavior under normal operating system conditions.
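The port-based edge blocking mentioned above can be sketched as a first-match rule table of the kind a typical packet filter uses. The ruleset below is illustrative; real firewalls match on far more attributes:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    proto: str   # "tcp" or "udp"
    port: int    # destination port
    action: str  # "block" or "allow"

# Illustrative edge ruleset: drop NetBIOS/CIFS and NFS, allow the rest.
RULES = [
    Rule("udp", 137, "block"),   # NetBIOS name service
    Rule("tcp", 139, "block"),   # NetBIOS session service
    Rule("tcp", 445, "block"),   # CIFS/SMB
    Rule("tcp", 2049, "block"),  # NFS
]

def decide(proto: str, dst_port: int) -> str:
    """First matching rule wins; the default policy here is allow."""
    for r in RULES:
        if r.proto == proto and r.port == dst_port:
            return r.action
    return "allow"
```

The filter's crudeness is plain: it judges packets purely by attribute (protocol and port) and never looks at what the traffic actually does.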

Behavioral Blocking Today - Not Quite There Yet

Behavioral blocking is not a new concept. It has been developing gradually over time, as the threat of malicious code has increased and morphed into today's "blended threat" worms and viruses that attack through multiple vectors. In February 2001, Information Security Magazine published an article by Robert Vibert, a well-known anti-virus expert, that discussed behavioral blocking and the current state of various products. More recently, Carey Nachenberg, a researcher at Symantec, published an article on SecurityFocus reviewing the current state of behavioral blocking, the advantages and disadvantages of the technology, and what needs to happen for it to become commercially viable.

While these articles discuss behavioral blocking and examine what's being done today, they reveal a gap between current practice and what I believe has to happen to further the technology. For example, both Trend Micro's InterScan AppletTrap and Computer Associates' EZ Armor are simple attribute blockers. InDefense's Achilles, Aladdin's eSafe, Okena's Stormsystem, Entercept Technology's Entercept products and Securitae's Sandbox, while performing limited forms of behavioral blocking, are all primarily attribute blockers (and in some cases, integrity checkers as well). Nevertheless, most of these products claim to use behavioral blocking technology.

But Getting Closer

Pelican Security's SafeTnet and Finjan Software's SurfinShield are more along the lines of what I envision as behavioral blocking, although each attacks only part of the problem (e.g. Web traffic, the desktop, etc.). Both treat newly introduced programs as unsafe and place them in a "sandbox."

According to Pelican's description:

SafeTnet implements a Dynamic Sandbox

Finjan's Web site defines their technology thus:

SurfinShield's proactive Sandbox enforces security policies to automatically block malicious activity before damage can be inflicted. Examples of security policy violations include attempts to delete files, open network connections or access the system registry.

Essentially, these products put a wrapper around any new program that is introduced to the computer and then enforce a set of predefined rules, preventing the program from behaving in ways the rule set prohibits. However, even these products don't employ a complete behavioral blocking system. While all of the products mentioned above are effective in varying degrees, they attack only some aspects of a much bigger security problem: protecting a network from all types of malicious code attacks and hackers.
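Conceptually, such a rule set is a policy table that the sandbox monitor consults before allowing each action the wrapped program attempts. A minimal sketch, with invented action names and policy entries (not taken from SafeTnet or SurfinShield):

```python
# Illustrative sandbox policy: each action a wrapped program attempts
# is checked here before it is allowed to proceed. Unknown actions are
# denied by default.
POLICY = {
    "read_file": "allow",
    "delete_file": "deny",
    "open_network_connection": "deny",
    "write_registry": "deny",
}

class PolicyViolation(Exception):
    """Raised when a sandboxed program attempts a prohibited action."""

def check_action(action: str) -> None:
    """Allow the action or raise PolicyViolation, deny-by-default."""
    if POLICY.get(action, "deny") == "deny":
        raise PolicyViolation(action)
```

The essential point is that the decision is made on the program's *behavior* at the moment it acts, not on any attribute of the file it arrived in.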

The Scope of the Problem

So how will behavioral blocking solve the problem? In order to answer that question, we first have to define what the problem is.

The current trend in network attacks is to use what is being called a "blended" attack. Blended attacks use a combination of methods to get past network defenses: e-mail; ICQ, IRC and Instant Messenger; Web sites; network protocols such as NetBIOS, NFS and CIFS; the popular P2P networks (KaZaA, Morpheus, etc.); and weaknesses in various operating systems and applications. Furthermore, the trend has been for malicious code that attacks a specific vulnerability to appear quite rapidly after the vulnerability's public announcement. Experts such as David Harley predict that these trends will continue and worsen in the future. Harley also predicts that multi-platform attacks will become more prevalent over time.

This means that security administrators must be prepared to deal with a wide variety of attack vectors and a diversity of malicious code in real time without significantly hindering the primary purpose of the network. In the past, the norm has been to wait for an attack and then provide a defense; but, as Melissa, Hybris, Code Red and Nimda have proven, this can be very harmful in the short term. Security architects must find a way to prevent attacks before they are publicized and put strategies in place to defend against them.

The problem is that malicious code is simply a program, and programs can do many things that can be defined as good or bad depending upon the context within which the action occurs. Often it's not possible to tell whether a program is malicious or not until after it has been installed and begins to function. It makes no sense to find this out on production equipment that is dedicated to fulfilling the corporate mission. Yet that's often how malicious code is found today.

The Solution

The solution is to run unproven programs in a protected, virtual environment. This will allow the program to perform all the functions that it normally would, both during the installation and after it is running normally. Each action the program takes, both during installation and during normal operation, can be compared against a set of rules and rated as to its likelihood of being malicious in nature. Programs that score above a certain threshold or perform certain actions would be automatically deleted. Those in a lower range would be quarantined so that security administrators could examine them more closely. The rest would simply be passed back to the network intact.
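A rough sketch of such a rating scheme follows. The action names, weights and thresholds are all invented for illustration; a real system would derive them from extensive analysis of malicious and benign behavior:

```python
# Illustrative weights for actions observed in the virtual environment.
ACTION_WEIGHTS = {
    "delete_system_file": 60,
    "mass_mail": 50,
    "write_registry_run_key": 40,
    "open_outbound_socket": 20,
    "read_own_config": 0,
}

DELETE_THRESHOLD = 80      # at or above: delete automatically
QUARANTINE_THRESHOLD = 40  # at or above: hold for human review

def disposition(observed_actions) -> str:
    """Rate a program by its observed behavior and decide its fate."""
    # Actions the rule set has never seen still count a little.
    score = sum(ACTION_WEIGHTS.get(a, 10) for a in observed_actions)
    if score >= DELETE_THRESHOLD:
        return "delete"
    if score >= QUARANTINE_THRESHOLD:
        return "quarantine"
    return "pass"
```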

The virtual environments would need to be able to attack one another, so that a worm that infects Solaris, for example, and then attacks IIS Web servers on Windows could perform all its normal functions, all the while being observed by the supervising management system. This system would also have to appear to be a normally functioning part of a network, yet disallow any contact with the "outside" world. Each virtual environment would also have to be rebuilt after each analysis, ready to analyze the next program.

Problems That Will Have To Be Solved

One of the problems that will have to be overcome is the potential for bandwidth congestion. For a system like this to be successful, it has to process all network traffic. Many networks now have very high bandwidth, which means that any system sitting at the edge of a network has to be capable of incredible feats of processing to avoid dropping packets. (Many intrusion detection systems have to deal with this problem now.)

The solution, I believe, is to hand off the traffic that has programs in it to a system that can run the programs, rate them and then return the benign ones to the network traffic stream. The rest of the traffic can be passed directly through to the network. For example, a simple ASCII e-mail would be passed through with no delay, but an HTML e-mail would be shunted to the virtual environment and would have to pass its tests before being forwarded on.
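The triage step might look like the following minimal sketch, where the set of content types considered capable of carrying programs is an assumed, illustrative list:

```python
# Content types assumed (for illustration) to be capable of carrying
# executable or active content; anything on this list is shunted to
# the behavioral blocking system, everything else passes straight through.
ACTIVE_CONTENT_TYPES = {
    "text/html",
    "application/octet-stream",
    "application/x-msdownload",
}

def route(content_type: str) -> str:
    """Decide whether traffic passes through or goes to the sandbox."""
    if content_type in ACTIVE_CONTENT_TYPES:
        return "sandbox"
    return "pass-through"
```

Keeping this dispatch decision cheap is what lets the bulk of the traffic flow through undelayed while only program-bearing content pays the sandboxing cost.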

Web pages with no active content would be served immediately, but those with active content would be tested before being served for viewing later. This would necessarily create some delays in Web browsing, but delays are a small price to pay for security and are certainly a more palatable choice than complete denial. Furthermore, Web-caching technologies could be used for popular Web pages. They could then be tested and cached routinely, resulting in no apparent delays to the end user.

If IRC, ICQ, Instant Messenger programs or P2P networks are allowed at all, any programmatic content would be shunted to the behavioral blocking system. Some sort of end user notification system would have to be devised both so that the end user would understand why content had not arrived immediately and so that the end user would also know when something wouldn't be arriving at all.

All Content Should Be Scanned

This same technique could be used for all content coming in from the Internet. If a traffic stream has a program in it, such as a hacker trying to upload tools to a compromised server using ftp or sftp, the content could be sidetracked to the behavioral blocking system, rated as potentially malicious and quarantined for inspection. The inspection system would record all pertinent information, including originating and destination IP addresses, which would help identify weak systems and locate machines and networks being used to attack the network.

Encrypted content would also have to be handled. It may be necessary to "spoof" the destination machine in order to decrypt the traffic for inspection. Where decryption is not possible, the traffic would have to be flagged as potentially dangerous before being forwarded to its final destination.

Conclusion

In summary, this system would consist of six major components: a high-speed screening device that sits in the network stream and diverts all potentially malicious code; a holding area where content to be tested can be queued; the virtual environments where the actual behavioral testing and decision-making would occur; a quarantine area for questionable content; a communications system to keep the end user (and security administrators) informed; and a reporting system from which security administrators could gather useful data.

Paul Schmehl is a Technical Support Services Manager with over 25 years of experience. He currently works in IT management in higher education, overseeing enterprise-wide technical support, help desk management and anti-virus protection, and is involved in many new technology projects, Web site development and security-related issues. Paul is also a founding member of AVIEN.

This article originally appeared on SecurityFocus.com -- reproduction in whole or in part is not allowed without expressed written consent.
