
Patch Management – Speed is of the Essence

Created: 19 Jun 2008 16:57:35 GMT • Updated: 23 Jan 2014 18:40:52 GMT
Hon Lau

Most people are well aware of the problems posed by publicly announced software vulnerabilities, yet many of these vulnerabilities can go unpatched by the relevant vendors for some time. Dealing effectively with them is a two-way street: you count on your software vendors to quickly release reliable patches, and once they are available, your end of the bargain is to apply them as quickly as possible. Many software vendors are addressing their share of the problem by improving patch development and distribution. The trouble is that many users are still slow to apply new patches, for various reasons. It is this gap between the availability of a patch and its application that creates a window of opportunity for would-be attackers.

To add fuel to the fire, an interesting research report was recently published by David Brumley, Pongsin Poosankam, Dawn Song, and Jiang Zheng. It details a technique for automatically generating exploits for patched vulnerabilities that were not previously publicly known: by comparing the patched version of a binary against the unpatched version and analyzing the differences, an exploit for the fixed vulnerability can be derived. The researchers claim that this automated technique could allow an exploit to be created in a matter of minutes. On the face of it this would not appear to be a big deal, since the security hole is already patched, but there are a couple of problems with that assumption.
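
To make the idea concrete, the heart of the approach is differencing: the patch itself tells an attacker where the bug was. The toy sketch below only compares the two builds byte by byte to locate the changed regions, and the file names are hypothetical; the technique in the paper works on program semantics rather than raw bytes, but the starting point is the same.

```python
# Toy illustration of patch differencing: compare an unpatched and a patched
# build of the same binary and report which byte regions changed. This is a
# simplification of the research, which analyses program semantics rather
# than raw bytes. The file names below are hypothetical.

def changed_regions(unpatched_path, patched_path):
    """Return (offset, length) pairs where the two files differ."""
    with open(unpatched_path, "rb") as f:
        old = f.read()
    with open(patched_path, "rb") as f:
        new = f.read()

    regions = []
    start = None
    for i in range(min(len(old), len(new))):
        if old[i] != new[i]:
            if start is None:
                start = i          # a differing run begins here
        elif start is not None:
            regions.append((start, i - start))
            start = None
    if start is not None:          # run extends to the end of the shorter file
        regions.append((start, min(len(old), len(new)) - start))
    return regions

if __name__ == "__main__":
    for offset, length in changed_regions("libexample_old.dll", "libexample_patched.dll"):
        print(f"changed region at offset {offset:#x}, {length} bytes")
```

In practice an attacker would feed those changed locations into further analysis to work out which input reaches them, and that is the step the research automates.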

To start with, many vulnerabilities are patched every day, even as I write, for which no exploit or technical information is publicly in circulation. If people don't know about them, they can't create exploits for them. The technique described in the research could pave the way for these relatively unknown vulnerabilities to be discovered from the patches themselves, creating opportunities for more of them to be exploited in the future, vulnerabilities that would otherwise have been overlooked by attackers for lack of information.

Another factor is the process for applying patches, which is well known to be inconsistent and slow. This has been taken advantage of time and time again, with newly patched vulnerabilities exploited to great effect simply because patches were not applied in a timely manner. It is also well known in the industry that many computer users do not apply patches immediately upon their release. There are several reasons for this:

• Patches have to be downloaded, and in many cases users have to actively seek them out because updating is not automatic (a step that can itself be scripted, as in the sketch after this list).
• Patches do not always work, or they have unintended side effects that can be nearly as disruptive as a real security threat.
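
As a small illustration of closing that first gap, here is a minimal sketch that automates the "seek out the patches" step. It assumes a Debian-style system with apt-get available; on other platforms you would query the equivalent update manager instead. The run is a simulation only, so nothing is installed.

```python
# Minimal sketch: list packages with pending updates on a Debian-style
# system by running a simulated (-s) apt-get upgrade. Assumes apt-get is
# on the PATH; nothing is actually installed.

import subprocess

def pending_updates():
    """Return the 'Inst ...' lines apt-get prints for packages it would upgrade."""
    result = subprocess.run(
        ["apt-get", "-s", "upgrade"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line.startswith("Inst ")]

if __name__ == "__main__":
    updates = pending_updates()
    if updates:
        print(f"{len(updates)} package(s) have pending updates:")
        for line in updates:
            print(" ", line)
    else:
        print("No pending updates found.")
```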

In addition, many businesses have highly complex and critical IT infrastructures in place, so it feels risky to tinker with something that is working just fine in order to address a risk that may or may not materialize. It is human nature to follow the mantra that if something is not broken, you should not fix it. The problem with this thinking, in relation to IT security at least, is that things do not operate in the IT world as they do in the real, physical world. In the physical world, you don't have hordes of would-be attackers arriving at your premises to look and probe for weaknesses, because the chance of success is low and the attempt itself is risky. In the virtual world, it is quite easy for an attacker to probe thousands of targets for weaknesses and remain untraceable. In other words, you don't have to accidentally wander into a bad part of the Internet to get into trouble; on many occasions trouble comes looking for you. This places a heavy onus on the users of software to make sure they are doing all they can to protect themselves too.

Users need to make sure they stay informed of developments that may affect their infrastructure; they should obtain, test, and apply patches as quickly as possible and stay alert. As long as people fail to apply patches in a timely manner, the window of opportunity for exploitation will remain open, and this research on the automated generation of exploits makes delaying patches an even riskier practice than it already is. For some time we have observed new threats that exploit recently patched vulnerabilities surfacing quickly after the patch is released. For example, my colleagues Orla and Masaki have both reported on the exploitation of patched vulnerabilities, and the success of those exploits can almost certainly be chalked up to delays in patching. The publication of the techniques described in this research makes it more likely that, at some point in the future, not only will more vulnerabilities be exploited, but they will be exploited more quickly too.
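
One simple way to keep that window visible is to measure it. The sketch below, using entirely hypothetical advisory identifiers and dates, works out how many days each patch sat unapplied after its release, which is exactly the exposure an attacker gets to use.

```python
# Rough sketch of tracking the "window of opportunity": for each patch,
# how many days passed between the vendor releasing it and it being applied.
# The advisory identifiers and dates below are hypothetical.

from datetime import date

# (advisory id, patch release date, date applied; None means still unpatched)
patch_log = [
    ("ADV-2008-0101", date(2008, 1, 8), date(2008, 1, 15)),
    ("ADV-2008-0242", date(2008, 4, 8), date(2008, 5, 2)),
    ("ADV-2008-0377", date(2008, 6, 10), None),
]

today = date(2008, 6, 19)  # still-open windows are measured up to today

for advisory, released, applied in patch_log:
    window = ((applied or today) - released).days
    status = "applied" if applied else "STILL OPEN"
    print(f"{advisory}: exposure window of {window} days ({status})")
```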

In conclusion, users and businesses need to take vulnerability assessment and patch management seriously and be constantly on the lookout for possible threats to their infrastructure. Keeping on top of the many vulnerabilities disclosed each day is a full-time job, but it is a job that must be done regularly and done well.