As a vulnerability analyst, I need to gather as much information as I can on new or existing vulnerabilities. Part of my job is to scour vendor security sites, public disclosure lists, and other security-related sites looking for security-related information. In the process I often come across messages, emails, blog entries, and the like that are, to me at least, quite amusing. Typically, these messages tend to be from application authors declaring that their application "can't have vulnerabilities" or that "it just isn't possible." The argument is often that the programmer is too reputable, or that the software has check upon check, making vulnerabilities impossible. Of course, no one wants to hear that something they have created has bugs or security holes, but unfortunately, more often than not, it does. In many cases the application is not free of the vulnerability; rather, the author simply doesn't understand the vulnerability in question.
Is it really possible to uncover all vulnerabilities before releasing software to the public? How realistic is it for a vendor to assume that their products do not have vulnerabilities? Even when multiple steps are taken, such as extensive security testing with fuzzers and other such tools, there may and likely will be vulnerabilities in the software. It is more important, in my opinion, to handle the disclosure and distribution of updates or workarounds in a timely and professional manner.
The majority of these messages also complain about how the vulnerability was disclosed. There are several approaches to disclosing security-related vulnerabilities to the public. Symantec has its own policy, as do other vendors. Some people adhere to this type of policy, while others have their own approach (such as full disclosure, private, zero-day, etc.). However, it doesn't seem to matter how a vulnerability was disclosed, as it will never be the "right" way for some. In some cases it seems as if the authors of software would prefer that people never disclosed issues at all, because then they would never need to be fixed. Because, as we all know, if a security vulnerability isn't public, there is absolutely no way someone could be exploiting it. Um, right.
Granted, there may not be a perfect solution or answer for every scenario. Perhaps if some of these authors accepted the notion that a disclosure isn't a direct attack against them, but instead an attack against their present (and possibly future) customers, they might realize their energies are better spent fixing the problem rather than denying its existence.