It has been almost 14 years since Scott Chasin began BugTraq to discuss computer security vulnerabilities in detail. Since then, it has grown from a small email list into a top industry source for vulnerability information and, along the way, helped advance many of the changes in the industry through its full disclosure policy. What a long and strange trip it has been. But one thing remains the same: the constant struggle to do what is right in a field full of moral landmines.
Any field that deals in issues of security and safety, from medicine and insurance to airport screening and immigration, will contain many difficult moral dilemmas. Often these problems are rooted in finance and the different ways money incentivizes or disincentivizes people and organizations. Ideally, monetary and other incentives would be aligned with the moral thing to do. Often, though, this is not the case. Just as often, the moral or right thing to do is not altogether clear.
When BugTraq began, the argument was whether to even acknowledge the existence of vulnerabilities. Software vendors had monetary disincentives to recognize and fix problems. They also honestly believed that their customers would be worse off if vulnerabilities were made more widely available to potential abusers. Proponents of detailed public discussion of vulnerability information, myself included, countered that such disclosure would clue in users to the true state of security. In turn, this would kick-start the research process that would lead to short-term patches and long-term solutions.
Looking at the state of computer security today, it is perhaps not unexpected that, at least in hindsight, both groups seem to have been correct. We appear to be getting a handle on the classes of vulnerabilities that afflicted us most in the past, most dominant software vendors have some form of vulnerability disclosure process, and many researchers choose not to publish easily adaptable exploit code. Even so, the intervening years have been punishing in the number of vulnerabilities and attacks that users have had to endure. At times, these have been a direct result of disclosed vulnerabilities; at others, a result of the general vulnerability knowledge that is now publicly available.
While some vulnerabilities are now very well understood, and solutions and countermeasures are known (if not always used and deployed), the field moves forward with new classes of vulnerabilities being described and studied. Similarly, the moral dilemmas have not disappeared. The advent of vulnerability acquisition programs, vulnerability auction sites, black and grey markets, an increasing number of researchers who wish to be financially remunerated for work that in the past they would have done pro bono, and a press always looking for content have led to some thorny issues.
As before, some sort of equilibrium and rough consensus is likely to emerge. Nonetheless, it will probably be a few more decades before we gain a handle on the current crop of problems. In the meantime, users will be subjected to many attacks. The situation reminds me of the old saying that you can't make an omelet without breaking some eggs. While that may be true, it is the moral obligation of everyone in the industry to make the omelet by breaking as few eggs as possible.