The continued development of insecure code was a topic at Black Hat 2006 explored by speaker Paul Böhm. Paul questioned why we see the same types of coding issues manifest year after year, despite over ten years of widely documented research into the matter. This pattern cannot simply be attributed to ignorance, as these mistakes are made by novice and veteran coders alike. In fact, it is not unheard of for individuals or organizations that specialize explicitly in security to eventually make a coding mistake that compromises the security of their software. One notable example was a vulnerability found in the grsecurity patch for the Linux kernel, which caused a product designed to harden the operating system to actually introduce a hole that would allow a full compromise.
Paul stated that when a problem is presented to a set of individuals, the majority of the population—given that they have access to the same resources—will end up taking the same (or at least very similar) steps and arriving at the same solution. This is because the methodologies we are taught will often present one path to a solution as being slightly more obvious than the others, and this is the route that most of us will take. History has made it evident that in certain situations this will be a path to trouble, simply because the tools that we use, when combined with human nature, are not compatible with software security. We are not able to use our tools effectively and securely 100% of the time. Given the complexity of some problems that coders must solve, this should come as no surprise.
So, instead of relying on the notion that being more conscious when writing potentially troublesome routines will eliminate common vulnerability types, we should focus on using tools that remove this burden entirely—moving away from the invariably ill-fated focus on behavioral adjustments. As we all know, the probability of development errors increases with the size and complexity of the software. It is for this reason that potentially dangerous routines, such as memory management, string operations, and the processing of user data, should be abstracted from the rest of the development process. This essentially means that the programmer only has to implement the operation correctly once, in the abstraction routine, as opposed to every single time they need to perform that type of operation. Many high-level languages already employ some of these concepts, particularly with memory management. Unfortunately, lower-level languages (such as C) still require the developer to implement such abstractions manually. But doing this is the only way that we can hope to rid our software of those textbook mistakes we like to think we know better than to make. It has the added benefit of producing a project that is far easier to audit and maintain, which can only bring good results in terms of security. Interestingly, Paul cited references showing that the adoption of high-level languages such as Python and Ruby correlates with a dramatic decrease in data reference-based vulnerabilities (which would include overflows)—which is more than we can say for what widespread education on common vulnerability types has accomplished.