Abstract
Software vulnerabilities exist and will continue to do so. Every week, a new vulnerability gains popular attention, is discussed at length on mailing lists, and hopefully is patched by the vendor before exploits and attack tools begin to appear. Yet there is little evidence that we are learning from our mistakes. Sharing vulnerability information through public databases has been possible for quite some time now. If it is not a lack of information, what is preventing us from learning from our past? Are there any lessons to be learned at all? A good start toward answering such questions is to analyze vulnerabilities in widely deployed, critical but buggy software artifacts. In this paper, we examine vulnerabilities in five such software artifacts and study two of their attributes. Among other statistics, our analysis suggests that the discovery of a vulnerability in a software artifact may influence the discovery of further vulnerabilities of the same type in that artifact. Thus, some learning may be occurring, but by the penetration community rather than by software engineers. This paper argues that measuring vulnerability occurrences may have predictive value, and that this concept of a retrospective metric is a promising approach to expressing assurance.