Posts tagged infosecurity-statistics

2007: The year of the 9,999 vulnerabilities?

A look at the National Vulnerability Database statistics will reveal that the number of vulnerabilities found yearly has greatly increased since 2003:

Year    Vulnerabilities    % Increase
2002    1959               N/A
2003    1281               -35%
2004    2367               +85%
2005    4876               +106%
2006    6605               +35%



Average yearly increase (including the 2002-2003 decline): 48%

6605 × 1.48 ≈ 9775
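
For the curious, here is a minimal Python sketch of that back-of-the-envelope projection (the variable names are mine; it prints roughly 9770 rather than 9775 because it keeps the unrounded average instead of the 48% used above):

    # Back-of-the-envelope projection from the NVD yearly totals above.
    counts = {2002: 1959, 2003: 1281, 2004: 2367, 2005: 4876, 2006: 6605}

    years = sorted(counts)
    increases = [counts[y] / counts[y - 1] - 1.0 for y in years[1:]]
    average = sum(increases) / len(increases)   # about 0.48, i.e. 48%

    projection = counts[2006] * (1 + average)
    print(f"average yearly increase: {average:.0%}")  # -> 48%
    print(f"projected 2007 total: {projection:.0f}")  # -> about 9770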

So, that’s not quite 9999, but fairly close.  There’s enough variance that hitting 9999 in 2007 seems plausible; if not in 2007, then it seems likely that we’ll hit 9999 in 2008.  So, what does it matter?



MITRE’s CVE effort uses a numbering scheme for vulnerabilities that can accommodate only 9999 entries per year:  CVE-YEAR-XXXX.  Many CVE-compatible products and vulnerability databases (e.g., my own Cassandra service, CIRDB, etc.) use a fixed-size field just big enough for that format.  We’re facing a problem similar to the year-2000 overflow, although much smaller in scope.  When the board of editors of the CVE was formed, the total number of vulnerabilities known, not those found yearly, was in the hundreds.  A yearly count of 9999 seemed astronomical;  I’m sure anyone who had raised it as a concern back then would have been laughed at.  I felt at the time that it would take a security apocalypse to reach that number.  Yet here we are, so consider this fair warning to everyone using or developing CVE-compatible products.
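
To make the overflow concrete, here is a hypothetical sketch, not code from any actual CVE-compatible product, of a parser that hard-codes the four-digit CVE-YEAR-XXXX width:

    import re

    # Classic CVE identifier syntax: exactly four digits after the year.
    CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4})$")

    def parse_cve(cve_id):
        """Parse an identifier assuming the fixed CVE-YEAR-XXXX format."""
        match = CVE_PATTERN.match(cve_id)
        if match is None:
            raise ValueError("not in CVE-YEAR-XXXX format: " + cve_id)
        return int(match.group(1)), int(match.group(2))

    print(parse_cve("CVE-2006-9999"))   # fine: (2006, 9999)
    parse_cve("CVE-2007-10000")         # raises ValueError: one digit too many

Any fixed-width field or database column sized for four digits fails the same way the moment a year’s 10,000th identifier is assigned.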



Kudos to the National Vulnerability Database and the MITRE CVE teams for keeping up under the onslaught.  I’m impressed.

Vulnerability disclosure grace period needs to be short, too short for patches

One of the most convincing arguments for full disclosure goes like this: while the polite security researcher is waiting for the vendor to issue a patch, that vulnerability MAY have been sold and used to exploit systems; therefore, all individuals in charge of administering a system have a right to know ALL the details so that they can protect themselves, and that right trumps all others.

That argument rests upon the premise that if one person found the vulnerability, it is possible for others to find it as well.  The key word here is “possible”, not “likely”, or so I thought when I started writing this post.  After all, vulnerabilities can be hard to find, which is a reason why products are released with vulnerabilities.  How likely is it that two security researchers will find the same vulnerability? 

Mathematically speaking, the chance that two successful security researchers (malicious or not) will find the same flaw is similar to the birthday problem.  Let’s assume that there are X security researchers, each finding one vulnerability out of N vulnerabilities available to be found.  According to the National Vulnerability Database, 6560 vulnerabilities were found in 2006, and 4876 in 2005.  Let’s assume that the number of vulnerabilities available to be found in a year is about 10,000;  this is most surely an underestimate.  I’ll assume that all of these are equally likely to be found.  An additional twist on the birthday problem is that people are entering and leaving the room;  not all X are present at the same time.  This is because we worry about two vulnerabilities being found within the grace period given to a vendor.

If there are more successful researchers in the room than vulnerabilities, then necessarily there has been a collision.  Let’s say that the grace period given to a vendor is one month, so Y = X/12.  Then there would need to be 120,000 successful security researchers in a year for collisions to be guaranteed.  For fewer researchers, the likelihood that two of them find the same vulnerability is approximately 1 - exp(-Y(Y-1)/(2N)) (cf. the birthday problem on Wikipedia).  Let’s assume that there are 5000 successful researchers in a given year, roughly matching the average number of vulnerabilities reported in 2005 and 2006.  The probability that two researchers find the same vulnerability within a given grace period is then:

Grace Period    Probability
1 month         0.9998
1 week          0.37
1 day           0.01
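
For those who want to check the arithmetic, here is a minimal sketch of the birthday-problem approximation above, with the pool of 10,000 vulnerabilities and 5000 researchers as stated assumptions:

    from math import ceil, exp

    N = 10_000            # assumed vulnerabilities findable in a year
    RESEARCHERS = 5_000   # assumed successful researchers per year

    def collision_probability(periods_per_year):
        """Birthday-problem approximation for researchers active in one period."""
        y = ceil(RESEARCHERS / periods_per_year)  # researchers "in the room"
        if y >= N:
            return 1.0                            # pigeonhole: collision guaranteed
        return 1 - exp(-y * (y - 1) / (2 * N))

    for label, periods in [("1 month", 12), ("1 week", 52), ("1 day", 365)]:
        print(label, round(collision_probability(periods), 4))
    # -> 1 month 0.9998, 1 week 0.3722, 1 day 0.0091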


In other words, nowadays the grace period given to a vendor should be on the order of one or two days, if we only take this risk into account.  Has it always been like this?

Let’s assume that in any given year there are twice as many vulnerabilities available to be found as there are reported vulnerabilities.  If we set N = 2X and fix the grace period at one week, what was the probability of a collision in different years?  The formula becomes 1 - exp(-Y(Y-1)/(4X)), where Y is X/52 rounded up to the next integer.

Year         Vulnerabilities Reported    Probability
1988-1996    —                           0
1997         252                         0.02
1998         246                         0.02
1999         918                         0.08
2000         1018                        0.09
2001         1672                        0.15
2002         1959                        0.16
2003         1281                        0.11
2004         2363                        0.20
2005         4876                        0.36
2006         6560                        0.46
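
A small variation of the same sketch reproduces this table, under the stated assumptions (N = 2X, one-week grace period):

    from math import ceil, exp

    reported = {1997: 252, 1998: 246, 1999: 918, 2000: 1018, 2001: 1672,
                2002: 1959, 2003: 1281, 2004: 2363, 2005: 4876, 2006: 6560}

    for year, x in reported.items():
        n = 2 * x          # assumed: twice as many findable as reported
        y = ceil(x / 52)   # researchers active during a one-week grace period
        print(year, round(1 - exp(-y * (y - 1) / (2 * n)), 2))
    # -> 1997 0.02 ... 2005 0.36, 2006 0.46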

So, according to this table, a grace period of one week would have seemed an acceptable policy before 2000, perhaps fair in 2000-2003, but is now unacceptably long.  These calculations are of course very approximate, but they should be useful enough to serve as guidelines.  They show, much to my chagrin, that people arguing for the full and immediate disclosure of vulnerabilities may have a point.



In any case, we can’t afford, as a matter of national and international cyber-security, to let vendors waste time before producing patches;  vendors need to take responsibility, even if the vulnerability is not publicly known.  This exercise also illustrates why a patch-it-later attitude could have seemed almost excusable years ago, but not now.  These figures are a serious problem for managing security with patches, as opposed to secure coding from the start:  I believe it is no longer feasible for traditional software development processes to issue patches before the threat of malicious disclosure and exploits becomes significant.  Finally, the grace period that we can afford to give vendors may be too short for them to issue patches, but that doesn’t mean it should be zero.

Note:  the astute reader will remark that the above statistics are for any two vulnerabilities to match, whereas for patching we are talking about a specific vulnerability being discovered independently.  The odds of that specific occurrence are much smaller.  However, systematic management by patches needs to consider all vulnerabilities at once, which brings us back to the above calculations.

 

Community Comments & Feedback to Security Absurdity Article

Back in May, I commented here on a blog posting about the failings of current information security practices.  Well, after several months, the author, Noam Eppel, has written a comprehensive and thoughtful response based on all the feedback and comments he received to that first article.  That response is a bit long, but worth reading.

Basically, Noam’s essays capture some of what I (and others) have been saying for a while: many people are in denial about how bad things are, in part because they may not really be seeing the “big picture.”  I talk with hundreds of people in government, academia, and industry around the world every few months, and the picture that emerges is as bad as, or worse than, what Noam has outlined.

Underneath it all, people seem to believe that putting up barriers and patches on fundamentally bad designs will lead to secure systems.  It has been shown again and again (and not only in IT) that this is mistaken.  Getting close to secure operation requires rigorous design and testing, careful constraints on features and operation, and planned segregation and limitation of services.  You can’t depend on best practices and people doing the right thing all the time.  You can’t stay ahead of the bad guys by deploying patches to yesterday’s problems.  Unfortunately, managers don’t want to make the hard decisions and pay the costs necessary to really get secure operations, and it is in the interest of almost all vendors to encourage them down the path of third-party patching.

I may expand on some of those issues in later blog postings, depending on how worked up I get, and how the arthritis/RSI in my hands is doing (which is why I don’t write much for journals & magazines, either).  In the meantime, go take a look at Noam’s response piece.  And if you’re in the US, have a happy Thanksgiving.

[posted with ecto]