The Vulnerability Protection Racket

TippingPoint’s Zero Day Initiative (ZDI) provides some interesting data.  TippingPoint made its ZDI “disclosure pipeline” public on August 28, 2006.  As of today, it lists 49 vulnerabilities from independent researchers, which have been waiting an average of 114 days for a fix, plus 12 vulnerabilities from TippingPoint’s own researchers.  With those included, the average wait for a fix is 122 days, or about 4 months!  Moreover, 56 of the 61 are high-severity vulnerabilities, and they come from high-profile vendors: Microsoft, HP, Novell, Apple, IBM Tivoli, Symantec, Computer Associates, Oracle…  Some high-severity issues have been languishing for more than 9 months.

Hmm.  ZDI is supposed to be a “best-of-breed model for rewarding security researchers for responsibly disclosing discovered vulnerabilities.”  How is it responsible to take 9 months to fix a known but secret high-severity vulnerability?  It’s not directly ZDI’s fault that the vendors are taking so long, but ZDI isn’t giving vendors much incentive either.  This suggests that programs like ZDI’s have a pernicious effect.  They buy the information from researchers, who are then forbidden from disclosing the vulnerabilities.  More vulnerabilities are found thanks to the monetary incentive, but only people paying for protection services get any peace of mind.  The software vendors don’t care much, since the vulnerabilities remain secret.  The rest of us are worse off than before, because more vulnerabilities remain secret for an unreasonable length of time.

Interestingly, this is what was predicted several years ago in Kannan, K. and Telang, R., “Market for Software Vulnerabilities?  Think Again,” Management Science 51 (2005), pp. 726-740.  Their model predicted worse social consequences from these programs than from no vulnerability handling at all, due to races with crackers, an increased volume of vulnerabilities, and unequal protection of targets.  This makes another conclusion of the paper all the more interesting, and likely valid:  CERT/CC offering rewards to vulnerability discoverers should provide the best outcomes, because the information would be shared systematically and equally.  I would add that CERT/CC is also in a good position to find out whether a vulnerability is being exploited in the wild, in which case it can release an advisory and make the vulnerability information public sooner.  A vendor like TippingPoint has a conflict of interest in doing so, because it decreases the value of its protection services.

I tip my hat to TippingPoint for making their pipeline information public.  However, because they provide no deadlines to vendors or incentives for responsibly patching the vulnerabilities, the very existence of their service, and of similar ones from other vendors, hurts those who don’t subscribe.  That’s what makes vulnerability protection services a racket.

 

Vulnerability disclosure grace period needs to be short, too short for patches

One of the most convincing arguments for full disclosure is that while the polite security researcher is waiting for the vendor to issue a patch, that vulnerability MAY already have been sold and used to exploit systems.  Everyone in charge of administering a system therefore has a right to know ALL the details so that they can protect themselves, and that right trumps all other rights.

That argument rests upon the premise that if one person found the vulnerability, it is possible for others to find it as well.  The key word here is “possible”, not “likely”, or so I thought when I started writing this post.  After all, vulnerabilities can be hard to find, which is a reason why products are released with vulnerabilities.  How likely is it that two security researchers will find the same vulnerability? 

Mathematically speaking, the chance that two successful security researchers (malicious or not) will find the same flaw is similar to the birthday problem.  Let’s assume that there are X successful security researchers per year, each finding one vulnerability out of the N vulnerabilities available to be found.  In 2006, 6,560 vulnerabilities were reported, and 4,876 in 2005 (according to the National Vulnerability Database).  Let’s assume that the number of vulnerabilities available to be found in a year is about 10,000;  this is almost surely an underestimate.  I’ll assume that all of them are equally likely to be found.  An additional twist on the birthday problem is that people are entering and leaving the room;  not all X researchers are present at the same time.  This is because we worry about two vulnerabilities being found within the grace period given to a vendor.

If there are more successful researchers in the room than vulnerabilities, then by the pigeonhole principle there must be a collision.  Let’s say that the grace period given to a vendor is one month, so the number of researchers present during any one grace period is Y = X/12.  Then there would need to be 120,000 successful security researchers in a year for a collision to be guaranteed.  For fewer researchers, the likelihood that two of them find the same vulnerability is 1 - exp(-Y(Y-1)/2N) (c.f. Wikipedia).  Let’s assume that there are 5,000 successful researchers in a given year, to match the average number of vulnerabilities reported in 2005 and 2006.  The probability that two researchers find the same vulnerability within a given grace period is then:

Grace Period    Probability
1 month         0.9998
1 week          0.37
1 day           0.01
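The table above can be checked directly with a short script; this is a sketch of the calculation only, using the assumptions already stated (5,000 successful researchers per year, a pool of 10,000 vulnerabilities, and the birthday-problem approximation):

```python
import math

def collision_probability(y, n):
    """Birthday-problem approximation: probability that at least two of y
    researchers independently find the same one of n equally likely flaws."""
    if y >= n:  # pigeonhole: more researchers than flaws guarantees a collision
        return 1.0
    return 1.0 - math.exp(-y * (y - 1) / (2.0 * n))

X, N = 5000, 10000  # successful researchers per year, vulnerabilities available
for label, periods_per_year in [("1 month", 12), ("1 week", 52), ("1 day", 365)]:
    y = X / periods_per_year  # researchers "in the room" during one grace period
    print(f"{label}: {collision_probability(y, N):.4f}")
```

Rounded to two decimals, the script reproduces the 0.9998, 0.37, and 0.01 figures in the table.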


In other words, nowadays the grace period given to a vendor should be on the order of one or two days, if we only take this risk into account.  Has it always been like this?

Let’s assume that in any given year, there are twice as many vulnerabilities available to be found as there are reported vulnerabilities.  If we set N = 2X and fix the grace period at one week, what was the probability of a collision in different years?  The formula becomes 1 - exp(-Y(Y-1)/4X), where Y is the ceiling of X/52.

Year        Vulnerabilities Reported    Probability
1988-1996   0
1997        252                         0.02
1998        246                         0.02
1999        918                         0.08
2000        1018                        0.09
2001        1672                        0.15
2002        1959                        0.16
2003        1281                        0.11
2004        2363                        0.20
2005        4876                        0.36
2006        6560                        0.46
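The yearly figures can be reproduced the same way; again this is only a sketch, with the reported counts copied from the table above and the stated assumptions N = 2X and Y = ceiling(X/52):

```python
import math

# Vulnerabilities reported per year (from the table above)
reported = {1997: 252, 1998: 246, 1999: 918, 2000: 1018, 2001: 1672,
            2002: 1959, 2003: 1281, 2004: 2363, 2005: 4876, 2006: 6560}

for year, x in sorted(reported.items()):
    y = math.ceil(x / 52)  # researchers active within a one-week grace period
    n = 2 * x              # assume twice as many flaws exist as were reported
    p = 1.0 - math.exp(-y * (y - 1) / (2.0 * n))
    print(f"{year}: {p:.2f}")
```

Because N = 2X, the exponent -Y(Y-1)/2N is the same as -Y(Y-1)/4X in the formula above.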

So, according to this table, a grace period of one week would have seemed an acceptable policy before 2000, perhaps fair in 2000-2003, but is now unacceptably long.  These calculations are of course very approximate, but they should be useful enough to serve as guidelines.  They show, much to my chagrin, that people arguing for the full and immediate disclosure of vulnerabilities may have a point.



In any case, as a matter of national and international cyber-security, we can’t afford to let vendors idly waste time before producing patches;  vendors need to take responsibility, even if the vulnerability is not publicly known.  This exercise also illustrates why a patch-it-later attitude could have seemed almost excusable years ago, but not now.  These figures pose a serious problem for managing security with patches, as opposed to secure coding from the start:  I believe it is no longer feasible for traditional software development processes to issue patches before the threat of malicious disclosure and exploits becomes significant.  Finally, the grace period we can afford to give vendors may be too short for them to issue patches, but that doesn’t mean it should be zero.

Note:  the astute reader will remark that the above statistics are for any two vulnerabilities to match, whereas for patching we are talking about one specific vulnerability being discovered independently.  The odds of that specific occurrence are much smaller.  However, in systematic management by patches we need to consider all vulnerabilities at once, which brings us back to the above calculations.