Drone “Flaw” Known Since 1990s Was a Vulnerability
The problem is that it is logically impossible to prove a negative, e.g., that there is no kryptonite (or that there is no God, etc.). Likewise, it is logically impossible to prove that there does not exist a threat agent with the capabilities to exploit a given flaw in your software. The counter-argument is that delivering the software then becomes impractical, as the cost and time required to remove extremely unlikely risks escalate. However, this argument is mostly security by obscurity: if you know that something might be exploitable and you don't fix it because you think no adversary will have the capability to exploit it, in reality you're hoping that they won't find out, or be told, how (for the sake of this argument, I'm ignoring brute-force computational capabilities).

In addition, exploitability is a thorny problem. It is very difficult to be certain that a flaw in a complex system is not exploitable. Moreover, a flaw may not be exploitable now, but may become so when a software update is performed! I wrote about this in "Classes of vulnerabilities and attacks", where I discussed the concepts of latent, potential and exploitable vulnerabilities. This is important enough to quote:
"A latent vulnerability consists of vulnerable code that is present in a software unit and would usually result in an exploitable vulnerability if the unit was re-used in another software artifact. However, it is not currently exploitable due to the circumstances of the unit’s use in the software artifact; that is, it is a vulnerability for which there are no known exploit paths. A latent vulnerability can be exposed by adding features or during the maintenance in other units of code, or at any time by the discovery of an exploit path. Coders sometimes attempt to block exploit paths instead of fixing the core vulnerability, and in this manner only downgrade the vulnerability to latent status. This is why the same vulnerability may be found several times in a product or still be present after a patch that supposedly fixed it.
A potential vulnerability is caused by a bad programming practice recognized to lead to the creation of vulnerabilities; however the specifics of its use do not constitute a (full) vulnerability. A potential vulnerability can become exploitable only if changes are made to the unit containing it. It is not affected by changes made in other units of code. For example, a (potential) vulnerability could be contained in the private method of an object. It is not exploitable because all the object’s public methods call it safely. As long as the object’s code is not changed, this vulnerability will remain a potential vulnerability only.
Vendors often claim that vulnerabilities discovered by researchers are not exploitable in normal use. However, they are often proved wrong by proof of concept exploits and automated attack scripts. Exploits can be difficult and expensive to create, even if they are only proof-of-concept exploits. Claiming unexploitability can sometimes be a way for vendors to minimize bad press coverage, delay fixing vulnerabilities and at the same time discredit and discourage vulnerability reports. "
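To make these definitions concrete, here is a minimal, hypothetical C sketch (not from the original post; the names and the scenario are invented for illustration). An internal helper copies data without bounds checking, but every public entry point currently validates its input first, so the flaw is only a potential vulnerability in the sense quoted above. Reusing the helper in another module, or adding a caller that skips the check, would turn it into a latent and eventually an exploitable vulnerability without the helper itself changing.

#include <stdio.h>
#include <string.h>

#define MAX_NAME_LEN 32

/* Internal helper: copies without bounds checking. On its own this is a
 * bad practice (a potential vulnerability), but it is not exploitable as
 * long as every caller validates the length first. */
static void copy_name(char *dst, const char *src) {
    strcpy(dst, src);               /* unsafe if src is MAX_NAME_LEN bytes or more */
}

/* Public entry point: validates length before calling the helper, so the
 * flaw is not reachable with attacker-controlled oversized input. */
int set_name(char *dst, const char *src) {
    if (strlen(src) >= MAX_NAME_LEN)
        return -1;                  /* reject oversized input */
    copy_name(dst, src);
    return 0;
}

/* If copy_name() were later reused elsewhere, or a new public function
 * called it without the length check, the same flaw would become latent
 * and then exploitable, even though copy_name() itself never changed. */
int main(void) {
    char name[MAX_NAME_LEN];
    if (set_name(name, "Predator") == 0)
        printf("name = %s\n", name);
    return 0;
}

The point of the sketch is that "fixing" the problem by guarding the callers, rather than the helper, is exactly the exploit-path blocking described above: it only downgrades the vulnerability, it does not remove it.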
Discounting or underestimating the capabilities, current and future, of threat agents is similar to vendors' claims that a vulnerability is not really exploitable; we know that such claims have been proven wrong ad nauseam. Add configuration problems to the use of the "operational definition" of a vulnerability by the military and its contractors, and you get an endemic potential for military catastrophes.
on Monday, January 4, 2010 at 03:04 PM