Posts tagged Microsoft

Thoughts on Virtualization, Security and Singularity

The “VMM Detection Myths and Realities” paper has already been widely reported and discussed.  It considers whether a theoretical piece of software could detect that it is running inside a Virtual Machine Monitor (VMM).  An undetectable VMM would be “transparent”.  The authors make many arguments against the practicality or the commercial viability of a VMM that could provide performance, stealth, and reproducible, consistent timings.  The arguments are interesting and reasonably convincing that it is currently infeasible to absolutely guarantee undetectability.

However, I note that the authors are arguing from essentially the same position as atheists arguing that there is no God.  They argue that the existence of a fully transparent VMM is unlikely, impractical, or would require an absurd amount of resources, both physical and in software development effort.  This is reasonable because the VMM has to fail only once in preventing detection, there are many ways in which it can fail, and preventing each kind of detection is complex.  However, this is not an airtight, formal proof that such a VMM is impossible and cannot exist; a new breakthrough or an “alien science-fiction” god-like technology might make it possible.

Then the authors argue that with the spread of virtualization, it will become a moot point for malware to try to detect whether it is running inside a virtual machine.  One might be tempted to remark: doesn’t this argument also cut the other way, making it a moot point for an operating system or a security tool to try to detect whether it is running inside a malicious VMM?
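
For context, here is a minimal sketch of the kind of check malware commonly performs today (assuming an x86 processor and the GCC/Clang <cpuid.h> builtins): CPUID leaf 1 reports a “hypervisor present” bit, and leaf 0x40000000 returns a vendor signature such as “VMwareVMware” or “KVMKVMKVM”.  A truly transparent VMM would have to lie consistently about this and every other observable interface.

```c
/* vmcheck.c -- crude hypervisor-presence probe via CPUID (x86, GCC/Clang).
 * Illustrative only: a VMM that wants to hide can clear the bit and forge
 * the vendor leaf, so the absence of these artifacts proves nothing. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: bit 31 of ECX is reserved for hypervisor use. */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not available\n");
        return 1;
    }
    int hv_present = (ecx >> 31) & 1;
    printf("hypervisor bit: %d\n", hv_present);

    if (hv_present) {
        /* Leaf 0x40000000: EBX, ECX, EDX hold a 12-byte vendor signature,
         * e.g. "VMwareVMware", "KVMKVMKVM", "Microsoft Hv", "XenVMMXenVMM". */
        char vendor[13] = { 0 };
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &ecx, 4);
        memcpy(vendor + 8, &edx, 4);
        printf("hypervisor vendor: %s\n", vendor);
    }
    return 0;
}
```

Of course, a VMM intent on hiding can simply clear the bit and forge the signature, which is exactly why the harder detection approaches rely on timing and resource side effects instead.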

McAfee’s “secure virtualization”
The security seminar by George Heron answers some of the questions I was asking at last year’s VMworld conference, and elaborates on what I had in mind then.  The idea is to integrate security functions within the virtual machine monitor.  Malware nowadays prevents the installation of security tools and interferes with them as much as possible.  If malware is successfully confined inside a virtual machine, and the security tools operate from outside that scope, it could become impossible for an attacker to disable them.  I really like that idea.
 
The security tools could reasonably expect to run directly on the hardware or on an unvirtualized host OS; because of this, VMM detection isn’t a moot point for the defender.  However, the presentation did not discuss whether the McAfee security suite would attempt to detect if the VMM itself had been virtualized by an attacker.  Also, would it be possible to detect a “bad” VMM if the McAfee security tools themselves run inside a virtualized environment on top of the “good” VMM?  Perhaps it would need more hooks into the VMM to do this: many, in fact, to catch all the possible ways in which a malicious VMM can fail to hide itself properly.  What is the cost of all these detection attempts, which must be executed regularly?  Aren’t they prohibitive, making strong detection of a malicious VMM impractical?  In the end, I believe this may be yet another race that depends on how much effort each side is willing to put into cloaking and detection.  Practical detection is almost as hard as practical hiding, and the detection cost has to be paid on every machine, all the time.
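
To make the cost argument concrete, here is a rough sketch of the classic timing probe (again assuming x86 and GCC/Clang; the 1000-cycle threshold is an invented placeholder that would need per-machine calibration): CPUID forces a trap into a hardware-assisted hypervisor, so its average latency measured with RDTSC tends to be much higher under a VMM than on bare metal.  A check like this has to be rerun regularly, and a sufficiently determined VMM can try to skew the timestamp counter, which is exactly the arms race described above.

```c
/* timeprobe.c -- time CPUID with RDTSC (x86, GCC/Clang), illustrative sketch.
 * CPUID causes a VM exit under hardware-assisted virtualization, so its
 * measured latency is usually far higher inside a VMM than on bare metal.
 * The threshold below is a guess and would need calibration per machine. */
#include <stdio.h>
#include <cpuid.h>
#include <x86intrin.h>   /* __rdtsc() */

#define ITERATIONS 100000
#define GUESSED_VM_THRESHOLD 1000ULL   /* cycles; placeholder, not a constant of nature */

int main(void)
{
    unsigned int a, b, c, d;
    unsigned long long total = 0;

    for (int i = 0; i < ITERATIONS; i++) {
        unsigned long long start = __rdtsc();
        __cpuid(0, a, b, c, d);          /* trapped by the VMM */
        total += __rdtsc() - start;
    }

    unsigned long long avg = total / ITERATIONS;
    printf("average CPUID latency: %llu cycles\n", avg);
    printf("heuristic verdict: %s\n",
           avg > GUESSED_VM_THRESHOLD ? "probably virtualized"
                                      : "probably bare metal");
    return 0;
}
```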


Which Singularity?
Microsoft’s Singularity project attempts to create an OS and execution environment that is simpler and secure by design.  What strikes me is how it resembles the “white list” approach I’ve been talking about.  “Singularity” is about constructing secure systems from provable statements (“manifests”): it declares what processes do and what may happen, instead of focusing on what must not happen.
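
To illustrate the difference in mindset, here is a toy default-deny sketch (not Singularity’s actual manifest format; the operations and manifest entries are invented for illustration).  The point is simply that anything a process has not declared up front is refused:

```c
/* allowlist.c -- toy "state what may happen" enforcement, invented example.
 * Not Singularity's mechanism; it only illustrates default-deny against a
 * declared manifest, as opposed to enumerating what must not happen. */
#include <stdio.h>
#include <string.h>

/* Hypothetical per-process manifest: the only operations it may perform. */
static const char *manifest[] = { "read:/etc/hosts", "connect:dns", NULL };

static int permitted(const char *op)
{
    for (int i = 0; manifest[i] != NULL; i++)
        if (strcmp(manifest[i], op) == 0)
            return 1;
    return 0;   /* anything not declared is denied by default */
}

int main(void)
{
    const char *requests[] = { "read:/etc/hosts", "write:/etc/passwd" };
    for (int i = 0; i < 2; i++)
        printf("%-20s -> %s\n", requests[i],
               permitted(requests[i]) ? "allowed" : "denied");
    return 0;
}
```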

Last year I thought that virtualization and security could provide a revolution; now I think it’s more of the same “keep building defective systems and defend them vigorously”, just somewhat stronger.  Even if I find the name somewhat arrogant, “Singularity” suggests a future for security that is more attractive and fundamentally stable than yet another arms race.  In the meantime, though, “secure virtualization” should help, so expect lots of marketing about it.

Think OpenOffice is the solution?  Think again.

In my last post, I ranted about a government site making documents available only in Word.  A few people have said to me “Get over it—use OpenOffice instead of the Microsoft products.”  The problem is that OpenOffice is potentially dangerous too—there is too much functionality (some of it perhaps undocumented) in Word (and Office) documents.

Now, we have a virus specific to OpenOffice.  We’ve had viruses that run in emulators, too.  Trying to be compatible with something fundamentally flawed is not a security solution.  That’s also my objection to virtualization as a “solution” to malware.

I don’t mean to be unduly pejorative, but as the saying goes, even if you put lipstick on a pig, it is still a pig.

Word and the other Office components are useful programs, but if MS really cared about security, they would provide a transport encoding that excluded macros and potentially executable attachments—and encourage its use!  RTF is probably that encoding for text documents, but it is not obvious to the average user that it should be used instead of the .doc format for exchanging files.  And what is there for Excel, PowerPoint, etc.?
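
As a rough illustration of why the binary .doc format is a poor transport encoding, here is a crude sketch that checks whether a legacy Word file appears to carry a VBA macro project.  It assumes that macro-bearing .doc files are OLE2 compound documents containing a storage named “Macros” (whose name is stored as UTF-16); it is a raw byte scan, not a real parser, so treat any verdict as a hint rather than proof.

```c
/* macroscan.c -- crude heuristic: does a legacy .doc appear to contain VBA?
 * Legacy Office files are OLE2 compound documents (magic D0 CF 11 E0 A1 B1 1A E1);
 * Word keeps macro projects under a storage named "Macros", and storage names
 * are recorded in UTF-16LE.  A raw byte scan for that name is only a rough
 * indicator -- illustrative sketch only, not a parser. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static const unsigned char ole_magic[8] =
    { 0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1 };

/* "Macros" encoded as UTF-16LE, as it appears in the OLE directory. */
static const unsigned char macros_utf16[12] =
    { 'M', 0, 'a', 0, 'c', 0, 'r', 0, 'o', 0, 's', 0 };

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s file.doc\n", argv[0]); return 2; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 2; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    unsigned char *buf = malloc((size_t)size);
    if (!buf || fread(buf, 1, (size_t)size, f) != (size_t)size) {
        fprintf(stderr, "read error\n");
        return 2;
    }
    fclose(f);

    if (size < 8 || memcmp(buf, ole_magic, 8) != 0) {
        printf("not an OLE2 (legacy Office) file\n");
        return 0;
    }

    for (long i = 0; i + (long)sizeof(macros_utf16) <= size; i++) {
        if (memcmp(buf + i, macros_utf16, sizeof(macros_utf16)) == 0) {
            printf("macro storage name found -- document likely contains VBA\n");
            return 1;
        }
    }
    printf("no macro storage name found\n");
    return 0;
}
```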

What security push?


Update: additions were made 4/19 and 4/24; see the end of the post.

Back in 2002, Microsoft performed a “security standdown” that Bill Gates publicly stated cost the company over $100 million.  That extreme measure was taken because of numerous security flaws popping up in Microsoft products, steadily chipping away at MS’s reputation, customer safety, and internal resources.  (I was told by one MS staffer that the response to a major security flaw often cost close to $1 million in staff time, product changes, customer response, etc.  I don’t know if that is true, but the real figure certainly was, and still is, substantial.)

Without a doubt, people inside Microsoft took the issue seriously.  They put all their personnel through a security course, invested heavily in new testing technologies, and even went so far as to convene an advisory board of outside experts (the TCAAB)—including some who have not always been favorably disposed towards MS security efforts.  Security of the Microsoft code base suddenly became a Very Big Deal.

Fast-forward five years: when Vista was released a few months ago, we saw lots of announcements that it was the most secure version of Windows ever, but that claim was not otherwise qualified; a cynic might comment that such an achievement would not be difficult.  The user population has become habituated to the monthly release of security patches for existing products, with the occasional emergency patch.  Bundling the patches together undoubtedly helps reduce the overhead of producing them, but it also serves to obscure how many different flaws are contained in each patch set.  The number of flaws may not really have decreased all that much from years past.

Meanwhile, reports from inside MS indicate that there was no comprehensive testing of personnel to see whether the security training worked, and no follow-on training.  The code base for new products has continued to grow, opening new possibilities for flaws and misconfiguration.  The academic advisory board may still exist, but I can’t find a recent mention of it on the Microsoft web pages, and some of the people I know who were on it (myself included) were dismissed over a year ago.  The external research program at MSR that connected with academic institutions doing information security research seems to have largely evaporated—the WWW page for the effort lists John Spencer as contact, and he retired from Microsoft last year.  The upcoming Microsoft Research Faculty Summit has 9 research tracks, and none of them are in security.

Microsoft seems to project the attitude that they have solved the security problem.

If that’s so, why are we still seeing significant security flaws appear, not only in their old software but also in new software written under the new, extra-special security regime, such as Vista and Longhorn?  The ANI flaw and the recent DNS flaw are both glaring examples of major problems that shouldn’t have been in the current code: the ANI flaw is very similar to a years-old flaw that was already known inside Microsoft, and the DNS flaw is yet another buffer overflow!!  There are even reports that there may be dozens (or hundreds) of patches awaiting distribution for Vista.

Undoubtedly, the $100 million spent back in 2002 was worth something—the code quality has definitely improved.  There is greater awareness inside Microsoft of security and privacy issues.  I also know for a fact that there are a lot of bright, talented, and very motivated people inside Microsoft who care about these issues.  But questions remain: did Microsoft get its money’s worth?  Did it invest wisely, and if so, why are we still seeing so many (and so many silly) security flaws?  Why does it seem that security is no longer a priority?  What does that portend for Vista, Longhorn, and Office 2007?  (And if you read the “standdown” article, one wonders also about Mr. Nash’s posterior. grin )

I have great respect for many of the things Microsoft has done, and admiration for many of the people who work there.  I simply wish they had some upper management who would realize that security (and privacy) are ongoing process needs, not one-time problems to overcome with a “campaign.”

What do you think?


Update 4/19: The TCAAB apparently does still exist, but with a greater focus on privacy issues than security.  I do not know who the current members might be.

Update 4/24: I have heard informally from someone inside Microsoft in response to this post.  He pointed out several issues that I think are valid and deserve airing here:

  1. Security training of personnel is ongoing.  It is still unclear to me whether they are employing good educational methods, including follow-up testing, to optimize their instruction.
  2. The TCAAB does indeed continue (and was meeting when I made the original post!).  It has undergone some changes since it was announced, but is largely the same as when it was formed.  What they are doing, and what effect they are having (if any), is unclear.
  3. Microsoft’s patch process is much smoother now, and bundled patches are easier to apply than lots of individual ones.  (However, there are still a lot of patches for things that shouldn’t be in the code.)
  4. The loss of outreach to academia by MSR does not imply that they aren’t still doing research on security issues.

Many of my questions remain unanswered, including the one about Mr. Nash’s condition….