Virtualization Is Successful Because Operating Systems Are Weak
It occurred to me that virtual machine monitors (VMMs) provide functionality similar to that of operating systems. Virtualization supports functions such as these:
- Availability
  - Minimized downtime for patching OSes and applications
  - Restart a crashed OS or server
- Scalability
  - More or different images as demand changes
- Isolation and compartmentalization
- Better hardware utilization
- Hardware abstraction for OSes
- Support legacy platforms
Compare it to the list of operating system duties:
- Availability
  - Minimized downtime for patching applications
  - Restart crashed applications
- Scalability
  - More or different processes as demand changes
- Isolation and compartmentalization
  - Protected memory
  - Accounts, capabilities
- Better hardware utilization (with processes)
- Hardware abstraction for applications
The similarity suggests that virtualization solutions compete with operating systems. Setting aside the capability to run legacy or entirely different operating systems simultaneously, I now believe that part of their success must come from operating systems not satisfying these needs well enough. Typical operating systems lack security, reliability and ease of maintenance. They have drivers in kernel space; Windows Vista thankfully now supports running some drivers in user space, and Linux is moving in that direction. The complexity is staggering. This is reflected in the security guidance: hardening guides and “benchmarks” (essentially evaluations of configuration settings) are long and complex. The attempt to solve the federal IT maintenance and compliance problem created the SCAP and XCCDF standards, which are currently ambiguously specified, buggy and very complex. The result of all this is intensive, stressful and inefficient maintenance in an environment of numerous and unending vulnerability advisories and patches.
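To give a concrete sense of how large these benchmarks get, here is a minimal Python sketch (my own illustration, not part of any SCAP tooling) that tallies the groups, rules, values and checks in an XCCDF benchmark file; the file name “benchmark.xml” is a placeholder, and the namespace-agnostic matching is an assumption made for the example:

```python
# Rough sketch: count XCCDF elements to gauge the size of a benchmark.
# Assumes a benchmark saved locally as "benchmark.xml" (hypothetical name).
import xml.etree.ElementTree as ET
from collections import Counter

def summarize_benchmark(path):
    """Tally elements by local name, ignoring which XCCDF namespace version is used."""
    counts = Counter()
    for _, elem in ET.iterparse(path):
        counts[elem.tag.rsplit('}', 1)[-1]] += 1  # strip the "{namespace}" prefix
        elem.clear()  # keep memory bounded on large benchmarks
    return counts

if __name__ == "__main__":
    counts = summarize_benchmark("benchmark.xml")
    for name in ("Group", "Rule", "Value", "check"):
        print(name, counts.get(name, 0))
```

Running something like this against a typical OS hardening benchmark makes the point: the sheer number of rules and checks is itself a symptom of the underlying complexity.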
It looks as if we have sinking boats, so we’re putting them inside a bigger, more powerful boat: virtualization. In reality, though, virtualization typically depends on yet another full-blown operating system.
VMware ESX Server runs its own OS with drivers. Xen and the offerings based on it have a full, general-purpose OS in domain 0, in command and control of the VMM (notwithstanding disaggregation). Microsoft’s “Hyper-V” requires a full-blown Windows operating system to run. So what we’re really doing is exchanging an untrusted OS for another one that we are supposed to trust more for some reason. This other OS also needs patches, configuration and maintenance, so now we have multiple OSes to maintain! What did we gain? We don’t trust OSes, but we trust “virtualization” that depends on more OSes? At least ESX is “only” 50 MB, simpler and smaller than the others, but its number of defects per MB of binary code, as measured by patches issued, is not convincing.
I’m no longer convinced that a virtualization solution plus a guest OS is significantly more secure or functional than a single well-designed OS could be, in theory. Defense in depth is good, but the extent of the spread of virtualization may be an admission that we don’t trust operating systems enough to let them stand on their own. The practice of wiping and reinstalling an OS after an application or an account is compromised, or of deploying a new image by default, suggests that there is little trust in the depth provided by current OSes.
As for ease of management and availability versus patching, I don’t see why operating systems couldn’t be managed in a smart manner just as ESX is, migrating applications as necessary. ESX is an operating system anyway… I believe that all the special things a virtualization solution does for functionality and security, as well as the “new” opportunities being researched, could be done just as well by a trustworthy, properly designed OS; there may be a thesis or two in figuring out how to implement them back inside an operating system.
What virtualization vendors are really offering is a clever way to smoothly replace one operating system with another. This may be how an OS monopoly could be dislodged, and it would perhaps explain the virtualization-unfriendly clauses in the licensing options for Vista: virtualization could become a threat to the dominance of Windows if application developers started coding for the underlying OS instead of the guest. Of course, even with a better OS we’d still need virtualization for testbeds like ReAssure, and for legacy applications. Perhaps ReAssure could help test new, better operating systems.
(This text is the essence of my presentation in the panel on virtualization at the 2008 CERIAS Symposium.)
Related reading:
Heiser G et al. (2007) Towards trustworthy computing systems: Taking microkernels to the next level. ACM Operating Systems Review, 41
Tanenbaum AS, Herder JN and Bos H (2006) Can we make operating systems reliable and secure? Computer, 39