Rethinking the PC: why virtual machines should replace operating systems

Disclosure: Most of the vendors mentioned are clients of the author.

Technology tends to develop along a linear path, even when something happens that should change its course. Take PC operating systems, which arrived in the 1980s. One of the big problems they brought was the need to keep the operating system and applications from breaking every time Intel modified its chipset or firmware. The fix, ultimately, was to create virtual machines – a virtual hardware layer that would remain constant, regardless of what happened to the underlying hardware.

Most of the issues we’ve encountered with deployments over the past two decades have revolved around IT’s need to keep the PC image static while the hardware changes underneath it. If OEMs instead preloaded a virtual machine from VMware or Microsoft – and IT then put its image on that – you could ensure a level of compatibility you don’t typically get today.

So let’s take this week to rethink PCs, virtual machines and operating systems.

Rethink PCs

When PCs were first created, the people who built the operating system and the people who built the hardware were effectively the same: Apple built both, and IBM licensed its operating system from Microsoft so that, in practice, it could do the same. But on the Windows side, the operating system quickly decoupled from the hardware. This made for a far more competitive market, but also one unusually plagued by incompatibilities and failures, because the two halves of the PC were no longer developed together.

For a while at the start of this century – when Intel and Microsoft weren’t communicating well – we got disasters like Windows Vista and Windows 8, platforms even Microsoft would love to forget. Things eventually evened out, and most of those issues are behind us. But in some ways the problem has gotten worse as AMD has become a powerhouse and Qualcomm now provides PC processors as well. That variety of hardware forces Intel to ramp up its own development efforts, increasing the odds that keeping the operating system reliable across all of it becomes harder, not easier.

One way Microsoft addresses this problem is with its Surface line: the company specs custom processors for the Surface Pro X and the upcoming Surface Neo dual-screen device, from Qualcomm and Intel, respectively. Custom processors are an interesting idea, but if Dell, HP and Lenovo all went down this route, the resulting hardware complexity – and the chance of operating system crashes – would increase dramatically.

In this new world, the operating system side of the solution – from Apple, Google and Microsoft – needs to be free to move forward as fast as those companies can manage, and the hardware platforms from AMD, Intel and Qualcomm need the same freedom, without either side breaking the other.

Enter the virtual machine

A virtual machine runs on top of a hypervisor, which lets you run multiple VM instances on one box, each isolated from the others. The technology is most commonly used on servers, where many users share the same hardware.
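To make that concrete, here is a minimal sketch of what defining a few isolated instances on a desktop hypervisor might look like, using the libvirt Python bindings on a Linux/KVM host; the VM names, sizes and disk paths are hypothetical, and a Windows machine would use Hyper-V’s own tooling instead.

```python
# A minimal sketch, assuming a Linux host with KVM and the libvirt Python
# bindings installed. The VM names, sizes and disk paths are hypothetical.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/{name}.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor

# One isolated instance per usage model; each sees the same virtual hardware,
# no matter whose silicon the physical PC is built on.
for name in ("work", "school", "personal"):
    dom = conn.defineXML(DOMAIN_XML.format(name=name))
    dom.create()                             # boot the instance

print([d.name() for d in conn.listAllDomains()])
conn.close()
```

The point is that a definition like this is written once; the instances behave identically whether the underlying silicon comes from Intel, AMD or Qualcomm.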

On a PC, you could have separate VM instances for work, school and personal use, each with a different level of lockdown. The corporate VM would be locked down tightly, so the business is better protected from the other usage patterns; malware often gets into companies via employees who are careless about personal use of their company PC. Today you mostly see this kind of setup among developers, who need to keep their development projects separate from their corporate image.

Even with a three-image setup (work, school, personal), each supporting organization could optimize its own image: work IT handles the work image, school IT handles the school image, and the OEM supports the personal image (which it might charge for). You’d get a higher level of security because the two or three usage models would be isolated from one another – and you’d free the OS vendors and platform vendors to advance their platforms faster, because they could target a fixed VM specification instead of every hardware permutation.

The VM vendor, whether VMware or Microsoft, could then work with the hardware vendors to maximize flexibility and performance, and PC hardware would evolve into a better multi-image client. Other options: a virtual machine for your kids on the family PC that is automatically purged and rebuilt on a regular schedule, or operating system images tuned for things like esports. You might even see games that run natively in one virtual machine while the others are suspended during competition. And, of course, IT would get a virtual hardware image that stays as stable as it needs to be across hardware vendors and hardware generations.
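As a rough illustration of the purge-and-rebuild idea, the hypothetical snippet below reverts a kids’ VM to a known-clean snapshot; it assumes the same libvirt setup as above, with a VM named "kids" and a snapshot named "clean" taken right after it was set up – both names are invented for the example.

```python
# A rough sketch of the purge-and-rebuild idea, with the same assumptions as
# above; the VM name "kids" and snapshot name "clean" are hypothetical.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("kids")              # the kids' VM

snap = dom.snapshotLookupByName("clean")     # snapshot taken right after setup
dom.revertToSnapshot(snap)                   # throw away everything since then
conn.close()
```

Run from a weekly scheduled task, something like this would hand the kids a fresh environment on a regular basis without touching the work or personal images.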

Wrapping up

I think it’s time to start rethinking the relationship between operating systems, hypervisors and virtual machines to better secure our PCs (rootkits, for one, would largely become a thing of the past thanks to the VM layer). The result could be more flexible, more reliable, more secure and better able to cope with our changing future than the way we build platforms today.

I think the world is ready for a change; now it’s only a matter of time before an OEM is willing to take the risk and try something new.

