As a proof-of-concept for USC computer science labs, I set up eight Windows 7 VMs on the same physical host shown in the Windows Server demonstration below, and recorded firing them up simultaneously and doing some light web browsing and the like on several of them. Performance is pretty solid; you could probably cram twice this many guests onto that host and still get performance as good as or better than the typical physical lab workstation.
Update: replaced the video with a somewhat more watchable version, with all eight guests tiled on one screen.
Aside from good performance and a single box to maintain, this setup offers some fairly compelling advantages over the traditional computer lab: the host also has a 2TB conventional drive in it, which is where a “gold” image of the Win7 guests is maintained. It only takes about 10 minutes total to reset all of the guests to the “gold” standard; and it would be just as easy to keep multiple gold images on the conventional drive for different classes – Linux images for one class, Windows images with Office for a basic class, Windows images with Visual Studio for another, Solaris for yet another… you get the idea.
Also, with a little tweaking, the time to “reset” the guests could be substantially faster still – using .qcow files instead of whole LVM volumes would allow you to use rsync with the --inplace argument and only have to write over the (relatively few) changed blocks, for example; or, in a more advanced layout, a separate FreeBSD machine with a large RAIDZ array and iSCSI exports could be used to store the images. There’s still plenty of room for improvement and innovation, but even this simple proof-of-concept (which I put together in roughly half an hour) looks pretty compelling to me.
I’ve been surprised and pleased at just how well Windows Server 2008 runs virtualized under Debian Squeeze. I first started running virtual Windows Servers purely for the disaster recovery and portability aspects, expecting to pay with a drop in performance… but what I found was that in a lot of cases, Windows 2008’s performance is actually somewhat better when running virtually. In particular, the ever-annoying reboot cycle gets cut to a tiny, tiny fraction of what it would be if running on “the bare metal.”
It’s also pretty nice never, ever having to play “hunt-the-driver” – the virtual “hardware” is all natively supported by Windows, so a virtual install “just works” the moment it’s done, no fuss no muss. But what about that performance?
Smokin’! Which exposes yet another reason to think about virtualization: being able to take advantage of Linux’s highly superior kernel RAID capabilities. The box shown above is running four Crucial C300 128GB solid state drives connected to SATA-3 6Gbps ports on an ASUS board; the Debian Squeeze host has them set up in a kernel RAID10. The resulting 250GB or so of storage is on a performance level that just has to be seen to be believed.
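For reference, assembling a four-SSD kernel RAID10 like that one is a short job with mdadm. The device names below are assumptions (your SSDs may enumerate differently), and the commands need root and real hardware, so treat this as an illustration of the shape of the setup rather than something to run blind:

```shell
# Sketch only: assumes the four SSDs appear as /dev/sdb..sde (hypothetical
# names) and that the mdadm package is installed on the Debian host.

# Build a 4-disk kernel RAID10 across the SSDs:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Persist the array definition so it assembles at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# The resulting ~250GB md device can then be handed to LVM and carved
# into per-guest volumes, or formatted directly:
pvcreate /dev/md0
vgcreate vg_guests /dev/md0
```

With four 128GB drives, RAID10 yields roughly half the raw capacity – the ~250GB figure mentioned above – in exchange for striped reads/writes plus mirroring.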
Note that while this IS a really “hot” machine, it’s still just one machine, running on commodity hardware – there’s no $50,000 SAN lurking in the background somewhere; that performance is ALL coming from a single machine with a price tag of WELL under $10K.
If you’ve never seen a machine equipped with a good Solid State Drive (SSD)… they’re pretty impressive. In this clip, I’m putting an Ubuntu 9.10 workstation with an Intel SSD through its paces.
Some of the reason that machine is so fast is Ubuntu – the newest release has some pretty significant disk-speed enhancements – but the vast majority of it is the solid state drive. (For those of you not familiar with Linux, it might help to think of GIMP as “Photoshop” – both because it does pretty much the same job, and because both are notorious for being EXTREMELY slow to start up.)
You do have to be careful when you’re buying an SSD, though – they’re not all created equal. In fact, some of them are absolutely atrocious, with significantly worse performance than conventional hard drives… so you need to know what you’re doing (or trust who you’re buying from) when you go that route. In particular, anything with a JMicron controller in it is better taken out back and shot than put in a production machine. You also need to be aware that you’re going to pay a lot more per gigabyte for solid state – an 80GB SSD costs about as much as two 1.5 terabyte conventional hard drives. So you probably don’t want SSDs (yet) for tasks involving large amounts of bulk storage.
But, as the video demonstrates… if what you need is performance, there’s nothing else in the same league; a few hundred bucks spent on a good SSD will give you more real-world performance benefit for most tasks than several thousand dollars spent otherwise.
It’s also worth noting that the current generation of SSDs are generally 2.5″ form factor, meaning they fit interchangeably in notebooks, netbooks, or desktop computers. You typically won’t see as much of the top-end performance on a notebook or netbook – their SATA controllers usually bottleneck at a third of the top-end speed of the best SSDs – but they’re just as much (if not more) worth the upgrade, because conventional laptop HDDs perform much more poorly than full-size HDDs, so the speed boost is even more of a blessing.