ZFS: You should use mirror vdevs, not RAIDZ.

Continuing this week’s “making an article so I don’t have to keep typing it” ZFS series… here’s why you should stop using RAIDZ, and start using mirror vdevs instead.

The basics of pool topology

A pool is a collection of vdevs. Vdevs can be any of the following (and more, but we’re keeping this relatively simple):

  • single disks (think RAID0)
  • redundant vdevs (aka mirrors – think RAID1)
  • parity vdevs (aka stripes – think RAID5/RAID6/RAID7, aka single, dual, and triple parity stripes)

The pool itself will distribute writes among the vdevs inside it on a relatively even basis. However, this is not a “stripe” like you see in RAID10 – it’s just distribution of writes. If you make a RAID10 out of 2 2TB drives and 2 1TB drives, the second TB on the bigger drives is wasted, and your after-redundancy storage is still only 2 TB. If you put the same drives in a zpool as two mirror vdevs, they will be a 2x2TB mirror and a 2x1TB mirror, and your after-redundancy storage will be 3TB. If you keep writing to the pool until you fill it, you may completely fill the two 1TB disks long before the two 2TB disks are full. Exactly how the writes are distributed isn’t guaranteed by the specification, only that they will be distributed.

What if you have twelve disks, and you configure them as two RAIDZ2 (dual parity stripe) vdevs of six disks each? Well, your pool will consist of two RAIDZ2 arrays, and it will distribute writes across them just like it did with the pool of mirrors. What if you made a ten disk RAIDZ2, and a two disk mirror? Again, they go in the pool, the pool distributes writes across them. In general, you should probably expect a pool’s performance to exhibit the worst characteristics of each vdev inside it. In practice, there’s no guarantee where reads will come from inside the pool – they’ll come from “whatever vdev they were written to”, and the pool gets to write to whichever vdevs it wants to for any given block(s).
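
In concrete terms, building the mixed-size pool described above is a one-liner (a minimal sketch – the pool name and device names here are hypothetical, so substitute your own):

zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd   # 2x2TB vdev plus 2x1TB vdev
zpool status tank                                                     # shows both mirror vdevs under "tank"

Adding another vdev later is just as easy: zpool add tank mirror /dev/sde /dev/sdf.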

Storage Efficiency

If it isn’t clear from the name, storage efficiency is the ratio of usable storage capacity (after redundancy or parity) to raw storage capacity (before redundancy or parity).

This is where a lot of people get themselves into trouble. “Well obviously I want the most usable TB possible out of the disks I have, right?” Probably not. That big number might look sexy, but it’s liable to get you into a lot of trouble later. We’ll cover that further in the next section; for now, let’s just look at the storage efficiency of each vdev type.

  • single disk vdev(s) – 100% storage efficiency. Until you lose any single disk, and it becomes 0% storage efficiency…
    (figure: eight single-disk vdevs)



  • RAIDZ1 vdev(s) – (n-1)/n, where n is the number of disks in each vdev. For example, a RAIDZ1 of eight disks has an SE of 7/8 = 87.5%.
    (figure: an eight disk raidz1 vdev)



  • RAIDZ2 vdev(s) – (n-2)/n. For example, a RAIDZ2 of eight disks has an SE of 6/8 = 75%.
    (figure: an eight disk raidz2 vdev)



  • RAIDZ3 vdev(s) – (n-3)/n. For example, a RAIDZ3 of eight disks has an SE of 5/8 = 62.5%.
    (figure: an eight disk raidz3 vdev)



  • mirror vdev(s) – 1/n, where n is the number of disks in each vdev. Eight disks set up as 4 2-disk mirror vdevs have an SE of 1/2 = 50%.
    (figure: a pool of four 2-disk mirror vdevs)



One final note: striped (RAIDZ) vdevs aren’t supposed to be “as big as you can possibly make them.” Experts are cagey about actually giving concrete recommendations about stripe width (the number of devices in a striped vdev), but they invariably recommend making them “not too wide.” If you consider yourself an expert, make your own expert decision about this. If you don’t consider yourself an expert, and you want more concrete general rule-of-thumb advice: no more than eight disks per vdev.
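
If you want to sanity-check the efficiency of a pool you’ve already built, the quickest way is to compare what the pool layer reports against what the filesystem layer will actually let you use (a hedged sketch with a hypothetical pool name – roughly speaking, zpool list counts parity space on RAIDZ vdevs, while zfs list does not):

zpool list tank          # SIZE still includes parity space on RAIDZ vdevs
zfs list tank            # USED + AVAIL is your real post-redundancy capacity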

Fault tolerance / degraded performance

Be careful here. Keep in mind that if any single vdev fails, the entire pool fails with it. There is no fault tolerance at the pool level, only at the individual vdev level! So if you create a pool with single disk vdevs, any failure will bring the whole pool down.

It may be tempting to go for that big storage number and use RAIDZ1… but it’s just not enough. If a disk fails, the performance of your pool will be drastically degraded while you’re replacing it. And you have no fault tolerance at all until the disk has been replaced and completely resilvered… which could take days or even weeks, depending on the performance of your disks, the load your actual use places on the disks, etc. And if one of your disks failed, and age was a factor… you’re going to be sweating bullets wondering if another will fail before your resilver completes. And then you’ll have to go through the whole thing again every time you replace a disk. This sucks. Don’t do it. Conventional RAID5 is strongly deprecated for exactly the same reasons. According to Dell, “Raid 5 for all business critical data on any drive type [is] no longer best practice.”

RAIDZ2 and RAIDZ3 try to address this nightmare scenario by expanding to dual and triple parity, respectively. This means that a RAIDZ2 vdev can survive two drive failures, and a RAIDZ3 vdev can survive three. Problem solved, right? Well, problem mitigated – but the degraded performance and resilver time is even worse than a RAIDZ1, because the parity calculations are considerably gnarlier. And it gets worse the wider your stripe (number of disks in the vdev).

Saving the best for last: mirror vdevs. When a disk fails in a mirror vdev, your pool is minimally impacted – nothing needs to be rebuilt from parity, you just have one less device to distribute reads from. When you replace and resilver a disk in a mirror vdev, your pool is again minimally impacted – you’re doing simple reads from the remaining member of the vdev, and simple writes to the new member of the vdev. At no point are you re-writing entire stripes, and all other vdevs in the pool are completely unaffected. Mirror vdev resilvering goes really quickly, with very little impact on the performance of the pool. Resilience to multiple failures is very strong, though it requires some calculation – your chance of surviving a disk failure is 1-(f/(n-f)), where f is the number of disks already failed, and n is the number of disks in the full pool. In an eight disk pool, this means 100% survival of the first disk failure, 85.7% survival of a second disk failure, and 66.7% survival of a third disk failure. This assumes two-disk vdevs, of course – three-disk mirrors are even more resilient.

But wait – why would I want to trade RAIDZ2’s guaranteed survival of any two disk failures for a mere 85.7% chance of surviving a second failure in a pool of mirrors? Because of the drastically shorter time to resilver, and the drastically lower load placed on the pool while doing so. The only disk more heavily loaded than usual during a mirror vdev resilver is the other disk in that vdev – which might sound bad, but remember that it’s no more heavily loaded than it would have been as a RAIDZ member. Each block resilvered on a RAIDZ vdev requires a block to be read from every surviving RAIDZ member; each block resilvered onto a mirror requires only one block to be read from the single surviving vdev member. For a six-disk RAIDZ1 versus a six-disk pool of mirrors, that’s five times the extra I/O demanded of the surviving disks.

Resilvering a mirror is much less stressful than resilvering a RAIDZ.
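
For reference, the replacement procedure itself looks the same regardless of topology – what changes is how long the resilver takes and how hard it hits the pool. A minimal sketch, with hypothetical pool and device names:

zpool status -x                         # identify the pool and the FAULTED/UNAVAIL device
zpool replace tank /dev/sdc /dev/sdi    # swap the failed disk for the new one
zpool status tank                       # watch "resilver in progress" and the estimated completion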

One last note on fault tolerance

No matter what your ZFS pool topology looks like, you still need regular backup.

Say it again with me: I must back up my pool!

ZFS is awesome. Combining checksumming and parity/redundancy is awesome. But there are still lots of potential ways for your data to die, and you still need to back up your pool. Period. PERIOD!
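
ZFS even hands you decent primitives for that backup: snapshot the pool and replicate it somewhere else. A minimal sketch (pool, snapshot, and host names are all made up here):

zfs snapshot -r tank@backup-2015-01-09
zfs send -R tank@backup-2015-01-09 | ssh backuphost zfs receive -F backup/tank
# subsequent runs only need to send the delta:
zfs send -R -i tank@backup-2015-01-09 tank@backup-2015-01-10 | ssh backuphost zfs receive -F backup/tank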

Normal performance

It’s easy to think that a gigantic RAIDZ vdev would outperform a pool of mirror vdevs, for the same reason it’s got greater storage efficiency. “Well when I read or write the data, it comes off of / goes onto more drives at once, so it’s got to be faster!” Sorry, it doesn’t work that way. You might see results that look kinda like that if you’re doing a single read or write of a lot of data at once while absolutely no other activity is going on, and the RAIDZ is completely unfragmented… but the moment you start throwing in other simultaneous reads or writes, fragmentation on the vdev, and so on, what you actually need is random-access IOPS – and that’s exactly where wide RAIDZ falls down. But don’t listen to me, listen to one of the core ZFS developers, Matthew Ahrens: “For best performance on random IOPS, use a small number of disks in each RAID-Z group. E.g, 3-wide RAIDZ1, 6-wide RAIDZ2, or 9-wide RAIDZ3 (all of which use ⅓ of total storage for parity, in the ideal case of using large blocks). This is because RAID-Z spreads each logical block across all the devices (similar to RAID-3, in contrast with RAID-4/5/6). For even better performance, consider using mirroring.”

Please read that last bit extra hard: For even better performance, consider using mirroring. He’s not kidding. Just as RAID10 has long been acknowledged as the best-performing conventional RAID topology, a pool of mirror vdevs is by far the best-performing ZFS topology.

Future expansion

This is one that should strike near and dear to your heart if you’re a SOHO admin or a hobbyist. One of the things about ZFS that everybody knows to complain about is that you can’t expand RAIDZ. Once you create it, it’s created, and you’re stuck with it.

Well, sorta.

Let’s say you had a server with 12 slots to put drives in, and you put six drives in it as a RAIDZ2. When you bought it, 1TB drives were a great bang for the buck, so that’s what you used. You’ve got 6TB raw / 4TB usable. Two years later, 2TB drives are cheap, and you’re feeling cramped. So you fill the remaining six bays in your server, and now you’ve added a 12TB raw / 8TB usable vdev, for a total pool size of 18TB/12TB. Two years after that, 4TB drives are out, and you’re feeling cramped again… but you’ve got no place left to put drives. Now what?

Well, you actually can upgrade that original RAIDZ2 of 1TB drives – what you have to do is fail one disk out of the vdev and remove it, then replace it with one of your 4TB drives. Wait for the resilvering to complete, then fail a second one, and replace it. Lather, rinse, repeat until you’ve replaced all six drives, and resilvered the vdev six separate times – and after the sixth and last resilvering finishes, you have a 24TB raw / 16TB usable vdev in place of the original 6TB/4TB one. Question is, how long did it take to do all that resilvering? Well, if that 6TB raw vdev was nearly full, it’s not unreasonable to expect each resilvering to take twelve to sixteen hours… even if you’re doing absolutely nothing else with the system. The more you’re trying to actually do in the meantime, the slower the resilvering goes. You might manage to get six resilvers done in six full days, replacing one disk per day. But it might take twice that long or worse, depending on how willing to hover over the system you are, and how heavily loaded it is in the meantime.

What if you’d used mirror vdevs? Well, to start with, your original six drives would have given you 6TB raw / 3TB usable. So you did give up a terabyte there. But maybe you didn’t do such a big upgrade the first time you expanded. Maybe since you only needed to put in two more disks to get more storage, you only bought two 2TB drives, and by the time you were feeling cramped again the 4TB disks were available – and you still had four bays free. Eventually, though, you crammed the box full, and now you’re in that same position of wanting to upgrade those old tiny 1TB disks. You do it the same way – you replace, resilver, replace, resilver – but this time, you see the new space after only two resilvers. And each resilvering happens tremendously faster – it’s not unreasonable to expect nearly-full 1TB mirror vdevs to resilver in three or four hours. So you can probably upgrade an entire vdev in a single day, even without having to hover over the machine too crazily. The performance on the machine is hardly impacted during the resilver. And you see the new capacity after every two disks replaced, not every six.
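
The replace-and-resilver dance itself is the same handful of commands either way; the sketch below (hypothetical device names again) is the mirror-vdev version, with autoexpand turned on so the new capacity shows up as soon as both members of a vdev have been swapped:

zpool set autoexpand=on tank
zpool replace tank /dev/sda /dev/sdi    # 1TB out, 4TB in; wait for the resilver to finish
zpool replace tank /dev/sdb /dev/sdj    # the other half of the same mirror vdev
zpool list tank                         # capacity grows once both members are upgraded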

TL;DR

Too many words, mister sysadmin. What’s all this boil down to?

  • don’t be greedy. 50% storage efficiency is plenty.
  • for a given number of disks, a pool of mirrors will significantly outperform a RAIDZ stripe.
  • a degraded pool of mirrors will drastically outperform a degraded RAIDZ stripe.
  • a degraded pool of mirrors will rebuild tremendously faster than a degraded RAIDZ stripe.
  • a pool of mirrors is easier to manage, maintain, live with, and upgrade than a RAIDZ stripe.
  • BACK. UP. YOUR POOL. REGULARLY. TAKE THIS SERIOUSLY.

TL;DR to the TL;DR – unless you are really freaking sure you know what you’re doing… use mirrors. (And if you are really, really sure what you’re doing, you’ll probably change your mind after a few years and wish you’d done it this way to begin with.)

Will ZFS and non-ECC RAM kill your data?

This comes up far too often, so rather than continuing to explain it over and over again, I’m going to try to do a really good job of it once and link to it here.

What’s ECC RAM? Is it a good idea?

ECC stands for Error Correcting Code. In a nutshell, ECC RAM is a special kind of server-grade memory that can detect and repair some of the most common kinds of in-memory corruption. For more detail on how ECC RAM does this, and which types of errors it can and cannot correct, the rabbit hole’s over here.

Now that we know what ECC RAM is, is it a good idea? Absolutely. In-memory errors, whether due to faults in the hardware or to the impact of cosmic radiation (yes, really) are a thing. They do happen. And if it happens in a particularly strategic place, you will lose data to it. Period. There’s no arguing this.

What’s ZFS? Is it a good idea?

ZFS is, among other things, a checksumming filesystem. This means that for every block committed to storage, a strong hash (somewhat misleadingly AKA checksum) for the contents of that block is also written. (The validation hash is written in the pointer to the block itself, which is also checksummed in the pointer leading to itself, and so on and so forth. It’s turtles all the way down. Rabbit hole begins over here for this one.)

Is this a good idea? Absolutely. Combine ZFS checksumming with redundancy or parity, and now you have a self-healing array. If a block is corrupt on disk, the next time it’s read, ZFS will see that it doesn’t match its checksum and will load a redundant copy (in the case of mirror vdevs or multiple-copy storage) or rebuild the block from parity (in the case of RAIDZ vdevs); assuming that copy of the block matches its checksum, ZFS will silently feed you the correct copy instead, and log a checksum error against the first block that didn’t pass.

ZFS also supports scrubs, which will become important in the next section. When you tell ZFS to scrub storage, it reads every block that it knows about – including redundant copies – and checks them versus their checksums. Any failing blocks are automatically overwritten with good blocks, assuming that a good (passing) copy exists, either redundant or as reconstructed from parity. Regular scrubs are a significant part of maintaining a ZFS storage pool against long term corruption.
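
Starting and monitoring a scrub is a one-liner each, and it’s worth putting on a schedule – something like the sketch below (pool name and cadence are just examples):

zpool scrub tank                    # kick off a scrub
zpool status tank                   # shows scrub progress and any blocks repaired
# e.g. in /etc/cron.d/zfs-scrub, scrub every Sunday at 2am:
# 0 2 * * 0   root   /sbin/zpool scrub tank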

Is ZFS and non-ECC worse than not-ZFS and non-ECC? What about the Scrub of Death?

OK, it’s pretty easy to demonstrate that a flipped bit in RAM means data corruption: if you write that flipped bit back out to disk, congrats, you just wrote bad data. There’s no arguing that. The real issue here isn’t whether ECC is good to have, it’s whether non-ECC is particularly problematic with ZFS. The scenario usually thrown out is the much-dreaded Scrub Of Death.

TL;DR version of the scenario: ZFS is on a system with non-ECC RAM that has a stuck bit, its user initiates a scrub, and as a result of in-memory corruption good blocks fail checksum tests and are overwritten with corrupt data, thus instantly murdering an entire pool. As far as I can tell, this idea originates with a very prolific user on the FreeNAS forums named Cyberjock, and he lays it out in this thread here. It’s a scary idea – what if the very thing that’s supposed to keep your system safe kills it? A scrub gone mad! Nooooooo!

The problem is, the scenario as written doesn’t actually make sense. For one thing, even if you have a particular address in RAM with a stuck bit, you aren’t going to have your entire filesystem run through that address. That’s not how memory management works, and if it were how memory management works, you wouldn’t even have managed to boot the system: it would have crashed and burned horribly when it failed to load the operating system in the first place. So no, you might corrupt a block here and there, but you’re not going to wring the entire filesystem through a shredder block by precious block.

But let’s not quibble over the exact numbers. Say you only corrupt one block in 5,000 this way. That would still be hellacious. So let’s examine the more reasonable idea of corrupting some data due to bad RAM during a scrub. And let’s assume that we have RAM that not only isn’t working 100% properly, but is actively goddamn evil and trying its naive but enthusiastic best to specifically kill your data during a scrub:

First, you read a block. This block is good. It is perfectly good data written to a perfectly good disk with a perfectly matching checksum. But that block is read into evil RAM, and the evil RAM flips some bits. Perhaps those bits are in the data itself, or perhaps those bits are in the checksum. Either way, your perfectly good block now does not appear to match its checksum, and since we’re scrubbing, ZFS will attempt to actually repair the “bad” block on disk. Uh-oh! What now?

Next, you read a copy of the same block – this copy might be a redundant copy, or it might be reconstructed from parity, depending on your topology. The redundant copy is easy to visualize – you literally stored another copy of the block on another disk. Now, if your evil RAM leaves this block alone, ZFS will see that the second copy matches its checksum, and so it will overwrite the first block with the same data it had originally – no data was lost here, just a few wasted disk cycles. OK. But what if your evil RAM flips a bit in the second copy? Since it doesn’t match the checksum either, ZFS doesn’t overwrite anything. It logs an unrecoverable data error for that block, and leaves both copies untouched on disk. No data has been corrupted. A later scrub will attempt to read all copies of that block and validate them just as though the error had never happened, and if this time either copy passes, the error will be cleared and the block will be marked valid again (with any copies that don’t pass validation being overwritten from the one that did).

Well, huh. That doesn’t sound so bad. So what does your evil RAM need to do in order to actually overwrite your good data with corrupt data during a scrub? Well, first it needs to flip some bits during the initial read of every block that it wants to corrupt. Then, on the second read of a copy of the block from parity or redundancy, it needs to not only flip bits, it needs to flip them in such a way that you get a hash collision. In other words, random bit-flipping won’t do – you need some bit flipping in the data (with or without some more bit-flipping in the checksum) that adds up to the corrupt data correctly hashing to the value in the checksum. ZFS uses 256-bit checksums (fletcher4 by default, with SHA256 available as an option), so a random bit-flip has an astronomically small chance – on the order of 1 in 2^256 for SHA256 – of giving you a corrupt block which still matches its checksum. To be fair, we’re using evil RAM here, so it’s probably going to do lots of experimenting, and it will try flipping bits in both the data and the checksum itself, and it will do so multiple times for any single block. However, that’s multiple 1 in 2^256 (aka roughly 1 in 10^77) chances, which still makes it vanishingly unlikely to actually happen… and if your RAM is that damn evil, it’s going to kill your data whether you’re using ZFS or not.

But what if I’m not scrubbing?

Well, if you aren’t scrubbing, then your evil RAM will have to wait for you to actually write to the blocks in question before it can corrupt them. Fortunately for it, though, you write to storage pretty much all day long… including to the metadata that organizes the whole kit and kaboodle. First time you update the directory that your files are contained in, BAM! It’s gotcha! If you stop and think about it, in this evil RAM scenario ZFS is incredibly helpful, because your RAM now needs to not only be evil but be bright enough to consistently pull off collision attacks. So if you’re running non-ECC RAM that turns out to be appallingly, Lovecraftianishly evil, ZFS will mitigate the damage, not amplify it.

If you are using ZFS and you aren’t scrubbing, by the way, you’re setting yourself up for long term failure. If you have on-disk corruption, a scrub can fix it only as long as you really do have a redundant or parity copy of the corrupted block which is good. Once you corrupt all copies of a given block, it’s too late to repair it – it’s gone. Don’t be afraid of scrubbing. (Well, maybe be a little wary of the performance impact of scrubbing during high demand times. But don’t be worried about scrubbing killing your data.)

I’ve constructed a doomsday scenario featuring RAM evil enough to kill my data after all! Mwahahaha!

OK. But would using any other filesystem that isn’t ZFS have protected that data? ‘Cause remember, nobody’s arguing that you can lose data to evil RAM – the argument is about whether evil RAM is more dangerous with ZFS than it would be without it.

I really, really want to use the Scrub Of Death in a movie or TV show. How can I make it happen?

What you need here isn’t evil RAM, but an evil disk controller. Have it flip one bit per block read or written from disk B, but leave the data from disk A alone. Now scrub – every block on disk B will be overwritten with a copy from disk A, but the evil controller will flip bits on write, so now, all of disk B is written with garbage blocks. Now start flipping bits on write to disk A, and it will be an unrecoverable wreck pretty quickly, since there’s no parity or redundancy left for any block. Your choice here is whether to ignore the metadata for as long as possible, giving you the chance to overwrite as many actual data blocks as you can before the jig is up as they are written to by the system, or whether to pounce straight on the metadata and render the entire vdev unusable in seconds – but leave the actual data blocks intact for possible forensic recovery.

Alternately you could just skip straight to step B and start flipping bits as data is written on any or all individual devices, and you’ll produce real data loss quickly enough. But you specifically wanted a scrub of death, not just bad hardware, right?

I don’t care about your logic! I wish to appeal to authority!

OK. “Authority” in this case doesn’t get much better than Matthew Ahrens, one of the cofounders of ZFS at Sun Microsystems and current ZFS developer at Delphix. In the comments to one of my filesystem articles on Ars Technica, Matthew said “There’s nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem.”

Hope that helps. =)

Avahi killed my server :'(

Avahi is the Linux equivalent of Apple’s “Bonjour” zeroconf network service. It installs by default with the ubuntu-desktop meta-package, which I generally use to get, you guessed it, a full desktop on virtualization host servers. This never caused me any issues until today.

Today, though – on a server with dual network interfaces, both used as bridge ports on its br0 adapter – Avahi apparently decided “screw the configuration you specified in /etc/network/interfaces, I’m going to give your production virt host bridge an autoconf address. Because I want to be helpful.”

When it did so, the host dropped off the network, I got alarms on my monitoring service, and I couldn’t so much as arp the host, much less log into it. So I drove down to the affected office and did an ifconfig br0, which showed me the following damning bit of evidence:

me@box:~$ ifconfig br0
br0       Link encap:Ethernet  HWaddr 00:0a:e4:ae:7e:4c
         inet6 addr: fe80::20a:e4ff:feae:7e4c/64 Scope:Link
         UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
         RX packets:11 errors:0 dropped:0 overruns:0 frame:0
         TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:3927 (3.8 KB)  TX bytes:6970 (6.8 KB)

br0:avahi Link encap:Ethernet  HWaddr 00:0a:e4:ae:7e:4c
         inet addr:169.254.6.229  Bcast:169.254.255.255  Mask:255.255.0.0
         UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

Oh, Avahi, you son-of-a-bitch. Was there anything wrong with the actual NIC? Certainly didn’t look like it – had link lights on the NIC and on the switch, and sure enough, ifdown br0 ; ifup br0 brought it right back online again.

Can we confirm that avahi really was the culprit?

/var/log/syslog:Jan  9 09:10:58 virt0 avahi-daemon[1357]: Withdrawing address record for [redacted IP] on br0.
/var/log/syslog:Jan  9 09:10:58 virt0 avahi-daemon[1357]: Leaving mDNS multicast group on interface br0.IPv4 with address [redacted IP].
/var/log/syslog:Jan  9 09:10:58 virt0 avahi-daemon[1357]: Interface br0.IPv4 no longer relevant for mDNS.
/var/log/syslog:Jan  9 09:10:59 virt0 avahi-autoipd(br0)[12460]: Found user 'avahi-autoipd' (UID 111) and group 'avahi-autoipd' (GID 121).
/var/log/syslog:Jan  9 09:10:59 virt0 avahi-autoipd(br0)[12460]: Successfully called chroot().
/var/log/syslog:Jan  9 09:10:59 virt0 avahi-autoipd(br0)[12460]: Successfully dropped root privileges.
/var/log/syslog:Jan  9 09:10:59 virt0 avahi-autoipd(br0)[12460]: Starting with address 169.254.6.229
/var/log/syslog:Jan  9 09:11:03 virt0 avahi-autoipd(br0)[12460]: Callout BIND, address 169.254.6.229 on interface br0
/var/log/syslog:Jan  9 09:11:03 virt0 avahi-daemon[1357]: Joining mDNS multicast group on interface br0.IPv4 with address 169.254.6.229.
/var/log/syslog:Jan  9 09:11:03 virt0 avahi-daemon[1357]: New relevant interface br0.IPv4 for mDNS.
/var/log/syslog:Jan  9 09:11:03 virt0 avahi-daemon[1357]: Registering new address record for 169.254.6.229 on br0.IPv4.
/var/log/syslog:Jan  9 09:11:07 virt0 avahi-autoipd(br0)[12460]: Successfully claimed IP address 169.254.6.229

I know I said this already, but – oh, avahi, you worthless son of a bitch!

Next step was to kill it and disable it.

me@box:~$ sudo stop avahi-daemon
me@box:~$ echo manual | sudo tee /etc/init/avahi-daemon.override

Grumble grumble grumble. Now I’m just wondering why I’ve never had this problem before… I suspect it’s something to do with having dual NICs on the bridge, and one of them not being plugged in (I only added them both so it wouldn’t matter which one actually got plugged in if the box ever got moved somewhere).
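
An alternative, if you’d rather not kill the daemon outright (hedged – I haven’t re-tested this on that particular box): go after the specific pieces instead. avahi-autoipd is the bit that hands out 169.254.x.x addresses, and avahi-daemon.conf can be told to leave the bridge alone entirely.

sudo apt-get remove --purge avahi-autoipd

# and/or, in /etc/avahi/avahi-daemon.conf:
[server]
deny-interfaces=br0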

Apache 2.4 / Ubuntu Trusty problems

Found out the hard way today that there’ve been SIGNIFICANT changes in configuration syntax and requirements since Apache 2.2, when I tried to set up a VERY simple couple of vhosts on Apache 2.4.7 on a brand new Ubuntu Trusty Tahr install.

First – the a2ensite/a2dissite scripts refuse to work unless your vhost config files end in .conf. BE WARNED. Example:

you@trusty:~$ ls /etc/apache2/sites-available
000-default.conf
default-ssl.conf
testsite.tld
you@trusty:~$ sudo a2ensite testsite.tld
ERROR: Site testsite.tld does not exist!

The solution is a little annoying; you MUST end the filename of your vhost configs in .conf – after that, a2ensite and a2dissite work as you’d expect.

you@trusty:~$ sudo mv /etc/apache2/sites-available/testsite.tld /etc/apache2/sites-available/testsite.tld.conf
you@trusty:~$ sudo a2ensite testsite.tld
Enabling site testsite.tld
To activate the new configuration, you need to run:
  service apache2 reload

After that, I had a more serious problem. The “site” I was trying to enable was nothing other than a simple exposure of a directory (a local ubuntu mirror I had set up) – no php, no cgi, nothing fancy at all. Here was my vhost config file:

<VirtualHost *:80>
        ServerName us.archive.ubuntu.com
        ServerAlias us.archive.ubuntu.local 
        Options Includes FollowSymLinks MultiViews Indexes
        DocumentRoot /data/apt-mirror/mirror/us.archive.ubuntu.com
	<Directory /data/apt-mirror/mirror/us.archive.ubuntu.com/>
	        Options Indexes FollowSymLinks
	        AllowOverride None
	</Directory>
</VirtualHost>

Can’t get much simpler, right? This would have worked fine in any previous version of Apache, but not in Apache 2.4.7, the version supplied with Trusty Tahr 14.04 LTS.

Every attempt to browse the directory gave me a 403 Forbidden error, which confused me to no end, since the directories were chmod 755 and chgrp www-data. Checking Apache’s error log gave me pages on pages of lines like this:

[Mon Jun 02 10:45:19.948537 2014] [authz_core:error] [pid 27287:tid 140152894646016] [client 127.0.0.1:40921] AH01630: client denied by server configuration: /data/apt-mirror/mirror/us.archive.ubuntu.com/ubuntu/

What I eventually discovered was that as of 2.4, Apache not only requires explicit authorization configuration for every directory to be browsed, the syntax has changed as well. The old “Order allow,deny” and “Allow from all” directives won’t cut it – you now need “Require all granted”. Here is my final working vhost .conf file:

<VirtualHost *:80>
        ServerName us.archive.ubuntu.com
        ServerAlias us.archive.ubuntu.local 
        Options Includes FollowSymLinks MultiViews Indexes
        DocumentRoot /data/apt-mirror/mirror/us.archive.ubuntu.com
	<Directory /data/apt-mirror/mirror/us.archive.ubuntu.com/>
	        Options Indexes FollowSymLinks
	        AllowOverride None
                Require all granted
	</Directory>
</VirtualHost>

Hope this helps someone else – this was a frustrating start to the morning for me.

Heartbleed SSL vulnerability

Last night (2014 Apr 7) a massive security vulnerability was publicly disclosed in OpenSSL, the library that encrypts most of the world’s sensitive traffic. The bug in question is approximately two years old – OpenSSL releases from before 2012 are not vulnerable – and affects the TLS “heartbeat” extension, which is why the vulnerability has been nicknamed Heartbleed.

The bug allows a malicious remote user to scan arbitrary 64K chunks of the affected server’s memory. This can disclose any and ALL information in that affected server’s memory, including SSL private keys, usernames and passwords of ANY running service accepting logins, and more. Nobody knows if the vulnerability was known or exploited in the wild prior to its public disclosure last night.

If you are an end user:

You will need to change any passwords you use online unless you are absolutely sure that the servers you used them on were not vulnerable. If you are not a HIGHLY experienced admin or developer, you absolutely should NOT assume that sites and servers you use were not vulnerable. They almost certainly were. If you are a highly experienced ops or dev person… you still absolutely should not assume that, but hey, it’s your rope, do what you want with it.

Note that most sites and servers are not yet patched, meaning that changing your password right now will only expose that password as well. If you have not received any notification directly from the site or server in question, you may try a scanner like the one at http://filippo.io/Heartbleed/ to see if your site/server has been patched. Note that this script is not bulletproof, and in fact it’s less than 24 hours old as of the time of this writing, up on a free site, and under massive load.

The most important thing for end users to understand: You must not, must not, MUST NOT reuse passwords between sites. If you have been using one or two passwords for every site and service you access – your email, forums you post on, Facebook, Twitter, chat, YouTube, whatever – you are now compromised everywhere and will continue to be compromised everywhere until ALL sites are patched. Further, this will by no means be the last time a site is compromised. Criminals can and absolutely DO test compromised credentials from one site on other sites and reuse them elsewhere when they work! You absolutely MUST use different passwords – and I don’t just mean tacking a “2” on the end instead of a “1”, or similar cheats – on different sites if you care at all about your online presence, the money and accounts attached to your online presence, etc.

If you are a sysadmin, ops person, dev, etc:

Any systems, sites, services, or code that you are responsible for needs to be checked for links against OpenSSL versions 1.0.1 through 1.0.1f. Note, that’s the OpenSSL vendor versioning system – your individual distribution, if you are using repo versions like a sane person, may have different numbering schemes. (For example, Ubuntu is vulnerable from 1.0.1-0 through 1.0.1-4ubuntu5.11.)

Examples of affected services: HTTPS, IMAPS, POP3S, SMTPS, OpenVPN. Fabulously enough, for once OpenSSH is not affected, even in versions linking to the affected OpenSSL library, since OpenSSH does not use TLS and therefore never touches the heartbeat function. If you are a developer and are concerned about code that you wrote, the key here is whether your code exposed access to the heartbeat function of OpenSSL. If it was possible for an attacker to access the TLS heartbeat functionality, your code was vulnerable. If it was absolutely not possible to check an SSL heartbeat through your application, then your application was not vulnerable even if it linked to the vulnerable OpenSSL library.

On the other hand, please realize that just because your service passed an automated scanner like the one linked above doesn’t mean it was safe. Most of those scanners do not test services that use STARTTLS instead of being TLS-encrypted from the get-go, but services using STARTTLS are absolutely still affected. Similarly, none of the scanners I’ve seen will test UDP services – but UDP services are affected too. In short, if you as a developer don’t absolutely know that you weren’t exposing access to the TLS heartbeat function, then you should assume that your OpenSSL-using application or service was/is exploitable until your libraries are brought up to date.

You need to update all copies of the OpenSSL library to 1.0.1g or later (or your distribution’s equivalent), both dynamically AND statically linked (PS: stop using static linking, for exactly this kind of reason!), and restart any affected services. You should also, unfortunately, consider any and all credentials, passwords, certificates, keys, etc. that were used on any vulnerable servers, whether directly related to SSL or not, as compromised, and regenerate them. The Heartbleed bug allowed scanning ALL memory on any affected server and thus could be used by a sufficiently skilled attacker to extract ANY sensitive data held in server RAM. As a trivial example, as of today (2014-Apr-08) users at the Ars Technica forums are logging on as other users, using password credentials pulled from server RAM by the publicly disclosed exploit test scripts.
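
On Ubuntu specifically, the check-and-patch routine looks roughly like this (a hedged sketch – package names per 12.04/14.04):

openssl version -a                            # 1.0.1 through 1.0.1f are affected
dpkg -l openssl libssl1.0.0 | grep ^ii        # check the distro package versions
sudo apt-get update && sudo apt-get install --only-upgrade openssl libssl1.0.0
sudo lsof -n | grep -i libssl | grep DEL      # services still mapping the old, deleted library
# ...then restart each of those services (or just reboot the box).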

Completely eradicating all potential vulnerability is a STAGGERING amount of work and will involve a lot of user disruption. When estimating your paranoia level, please do remember that the bug itself has been in the wild since 2012 – the public disclosure was not until 2014-Apr-07, but we have no way of knowing how long private, possibly criminal entities have been aware of and/or exploiting the bug in the wild.

Restoring Legacy Boot (Linux Boot) on a Chromebook

(screenshot: press ctrl-alt-forward to jump to TTY2 and a standard login prompt)

I let the battery die completely on my Acer C720 Chromebook, and discovered that unfortunately if you do that, your Chromebook will no longer Legacy boot when you press Ctrl-L – it just beeps at you despondently, with no error message to indicate what’s going wrong.

Sadly, I found message after message on forums indicating that people encountering this issue just reinstalled ChrUbuntu from scratch. THIS IS NOT NECESSARY!

If you just get several beeps when you press Ctrl-L to boot into Linux on your Chromebook, don’t fret – press Ctrl-D to boot into ChromeOS, but DON’T LOG IN. Instead, change terminals to get a shell. The function keys at the top of the keyboard (the row with “Esc” at the far left) map to the F-keys on a normal keyboard, and ctrl-alt-[Fkey] works here just as it would in Linux. The [forward arrow] key two keys to the right of Esc maps to F2, so pressing ctrl-alt-[forward arrow on top row] will bring you to tty2, which presents you with a standard Linux login prompt.

Log in as chronos (no password, unless you’d previously set one). Now, one command will get you right:

sudo crossystem dev_boot_usb=1 dev_boot_legacy=1

That’s it. You’re now ready to reboot and SUCCESSFULLY Ctrl-L into your existing Linux install.

Slow performance with dovecot – mysql – roundcube

This drove me crazy forever, and Google wasn’t too helpful. If you’re running dovecot with MySQL authentication and the stock config, your logins will be exceedingly slow. This isn’t much of a problem with traditional mail clients – just an annoying hiccup you probably won’t even notice, except for SASL authentication when sending mail – but it makes Roundcube webmail PAINFULLY slow in a VERY obvious way.

The issue is due to PAM authentication being enabled by default in Dovecot, and on Ubuntu at least, it’s done in a really hidden little out-of-the-way file with no easy way to forcibly override it elsewhere that I’m aware of.

Again on Ubuntu, you’ll find the file in question at /etc/dovecot/conf.d/auth-system.conf.ext, and the relevant block should be commented out COMPLETELY, like this:

# PAM authentication. Preferred nowadays by most systems.
# PAM is typically used with either userdb passwd or userdb static.
# REMEMBER: You'll need /etc/pam.d/dovecot file created for PAM
# authentication to actually work. <doc/wiki/PasswordDatabase.PAM.txt>
#passdb {
  # driver = pam
  # [session=yes] [setcred=yes] [failure_show_msg=yes] [max_requests=]
  # [cache_key=] []
  #args = dovecot
#}

Once you’ve done this (and remember, we’re assuming you’re using SQL auth here, and NOT actually using PAM!) you’ll authenticate immediately instead of failing PAM and then falling back to SQL auth on every single request, and things will speed up IMMENSELY. This turns Roundcube from “painfully slow” to “blazing fast”.
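
For completeness, the SQL side of the auth config – enabled by the !include auth-sql.conf.ext line in /etc/dovecot/conf.d/10-auth.conf – looks something like the hedged example below; the uid/gid/home values are placeholders for whatever your virtual mail setup actually uses:

# /etc/dovecot/conf.d/auth-sql.conf.ext
passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf.ext
}
userdb {
  driver = static
  args = uid=vmail gid=vmail home=/var/mail/%d/%n
}

(You can also simply comment out the !include auth-system.conf.ext line in 10-auth.conf instead of gutting auth-system.conf.ext itself – same net effect.)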

Benchmarking Windows Guests on KVM: I/O performance

I’ve been using KVM in production to host Windows Server guests for close to 4 years now.  I’ve always been thoroughly impressed at what a great job KVM did with accelerating disk I/O for the guests – making Windows guests perform markedly faster virtualized than they used to on the bare metal.  So when I got really, REALLY bad performance recently on a few Windows Server Standard 2012 guests – bad enough to make the entire guest seem “locked up tight” for minutes at a time – I did some delving to figure out what was going on.

Linux and KVM offer a wealth of options for handling caching and underlying subsystems of host storage… an almost embarrassing wealth, which nobody seems to have really benchmarked. I have seen quite a few people tossing out offhanded comments about this cache mode or that cache mode being “safer” or “faster” or “better”, but no hard numbers. So to both fix my own immediate problem and do some much-needed documentation, I spent more hours this week than I really want to think about doing some real, no-kidding, here-are-the-numbers benchmarking.

Methodology

Test system: AMD FX-8320 8-core CPU, 32GB DDR3 SDRAM, 1x WD 2TB Black (WD2002FAEX) drive, 1x Samsung 840 PRO Series 512GB SSD, Ubuntu 12.04.2-LTS fully updated, Windows Server 2008 R2 guest OS, HD Tune Pro 5.50 Windows disk benchmark suite.

The host and guest OS are both installed on the WD 2TB Black conventional disk; the Samsung 840 PRO Series SSD is attached to the guest in various configurations for benchmarking.  The guest OS is given approximately 30 seconds to “settle” after each boot and login before running any benchmarks.  No other operations are occurring on either guest or host while benchmarks are run.

Exploratory Testing

Before diving straight into “which combination works the fastest”, I really wanted to explore the individual characteristics of all the various overlapping options available.

The first thing I wanted to find out: how much of a penalty, if any, do you pay for operating a raw disk virtualized under KVM, as opposed to under Windows on the bare metal?  And how much of a boost do the VirtIO guest drivers offer over basic IDE drivers?

Baseline Performance

As you can see, we do pay a penalty – particularly without the VirtIO drivers, which offer a substantial increase in performance over the default IDE, even without caching.  We can also see that LVM logical volumes perform effectively identically to actual raw disks.  Nice to know!

Now that we know that “raw is raw”, and “VirtIO is significantly better than IDE”, the next burning question is: how much of a performance hit do we take if we use .qcow2 files on an actual filesystem, instead of feeding KVM a raw block device?  Actually, let’s pause that question – before that, why would we want to use a .qcow2 file instead of a raw disk or LV?  Two big answers: rsync, and state saves.  Unless you compile rsync from source with an experimental patch, you can’t use it to synchronize copies of a guest that are stored on a block device – whereas you can, with a qcow2 or raw file on a filesystem.  And you can’t save state (basically, like hibernation – only much faster, and handled by the host instead of the guest) with raw storage either – you need qcow2 for that.  As a really, really nice aside, if you’re using qcow2 and your host runs out of space… your guest pauses instead of crashing, and as soon as you’ve made more space available on your host, you can resume the guest as though nothing ever happened.  Which is nice.

So, if we can afford to, we really would like to have qcow2.  Can we afford to?

(chart: VirtIO-nocache)

Yes… yes we can.  There’s nothing too exciting to see here – basically, the takeaway is “there is little to no performance penalty for using qcow2 files on a filesystem instead of raw disks.”  So, performance is determined by cache settings and by the presence of VirtIO drivers in our guest… not so much by whether we’re using raw disks, or LV, or ZVOL, or qcow2 files.

One caveat: I tested using fully-allocated qcow2 files.  I did a little bit of casual testing with sparsely allocated (aka “thin provisioned”) qcow2 files, and basically, they follow the same performance curves, but with roughly half the performance on some writes.  This isn’t  that big a deal, in my opinion – you only have to do a “slow” write to any given block once.  After that, it’s a rewrite, not a new write, and you’re back to the same performance level you’d have had if you’d fully allocated your qcow2 file to start with.  So, basically, it’s a self-correcting problem, with a tolerable temporary performance penalty.  I’m more than willing to deal with that in return for not having to potentially synchronize gigabytes of slack space when I do backups and migrations.
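
For reference, creating the fully-allocated qcow2 files used in these tests is a qemu-img one-liner (hedged note: the preallocation modes available depend on your qemu-img version – older builds only offer metadata preallocation, newer ones add falloc and full):

qemu-img create -f qcow2 -o preallocation=metadata /data/vm/guest.qcow2 40G
qemu-img info /data/vm/guest.qcow2     # confirm format, virtual size, and allocation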

So… since performance is determined largely by cache settings, let’s take a look at how those play out in the guest:

(chart: storage cache methods)

In plain English, “writethrough” – the default – is read caching with no write cache, and “writeback” is both read and write caching.  “None” is a little deceptive – it’s not just “no caching”, it actually requires Direct I/O access to the storage medium, which not all filesystems provide.

The first thing we see here is that the default cache method, writethrough, is very very fast at reading, but painfully slow on writes – cripplingly so.  On very small writes, writethrough is capable of less than 0.2 MB/sec in some cases!  This is on a brand-new 840 Pro Series SSD... and it’s going to get even worse than this later, when we look at qcow2 storage.  Avoid, avoid, avoid.

KVM caching really is pretty phenomenal when it hits, though.  Take a look at the writeback cache method – it jumps well above bare metal performance for large reads and writes… and it’s not a small jump, either; 1MB random reads of well over 1GB / sec are completely normal.  It’s potentially a little risky, though – you could potentially lose guest data if you have a power failure or host system crash during a write.  This shouldn’t be an issue on a stable host with a UPS and apcupsd.

Finally, there’s cache=none.  It works.  It doesn’t impress.  It isn’t risky in terms of data safety.  And it generally performs somewhat better with extremely, extremely small random I/O… but without getting the truly mind-boggling wins that caching can offer.  In my personal opinion, cache=none is mostly useful when you’re limited to IDE drivers in your guest.  Also worth noting: “cache=none” isn’t available on ZFS or FUSE filesystems.
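
The cache mode itself is set per-disk, either on the qemu command line or in the libvirt domain XML. A rough sketch of both (paths and device names are examples):

# raw qemu:
qemu-system-x86_64 ... -drive file=/data/vm/guest.qcow2,if=virtio,cache=writeback

# libvirt domain XML (virsh edit guestname):
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/data/vm/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>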

Moving on, we get to the stuff I really care about when I started this project – ZFS!  Storing guests on ZFS is really exciting, because it offers you the ability to take block-level host-managed snapshots of your guests; set and modify quotas; set and configure compression; do asynchronous replication; do block-level deduplication – the list goes on and on and on.  This is a really big deal.  But… how’s the performance?

(chart: ZFS storage methods)

The performance is very, very solid… as long as you don’t use writethrough.  If you use writethrough cache and ZFS, you’re going to have a bad time.  Also worth noting: Direct I/O is not available on the ZFS filesystem – although it is available with ZFS zvols! – so there are no results here for “cache=none” and ZFS qcow2.

The big, big, big thing you need to take away from this is that abysmal write performance line for ZFS/qcow2/writethrough – well under 2MB/sec for any and all writes.  If you set a server up this way, it will look blazing quick and you’ll love it… until the first time you or a user tries to write a few hundred MB of data to it across the network, at which point the whole thing will lock up tighter than a drum for half an hour plus.  Which will make you, and your users, very unhappy.

What else can we learn here?  Well, although we’ve got the option of using a zvol – which is basically ZFS’s answer to an LVM LV – we really would like to avoid it, for the same reasons we talked about when we compared qcow to raw.  So, let’s look at the performance of that raw zvol – is it worth the hassle?  In the end, no.

But here’s the big surprise – if we set up a ZVOL, then format it with ext4 and put a .qcow2 on top of that… it performs as well, and in some cases better than, the raw zvol itself did!  As odd as it sounds, this leaves qcow2-on-ext4-on-zvol as one of our best performing overall storage methods, with the most convenient options for management.  It sounds like it’d be a horrible Rube Goldberg, but it performs like best-in-breed.  Who’d’a thunk it?
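
Setting up that particular Rube Goldberg is less work than it sounds – a rough sketch, with the pool name, sizes, and paths as placeholders:

zfs create -V 200G tank/vmvol                  # carve out a zvol
mkfs.ext4 /dev/zvol/tank/vmvol                 # format it ext4
mkdir -p /data/vm && mount /dev/zvol/tank/vmvol /data/vm
qemu-img create -f qcow2 /data/vm/guest.qcow2 100G   # qcow2 on top of the lot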

There’s one more scenario worth exploring – so far, since discovering how much faster it was, we’ve almost exclusively looked at VirtIO performance.  But you can’t always get VirtIO – for example, I have a couple of particularly crotchety old P2V’ed Small Business Server images that absolutely refuse to boot under VirtIO without blue-screening.  It happens.  What are your best options if you’re stuck with IDE?

(chart: IDE performance)

Without VirtIO drivers, caching does very, very little for you at best, and kills your performance at worst.  (Hello again, horrible writethrough write performance.)  So you really want to use “cache=none” if you’re stuck on IDE.  If you can’t do that for some reason (like using ZFS as a filesystem, not a zvol), writeback will perform quite acceptably… but it will also expose you to whatever added data integrity risk that the write caching presents, without giving you any performance benefits in return.  Caveat emptor.

Final Tests / Top Performers

At this point, we’ve pretty thoroughly explored how individual options affect performance, and the general ways in which they interact.  So it’s time to cut to the chase: what are our top performers?

First, let’s look at our top read performers.  My method for determining “best” read performance was to take the 4KB random read and the sequential read, then multiply the 4KB random by a factor which, when applied to the bare metal Windows performance, would leave a roughly identical value to the sequential read.  Once you’ve done this, taking the average of the two gives you a mean weighted value that makes 4KB read performance roughly as “important” as sequential read performance.  Sorting the data by these values gives us…

(chart: top performers, weighted read)

Woah, hey, what’s that joker in the deck?  RAIDZ1…?

My primary workstation is also an FX-8320 with 32GB of DDR3, but instead of an SSD, it has a 4 drive RAIDZ1 array of Western Digital 1TB Black (WD1001FAEX) drives.  I thought it would be interesting to see how the RAIDZ1 on spinning rust compared to the 840 Pro SSD… and was pretty surprised to see it completely stomping it, across the board.  At least, that’s how it looks in these benchmarks.  The benchmarks don’t tell the whole story, though, which we’ll cover in more detail later.  For now, we just want to notice that yes, a relatively small and inexpensive RAIDZ1 array does surprisingly well compared to a top-of-the-line SSD – and that makes for some very interesting and affordable options, if you need to combine large amounts of data with high performance access.

Joker aside, the winner here is pretty obvious – qcow2 on xfs, writeback.  Wait, xfs?  Yep, xfs.  I didn’t benchmark xfs as thoroughly as I did ext4 – never tried it layered on top of a zvol, in particular – but I did do an otherwise full set of xfs benchmarks.  In general, xfs slightly outperforms ext4 across an identically shaped curve – enough of a difference to notice on a graph, but not enough to write home about.  The difference is just enough to punt ext4/writeback out of the top 5 – and even though we aren’t actually testing write performance here, it’s worth noting how much better xfs/writeback writes than the two bottom-of-the-barrel “top performers” do.

I keep harping on this, but seriously, look closely at those two writethrough entries – it’s not as bad as writethrough-on-ZFS-qcow2, but it’s still way worse than any of the other contenders, with 4KB writes under 2MB/sec.  That’s single-raw-spinning-disk territory, at best.  Writethrough does give you great read performance, and it’s “safe” in terms of data integrity, but it’s dangerous as hell in terms of suddenly underperforming really badly under heavy load.  That’s not just “I see a valley on the graph” levels of bad, it’s very potentially “hey IT guy, why did the server lock up?” levels of bad.  I do not recommend writethrough caching, “default” option or not.

How ’bout write performance?  I calculated “weighted write” just like “weighted read” – divide the bare metal sequential write speed by the bare metal 4KB write speed, then apply the resulting factor to all the 4KB random writes, and average them with the sequential writes.  Here are the top 5 weighted write performers:

(chart: top performers, weighted write)

The first thing to notice here is that while the top 5 slots have changed, the peak read numbers really haven’t.  All we’ve really done here is kick the writethrough entries to the curb – we haven’t paid any significant penalty in read performance to do so.  Realizing that, let’s not waste too much time talking about this one… instead, let’s cut straight to the “money graph” – our top performers in average mean weighted read and write performance.  The following are, plain and simple, the best performers for any general purpose (and most fairly specialized) use cases:

(chart: top performers, mean weighted read/write)

Interestingly, our “jokers” – zvol/ext4/qcow2/writeback and zfs/qcow2/writeback on my workstation’s relatively humble 4-drive RAIDZ1 – are still dominating the pack, at #1 and #2 respectively. This is because they read as well as any of the heavy lifters do, and are showing significantly better write performance – with caveats, which we’ll cover in the conclusions.

Jokers aside, xfs/qcow2/writeback is next, followed by zvol/ext4/qcow2/writeback.  You aren’t seeing any cache=none at all here – the gains some cache=none contenders make in very tiny writes just don’t offset the penalties paid in reads and in larger writes from foregoing the cache.  If you have a very heavily teeny-tiny-write-loaded workload – like a super-heavy-traffic database – cache=none may still perform better…  but you probably don’t want virtualization for a really heavy database workload in the first place, KVM or otherwise.  If you hammer your disks, rust or solid state, to within an inch of their lives… you’re really going to feel that raw performance penalty.

Conclusions

In the end, the recommendation is pretty clear – a ZFS zvol with ext4, qcow2 files, and writeback caching offers you the absolute best performance.  (Using xfs on a zvol would almost certainly perform as well, or even better, but I didn’t test that exact combination here.)  Best read performance, best write performance, qcow2 management features, ZFS snapshots / replication / data integrity / optional compression / optional deduplication / etc – there really aren’t any drawbacks here… other than the complexity of the setup itself. In the real world, I use simple .qcow2 on ZFS, no zvols required. The difference in performance between the two was measurable on graphs, but it’s not significant enough to make me want to actually maintain the added complexity.

If you can’t or won’t use ZFS for whatever reason (like licensing concerns), xfs is probably your next best bet – but if that scares you, just use ext4 – the difference won’t be enough to matter much in the long run.

There’s really no reason to mess around with raw disks or raw files – they add significant extra hassle, remove significant features, and don’t offer tangible performance benefits.

If you’re going to use writeback caching, you should be extra certain of power integrity – UPS on the server with apcupsd.  If you’re at all uncertain of your power integrity… take the read performance hit and go with nocache.  (On the other hand, if you’re using ZFS and taking rolling hourly snapshots… maybe it’s worth taking more risks for the extra performance.  Ultimately, you’re the admin, you get to call the shots.)

Caveats

It’s very important to note that these benchmarks do not tell the whole story about disk performance under KVM.  In particular, both the random and sequential reads used here bypass the cache considerably more heavily than most general purpose workloads would… minimizing the impact that the cache has.  And yes, it is significant.  See those > 1GB/sec peaks in the random read performance?  That kind of thing happens a lot more in a normal workload than it does in a random walk – particularly with ZFS storage, which uses the ARC (Adaptive Replacement Cache) rather than the simple FIFO cache used by other systems.  The ARC makes decisions about cache eviction based not only on the time since an object was last seen, but the frequency at which it’s seen, among other things – with the result that it kicks serious butt, especially after a good amount of time to warm up and learn the behavior patterns of the system.

So, please take these numbers with a grain of salt.  They’re quite accurate for what they are, but they’re synthetic benchmarks, not a real-world experience.  Windows on KVM is actually a much (much) better experience than the raw numbers here would have you believe – the importance of better-managed cache, and persistent cache, really can’t be over-emphasized.

The actual guest OS installation for these tests was on a single spinning disk, but it used ZFS/qcow2/writeback for the underlying storage.  I needed to reboot the guest after every single row of data – and in many cases, several times more, because I screwed something or another up.  In the end, I rebooted that Windows Server 2008 R2 guest upwards of 50 times.  And on a single spinning disk, shutdowns took about 4 seconds and boot times (from BIOS to desktop) took about 3 seconds.  You don’t get that kind of performance out of the bare metal, and you can’t see it in these graphs, either.  For most general-purpose workloads, these graphs are closer to a worst-case scenario than a direct model.

That sword cuts both ways, though – the “jokers in the deck”, both running on my RAIDZ1 of spinning rust, aren’t really quite as impressive as they look above.  I’m spitballing here, but I think a lot of the difference is that ZFS is willing to more aggressively cache writes with a RAIDZ array than it is with a single member, probably because it expects that more spindles == faster writes, and it only wants to keep so many writes in cache before it flushes them.  That’s a guess, and only a guess, but the reality is that after those jaw-dropping 1.9GB/sec 1MB random write runs, I could hear the spindles chattering for a significant chunk of time, getting all those writes committed to the rust.  That also means that if I’d had the patience for bigger write runs than 5GB, I’d have seen performance dropping significantly.  So you really shouldn’t think “hey, forget SSDs completely, spinning rust is fine for everybody!”  It’s not.  It is, however, a surprisingly good competitor on the cheap if you buy enough of it, and if your write runs come in bursts – even “bursts” ranging in the gigabytes.

 

force directory mode on Samba 3.x

I just spent bloody forever trying to force any folder created by a Windows user to automatically be 770 instead of 750 on a Samba share (3.6.2 on Ubuntu).

The issue at hand is that with force directory mode = 0770, if a user creates a folder DIRECTLY on the Samba share, all is well… but if they drag an already-existing folder FROM their Windows machine INTO the Samba share, it ends up 0750 and nobody else can see it.  (I don’t want to tell you how long it took me to figure out WHEN, exactly, the mode wasn’t getting set correctly, either.)

I chased my tail with masks, directory security mode, unix extensions, and more, before I finally, FINALLY found the fix:

edit /etc/login.defs and set the default umask to 002 instead of 022.  *poof*, problem solved.  Thank you, thank you, THANK you, random dude from ubuntuforums.org…
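
For reference, the share definition I was fighting with looked roughly like the hedged example below (path and share name are placeholders) – with the actual fix living in /etc/login.defs rather than smb.conf:

# /etc/samba/smb.conf
[shared]
   path = /data/shared
   writeable = yes
   create mask = 0660
   force create mode = 0660
   directory mask = 0770
   force directory mode = 0770

# /etc/login.defs
UMASK           002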

 

Getting SPICE working in Ubuntu 12.04.1-LTS

Well, I finally got Spice working with KVM in Ubuntu 12.04.1 LTS today. It… was not pleasant. But it can be done (and without any PPAs, with some ugly hackery).

THESE STEPS ARE NECESSARY AS OF OCTOBER 2012. THEY MAY NOT BE BY THE TIME YOU READ THIS. IF YOU’RE FROM THE FUTURE, DO NOT HACK YOUR APPARMOR PROFILE OR MOVE SYSTEM BINARIES AROUND UNTIL AFTER YOU HAVE CONFIRMED THAT SPICE DOESN’T WORK WITHOUT YOU DOING THESE THINGS.

First, there’s a problem in the AppArmor profile for KVM that prevents it getting read access to /proc/*/auxv, which is necessary for QXL/Spice stuff to work. So let’s fix that: nano /etc/apparmor.d/abstractions/libvirt-qemu, find the line @{PROC}/*/status r, and add the following line:

  @{PROC}/*/auxv r,

NOTE THE TRAILING COMMA – it’s not a bug, it’s a (necessary) feature! Now sudo /etc/init.d/apparmor restart. This fixes the apparmor problem.

Now, let’s add all the packages you’ll need (assuming you’re already successfully running KVM, and just want to add Spice support):

sudo apt-get update ; sudo apt-get install qemu-kvm-spice python-spice-client-gtk

Another bit of REALLY dirty hackery is necessary here – you’ll need to replace your /usr/bin/kvm with the spice-enabled binary you just installed, which nothing in your system knows how to find. Sigh.

sudo mv /usr/bin/kvm /usr/bin/kvm.dist ; sudo ln -s /usr/bin/qemu-system-x86_64 /usr/bin/kvm

Alright, now – assuming you’re running a Windows guest – you’ll need to boot into that guest, download the Windows spice-guest-tools from http://spice-space.org/download.html, and install them. THINGS GET REALLY, REALLY UGLY HERE: YOU NEED TO MAKE VERY SURE YOU HAVE RDP (or some other client side remote control tool) INSTALLED AND WORKING BEFORE GOING ANY FURTHER. Seriously, TEST. IT. NOW.

Okay, tested? You can RDP or otherwise get into your Windows guest WITHOUT use of virt-manager, or the VNC channel virt-manager provides? Good. Shut down your Windows guest, and change the Display to Spice (say yes to “install Spice channels”) and the Video to QXL. Reboot your guest. YOU WILL GET NO VIDEO, EITHER FOR BIOS OR FOR THE GUEST (reference bug https://bugs.launchpad.net/ubuntu/+source/seabios/+bug/958549 ). Remote into your guest using RDP or other client-side remote tool as mandated in last step. Open Device Manager, find “Standard VGA Adapter”, right-click and select “Update Driver Software…” Choose “Search automatically for updated driver software” and it will find the “RedHat QXL Driver” and install it, after which it will need a reboot.

After rebooting, virt-manager will now successfully connect to the guest using Spice, and you will have a nice wicked-cool uber-accelerated control window, with working sound, to your Windows guest. HOWEVER cut and paste, unfortunately, still won’t work. You also won’t have any display during the SeaBIOS boot sequence or the guest’s boot sequence – video won’t start happening until you get all the way to the login screen for Win7.
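
If you’d rather skip the virt-manager clicking and edit the domain XML directly (virsh edit guestname), the Spice/QXL stanzas look roughly like this – a hedged sketch, not a drop-in config:

<graphics type='spice' autoport='yes'/>
<video>
  <model type='qxl' vram='65536' heads='1'/>
</video>
<channel type='spicevmc'>
  <target type='virtio' name='com.redhat.spice.0'/>
</channel>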

—–

From the “don’t blame me” dept: if you don’t follow these instructions exactly, it can get REALLY annoying – in particular, if you have Display VNC set while you have Video QXL *and the spice-guest-tools installed on your guest*, kvm will generate crash after crash, REPEATEDLY, and manage to expose a bug in apport-gtk as well so that you’re constantly asked to submit problem reports but then informed that the problem reports are corrupt (incorrect padding) and therefore you can neither see them nor send them. Once you get tired of that, force your guest shut in virt-manager and change its video to QXL and you should be fine. I learned this one the hard way while figuring this crap out today.