Sample netplan config for Ubuntu 18.04

Here’s a sample /etc/netplan config for Ubuntu 18.04. HUGE LIFE PRO TIP: against all expectations of decency, netplan refuses to function if you don’t indent everything exactly the way it likes it and returns incomprehensible wharrgarbl errors like “mapping values are not allowed in this context, line 17, column 15” if you, for example, have a single extra space somewhere in the config.

I wish I was kidding.

Anyway, here’s a sample /etc/netplan/01-config.yaml with a couple interfaces, one wired and static, one wireless and dynamic. Enjoy. And for the love of god, get the spacing exactly right; I really wasn’t kidding about it barfing if you have one too many spaces for a whitespace indent somewhere. Ask me how I know. >=\

If for any reason you have trouble reading this exact spacing, the rule is two spaces for each level of indent. So the v in “version” should line up under the t in “network”, the d in “dhcp4” should line up under the o in “eno1”, and so forth.

# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.0.10/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
  wifis:
    wlp58s0:
      dhcp4: yes
      dhcp6: no
      access-points:
        "your-wifi-SSID-name":
          password: "your-wifi-password"
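
Bonus tip: you can sanity-check and apply the config without rebooting. These are stock netplan subcommands; netplan try is the nice one, since it rolls the config back automatically if you don’t confirm within the timeout – which saves your bacon if you’ve just broken the interface you’re SSH’d in over:

# make sure the YAML actually parses before doing anything rash
netplan --debug generate

# apply with automatic rollback unless you confirm within the timeout
netplan try

# or just apply it outright
netplan apply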

Waiting for network to be configured (no limit)

In Ubuntu 16.04 and up (ie, post-systemd), if you’re ever stuck staring for two straight minutes at “Waiting for network to be configured (no limit)” and despairing, there’s a simple fix:

systemctl mask systemd-networkd-wait-online.service

This links the service – the one that sits there with its thumb up its butt whenever you don’t have a network connection – to /dev/null, causing it to just return instantly whenever it’s called. Which is probably a good idea. There may indeed be a situation in which I want a machine to refuse to boot until it gets an IP address, but whatever that situation MIGHT be, I’ve never encountered it in 20+ years of professional system administration, so…
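
And if you ever change your mind, unmasking puts things right back the way they were:

# confirm the unit now shows as masked
systemctl status systemd-networkd-wait-online.service

# restore the original wait-online behavior
systemctl unmask systemd-networkd-wait-online.service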

PSA: new SATA power standard / HGST 10TB drives

PSA to anyone who bought a new 10T or 12T drive and can’t figure out why the damn thing won’t power on: the SATA power standard changed. The 3.3V rail is now used to command a new-spec drive to spin down – which means that an old-style SATA power supply will never allow one of the newer-spec drives to spin up.

I discovered this the hard way with two new HGST 10TB NAS drives this afternoon. I wondered why such shiny big drives shipped with molex->SATA power adapters… and now I know.

Fortunately, you don’t have to use those crappy molex->SATA power adapters to get the drives working; the fix is just to pull the 3.3V rail out of the SATA adapter coming off your PSU that you want to power the newer drive with. This should typically be the orange wire; it’s the one on the “dogleg down” side of the adapter:

To get newer drives to spin up on older SATA PSUs, remove the 3.3V rail from the plug. It’s the wire on the “dogleg down” side of the SATA power plug, and is typically orange in color.

From what I’ve read online, no production hard drive prior to the SATA standard change actually used that 3.3V rail for anything, so it should also be safe to power older drives (and backplanes) with the 3.3V rail forcibly removed. I can confirm that my HGST 10TB NAS drives worked after removing the orange wire as shown, and that the WD 2TB Black drives they’re replacing also worked fine without the 3.3V rail – I successfully booted the system on one of them after removing the 3.3V wire, with no apparent problems whatsoever.

I am expressly providing this information with NO WARRANTY; if your drives or backplane stops working / your cat gets pregnant / a republican congress is elected after you remove the 3.3V rail from a SATA adapter, that’s your problem not mine. With that said, this worked great for me, saved me from having to use one of those crappy little firetrap molex adapters, and does not seem to cause any issues whatsoever with either newer or older drives.

Primer: How data is stored on-disk with ZFS

As with a lot of things at this blog, I’m largely writing this to confirm and solidify my own knowledge. I tend to be pretty firm on how disks relate to vdevs, and vdevs relate to pools… but once you veer down deeper into the direct on-disk storage, I get a little hazier. So here’s an attempt to remedy that, with citations, for my benefit (and yours!) down the line.

Top level: the zpool

The zpool is the topmost unit of storage under ZFS. A zpool is a single, overarching storage system consisting of one or more vdevs. Writes are distributed among the vdevs according to how much FREE space each vdev has available – you may hear urban myths about ZFS distributing them according to the performance level of the disk, such that “faster disks end up with more writes”, but they’re just that – urban myths. (At least, they’re only myths as of this writing – April 2018, and ZFS on Linux through 0.7.5.)

A zpool may be created with one or more vdevs, and may have any number of additional vdevs added to it later with zpool add – but, for the most part, you may not ever remove a vdev from a zpool. There is working code in development to make this possible, but it’s more of a “desperate save” than something you should use lightly – it involves building a permanent lookup table to redirect requests for records stored on the removed vdevs to their new locations on remaining vdevs; sort of a CNAME for storage blocks.
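
In command form, that looks like this – pool and device names are hypothetical, and remember that the add is forever:

# create a pool with a single mirror vdev
zpool create tank mirror /dev/sda /dev/sdb

# later, add a second mirror vdev - there is no undo for this
zpool add tank mirror /dev/sdc /dev/sdd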

If you create a zpool with vdevs of different sizes, or you add vdevs later when the pool already has a substantial amount of data in it, you’ll end up with an imbalanced distribution of data that causes more writes to land on some vdevs than others, which will limit the performance profile of your pool.

A pool’s performance scales with the number of vdevs within the pool: in a pool of n vdevs, expect the pool to perform roughly equivalently to the slowest of those vdevs, multiplied by n. This is an important distinction – if you create a pool with three solid state single-disk vdevs and one rust single-disk vdev, the pool will trend towards the IOPS performance of four rust disks.

Also note that the pool’s performance scales with the number of vdevs, not the number of disks within the vdevs. If you have a single 12 disk wide RAIDZ2 vdev in your pool, expect to see roughly the IOPS profile of a single disk, not of ten!

There is absolutely no parity or redundancy at the pool level. If you lose any vdev, you’ve lost the entire pool, plain and simple. Even if you “didn’t write to anything on that vdev yet” – the pool has altered and distributed its metadata accordingly once the vdev was added; if you lose that vdev “with nothing on it” you’ve still lost the pool.

It’s important to realize that the zpool is not a RAID0; in conventional terms, it’s a JBOD – and a fairly unusual one, at that.

Second level: the vdev

A vdev consists of one or more disks. Standard vdev types are single-disk, mirror, and raidz. A raidz vdev can be raidz1, raidz2, or raidz3. There are also special vdev types – log and l2arc – which extend the ZIL and the ARC, respectively, onto those vdev types. (They aren’t really “write cache” and “read cache” in the traditional sense, which trips a lot of people up. More about that in another post, maybe.)

A single vdev, of any type, will generally have write IOPS characteristics similar to those of a single disk. Specifically, the write IOPS characteristics of its slowest member disk – which may not even be the same disk on every write.

All parity and/or redundancy in ZFS occurs within the vdev level.

Single-disk vdevs

This is as simple as it gets: a vdev that consists of a single disk, no more, no less.

The performance profile of a single-disk vdev is that of, you guessed it, that single disk.

Single-disk vdevs may be expanded in size by replacing that disk with a larger disk: if you zpool attach a 4T disk to a 2T disk, it will resilver into a 2T mirror vdev. When you then zpool detach the 2T disk, the vdev becomes a 4T vdev, expanding your total pool size.

Single-disk vdevs may also be upgraded permanently to mirror vdevs; just zpool attach one or more disks of the same or larger size.
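
Sketched out with hypothetical pool and device names, both moves look like this:

# attach a (same-sized or larger) second disk to the single-disk vdev
# containing sda, turning it into a mirror vdev
zpool attach tank /dev/sda /dev/sdb

# once resilvering completes, detaching the smaller disk leaves a
# single-disk vdev the size of the remaining larger disk
zpool detach tank /dev/sda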

Single-disk vdevs can detect, but not repair, corrupted data records. This makes operating with single-disk vdevs quite dangerous, by ZFS standards – the equivalent, danger-wise, of a conventional RAID0 array.

However, a pool of single-disk vdevs is not actually a RAID0, and really shouldn’t be referred to as one. For one thing, a RAID0 won’t distribute twice as many writes to a 2T disk as to a 1T disk. For another thing, you can’t start out with a three disk RAID0 array, then add a single two-disk RAID1 array (or three five-disk RAID5 arrays!) to your original array, and still call it “a RAID0”.

It may be tempting to use old terminology for conventional RAID, but doing so just makes it that much more difficult to get accustomed to thinking in terms of ZFS’ real topology, hindering both understanding and communication.

Mirror vdevs

Mirror vdevs work basically like traditional RAID1 arrays – each record destined for a mirror vdev is written redundantly to all disks within the vdev. A mirror vdev can have any number of constituent disks; common sizes are 2-disk and 3-disk, but there’s nothing stopping you from creating a 16-disk mirror vdev if that’s what floats your boat.

A mirror vdev offers usable storage capacity equivalent to that of its smallest member disk, and can survive intact as long as any single member disk survives. As long as the vdev has at least two surviving members, it can automatically repair corrupt records detected during normal use or during scrubbing – but once it’s down to the last disk, it can only detect corruption, not repair it. (If you don’t scrub regularly, this means you may already be screwed by the time you’re down to a single disk in the vdev – any blocks that were already corrupt are no longer repairable, nor are any blocks that become corrupt before you replace the failed disk(s).)

You can expand a single disk to a mirror vdev at any time using the zpool attach command; you can also add new disks to an existing mirror in the same way. Disks may also be detached and/or replaced from mirror vdevs arbitrarily. You may also expand the size of an individual mirror vdev by replacing its disks one by one with larger disks; eg start with a mirror of 2T disks, then replace one disk with a 4T disk, wait for it to resilver, then replace the second 2T disk with another 4T disk. Once there are no disks smaller than 4T in the vdev, and it finishes resilvering, the vdev will expand to the new 4T size.
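
A minimal sketch of that one-by-one expansion, again with hypothetical names:

# let the vdev grow automatically once all members are larger
zpool set autoexpand=on tank

# swap the first 2T disk for a 4T disk, then wait for resilver to finish
zpool replace tank /dev/sda /dev/sdc
zpool status tank

# then swap the second one
zpool replace tank /dev/sdb /dev/sdd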

Mirror vdevs are extremely performant: like all vdevs, their write IOPS are roughly those of a single disk, but their read IOPS are roughly those of n disks, where n is the number of disks in the mirror – a mirror vdev n disks wide can read blocks from all n members in parallel.

A pool made of mirror vdevs closely resembles a conventional RAID10 array; each has write IOPS similar to n/2 disks and read IOPS similar to n disks, where n is the total number of disks. As with single-disk vdevs, though, I’d advise you not to think and talk sloppily and call it “ZFS RAID10” – it really isn’t, and referring to it that way blurs the boundaries between pool and vdev, hindering both understanding and accurate communication.

RAIDZ vdevs

RAIDZ vdevs are striped parity arrays, similar to RAID5 or RAID6. RAIDZ1 has one parity block per stripe, RAIDZ2 has two parity blocks per stripe, and RAIDZ3 has three parity blocks per stripe. This means that RAIDZ1 vdevs can survive the loss of a single disk, RAIDZ2 vdevs can survive the loss of two disks, and RAIDZ3 vdevs can survive the loss of as many as three disks.

Note, however, that – just like mirror vdevs – once you’ve stripped away all the parity, you’re vulnerable to corruption that can’t be repaired. RAIDZ vdevs also typically take significantly longer to resilver than mirror vdevs do – so you really don’t want to end up completely “uncovered” (surviving, but with no remaining parity blocks) with a RAIDZ array.

Each raidz vdev offers (n-parity) disks’ worth of storage capacity, where n is the number of disks in the vdev and parity is the number of parity blocks per stripe. So a six-disk RAIDZ1 vdev offers the storage capacity of five disks, an eight-disk RAIDZ2 vdev offers the storage capacity of six disks, and so forth.

You may create RAIDZ vdevs using mismatched disk sizes, but the vdev’s capacity will be based around the smallest member disk. You can expand the size of an existing RAIDZ vdev by replacing all of its members individually with larger disks than were originally used, but you cannot expand a RAIDZ vdev by adding new disks to it and making it wider – a 5-disk RAIDZ1 vdev cannot be converted into a 6-disk RAIDZ1 vdev later; neither can a 6-disk RAIDZ2 be converted into a 6-disk RAIDZ1.

It’s a common misconception to think that RAIDZ vdev performance scales linearly with the number of disks used. Although throughput under ideal conditions can scale towards that of n-parity disks, throughput under moderate to serious load will rapidly degrade toward the profile of a single disk – or even slightly worse, since it scales down toward the profile of the slowest disk for any given operation. This is the difference between IOPS and bandwidth (and it works the same way for conventional RAID!)

RAIDZ vdev IOPS performance is generally more robust than that of a conventional RAID5 or RAID6 array of the same size, because RAIDZ offers variable stripe write sizes – if you routinely write data in records only one block wide, a RAIDZ1 vdev will write to only two of its disks (one for data, and one for parity); a RAIDZ2 vdev will write to only three of its disks (one for data, and two for parity); and so on. This can mitigate some of the otherwise-crushing IOPS penalty associated with wide striped arrays; a variable stripe write three blocks wide (two data, one parity) to a six-disk RAIDZ1 vdev only lights up half the disks, both when written and later when read – which can make the performance profile of that six-disk RAIDZ1 resemble that of two three-disk RAIDZ1 vdevs rather than that of a single vdev.

The performance improvement described above assumes that multiple reads and writes of these narrow stripes are being requested concurrently; otherwise the entire vdev still binds while it waits for each individual read or write to complete.

Remember that you can – and with larger servers, should – have multiple RAIDZ vdevs per pool, not just one. A pool of three eight-disk RAIDZ2 vdevs will significantly outperform a pool with a single 24-disk RAIDZ2 or RAIDZ3 vdev – and it will resilver much faster when replacing failed disks.
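
For example – device names hypothetical – the 24-disk chassis done right:

# three eight-disk RAIDZ2 vdevs in a single pool, rather than one
# giant 24-disk RAIDZ2 or RAIDZ3 vdev
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh \
  raidz2 sdi sdj sdk sdl sdm sdn sdo sdp \
  raidz2 sdq sdr sds sdt sdu sdv sdw sdx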

Third level: the metaslab

Each vdev is organized into metaslabs – typically, 200 metaslabs per vdev (although this number can change, if vdevs are expanded and/or as the ZFS codebase itself becomes further optimized over time).

When you issue writes to the pool, those writes are coalesced into a txg (transaction group), which is then distributed among individual vdevs, and finally allocated to specific metaslabs on each vdev. There’s a fairly hefty logic chain which determines exactly what metaslab a record is written to; it was explained to me (with no warranty offered) by a friend who worked with Oracle as follows:

• Is this metaslab “full”? (zfs_mg_noalloc_threshold)
• Is this metaslab excessively fragmented? (zfs_metaslab_fragmentation_threshold)
• Is this metaslab group excessively fragmented? (zfs_mg_fragmentation_threshold)
• Have we exceeded minimum free space thresholds? (metaslab_df_alloc_threshold) This one is weird; it changes the whole storage pool allocation strategy for ZFS if you cross it.
• Should we prefer lower-numbered metaslabs over higher ones? (metaslab_lba_weighting_enabled) This is totally irrelevant to all-SSD pools, and should be disabled there, because it’s pretty stupid without rust disks underneath.
• Should we prefer lower-numbered metaslab groups over higher ones? (metaslab_bias_enabled) Same as above.

You can dive into the hairy details of your pool’s metaslabs using the zdb command – this is a level which I have thankfully not personally needed so far, and I devoutly hope I will continue not to need it in the future.
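
If you do ever have to go spelunking, the entry point is zdb’s -m flag; the pool name below is hypothetical:

# display each vdev's metaslabs, with offsets and free space
zdb -m tank

# repeat the flag (-mm, -mmm) for progressively hairier levels of detail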

Fourth level: the record

Each ZFS write is broken into records, the size of which is determined by the zfs set recordsize= command. The default recordsize is currently 128K; it may range from 512B to 1M.

Recordsize is a property which can be tuned individually per dataset, and for higher performance applications, should be tuned per dataset. If you expect to largely be moving large chunks of contiguous data – for example, reading and writing 5MB JPEG files – you’ll benefit from a larger recordsize than default. Setting recordsize=1M here will allow your writes to be less fragmented, resulting in higher performance both when making the writes, and later when reading them.

Conversely, if you expect a lot of small-block random I/O – like reading and writing database binaries, or VM (virtual machine) images – you should set recordsize smaller than the default 128K. MySQL, as an example, typically works with data in 16K chunks; if you set recordsize=16K you will tremendously improve IOPS when working with that data.
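
Tuning it is a one-liner per dataset – but note that it only affects data written after the change, so set it before loading the data in. Dataset names below are hypothetical:

# big contiguous media files: use the maximum recordsize
zfs set recordsize=1M tank/media

# MySQL InnoDB works in 16K pages: match it
zfs set recordsize=16K tank/mysql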

ZFS CSUMs – the checksums which verify its data’s integrity – are written on a per-record basis; data written with recordsize=1M will have a single CSUM per 1MB; data written with recordsize=8K will have 128 times as many CSUMs for the same 1MB of data.

Setting recordsize to a value smaller than your hardware’s individual sector size is a tremendously bad idea, and will lead to massive read/write amplification penalties.

Fifth (and final) level: ashift

Ashift is the property which tells ZFS what the underlying hardware’s actual sector size is. The individual blocksize within each record will be determined by ashift; unlike recordsize, however, ashift is set as a binary exponent rather than a byte count. For example, ashift=13 specifies 8K sectors (2^13 bytes), ashift=12 specifies 4K sectors, and ashift=9 specifies 512B sectors.

Ashift is per vdev, not per pool – and it’s immutable once set, so be careful not to screw it up!  In theory, ZFS will automatically set ashift to the proper value for your hardware; in practice, storage manufacturers very, very frequently lie about the underlying hardware sector size in order to keep older operating systems from getting confused, so you should do your homework and set it manually. Remember, once you add a vdev to your pool, you can’t get rid of it; so if you accidentally add a vdev with improper ashift value to your pool, you’ve permanently screwed up the entire pool!

Setting ashift too high is, for the most part, harmless – you’ll increase the amount of slack space on your storage, but unless you have a very specialized workload this is unlikely to have any significant impact. Setting ashift too low, on the other hand, is a horrorshow. If you end up with an ashift=9 vdev on a device with 8K sectors (thus, properly ashift=13), you’ll suffer from massive write amplification penalties as ZFS needs to read, modify, and rewrite the same actual hardware sector over and over. I have personally seen an improperly set ashift cause a pool of Samsung 840 Pro SSDs to perform slower than a pool of WD Black rust disks!

Even if you’ve done your homework and are absolutely certain that your disks use 512B hardware sectors, I strongly advise considering setting ashift=12 or even ashift=13 – because, remember, it’s immutable per vdev, and vdevs cannot be removed from pools. If you ever need to replace a 512B sector disk in a vdev with a 4K or 8K sector disk, you’ll be screwed if that vdev is ashift=9.
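
Setting it explicitly at vdev creation time is cheap insurance (device names hypothetical):

# force 4K sectors (2^12 bytes) regardless of what the disks claim
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# the same option applies when adding vdevs later
zpool add -o ashift=12 tank mirror /dev/sdc /dev/sdd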

How data gets imbalanced on ZFS

In an earlier post, I demonstrated that ZFS distributes writes evenly across vdevs according to FREE space per vdev (not based on latency or anything else: just FREE).

There are three ways I know of that you can end up with an imbalanced distribution of data across your vdevs. The first two are dead obvious; the third took a little head-scratching and empirical testing before I was certain of it.

Different-sized vdevs

If you used vdevs of different sizes in the first place, you’ll end up with more data on the larger vdevs than on the smaller ones.

This one’s a no-brainer: we know that ZFS will distribute writes according to the amount of FREE space on each vdev, so if you create a pool with one 1T vdev and one 2T vdev, twice as many writes will go to the 2T vdev as to the 1T vdev; natch.

Vdevs ADDed after data was already written to the pool

If you zpool add one or more vdevs to an existing pool that already has data on it, ZFS isn’t going to redistribute the writes you already made to the older vdevs.

For example, let’s say you create a pool with a single 2T vdev, write 1T of data to it, then add another 2T vdev. You’ve got 1T FREE on one vdev and 2T FREE on the other vdev; ZFS will now write two records to the new vdev for every one record it writes to the old one; this means that while your writes will remain imbalanced for the rest of the pool’s life, each vdev will become full at about the same time.

You might ask, why not bias writes to the new vdevs even more heavily, so that they achieve balance before the pool’s full? The answer is consistency. If you distribute two writes to a 2T FREE vdev for every one write to a 1T FREE vdev, you have a consistent write performance profile for the remainder of the life of the pool, rather than a really bad performance profile either now (if you bias all the writes to the vdev with more FREE) or at the end of the pool’s life (if you deliver writes evenly until one vdev is entirely full, then have no choice but to send all writes to the one vdev that still has FREE space remaining).

Balanced writes, imbalanced deletes

OK, this is the fun one. Let’s say you create a pool with two equally-sized vdevs, and a year later you look at it and you’ve got imbalanced writes. What gives?

Well, this is going to be more likely the larger your recordsize is, since as far as I can tell each record is written to a single vdev (not split across the pool as a whole in ashift-sized blocks). Basically, although ZFS wrote your data balanced across your equally-sized vdevs, you deleted more records from one vdev than another.

To demonstrate this effect (and give myself a sanity check!), I created a pool with two equally-sized 500GB vdevs, set recordsize=1M, and wrote a ton of 900K files to the pool.

root@banshee:~# zpool create -oashift=13 alloctest /rust/alloctest/disk1.raw /ssd/alloctest/disk2.raw
root@banshee:~# zfs set recordsize=1M alloctest

root@banshee:~# for i in {1..3636}; do cp /tmp/900K.bin /alloctest/$i.bin; done

root@banshee:~# zpool iostat -v alloctest
                               capacity   operations  bandwidth
pool                          alloc free  read  write read write
----------------------------- ----- ----- ----- ----- ----- -----
alloctest                     3.14G 989G  0     45    4.07K 14.2M
 /rust/alloctest/disk1.raw    1.57G 494G  0     22    2.04K 7.10M
 /ssd/alloctest/disk2.raw     1.57G 494G  0     22    2.04K 7.09M
----------------------------- ----- ----- ----- ----- ----- -----

As expected, these files are balanced equally across each vdev in the pool… even though one of the vdevs is much, much faster than the other, since they had the same FREE space available.

Now, we write a tiny bit of Perl to delete only the even-numbered files from alloctest:

#!/usr/bin/perl

opendir (my $dh, "/alloctest") || die "Can't open directory: $!";

while (readdir $dh) {
    my $file = $_;
    # skip . and .. and anything else that isn't one of our .bin files
    next unless $file =~ s/\.bin$//;
    if ($file % 2 == 0) {
        # this is an even-numbered file - delete it
        unlink "/alloctest/$file.bin";
    }
}

closedir $dh;

Now we run our little bit of Perl, delete the even-numbered files only, and see if we’re left with imbalanced data:

root@banshee:~# perl ~/deleteevens.pl

root@banshee:~# zpool iostat -v alloctest
                               capacity   operations  bandwidth
pool                          alloc free  read  write read write
----------------------------- ----- ----- ----- ----- ----- -----
alloctest                     1.57G 990G  0     24    2.13K 7.44M
 /rust/alloctest/disk1.raw    12.3M 496G  0     12    1.07K 3.72M
 /ssd/alloctest/disk2.raw     1.56G 494G  0     12    1.07K 3.72M
----------------------------- ----- ----- ----- ----- ----- -----

Bingo! 12.3M ALLOCed on disk1, and 1.56G ALLOCed on disk2 – it took some careful planning, but we now have imbalanced data on a pool with equally-sized vdevs that have been present since the pool’s creation.

However, it’s not imbalanced because ZFS wrote it that way, it’s imbalanced because we deleted it that way. By deleting all the even-numbered files, we got rid of the files on /rust/alloctest/disk1.raw while leaving all the files (actually, all the records) on /ssd/alloctest/disk2.raw intact. And since ZFS allocates writes according to FREE space per vdev, we know that our data will slowly creep back into balance, as ZFS favors the vdev with a higher FREE count on new writes.

In practice, most people shouldn’t see a really large imbalance like this in normal usage, even with a large recordsize. I had to pretty specifically gimmick this scenario up to save files right at the desired recordsize and then delete them very specifically in a pattern which would produce the results I was looking for; organic deletions should be very unlikely to create a large imbalance.

ZFS allocates writes according to free space per vdev, not latency per vdev

I frequently see the mistaken idea popping up that ZFS allocates writes to the quickest vdev to respond. This isn’t the case: ZFS allocates pool writes in proportion to the amount of free space available on each vdev, so that the vdevs will become full at roughly the same time regardless of how small or large each was to begin with.

Testing: one large slow vdev, one small fast vdev

We can demonstrate this quickly and easily. Below, I use the truncate command to create raw storage files on two pools: rust and ssd.  By creating a 10G storage file on rust and a 2G storage file on ssd, we will see quickly whether ZFS prefers to allocate data according to free space or to latency: the ssd storage is tremendously lower latency, but the size of the device on the rust is larger.

root@banshee:~# zfs create ssd/alloctest
root@banshee:~# zfs create rust/alloctest
root@banshee:~# zfs set compression=off ssd/alloctest
root@banshee:~# zfs set compression=off rust/alloctest
root@banshee:~# truncate -s 10G /rust/alloctest/10Grust.raw
root@banshee:~# truncate -s 2G /ssd/alloctest/2Gssd.raw
root@banshee:~# zpool create -oashift=13 alloctest /rust/alloctest/10Grust.raw /ssd/alloctest/2Gssd.raw
root@banshee:~# zfs set compression=off alloctest

root@banshee:~# zpool list -v alloctest
NAME                          SIZE  ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
alloctest                     11.9G 672K  11.9G -       0%   0% 1.00x ONLINE -
 /rust/alloctest/10Grust.raw  9.94G 416K  9.94G -       0%   0%
 /ssd/alloctest/2Gssd.raw     1.98G 256K  1.98G -       0%   0%

OK, now we’ve got our lopsided pool “alloctest”, which has one very fast 2G vdev and one much slower 10G vdev. Let’s see what happens when we dump 2GB of data into it:

root@banshee:~# dd if=/dev/zero bs=256M count=8 of=/alloctest/2G.bin
8+0 records in
8+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 16.6184 s, 129 MB/s

root@banshee:~# zpool list -v alloctest
NAME                          SIZE  ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
alloctest                     11.9G 2.00G 9.92G -       9%   16% 1.00x ONLINE -
 /rust/alloctest/10Grust.raw  9.94G 1.56G 8.37G -       9%   15%
 /ssd/alloctest/2Gssd.raw     1.98G 451M  1.54G -       13%  22%

We’ve ALLOC’d 451M to the smaller vdev, and 1.56G to the larger vdev – a ratio of 3.54:1. That’s not yet the full 5:1 ratio of the storage sizes themselves, but it’s clearly tracking free space, not latency.

What if we dump more data in?

root@banshee:~# dd if=/dev/zero bs=256M count=12 of=/alloctest/3G.bin
12+0 records in
12+0 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 29.0672 s, 111 MB/s

root@banshee:~# zpool list -v alloctest
NAME                          SIZE  ALLOC FREE  EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
alloctest                     11.9G 5.01G 6.91G -        24%  42% 1.00x ONLINE -
 /rust/alloctest/10Grust.raw  9.94G 3.92G 6.02G -        23%  39%
 /ssd/alloctest/2Gssd.raw     1.98G 1.09G 916M  -        34%  54%

3.92G to 1.09G – 3.59 to 1, or no real change. Let’s fill the pool literally to bursting:

root@banshee:~# dd if=/dev/zero bs=256M count=48 of=/alloctest/12G.bin
dd: error writing '/alloctest/12G.bin': No space left on device
27+0 records in
26+0 records out
7014973440 bytes (7.0 GB, 6.5 GiB) copied, 99.4393 s, 70.5 MB/s

root@banshee:~# zpool list -v alloctest
NAME                          SIZE  ALLOC FREE  EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
alloctest                     11.9G 11.5G 381M  -        58%  96% 1.00x ONLINE -
 /rust/alloctest/10Grust.raw  9.94G 9.61G 330M  -        58%  96%
 /ssd/alloctest/2Gssd.raw     1.98G 1.93G 50.8M -        61%  97%

With the pool entirely full, we have a ratio of 4.98:1 – still not quite the exact 5:1 ratio of our vdevs’ sizes, but pretty damn close.

Testing: one large fast vdev, one small slow vdev

OK… now what if we repeat the same experiment, but this time we put the big vdev on ssd and the little one on rust?

root@banshee:~# truncate -s 10G /ssd/alloctest/10Gssd.raw
root@banshee:~# truncate -s 2G /rust/alloctest/2Grust.raw
root@banshee:~# zpool create -oashift=13 alloctest /ssd/alloctest/10Gssd.raw /rust/alloctest/2Grust.raw
root@banshee:~# zfs set compression=off alloctest

root@banshee:~# zpool list -v alloctest
NAME                        SIZE  ALLOC FREE  EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
alloctest                   11.9G 552K  11.9G -        0%   0% 1.00x ONLINE -
 /ssd/alloctest/10Gssd.raw  9.94G 336K  9.94G -        0%   0%
 /rust/alloctest/2Grust.raw 1.98G 216K  1.98G -        0%   0%

OK, the tables have turned. Now we’ve got a 12G pool with 10G of the storage on fast SSD, and 2G of the storage on slow rust. Let’s dump data in it:

root@banshee:~# dd if=/dev/zero bs=256M count=8 of=/alloctest/2G.bin
8+0 records in
8+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 13.5287 s, 159 MB/s

root@banshee:~# zpool list -v alloctest
NAME                        SIZE  ALLOC FREE  EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
alloctest                   11.9G 1.98G 9.95G -        9%   16% 1.00x ONLINE -
 /ssd/alloctest/10Gssd.raw  9.94G 1.55G 8.39G -        9%   15%
 /rust/alloctest/2Grust.raw 1.98G 440M  1.56G -        13%  21%

1.55G to 440M – 3.6:1. That’s a pretty familiar ratio, isn’t it? Let’s dump another 3G of data in, just like we did earlier, when the big vdev was rust:

root@banshee:~# dd if=/dev/zero bs=256M count=12 of=/alloctest/3G.bin
12+0 records in
12+0 records out
3221225472 bytes (3.2 GB, 3.0 GiB) copied, 23.5282 s, 137 MB/s

root@banshee:~# zpool list -v alloctest
NAME                        SIZE  ALLOC FREE  EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
alloctest                   11.9G 5.01G 6.91G -        25%  42% 1.00x ONLINE -
 /ssd/alloctest/10Gssd.raw  9.94G 3.92G 6.02G -        24%  39%
 /rust/alloctest/2Grust.raw 1.98G 1.09G 916M  -        34%  54%

1.09G to 3.92G ALLOCated… simplified, that’s 3.6:1 again. Just like it was when the big vdev was rust and the small vdev was ssd.

What about high-IOPS, small random writes?

For this one, I set up equally-sized vdevs on rust and ssd, created a pool with no compression, and began populating it with 4K synchronously written files, which is just about the maximum IOPS load you can put on a pool:

root@banshee:~# for i in {1..1048576}
> do
> cp /tmp/4K.bin /alloctest/$i.bin
> sync
> done

This gives us a stream of steady 4K synchronous writes to the pool (as ensured by that sync command in the loop).

Checking zpool iostat -v alloctest while the data is streaming onto the pool confirms that the writes are balanced equally between the equal-sized vdevs, even though we’re doing 4K writes, and one of the vdevs is an Intel 480GB SSD while the other is a WD Red 4TB rust drive:

root@banshee:~# zpool iostat -v alloctest
                               capacity   operations  bandwidth
pool                          alloc free  read  write read  write
----------------------------- ----- ----- ----- ----- ----- -----
alloctest                     4.57G 987G  171   334   1.34M 6.12M
 /ssd/alloctest/500G.raw      2.29G 494G  85    172   683K  3.08M
 /rust/alloctest/500G.raw     2.28G 494G  85    161   685K  3.05M
----------------------------- ----- ----- ----- ----- ----- -----

There’s no significant difference: each device is receiving roughly the same number of operations, and the same amount of bandwidth, at any given second; and we’re accumulating the same amount of data on each same-sized vdev.

The rule of thumb – as we’re seeing here – is that writes to any given vdev bind on the slowest disk in the vdev, and writes to a pool bind on the slowest vdev in the pool. In this case, we’re binding on the performance of the rust vdev. The reason we’re binding on that slower vdev is to keep the pool from filling imbalanced.

Conclusion

ZFS allocates writes to the pool according to the amount of free space left on each vdev, period. With the small vdev sizes we used for testing here, this didn’t result in a “perfect” allocation ratio exactly matching our vdev sizes – but the “imperfect” ratio we got was the same whether the smaller vdev was the slower one or the faster one. And when we tested with 4K synchronous writes to a pool with evenly sized vdevs, the throughput bound to the slower of the two vdevs, and we could see the data moving at the same pace onto each of those vdevs – not allocated according to their individual capacities.

This should remove any confusion about whether ZFS (at least, as of 0.6.5.6) “prefers” faster/lower latency vdevs when allocating writes. It does not.

If you’re frowning because you’ve got an imbalanced distribution of data across your pool and aren’t sure how it happened, see here.

Wifi Acronym/Protocol Cheat Sheet

I can never find all this stuff in easy human-readable form in one place and have trouble remembering some of it, so here’s a cheat sheet for myself (and for you!)

AC Speed Ratings:

They’re basically complete snake oil and cannot be trusted to mean anything concrete. The only really meaningful basic hardware designator looks like “3×3:2”, which actually means “three transmit antennas, three receive antennas, and two simultaneous MIMO streams.” The relevant part of that is the two MIMO streams. A laptop that supports two MIMO streams can get roughly double the throughput from a router or AP that also supports two MIMO streams than it could from a router or AP which only supports one.

Very, very few client devices (laptops, phones, tablets, etc) support more than two MIMO streams. But a rare handful can support three – the most common being recent-model Macbook Pro laptops. If a router supports more MIMO streams than any of the clients connected to it, it does nobody any good at all, though. (MU-MIMO changes that, slightly, but almost no client devices support MU-MIMO, either. Welcome to wifi.)

Unfortunately, without hitting a specialty site like wikidevi, you’re going to find it really, really difficult to find anything but AC speed ratings, so here’s a list of what each of them probably means. Assuming you’re talking about a single router or access point – if you’re looking at the “rating” on a box of wifi mesh nodes, you’re going to need a couple of hours and all the algebra you ever learned to try to reverse engineer something meaningful out of it!

  • AC1200 or AC1350 probably means a 2×2:2 dual-band device.
  • AC1750, AC1900, or AC2300 probably means a 3×3:3 dual-band device.
  • AC2600 probably means a 4×4:4 dual-band device.
  • AC3200 or higher probably means a tri-band device, with two 5 GHz radios as well as a 2.4 GHz radio, and god help you if you need to know the specifics of the MIMO streams beyond that.

You may see the MIMO ratings just listed as “2×2” or “3×3” instead of the full “2×2:2” or “3×3:3”; you can generally assume this will mean the same number of MIMO streams as antennae. Probably. But if you really want to know for sure, go look the device up at wikidevi.

Terms/Acronyms:

  • AP – Access Point. This is wifi infrastructure – a router or access point which offers network access to clients.
  • STA – Station. This is nerd shorthand for “client device”; a device that connects to APs in order to have access to the network.
  • SSID – Service Set IDentifier. Normal humans call this a “wifi network name”. What you see on the list of wifi networks to connect to.
  • BSSID – Basic Service Set IDentifier. This is the hardware address of the wifi chipset in an AP or STA; wired network nerds will also be familiar with this as the “MAC address”.
  • MAC Address – this is a string of text which uniquely identifies a particular network interface to other network interfaces on the network. It’s the fundamental network identity – IP addresses will get you to the right network domain, but from there you need a translation table (ARP) to tell you which MAC address owns which IP addresses. When speaking of Wifi, MAC address is synonymous with BSSID.
  • ARP – Address Resolution Protocol. ARP is not unique to wifi; much like MAC addresses, wired networking uses it too. ARP is the protocol which allows machines on the local network to convert IP addresses to MAC addresses (which are how the packets ultimately get to the right local-network destination).
  • NIC – Network Interface Card. Used to refer specifically to the network chipset doing the communicating; a STA or AP may have multiple NICs. Each NIC has its own MAC/BSSID.

Protocols:

802.11k – RF-based roaming report

802.11k and 802.11v are protocols which facilitate BSS (Basic Service Set) transitions. Normal humans tend to call this “roaming.” K, specifically, is how an AP offers a STA information about the network, so that the STA can choose a reasonable AP to roam to.

1. AP determines that STA is moving away from it
2. AP informs STA to prepare for roaming
3. STA requests list of nearby access points
4. AP gives site report
5. STA moves to best AP based on report

Both AP and STA must support 802.11k for it to be of use. Without K, roaming takes longer (since the STA must switch bands “sniffing” the air for new APs), and is more likely to send the STA to a suboptimal AP.

If you need more info, rabbit hole begins here: https://en.wikipedia.org/wiki/IEEE_802.11k-2008

802.11r – Fast BSS transition

802.11r is only relevant to networks using EAP (Extensible Authentication Protocol), an enterprise-typical technology which allows each individual STA on the same SSID to use different passwords, and thus separate encryption keys. 802.11r does not apply to PSK networks, eg WPA/WPA2 “personal”.

Without 802.11r, a roaming event is much slower on an EAP network than on a Pre-Shared-Key style network, because the STA must first complete the full roaming process it would on the PSK network – then it must renegotiate the crypto side of things all over again with the new AP.

With 802.11r enabled (and supported on both STA and AP), part of the authentication and encryption keys may be cached for a certain amount of time, speeding up handoffs from AP to AP on an EAP network.

The details get a little hairy if you’re not super up on both the crypto and the nitty-gritty of the protocol; rabbit hole begins here: https://en.wikipedia.org/wiki/IEEE_802.11r-2008

802.11s – Mesh infrastructure protocol

802.11s is a mesh networking extension. It’s how most, if not all, Wifi Mesh networking kits handle communication between APs. Key features include:

1. SAE – Simultaneous Authentication of Equals. The idea here is that the various nodes of the mesh network can recognize one another without dependence on a central, authoritative controller.
2. broadcast/multicast and unicast delivery – in a normal network, if you hit the broadcast address a packet is relayed out to each STA. This becomes more difficult in a mesh network as not every STA is connected to a single infrastructure node; 802.11s facilitates the delivery of these *cast packets to all the STAs on the network.

802.11s is for APs only – normal STAs do not need to support and do not know anything about 802.11s, even if they’re connected to a “mesh” Wifi network.

Rabbit hole starts here: https://en.wikipedia.org/wiki/IEEE_802.11s

802.11v – Load-based roaming report

802.11v assists roaming based on AP load conditions. 802.11v BSS-TM management frames include a list of APs, and a report of their current loads. Providing this information to a STA reduces the scan time necessary, and allows for more graceful, steered roaming.

An 802.11v-enabled STA may request an 802.11v BSS-TM management frame from an AP, or an AP may send an unsolicited BSS-TM frame to the STA (indicating to the STA that a more preferred AP is available).

Similarly to 802.11k, the AP doesn’t unconditionally command the STA to roam to a specific AP, and the STA does not unconditionally obey. Both STA and AP must support 802.11v for load-based roaming to function.

I haven’t found a really good rabbit hole start for this one, but try here, here, and here.

ZVOL vs QCOW2 with KVM

When mixing ZFS and KVM, should you put your virtual machine images on ZVOLs, or on .qcow2 files on plain datasets? It’s a topic that pops up a lot, usually with a ton of people weighing in on performance without having actually done any testing.  My old benchmarks are getting a little long in the tooth, so here’s an fio random write run with 4K blocksize, done on both a .qcow2 on a dataset, and a zvol.

Test Configuration

Host:

CPU :  Intel(R) Xeon(R) CPU E3-1230 v5 @ 3.40GHz
RAM : 32 GB DDR4 SDRAM
SATA : Intel Corporation Sunrise Point-H SATA controller [AHCI mode] (rev 31)
OS : Ubuntu 16.04.4 LTS, fully updated as of 2018-03-13
FS : ZFS 0.6.5.6-0ubuntu19, from Canonical main repo
Disks : 2x Samsung 850 Pro 1TB SATA3, mirror vdev
ZFS parameters: ashift=13,recordsize=8K,atime=off,compression=lz4

Guest:

CPU : Intel Core Processor (Broadwell), 2 threads
RAM : 512MB
OS : Ubuntu 16.04.4 LTS, fully updated as of 2018-03-13
FS : ext4
Disks: /mnt/zvol on 20G zvol, /mnt/qcow2 on 20G .qcow2 file

Synchronous 4K write results

ZVOL, --ioengine=sync:

root@benchmark:/mnt/zvol# fio --name=random-write --ioengine=sync --iodepth=4 \
                              --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                              --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=50453KB/s, minb=3153KB/s, maxb=3153KB/s, mint=83116msec, maxt=83132msec

QCOW2, --ioengine=sync:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=sync --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=45767KB/s, minb=2860KB/s, maxb=2976KB/s, mint=88058msec, maxt=91643msec

So, 50.5 MB/sec (zvol) vs 45.8 MB/sec (qcow2). Yes, there’s a difference, at least on the most punishing I/O workloads. Is it perceptible enough to matter? Probably not, for most use cases, given the benefits in ease of management and maintenance for .qcow2 on datasets. The .qcow2 files are easier to provision, you don’t have to worry about refreservation keeping you from taking snapshots, they’re not significantly more difficult to mount offline (modprobe nbd ; qemu-nbd -c /dev/nbd0 /path/to/image.qcow2 ; mount -o ro /dev/nbd0 /mnt/image or similar); and probably most importantly, filling the underlying storage beneath a qcow2 won’t crash the guest.

Tuning QCOW2 for even better performance

I found out yesterday that you can tune the underlying cluster size of the .qcow2 format. Creating a new .qcow2 file tuned to use 8K clusters – matching our 8K recordsize, and the 8K underlying hardware blocksize of the Samsung 850 Pro drives in our vdev – produced tremendously better results. With the tuned qcow2, we more than tripled the performance of the zvol – going from 50.5 MB/sec (zvol) to 170 MB/sec (8K tuned qcow2)!
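
For reference, the tuning itself happens at image creation time – cluster_size is a standard qcow2 creation option (the default is 64K); the path and size below are made up:

# create a 20G qcow2 image with 8K clusters instead of the default 64K
qemu-img create -f qcow2 -o cluster_size=8k /data/images/guest.qcow2 20G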

QCOW2 -o cluster_size=8K, --ioengine=sync:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=sync --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=170002KB/s, minb=10625KB/s, maxb=12698KB/s, mint=20643msec, maxt=24672msec

ZVOL won’t pause the guest if storage is unavailable

If you fill the underlying pool with a guest that’s using a zvol for its storage, the filesystem in the guest will panic. From the guest’s perspective, this is a hardware I/O error, and the guest and/or its apps which use that virtual disk will crash, leaving it in an unknown and possibly corrupt state.

If the guest uses a .qcow2 file on a dataset for storage, the same problem is handled much more safely. When writes become unavailable on host storage, the guest will be automatically paused by libvirt. This gives you a chance to free up space, then virsh resume the guest. The net effect is that the guest and its apps never realize there was ever a problem in the first place. Any pending writes complete automatically and without error once you’ve cleared the host storage problem and resumed the guest.
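
The recovery workflow is pleasantly boring – guest name hypothetical:

# the guest shows up as paused once host storage fills
virsh list --all

# free up space on the pool, then:
virsh resume guest1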

ZVOL doesn’t honor guest synchronous writes

It may also be worth noting that the guest seems a little less clued in with what’s going on with its storage when using the zvol. I specified --ioengine=sync for these test runs, which should – repeat, should – have made the also-specified parameter end_fsync=1 irrelevant, since all writes were supposed to be synchronous.

On the .qcow2-hosted storage, the data was written verifiably sync, since we can see there’s no pause at the end for end_fsync=1 to finish flushing the data to the metal:

Jobs: 16 (f=16): [w(16)] [66.7% done] [0KB/75346KB/0KB /s]
Jobs: 16 (f=16): [w(16)] [68.0% done] [0KB/0KB/0KB /s]
Jobs: 16 (f=16): [w(16)] [72.0% done] [0KB/263.8MB/0KB /s]
Jobs: 16 (f=16): [w(8),F(1),w(7)] [80.0% done] [0KB/199.1MB/0KB /s] 
Jobs: 15 (f=15): [w(8),_(1),w(7)] [80.8% done] [0KB/53866KB/0KB /s] 
Jobs: 15 (f=15): [w(3),F(1),w(4),_(1),w(3),F(1),w(3)] [84.6% done] 
Jobs: 12 (f=12): [F(1),w(2),_(1),w(4),_(2),w(2),_(1),w(3)] [85.2% done] 
Jobs: 8 (f=8): [_(4),w(4),_(2),w(2),_(1),w(1),_(1),w(1)] [88.9% done] 
Jobs: 4 (f=3): [_(4),F(1),_(1),w(1),_(3),F(1),_(4),w(1)] [100.0% done] 

random-readwrite: (groupid=0, jobs=1): err= 0: pid=1773: Tue Mar 13 13:57:16 2018

The ZVOL hosted storage, on the other hand, clearly was not honoring ioengine=sync, as it spent a significant amount of time after all data was supposedly already written, waiting for end_fsync=1 to finish:

Jobs: 16 (f=16): [w(16)] [81.0% done] [0KB/527.2MB/0KB /s] 
Jobs: 16 (f=16): [w(10),F(1),w(5)] [94.7% done] [0KB/551.6MB/0KB /s]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/155.2MB/0KB /s]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]

 ------[[[ above line repeats for 60 more lines ]]]------

Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]

random-readwrite: (groupid=0, jobs=1): err= 0: pid=1792: Tue Mar 13 13:57:42 2018

This strikes me as pretty disturbing; you could end up in a world of hurt if you’re expecting your host to honor the guest’s synchronous writes when, in fact, it’s not.

Asynchronous 4K write results

Well, hrm. Realizing now that zvol storage doesn’t actually honor synchronous write requests very well, what if we use the libaio (native Linux asynchronous I/O) engine instead?

ZVOL, --ioengine=libaio:

root@benchmark:/mnt/zvol# fio --name=random-write --ioengine=libaio --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=139484KB/s, minb=8717KB/s, maxb=8722KB/s, mint=30054msec, maxt=30070msec

QCOW2, --ioengine=libaio:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=libaio --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=164392KB/s, minb=10274KB/s, maxb=11651KB/s, mint=22498msec, maxt=25514msec

And there you have it – qcow2 at 164 MB/sec vs zvol at 139 MB/sec. So when using asynchronous I/O, the qcow2-backed virtual disk actually finished the fio run faster than the zvol-backed disk.

What if we tune the .qcow2 for 8K cluster size, like we did above in the synchronous write test?

QCOW2 -o cluster_size=8K, --ioengine=libaio:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=libaio --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=181304KB/s, minb=11331KB/s, maxb=13543KB/s, mint=19356msec, maxt=23134msec

The improvements aren’t as drastic here – 181 MB/sec (tuned qcow2) vs 164 MB/sec (default qcow2) vs 139 MB/sec (zvol) – but they’re still a clear improvement, and the qcow2 storage is still faster than the zvol. (If anybody knows similar tuning that can be done to the zvol to improve its numbers, please tweet or DM me @jrssnet.)

Conclusion: .qcow2 FTW

For me, it’s a no-brainer: qcow2 files are only slightly slower on even the most punishing I/O workloads under default, untuned configuration, while being MUCH easier to manage, and arguably safer (won’t crash the guest if the host fills up the storage, honors sync write requests more predictably). And if you take the time to tune the .qcow2 on creation, they actually outperform the zvol. Winner: .qcow2.

Boot rescue for GalliumOS / chrx on Chromebooks

Since acquiring a small fleet of HP Chromebooks for use in network testing, I’ve discovered that once in a blue moon, one of them that’s lost power while running will have trashed its Linux boot configuration – in which case it hangs at the SeaBIOS “Booting from Hard Disk…” black screen indefinitely.

The fix is obscure but doesn’t take long. What you need to do is boot into ChromeOS, but don’t log in. Instead, press ctrl-alt-F2 (probably ctrl-alt-right-arrow on most Chromebook keyboards) to get a bash login. Log in as chronos, no password. sudo -s to become root. Now run the “mount” command with no arguments – you should see a few partitions from your system disk mounted; the device name can vary from Chromebook to Chromebook. Mine is /dev/mmcblk0, so partitions look like /dev/mmcblk0p7.

Standard chrx disk layouts that preserve ChromeOS should have the Linux partition as p7 on the system disk; so you’ll be looking at something like /dev/sda7 or /dev/mmcblk0p7. You’re going to make a temp directory, mount that Linux partition on the temp directory, then chroot inside it so that you can update the bootloader. Adjust that first mount command as necessary for your system, and you’re off to the races:

mkdir /tmp/a

mount /dev/mmcblk0p7        /tmp/a
mount -o bind /proc    /tmp/a/proc
mount -o bind /dev     /tmp/a/dev
mount -o bind /dev/pts /tmp/a/dev/pts
mount -o bind /sys     /tmp/a/sys
mount -o bind /run     /tmp/a/run

chroot /tmp/a /bin/bash

dpkg-reconfigure grub-pc

That’s it. dpkg-reconfigure will ask you a few questions, including one about the boot command line – which will come up blank, and which you can leave blank. Aside from that, enter your way through; you’re done in a few seconds, after which exit exit exit your way out, reboot, and your Linux installation will boot again!

Demonstrating ZFS zpool write distribution

One of my pet peeves is people talking about zfs “striping” writes across a pool. It doesn’t help any that zfs core developers use this terminology too – but it’s sloppy and not really correct.

ZFS distributes writes among all the vdevs in a pool.  If your vdevs all have the same amount of free space available, this will resemble a simple striping action closely enough.  But if you have different amounts of free space on different vdevs – either due to disks of different sizes, or vdevs which have been added to an existing pool – you’ll get more blocks written to the drives which have more free space available.

This came into contention on Reddit recently, when one senior sysadmin stated that a zpool queues the next write to the disk which responds with the least latency.  This statement did not match with my experience, which is that a zpool binds on the performance of the slowest vdev, period.  So, I tested, by creating a test pool with sparse images of mismatched sizes, stored side-by-side on the same backing SSD (which largely eliminates questions of latency).

root@banshee:/tmp# qemu-img create -f qcow2 512M.qcow2 512M
root@banshee:/tmp# qemu-img create -f qcow2 2G.qcow2 2G
root@banshee:/tmp# qemu-nbd -c /dev/nbd0 /tmp/512M.qcow2
root@banshee:/tmp# qemu-nbd -c /dev/nbd1 /tmp/2G.qcow2
root@banshee:/tmp# zpool create -oashift=13 test nbd0 nbd1

OK, we’ve now got a 2.5 GB pool, with vdevs of 512M and 2G, and pretty much guaranteed equal latency between the two of them.  What happens when we write some data to it?

root@banshee:/tmp# dd if=/dev/zero bs=4M count=128 status=none | pv -s 512M > /test/512M.zero
 512MiB 0:00:12 [41.4MiB/s] [================================>] 100% 

root@banshee:/tmp# zpool export test
root@banshee:/tmp# ls -lh *qcow2
-rw-r--r-- 1 root root 406M Jul 27 15:25 2G.qcow2
-rw-r--r-- 1 root root 118M Jul 27 15:25 512M.qcow2

There you have it – writes distributed with a ratio of roughly 4:1, matching the mismatched vdev sizes. (I also tested with a 512M image and a 1G image, and got the expected roughly 2:1 ratio afterward.)

OK. What if we put one 512M image on SSD, and one 512M image on much slower rust?  Will the pool distribute more of the writes to the much faster SSD?

root@banshee:/tmp# qemu-img create -f qcow2 /tmp/512M.qcow2 512M
root@banshee:/tmp# qemu-img create -f qcow2 /data/512M.qcow2 512M

root@banshee:/tmp# qemu-nbd -c /dev/nbd0 /tmp/512M.qcow2
root@banshee:/tmp# qemu-nbd -c /dev/nbd1 /data/512M.qcow2

root@banshee:/tmp# zpool create test -oashift=13 nbd0 nbd1
root@banshee:/tmp# dd if=/dev/zero bs=4M count=128 | pv -s 512M > /test/512M.zero 
512MiB 0:00:48 [10.5MiB/s][================================>] 100%
root@banshee:/tmp# zpool export test
root@banshee:/tmp# ls -lh /tmp/512M.qcow2 ; ls -lh /data/512M.qcow2 
-rw-r--r-- 1 root root 266M Jul 27 15:07 /tmp/512M.qcow2 
-rw-r--r-- 1 root root 269M Jul 27 15:07 /data/512M.qcow2

Nope. Once again, zfs distributes the writes according to the amount of free space available – even when this causes performance to bind *severely* on the slowest vdev in the pool.

You should expect to see this happening if you have a vdev with failing hardware, as well – if any one disk is throwing massive latency instead of just returning errors, your entire pool will as well, until the deranged disk has been removed.  You can usually spot this sort of problem using iostat -x – all of the disks in your pool will have roughly the same throughput in MB/sec (assuming they’ve got equivalent amounts of free space left!), but your problem disk will show a much higher %util than the rest.  Fault that slow disk, and your pool performance returns to normal.
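
A quick way to spot the pattern – iostat ships in the sysstat package, and the refresh interval is up to you:

# extended per-device stats, refreshed every 5 seconds
iostat -x 5

# look for a disk with MB/s similar to its peers but %util pegged near 100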