A comprehensive guide to fixing slow SSH logins

The debug text that brought you here

Most of you are probably getting here just from frustratedly googling “slow ssh login”.  Those of you who got a little froggier and tried doing an ssh -vv to get lots of debug output saw things hanging at debug1: SSH2_MSG_SERVICE_ACCEPT received, likely for long enough that you assumed the entire process was hung and ctrl-C’d out.  If you’re patient enough, the process will generally eventually continue after the debug1: SSH2_MSG_SERVICE_ACCEPT received line, but it may take 30 seconds.  Or even five minutes.

You might also have enabled debug logging on the server, and discovered that your hang occurs immediately after debug1: KEX done [preauth] and before debug1: userauth-request for user in /var/log/auth.log.

I feel your frustration, dear reader. I have solved this problem after hours of screeching head-desking probably ten times over the years.  There are a few fixes for this, with the most common – DNS – tending to drown out the rest.  Which is why I keep screeching in frustration every few years; I remember the dreaded debug1: SSH2_MSG_SERVICE_ACCEPT received hang is something I’ve solved before, but I can only remember some of the fixes I’ve needed.

Anyway, here are all the fixes I’ve needed to deploy over the years, collected in one convenient place where I can find them again.

It’s usually DNS.

The most common cause of slow SSH login authentications is DNS. To fix this one, go to the SSH server, edit /etc/ssh/sshd_config, and set UseDNS no.  You’ll need to restart the service after changing sshd_config: /etc/init.d/ssh restart, systemctl restart ssh, etc as appropriate.
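Not sure whether your server already has it set? sshd’s extended test mode dumps the effective configuration, so you can check before and after the change (run it as root on the server):

# prints the live value, e.g. "usedns no"
sshd -T | grep -i usedns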

If it’s not DNS, it’s Avahi.

The next most common cause – which is devilishly difficult to find reference to online, and I hope this helps – is the never-to-be-sufficiently damned avahi daemon.  To fix this one, go to the SSH client, edit /etc/nsswitch.conf, and change this line:

hosts:          files mdns4_minimal [NOTFOUND=return] dns

to:

hosts:          files dns

In theory maybe something might stop working without that mdns4_minimal option?  But I haven’t got the foggiest notion what that might be, because nothing ever seems broken for me after disabling it.  No services need restarting after making this change, which, again, must be made on the client.

You might think this isn’t your problem. Maybe your slow logins only happen when SSHing to one particular server, even one particular server on your local network, even one particular server on your local network which has UseDNS no and which you don’t need any DNS resolution to connect to in the first place.  But yeah, it can still be this avahi crap. Yay.
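One quick way to confirm the stall really is name resolution on the client: time a lookup straight through the NSS stack. getent consults /etc/nsswitch.conf the same way other programs do (someserver here is a placeholder for any host you like):

# on the client; seconds rather than milliseconds means NSS is your problem
time getent hosts someserver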

When it’s not Avahi… it’s PAM.

This is another one that’s really, really difficult to find references to online.  Optional PAM modules can really screw you here.  In my experience, you can’t get away with simply disabling PAM login (UsePAM no) in /etc/ssh/sshd_config – if you do, you won’t be able to log in at all.

What you need to do is go to the SSH server, edit /etc/pam.d/common-session and comment out the optional module that’s causing you grief.  In the past, that was pam_ck_connector.so.  More recently, in Ubuntu 16.04, the culprit that bit me hard was pam_systemd.so. Get in there and comment that bugger out.  No restarts required.

#session optional pam_systemd.so
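Not sure which optional module is biting you? They’re the only candidates, so list them first:

# on the SSH server: show every optional module in the session stack
grep -n optional /etc/pam.d/common-session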

GSSAPI, and ChallengeResponse.

I’ve seen a few seconds added to a pokey login from GSSAPIAuthentication, whatever that is. I feel slightly embarrassed about not knowing, but seriously, I have no clue.  Ditto for ChallengeResponseAuthentication.  All I can tell you is that neither covers standard interactive passwords nor standard public/private keypair authentication (the keys you keep in ~/.ssh/authorized_keys).

If you aren’t using them either, then disable them.  If you’re not using Active Directory authentication, might as well go ahead and nuke Kerberos while you’re at it.  Make these changes on the server in /etc/ssh/sshd_config, and restart the service.

ChallengeResponseAuthentication no
KerberosAuthentication no
GSSAPIAuthentication no

Host-based Authentication.

If you’re actually using this, don’t disable it. But let’s get real with each other: you’re not using it.  I mean, I’m sure somebody out there is.  But it’s almost certainly not you.  Get rid of it.  This is also on the server in /etc/ssh/sshd_config, and also will require a service restart.

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no

Need frequent connections? Consider control sockets.

If you need to do repetitive ssh’ing into another host, you can speed up the repeated ssh commands enormously with the use of control sockets.  The first time you SSH into the host, you establish a socket.  After that, the socket obviates the need for re-authentication.

ssh -M -S /path/to/socket -o ControlPersist=5m remotehost exit

This creates a new SSH socket at /path/to/socket which will survive for the next 5 minutes, after which it’ll automatically expire.  The exit at the end just causes the ssh connection you establish to immediately terminate (otherwise you’d have a perfectly normal ssh session going to remotehost).

After creating the control socket, you utilize it with -S /path/to/socket in any normal ssh command to remotehost.

The improvement in execution time for commands using that socket is delicious.

me@banshee:/~$ ssh -M -S /tmp/demo -o ControlPersist=5m eohippus exit

me@banshee:/~$ time ssh eohippus exit
real 0m0.660s
user 0m0.052s
sys 0m0.016s

me@banshee:/~$ time ssh -S /tmp/demo eohippus exit
real 0m0.050s
user 0m0.005s
sys 0m0.010s

Yep… from 660ms to 50ms.  SSH control sockets are pretty awesome.
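If you’d rather not juggle -M and -S by hand, ssh can do all of this automatically. A minimal ~/.ssh/config sketch – assuming you’ve already created the ~/.ssh/sockets directory:

Host *
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h:%p
        ControlPersist 5m

With ControlMaster auto, the first connection to any host becomes the master and creates the socket; every subsequent connection reuses it transparently until it expires.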

ZFS clones: Probably not what you really want

ZFS clones look great on paper: they’re instantaneously generated, they’re read/write, they’re initially “free” because they reference the same blocks their parent snapshots do. They’re also (initially) frequently extra-snappy performance-wise, because a lot of those parent blocks are very likely already in the ARC. If you create ten clones of the same VM image (for instance), all ten clones will share the same blocks in the ARC instead of them needing to be in the ARC ten different times. Huge win!

But, as great as a clone sounds at first blush, you probably don’t want to use them for anything that isn’t ephemeral (intended to be destroyed in fairly short order). This is because a clone’s parent snapshot is forever immutable; you can’t destroy the parent snapshot without destroying the clone along with it… even if and when the clone becomes 100% divergent, and no longer shares any block references with its parent. Let’s examine this on a small scale.

Practical testing

On my workstation banshee, I create a new dataset, make sure compression is turned off so as not to confuse us, and populate it with a 256MB chunk of random binary stuff:

root@banshee:~# zfs create banshee/demo ; zfs set compression=off banshee/demo
root@banshee:~# dd if=/dev/zero bs=16M count=16 | openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt | pv > /banshee/demo/random.bin
16+0 records in
16+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.483868 s, 555 MB/s
 256MiB 0:00:00 [ 533MiB/s] [<=>                                               ]

I know this looks a little weird, but AES-256 is roughly an order of magnitude faster than /dev/urandom: so what I did here was use /dev/urandom to seed AES-256, then encrypt a 256MB chunk of /dev/zero with it. At the end of this procedure, we have a dataset with 256MB of data in it:

root@banshee:~# ls -lh /banshee/demo
total 262M
-rw-r--r-- 1 root root 256M Mar 15 14:39 random.bin
root@banshee:~# zfs list banshee/demo
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   262M  83.3G   262M  /banshee/demo

OK. Next step, we take a snapshot of banshee/demo, then create a clone using that snapshot as its parent.

Creating a clone

You don’t actually create a ZFS clone of a dataset at all; you create a clone from a snapshot of a dataset. So before we can “clone banshee/demo”, we first have to take a snapshot of it, and then we clone that.

root@banshee:~# zfs snapshot banshee/demo@parent-snapshot
root@banshee:~# zfs clone banshee/demo@parent-snapshot banshee/demo-clone
root@banshee:~# zfs list -rt all banshee/demo
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   262M  83.3G   262M  /banshee/demo
banshee/demo@parent-snapshot      0      -   262M  -
root@banshee:~# zfs list -rt all banshee/demo-clone
NAME                 USED  AVAIL  REFER  MOUNTPOINT
banshee/demo-clone     1K  83.3G   262M  /banshee/demo-clone

So right now, we have the dataset banshee/demo, which shares all its blocks with banshee/demo@parent-snapshot, which in turn shares all its blocks with banshee/demo-clone. We see 262M in USED for banshee/demo, with nothing or next-to-nothing in USED for either banshee/demo@parent-snapshot or banshee/demo-clone.

Beginning divergence: removing data

Now, we remove all the data from banshee/demo:

root@banshee:~# rm /banshee/demo/random.bin
root@banshee:~# zfs list -rt all banshee/demo ; zfs list banshee/demo-clone
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   262M  83.3G    19K  /banshee/demo
banshee/demo@parent-snapshot   262M      -   262M  -
NAME                 USED  AVAIL  REFER  MOUNTPOINT
banshee/demo-clone     1K  83.3G   262M  /banshee/demo-clone

We still only have 262M of USED – but it’s all actually in banshee/demo@parent-snapshot now. You can tell because the REFER column has changed – banshee/demo@parent-snapshot and banshee/demo-clone still both REFER 262M, but banshee/demo only REFERs 19K now. (You still see 262M in USED for banshee/demo because banshee/demo@parent-snapshot is a child of banshee/demo, so its contents count towards banshee/demo’s USED figure.)

Next up: we re-fill the parent dataset, banshee/demo, with 256MB of different random garbage.

Continuing divergence: replacing data in the parent

root@banshee:~# dd if=/dev/zero bs=16M count=16 | openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt | pv > /banshee/demo/random.bin
16+0 records in
16+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.498349 s, 539 MB/s
 256MiB 0:00:00 [ 516MiB/s] [<=>                                               ]
root@banshee:~# zfs list -rt all banshee/demo ; zfs list banshee/demo-clone
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   523M  83.2G   262M  /banshee/demo
banshee/demo@parent-snapshot   262M      -   262M  -
NAME                 USED  AVAIL  REFER  MOUNTPOINT
banshee/demo-clone     1K  83.2G   262M  /banshee/demo-clone

OK, at this point you see that the USED for banshee/demo shoots up to 523M: that’s the total of the 262M of original random garbage which is still preserved in banshee/demo@parent-snapshot, plus the new 262M of different random garbage in banshee/demo itself. The snapshot now diverges completely from the parent dataset, having no blocks in common at all.

So far, banshee/demo-clone is still 100% convergent with banshee/demo@parent-snapshot, so we’re still getting some conservation of space on disk and in ARC from that. But remember, the whole point of making the clone was so that we could write to it as well as read from it. So let’s do exactly that, and make the clone 100% divergent from its parent, too.

Diverging completely: replacing data in the clone

root@banshee:~# dd if=/dev/zero bs=16M count=16 | openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt | pv > /banshee/demo-clone/random.bin
16+0 records in
16+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.50151 s, 535 MB/s
 256MiB 0:00:00 [ 534MiB/s] [<=>                                               ]
root@banshee:~# zfs list -rt all banshee/demo ; zfs list banshee/demo-clone
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   523M  82.8G   262M  /banshee/demo
banshee/demo@parent-snapshot   262M      -   262M  -
NAME                 USED  AVAIL  REFER  MOUNTPOINT
banshee/demo-clone  262M  82.8G  262M  /banshee/demo-clone

There, done. We now have a parent dataset, banshee/demo, which diverges completely from its snapshot banshee/demo@parent-snapshot, and a clone, banshee/demo-clone, which also diverges completely from banshee/demo@parent-snapshot.

Examining the suck

Since the parent, its snapshot, and the clone no longer share any blocks with one another, we’re using the full 786MB of on-disk space that the three of them add up to. And since they also don’t share any blocks in the ARC, we’re left with absolutely no benefit in either storage consumption or performance to our having used a clone.

Worse, despite having no blocks in common and no perceptible benefit to the clone structure, all three are still inextricably linked, and neither banshee/demo nor banshee/demo@parent-snapshot can be destroyed without also destroying banshee/demo-clone:

root@banshee:~# zfs destroy banshee/demo -r
cannot destroy 'banshee/demo': filesystem has dependent clones
use '-R' to destroy the following datasets:
banshee/demo-clone
root@banshee:~# zfs destroy banshee/demo@parent-snapshot
cannot destroy 'banshee/demo@parent-snapshot': snapshot has dependent clones
use '-R' to destroy the following datasets:
banshee/demo-clone

So now you’re left with a great unwieldy mass of tangled dependencies, wasted space, and no perceptible benefits at all.

Conclusion and practical example

Imagine that you’re storing VM images in ZFS, and you began with a “gold” image of a freshly installed operating system, and created ten different clones to run ten different VMs from. Initially, this seemed great: you could create the clones instantaneously, and they shared tons of blocks, so they consumed a fraction of the ARC they would as complete, separate copies.

A year later, however, your gold image – of, let’s say, Ubuntu 16.04.1 – has diverged to a staggering degree with the set of rolling updates necessary to bring it all the way to Ubuntu 16.04.2. Your VMs have also diverged tremendously, from their parent snapshot and from one another. And now you’re stuck with the year-old snapshot of the “gold” image, completely useless to you but forever engraved on your drive unless and until you’re willing to replicate or otherwise block-for-block copy your VMs painstakingly into self-sufficient datasets with no references. You also have no remaining performance benefits, and you have an extra SPOF (single point of failure) where some admin – maybe even you – might see that parent snapshot nobody cared about anymore taking up all that disk space, and…

root@banshee:~# zfs destroy -R banshee/demo@parent-snapshot
root@banshee:~# zfs list banshee/demo-clone
cannot open 'banshee/demo-clone': dataset does not exist

One “oops” later, that “useless” parent snapshot and every single one of those clones you were using in production are gone forever. (Or, hopefully, just gone until you can restore them from your off-pool backup. You are maintaining replicated backups on at least one other pool, preferably on another machine, aren’t you? Aren’t you?!)
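The moral, beyond “keep backups”: make dry runs of destructive zfs commands a habit. zfs destroy supports -n (don’t actually do it) and -v (say what would be done), which together would have shown our hypothetical admin exactly what that -R was about to take out:

# dry run: lists everything the destroy would take with it, destroys nothing
zfs destroy -n -v -R banshee/demo@parent-snapshot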

Connecting pfSense to a standard OpenVPN Server config

First, you need to dump the client cert+key into System -> Cert Manager -> Certificates. Then dump the server’s CA cert into System -> Cert Manager -> CA.

Now go to VPN -> OpenVPN -> Clients and add a client. You’ll likely want Peer-to-Peer (SSL/TLS), UDP, tun, and wan. Put in the remote host IP address or FQDN. You’ll probably want to check “infinitely resolve server”. Under Cryptographic settings, select the CA and certificate you entered into the System Cert Manager, and you’ll most likely want BF-CBC for the encryption algo and SHA-1 for the auth digest algo. Topology should be subnet unless you’re doing something funky; set compression if you’ve enabled it on the other end, but otherwise leave it alone.

This is enough to get you the VPN, but it won’t pass traffic originating there to you. To respond to traffic initiated from the other end, you’ll need to head to Firewall -> Rules -> OpenVPN. If you want all traffic to be allowed, when you create the new Pass rule, be certain to change the protocol from TCP to Any, and leave everything else the default. Save your rule and apply it, and you should at this point be connected and passing packets in both directions between your pfSense OpenVPN client and your standard (based on the template server.conf distributed with OpenVPN and using easy-rsa) OpenVPN server.
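For reference, the pfSense settings above correspond roughly to this classic flat-file client config – a sketch only, with placeholder host and file names, matching the stock server.conf template:

client
dev tun
proto udp
remote vpn.example.com 1194
resolv-retry infinite
ca ca.crt
cert client.crt
key client.key
cipher BF-CBC
auth SHA1

If the tunnel comes up but traffic only passes in one direction, it’s almost always that missing OpenVPN firewall rule, not the tunnel itself.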

Verifying copies

Most of the time, we explicitly trust that our tools are doing what we ask them to do. For example, if we rsync -a /source /target, we trust that the contents of /target will exactly match the contents of /source. That’s the whole point, right? And rsync is a tool with a long and impeccable lineage. So we trust it. And, really, we should.

But once in a while, we might get paranoid. And when we do, and we decide to empirically and thoroughly check our copied data… things might get weird. Today, we’re going to explore the weirdness, and look at how to interpret it.

So you want to make a copy of /usr/bin…

Well, let’s be honest, you probably don’t. But it’s a convenient choice, because it has plenty of files in it (but not so many that copies will take forever), it has folders beneath it, and it has a couple of other things in there that can throw you off balance.

On the machine I’m sitting in front of now, /usr/bin is on an ext4 filesystem – the root filesystem, in fact. The machine also has a couple of ZFS filesystems. Let’s start out by using rsync to copy it off to a ZFS dataset.

We’ll use the -a argument to rsync, which stands for archive, and is a shorthand for -rlptgoD. This means recursive, copy symlinks as symlinks, maintain permissions, maintain timestamps, maintain group ownership, maintain ownership, and preserve devices and other special files as devices and special files where possible. Seems fairly straightforward, right?

root@banshee:/# zfs create banshee/tmp
root@banshee:/# rsync -a /usr/bin /banshee/tmp/
root@banshee:/# du -hs /usr/bin ; du -hs /banshee/tmp/bin
353M	/usr/bin
215M	/banshee/tmp/bin

Hey! We’re missing data! What the hell? No, we’re not actually missing data – we just didn’t think about the fact that the du command reports space used on disk, which is related but not at all the same thing as amount of data stored. In this case, obvious culprit is obvious:

root@banshee:/# zfs get compression banshee/tmp
NAME         PROPERTY     VALUE     SOURCE
banshee/tmp  compression  lz4       inherited from banshee
root@banshee:/# zfs get compressratio banshee/tmp
NAME         PROPERTY       VALUE  SOURCE
banshee/tmp  compressratio  1.55x  -

So, compression tricked us.

I had LZ4 compression enabled on my ZFS dataset, which means that the contents of /usr/bin – much of which is highly compressible – do not take up anywhere near as much space on disk when stored there as they do on the original ext4 filesystem. For the purposes of our exercise, let’s just wipe it out, create a new dataset that doesn’t use compression, and try again.

root@banshee:/# zfs destroy -r banshee/tmp
root@banshee:/# zfs create -o compression=off banshee/tmp
root@banshee:/# rsync -ha /usr/bin /banshee/tmp/
root@banshee:/# du -hs /usr/bin ; du -hs /banshee/tmp/bin
353M	/usr/bin
365M	/banshee/tmp/bin

Wait, what the hell?! Compression’s off now, so why am I still seeing discrepancies – and in this case, more space used on the ZFS target than the ext4 source? Well, again – du measures on-disk space, and these are different filesystems. The blocksizes may or may not be different, and in this case, ZFS stores redundant copies of the metadata associated with each file where ext4 does not, so this adds up to measurable discrepancies in the output of du.

Semi-hidden, arcane du arguments

You won’t find any mention at all of it in the man page, but info du mentions an argument called --apparent-size that sounds like the answer to our prayers.

`--apparent-size'
     Print apparent sizes, rather than disk usage.  The apparent size
     of a file is the number of bytes reported by `wc -c' on regular
     files, or more generally, `ls -l --block-size=1' or `stat
     --format=%s'.  For example, a file containing the word `zoo' with
     no newline would, of course, have an apparent size of 3.  Such a
     small file may require anywhere from 0 to 16 KiB or more of disk
     space, depending on the type and configuration of the file system
     on which the file resides.

Perfect! So all we have to do is use that, right?

root@banshee:/# du -hs --apparent-size /usr/bin ; du -hs --apparent-size /banshee/tmp/bin
349M	/usr/bin
353M	/banshee/tmp/bin

We’re still consuming 4MB more space on the target than the source, dammit. WHY?

Counting files and comparing notes

So we’ve tried everything we could think of, including some voodoo arguments to du that probably cost the lives of many Bothans, and we still have a discrepancy in the amount of data present in source and target. Next step, let’s count the files and folders present on both sides, by using the find command to enumerate them all, and piping its output to wc -l, which counts them once enumerated:

root@banshee:/# find /usr/bin | wc -l ; find /banshee/tmp/bin | wc -l
2107
2107

Huh. No discrepancies there. OK, fine – we know about the diff command, which compares the contents of files and tells us if there are discrepancies – and handily, diff -r is recursive! Problem solved!

root@banshee:/# diff -r /usr/bin /banshee/tmp/bin 2>&1 | head -n 5
diff: /banshee/tmp/bin/arista-gtk: No such file or directory
diff: /banshee/tmp/bin/arista-transcode: No such file or directory
diff: /banshee/tmp/bin/dh_pypy: No such file or directory
diff: /banshee/tmp/bin/dh_python3: No such file or directory
diff: /banshee/tmp/bin/firefox: No such file or directory

What’s wrong with diff -r?

Actually, problem emphatically not solved. I cheated a little, there – I knew perfectly well we were going to get a ton of errors thrown, so I piped descriptor 2 (standard error) to descriptor 1 (standard output) and then to head -n 5, which would just give me the first five lines of errors – otherwise we’d be staring at screens full of crap. Why are we getting “no such file or directory” on all these? Let’s look at one:

root@banshee:/# ls -l /banshee/tmp/bin/arista-gtk
lrwxrwxrwx 1 root root 26 Mar 20  2014 /banshee/tmp/bin/arista-gtk -> ../share/arista/arista-gtk

Oh. Well, crap – /usr/bin is chock full of relative symlinks, which rsync -a copied exactly as they are. So now our target, /banshee/tmp/bin, is full of relative symlinks that don’t go anywhere, and diff -r is trying to follow them to compare contents. /usr/share/arista/arista-gtk is a thing, but /banshee/share/arista/arista-gtk isn’t. So now what?

find, and xargs, and md5sum, oh my!

Luckily, we have a tool called md5sum that will generate a pretty strong checksum of any arbitrary content we give it. It’ll still try to follow symlinks if we let it, though. We could just ignore the symlinks, since they don’t have any data in them directly anyway, and just sorta assume that they’re targeted to the same relative target that they were on the source… yeah, sure, let’s try that.

What we’ll do is use the find command, with the -type f argument, to only find actual files in our source and our target, thereby skipping folders and symlinks entirely. We’ll then pipe that to xargs, which turns the list of output from find -type f into a list of arguments for md5sum, which we can then redirect to text files, and then we can diff the text files. Problem solved! We’ll find that extra 4MB of data in no time!

root@banshee:/# find /usr/bin -type f | xargs md5sum > /tmp/sourcesums.txt
root@banshee:/# find /banshee/tmp/bin -type f | xargs md5sum > /tmp/targetsums.txt

… okay, actually I’m going to cheat here, because you guessed it: we failed again. So we’re going to pipe the output through grep sudo to let us look at just a couple of lines of detected differences.

root@banshee:/# diff /tmp/sourcesums.txt /tmp/targetsums.txt | grep sudo
< 866713ce6e9700c3a567788334be7e25  /usr/bin/sudo
< e43c40cfbec06f191191120f4332f60f  /usr/bin/sudoreplay
> e43c40cfbec06f191191120f4332f60f  /banshee/tmp/bin/sudoreplay
> 866713ce6e9700c3a567788334be7e25  /banshee/tmp/bin/sudo

Ah, crap – absolute paths. Every single file failed this check, since we’re seeing it referenced by absolute path, and /usr/bin/sudo doesn’t match /banshee/tmp/bin/sudo, even though the checksums match! OK, we’ll fix that by using relative paths when we generate our sums… and I’ll save you a little more time; that might fail too, because there’s not actually any guarantee that find will return the files in the same sort order both times. So we’ll also pipe find’s output through sort to fix that problem, too.

root@banshee:/banshee/tmp# cd /usr ; find ./bin -type f | sort | xargs md5sum > /tmp/sourcesums.txt
root@banshee:/usr# cd /banshee/tmp ; find ./bin -type f | sort | xargs md5sum > /tmp/targetsums.txt
root@banshee:/banshee/tmp# diff /tmp/sourcesums.txt /tmp/targetsums.txt
root@banshee:/banshee/tmp# 

Yay, we finally got verification – all of our files exist on both sides, are named the same on both sides, are in the same folders on both sides, and checksum out the same on both sides! So our data’s good! Hey, wait a minute… if we have the same number of files, in the same places, with the same contents, how come we’ve still got different totals?!

Macabre, unspeakable use of ls

We didn’t really get what we needed out of find. We do at least know that we have the right total number of files and folders, and that things are in the right places, and that we got the same checksums out of the files that we could check. So where the hell are those missing 4MB of data coming from?! We’re using our arcane invocation of du -s --apparent-size to make sure we’re looking at bytes of data and not bytes of allocated disk space, so why are we still seeing differences?

This time, let’s try using ls. It’s rarely used, but ls does have a recursion argument, -R. We’ll use that along with -l for long form and -a to pick up dotfiles, and we’ll pipe it through grep -v '\.\.' to get rid of the entry for the parent directory (which obviously won’t be the same for /usr/bin and for /banshee/tmp/bin, no matter how otherwise perfect matches they are), and then, once more, we’ll have something we can diff.

root@banshee:/banshee/tmp# ls -laR /usr/bin > /tmp/sourcels.txt
root@banshee:/banshee/tmp# ls -laR /banshee/tmp/bin > /tmp/targetls.txt
root@banshee:/banshee/tmp# diff /tmp/sourcels.txt /tmp/targetls.txt | grep which
< lrwxrwxrwx  1 root   root         10 Jan  6 14:16 which -> /bin/which
> lrwxrwxrwx 1 root   root         10 Jan  6 14:16 which -> /bin/which

Once again, I cheated, because there was still going to be a problem here. ls -l isn’t necessarily going to use the same number of formatting spaces when listing the source or the target – so our diff, again, frustratingly fails on every single line. I cheated here by only looking at the output for the which file so we wouldn’t get overwhelmed. The fix? We’ll pipe through awk, forcing the spacing to be constant. Grumble grumble grumble…

root@banshee:/banshee/tmp# ls -laR /usr/bin | awk '{print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11}' > /tmp/sourcels.txt
root@banshee:/banshee/tmp# ls -laR /banshee/tmp/bin | awk '{print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11}' > /tmp/targetls.txt
root@banshee:/banshee/tmp# diff /tmp/sourcels.txt /tmp/targetls.txt | head -n 25
1,4c1,4
< /usr/bin:          
< total 364620         
< drwxr-xr-x 2 root root 69632 Jun 21 07:39 .  
< drwxr-xr-x 11 root root 4096 Jan 6 15:59 ..  
---
> /banshee/tmp/bin:          
> total 373242         
> drwxr-xr-x 2 root root 2108 Jun 21 07:39 .  
> drwxr-xr-x 3 root root 3 Jun 29 18:00 ..  
166c166
< -rwxr-xr-x 2 root root 36607 Mar 1 12:35 c2ph  
---
> -rwxr-xr-x 1 root root 36607 Mar 1 12:35 c2ph  
828,831c828,831
< -rwxr-xr-x 4 root root 14592 Dec 3 2012 lockfile-check  
< -rwxr-xr-x 4 root root 14592 Dec 3 2012 lockfile-create  
< -rwxr-xr-x 4 root root 14592 Dec 3 2012 lockfile-remove  
< -rwxr-xr-x 4 root root 14592 Dec 3 2012 lockfile-touch  
---
> -rwxr-xr-x 1 root root 14592 Dec 3 2012 lockfile-check  
> -rwxr-xr-x 1 root root 14592 Dec 3 2012 lockfile-create  
> -rwxr-xr-x 1 root root 14592 Dec 3 2012 lockfile-remove  
> -rwxr-xr-x 1 root root 14592 Dec 3 2012 lockfile-touch  
885,887c885,887

FINALLY! Useful output! It makes perfect sense that the special files . and .., referring to the current and parent directories, would be different. But we’re still getting more hits, like for those four lockfile- commands. We already know they’re identical from source to target, both because we can see the filesizes are the same and because we’ve already md5sum’ed them and they matched. So what’s up here?

And you thought you’d never need to use the ‘info’ command in anger

We’re going to have to use info instead of man again – this time, to learn about the ls command’s lesser-known evils.

Let’s take a quick look at those mismatched lines for the lockfile- commands again:

< -rwxr-xr-x 4 root root 14592 Dec 3 2012 lockfile-check  
< -rwxr-xr-x 4 root root 14592 Dec 3 2012 lockfile-create  
< -rwxr-xr-x 4 root root 14592 Dec 3 2012 lockfile-remove  
< -rwxr-xr-x 4 root root 14592 Dec 3 2012 lockfile-touch  
---
> -rwxr-xr-x 1 root root 14592 Dec 3 2012 lockfile-check  
> -rwxr-xr-x 1 root root 14592 Dec 3 2012 lockfile-create  
> -rwxr-xr-x 1 root root 14592 Dec 3 2012 lockfile-remove  
> -rwxr-xr-x 1 root root 14592 Dec 3 2012 lockfile-touch

OK, ho hum, we’ve seen this a million times before. File type, permission modes, owner, group, size, modification date, filename, and in the case of the one symlink shown, the symlink target. Hey, wait a minute… how come nobody ever thinks about the second column? And, yep, that second column is the only difference in these lines – it’s 4 in the source, and 1 in the target.

But what does that mean? You won’t learn the first thing about it from man ls, which doesn’t tell you any more than that -l gives you a “long listing format”. And there’s no option to print column headers, which would certainly be helpful… info ls won’t give you any secret magic arguments to print column headers either. But however grudgingly, it will at least name them:

`-l'
`--format=long'
`--format=verbose'
     In addition to the name of each file, print the file type, file
     mode bits, number of hard links, owner name, group name, size, and
     timestamp

So it looks like that mysterious second column refers to the number of hardlinks to the file being listed, normally one (the file itself) – but on our source, there are four hardlinks to each of our lockfile- files.

Checking out hardlinks

We can explore this a little further, using find’s -samefile test:

root@banshee:/banshee/tmp# find /usr/bin -samefile /usr/bin/lockfile-check
/usr/bin/lockfile-create
/usr/bin/lockfile-check
/usr/bin/lockfile-remove
/usr/bin/lockfile-touch
root@banshee:/banshee/tmp# find /banshee/tmp/bin -samefile /usr/bin/lockfile-check
root@banshee:/banshee/tmp# 

Well, that explains it – on our source, lockfile-create, lockfile-check, lockfile-remove, and lockfile-touch are all actually just hardlinks to the same inode – meaning that while they look like four different files, they’re actually just four separate pointers to the same data on disk. Those and several other sets of hardlinks add up to the reason why our target, /banshee/tmp/bin, is larger than our source, /usr/bin, even though they appear to contain the same data exactly.

Could we have used rsync to duplicate /usr/bin exactly, including the hardlink structures? Absolutely! That’s what the -H argument is there for. Why isn’t that rolled into the -a shortcut argument, you might ask? Well, probably because hardlinks aren’t supported on all filesystems – in particular, they won’t work on FAT32 thumbdrives, or on NFS links set up by sufficiently paranoid sysadmins – so rather than have rsync jobs fail, it’s left up to you to specify if you want to recreate them.
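If you just want to know whether hardlinks are in play at all before reaching for -H, find can count them for you – -links +1 matches any file with more than one link:

# compare the counts on source and target
find /usr/bin -type f -links +1 | wc -l
find /banshee/tmp/bin -type f -links +1 | wc -l

A nonzero count on the source and zero on the target means your copy flattened the hardlinks.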

Simpler verification with tar

This was cumbersome as hell – could we have done it more simply? Well, of course! Actually… maybe.

We could have used the venerable tar command to generate a single md5sum for the entire collection of data in one swell foop. We do have to be careful, though – tar will encode the entire path you feed it into the file structure, so you can’t compare tars created directly from /usr/bin and /banshee/tmp/bin – instead, you need to cd into the parent directories, create tars of ./bin in each case, then compare those.

Let’s start out by comparing our current copies, which differ in how hardlinks are handled:

root@banshee:/tmp# cd /usr ; tar -c ./bin | md5sum ; cd /banshee/tmp ; tar -c ./bin | md5sum
4bb9ffbb20c11b83ca616af716e2d792  -
2cf60905cb7870513164e90ac82ffc3d  -

Yep, they differ – which we should be expecting; after all, the source has hardlinks and the target does not. How about if we fix up the target so that it also uses hardlinks?

root@banshee:/tmp# rm -r /banshee/tmp/bin ; rsync -haH /usr/bin /banshee/tmp/
root@banshee:/tmp# cd /usr ; tar -c ./bin | md5sum ; cd /banshee/tmp ; tar -c ./bin | md5sum
4bb9ffbb20c11b83ca616af716e2d792  -
f654cf7a1c4df8482d61b59993bb92f7  -

Well, crap. That should’ve resulted in identical md5sums, shouldn’t it? It would have, if we’d gone from one ext4 filesystem to another:

root@banshee:/banshee/tmp# rsync -haH /usr/bin /tmp/
root@banshee:/banshee/tmp# cd /usr ; tar -c ./bin | md5sum ; cd /tmp ; tar -c ./bin | md5sum
4bb9ffbb20c11b83ca616af716e2d792  -
4bb9ffbb20c11b83ca616af716e2d792  -

So why, exactly, didn’t we get the same md5sum values for a tar on ZFS, when we did on ext4? The metadata gets included in those tarballs, and it’s just plain different between different filesystems. Different metadata means different contents of the tar means different checksums. Ugly but true. Interestingly, though, if we dump /banshee/tmp/bin back through tar and out to another location, the checksums match again:

root@banshee:/tmp/fromtar# tar -c /banshee/tmp/bin | tar -x
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
root@banshee:/tmp/fromtar# cd /tmp/fromtar/banshee/tmp ; tar -c ./bin | md5sum ; cd /usr ; tar -c ./bin | md5sum
4bb9ffbb20c11b83ca616af716e2d792  -
4bb9ffbb20c11b83ca616af716e2d792  -

What this tells us is that ZFS is storing additional metadata that ext4 can’t, and that, in turn, is what was making our tarballs checksum differently… but when you move that data back onto a nice dumb ext4 filesystem again, the extra metadata is stripped again, and things match again.

Picking and choosing what matters

Properly verifying data and metadata on a block-for-block basis is hard, and you need to really, carefully think about what, exactly, you do – and don’t! – want to verify. If you want to verify absolutely everything, including folder metadata, tar might be your best bet – but it probably won’t match up across filesystems. If you just want to verify the contents of the files, some combination of find, xargs, md5sum, and diff will do the trick. If you’re concerned about things like hardlink structure, you have to check for those independently. If you aren’t just handwaving things away, this stuff is hard. Extra hard if you’re changing filesystems from source to target.
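If file contents are the only thing you care about, the working recipe from earlier shrinks down nicely into a helper – a bash-specific sketch, since the usage line leans on process substitution:

# checksum every file's contents relative to a directory, in stable order
sumdir() { ( cd "$1" && find . -type f | sort | xargs md5sum ); }

# usage: silence means the contents match
diff <(sumdir /usr/bin) <(sumdir /banshee/tmp/bin)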

Shortcut for ZFS users: replication!

For those of you using ZFS, and using strictly ZFS, though, there is a shortcut: replication. If you use syncoid to orchestrate ZFS replication at the dataset level, you can be sure that absolutely everything, yes everything, is preserved:

root@banshee:/usr# syncoid banshee/tmp banshee/tmp2
INFO: Sending oldest full snapshot banshee/tmp@autosnap_2016-06-29_19:33:01_hourly (~ 370.8 MB) to new target filesystem:
 380MB 0:00:01 [ 229MB/s] [===============================================================================================================================] 102%            
INFO: Updating new target filesystem with incremental banshee/tmp@autosnap_2016-06-29_19:33:01_hourly ... syncoid_banshee_2016-06-29:19:39:01 (~ 4 KB):
2.74kB 0:00:00 [26.4kB/s] [======================================================================================>                                         ] 68%           
root@banshee:/banshee/tmp2# cd /banshee/tmp ; tar -c . | md5sum ; cd /banshee/tmp2 ; tar -c . | md5sum
e2b2ca03ffa7ba6a254cefc6c1cffb4f  -
e2b2ca03ffa7ba6a254cefc6c1cffb4f  -

Easy, peasy, chicken greasy: replication guarantees absolutely everything matches.

TL;DR

There isn’t one, really! Verification of data and metadata, especially across filesystems, is one of the fundamental challenges of information systems that’s generally abstracted away from you almost entirely, and it makes for one hell of a deep rabbit hole to fall into. Hopefully, if you’ve slogged through all this with me so far, you have a better idea of what tools are at your fingertips to test it when you need to.

PSA: Snapshots are better than ZVOLs

A lot of people new to ZFS, and even a lot of people not-so-new to ZFS, like to wax ecstatic about ZVOLs. But they never seem to mention the very real pitfalls ZVOLs present.

What’s a ZVOL?

Well, if you know what LVM is, a ZVOL is like an LV, but for ZFS. If you don’t know what LVM is, you can think of a ZVOL as, basically, a dynamically allocated “raw partition” inside ZFS. Unlike a normal dataset, a ZVOL doesn’t have a filesystem of its own. And you can access it by a raw devicename, like /dev/zvol/poolname/zvolname. This looks ideal for those use-cases where you want to nest a legacy filesystem underneath ZFS – for example, virtual machine images. Once you have the ZVOL, you have a raw block storage device to interact with – think mkfs.ext4 /dev/zvol/poolname/zvolname, for example – but you still get all the normal ZFS features along with it, like data integrity, compression, snapshots, and so forth. Plus you don’t have to mess with a loopback device, so that should be higher performance, right? What’s not to love?

ZVOLs perform better, though, right?

AFAICT, the increased performance is pretty much a lie. I’ve benchmarked ZVOLs pretty extensively against raw disk partitions, raw LVs, raw files, and even .qcow2 files and there really isn’t much of a performance difference to be seen. A partially-allocated ZVOL isn’t going to perform any better than a partially-allocated .qcow2 file, and a fully-allocated ZVOL isn’t going to perform any better than a fully-allocated .qcow2 file. (Raw disk partitions or LVs don’t really get any significant boost, either.)

Let’s talk about snapshots.

If snapshots aren’t one of the biggest reasons you’re using ZFS, they should be, and ZVOLs and snapshots are really, really tricky and weird. If you have a dataset that’s occupying 85% of your pool, you can snapshot that dataset any time you like. If you have a ZVOL that’s occupying 85% of your pool, you cannot snapshot it, period. This is one of those things that both tyros and vets tend to immediately balk at – I must be misunderstanding something, right? Surely it doesn’t work that way? Afraid it does.

Ooh, is it demo-in-a-VM-time again?! =)

root@xenial:~# zfs create target/dataset -o compress=off -o quota=15G
root@xenial:~# pv < /dev/zero > /target/dataset/0.bin
  15GiB 0:01:13 [10.3MiB/s] [            <=>                            ]
pv: write failed: Disk quota exceeded
root@xenial:~# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
target          15.3G  3.93G    19K  /target
target/dataset  15.0G      0  15.0G  /target/dataset
root@xenial:~# zfs snapshot target/dataset@1
root@xenial:~# 

Above, we created a dataset on a 20G pool, we dumped 15G of data into it, and we snapshotted the dataset. No surprises here, this is exactly what we expect.

But what happens when we try the same thing with a ZVOL?

root@xenial:~# zfs create target/zvol -V 15G -o compress=off
root@xenial:~# pv < /dev/zero > /dev/zvol/target/zvol
  15GiB 0:03:22 [57.3MiB/s] [========================================>  ] 99% ETA 0:00:00
pv: write failed: No space left on device
root@xenial:~# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
target       15.8G  3.46G    19K  /target
target/zvol  15.5G  3.90G  15.0G  -
root@xenial:~# zfs snapshot target/zvol@1
cannot create snapshot 'target/zvol@1': out of space

Despite having 3.9G free on our pool, we can’t snapshot the zvol. If you don’t have at least as much free space in a pool as the REFER of a ZVOL on that pool, you can’t snapshot the ZVOL, period. This means for our little baby demonstration here we’d need 15G free to snapshot our 15G ZVOL. In a real-world situation with VM images, this could easily be a case where you can’t snapshot your 15TB VM image without 15 terabytes of free space available – where if you’d stuck with standard datasets, you’d be able to snapshot that same 15TB VM even with just a few hundred megabytes of AVAIL at your disposal.
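As far as I can tell, the mechanism here is the refreservation ZFS places on a non-sparse ZVOL: the volume reserves enough space to be completely overwritten, and honoring that reservation after a snapshot means having room for old and new copies of every block at once. You can see it on the demo ZVOL:

zfs get volsize,refreservation target/zvol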

TL;DR:

Think long and hard before you implement ZVOLs. Then, you know… don’t.

Testing copies=n resiliency

I decided to see how well ZFS copies=n would stand up to on-disk corruption today. Spoiler alert: not great.

First step, I created a 1GB virtual disk, made a zpool out of it with 8K blocks, and set copies=2.

me@locutus:~$ sudo qemu-img create -f qcow2 /data/test/copies/0.qcow2 1G
me@locutus:~$ sudo qemu-nbd -c /dev/nbd0 /data/test/copies/0.qcow2
me@locutus:~$ sudo zpool create -oashift=12 test /dev/nbd0
me@locutus:~$ sudo zfs set copies=2 test

Now, I wrote 400 1MB files to it – just enough to make the pool roughly 85% full, including the overhead due to copies=2.

me@locutus:~$ cat /tmp/makefiles.pl
#!/usr/bin/perl
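# run this from inside the pool's mountpoint (e.g. /test) - the files land in the cwd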

for ($x=0; $x<400 ; $x++) {
	print "dd if=/dev/zero bs=1M count=1 of=$x\n";
	print `dd if=/dev/zero bs=1M count=1 of=$x`;
}

With the files written, I backed up my virtual filesystem, fully populated, so I can repeat the experiment later.

me@locutus:~$ sudo zpool export test
me@locutus:~$ sudo cp /data/test/copies/0.qcow2 /data/test/copies/0.qcow2.bak
me@locutus:~$ sudo zpool import test

Now, I write corrupt blocks to 10% of the filesystem. (Roughly: it's possible that the same block was overwritten with a garbage block more than once.) Note that I used a specific seed, so that I can recreate the scenario exactly, for more runs later.

me@locutus:~$ cat /tmp/corruptor.pl
#!/usr/bin/perl

# total number of blocks in the test filesystem
$numblocks=131072;

# desired percentage corrupt blocks
$percentcorrupt=.1;

# so we write this many corrupt blocks
$corruptloop=$numblocks*$percentcorrupt;

# consistent results for testing
srand 32767;

# generate 8K of garbage data
for ($x=0; $x<8*1024; $x++) {
	$garbage .= chr(int(rand(256)));
}

open FH, "> /dev/nbd0";

for ($x=0; $x<$corruptloop; $x++) {
	$blocknum = int(rand($numblocks-100));
	print "Writing garbage data to block $blocknum\n";
	seek FH, ($blocknum*8*1024), 0;
	print FH $garbage;
}

close FH;

Okay. When I scrub the filesystem I just wrote those 10,000 or so corrupt blocks to, what happens?

me@locutus:~$ sudo zpool scrub test ; sudo zpool status test
  pool: test
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub repaired 133M in 0h1m with 1989 errors on Mon May  9 15:56:11 2016
config:

	NAME        STATE     READ WRITE CKSUM
	test        ONLINE       0     0 1.94K
	  nbd0      ONLINE       0     0 8.94K

errors: 1989 data errors, use '-v' for a list
me@locutus:~$ sudo zpool status -v test | grep /test/ | wc -l
385

OUCH. 385 of my 400 total files were still corrupt after the scrub! Copies=2 didn't do a great job for me here. 🙁

What if I try it again, this time just writing garbage to 1% of the blocks on disk, not 10%? First, let's restore that backup I cleverly made:

root@locutus:/data/test/copies# zpool export test
root@locutus:/data/test/copies# qemu-nbd -d /dev/nbd0
/dev/nbd0 disconnected
root@locutus:/data/test/copies# pv < 0.qcow2.bak > 0.qcow2
 999MB 0:00:00 [1.63GB/s] [==================================>] 100%            
root@locutus:/data/test/copies# qemu-nbd -c /dev/nbd0 /data/test/copies/0.qcow2
root@locutus:/data/test/copies# zpool import test
root@locutus:/data/test/copies# zpool status test | tail -n 5
	NAME        STATE     READ WRITE CKSUM
	test        ONLINE       0     0     0
	  nbd0      ONLINE       0     0     0

errors: No known data errors

Alright, now let's change $percentcorrupt from 0.1 to 0.01, and try again. How'd we do after only corrupting 1% of the blocks on disk?

root@locutus:/data/test/copies# zpool status test
  pool: test
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub repaired 101M in 0h0m with 72 errors on Mon May  9 16:13:49 2016
config:

	NAME        STATE     READ WRITE CKSUM
	test        ONLINE       0     0    72
	  nbd0      ONLINE       0     0 1.08K

errors: 64 data errors, use '-v' for a list
root@locutus:/data/test/copies# zpool status test -v | grep /test/ | wc -l
64

Still not great. We lost 64 out of our 400 files. Tenth of a percent?

root@locutus:/data/test/copies# zpool status -v test
  pool: test
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub repaired 12.1M in 0h0m with 2 errors on Mon May  9 16:22:30 2016
config:

	NAME        STATE     READ WRITE CKSUM
	test        ONLINE       0     0     2
	  nbd0      ONLINE       0     0   105

errors: Permanent errors have been detected in the following files:

        /test/300
        /test/371

Damn. We still lost two files, even with only writing 130 or so corrupt blocks. (The missing 26 corrupt blocks weren't picked up by the scrub because they happened in the 15% or so of unused space on the pool, presumably: a scrub won't check unused blocks.) OK, what if we try a control - how about we corrupt the same tenth of a percent of the filesystem (105 blocks or so), this time without copies=2 set? To make it fair, I wrote 800 1MB files to the same filesystem this time using the default copies=1 - this is more files, but it's the same percentage of the filesystem full. (Interestingly, this went a LOT faster. Perceptibly, more than twice as fast, I think, although I didn't actually time it.)

Now with our still 84% full /test zpool, but this time with copies=1, I corrupted the same 0.1% of the total block count.

root@locutus:/data/test/copies# zpool status test
  pool: test
 state: ONLINE
status: One or more devices has experienced an error resulting in data
	corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
	entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub repaired 8K in 0h0m with 98 errors on Mon May  9 16:28:26 2016
config:

	NAME                         STATE     READ WRITE CKSUM
	test                         ONLINE       0     0    98
	  /data/test/copies/0.qcow2  ONLINE       0     0   198

errors: 93 data errors, use '-v' for a list

Without copies=2 set, we lost 93 files instead of 2. So copies=n was definitely better than nothing for our test of writing 0.1% of the filesystem as bad blocks... but it wasn't fabulous, either, and it fell flat on its face with 1% or 10% of the filesystem corrupted. By comparison, a truly redundant pool - one with two disks in it, in a mirror vdev - would have survived the same test (corrupting ANY number of blocks on a single disk) with flying colors.

The TL;DR here is that copies=n is better than nothing... but not by a long shot, and you do give up a lot of performance for it. Conclusion: play with it if you like, but limit it only to extremely important data, and don't make the mistake of thinking it's any substitute for device redundancy, much less backups.

zfs: copies=n is not a substitute for device redundancy!

I’ve been seeing a lot of misinformation flying around the web lately about the zfs dataset-level feature copies=n. To be clear, dangerous misinformation. So dangerous, I’m going to go ahead and give you the punchline in the title of this post and in its first paragraph: copies=n does not give you device fault tolerance!

Why does copies=n actually exist then? Well, it’s a sort of (extremely) poor cousin that helps give you a better chance of surviving data corruption. Let’s say you have a laptop, you’ve set copies=3 on some extremely critical work-related datasets, and the drive goes absolutely bonkers and starts throwing tens of thousands of checksum errors. Since there’s only one disk in the laptop, ZFS can’t correct the checksum errors, only detect them… except on that critical dataset, maybe and hopefully, because each block has multiple copies. So if a given block has been written three times and any single copy of that block reads so as to match its validation hash, that block will get served up to you intact.

So far, so good. The problem is that I am seeing people advocating scenarios like “oh, I’ll just add five disks as single-disk vdevs to a pool, then make sure to set copies=2, and that way even if I lose a disk I still have all the data.” No, no, and no. But don’t take my word for it: let’s demonstrate.

First, let’s set up a test pool using virtual disks.

root@banshee:/tmp# qemu-img create -f qcow2 0.qcow2 10G ; qemu-img create -f qcow2 1.qcow2 10G
Formatting '0.qcow2', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 
Formatting '1.qcow2', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off 
root@banshee:/tmp# qemu-nbd -c /dev/nbd0 /tmp/0.qcow2 ; qemu-nbd -c /dev/nbd1 /tmp/1.qcow2
root@banshee:/tmp# zpool create test /dev/nbd0 /dev/nbd1
root@banshee:/tmp# zpool status test
  pool: test
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    test        ONLINE       0     0     0
      nbd0      ONLINE       0     0     0
      nbd1      ONLINE       0     0     0

errors: No known data errors

Now let’s set copies=2, and then create a couple of files in our pool.

root@banshee:/tmp# zfs set copies=2 test
root@banshee:/tmp# dd if=/dev/urandom bs=4M count=1 of=/test/test1
1+0 records in
1+0 records out
4194304 bytes (4.2 MB) copied, 0.310805 s, 13.5 MB/s
root@banshee:/tmp# dd if=/dev/urandom bs=4M count=1 of=/test/test2
1+0 records in
1+0 records out
4194304 bytes (4.2 MB) copied, 0.285544 s, 14.7 MB/s

Let’s confirm that copies=2 is working.

We should see about 8MB of data on each of our virtual disks – one for each copy of each of our 4MB test files.

root@banshee:/tmp# ls -lh *.qcow2
-rw-r--r-- 1 root root 9.7M May  2 13:56 0.qcow2
-rw-r--r-- 1 root root 9.8M May  2 13:56 1.qcow2

Yep, we’re good – we’ve written a copy of each of our two 4MB files to each virtual disk.

Now fail out a disk:

root@banshee:/tmp# zpool export test
root@banshee:/tmp# qemu-nbd -d /dev/nbd1
/dev/nbd1 disconnected

Will a pool with copies=2 and one missing disk import?

root@banshee:/tmp# zpool import
   pool: test
     id: 15144803977964153230
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
    devices and try again.
   see: http://zfsonlinux.org/msg/ZFS-8000-6X
 config:

    test         UNAVAIL  missing device
      nbd0       ONLINE

    Additional devices are known to be part of this pool, though their
    exact configuration cannot be determined.

That’s a resounding “no”.

Can we force an import?

root@banshee:/tmp# zpool import -f test
cannot import 'test': one or more devices is currently unavailable

No – your data is gone.

Please let this be a lesson: no, copies=n is not a substitute for redundancy or parity, and yes, losing any vdev does lose the pool.

FIO cheat sheet

With any luck I’ll turn this into a real blog post soon, but for the moment: a cheat sheet for simple usage of fio to benchmark storage. This command will run 16 simultaneous 4k random writers in sync mode. It’s a big enough config to push through the ZIL on most zpools and actually do some testing of the real hardware underneath the cache.

fio --name=random-writers --ioengine=sync --iodepth=4 --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16
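Want the read side too? Flip the rw argument and keep everything else identical, so the two runs are directly comparable:

fio --name=random-readers --ioengine=sync --iodepth=4 --rw=randread --bs=4k --direct=0 --size=256m --numjobs=16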

For reference, a Sanoid Standard host with two 1TB solid state pro mirror vdevs gets 429MB/sec throughput with 16 4k random writers in sync mode:

Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=426293KB/s, minb=26643KB/s, maxb=28886KB/s, mint=9075msec, maxt=9839msec

Awww, yeah.

378MB/sec for a single 4k random writer, so don’t think you have to have a ridiculous queue depth to see outstanding throughput, either:

Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=378444KB/s, minb=378444KB/s, maxb=378444KB/s, mint=11083msec, maxt=11083msec

That’s a spicy meatball.

ZFS compression: yes, you want this

So ZFS dedup is a complete lose. What about compression?

Compression is a hands-down win. LZ4 compression should be on by default for nearly anything you ever set up under ZFS. I typically have LZ4 on even for datasets that will house database binaries… yes, really. Let’s look at two quick test runs, on a Xeon E3 server with 32GB ECC RAM and a pair of Samsung 850 EVO 1TB disks set up as a mirror vdev.
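Turning it on is a one-liner, and child datasets inherit it; just remember it only applies to data written after the change:

zfs set compression=lz4 data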

This is an inline compression torture test: we’re reading pseudorandom data (completely incompressible) and writing it to an LZ4 compressed dataset.

root@lab:/data# pv < in.rnd > incompressible/out.rnd
7.81GB 0:00:22 [ 359MB/s] [==================================>] 100%

root@lab:/data# zfs get compressratio data/incompressible
NAME                 PROPERTY       VALUE  SOURCE
data/incompressible  compressratio  1.00x  -

359MB/sec write… yyyyyeah, I’d say LZ4 isn’t hurting us too terribly here – and this is a worst case scenario. What about something a little more realistic? Let’s try again, this time with a raw binary of my Windows Server 2012 R2 “gold” image (the OS is installed and Windows Updates are applied, but nothing else is done to it):

root@lab:/data/test# pv < win2012r2-gold.raw > realworld/win2012r2-gold.out
8.87GB 0:00:17 [ 515MB/s] [==================================>] 100%

Oh yeah – 515MB/sec this time. Definitely not hurting from using our LZ4 compression. What’d we score for a compression ratio?

root@lab:/data# zfs get compressratio data/realworld
NAME            PROPERTY       VALUE  SOURCE
data/realworld  compressratio  1.48x  -

1.48x sounds pretty good! Can we see some real numbers on that?

root@lab:/data# ls -lh /data/realworld/win2012r2-gold.raw
-rw-rw-r-- 1 root root 8.9G Feb 24 18:01 win2012r2-gold.raw
root@lab:/data# du -hs /data/realworld
6.2G	/data/realworld

8.9G of data in 6.2G of space… with sustained writes of 515MB/sec.

What if we took our original 8G of incompressible data, and wrote it to an uncompressed dataset?

root@lab:/data#  zfs create data/uncompressed
root@lab:/data# zfs set compression=off data/uncompressed
root@lab:/data# cat 8G.in > /dev/null ; # this is to make sure our source data is preloaded in the ARC
root@lab:/data# pv < 8G.in > uncompressed/8G.out
7.81GB 0:00:21 [ 378MB/s] [==================================>] 100% 

So, our worst case scenario – completely incompressible data – means a 5% performance hit, and a more real-world-ish scenario – copying a Windows Server installation – means a 27% performance increase. That’s on fast solid state, of course; the performance numbers will look even better on slower storage (read: spinning rust), where even worst-case writes are unlikely to slow down at all.

Yep, that’s a win.

ZFS dedup: tested, found wanting

Even if you have the RAM for it (and we’re talking a good 6GB or so per TB of storage), ZFS deduplication is, unfortunately, almost certainly a lose.

I don’t usually have that much RAM to spare, but one server has 192GB of RAM and only a few terabytes of storage – and it stores a lot of VM images, with obvious serious block-level duplication between images. Dedup shows at 1.35+ on all the datasets, and would be higher if one VM didn’t have a couple of terabytes of almost dup-free data on it.

That server’s been running for a few years now, and nobody using it has complained. But I was doing some maintenance on it today, splitting up VMs into their own datasets, and saw some truly abysmal performance.

root@virt0:/data/images# pv < jabberserver.qcow2 > jabber/jabberserver.qcow2
 206MB 0:00:31 [7.14MB/s] [>                  ]  1% ETA 0:48:41

7MB/sec? UGH! And that’s not even a sustained average; that’s just where it happened to be when I killed the process. This server should be able to sustain MUCH better performance than that, even though it’s reading and writing from the same pool. So I checked, and saw that dedup was on:

root@virt0:~# zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
data  7.06T  2.52T  4.55T    35%  1.35x  ONLINE  -

In theory, you’d think that dedup would help tremendously with exactly this operation: copying a quiesced VM from one dataset to another on the same pool. There’s no need for a single block of data to be rewritten, just more pointers added to the metadata for the existing blocks. However, dedup looked like the obvious culprit for my performance woes here, so I disabled it and tried again:

root@virt0:/data/images# pv < jabberserver.qcow2 > jabber/jabberserver.qcow2
19.2GB 0:04:58 [65.7MB/s] [============>] 100%

Yep, that’s more like it.
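For the record, turning dedup off is the same kind of one-liner as turning compression on – but like compression, it only affects new writes. Blocks that were already deduplicated stay deduplicated until they’re rewritten:

zfs set dedup=off data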

TL;DR: ZFS dedup sounds like a great idea, but in the real world, it sucks. Even on a machine built to handle it. Even on exactly the kind of storage (a bunch of VMs with similar or identical operating systems) that seems tailor-made for it. I do not recommend its use for pretty much any conceivable workload.

(On the other hand, LZ4 compression is an unqualified win.)