Wifi Acronym/Protocol Cheat Sheet

I can never find all this stuff in easy human-readable form in one place and have trouble remembering some of it, so here’s a cheat sheet for myself (and for you!).


  • AP – Access Point. This is wifi infrastructure – a router or access point which offers network access to clients.
  • STA – Station. This is nerd shorthand for “client device”; a device that connects to APs in order to have access to the network.
  • SSID – Service Set IDentifier. Normal humans call this a “wifi network name”. What you see on the list of wifi networks to connect to.
  • BSSID – Basic Service Set IDentifier. This is the hardware address of the wifi chipset in an AP or STA; wired network nerds will also be familiar with this as the “MAC address”.
  • MAC Address – this is a string of text which uniquely identifies a particular network interface to other network interfaces on the network. It’s the fundamental network identity – IP addresses will get you to the right network domain, but from there you need a translation table (ARP) to tell you which MAC address owns which IP addresses. When speaking of Wifi, MAC address is synonymous with BSSID.
  • ARP – Address Resolution Protocol. ARP is not unique to wifi; much like MAC addresses, wired networking uses it too. ARP is the protocol which allows machines on the local network to convert IP addresses to MAC addresses (which are how packets ultimately get to the right local-network destination); see the example just after this list.
  • NIC – Network Interface Card. Used to refer specifically to the network chipset doing the communicating; a STA or AP may have multiple NICs. Each NIC has its own MAC/BSSID.
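
You can watch ARP doing its thing on any Linux box – the kernel’s neighbor table is exactly the IP-to-MAC translation table described above. The interface and IP below are examples, not from any real network:

ip neigh show                 # dump the current ARP (neighbor) table
arping -I eth0 192.168.1.1    # force an ARP request for a specific IP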


802.11k – RF-based roaming report

802.11k and 802.11v are protocols which facilitate BSS (Basic Service Set) transitions. Normal humans tend to call this “roaming.” K, specifically, is how an AP offers a STA information about the network, so that the STA can choose a reasonable AP to roam to.

1. AP determines that STA is moving away from it
2. AP informs STA to prepare for roaming
3. STA requests list of nearby access points
4. AP gives site report
5. STA moves to best AP based on report

Both AP and STA must support 802.11k for it to be of use. Without K, roaming takes longer (since the STA must switch bands “sniffing” the air for new APs), and is more likely to send the STA to a suboptimal AP.

If you need more info, rabbit hole begins here: https://en.wikipedia.org/wiki/IEEE_802.11k-2008

802.11r – Fast BSS transition

802.11r is only relevant to networks using EAP (Extensible Authentication Protocol), an enterprise-typical technology which allows each individual STA on the same SSID to use a different password, and thus separate encryption keys. 802.11r does not apply to PSK networks, e.g. WPA/WPA2 “personal”.

Without 802.11r, a roaming event is much slower on an EAP network than on a Pre-Shared-Key style network, because the STA must first complete the full roaming process it would on the PSK network – then it must renegotiate the crypto side of things all over again with the new AP.

With 802.11r enabled (and supported on both STA and AP), part of the authentication and encryption keys may be cached for a certain amount of time, speeding up handoffs from AP to AP on an EAP network.

The details get a little hairy if you’re not super up on both the crypto and the nitty-gritty of the protocol; rabbit hole begins here: https://en.wikipedia.org/wiki/IEEE_802.11r-2008

802.11s – Mesh infrastructure protocol

802.11s is a mesh networking extension. It’s how most, if not all, Wifi Mesh networking kits handle communication between APs. Key features include:

1. SAE – Simultaneous Authentication of Equals. The idea here is that the various nodes of the mesh network can recognize one another without dependence on a central, authoritative controller.
2. broadcast/multicast and unicast delivery – in a normal network, if you hit the broadcast address a packet is relayed out to each STA. This becomes more difficult in a mesh network as not every STA is connected to a single infrastructure node; 802.11s facilitates the delivery of these *cast packets to all the STAs on the network.

802.11s is for APs only – normal STAs do not need to support and do not know anything about 802.11s, even if they’re connected to a “mesh” Wifi network.

Rabbit hole starts here: https://en.wikipedia.org/wiki/IEEE_802.11s

802.11v – Load-based roaming report

802.11v assists roaming based on AP load conditions. 802.11v BSS-TM management frames include a list of APs, and a report of their current loads. Providing this information to a STA reduces the scan time necessary, and allows for more graceful, steered roaming.

An 802.11v-enabled STA may request an 802.11v BSS-TM management frame from an AP, or an AP may send an unsolicited BSS-TM frame to the STA (indicating to the STA that a more preferred AP is available).

Similarly to 802.11k, the AP doesn’t unconditionally command the STA to roam to a specific AP, and the STA does not unconditionally obey. Both STA and AP must support 802.11v for load-based roaming to function.

I haven’t found a really good rabbit hole start for this one, but try here, here, and here.

ZVOL vs QCOW2 with KVM

When mixing ZFS and KVM, should you put your virtual machine images on ZVOLs, or on .qcow2 files on plain datasets? It’s a topic that pops up a lot, usually with a ton of people weighing in on performance without having actually done any testing.  My old benchmarks are getting a little long in the tooth, so here’s an fio random write run with 4K blocksize, done on both a .qcow2 on a dataset, and a zvol.

Test Configuration

Host:

CPU :  Intel(R) Xeon(R) CPU E3-1230 v5 @ 3.40GHz
SATA : Intel Corporation Sunrise Point-H SATA controller [AHCI mode] (rev 31)
OS : Ubuntu 16.04.4 LTS, fully updated as of 2018-03-13
FS : ZFS, from Canonical main repo
Disks : 2x Samsung 850 Pro 1TB SATA3, mirror vdev
ZFS parameters: ashift=13,recordsize=8K,atime=off,compression=lz4

Guest:

CPU : Intel Core Processor (Broadwell), 2 threads
RAM : 512MB
OS : Ubuntu 16.04.4 LTS, fully updated as of 2018-03-13
FS : ext4
Disks : /mnt/zvol on 20G zvol volume, /mnt/qcow2 on 20G .qcow2 file

Synchronous 4K write results

ZVOL, –ioengine=sync:

root@benchmark:/mnt/zvol# fio --name=random-write --ioengine=sync --iodepth=4 \
                              --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                              --end_fsync=1
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=50453KB/s, minb=3153KB/s, maxb=3153KB/s, mint=83116msec, maxt=83132msec

QCOW2, –ioengine=sync:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=sync --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=45767KB/s, minb=2860KB/s, maxb=2976KB/s, mint=88058msec, maxt=91643msec

So, 50.5 MB/sec (zvol) vs 45.8 MB/sec (qcow2). Yes, there’s a difference, at least on the most punishing I/O workloads. Is it perceptible enough to matter? Probably not, for most use cases, given the benefits in ease of management and maintenance for .qcow2 on datasets. QCOW2 files are easier to provision; you don’t have to worry about refreservation keeping you from taking snapshots; they’re not significantly more difficult to mount offline from the host (see below); and probably most importantly, filling the underlying storage beneath a qcow2 won’t crash the guest.
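
For reference, the offline mount procedure looks something like this – the paths and nbd device are examples, and if the image is partitioned you’d mount /dev/nbd0p1 or similar instead:

modprobe nbd                                 # load the kernel's network block device module
qemu-nbd -c /dev/nbd0 /path/to/image.qcow2   # expose the qcow2 as /dev/nbd0
mount -o ro /dev/nbd0 /mnt/image             # mount it read-only for inspection
# ...poke around in /mnt/image as needed...
umount /mnt/image
qemu-nbd -d /dev/nbd0                        # detach the image when done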

Tuning QCOW2 for even better performance

I found out yesterday that you can tune the underlying cluster size of the .qcow2 format. Creating a new .qcow2 file tuned to use 8K clusters – matching our 8K recordsize, and the 8K underlying hardware blocksize of the Samsung 850 Pro drives in our vdev – produced tremendously better results. With the tuned qcow2, we more than tripled the performance of the zvol – going from 50.5 MB/sec (zvol) to 170 MB/sec (8K tuned qcow2)!
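
The tuning itself is a single -o flag at image creation time – a minimal example, with a hypothetical path and size:

qemu-img create -f qcow2 -o cluster_size=8K /data/images/benchmark.qcow2 20G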

QCOW2 -o cluster_size=8K, –ioengine=sync:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=sync --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=170002KB/s, minb=10625KB/s, maxb=12698KB/s, mint=20643msec, maxt=24672msec

ZVOL won’t pause the guest if storage is unavailable

If you fill the underlying pool with a guest that’s using a zvol for its storage, the filesystem in the guest will panic. From the guest’s perspective, this is a hardware I/O error, and the guest and/or its apps which use that virtual disk will crash, leaving it in an unknown and possibly corrupt state.

If the guest uses a .qcow2 file on a dataset for storage, the same problem is handled much more safely. When writes become unavailable on host storage, the guest will be automatically paused by libvirt. This gives you a chance to free up space, then virsh resume the guest. The net effect is that the guest and its apps never realize there was ever a problem in the first place. Any pending writes complete automatically and without error once you’ve cleared the host storage problem and resumed the guest.
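
Recovery, in a minimal sketch (the guest name here is hypothetical):

virsh domstate demo-guest    # reports "paused" once host storage fills
# free up space in the underlying pool, then:
virsh resume demo-guest      # guest picks up right where it left off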

ZVOL doesn’t honor guest synchronous writes

It may also be worth noting that the guest seems a little less clued in to what’s going on with its storage when using the zvol. I specified --ioengine=sync for these test runs, which should – repeat, should – have made the also-specified parameter end_fsync=1 irrelevant, since all writes were supposed to be synchronous.

On the .qcow2-hosted storage, the data was written verifiably sync, since we can see there’s no pause at the end for end_fsync=1 to finish flushing the data to the metal:

Jobs: 16 (f=16): [w(16)] [66.7% done] [0KB/75346KB/0KB /s]
Jobs: 16 (f=16): [w(16)] [68.0% done] [0KB/0KB/0KB /s]
Jobs: 16 (f=16): [w(16)] [72.0% done] [0KB/263.8MB/0KB /s]
Jobs: 16 (f=16): [w(8),F(1),w(7)] [80.0% done] [0KB/199.1MB/0KB /s] 
Jobs: 15 (f=15): [w(8),_(1),w(7)] [80.8% done] [0KB/53866KB/0KB /s] 
Jobs: 15 (f=15): [w(3),F(1),w(4),_(1),w(3),F(1),w(3)] [84.6% done] 
Jobs: 12 (f=12): [F(1),w(2),_(1),w(4),_(2),w(2),_(1),w(3)] [85.2% done] 
Jobs: 8 (f=8): [_(4),w(4),_(2),w(2),_(1),w(1),_(1),w(1)] [88.9% done]
Jobs: 4 (f=3): [_(4),F(1),_(1),w(1),_(3),F(1),_(4),w(1)] [100.0% done]

random-readwrite: (groupid=0, jobs=1): err= 0: pid=1773: Tue Mar 13 13:57:16 2018

The zvol-hosted storage, on the other hand, clearly was not honoring --ioengine=sync, as it spent a significant amount of time after all data was supposedly already written, waiting for end_fsync=1 to finish:

Jobs: 16 (f=16): [w(16)] [81.0% done] [0KB/527.2MB/0KB /s] 
Jobs: 16 (f=16): [w(10),F(1),w(5)] [94.7% done] [0KB/551.6MB/0KB /s]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/155.2MB/0KB /s]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]

 ------[[[ above line repeats for 60 more lines ]]]------

Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]

random-readwrite: (groupid=0, jobs=1): err= 0: pid=1792: Tue Mar 13 13:57:42 2018

This strikes me as pretty disturbing; you could end up in a world of hurt if you’re expecting your host to honor the guest’s synchronous writes when, in fact, it’s not.

Asynchronous 4K write results

Well, hrm. Realizing now that zvol storage doesn’t actually honor synchronous write requests very well, what if we use the libaio (native Linux asynchronous I/O) engine instead?

ZVOL, –ioengine=libaio:

root@benchmark:/mnt/zvol# fio --name=random-write --ioengine=libaio --iodepth=4 \
                              --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                              --end_fsync=1
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=139484KB/s, minb=8717KB/s, maxb=8722KB/s, mint=30054msec, maxt=30070msec

QCOW2, –ioengine=libaio:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=libaio --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=164392KB/s, minb=10274KB/s, maxb=11651KB/s, mint=22498msec, maxt=25514msec

And there you have it – qcow2 at 164 MB/sec vs zvol at 139 MB/sec. So when using asynchronous I/O, the qcow2-backed virtual disk actually finished the fio run faster than the zvol-backed disk.

What if we tune the .qcow2 for 8K cluster size, like we did above in the synchronous write test?

QCOW2 -o cluster_size=8K, –ioengine=libaio:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=libaio --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=181304KB/s, minb=11331KB/s, maxb=13543KB/s, mint=19356msec, maxt=23134msec

The improvements aren’t as drastic here – 181 MB/sec (tuned qcow2) vs 164 MB/sec (default qcow2) vs 139 MB/sec (zvol) – but they’re still a clear improvement, and the qcow2 storage is still faster than the zvol. (If anybody knows similar tuning that can be done to the zvol to improve its numbers, please tweet or DM me @jrssnet.)

Conclusion: .qcow2 FTW

For me, it’s a no-brainer: qcow2 files are only slightly slower on even the most punishing I/O workloads under default, untuned configuration, while being MUCH easier to manage, and arguably safer (won’t crash the guest if the host fills up the storage, honors sync write requests more predictably). And if you take the time to tune the .qcow2 on creation, they actually outperform the zvol. Winner: .qcow2.

Boot rescue for GalliumOS / chrx on Chromebooks

Since acquiring a small fleet of HP Chromebooks for use in network testing, I’ve discovered that once in a blue moon, one of them that’s lost power while running will have trashed its Linux boot configuration – in which case it hangs at the SeaBIOS “Booting from Hard Disk…” black screen indefinitely.

The fix is obscure but doesn’t take long. What you need to do is boot into ChromeOS, but don’t log in. Instead, press ctrl-alt-F2 (probably ctrl-alt-right-arrow on most Chromebook keyboards) to get a bash login. Log in as chronos, no password. sudo -s to become root. Now run the mount command, with no arguments – you should see a few partitions from your system disk mounted; the device name varies from Chromebook to Chromebook. Mine is /dev/mmcblk0, so partitions look like /dev/mmcblk0p7.

Standard chrx disk layouts that preserve ChromeOS should have the Linux partition as p7 on the system disk; so you’ll be looking at something like /dev/sda7 or /dev/mmcblk0p7. You’re going to make a temp directory, mount that Linux partition on the temp directory, then chroot inside it so that you can update the bootloader. Adjust that first mount command as necessary for your system, and you’re off to the races:

mkdir /tmp/a

mount /dev/mmcblk0p7        /tmp/a
mount -o bind /proc    /tmp/a/proc
mount -o bind /dev     /tmp/a/dev
mount -o bind /dev/pts /tmp/a/dev/pts
mount -o bind /sys     /tmp/a/sys
mount -o bind /run     /tmp/a/run

chroot /tmp/a /bin/bash

dpkg-reconfigure grub-pc

That’s it. dpkg-reconfigure will ask you a few questions, including one about the boot command line – which will come up blank, and which you can leave blank. Aside from that, enter your way through; you’re done in a few seconds, after which exit exit exit your way out, reboot, and your Linux installation will boot again!

Demonstrating ZFS zpool write distribution

One of my pet peeves is people talking about zfs “striping” writes across a pool. It doesn’t help any that zfs core developers use this terminology too – but it’s sloppy and not really correct.

ZFS distributes writes among all the vdevs in a pool.  If your vdevs all have the same amount of free space available, this will resemble a simple striping action closely enough.  But if you have different amounts of free space on different vdevs – either due to disks of different sizes, or vdevs which have been added to an existing pool – you’ll get more blocks written to the drives which have more free space available.

This came into contention on Reddit recently, when one senior sysadmin stated that a zpool queues the next write to the disk which responds with the least latency.  This statement did not match with my experience, which is that a zpool binds on the performance of the slowest vdev, period.  So, I tested, by creating a test pool with sparse images of mismatched sizes, stored side-by-side on the same backing SSD (which largely eliminates questions of latency).

root@banshee:/tmp# qemu-img create -f qcow2 512M.qcow2 512M
root@banshee:/tmp# qemu-img create -f qcow2 2G.qcow2 2G
root@banshee:/tmp# qemu-nbd -c /dev/nbd0 /tmp/512M.qcow2
root@banshee:/tmp# qemu-nbd -c /dev/nbd1 /tmp/2G.qcow2
root@banshee:/tmp# zpool create -oashift=13 test nbd0 nbd1

OK, we’ve now got a 2.5 GB pool, with vdevs of 512M and 2G, and pretty much guaranteed equal latency between the two of them.  What happens when we write some data to it?

root@banshee:/tmp# dd if=/dev/zero bs=4M count=128 status=none | pv -s 512M > /test/512M.zero
 512MiB 0:00:12 [41.4MiB/s] [================================>] 100% 

root@banshee:/tmp# zpool export test
root@banshee:/tmp# ls -lh *qcow2
-rw-r--r-- 1 root root 406M Jul 27 15:25 2G.qcow2
-rw-r--r-- 1 root root 118M Jul 27 15:25 512M.qcow2

There you have it – writes distributed with a ratio of roughly 4:1, matching the mismatched vdev sizes. (I also tested with a 512M image and a 1G image, and got the expected roughly 2:1 ratio afterward.)

OK. What if we put one 512M image on SSD, and one 512M image on much slower rust?  Will the pool distribute more of the writes to the much faster SSD?

root@banshee:/tmp# qemu-img create -f qcow2 /tmp/512M.qcow2 512M
root@banshee:/tmp# qemu-img create -f qcow2 /data/512M.qcow2 512M

root@banshee:/tmp# qemu-nbd -c /dev/nbd0 /tmp/512M.qcow2
root@banshee:/tmp# qemu-nbd -c /dev/nbd1 /data/512M.qcow2

root@banshee:/tmp# zpool create test -oashift=13 nbd0 nbd1
root@banshee:/tmp# dd if=/dev/zero bs=4M count=128 | pv -s 512M > /test/512M.zero 
512MiB 0:00:48 [10.5MiB/s][================================>] 100%
root@banshee:/tmp# zpool export test
root@banshee:/tmp# ls -lh /tmp/512M.qcow2 ; ls -lh /data/512M.qcow2 
-rw-r--r-- 1 root root 266M Jul 27 15:07 /tmp/512M.qcow2 
-rw-r--r-- 1 root root 269M Jul 27 15:07 /data/512M.qcow2

Nope. Once again, zfs distributes the writes according to the amount of free space available – even when this causes performance to bind *severely* on the slowest vdev in the pool.

You should expect to see this happening if you have a vdev with failing hardware, as well – if any one disk is throwing massive latency instead of just returning errors, your entire pool will throw massive latency as well, until the deranged disk has been removed.  You can usually spot this sort of problem using iotop – all of the disks in your pool will have roughly the same throughput in MB/sec (assuming they’ve got equivalent amounts of free space left!), but your problem disk will show a much higher %UTIL than the rest.  Fault that slow disk, and your pool performance returns to normal.


A comprehensive guide to fixing slow SSH logins

The debug text that brought you here

Most of you are probably getting here just from frustratedly googling “slow ssh login”.  Those of you who got a little froggier and tried doing an ssh -vv to get lots of debug output saw things hanging at debug1: SSH2_MSG_SERVICE_ACCEPT received, likely for long enough that you assumed the entire process was hung and ctrl-C’d out.  If you’re patient enough, the process will generally eventually continue after the debug1: SSH2_MSG_SERVICE_ACCEPT received line, but it may take 30 seconds.  Or even five minutes.

You might also have enabled debug logging on the server, and discovered that your hang occurs immediately after debug1: KEX done [preauth] and before debug1: userauth-request for user in /var/log/auth.log.

I feel your frustration, dear reader. I have solved this problem after hours of screeching head-desking probably ten times over the years.  There are a few fixes for this, with the most common – DNS – tending to drown out the rest.  Which is why I keep screeching in frustration every few years; I remember the dreaded debug1: SSH2_MSG_SERVICE_ACCEPT received hang is something I’ve solved before, but I can only remember some of the fixes I’ve needed.

Anyway, here are all the fixes I’ve needed to deploy over the years, collected in one convenient place where I can find them again.

It’s usually DNS.

The most common cause of slow SSH login authentications is DNS. To fix this one, go to the SSH server, edit /etc/ssh/sshd_config, and set UseDNS no.  You’ll need to restart the service after changing sshd_config: /etc/init.d/ssh restart, systemctl restart ssh, etc as appropriate.
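
In config-file terms, the whole fix is this one line on the server:

# in /etc/ssh/sshd_config, on the server
UseDNS no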

If it’s not DNS, it’s Avahi.

The next most common cause – which is devilishly difficult to find reference to online, and I hope this helps – is the never-to-be-sufficiently damned avahi daemon.  To fix this one, go to the SSH client, edit /etc/nsswitch.conf, and change this line:

hosts:          files mdns4_minimal [NOTFOUND=return] dns

to this:

hosts:          files dns

In theory maybe something might stop working without that mdns4_minimal option?  But I haven’t got the foggiest notion what that might be, because nothing ever seems broken for me after disabling it.  No services need restarting after making this change, which again, must be made on the client.

You might think this isn’t your problem. Maybe your slow logins only happen when SSHing to one particular server, even one particular server on your local network, even one particular server on your local network which has UseDNS no and which you don’t need any DNS resolution to connect to in the first place.  But yeah, it can still be this avahi crap. Yay.

When it’s not Avahi… it’s PAM.

This is another one that’s really, really difficult to find references to online.  Optional PAM modules can really screw you here.  In my experience, you can’t get away with simply disabling PAM login in /etc/ssh/sshd_config – if you do, you won’t be able to log in at all.

What you need to do is go to the SSH server, edit /etc/pam.d/common-session and comment out the optional module that’s causing you grief.  In the past, that was pam_ck_connector.so.  More recently, in Ubuntu 16.04, the culprit that bit me hard was pam_systemd.so. Get in there and comment that bugger out.  No restarts required.

#session optional pam_systemd.so

GSSAPI, and ChallengeResponse.

I’ve seen a few seconds added to a pokey login from GSSAPIAuthentication, whatever that is. I feel slightly embarrassed about not knowing, but seriously, I have no clue.  Ditto for ChallengeResponseAuthentication.  All I can tell you is that neither covers standard interactive passwords or standard public/private keypair authentication (the keys you keep in ~/.ssh/authorized_keys).

If you aren’t using them either, then disable them.  If you’re not using Active Directory authentication, might as well go ahead and nuke Kerberos while you’re at it.  Make these changes on the server in /etc/ssh/sshd_config, and restart the service.

ChallengeResponseAuthentication no
KerberosAuthentication no
GSSAPIAuthentication no

Host-based Authentication.

If you’re actually using this, don’t disable it. But let’s get real with each other: you’re not using it.  I mean, I’m sure somebody out there is.  But it’s almost certainly not you.  Get rid of it.  This is also on the server in /etc/ssh/sshd_config, and also will require a service restart.

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no

Need frequent connections? Consider control sockets.

If you need to do repetitive ssh’ing into another host, you can speed up the repeated ssh commands enormously with the use of control sockets.  The first time you SSH into the host, you establish a socket.  After that, the socket obviates the need for re-authentication.

ssh -M -S /path/to/socket -o ControlPersist=5m remotehost exit

This creates a new SSH socket at /path/to/socket which will survive for the next 5 minutes, after which it’ll automatically expire.  The exit at the end just causes the ssh connection you establish to immediately terminate (otherwise you’d have a perfectly normal ssh session going to remotehost).

After creating the control socket, you utilize it with -S /path/to/socket in any normal ssh command to remotehost.

The improvement in execution time for commands using that socket is delicious.

me@banshee:/~$ ssh -M -S /tmp/demo -o ControlPersist=5m eohippus exit

me@banshee:/~$ time ssh eohippus exit
real 0m0.660s
user 0m0.052s
sys 0m0.016s

me@banshee:/~$ time ssh -S /tmp/demo eohippus exit
real 0m0.050s
user 0m0.005s
sys 0m0.010s

Yep… from 660ms to 50ms.  SSH control sockets are pretty awesome.
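
If you’d rather not juggle the -M and -S flags by hand, the same thing can be set up persistently in ~/.ssh/config on the client – a minimal sketch, where the host name is just an example and you’ll need to mkdir ~/.ssh/sockets first:

Host eohippus
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h:%p
    ControlPersist 5m

With ControlMaster auto, the first ssh to the host creates the control socket automatically, and every subsequent connection reuses it until it expires.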

Multiple client wifi testing

I’ve started beta testing my new tools for modeling and testing multiple client network usage. The main tool is something I didn’t actually think I’d need to build, which I’ve named netburn. The overall concept is using an HTTP back end server to feed multiple client devices, and I thought I’d be able to just use ApacheBench (ab) for that… but it turned out that ab was missing some crucial features I needed. Ab is designed to test the HTTP server on the back end, whereas my goal is to test the network in the middle – if the server on the back end fails, my tests fail with it.

Among other things, ab doesn’t feature any throttling at all, and that wouldn’t work for me. Netburn, like ab, is a flexible tool, but I have four basic workloads in mind:

  • browsing: a multiple-concurrent-fetch operation that’s extremely bursty and moderately latency-sensitive, but low Mbps over time
  • 4kstream: a consistent, latency-insensitive, serial 25 Mbps download that mustn’t fall below 20 Mbps (the dreaded buffering!)
  • voip: a 1 Mbps, steady/non-bursty, extremely latency-sensitive download
  • download: a completely unthrottled, serialized download of large object(s)

I installed GalliumOS Linux on four Chromebooks, set them up with Linksys WUSB-6300 USB3 802.11ac 2×2 NICs, and got to testing against a reference Archer C7 wifi router. For this first round of very-much-beta testing, the Chromebooks aren’t really properly distributed around the house – the “4kstream” Chromebook is a pretty reasonable 20-ish feet away in the next room, but the other three were just sitting on the workbench right next to the router.

The Archer C7 got default settings overall, with a single SSID for both 5 GHz and 2.4 GHz bands. There was clearly no band-steering in play on the C7, as all four Chromebooks associated with the 5 GHz radio. This led to some unsurprisingly crappy results for our simultaneous tests:

The C7 clearly doesn’t feature any band-steering: all four Chromebooks associated with the 5 GHz radio, with predictably awful results.

The latency was godawful for the web browsing workload, the voip was mostly tolerable but failed our 150ms goal significantly in one packet out of every 100, and the 4K stream very definitely buffered a lot. Sad face. While we got a totally respectable 156.8 Mbps overall throughput over the course of this 5 minute test, the actual experience for humans using it would have been quite bad.

Manually splitting the SSIDs and joining the “download” client to the 2.4 GHz radio produced significantly better results. We had some failures to meet latency goals, but overall I’d call this a “mediocre pass”.

Splitting the SSIDs manually and forcing the “download” client to associate to the 2.4 GHz radio produced much better results. While we had some latency failures in the bottom 5% of the packets, they weren’t massively over our 500ms goal; this would have been a bit laggy maybe but tolerable. 99% of our VOIP packets met our 150ms latency goal, and even the absolute worst single packet wasn’t much over 200ms.

The interesting takeaways here are first, how important band steering – or manual management of clients to split them between radios – is, and second, that higher overall throughput does not correlate that strongly with a better actual experience. The second run produced only 113 Mbps throughput to the first run’s 157 Mbps… but it would have been a much better actual experience for users.

ZFS clones: Probably not what you really want

ZFS clones look great on paper: they’re instantaneously generated, they’re read/write, they’re initially “free” because they reference the same blocks their parent snapshots do. They’re also (initially) frequently extra-snappy performance-wise, because a lot of those parent blocks are very likely already in the ARC. If you create ten clones of the same VM image (for instance), all ten clones will share the same blocks in the ARC instead of them needing to be in the ARC ten different times. Huge win!

But, as great as a clone sounds at first blush, you probably don’t want to use them for anything that isn’t ephemeral (intended to be destroyed in fairly short order). This is because a clone’s parent snapshot is forever immutable; you can’t destroy the parent snapshot without destroying the clone along with it… even if and when the clone becomes 100% divergent, and no longer shares any block references with its parent. Let’s examine this on a small scale.

Practical testing

On my workstation banshee, I create a new dataset, make sure compression is turned off so as not to confuse us, and populate it with a 256MB chunk of random binary stuff:

root@banshee:~# zfs create banshee/demo ; zfs set compression=off banshee/demo
root@banshee:~# dd if=/dev/zero bs=16M count=16 | openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt | pv > /banshee/demo/random.bin
16+0 records in
16+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.483868 s, 555 MB/s
 256MiB 0:00:00 [ 533MiB/s] [<=>                                               ]

I know this looks a little weird, but AES-256 is roughly an order of magnitude faster than /dev/urandom: so what I did here was use /dev/urandom to seed AES-256, then encrypt a 256MB chunk of /dev/zero with it. At the end of this procedure, we have a dataset with 256MB of data in it:

root@banshee:~# ls -lh /banshee/demo
total 262M
-rw-r--r-- 1 root root 256M Mar 15 14:39 random.bin
root@banshee:~# zfs list banshee/demo
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   262M  83.3G   262M  /banshee/demo

OK. Next step, we take a snapshot of banshee/demo, then create a clone using that snapshot as its parent.

Creating a clone

You don’t actually create a ZFS clone of a dataset at all; you create a clone from a snapshot of a dataset. So before we can “clone banshee/demo”, we first have to take a snapshot of it, and then we clone that.

root@banshee:~# zfs snapshot banshee/demo@parent-snapshot
root@banshee:~# zfs clone banshee/demo@parent-snapshot banshee/demo-clone
root@banshee:~# zfs list -rt all banshee/demo
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   262M  83.3G   262M  /banshee/demo
banshee/demo@parent-snapshot      0      -   262M  -
root@banshee:~# zfs list -rt all banshee/demo-clone
NAME                 USED  AVAIL  REFER  MOUNTPOINT
banshee/demo-clone     1K  83.3G   262M  /banshee/demo-clone

So right now, we have the dataset banshee/demo, which shares all its blocks with banshee/demo@parent-snapshot, which in turn shares all its blocks with banshee/demo-clone. We see 262M in USED for banshee/demo, with nothing or next-to-nothing in USED for either banshee/demo@parent-snapshot or banshee/demo-clone.

Beginning divergence: removing data

Now, we remove all the data from banshee/demo:

root@banshee:~# rm /banshee/demo/random.bin
root@banshee:~# zfs list -rt all banshee/demo ; zfs list banshee/demo-clone
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   262M  83.3G    19K  /banshee/demo
banshee/demo@parent-snapshot   262M      -   262M  -
banshee/demo-clone     1K  83.3G   262M  /banshee/demo-clone

We still only have 262M of USED – but it’s all actually in banshee/demo@parent-snapshot now. You can tell because the REFER column has changed – banshee/demo@parent-snapshot and banshee/demo-clone still both REFER 262M, but banshee/demo only REFERs 19K now. (You still see 262M in USED for banshee/demo because banshee/demo@parent-snapshot is a child of banshee/demo, so its contents count towards banshee/demo‘s USED figure.)

Next up: we re-fill the parent dataset, banshee/demo, with 256MB of different random garbage.

Continuing divergence: replacing data in the parent

root@banshee:~# dd if=/dev/zero bs=16M count=16 | openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt | pv > /banshee/demo/random.bin
16+0 records in
16+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.498349 s, 539 MB/s
 256MiB 0:00:00 [ 516MiB/s] [<=>                                               ]
root@banshee:~# zfs list -rt all banshee/demo ; zfs list banshee/demo-clone
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   523M  83.2G   262M  /banshee/demo
banshee/demo@parent-snapshot   262M      -   262M  -
banshee/demo-clone     1K  83.2G   262M  /banshee/demo-clone

OK, at this point you see that the USED for banshee/demo shoots up to 523M: that’s the total of the 262M of original random garbage which is still preserved in banshee/demo@parent-snapshot, plus the new 262M of different random garbage in banshee/demo itself. The snapshot now diverges completely from the parent dataset, having no blocks in common at all.

So far, banshee/demo-clone is still 100% convergent with banshee/demo@parent-snapshot, so we’re still getting some conservation of space on disk and in ARC from that. But remember, the whole point of making the clone was so that we could write to it as well as read from it. So let’s do exactly that, and make the clone 100% divergent from its parent, too.

Diverging completely: replacing data in the clone

root@banshee:~# dd if=/dev/zero bs=16M count=16 | openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt | pv > /banshee/demo-clone/random.bin
16+0 records in
16+0 records out
268435456 bytes (268 MB, 256 MiB) copied, 0.50151 s, 535 MB/s
 256MiB 0:00:00 [ 534MiB/s] [<=>                                               ]
root@banshee:~# zfs list -rt all banshee/demo ; zfs list banshee/demo-clone
NAME                           USED  AVAIL  REFER  MOUNTPOINT
banshee/demo                   523M  82.8G   262M  /banshee/demo
banshee/demo@parent-snapshot   262M      -   262M  -
banshee/demo-clone  262M  82.8G  262M  /banshee/demo-clone

There, done. We now have a parent dataset, banshee/demo, which diverges completely from its snapshot banshee/demo@parent-snapshot, and a clone, banshee/demo-clone, which also diverges completely from banshee/demo@parent-snapshot.

Examining the suck

Since neither the parent, its snapshot, nor the clone share any blocks with one another anymore, we’re using the full 786MB of on-disk space that the three of them add up to. And since they also don’t share any blocks in the ARC, we’re left with absolutely no benefit in either storage consumption or performance to our having used a clone.

Worse, despite having no blocks in common and no perceptible benefit to the clone structure, all three are still inextricably linked, and neither banshee/demo nor banshee/demo@parent-snapshot can be destroyed without also destroying banshee/demo-clone:

root@banshee:~# zfs destroy banshee/demo -r
cannot destroy 'banshee/demo': filesystem has dependent clones
use '-R' to destroy the following datasets:
banshee/demo-clone
root@banshee:~# zfs destroy banshee/demo@parent-snapshot
cannot destroy 'banshee/demo@parent-snapshot': snapshot has dependent clones
use '-R' to destroy the following datasets:
banshee/demo-clone

So now you’re left with a great unwieldy mass of tangled dependencies, wasted space, and no perceptible benefits at all.

Conclusion and practical example

Imagine that you’re storing VM images in ZFS, and you began with a “gold” image of a freshly installed operating system, and created ten different clones to run ten different VMs from. Initially, this seemed great: you could create the clones instantaneously, and they shared tons of blocks, so they consumed a fraction of the ARC they would as complete, separate copies.

A year later, however, your gold image – of, let’s say, Ubuntu 16.04.1 – has diverged to a staggering degree, thanks to the rolling updates necessary to bring it all the way to Ubuntu 16.04.2. Your VMs have also diverged tremendously, from their parent snapshot and from one another. And now you’re stuck with the year-old snapshot of the “gold” image, completely useless to you but forever engraved on your drive unless and until you’re willing to replicate or otherwise block-for-block copy your VMs painstakingly into self-sufficient datasets with no references. You also have no remaining performance benefits, and you have an extra SPOF (single point of failure) where some admin – maybe even you – might see that parent snapshot nobody cared about anymore taking up all that disk space, and…

root@banshee:~# zfs destroy -R banshee/demo@parent-snapshot
root@banshee:~# zfs list banshee/demo-clone
cannot open 'banshee/demo-clone': dataset does not exist

One “oops” later, that “useless” parent snapshot and every single one of those clones you were using in production are gone forever. (Or, hopefully, just gone until you can restore them from your off-pool backup. You are maintaining replicated backups on at least one other pool, preferably on another machine, aren’t you? Aren’t you?!)

Office 2016 for Mac on the Microsoft VLSC (Volume License Service Center) by way of TechSoup

If you bought Office 2016 for the Mac by way of volume licensing – for example, if you’re a non-profit and got it through TechSoup – you may have a hell of a time figuring out how to actually GET it. Above and beyond the usual fandango of getting an open license agreement and creating a Windows Live account for the same email address the OLSA is attached to and creating a VLSC account on that Windows Live account and taking ownership of the OLSA… things get deeply weird when you try to download it.

I had to google to even figure out what “Office Online Server” was. Hint: you can’t actually install it on a Mac…

There’s your Microsoft Office for Mac 2016 Standard in the Downloads section, and it has the usual glowing text about how awesome it will be to have Office 2016 for your Mac in the Description tab… but when you click Download, all of a sudden you’re faced with a download for “Office Online Server”, which has absolutely nothing to do with Office 2016 for Mac.

I went around and around trying to figure out what was going on with this, to no avail. I eventually figured out – due to scads of people posting about OTHER problems with the Mac installer, which I fervently hope I won’t encounter once I actually get the chance to install this thing – that the ISO I should be seeing was about 1.6GB in size. The ISO for “Office Online Server 64 Bit English” is a “svelte” 599MB, so that’s not it.

Eventually, just before giving up and trying to file a bug report with Microsoft about mislabeled downloads on the VLSC, I looked hard at the “32/64 bit” Operating System Type. I mean, I’d looked at it ten times already and moved on, because, sure, OS X should be taking a multi-arch installer, why not? But when I actually clicked the drop down…

somebody at Microsoft is in need of a paddlin’. Why the hell isn’t “MAC” the *default* operating system type for “Office 2016 Standard FOR MAC?!”

Yyyyyyeah. Hope this helps somebody else, that was a frustrating half hour or so.

Depressing Storage Calculator

When a Terabyte is not a Terabyte

It seems like a stupid question, if you’re not an IT professional – and maybe even if you are – how much storage does it take to store 1TB of data? Unfortunately, it’s not a stupid question in the vein of “what weighs more, a pound of feathers or a pound of bricks”, and the answer isn’t “one terabyte” either. I’m going to try to break down all the various things that make the answer harder – and unhappier – in easy steps. Not everybody will need all of these things, so I’ll try to lay it out in a reasonably likely order from “affects everybody” to “only affects mission-critical business data with real RTO and RPO defined”.

Counting the Costs

Simple Local Storage

Computer TB vs Manufacturer TB

To your computer, and to all computers since the dawn of computing, a KB is actually a “kibibyte”, a megabyte a “mebibyte”, and so forth – they’re powers of two, not of ten. So 1 KiB = 2^10 = 1024. That’s an extra 24 bytes over a proper kilobyte, which is 10^3 = 1000. No big deal, right? Well, the difference compounds with each hop up from KB to MB to GB to TB, and gets that much more significant. Storage manufacturers prefer – and always have, since the dawn of time – to measure in those proper power-of-ten units, since that means they get to put bigger numbers on a device of a given actual size and thus try to trick you into thinking it’s somehow better.

At the Terabyte/Tebibyte level, you’re talking about the difference between 2^40 and 10^12. So 1 TiB, as your computer measures data, is 1.0995 TB as the rat bastards who sell hard drives measure storage. Let’s just go ahead and round that up to a nice easy 1.1.
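
If you don’t trust my arithmetic, it’s a bc one-liner to check:

echo '2^40 / 10^12' | bc -l
1.09951162777600000000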

TL;DR: multiply times 1.1 to account for vendor units.

Working Free Space

Remember those sliding number puzzles you had as a kid, where the digits 1-8 were embedded in a 9-square grid, and you were supposed to slide them around one at a time until you got them in order? Without that missing ninth tile – the empty square – you wouldn’t be able to slide anything at all. That’s a pretty decent rough analogy of how storage generally works, for all sorts of reasons. If you don’t have any free space, you can’t move the tiles around and actually get anything done. For our sliding number puzzles when we were kids, that was 8/9 of the available storage occupied. A better rule of thumb for us is 8/10, or 80%. Once your disk(s) are 80% full, you should consider them full, and you should immediately be either deleting things or upgrading. If they hit 90% full, you should consider your own personal pants to be on actual fire, and react with an appropriate amount of immediacy to remedy that.

TL;DR: multiply times 1.25 to account for working free space.


Growth

You’re probably not really planning on just storing one chunk of data you have right now and never changing it. You’re almost certainly talking about curating an ever-growing collection of data that changes and accumulates as time goes on. Most people and businesses should plan on their data storage needs doubling about every five years – and that’s pretty conservative; it can easily get worse than that. Still, five years is also a pretty decent – and very conservative, not aggressive – hardware refresh cycle. So let’s say we want to plan for our storage needs to be fulfilled by what we buy now, until we need new everything anyway. That means doubling everything so you don’t have to upgrade for another few years.

TL;DR: multiply times 2.0 to account for data growth over the next few years.

Disaster Recovery

What, you weren’t planning on not backing your stuff up, were you? At a bare minimum, you’re going to need as much storage for backup as you did for production – most likely, you’ll need considerably more. We’ll be super super generous here and assume all you need is enough space for one single full backup – which usually only applies if you also have redundancy and very heavy-duty “oops recovery” and maybe hotspares as well. But if you don’t have all those things… this really isn’t enough. Really.

TL;DR: multiply times 2.0 to account for one full backup, as disaster recovery.

Redundancy, Hotspares, and Snapshots

Snapshots / “Oops Recovery” Schemes

You want to have a way to fix it pretty much immediately if you accidentally break a document. What this scheme looks like may differ depending on the sophistication of the system you’re working on. At best, you’re talking something like ZFS snapshots. In the middle of the road, Windows’ Volume Shadow Copy service (what powers the “Previous Versions” tab in Windows Explorer). At worst, the Recycle Bin. (And that’s really not good enough and you should figure out a way to do better.) What these things all have in common is that they offer a limiting factor to how badly you can screw yourself with the stroke of a key – you can “undo” whatever it is you broke to a relatively recent version that wasn’t broken in just a few clicks.

Different “oops recovery” schemes have different levels of efficiency, and different amounts of point-in-time granularity. My own ZFS-based systems maintain 30 hourly snapshots, 30 daily snapshots, and 3 monthly snapshots. I generally plan for snapshot space to take up about 33% as much space as my production storage, and that’s not a bad rule of thumb across the board, even if you can’t cram as many of your own schemes level of “oops points” in the same amount of space.
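
For what it’s worth, that retention policy expressed in Sanoid terms looks something like this sanoid.conf sketch, with a hypothetical dataset name:

[data/production]
        use_template = production

[template_production]
        hourly = 30
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes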

TL;DR: multiply times 1.3 to account for snapshots, VSS, or other “oops recovery”.


Redundancy

Redundancy – in the form of mirrored drives, striped RAID arrays, and so forth – is not a backup! However, it is a very, very useful thing to help you avoid the downtime monster, and in the case of more advanced storage schemes like ZFS, to avoid corruption and bitrot. If you’re using 1:1 redundancy – RAID1, RAID10, ZFS mirrors, or btrfs-RAID1 distributed redundancy – this means you need two of every drive. If you’re using two blocks of parity in each eight-block stripe (think RAID6 or ZFS RAIDZ2 with eight drives in each vdev), you’re going to be looking at 75% theoretical efficiency that comes out to more like 70% actual efficiency after stripe overhead. I’m just going to go ahead and say “let’s calculate using the more pessimistic number”. So, double everything to account for redundancy.

TL;DR: multiply times 2.0 to account for redundant storage scheme.


Hotspares

This is probably going to be the least common item on the list, but the vast majority of my clients have opted for it at this point. A hotspare server is ready to take over for the production server at a moment’s notice, without an actual “restore the backup” type procedure. With Sanoid, this most frequently means hourly replication from production to hotspare, with the ability to spin up the replicated VMs – both storage and hypervisor – directly on the hotspare server. The hotspare is thus promoted to production, and what was the production server can be repaired with reduced time pressure and restored into service as a hotspare itself.

If you have a hotspare – and if, say, ten or more people’s payroll and productivity is dependent on your systems being up and running, you probably should – that’s another full redundancy to add to the bill.

TL;DR: bump your “backup” allowance up from x2.0 to x3.0 if you also use hotspare hardware.

The Butcher’s Bill

If you have, and account for, everything we went through above, to store 1 “terabyte” of data you’ll need:

1 “terabyte” (really a tebibyte) of data
x 1.1 TiB per TB
x 1.25 for working free space
x 2.0 for planned growth over the next few years
x 3.0 for disaster recovery + hotspare systems
x 1.3 for snapshot or other “oops recovery”
x 2.0 for redundancy
= 21.45 TB of actual storage hardware.
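
Don’t take my word for the total – the chain of multipliers checks out in bc:

echo '1.1 * 1.25 * 2.0 * 3.0 * 1.3 * 2.0' | bc -l
21.4500000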

That can’t be right! You’re insane!

Alright, let’s break that down somewhat differently, then. Keep in mind that we’re talking about three separate computer systems in the above example, each with its own storage (production, hotspare, and disaster recovery). Now let’s instead assume that we’re talking about using drives of a given size, and see what that breaks down to in terms of actual usable storage on them.

Let’s forget about the hotspare and the disaster recovery boxes, so we’re looking at the purely local level now. Then let’s toss out the redundancy, since we’re only talking about one individual drive. That leaves us with 1TB / 1.1 TiB per TB / 1.25 working TiB per stored TiB / 1.3 TiB of prod+snapshots for every TiB of prod = 0.559 TiB of usable capacity per 1TB drive. Factor in planned growth by cutting that in half, and that means you shouldn’t be planning to start out storing more than 0.28 TiB of data on 1TB of storage.

TL;DR: If you have 280GiB of existing data, you need 1TB of local capacity.

That probably sounds more reasonable in terms of your “gut feel”, right? You have 280GiB of data, so you buy a 1TB disk, and that’ll give you some breathing room for a few years? Maybe you think it feels a bit aggressive (it isn’t), but it should at least be within the ballpark of how you’re used to thinking and feeling.

Now multiply by 2 for storage redundancy (mirrored disks), and by 3 for site/server redundancy (production, hotspare, and DR) and you’re at six 1TB disks total, to store 280GiB of data. 6/.28 = 21.43, and we’re right back where we started from, less a couple of rounding errors: we need to provision 21.45 TB for every 1TiB of data we’ve got right now.

8:1 rule of thumb

Based on the same calculations and with a healthy dose of rounding, we come up with another really handy, useful, memorable rule of thumb: when buying, you need eight times as much raw storage in production as the amount of data you have now.

So if you’ve got 1TiB of data, buy servers with 8TB of disks – whether it’s two 4TB disks in a single mirror, or four 2TB disks in two mirrors, or whatever, your rule of thumb is 8:1. Per system, so if you maintain hotspare and DR systems, you’ll need to do that twice more – but it’s still 8:1 in raw storage per machine.

Connecting pfSense to a standard OpenVPN Server config

First, you need to dump the client cert+key into System -> Cert Manager -> Certificates. Then dump the server’s CA cert into System-> Cert Manager-> CA.

Now go to the VPN -> OpenVPN -> Clients and add a client. You’ll likely want Peer-to-Peer (SSL/TLS), UDP, tun, and wan. Put in the remote host IP address or FQDN. You’ll probably want to check “infinitely resolve server”. Under Cryptographic settings, select the CA and certificate you entered into the System Cert Manager, and you’ll most likely want BF-CBC for the encryption algo and SHA-1 for the auth digest algo. Topology should be subnet unless you’re doing something funky; set compression if you’ve enabled it on the other end, but otherwise leave it alone.

This is enough to get you the VPN, but it won’t pass traffic originating there to you. To respond to traffic initiated from the other end, you’ll need to head to Firewall -> Rules -> OpenVPN. If you want all traffic to be allowed, when you create the new Pass rule, be certain to change the protocol from TCP to Any, and leave everything else the default. Save your rule and apply it, and you should at this point be connected and passing packets in both directions between your pfSense OpenVPN client and your standard (based on the template server.conf distributed with OpenVPN and using easy-rsa) OpenVPN server.
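
For reference, the “standard” server side this recipe assumes – the template server.conf distributed with OpenVPN, plus easy-rsa certs – boils down to something like the following sketch. Port, subnet, and file names are examples; yours may differ:

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
topology subnet
server 10.8.0.0 255.255.255.0
cipher BF-CBC
auth SHA1
keepalive 10 120
persist-key
persist-tun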