Wifi Acronym/Protocol Cheat Sheet

I can never find all this stuff in easy human-readable form in one place, and I have trouble remembering some of it, so here’s a cheat sheet for myself (and for you!).

AC Speed Ratings:

They’re basically complete snake oil and cannot be trusted to mean anything concrete. The only really meaningful basic hardware designator looks like “3×3:2”, which actually means “three transmit chains, three receive chains, and two simultaneous MIMO spatial streams.” The relevant part of that is the two MIMO streams. A laptop that supports two MIMO streams can get roughly double the throughput from a router or AP that also supports two MIMO streams, compared to what it could get from one that only supports a single stream.

Very, very few client devices (laptops, phones, tablets, etc.) support more than two MIMO streams. But a rare handful can support three – the most common being recent-model MacBook Pro laptops. If a router supports more MIMO streams than any of the clients connected to it, it does nobody any good at all, though. (MU-MIMO changes that, slightly, but almost no client devices support MU-MIMO, either. Welcome to wifi.)

Unfortunately, without hitting a specialty site like wikidevi, you’re going to find it really, really difficult to dig up anything but AC speed ratings, so here’s a list of what each of them probably means (with the arithmetic behind those numbers sketched out below). This assumes you’re talking about a single router or access point – if you’re looking at the “rating” on a box of wifi mesh nodes, you’re going to need a couple of hours and all the algebra you ever learned to reverse engineer something meaningful out of it!

  • AC1200 or AC1350 probably means a 2×2:2 dual-band device.
  • AC1750, AC1900, or AC2300 probably means a 3×3:3 dual-band device.
  • AC2600 probably means a 4×4:4 dual-band device.
  • AC3200 or higher probably means a tri-band device, with two 5 GHz radios as well as a 2.4 GHz radio, and god help you if you need to know the specifics of the MIMO streams beyond that.

You may see the MIMO ratings just listed as “2×2” or “3×3” instead of the full “2×2:2” or “3×3:3”; you can generally assume this will mean the same number of MIMO streams as antennae. Probably. But if you really want to know for sure, go look the device up at wikidevi.
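
If you’re curious where those marketing numbers come from, the rating is (roughly) the theoretical max PHY rate of the 2.4 GHz radio added to that of the 5 GHz radio, rounded generously. A couple of worked examples as shell arithmetic (the per-band rates are the commonly quoted theoretical maximums, not anything you’ll ever see in real life):

# AC1750: 450 Mbps (3-stream 802.11n on 2.4 GHz) + 1300 Mbps (3-stream 802.11ac on 5 GHz)
echo $(( 450 + 1300 ))    # 1750
# AC1900: same 5 GHz radio, but 600 Mbps on 2.4 GHz thanks to 256-QAM ("TurboQAM")
echo $(( 600 + 1300 ))    # 1900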

Terms/Acronyms:

  • AP – Access Point. This is wifi infrastructure – a router or access point which offers network access to clients.
  • STA – Station. This is nerd shorthand for “client device”; a device that connects to APs in order to have access to the network.
  • SSID – Service Set IDentifier. Normal humans call this a “wifi network name”. What you see on the list of wifi networks to connect to.
  • BSSID – Basic Service Set IDentifier. This is the hardware address of the wifi chipset in an AP or STA; wired network nerds will also be familiar with this as the “MAC address”.
  • MAC Address – a 48-bit hardware address, usually written as six hex octets (e.g. de:ad:be:ef:01:23), which uniquely identifies a particular network interface to other network interfaces on the network. It’s the fundamental network identity – IP addresses will get you to the right network domain, but from there you need a translation table (ARP) to tell you which MAC address owns which IP address. When speaking of Wifi, MAC address is synonymous with BSSID.
  • ARP – Address Resolution Protocol. ARP is not unique to wifi; much like MAC addresses, wired networking uses it too. ARP is the protocol which allows machines on the local network to convert IP addresses to MAC addresses (which are how the packets ultimately get to the right local-network destination). There’s a quick look at it in action just after this list.
  • NIC – Network Interface Card. Used to refer specifically to the network chipset doing the communicating; a STA or AP may have multiple NICs. Each NIC has its own MAC/BSSID.
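
As promised, a quick look at ARP in action on a Linux box (wlan0 is a hypothetical interface name; the first command is the newer iproute2 tool, the second the older net-tools one):

ip neigh show dev wlan0    # the kernel's current IP-to-MAC translation table
arp -a                     # the same table, net-tools style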

Protocols:

802.11k – RF-based roaming report

802.11k and 802.11v are protocols which facilitate BSS (Basic Service Set) transitions. Normal humans tend to call this “roaming.” K, specifically, is how an AP offers a STA information about the network, so that the STA can choose a reasonable AP to roam to.

1. AP determines that STA is moving away from it
2. AP informs STA to prepare for roaming
3. STA requests list of nearby access points
4. AP gives site report
5. STA moves to best AP based on report

Both AP and STA must support 802.11k for it to be of use. Without K, roaming takes longer (since the STA must scan channel by channel, “sniffing” the air for new APs), and is more likely to send the STA to a suboptimal AP.
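
If you run Linux-based APs, 802.11k neighbor reports are something you can switch on yourself. A minimal, hedged sketch, assuming reasonably recent hostapd and wpa_supplicant builds (wlan0 is a hypothetical interface name):

# hostapd.conf, on the AP side: advertise and answer neighbor report requests
rrm_neighbor_report=1

# on a Linux STA, ask the currently associated AP for its neighbor list
wpa_cli -i wlan0 neighbor_rep_request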

If you need more info, rabbit hole begins here: https://en.wikipedia.org/wiki/IEEE_802.11k-2008

802.11r – Fast BSS transition

802.11r is mostly relevant to networks using EAP (Extensible Authentication Protocol), an enterprise-typical technology which allows each individual STA on the same SSID to use different credentials, and thus separate encryption keys. It matters far less on PSK networks, e.g. WPA/WPA2 “personal”, where the full key exchange is already quick.

Without 802.11r, a roaming event is much slower on an EAP network than on a Pre-Shared-Key style network, because the STA must first complete the full roaming process it would on the PSK network – then it must renegotiate the crypto side of things (the full 802.1X/EAP exchange) all over again with the new AP.

With 802.11r enabled (and supported on both STA and AP), part of the authentication and encryption keys may be cached for a certain amount of time, speeding up handoffs from AP to AP on an EAP network.
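
On the AP side, this lives in the fast-transition block of hostapd.conf. A minimal, hedged sketch for an EAP network (all values are placeholders, and a real deployment also needs r0kh/r1kh key-holder entries for the other APs in the mobility domain):

wpa_key_mgmt=WPA-EAP FT-EAP      # advertise fast transition alongside plain EAP
mobility_domain=a1b2             # same 16-bit value on every AP you want to roam between
nas_identifier=ap1.example.net   # unique per AP
ft_over_ds=0                     # do the FT handshake over the air rather than the DS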

The details get a little hairy if you’re not super up on both the crypto and the nitty-gritty of the protocol; rabbit hole begins here: https://en.wikipedia.org/wiki/IEEE_802.11r-2008

802.11s – Mesh infrastructure protocol

802.11s is a mesh networking extension. It’s how many (though not all) Wifi Mesh networking kits handle communication between APs. Key features include:

1. SAE – Simultaneous Authentication of Equals. The idea here is that the various nodes of the mesh network can recognize one another without dependence on a central, authoritative controller.
2. broadcast/multicast and unicast delivery – in a normal network, if you hit the broadcast address a packet is relayed out to each STA. This becomes more difficult in a mesh network as not every STA is connected to a single infrastructure node; 802.11s facilitates the delivery of these *cast packets to all the STAs on the network.

802.11s is for APs only – normal STAs do not need to support it, and know nothing about it, even if they’re connected to a “mesh” Wifi network.
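
That said, a Linux box can join an 802.11s mesh directly as a mesh point (not as an ordinary STA) using wpa_supplicant. A minimal, hedged sketch of a network block for an SAE-secured mesh (SSID, frequency, and passphrase are placeholders):

network={
        ssid="my-mesh"
        mode=5                   # 5 = mesh point
        frequency=2437
        key_mgmt=SAE             # the Simultaneous Authentication of Equals mentioned above
        psk="placeholder passphrase"
}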

Rabbit hole starts here: https://en.wikipedia.org/wiki/IEEE_802.11s

802.11v – Load-based roaming report

802.11v assists roaming based on AP load conditions. 802.11v BSS-TM management frames include a list of APs, and a report of their current loads. Providing this information to a STA reduces the scan time necessary, and allows for more graceful, steered roaming.

An 802.11v-enabled STA may request an 802.11v BSS-TM management frame from an AP, or an AP may send an unsolicited BSS-TM frame to the STA (indicating to the STA that a more preferred AP is available).

Similarly to 802.11k, the AP doesn’t unconditionally command the STA to roam to a specific AP, and the STA does not unconditionally obey. Both STA and AP must support 802.11v for load-based roaming to function.
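
As with K, this is a toggle on Linux-based APs running hostapd, and the STA side can solicit a BSS-TM frame itself if wpa_supplicant was built with WNM support. A minimal, hedged sketch (wlan0 is a hypothetical interface; the trailing 1 is just a query reason code):

# hostapd.conf, on the AP side: enable BSS Transition Management
bss_transition=1

# on a Linux STA, request a BSS-TM frame from the current AP
wpa_cli -i wlan0 wnm_bss_query 1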

I haven’t found a really good rabbit hole starting point for this one yet.

ZVOL vs QCOW2 with KVM

When mixing ZFS and KVM, should you put your virtual machine images on ZVOLs, or on .qcow2 files on plain datasets? It’s a topic that pops up a lot, usually with a ton of people weighing in on performance without having actually done any testing. My old benchmarks are getting a little long in the tooth, so here’s an fio random write run with 4K blocksize, done both on a .qcow2 file on a dataset and on a zvol.

Test Configuration

Host:

CPU: Intel(R) Xeon(R) CPU E3-1230 v5 @ 3.40GHz
RAM: 32 GB DDR4 SDRAM
SATA: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] (rev 31)
OS: Ubuntu 16.04.4 LTS, fully updated as of 2018-03-13
FS: ZFS 0.6.5.6-0ubuntu19, from Canonical main repo
Disks: 2x Samsung 850 Pro 1TB SATA3, mirror vdev
ZFS parameters: ashift=13, recordsize=8K, atime=off, compression=lz4

Guest:

CPU: Intel Core Processor (Broadwell), 2 threads
RAM: 512MB
OS: Ubuntu 16.04.4 LTS, fully updated as of 2018-03-13
FS: ext4
Disks: /mnt/zvol on a 20G zvol, /mnt/qcow2 on a 20G .qcow2 file
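
The provisioning commands aren’t shown above, so here’s a hedged sketch of how comparable backing stores might be created on the host (pool and image names are hypothetical, and the 8K volblocksize is an assumption, chosen to mirror the dataset’s 8K recordsize):

zfs create -o volblocksize=8K -V 20G tank/vm-zvol       # zvol block device for the guest
zfs create -o recordsize=8K tank/vm-qcow2               # plain dataset to hold the image file
qemu-img create -f qcow2 /tank/vm-qcow2/benchmark.qcow2 20G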

Synchronous 4K write results

ZVOL, --ioengine=sync:

root@benchmark:/mnt/zvol# fio --name=random-write --ioengine=sync --iodepth=4 \
                              --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                              --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=50453KB/s, minb=3153KB/s, maxb=3153KB/s, mint=83116msec, maxt=83132msec

QCOW2, --ioengine=sync:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=sync --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
WRITE: io=4096.0MB, aggrb=45767KB/s, minb=2860KB/s, maxb=2976KB/s, mint=88058msec, maxt=91643msec

So, 50.5 MB/sec (zvol) vs 45.8 MB/sec (qcow2). Yes, there’s a difference, at least on the most punishing I/O workloads. Is it perceptible enough to matter? Probably not, for most use cases, given the benefits in ease of management and maintenance for .qcow2 on datasets. QCOW2 files are easier to provision, you don’t have to worry about refreservation keeping you from taking snapshots, they’re not significantly more difficult to mount offline (modprobe nbd ; qemu-nbd -c /dev/nbd0 /path/to/image.qcow2 ; mount -o ro /dev/nbd0 /mnt/image or similar); and probably most importantly, filling the underlying storage beneath a qcow2 won’t crash the guest.
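
For reference, a slightly expanded version of that offline-mount procedure (paths are hypothetical, and the p1 suffix assumes the guest disk carries a partition table):

modprobe nbd
qemu-nbd -c /dev/nbd0 /path/to/image.qcow2     # expose the qcow2 as a block device
mount -o ro /dev/nbd0p1 /mnt/image             # mount the first partition read-only
# ...inspect or copy whatever you need...
umount /mnt/image
qemu-nbd -d /dev/nbd0                          # disconnect the image when you're done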

Tuning QCOW2 for even better performance

I found out yesterday that you can tune the underlying cluster size of the .qcow2 format. Creating a new .qcow2 file tuned to use 8K clusters – matching our 8K recordsize, and the 8K underlying hardware blocksize of the Samsung 850 Pro drives in our vdev – produced tremendously better results. With the tuned qcow2, we more than tripled the performance of the zvol – going from 50.5 MB/sec (zvol) to 170 MB/sec (8K tuned qcow2)!
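
The tuning itself is a single option at image creation time, and you can also rewrite an existing image with qemu-img convert. A hedged sketch (file names and size are placeholders):

qemu-img create -f qcow2 -o cluster_size=8k /tank/vm-qcow2/tuned.qcow2 20G
qemu-img convert -O qcow2 -o cluster_size=8k old.qcow2 tuned.qcow2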

QCOW2 -o cluster_size=8K, --ioengine=sync:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=sync --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=170002KB/s, minb=10625KB/s, maxb=12698KB/s, mint=20643msec, maxt=24672msec

ZVOL won’t pause the guest if storage is unavailable

If you fill the underlying pool with a guest that’s using a zvol for its storage, the filesystem in the guest will panic. From the guest’s perspective, this is a hardware I/O error, and the guest and/or the apps using that virtual disk will crash, leaving it in an unknown and possibly corrupt state.

If the guest uses a .qcow2 file on a dataset for storage, the same problem is handled much more safely. When writes become unavailable on host storage, the guest will be automatically paused by libvirt. This gives you a chance to free up space, then virsh resume the guest. The net effect is that the guest and its apps never realize there was ever a problem in the first place. Any pending writes complete automatically and without error once you’ve cleared the host storage problem and resumed the guest.
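
A hedged sketch of what that recovery looks like from the host (guest and pool names are hypothetical, and exactly when libvirt pauses the guest also depends on the disk’s configured error policy):

virsh domstate benchmark       # reports "paused" once writes to the backing file start failing
zfs list -o name,avail tank    # confirm the pool is full, then free up some space
virsh resume benchmark         # the guest picks up right where it left off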

ZVOL doesn’t honor guest synchronous writes

It may also be worth noting that the guest seems a little less clued in to what’s going on with its storage when using the zvol. I specified --ioengine=sync for these test runs, which should – repeat, should – have made the also-specified parameter end_fsync=1 irrelevant, since all writes were supposed to be synchronous.

On the .qcow2-hosted storage, the data was written verifiably sync, since we can see there’s no pause at the end for end_fsync=1 to finish flushing the data to the metal:

Jobs: 16 (f=16): [w(16)] [66.7% done] [0KB/75346KB/0KB /s]
Jobs: 16 (f=16): [w(16)] [68.0% done] [0KB/0KB/0KB /s]
Jobs: 16 (f=16): [w(16)] [72.0% done] [0KB/263.8MB/0KB /s]
Jobs: 16 (f=16): [w(8),F(1),w(7)] [80.0% done] [0KB/199.1MB/0KB /s] 
Jobs: 15 (f=15): [w(8),_(1),w(7)] [80.8% done] [0KB/53866KB/0KB /s] 
Jobs: 15 (f=15): [w(3),F(1),w(4),_(1),w(3),F(1),w(3)] [84.6% done] 
Jobs: 12 (f=12): [F(1),w(2),_(1),w(4),_(2),w(2),_(1),w(3)] [85.2% done] 
Jobs: 8 (f=8): [_(4),w(4),_(2),w(2),_(1),w(1),_(1),w(1)] [88.9% done] 
Jobs: 4 (f=3): [_(4),F(1),_(1),w(1),_(3),F(1),_(4),w(1)] [100.0% done] 

random-readwrite: (groupid=0, jobs=1): err= 0: pid=1773: Tue Mar 13 13:57:16 2018

The zvol-hosted storage, on the other hand, clearly was not honoring --ioengine=sync, as it spent a significant amount of time waiting for end_fsync=1 to finish after all the data had supposedly already been written:

Jobs: 16 (f=16): [w(16)] [81.0% done] [0KB/527.2MB/0KB /s] 
Jobs: 16 (f=16): [w(10),F(1),w(5)] [94.7% done] [0KB/551.6MB/0KB /s]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/155.2MB/0KB /s]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]
Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]

 ------[[[ above line repeats for 60 more lines ]]]------

Jobs: 16 (f=16): [F(16)] [100.0% done] [0KB/0KB/0KB /s] [0/0/0 iops]

random-readwrite: (groupid=0, jobs=1): err= 0: pid=1792: Tue Mar 13 13:57:42 2018

This strikes me as pretty disturbing; you could end up in a world of hurt if you’re expecting your host to honor the guest’s synchronous writes when, in fact, it’s not.
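
If you want to experiment with forcing the issue from the ZFS side, the obvious knob to poke at is the zvol’s sync property; this is a suggestion to test, not something the runs above measured (dataset name is hypothetical):

zfs get sync tank/vm-zvol           # "standard" honors sync requests as they arrive
zfs set sync=always tank/vm-zvol    # treat every write as synchronous, at a real performance cost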

Asynchronous 4K write results

Well, hrm. Realizing now that zvol storage doesn’t actually honor synchronous write requests very well, what if we use the libaio (native Linux asynchronous I/O) engine instead?

ZVOL, --ioengine=libaio:

root@benchmark:/mnt/zvol# fio --name=random-write --ioengine=libaio --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=139484KB/s, minb=8717KB/s, maxb=8722KB/s, mint=30054msec, maxt=30070msec

QCOW2, --ioengine=libaio:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=libaio --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=164392KB/s, minb=10274KB/s, maxb=11651KB/s, mint=22498msec, maxt=25514msec

And there you have it – qcow2 at 164 MB/sec vs zvol at 139 MB/sec. So when using asynchronous I/O, the qcow2-backed virtual disk actually finished the fio run faster than the zvol-backed disk.

What if we tune the .qcow2 for 8K cluster size, like we did above in the synchronous write test?

QCOW2 -o cluster_size=8K, --ioengine=libaio:

root@benchmark:/mnt/qcow2# fio --name=random-write --ioengine=libaio --iodepth=4 \
                               --rw=randwrite --bs=4k --direct=0 --size=256m --numjobs=16 \
                               --end_fsync=1
[...]
Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=181304KB/s, minb=11331KB/s, maxb=13543KB/s, mint=19356msec, maxt=23134msec

The improvement isn’t as drastic here – 181 MB/sec (tuned qcow2) vs 164 MB/sec (default qcow2) vs 139 MB/sec (zvol) – but it’s still clear, and the qcow2 storage is still faster than the zvol. (If anybody knows similar tuning that can be done to the zvol to improve its numbers, please tweet or DM me @jrssnet.)

Conclusion: .qcow2 FTW

For me, it’s a no-brainer: qcow2 files are only slightly slower on even the most punishing I/O workloads under default, untuned configuration, while being MUCH easier to manage, and arguably safer (won’t crash the guest if the host fills up the storage, honors sync write requests more predictably). And if you take the time to tune the .qcow2 on creation, they actually outperform the zvol. Winner: .qcow2.