Adventures in network repair

Recently, I acquired a new client with a massive load of technical debt (in other words: a new client). The facility internet connection appeared to go down for an hour or two every day, typically in the mid-afternoon.

Complicating things tremendously, this new client had no insight into its own infrastructure: the former IT person had left them with no credentials or documentation for anything. So I was limited to completely unprivileged tools while troubleshooting.

The first major thing I discovered was a somewhat deranged Adtran Netvanta router, as installed by the ISP. When I got a Linux laptop onto the network and issued a dhclient -v, I could see both that the Netvanta was acting as DHCP server, and that it was struggling badly.

My laptop DHCPDISCOVERed about twelve times before getting a DHCPOFFER from the Netvanta, to which it eagerly replied with a DHCPREQUEST for the offered address… which the Netvanta failed to respond to. My laptop DHCPREQUESTed twice more before giving up and moving back to DHCPDISCOVER. Eventually, the Netvanta DHCPOFFERed again, my laptop DHCPREQUESTed, and this time, on the third try, the punch-drunk Netvanta DHCPACKed it, and it was on the network… after a solid two minutes of trying to get an IP address.
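For the record, a healthy exchange in dhclient -v output looks something like this (interface and addresses hypothetical):

DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3
DHCPOFFER of 192.168.1.50 from 192.168.1.1
DHCPREQUEST for 192.168.1.50 on eth0 to 255.255.255.255 port 67
DHCPACK of 192.168.1.50 from 192.168.1.1
bound to 192.168.1.50 -- renewal in 43200 seconds.

Four packets, a fraction of a second, done; nothing like the two-minute ordeal I was watching.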

Alright, now I knew both that DHCP was coming from the ISP router, and that it was deranged. Why? And could I do anything about it?

The Netvanta was bolted into a wall-mounted half-cab directly touching its sibling Adva, so tightly together you couldn’t slide a playing card between the two. Both devices had functional active cooling, so this wasn’t necessarily a problem… but when I ran a bare finger along the rear face of the chassis, it was a lot warmer than I liked. So, I unbolted one side of it and re-bolted it catty-corner, with one side higher than the other, which gave some external airflow across the chassis.

Although now it looks like I’m an idiot who can’t line up boltholes, the triangles of airspace on the bottom left and top right of the Netvanta give it some convection space to shed heat from its metal chassis.

And to my great delight, when I got back to my commandeered office (the former IT guy’s personal dungeon), dhclient -v now completed in under 10ms, every time: DHCPDISCOVER -> DHCPOFFER -> DHCPREQUEST -> DHCPACK with no stumbles at all. As an added bonus, my exploratory internet speedtests went from 65Mbps to 400Mbps!

This made an enormous improvement in the facility’s network health, but there were still problems: the next day, my direct report got frustrated enough with the facility network to turn on a cell phone hotspot. Luckily, I’d already spotted another problem in the same rack:

Whoever installed all this gear apparently didn’t realize there’s a minimum bend radius for fiber optics. For multi-mode fiber like you see above, that minimum bend radius is 30x the diameter of the jacketed pair. I didn’t try to break out a ruler, but that looked a lot more like 10x the jacket diameter than 30x to me, so off I went to grab a five-pack of LC to LC multi-mode patch cables.

Keeping to our theme of me looking like a drunken redneck while actually improving things technically, I used some mounting points on the face of an abandoned Cisco switch mounted several units higher in the cabinet as a centerpoint anchor for my new patch cables. Does it look stupid? Yes. Does it keep things out of the way without fracturing the glass on the inside of my optic cables? Also yes.

The performance difference here was harder to spot, especially since I needed to do the work on a weekend with the facility empty except for myself, but if you know what you’re looking for, it’s there. Prior to replacing the patch cables, an iperf3 run to one of my internet-based servers had a TCP congestion window of 3.00MiB:

me@swift:~$ iperf3 -c [redacted]
Connecting to host [redacted], port 5201
[  5] local [redacted] port 39874 connected to [redacted] port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  30.3 MBytes   254 Mbits/sec    0   3.00 MBytes
[  5]   1.00-2.00   sec  48.8 MBytes   409 Mbits/sec    0   3.00 MBytes
[  5]   2.00-3.00   sec  52.5 MBytes   440 Mbits/sec    0   3.00 MBytes
[  5]   3.00-4.00   sec  52.5 MBytes   440 Mbits/sec    0   3.00 MBytes
[  5]   4.00-5.00   sec  36.2 MBytes   304 Mbits/sec    0   3.00 MBytes
[  5]   5.00-6.00   sec  38.8 MBytes   325 Mbits/sec    0   3.00 MBytes
[  5]   6.00-7.00   sec  40.0 MBytes   335 Mbits/sec    0   3.00 MBytes
[  5]   7.00-8.00   sec  47.5 MBytes   399 Mbits/sec    0   3.00 MBytes
[  5]   8.00-9.00   sec  52.5 MBytes   440 Mbits/sec    0   3.00 MBytes
[  5]   9.00-10.00  sec  53.8 MBytes   451 Mbits/sec    0   3.00 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   453 MBytes   380 Mbits/sec    0             sender
[  5]   0.00-10.05  sec   452 MBytes   377 Mbits/sec                  receiver

iperf Done.

After replacing the too-tightly-bent fiber patch cables, the raw speed didn’t increase much, but the TCP congestion window doubled to 6.00MiB. This is an excellent sign which, if you understand TCP congestion windowing algorithms, strongly implies a significant decrease in experienced latency.

me@swift:~$ iperf3 -c [redacted]
Connecting to host [redacted], port 5201
[  5] local [redacted] port 39882 connected to [redacted] port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  29.1 MBytes   244 Mbits/sec    0   6.00 MBytes
[  5]   1.00-2.00   sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
[  5]   2.00-3.00   sec  52.5 MBytes   441 Mbits/sec    0   6.00 MBytes
[  5]   3.00-4.00   sec  47.5 MBytes   398 Mbits/sec    0   6.00 MBytes
[  5]   4.00-5.00   sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
[  5]   5.00-6.00   sec  52.5 MBytes   441 Mbits/sec    0   6.00 MBytes
[  5]   6.00-7.00   sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
[  5]   7.00-8.00   sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
[  5]   8.00-9.00   sec  53.8 MBytes   451 Mbits/sec    0   6.00 MBytes
[  5]   9.00-10.00  sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   498 MBytes   418 Mbits/sec    0             sender
[  5]   0.00-10.05  sec   498 MBytes   415 Mbits/sec                  receiver

iperf Done.

This apparent improvement in latency was confirmed by simpler web-based speedtests at fast.com, which showed an unloaded latency of 3ms and a loaded latency of 48-85ms prior to replacing the cables. After replacing them, fast.com consistently showed an unloaded latency of 2ms… and a loaded latency of <10ms.

Again, pay attention to the latency. In the “before” shot above, we see a maxed-out download throughput of 500Mbps, which is nice… and in fact, at first glance, you might mistakenly think this is a better result than the “after” we see below:

Oh no, you might think: download speed decreased from 500Mbps to 380Mbps! What did we do wrong? That’s the tricky part: we didn’t do anything wrong. Something else on the network just siphoned off some of the available throughput while the test was running.

The important thing to notice here is, as mentioned, latency: it’s easy to dismiss the unloaded latency (meaning, how quickly pings return when the main bulk of the test isn’t running) decreasing from 3ms to 2ms. It’s only 1ms, after all… but it’s also a 33% improvement, and it held precisely consistent across many runs.

More conclusively, the loaded latency (time to return a ping when there’s lots of data moving) dropped roughly five- to eightfold, from 48-85ms down to under 10ms, and that result was also consistent across several runs.

There are almost certainly more gremlins to find and eliminate in this long-untended network, but we’re already in a much better position than we started from.

PSA: Cannot open Credential Manager

I blew several INCREDIBLY frustrating hours trying to troubleshoot issues installing Google Workspace Sync and Microsoft Office 365 on multiple Windows 10 workstations today.

Searching for “failed to create profile” errors when setting up a Google Workspace Sync user for Outlook frequently nets you advice to fire up Windows’ Credential Manager and delete rogue credentials. The same advice often pops up for the dreaded “Trusted Platform Module Has Malfunctioned” error when attempting to register a freshly-downloaded Office 365 application to a user.

Unfortunately, trying to open Credential Manager also fails on affected PCs, with the error “An error occurred while performing this action: 0x80090345.” This was what finally led me to the workaround for the single issue affecting both Office 365 setup and Google Workspace Sync setup.

First, open regedit on the affected PC. Then navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Cryptography\Protect\Providers\df9d8cd0-1501-11d1-8c7a-00c04fc297eb, create a new DWORD value named ProtectionPolicy, and set it to 1.
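If you’d rather skip the clicking, the same change can be made from an elevated command prompt; this one-liner should be equivalent, but double-check the key path before running it:

reg add "HKLM\SOFTWARE\Microsoft\Cryptography\Protect\Providers\df9d8cd0-1501-11d1-8c7a-00c04fc297eb" /v ProtectionPolicy /t REG_DWORD /d 1 /f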

After creating the new DWORD and setting it to 1, restart the PC; opening Credential Manager should then work fine. Once it’s open, delete anything you find in there, then register your Office 365 apps and/or set up your Google Workspace Sync user.

That was four hours of my life I’m never getting back. Hope you found your answer sooner than I did!

520 byte sectors and Ubuntu

I recently bought a server which came with Samsung PM1643 SSDs. Trying to install Ubuntu on them didn’t work on the first try, because the drives had 520-byte sectors instead of 512-byte ones.

Luckily, there’s a fix: get the drive(s) to a WORKING Ubuntu system, plug them in, and use the sg_format utility to convert the sector size!

root@ubuntu:~# sg_format -v --format --size=512 /dev/sdwhatever

Yep, it’s really that easy. Be warned, this is a destructive, touch-every-sector operation, so it will take a while, and your drives might get a bit warm. The 3.84TB drives I needed to convert took around 10 minutes apiece.

On the plus side, this also fixes any drive slowdowns due to a lack of TRIM, since it’s a destructive sector-level format.
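One note on availability: sg_format isn’t installed by default on Ubuntu; it ships in the sg3-utils package:

apt install sg3-utils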

I’ve heard stories of drives that refused to sg_format initially; if you encounter stubborn drives, you might be able to unlock them by dd-ing a gigabyte or so of data onto them—or you might need to first sg_format them with --size=520, then immediately (and I mean immediately) again with --size=512.
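In shell terms, those rescue attempts look something like this; a sketch only, with a hypothetical device name you must triple-check before running anything this destructive:

# write a gigabyte of zeroes to wake the drive up
dd if=/dev/zero of=/dev/sdwhatever bs=1M count=1024
# or the two-step dance: format out to 520-byte sectors, then right back to 512
sg_format -v --format --size=520 /dev/sdwhatever
sg_format -v --format --size=512 /dev/sdwhatever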

WSL2, keychain, /etc/hosts and you

Unfortunately, there are still a few stumbling blocks between you and a properly, fully working virt-manager setup under WSL2 on Windows 11.

apt install virt-manager just works, of course, but getting WSL2 to properly handle hostnames and SSH key passphrases takes a bit of tweaking.

First up, install a couple of additional packages:

apt install keychain ssh-askpass

The keychain package allows WSL2 to cache the passphrases for your SSH keys, and ssh-askpass allows virt-manager to prompt you for them when necessary.

If you haven’t already done so, first generate yourself an SSH key and give it a passphrase:

me@my-win11:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (~/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ~/.ssh/id_rsa
Your public key has been saved in ~/.ssh/id_rsa.pub

You will also need to configure keychain itself by adding the following to the end of your .bashrc:

# For Loading the SSH key
/usr/bin/keychain -q --nogui $HOME/.ssh/id_rsa
source $HOME/.keychain/$HOSTNAME-sh

Now, you’ll enter your SSH key passphrase each time you open a WSL2 terminal, and keychain will remember it for SSH sessions opened via that terminal (or via apps opened from that terminal, e.g. if you type in virt-manager).

If you like to set hostnames in /etc/hosts to make your virt-manager connections look more reasonable, there’s one more step necessary: by default, WSL2 regenerates (and therefore clobbers) /etc/hosts each time it’s started.

You can defang this by creating /etc/wsl.conf and inserting this stanza:

[network]
generateHosts = false
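With generateHosts disabled, entries you add to /etc/hosts survive restarts. For example (hostnames and addresses hypothetical):

203.0.113.10   kvm0
203.0.113.11   kvm1

…which lets virt-manager connect to qemu+ssh://you@kvm0/system rather than to a bare IP address.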

Presto: you now have a nice, secure, properly working virt-manager under your Windows 11 WSL2 instance!

screenshot of virt-manager under WSLg
I also edited this screenshot with GIMP installed under WSL2 with apt install gimp. Because of course I did.

One final caveat: I do not recommend trying to create a shortcut in Windows to open virt-manager directly.

You can do that… but if you do, you’re liable to break things badly enough to require a Windows reboot. Windows 11 really doesn’t like launching WSL2 apps directly from a batch file, rather than from within a fully-launched WSL2 terminal!

When NOTHING else will remove a half-installed .deb package

WARNING: ALL WARRANTIES NULL AND VOID

With that important disclaimer out of the way: when you’re stuck in the world’s worst apt -f install loop with a half-installed package and can’t figure out any other way to get the damn thing unwedged (e.g., you removed an /etc directory for a package you’d installed before, and this breaks an installer script; or the installer script “knows you already have it” and refuses to replace a removed config directory), this is the nuclear option:

sudo nano /var/lib/dpkg/status

Remove the offending package (and any packages that depend on it) entirely from this file, then apt install the offending package again.
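For reference, each package in that file is a stanza beginning with a Package: line and ending at the next blank line, and the one you’re hunting will usually have a Status: line that isn’t “install ok installed”. A hypothetical wedged stanza might start like this:

Package: mysql-server
Status: install reinstreq half-configured
Priority: optional
Section: database

Delete the whole stanza, blank line to blank line, for the offender and its dependents.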

If you’re still broken after that… did I mention all warranties null and void? This is an extremely nuclear option, and I really wouldn’t recommend it outside a throwaway test environment; you’re probably better off just nuking the whole server and reinstalling.

With that said, the next thing I had to do to clean out the remnants of the mysql/mariadb coinstall debacle that inspired this post was:

find / -iname "*mysql*" | grep -v php | grep -v snap | xargs rm -r

This got me out of a half-broken state on a machine that somebody had installed both mariadb-server and mysql-server on, leaving neither working properly.

When static routes on pfSense are ignored

I have this problem pretty frequently, and it always pisses me off: a pfSense router has a static route or two configured, and it works to ping through them in the router’s own Diagnostics … but they’re ignored entirely when requests come from machines on the LAN.

Here’s the fix.

First, as normal, you need to set up a Gateway pointing to the static route relay on the LAN. Then set up a static route through that new Gateway, if you haven’t already.

Now, you need to go to System–>Advanced–>Firewall & NAT. Look about halfway down the page for a checkbox “Static route filtering,” with the flavor text “Bypass firewall rules for traffic on the same interface.” Check that, scroll to the bottom, and click Save.

Once that’s done, if traceroutes from the LAN to the target network still go out through the WAN instead of through your local gateway… add a firewall rule to fix it.

Firewall –> Rules –> Floating

New rule at the TOP.

Action–> Pass
Quick–> CHECK THIS.
Interface–> LAN
Protocol–> Any
Source–> LAN net
Destination–>Network–> [ target subnet ]

Save your firewall rule, and apply it: within a few seconds, traceroutes from the LAN should start showing the new route.

Rebalancing data on ZFS mirrors

One of the questions that comes up time and time again about ZFS is “how can I migrate my data to a pool on a few of my disks, then add the rest of the disks afterward?”

If you just want to get the data moved and don’t care about balance, you can simply copy the data over, then add the new disks and be done with it. But it won’t be distributed evenly over the vdevs in your pool.

Don’t fret, though, it’s actually pretty easy to rebalance mirrors. In the following example, we’ll assume you’ve got four disks in a RAID array on an old machine, and two disks available to copy the data to in the short term.

Step one: create the new pool, copy data to it

First up, we create a simple temporary zpool with the two available disks.

zpool create temp -o ashift=12 mirror /dev/disk/by-id/disk0 /dev/disk/by-id/disk1

Simple. Now you’ve got a ZFS mirror named temp, and you can start copying your data to it.

Step two: scrub the pool

Do not skip this step!

zpool scrub temp

Once this is done, do a zpool status temp to make sure you don’t have any errors. Assuming you don’t, you’re ready to proceed.

Step three: break the mirror, create a new pool

zpool detach temp /dev/disk/by-id/disk1

Now, your temp pool is down to one single-disk vdev, and you’ve freed up one of its original disks. You’ve also got a known-good copy of all your data on disk0, verified by the zpool scrub in step two. So, destroy the old machine’s storage, freeing up its four disks for use.

zpool create tank /dev/disk/by-id/disk1 mirror /dev/disk/by-id/disk2 /dev/disk/by-id/disk3 mirror /dev/disk/by-id/disk4 /dev/disk/by-id/disk5

Now you’ve got your original temporary pool named temp, and a new permanent pool named tank. Pool “temp” is down to one single-disk vdev, and pool “tank” has one single-disk vdev and two mirror vdevs.

Step four: copy your data from temp to tank

Copy all your data one more time, from the single-disk pool “temp” to the new pool “tank.” You can use zfs replication for this, or just plain old cp or rsync. Your choice.
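If you go the replication route, a minimal sketch looks something like this (snapshot name hypothetical, and assuming the pool names from this example):

zfs snapshot -r temp@migrate
zfs send -R temp@migrate | zfs receive -F tank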

Step five: scrub tank, destroy temp

Do not skip this step.

zpool scrub tank

Once this is done, do a zpool status tank to make sure you don’t have any errors. Assuming you don’t, now it’s time to destroy your temporary pool to free up its disk.

zpool destroy temp

Almost done!

Step six: attach the final disk from temp to the single-disk vdev in tank

zpool attach tank /dev/disk/by-id/disk1 /dev/disk/by-id/disk0

That’s it—you now have all of your data imported to a six-disk pool of mirrors, and all of the data is evenly distributed (according to disk size, at least) across all vdevs, not all clumped up on the first one to be added.

You can obviously adjust this formula for larger (or smaller!) pools, and it doesn’t necessarily require importing from an older machine—you can use this basic technique to redistribute data across an existing pool of mirrors, too, if you add a new mirror vdev. 

The important concept here is the idea of breaking mirror vdevs using zpool detach, and creating mirror vdevs from single-disk vdevs using zpool attach.

 

Estimating space occupied by multiple ZFS snapshots

You want to reclaim space on a ZFS pool by deleting some old snapshots. Problem is, you take snapshots frequently, so they all have deceptively low USED values: a snapshot’s USED only shows you the space unique to that snapshot, so it’s entirely possible that deleting two snapshots that each show a USED of 1MiB will actually remove 100GiB of data.

How, you ask? Well, if that 100GiB of data is common to both snapshots, it won’t show up in the USED of either. But if it was present in only those two snapshots, deleting them both unlinks that 100GiB and marks it free again.
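You can see the relevant columns side by side with zfs list; note how tiny USED can look next to REFER (numbers hypothetical):

root@box:~# zfs list -rt snapshot pool/dataset
NAME                 USED  AVAIL     REFER  MOUNTPOINT
pool/dataset@snap4  1.01M      -     80.3G  -
pool/dataset@snap5  1.07M      -     80.3G  -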

Luckily, zfs destroy has a dry-run option and can be used to delete sequences of snapshots, so you can see beforehand how much space will be reclaimed by deleting a sequence of snapshots, like this:

root@box:~# zfs destroy -nv pool/dataset@snap4%snap8
would destroy pool/dataset@snap4
would destroy pool/dataset@snap5
would destroy pool/dataset@snap6
would destroy pool/dataset@snap7
would destroy pool/dataset@snap8
would reclaim 25.2G

Heads up—Let’s Encrypt and Dovecot

Let’s Encrypt certificates work just dandy not only for HTTPS, but also for SSL/TLS on IMAP and SMTP services in mailservers. I deployed Let’s Encrypt to replace manually-purchased-and-deployed certificates on a client server in 2019, and today, users started reporting they were getting certificate expiration errors in mail clients.

When I checked the server using TLS checking tools, they reported that the certificate was fine; both the tools and a manual check of the datestamp on the actual .pem file showed that it had been updating just fine, with the most recent update happening in January and extending the certificate validation until April. WTF?

As it turns out, the problem is that Dovecot—which handles IMAP duties on the server—doesn’t notice when the certificate has been updated on disk; it will cheerfully keep using an in-memory cached copy of whatever certificate was present when the service started, forever.

The way to detect this was to use openssl on the command line to connect directly to the IMAPS port:

you@anybox:~$ openssl s_client -showcerts -connect mail.example.com:993 -servername example.com

Scrolling through the connect data produced this gem:

---
Server certificate
subject=CN = mail.example.com

issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3

---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA
Server Temp Key: ECDH, P-384, 384 bits
---
SSL handshake has read 3270 bytes and written 478 bytes
Verification error: certificate has expired

So obviously, the Dovecot service hadn’t reloaded the certificate after Certbot-auto renewed it. One /etc/init.d/dovecot restart later, running the same command instead produced (among all the other verbiage):

---
Server certificate
subject=CN = mail.example.com

issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3

---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA
Server Temp Key: ECDH, P-384, 384 bits
---
SSL handshake has read 3269 bytes and written 478 bytes
Verification: OK
---

With the immediate problem resolved, the next step was to make sure Dovecot gets automatically restarted frequently enough to pick new certs up before they expire. You could get fancy and modify certbot’s cron job to include a Dovecot restart; you can find certbot’s cron job with grep -ir certbot /etc/crontab, and add a --deploy-hook argument to restart after new certificates are obtained (and only after new certificates are obtained).
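For reference, the deploy-hook approach would look something like this (the hook only fires after a successful renewal; the init script path may differ on your distribution):

certbot renew --deploy-hook "/etc/init.d/dovecot restart"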

But I don’t really recommend doing it that way; the cron job might get automatically updated with an upgraded version of certbot at some point in the future. Instead, I created a new root cron job to restart Dovecot once every Sunday at midnight:

# m h dom mon dow   command
0 0 * * Sun /etc/init.d/dovecot restart

Since Certbot renews any certificate with 30 days or less until expiration, and the Sunday restart will pick up new certificates within 7 days of their deployment, we should be fine with this simple brute-force approach rather than a more efficient—but also more fragile—approach tying the update directly to restarting Dovecot using the --deploy-hook argument.

 

Fixing clock drift in Windows VMs under KVM

Inside your Windows VM, open an elevated command prompt (right-click Command Prompt from the Start menu, and Run as Administrator), then issue the following command:

bcdedit /set useplatformclock true

Now, you need to restart the guest—this change is persistent, but it doesn’t actually take effect until the guest reboots! After the reboot, the guest’s clock will stop drifting.
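After the reboot, you can confirm the setting stuck by running the following from the same elevated prompt and looking for useplatformclock Yes in the output:

bcdedit /enum {current}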