Heads up—Let’s Encrypt and Dovecot

Let’s Encrypt certificates work just dandy not only for HTTPS, but also for SSL/TLS on IMAP and SMTP services in mailservers. I deployed Let’s Encrypt to replace manually-purchased-and-deployed certificates on a client server in 2019, and today, users started reporting they were getting certificate expiration errors in mail clients.

When I checked the server using TLS checking tools, they reported that the certificate was fine; both the tools and a manual check of the datestamp on the actual .pem file showed that it had been updating just fine, with the most recent update happening in January and extending the certificate validation until April. WTF?

As it turns out, the problem is that Dovecot—which handles IMAP duties on the server—doesn’t notice when the certificate has been updated on disk; it will cheerfully keep using an in-memory cached copy of whatever certificate was present when the service started, for as long as the service keeps running.

The way to detect this was to use openssl on the command line to connect directly to the IMAPS port:

you@anybox:~$ openssl s_client -showcerts -connect mail.example.com:993 -servername mail.example.com

Scrolling through the connect data produced this gem:

---
Server certificate
subject=CN = mail.example.com
issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA
Server Temp Key: ECDH, P-384, 384 bits
---
SSL handshake has read 3270 bytes and written 478 bytes
Verification error: certificate has expired

So obviously, the Dovecot service hadn’t reloaded the certificate after Certbot-auto renewed it. One /etc/init.d/dovecot restart later, running the same command instead produced (among all the other verbiage):

---
Server certificate
subject=CN = mail.example.com
issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA
Server Temp Key: ECDH, P-384, 384 bits
---
SSL handshake has read 3269 bytes and written 478 bytes
Verification: OK
---

With the immediate problem resolved, the next step was to make sure Dovecot gets automatically restarted frequently enough to pick up new certs before the old ones expire. You could get fancy and modify certbot’s cron job to include a Dovecot restart; grep -ir certbot /etc/cron* will find the job, and you can add a --deploy-hook argument so the restart happens after new certificates are obtained (and only after new certificates are obtained).

But I don’t really recommend doing it that way; the cron job might get automatically updated with an upgraded version of certbot at some point in the future. Instead, I created a new root cron job to restart Dovecot once every Sunday at midnight:

# m h dom mon dow   command
0 0 * * Sun /etc/init.d/dovecot restart

Since Certbot renews any certificate with 30 days or less until expiration, and the Sunday restart will pick up new certificates within 7 days of their deployment, we should be fine with this simple brute-force approach rather than a more efficient—but also more fragile—approach that ties the Dovecot restart directly to certificate renewal using the --deploy-hook argument.
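For the record, if you do prefer the deploy-hook route, it looks something like the following; the hook only fires when a certificate actually renews. (This is a sketch, not a drop-in line; check how and where your distro actually runs certbot renew before changing anything.)

you@anybox:~$ sudo certbot renew --deploy-hook "/etc/init.d/dovecot restart"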

 

Fixing clock drift in Windows VMs under KVM

Inside your Windows VM, open an elevated command prompt (right-click Command Prompt from the Start menu, and Run as Administrator), then issue the following command:

bcdedit /set useplatformclock true

Now, you need to restart the guest—this change is persistent, but it doesn’t actually take effect until the guest reboots! After the reboot, the guest’s clock will stop drifting.
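If you want to sanity-check that the change stuck after the reboot, bcdedit can show you; the current boot entry should list useplatformclock as Yes:

bcdedit /enum {current}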

Why can’t I get to the internet on my new OpnSense install?!

You buy a nice new firewall appliance. You install OpnSense on it, set all the WAN and LAN stuff up to match your existing firewall, and you drop it into place. WTF, no internet…?

First of all, if you’re using a cable ISP, remember that most cable modems are MAC address locked, and will refuse to talk to a new MAC address if they’ve already seen a different one connected. So, remember to FULLY power-cycle your cable modem. Buttons won’t cut it, in many cases—you gotta pull the power cable out of that sucker, give it a count of five to think about its sins, then plug it back in and let it re-sync.

If you still don’t have any internets after power-cycling and your modem showing everything sync’ed and online, you may be falling afoul of a weirdness in OpnSense’s default gateway configs. By default, it will mark a gateway as “down” if it doesn’t return pings… but many ISP gateway addresses (not the WAN address your router gets, the one just upstream of it) don’t return pings. So, OpnSense reports it as down and refuses to even try slinging packets through it.
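You can see this for yourself from any shell; substitute your actual upstream gateway address for the documentation address below, which is purely an example. No replies doesn’t necessarily mean the link is down—it just means the gateway won’t answer ICMP:

you@anybox:~$ ping -c 4 203.0.113.1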

screenshot of opnsense gateway configs

To fix this, go to System–>Gateways–>Single and select your WANGW gateway for editing. Now scroll down, find “Disable Gateway monitoring” and give that sucker a checkmark. Once you click “Save”, you should now see your gateway green and online, and packets should start flowing.

 

Static routing through VPN servers in OpnSense

You’ve got a server on the LAN running OpenVPN, WireGuard, or some other VPN service. You port forwarded the VPN service port to that box, which was easy enough, under Firewall–>NAT–>Port Forward.

screenshot of opnsense portforwarding
But now you need to set a static route through that LAN-located gateway machine, so that traffic from the machines on the LAN can find its way back to clients on the tunnel subnet—eg, 10.8.0.0/24.

First step, in either OpnSense or pfSense, is to set up an additional gateway. In OpnSense, that’s System–>Gateways–>Single. Add a gateway with your VPN server’s LAN IP address, name it, done.

screenshot of opnsense gateways

Now you create a static route, in System–>Routes–>Configuration. Network Address is the subnet of your tunnels—in our example, 10.8.0.0/24. Gateway is the new gateway you just created. Natch.

screenshot of opnsense static routes

At this point, if you connect into the network over your VPN, your remote client will be able to successfully ping machines on the LAN… but not access any services. If you try nmap from the remote client, it shows all ports filtered. WTF?
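(For the curious, the scan I’m talking about is just something like the following, run from the remote VPN client against a LAN host; the IP here is only an example.)

you@remote:~$ nmap -Pn 192.168.1.10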

Diagnostically, you can go in the OpnSense GUI to Firewall–>Log Files–>Live View. If you try something nice and obnoxious like nmap that will constantly try to open connections, you’ll see tons of red as the connections from your remote machine are blocked, using Default Deny. But then you look at your LAN rules—and they’re default allow! WTF?

screenshot of opnsense firewall live view

 

I can’t really answer W the F actually is, but I can, after much cursing, tell you how to fix it. Go in OpnSense to Firewall–>Settings–>Advanced and scroll most of the way down the page. Look for “Static route filtering” and check the box for “Bypass firewall rules for traffic on the same interface”—now click the Save button and, presto, when you go back to your live firewall view, you see tons of green on that nmap instead of tons of red—and, more importantly, your actual services can now connect from remote clients connected to the VPN.

screenshot of opnsense firewall advanced settings
This is the dastardly little bugger of a setting you’ve been struggling to find.

Fixing Outlook 2016 “Either there is no default mail client…”

I have a client who can’t open .MSG files on a brand-new Windows 10 Pro system, and gets the following error when he tries (using Outlook 2016, installed from Office 365):

Either there is no default mail client or the current mail client cannot fulfill the messaging request.

You might think “aha, I just need to go into the control panel and fix either file associations with .msg files, or perhaps MAPI settings.” You would be wrong. Nope, you’re gonna have to delete a registry key, because of course you are. You’re using Windows!

Open this registry address in regedit:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\PreviewHandlers 

And delete the value whose Data reads “Microsoft Windows MAPI Preview Handler”.
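If you’d rather work from an elevated command prompt than click around in regedit, you can list the values under that key first, then delete the one whose data matches; I’m not listing the GUID here because it can vary, so read it out of the query output yourself:

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\PreviewHandlers"
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\PreviewHandlers" /v "{the-GUID-you-found}"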

Poof. That’s it. No more errors, stuff opens as it should. Yay.

Importing WireGuard configs on mobile

I learned something new today—you can use an app called qrencode to create plain-ASCII QR codes on Ubuntu. This comes in super handy if you need to set up WireGuard tunnels on an Android phone or tablet, which otherwise tends to be a giant pain in the ass.

If you haven’t already, you’ll need to install qrencode itself; on Ubuntu that’s simply apt install qrencode and you’re ready. After that, just feed a tunnel config into the app, and it’ll display the QR code in the terminal. Your WireGuard mobile app has “from QR code” as an option in the tunnel import section; pick that, allow it to use the camera, and you’re off to the races!
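Here’s what that looks like in practice; the config filename is just an example, and I’m using the ansiutf8 output type, which draws the code with block characters right in your terminal:

you@anybox:~$ qrencode -t ansiutf8 < wg-mobile.conf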

Just like that, your WireGuard tunnel is ready to import into your phone or tablet.

 

 

zfs set sync=disabled

While benchmarking the Ars Technica Hot Rod server build tonight, I decided to empirically demonstrate the effects of zfs set sync=disabled on a dataset.

In technical terms, sync=disabled tells ZFS “when an application requests that you sync() before returning, lie to it.” If you don’t have applications explicitly calling sync(), this doesn’t result in any difference at all. If you do, it tremendously increases write performance… but, remember, it does so by lying to applications that specifically request that a set of data be safely committed to disk before they do anything else. TL;DR: don’t do this unless you’re absolutely sure you don’t give a crap about your applications’ data consistency safeguards!
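For reference, flipping the switch (and flipping it back to the default) is a one-liner each way; pool/dataset below is obviously a placeholder for your own dataset:

root@box:~# zfs set sync=disabled pool/dataset
root@box:~# zfs get sync pool/dataset
root@box:~# zfs set sync=standard pool/dataset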

In the below screenshot, we see ATTO Disk Benchmark run across a gigabit LAN to a Samba share on a RAIDz2 pool of eight Seagate Ironwolf 12TB disks. On the left: write cache is enabled (meaning, no sync() calls). In the center: write cache is disabled (meaning, a sync() call after each block written). On the right: write cache is disabled, but zfs set sync=disabled has been set on the underlying dataset.

L-R: no sync(), sync(), lying in response to sync().

The effect is clear and obvious: zfs set sync=disabled lies to applications that request sync() calls, resulting in the exact same performance as if they’d never called sync() at all.

Continuously updated iostat

Finally, after I don’t know HOW many years, I figured out how to get continuously updated stats from iostat that don’t just scroll up the screen and piss you off.

For those of you who aren’t familiar, iostat gives you some really awesome per-disk reports that you can use to look for problems. Eg, on a system I’m moving a bunch of data around on at the moment:

root@dr0:~# iostat --human -xs
Linux 4.15.0-45-generic (dr0)   06/04/2019   x86_64   (16 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.1%    0.0%    2.6%    5.8%    0.0%   91.5%

Device             tps     kB/s    rqm/s   await  aqu-sz  areq-sz  %util
loop0             0.00     0.0k     0.00    0.00    0.00     1.6k   0.0%
sda             214.70    17.6M     0.14    2.25    0.48    83.9k  29.8%
sdb             364.41    38.1M     0.63    9.61    3.50   107.0k  76.5%
sdc             236.49    22.2M     4.13    2.13    0.50    96.3k  20.4%
sdd             237.14    22.2M     4.09    2.14    0.51    95.9k  20.4%
md0              12.09   221.5k     0.00    0.00    0.00    18.3k   0.0%

In particular, note that %util column. That lets me see that /dev/sdb is the bottleneck on my current copy operation. (I expect this, since it’s a single disk reading small blocks and writing large blocks to a two-vdev pool, but if this were one big pool, it would be an indication of problems with sdb.)

But what if I want to see a continuously updated feed? Well, I can do iostat --human -xs 1 and get a new listing every second… but it just scrolls up the screen, too fast to read. Yuck.

OK, how about using the watch command instead? Well, normally, when you call iostat, the first output is a reading that averages the stats for all devices since the first boot. This one won’t change visibly very often unless the system was JUST booted, and almost certainly isn’t what you want. It also frustrates the heck out of any attempt to simply use watch.

The key here is the -y argument, which skips that first since-boot summary report and gets straight to the continuous interval reports. That, and knowing that you need to specify both an interval and a count for iostat’s output. Get all of that right, and you can finally use watch -n 1 to get a running display of iostat that doesn’t scroll up off the screen and drive you insane trying to follow it:

root@dr0:~# watch -n 1 iostat -xy --human 1 1

Have fun!

Ubuntu 18.04 hung at update-grub 66%

I’ve encountered this two or three times now, and it’s always a slog figuring out how to fix it. When doing a fresh install of Ubuntu 18.04 to a new system, it hangs forever (never times out, no matter how long you wait) at 66% running update-grub.

The problem is a bug in os-prober. The fix is to ctrl-alt-F2 into a new BusyBox shell, ps and grep for the offending process, and kill it:

BusyBox v1.27.2 (Ubuntu 1:1.27.2-2ubuntu3.1) built-in shell (ash)
Enter 'help' for a list of built-in commands.

# ps wwaux | grep dmsetup | grep -v grep
6114   root   29466 S    dmsetup create -r osprober-linux-sdc9

# kill 6114

Now ctrl-alt-F1 back into your installer session. After a moment, it’ll kick back into high gear and finish your Ubuntu 18.04 installation… but you’re unfortunately not done yet; killing os-prober got the install to complete, but it didn’t get GRUB to actually install onto your disks.

You can get a shell and chroot into your new install environment right now, but if you’re not intimately familiar with that process, it may be easier to just reboot using the same Ubuntu install media, but this time select “Rescue broken system”. Once you’ve made your way through selecting your keyboard layout and given your system a bogus name (it only persists for this rescue environment; it doesn’t change on-disk configuration) you’ll be asked to pick an environment to boot into, with a list of disks and partitions.

If you installed root to a simple partition, pick that partition. If, like me, you installed to an mdraid array, you should see that array listed as “md127”, which is Ubuntu’s default name for an array it knows is there but otherwise doesn’t know much about. Choose that, and you’ll get a shell with everything already conveniently mounted and chrooted for you.

(If you didn’t have the option to get into the environment the simple way, you can still do it from a standard installer environment: find your root partition or array, mount it to /mnt like mount /dev/md127 /mnt ; then chroot into it like chroot /mnt and you’ll be caught up and ready to proceed.)
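(Here’s a rough sketch of the fully manual version, assuming /dev/md127 is your root array; the bind mounts keep grub-install from complaining about missing /dev and friends once you’re inside the chroot:)

root@ubuntu:~# mount /dev/md127 /mnt
root@ubuntu:~# for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
root@ubuntu:~# chroot /mnt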

The last part is easy. First, we need to get the buggy os-prober module out of the execution path.

root@ubuntu:~# cd /etc/grub.d
root@ubuntu:/etc/grub.d# mkdir nerfed
root@ubuntu:/etc/grub.d# mv 30_os-prober nerfed/

OK, that got rid of our problem module that locked up on us during the install. Now we’re ready to run update-grub and grub-install. I’m assuming here that you have two disks which should be bootable, /dev/sda and /dev/sdb; if that doesn’t match your situation, adjust accordingly. (If you’re using an mdraid array, mdadm --detail /dev/md127 will tell you for sure which disks to make bootable.)

root@ubuntu:~# update-grub
root@ubuntu:~# grub-install /dev/sda
root@ubuntu:~# grub-install /dev/sdb

That’s it; now you can shut down the system, pull the USB installer, and boot from the actual disks!

I’m stuck at update-grub, but it times out and errors!

If your update-grub process hangs for quite a while (couple full minutes?) at 50% but then falls to an angry error screen with a red background, you’ve got a different problem. If you’re trying to install with an mdraid root directory on a disk 4TiB or larger, you need to do a UEFI-style install – which requires EFI boot partitions available on each of your bootable disks.

You’re going to need to start the install process over again; this time when you partition your disks, make sure to create a small partition of type “EFI System Partition”. This is not the same partition you’ll use for your actual root; it’s also not the same thing as /boot – it’s a special snowflake all to itself, and it’s mandatory for systems booting from a drive or drives 4 TiB or larger. (You can still boot in BIOS mode, with no boot partition, from 2 TiB or smaller drives. Not sure about 3 TiB drives; I’ve never owned one IIRC.)
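(If you’re partitioning by hand from a shell rather than in the installer’s partitioner, creating the EFI System Partition looks roughly like this; /dev/sda and the sizes are examples only, and mklabel gpt wipes the existing partition table on that disk, so be sure you’re pointing it at the right one:)

root@ubuntu:~# parted /dev/sda -- mklabel gpt
root@ubuntu:~# parted /dev/sda -- mkpart ESP fat32 1MiB 513MiB
root@ubuntu:~# parted /dev/sda -- set 1 esp on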

Installing WordPress on Apache the modern way

It’s been bugging me for a while that there are no correct guides to be found about using modern Apache 2.4 or above with the Event or Worker MPMs. We’re going to go ahead and correct that lapse today, by walking through a brand-new WordPress install on a new Ubuntu 18.04 VM (grab one for $5/mo at Linode, Digital Ocean, or your favorite host).

Installing system packages

Once you’ve set up the VM itself, you’ll first need to update the package list:

root@VM:~# apt update

Once it’s updated, you’ll need to install Apache itself, along with PHP and the various extras needed for a WordPress installation.

root@VM:~# apt install apache2 mysql-server php-fpm php-common php-mbstring php-xmlrpc php-soap php-gd php-xml php-intl php-mysql php-cli php-ldap php-zip php-curl

The key bits here are Apache2, your HTTP server; MySQL, your database server; and php-fpm, which is a pool of PHP worker processes your HTTP server can connect to in order to build WordPress dynamic content as necessary.

What you absolutely, positively do not want to do here is install mod_php. If you do that, your nice modern Apache2 with its nice modern Event process model gets immediately switched back to your granddaddy’s late-90s-style prefork, loading PHP processors into every single child process, and preventing your site from scaling if you get any significant traffic!
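(Not sure which MPM you’re actually running? Either of these will tell you; you want to see event, not prefork:)

root@VM:~# apachectl -V | grep -i mpm
root@VM:~# a2query -M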

Enable the proxy_fcgi module

Instead – and this is the bit none of the guides I’ve found mention – you just need to enable one module in Apache itself, and enable the PHP-FPM configuration snippet that came along with the php-fpm package. (You will need to figure out which version of php-fpm is installed: dpkg --get-selections | grep fpm can help here if you aren’t sure.)

root@VM:~# a2enmod proxy_fcgi
root@VM:~# a2enconf php7.4-fpm.conf
root@VM:~# systemctl restart apache2

Your Apache2 server is now ready to serve PHP applications, like WordPress. (Note for more advanced admins: if you’re tuning for larger scale, don’t forget that it’s not only about the web server connections anymore; you also want to keep an eye on how many PHP worker processes you have in your pool. You’ll do that in /etc/php/[version]/fpm/pool.d/www.conf.)
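(The knobs in question look like this inside www.conf; the numbers below are placeholders to show the shape of the thing, not tuning advice:)

pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6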

Download and extract WordPress

We’re going to keep things super simple in this guide, and just serve WordPress from the existing default vhost in its standard location, at /var/www/html.

root@VM:~# cd /var/www
root@VM:/var/www# wget https://wordpress.org/latest.tar.gz
root@VM:/var/www# tar zxvf latest.tar.gz
root@VM:/var/www# chown -R www-data:www-data wordpress
root@VM:/var/www# mv html html.dist
root@VM:/var/www# mv wordpress html

Create a database for WordPress

The last step before you can browse to your new WordPress installation is creating the database itself.

root@VM:/var/www# mysql -u root

mysql> create database wordpress;
Query OK, 1 row affected (0.01 sec)

mysql> grant all on wordpress.* to 'wordpress'@'localhost' identified by 'superduperpassword';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> exit;

This created a database named wordpress, with a user named wordpress, and a password superduperpassword. That’s a bad password. Don’t actually use that password. (Also, if mysql -u root wanted a password and you don’t have it: cat /etc/mysql/debian.cnf, look for the debian-sys-maint password, and connect using mysql -u debian-sys-maint -p instead, giving it that password when prompted. Everything else will work fine.)

Note for Ubuntu 20.04 / MySQL 8.0 users:

MySQL changed things a bit with 8.0. grant all on db.* to 'user'@'localhost' identified by 'password'; no longer works all in one step. Instead, you’ll first need to create user 'user'@'localhost' identified by 'password'; then you can grant all on db.* to 'user'@'localhost'; you no longer need to (or can) specify the password on the grant line itself.
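Spelled out with the same example names used above, the 8.0 version looks like this:

mysql> create user 'wordpress'@'localhost' identified by 'superduperpassword';
mysql> grant all on wordpress.* to 'wordpress'@'localhost';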

All done – browser time!

Now that you’ve set up Apache, dropped the WordPress installer in its default directory, and created a mysql database – you’re ready to run through the WordPress setup itself, by browsing directly to http://your.servers.ip.address/. Have fun!