Rustdesk Server on Ubuntu 22.04

As usual, I’m self-documenting a project while I work on it. Rustdesk is an open source remote control utility that caught my eye about a year ago; it’s cross platform and allows you to self-host your own “relay server” so that you can connect securely from one machine on a private network to a machine on a different private network without needing to faff about with port forwarding or similar nonsense.

Most of the docs seem focused around running it as a Docker instance, which I didn’t particularly want to do. They also weren’t clear AT ALL about where important files live, how they’re run, etc.

First up: you’ll need to install THREE separate .deb files. Go to the rustdesk-server releases page on GitHub and download the rustdesk-server-hbbr, rustdesk-server-hbbs, and rustdesk-server-utils packages for your architecture, then dpkg -i each of the three.
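A minimal sketch of that install step, assuming an amd64 box; the filenames below are illustrative, so substitute the actual versions you downloaded:

# illustrative filenames; use the versions and architecture you actually downloaded
sudo dpkg -i rustdesk-server-hbbs_*_amd64.deb
sudo dpkg -i rustdesk-server-hbbr_*_amd64.deb
sudo dpkg -i rustdesk-server-utils_*_amd64.deb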

This should be sufficient to get the services up and running–you can check with systemctl status rustdesk-hbbr and systemctl status rustdesk-hbbs. Once each service is running, the next step is finding your new Rustdesk server’s public key–which isn’t created until after the first time hbbs runs.

If you just installed everything normally from the .deb, you’ll find that key at /var/lib/rustdesk-server/id_ed25519.pub. Now, download and install the Rustdesk client onto another machine. Fire up the client, and get to configuring.
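Grabbing the key so it’s ready to paste into the client is as simple as catting that file on the server:

sudo cat /var/lib/rustdesk-server/id_ed25519.pub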

Rustdesk client home screen
First, click the three-dot menu next to your ID, in the upper left corner of the client.

 

Rustdesk client network settings
Next, from the general Rustdesk client Settings page, click Network.

 

Rustdesk client ID/Relay server config
You should only need to fill in the two highlighted fields here: ID server and Key. Unless you’ve got a very non-standard config, Rustdesk figures the Relay and API servers out for itself.

Once you’ve configured your first client to use your new relay server, you’ll want to click the copy icon in the upper right hand corner–this copies a string of apparent garbage to your clipboard, which can later be imported into other Rustdesk clients.

To import your new configuration on other client systems later, just get that string of apparent garbage into the system clipboard on the remote machine, then click the clipboard Paste icon just next to the Copy icon in the upper right. This will populate all fields of the ID/Relay server dialog on the new client just as they were configured on the old client. Tip: this probably isn’t particularly sensitive information, so you might consider saving it as a text file on an easy-to-access webserver somewhere.

At this point, you’re ready to rock. Once you’ve installed the client software on any two machines and configured them to use your new relay server, you can connect to any of the machines thus configured, using the Rustdesk password you configure individually on each of those clients.

This is extremely early days for me–I literally just finished setting this up as a proof-of-concept earlier this morning–but so far, it looks pretty slick; I’m experiencing considerably lower latency with Rustdesk piped through a relay server in Atlanta, GA than I am with a direct Spice connection to the same system via virt-manager and KVM!

 

 

Salter’s Screwdriver Theory of Latency

We’ve all noticed that software never seems to get any faster no matter how much faster the hardware gets. This easily-observable fact is usually explained in one of two ways:

  • Software devs are lazy, and refuse to optimize more than they absolutely must
  • Software devs are ambitious, and use all available CPU cycles / IOPS to do as much as possible–so more cycles/IOPS available == more detailed work delivered

These things are both true, but they aren’t the root cause of the issue–they’re simply how the issue tends to be addressed and/or to expose itself.

The real reason software never gets (visibly) faster, unless you install much older operating systems and applications on much newer hardware, is that humans typically aren’t comfortable with “machine-speed” interfaces.

For as long as I’ve been alive, a typical discrete human-visible task performed by a computer–opening an application, saving a file, loading a web page–has tended to take roughly 1,500ms on a typical to slow PC, or 500ms on a fast PC.

1,500ms is about the length of time it takes to say “hey, pass me that screwdriver, would you?” and have a reasonably attentive buddy pass you the screwdriver. 500ms is about the length of time it takes to say “Scalpel,” and have a reasonably attentive, professional surgical assistant slap the scalpel in your hand.

If you’re still on the fence about this, consider “transitions.” When a simple, discrete task like opening an application gets much faster than 500ms despite some devs being lazy and other devs being ambitious… that’s when the visual transitions start to appear.

On macOS, when you open applications or switch focus to them, they “stream” from the lower right-hand corner of the screen up to the upper-left corner of where the window will actually be, and expand out from there into the lower-right until the app is full sized. Windows expands windows upward from the taskbar. Even Linux distributions intended for end-users employ graphical transitions which slow things down. Why?

Because “instantaneous” response is unsettling for most humans, whether they directly understand and can articulate that fact or not.

To be fair, I have seen this general idea–that humans aren’t comfortable with low task latency–occasionally floated around over the decades. But the part I haven’t seen is the direct comparison with the typical task latency a human assistant would provide, and I think that’s a pretty illustrative and convincing point.

If you like this theory and find it useful, feel free to just refer to it as “Screwdriver Theory”–but I wouldn’t be upset if you linked to this page to explain it. =)

TPM errors in Windows 11

The last several machines I’ve built–and several long-running machines with nothing wrong with them–have suddenly started displaying strange errors regarding Microsoft 365. The affected machines can neither register Office365 apps to the entire machine successfully, nor will they accept user logins to the Outlook desktop app.

Outlook doesn’t give you any obvious errors when it refuses to accept a valid password (confirmed by successfully logging into Outlook on the Web with the same password in the same machine’s browser), but scour Event Viewer carefully enough and you’ll notice TPM-related warnings and errors in the System log each time you attempt to log into desktop Outlook or to register Office 365 apps globally.

The extremely poorly documented actual fix requires at least one and sometimes up to three steps, followed by a reboot. If you want to perform all three steps pre-emptively then reboot, fine–but if you find you need to perform more steps, please realize you will need to reboot again following those additional steps!

Step one: regedit

First, you need to update two registry keys (you may save this text as a .reg file and add it to your Registry directly; the key names have been the same on the ten or so machines I’ve fixed in the last several weeks, despite looking like there might be GUIDs there):

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Common\Identity\Identities]
"EnableADAL"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\Protect\Providers\df9d8cd0-1501-11d1-8c7a-00c04fc297eb]
"ProtectionPolicy"=dword:00000001

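If you’re scripting this across a fleet rather than importing a .reg file, these reg add lines (run from an elevated command prompt) should set the same two values shown above:

reg add "HKCU\Software\Microsoft\Office\16.0\Common\Identity\Identities" /v EnableADAL /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Cryptography\Protect\Providers\df9d8cd0-1501-11d1-8c7a-00c04fc297eb" /v ProtectionPolicy /t REG_DWORD /d 1 /f
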
Step two: (DANGER!) clear TPM (DANGER!)

Next, consider deleting the values currently stored in your TPM. 

WARNING WARNING WARNING: if you are using Bitlocker, make certain you will still be able to decrypt the drive after destroying the secrets in your TPM! At a minimum, this means making sure you’ve got scratch codes available. If you don’t have a full backup available that you are prepared and ready to use, strongly consider decrypting the drive before destroying the data in your TPM–if you turn out not to have the scratch codes, or have the wrong scratch codes, etc etc etc, there is no turning back.

If you have decided to go ahead and clear out your TPM now, click Start, type in tpm.msc, and hit Enter. The TPM settings dialog will pop up. Click “clear TPM.” 
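If you prefer doing this from an elevated PowerShell prompt instead of the tpm.msc snap-in, my understanding is that the built-in Clear-Tpm cmdlet accomplishes the same thing; all of the same BitLocker caveats apply:

# elevated PowerShell; destroys the secrets in the TPM, same as the "clear TPM" button
Clear-Tpm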

Again, do not blindly clear your TPM if you are using BitLocker!

Once you have decided whether or not to pre-emptively clear your TPM, reboot the system, and check to see if your Microsoft 365 woes have been resolved. If they have not, and you haven’t cleared the TPM yet, you’ll need to make arrangements to ensure safety of your data, then clear the TPM, reboot once more, and try again.

If you still have no joy after completing both steps one and two and rebooting, it’s unfortunately time for the last and most obnoxious step.

Step three: create a new, clean Windows user profile

If you’ve added the registry keys and manually cleared the TPM, then rebooted, and your Microsoft 365 login and registration problems still aren’t fixed, the problem is very likely in an undiagnosed area of your user profile.

I only needed to perform this step on two of the ten or so machines I’ve resolved these TPM errors on in the last few weeks–but on those two machines, there was no getting around it; the whole user profile had to go.

The good news is, you can test whether this is necessary before actually destroying anything! Create a new local user on your system (or log in with a different domain user which has never logged into the local system, in an Active Directory environment), log out as the current user, and log in as the brand-new user profile.
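Creating a throwaway local test user takes a few seconds from an elevated command prompt; the username and password here are placeholders:

net user tpmtest SomeTempPassw0rd /add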

Now, first try to activate Office 365, then set up Outlook to access the affected user’s email. If you already completed steps one and two, this attempt should succeed with no issues. Once you’ve both registered Office 365 to the affected user’s account and logged into that user’s email in desktop Outlook, you know your problem really was in the original Windows user profile.

At this point, you can do one of three things. You can:

  • Continue setting up the new Windows profile for regular use, move the user’s data to the new profile, then destroy the old profile
  • Back up the user’s data, destroy the user’s old profile, create a new profile with the same username (or log into Active Directory with the same user credentials), then restore the user’s data
  • Get out your Mad Scientist toolkit and start feverishly trying to analyze the broken profile and figure out why it’s broken (I have had no luck with this approach, myself).

Conclusion

I don’t know what’s going on at Microsoft right now, but these TPM errors have been a plague for quite a while, and Microsoft keeps failing to either fix the issue or even provide the sort of comprehensive workaround I’m documenting here.

The good news is, despite needing to fix ten or so machines and counting so far, the majority of the affected machines were fine after nothing but Step 1 (add registry keys) and a reboot, and most of the rest were okay with only adding Step 2 (clear TPM values) and a reboot.

Again, I beg, plead, warn, scream at you: do not blindly clear the TPM without considering the impact on associated services, especially BitLocker.

Finally, of the two machines that still refused to work properly after clearing the TPM, Step 3 (blowing away the user’s Windows profile, then either manually recreating it from scratch or logging back into the AD environment to recreate it) worked a treat.

So far, I have not encountered any machine that wouldn’t resume O365 functionality after following this guide. (Knock on wood). Good luck, fellow sysadmin or helldesk veteran, and may the Force be with you…

And please, please, please do not blindly clear the TPM without considering the consequences to associated services such as but not limited to BitLocker!

Windows, KVM, and time zones

If you’re running Windows VMs beneath a Linux KVM host, you’ve very likely been plagued by an annoying issue: they start up with the wrong time by several hours, every time they’re rebooted, no matter what you do.

The issue is that Windows assumes the local hardware clock is set to local time, while KVM generally provides its VMs with a “hardware” clock set to UTC, regardless of what time zone the host’s real hardware clock is set to.

Here’s the fix: on your Windows VM, create a new text file called UTCtime.reg, and populate it with the following:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001

Now you can just double-click the patch file to import it into the VM’s registry, then reboot the VM. When it comes back up, it’ll come back up with the correct time (assuming your hardware clock is set to the correct time, of course).
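If you’d rather not click, the same import works as a one-liner from an elevated command prompt inside the VM:

reg import UTCtime.reg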

IDK about y’all, but this one had been pissing me off for years; it’s nice to finally have a fix for it!

Adventures in network repair

Recently, I acquired a new client with a massive load of technical debt (in other words: a new client). The facility internet connection appeared to go down for an hour or two every day, typically in the mid-afternoon.

Complicating things tremendously, this new client had no insight into its own infrastructure: the former IT person had left them with no credentials or documentation for anything. So I was limited to completely unprivileged tools while troubleshooting.

The first major thing I discovered was a somewhat deranged Adtran Netvanta router, as installed by the ISP. When I got a Linux laptop onto the network and issued a dhclient -v, I could see both that the Netvanta was acting as DHCP server, and that it was struggling badly.

My laptop DHCPDISCOVERed about twelve times before getting a DHCPOFFER from the Netvanta, to which it eagerly replied with a DHCPREQ for the offered address… which the Netvanta failed to respond to. My laptop DHCPREQ’d twice more, before giving up and moving back to DHCPDISCOVER. Eventually, the Netvanta DHCPOFFERed again, my laptop DHCPREQ’d, and this time on the third try, the punch-drunk Netvanta DHCPACK’d it, and it was on the network… after a solid two minutes of trying to get an IP address.

Alright, now I knew both that DHCP was coming from the ISP router, and that it was deranged. Why? And could I do anything about it?

The Netvanta was bolted into a wall-mounted half-cab directly touching its sibling Adva, so tightly together you couldn’t slide a playing card between the two. Both devices had functional active cooling, so this wasn’t necessarily a problem… but when I ran a bare finger along the rear face of the chassis, it was a lot warmer than I liked. So, I unbolted one side of it, and re-bolted it catty-corner, with one side higher than the other, which gave some external airflow across the chassis.

Although now it looks like I’m an idiot who can’t line up boltholes, the triangles of airspace on the bottom left and top right of the Netvanta give it some convection space to shed heat from its metal chassis.

And to my great delight, when I got back to my commandeered office (the former IT guy’s personal dungeon), dhclient -v now completed in under 10ms, every time: DHCPDISCOVER–>DHCPOFFER–>DHCPREQ–>DHCPACK with no stumbles at all. As an added bonus, my exploratory internet speedtests went from 65Mbps to 400Mbps!

This made an enormous improvement in the facility’s network health, but there were still problems: the next day, my direct report got frustrated enough with the facility network to turn on a cell phone hotspot. Luckily, I’d already spotted another problem in the same rack:

Whoever installed all this gear apparently didn’t realize there’s a minimum bend radius for fiber optics: for multi-mode fiber like you see above, that minimum bend radius is 30x the diameter of the jacketed pair. I didn’t try to break out a ruler, but that looked a lot more like 10x the jacket diameter than 30x to me, so off I went to grab a five-pack of LC to LC multi-mode patch cables.
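For a rough sense of scale, assuming a typical 3mm jacketed duplex patch cable:

30 x 3mm = 90mm minimum bend radius (about 3.5in)
10 x 3mm = 30mm (barely over an inch), which is about what these bends looked like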

Keeping to our theme of me looking like a drunken redneck while actually improving things technically, I used some mounting points on the face of an abandoned Cisco switch mounted several units higher in the cabinet as a centerpoint anchor for my new patch cables. Does it look stupid? Yes. Does it keep things out of the way without fracturing the glass on the inside of my optic cables? Also yes.

The performance difference here was harder to spot–especially since I needed to perform it on a weekend with the facility empty except for myself–but if you know what you’re looking for, it’s there. Prior to replacing the patch cables, an iperf3 run to one of my internet-based servers had a TCP congestion window of 3.00MiB:

me@swift:~$ iperf3 -c [redacted]
Connecting to host [redacted], port 5201
[  5] local [redacted] port 39874 connected to [redacted] port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  30.3 MBytes   254 Mbits/sec    0   3.00 MBytes
[  5]   1.00-2.00   sec  48.8 MBytes   409 Mbits/sec    0   3.00 MBytes
[  5]   2.00-3.00   sec  52.5 MBytes   440 Mbits/sec    0   3.00 MBytes
[  5]   3.00-4.00   sec  52.5 MBytes   440 Mbits/sec    0   3.00 MBytes
[  5]   4.00-5.00   sec  36.2 MBytes   304 Mbits/sec    0   3.00 MBytes
[  5]   5.00-6.00   sec  38.8 MBytes   325 Mbits/sec    0   3.00 MBytes
[  5]   6.00-7.00   sec  40.0 MBytes   335 Mbits/sec    0   3.00 MBytes
[  5]   7.00-8.00   sec  47.5 MBytes   399 Mbits/sec    0   3.00 MBytes
[  5]   8.00-9.00   sec  52.5 MBytes   440 Mbits/sec    0   3.00 MBytes
[  5]   9.00-10.00  sec  53.8 MBytes   451 Mbits/sec    0   3.00 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   453 MBytes   380 Mbits/sec    0             sender
[  5]   0.00-10.05  sec   452 MBytes   377 Mbits/sec                  receiver

iperf Done.

After replacing the too-tightly-bent fiber patch cables, the raw speed didn’t increase much–but the TCP congestion window doubled to 6.00MiB. This is an excellent sign which–if you understand TCP congestion windowing algorithms–strongly implies a significant decrease in experienced latency.

jim@swift:~$ iperf3 -c [redacted]
Connecting to host [redacted], port 5201
[  5] local [redacted] port 39882 connected to [redacted] port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  29.1 MBytes   244 Mbits/sec    0   6.00 MBytes
[  5]   1.00-2.00   sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
[  5]   2.00-3.00   sec  52.5 MBytes   441 Mbits/sec    0   6.00 MBytes
[  5]   3.00-4.00   sec  47.5 MBytes   398 Mbits/sec    0   6.00 MBytes
[  5]   4.00-5.00   sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
[  5]   5.00-6.00   sec  52.5 MBytes   441 Mbits/sec    0   6.00 MBytes
[  5]   6.00-7.00   sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
[  5]   7.00-8.00   sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
[  5]   8.00-9.00   sec  53.8 MBytes   451 Mbits/sec    0   6.00 MBytes
[  5]   9.00-10.00  sec  52.5 MBytes   440 Mbits/sec    0   6.00 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   498 MBytes   418 Mbits/sec    0             sender
[  5]   0.00-10.05  sec   498 MBytes   415 Mbits/sec                  receiver

iperf Done.

This apparent improvement in latency is confirmed with simpler web-based speedtests to fast.com, which showed an unloaded latency of 3ms and a loaded latency of 48-85ms prior to replacing the cables. After replacing them, fast.com consistently showed unloaded latency of 2ms… and loaded latency of <10ms.

Again, pay attention to the latency. In the “before” fast.com run, we saw a maxed-out download throughput of 500Mbps, which is nice… and in fact, at first glance, you might mistakenly think that’s a better result than the “after” run.

Oh no, you might think–download speed decreased from 500Mbps to 380Mbps! What did we do wrong? That’s the tricky part; we didn’t do anything wrong–something else on the network just siphoned off some of the available throughput while the test was running.

The important thing to notice here is, as mentioned, latency: it’s easy to dismiss the unloaded latency (meaning, how quickly pings return when the main bulk of the test isn’t running) decreasing from 3ms to 2ms. It’s only 1ms, after all… but it’s also a 33% improvement, and it held precisely consistent across many runs.

More conclusively, the loaded latency (time to return a ping when there’s lots of data moving) dropped to a small fraction of its former value, and that result was also consistent across several runs.

There are almost certainly more gremlins to find and eliminate in this long-untended network, but we’re already in a much better position than we started from.

PSA: Cannot open Credentials Manager

I blew several INCREDIBLY frustrating hours trying to troubleshoot issues installing Google Workspace Sync and Microsoft Office 365 on multiple Windows 10 workstations today.

Searching for “failed to create profile” errors when setting up a Google Workspace Sync user for Outlook frequently nets you advice to fire up Windows’ Credential Manager and delete rogue credentials. The same advice often pops up for the dreaded “Trusted Platform Module Has Malfunctioned” error when attempting to register a freshly-downloaded Office 365 application to a user.

Unfortunately, trying to open Credential Manager also fails on affected PCs, with the error “An error occurred while performing this action: 0x80090345.” This was what finally led me to the workaround for the single issue affecting both Office365 setup and Google Workspace Sync setup.

First, open regedit on the affected PC. Then navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Cryptography\Protect\Providers\df9d8cd0-1501-11d1-8c7a-00c04fc297eb, create a new DWORD value named ProtectionPolicy, and set it to 1.
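If you’d rather not click through regedit by hand, the identical change can be saved as a .reg file and imported; this mirrors the key and value described above:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\Protect\Providers\df9d8cd0-1501-11d1-8c7a-00c04fc297eb]
"ProtectionPolicy"=dword:00000001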

After creating the new DWORD and setting it to 1, restart your PC, and opening Credential Manager should then work fine. Once Credential Manager is open, delete anything you find in there, then register your Office365 apps and/or set up your Google Workspace Sync user.

That was four hours of my life I’m never getting back. Hope you found your answer sooner than I did!

520 byte sectors and Ubuntu

I recently bought a server which came with Samsung PM1643 SSDs. Trying to install Ubuntu on them didn’t work on the first try, because the drives came formatted with 520-byte sectors instead of 512-byte.

Luckily, there’s a fix–get the drive(s) to a WORKING Ubuntu system, plug them in, and use the sg_format utility to convert the sector size!

root@ubuntu:~# sg_format -v --format --size=512 /dev/sdwhatever

Yep, it’s really that easy. Be warned, this is a destructive, touch-every-sector operation–so it will take a while, and your drives might get a bit warm. The 3.84TB drives I needed to convert took around 10 minutes apiece.

On the plus side, this also fixes any drive slowdowns due to a lack of TRIM, since it’s a destructive sector-level format.

I’ve heard stories of drives that refused to sg_format initially; if you encounter stubborn drives, you might be able to unlock them by dding a gigabyte or so to them–or you might need to first sg_format them with --size=520, then immediately and I mean immediately again with --size=512.
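I can’t vouch for this from personal experience, but the sequence for a stubborn drive would look something like the following; the device name is a placeholder, and every one of these commands is destructive:

dd if=/dev/zero of=/dev/sdX bs=1M count=1024
sg_format -v --format --size=520 /dev/sdX
sg_format -v --format --size=512 /dev/sdX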

WSL2, keychain, /etc/hosts and you

Unfortunately, there are still a few stumbling blocks in the way of getting a properly, fully working virt-manager setup running under WSL2 on Windows 11.

apt install virt-manager just works, of course–but getting WSL2 to properly handle hostnames and SSH key passphrases takes a bit of tweaking.

First up, install a couple of additional packages:

apt install keychain ssh-askpass

The keychain package allows WSL2 to cache the passphrases for your SSH keys, and ssh-askpass allows virt-manager to bump requests up to you when necessary.

If you haven’t already done so, first generate yourself an SSH key and give it a passphrase:

me@my-win11:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (~/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ~/.ssh/id_rsa
Your public key has been saved in ~/.ssh/id_rsa.pub

You will also need to configure keychain itself, by adding the following to the end of your .bashrc:

# For Loading the SSH key
/usr/bin/keychain -q --nogui $HOME/.ssh/id_rsa
source $HOME/.keychain/$HOSTNAME-sh

Now, you’ll enter your SSH key passphrase each time you open a WSL2 terminal, and it will be remembered for SSH sessions opened via that terminal (or via apps opened from that terminal, eg if you type in virt-manager).
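With the key cached, pointing virt-manager at a remote KVM host over SSH is a one-liner; the username and hostname here are hypothetical:

virt-manager -c 'qemu+ssh://me@kvmhost/system'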

If you like to set hostnames in /etc/hosts to make your virt-manager connections look more reasonable, there’s one more step necessary. By default, for some reason WSL2 clobbers /etc/hosts each time it’s started.

You can defang this by creating /etc/wsl.conf and inserting this stanza:

[network]
generateHosts = false
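With generateHosts disabled, any entries you add by hand will survive restarts. A purely hypothetical example entry for the KVM host above:

192.168.1.50    kvmhost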

Presto, you can now have a nice, secure, and well-working virt-manager under your Windows 11 WSL2 instance!

screenshot of virt-manager under WSLg
I also edited this screenshot with Ubuntu GIMP installed under WSL2 with apt install gimp. Because of course I did.

One final caveat: I do not recommend trying to create a shortcut in Windows to open virt-manager directly.

You can do that… but if you do, you’re liable to break things badly enough to require a Windows reboot. Windows 11 really doesn’t like launching WSL2 apps directly from a batch file, rather than from within a fully-launched WSL2 terminal!

When NOTHING else will remove a half-installed .deb package

WARNING: ALL WARRANTIES NULL AND VOID

With that important disclaimer out of the way: when you’re stuck in the world’s worst apt -f install loop because of a half-installed package, and can’t figure out any other way to get the damn thing unwedged (eg you’ve removed an /etc directory for a package you installed before, and that breaks an installer script; or the installer script “knows you already have it” and refuses to replace the removed config directory), this is the nuclear option:

sudo nano /var/lib/dpkg/status

Remove the offending package (and any packages that depend on it) entirely from this file, then apt install the offending package again.
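Each package gets its own stanza in that file, separated by blank lines; make a backup copy first, then find the offending stanza and delete it from its Package: line through the following blank line. Using the package from the debacle below as an example:

cp /var/lib/dpkg/status /var/lib/dpkg/status.bak
grep -n "^Package: mysql-server" /var/lib/dpkg/status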

If you’re still broken after that… did I mention all warranties null and void? This is an extremely nuclear option, and I really wouldn’t recommend it outside a throwaway test environment; you’re probably better off just nuking the whole server and reinstalling.

With that said, the next thing I had to do to clean out the remnants of the mysql/mariadb coinstall debacle that inspired this post was:

find / -iname "*mysql*" | grep -v php | grep -v snap | xargs rm -r

This got me out of a half-broken state on a machine that somebody had installed both mariadb-server and mysql-server on, leaving neither working properly.

When static routes on pfSense are ignored

I have this problem pretty frequently, and it always pisses me off: a pfSense router has a static route or two configured, and it works to ping through them in the router’s own Diagnostics … but they’re ignored entirely when requests come from machines on the LAN.

Here’s the fix.

First, as normal, you need to set up a Gateway pointing to the static route relay on the LAN. Then set up a static route through that new Gateway, if you haven’t already.
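As a concrete (and entirely hypothetical) example: if your LAN is 192.168.1.0/24, the second router that can reach the remote subnet sits at 192.168.1.254, and the remote network is 10.20.0.0/24, the pieces look like this:

Gateway: LAN_RELAY, interface LAN, gateway IP 192.168.1.254
Static route: destination network 10.20.0.0/24, gateway LAN_RELAY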

Now, you need to go to System–>Advanced–>Firewall & NAT. Look about halfway down the page for a checkbox labeled “Static route filtering”, with the flavor text “Bypass firewall rules for traffic on the same interface”. Check that. Scroll to the bottom, and click Save.

Once that’s done, if traceroutes from the LAN to the target network still go out through the WAN instead of through your local gateway… add a firewall rule to fix it.

Firewall –> Rules –> Floating

New rule at the TOP.

Action –> Pass
Quick –> CHECK THIS.
Interface –> LAN
Protocol –> Any
Source –> LAN net
Destination –> Network –> [ target subnet ]

Save your firewall rule, and apply it: within a few seconds, traceroutes from the LAN should start showing the new route.