Please STOP trusting email.

It is now Anno Domini 2025, and for some reason, people keep trusting email. That needs to stop. Let’s talk about why–but first, let’s talk about why I’m telling you this now.

All things happen in cycles, including grifting. And folks, we are at an absolute peak of griftitude right now, similar to the days of snake oil peddlers traveling town to town and selling bottled garbage from the back of a wagon.

One type of fraud that’s peaking right now is payroll/escrow fraud, and it depends very much on people trusting email for things they shouldn’t. It’s an easy grift with a decent success rate and potentially massive payoffs.

Anatomy of an email-based grift

First, you discover an existing payer/payee relationship. This is usually public information and not all that hard to figure out. For example, you might discover that Architect A won a public bid for a construction project. That project will usually have publicly disclosed target dates for completion of various stages as well as the project as a whole–if a stage is nearing completion, this is a potential target, so you do a little more research.

Next, you discover that Architect A is using electrical vendor B to handle the lighting. Typically, the architectural firm gets paid first, and subs out the rest of the contractors. So now, you want to figure out who the accounts payable and receivable folks are at both A and B–again, usually publicly accessible information, not hard to figure out.

Now, you’d like to actually compromise somebody at either A or B, and both if possible. If you can manage it, your odds get a lot better, and you might be able to figure out a way to score bigger and possibly multiple payoffs. But it’s not strictly necessary. Let’s say you were not able to get into the actual email of anybody at either firm–you’re still in the ball game, no worries.

Next step: you know the lighting for the project is done, you know who handles accounts receivable at electrical firm B, and you know who handles accounts payable at architect A. So, you spoof some email from B to A. First, you say that B’s banking information has changed–and you “correct” it to a short-lived account you’re using for the grift.

If you timed it right–and if the accounts payable person at A is a useful idiot–the payment for engineering firm B winds up in your own account, and you immediately move the funds out of that account to someplace offshore and crime-friendly. If it takes more than 48 hours for A and/or B to figure out the money went to the wrong place, you’re home free–they can’t get the funds back, once they’ve moved offshore.

This doesn’t need to be an architect and an engineering sub–another common example is real estate firms and closing attorneys. Real estate sales are also publicly posted data, and you can impersonate a real estate firm and ask the closing attorney to wire the escrow (money put down prior to a purchase completing) to a different bank account. It’s the same grift, with the same potentially enormous one-time payout.

The same technique works for payroll

For construction projects, this can be a single score worth potentially millions of dollars. But what if you got some intel on a business that isn’t a construction, architectural, or real estate related business?

No problem–you’ll need to downshift and look for a smaller payout, but payroll fraud works the same way and requires even less research.

All you need for this one is to know who handles payroll at a business, and who is authorized to grant bonuses there. Now you impersonate the person authorized to grant bonuses, email the payroll person, and authorize a bonus–usually between $3,000 and $10,000–to one or more employees.

Since you already impersonated those employees and changed their ACH target the day before, those several-thousand dollar “bonuses” go to you, not the employees… who weren’t expecting a bonus in the first place, and therefore aren’t alarmed when it never shows up.

Generally speaking, you target this one in between paydays, because an individual employee who doesn’t get a paycheck they’re expecting will ring the alarm fast.

Impersonation is incredibly easy

It’s tempting to think this is super high tech information security stuff, but it’s anything but–because email was never designed as a secure protocol in the first place.

Let’s look at physical, postal mail first. What happens if you write “President of the United States of America, 1600 Pennsylvania Ave, Washington DC 20500” in the upper left corner of the envelope?

Your letter gets delivered, is what happens. The postal office does not attempt to verify the “return address” in any way whatsoever–it’s not a form of authentication. It’s on you to realize that this dinky little envelope with a kitten stamp and a postmark from Slapout, AL did not actually originate in the Oval Office.

Email works the same way! Anybody can write anything they like in the FROM: section of an email. It is not validated, in any way, period. If you blindly trust an email based on the FROM: you are making the same mistake as somebody who blindly trusts a postal letter based on what’s scrawled in the upper left corner of the envelope.
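To make that concrete, here’s roughly what forging a From: looks like at the protocol level–a plain SMTP conversation, with every host name, address, and message detail below invented for illustration, and server replies shown in a typical Postfix style. The receiving server happily accepts whatever the sender types into both the envelope and the From: header.

$ nc mail.architect-a.example 25
220 mail.architect-a.example ESMTP
HELO totally-legit.example
250 mail.architect-a.example
MAIL FROM:<ar@vendor-b.example>
250 2.1.0 Ok
RCPT TO:<payables@architect-a.example>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
From: "Accounts Receivable" <ar@vendor-b.example>
To: payables@architect-a.example
Subject: Updated banking information for final payment
Please remit to the new account below...
.
250 2.0.0 Ok: queued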

Infiltration isn’t that much harder

So far, we’ve talked about how grifters can separate fools from thousands or millions of dollars with nothing but publicly available information–and that is, by far, the most common form of grift in my experience as a very senior IT person.

An advanced attacker might aim a little further north, though, and try to genuinely compromise a target’s email account. If the attacker can gain control of a target’s email account, the attacker now has access to private information that makes the crucial job of timing the attack far more accurate.

In our first example, we were banking on electrical firm B completing the lighting phase of the project on the date stated when the building plans were first announced. If that date slipped badly–or, wonder of wonders, the firm finished early–the critical email to change the ACH target might arrive too early (and be discovered) or too late (after the payment has already been made).

But if the attacker can actually compromise the accounts receivable person–or a C-level–at electrical firm B, the attacker can just monitor that email and wait to act until exactly the right time. The attempt is also more likely to succeed, because even a paranoid IT expert who checks the headers will see that the email genuinely came from the target’s legitimate account–but the improvement in timing the attack is frankly far more important than the improvement in “legitimacy” of the attack itself.

How can I verify that an email is legitimate?

If you’re expecting a bunch of highly technical stuff about email headers, I’m going to disappoint you–because the correct answer is “you can’t.”

Yes, a sufficiently cautious and well-informed person can first force their mail client to display the normally-hidden message headers, then verify each step the message has taken across the internet. (This is the electronic version of reading all the postmarks on a physical envelope.)
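Concretely, that means working down the Received: chain, which each handling server stamps onto the top of the message. Here’s a made-up example (every host, address, and ID is invented): the From: claims to be vendor B, but the only hop actually recorded is some random host the architect’s mail exchanger has never heard of.

Received: from totally-legit.example (unknown [198.51.100.23])
        by mx.architect-a.example (Postfix) with ESMTP id 4F1A2B3C
        for <payables@architect-a.example>; Tue, 14 Jan 2025 09:12:44 -0500
From: "Accounts Receivable" <ar@vendor-b.example>
Subject: Updated remittance details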

However, the vast majority of targets are neither sufficiently cautious nor sufficiently well-informed, nor will they ever be. And more importantly, while this sort of sleuthery might be accurate enough to tell you whether a message came from a particular server, it can’t tell you anything about whether the message originated with the human it should have.

So the real answer here is, when money is on the line, don’t trust email. If you get an email asking you to move a significant amount of money, or to give someone access to an account (banking, telephone, online gaming, email, or anything else) you’d be upset at losing control over, don’t do it–instead, call that person, ask to speak to an actual human, and verify the legitimacy of the request.

And, this is important… don’t use the contents of the email to contact that person or organization. If you don’t already know their phone number, website address, etc–close the email, look the contact information up from scratch, then contact them that way to inquire about the validity of the message you received.

How do I protect myself from being scammed?

We’ve already covered “you shouldn’t trust email,” so we won’t belabor that point… but we will now point out that you need to make sure that the other people you associate with aren’t trusting “your” emails either.

If you’re responsible for the movement of significant amounts of money on a regular basis, check the policies of the people and the firms who you expect to pay or to be paid. Make sure they know–preferably, in writing–that you will not act on unverified email instructions, and that you will not issue unverified email instructions either.

This is important, because an entity that screws up and sends your money somewhere else based on an email “from” you will frequently try to make it your problem. As far as they’re concerned, they sent that $10,000 somewhere, so they “paid” and if you didn’t get it, well “that’s on you.”

You might be thinking “well, that’s obviously stupid.” Sure, sometimes it’s obviously stupid. Other times, it’s obviously dishonest. Either way, if you don’t have a written policy statement on file that you will not be held responsible for actions taken on unverified email, you might be left on the hook–and court actions will typically cost more than the amount of money in play, so you don’t want to rely on litigation as a solution here.


Heads up—Let’s Encrypt and Dovecot

Let’s Encrypt certificates work just dandy not only for HTTPS, but also for SSL/TLS on IMAP and SMTP services in mailservers. I deployed Let’s Encrypt to replace manually-purchased-and-deployed certificates on a client server in 2019, and today, users started reporting they were getting certificate expiration errors in mail clients.

When I checked the server using TLS checking tools, they reported that the certificate was fine; both the tools and a manual check of the datestamp on the actual .pem file showed that it had been renewing just fine, with the most recent renewal happening in January and extending the certificate’s validity until April. WTF?

As it turns out, the problem is that Dovecot—which handles IMAP duties on the server—doesn’t notice when the certificate has been updated on disk; it will cheerfully keep using an in-memory cached copy of whatever certificate was present when the service started, forever.

The way to detect this was to use openssl on the command line to connect directly to the IMAPS port:

you@anybox:~$ openssl s_client -showcerts -connect mail.example.com:993 -servername example.com

Scrolling through the connect data produced this gem:

---
Server certificate
subject=CN = mail.example.com

issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3

---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA
Server Temp Key: ECDH, P-384, 384 bits
---
SSL handshake has read 3270 bytes and written 478 bytes
Verification error: certificate has expired

So obviously, the Dovecot service hadn’t reloaded the certificate after Certbot-auto renewed it. One /etc/init.d/dovecot restart later, running the same command instead produced (among all the other verbiage):

---
Server certificate
subject=CN = mail.example.com

issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3

---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA
Server Temp Key: ECDH, P-384, 384 bits
---
SSL handshake has read 3269 bytes and written 478 bytes
Verification: OK
---

With the immediate problem resolved, the next step was to make sure Dovecot gets automatically restarted frequently enough to pick new certs up before they expire. You could get fancy and modify certbot’s cron job to include a Dovecot restart; you can find certbot’s cron job with grep -ir certbot /etc/cron* and add a --deploy-hook argument to restart Dovecot after new certificates are obtained (and only after new certificates are obtained).
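For reference, the modified renew command would look something like this (a sketch; adjust the restart command to whatever your system actually uses to manage Dovecot):

# run from certbot's cron job; the hook only fires when a certificate is actually renewed
certbot -q renew --deploy-hook "/etc/init.d/dovecot restart"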

But I don’t really recommend doing it that way; the cron job might get automatically updated with an upgraded version of certbot at some point in the future. Instead, I created a new root cron job to restart Dovecot once every Sunday at midnight:

# m h dom mon dow   command
0 0 * * Sun /etc/init.d/dovecot restart

Since Certbot renews any certificate with 30 days or less until expiration, and the Sunday restart will pick up new certificates within 7 days of their deployment, we should be fine with this simple brute-force approach rather than a more efficient—but also more fragile—approach tying the update directly to restarting Dovecot using the --deploy-hook argument.


Heartbleed SSL vulnerability

Last night (2014 Apr 7) a massive security vulnerability was publicly disclosed in OpenSSL, the library that encrypts most of the world’s sensitive traffic. The bug in question is approximately two years old – it was introduced with OpenSSL 1.0.1 in early 2012, so systems running older OpenSSL releases are not vulnerable – and affects the TLS “heartbeat” extension, which is why the vulnerability has been nicknamed Heartbleed.

The bug allows a malicious remote user to scan arbitrary 64K chunks of the affected server’s memory. This can disclose any and ALL information in that affected server’s memory, including SSL private keys, usernames and passwords of ANY running service accepting logins, and more. Nobody knows if the vulnerability was known or exploited in the wild prior to its public disclosure last night.

If you are an end user:

You will need to change any passwords you use online unless you are absolutely sure that the servers you used them on were not vulnerable. If you are not a HIGHLY experienced admin or developer, you absolutely should NOT assume that sites and servers you use were not vulnerable. They almost certainly were. If you are a highly experienced ops or dev person… you still absolutely should not assume that, but hey, it’s your rope, do what you want with it.

Note that most sites and servers are not yet patched, meaning that changing your password right now will only expose that password as well. If you have not received any notification directly from the site or server in question, you may try a scanner like the one at http://filippo.io/Heartbleed/ to see if your site/server has been patched. Note that this script is not bulletproof, and in fact it’s less than 24 hours old as of the time of this writing, up on a free site, and under massive load.

The most important thing for end users to understand: You must not, must not, MUST NOT reuse passwords between sites. If you have been using one or two passwords for every site and service you access – your email, forums you post on, Facebook, Twitter, chat, YouTube, whatever – you are now compromised everywhere and will continue to be compromised everywhere until ALL sites are patched. Further, this will by no means be the last time a site is compromised. Criminals can and absolutely DO test compromised credentials from one site on other sites and reuse them elsewhere when they work! You absolutely MUST use different passwords – and I don’t just mean tacking a “2” on the end instead of a “1”, or similar cheats – on different sites if you care at all about your online presence, the money and accounts attached to your online presence, etc.

If you are a sysadmin, ops person, dev, etc:

Any systems, sites, services, or code that you are responsible for needs to be checked for links against OpenSSL versions 1.0.1 through 1.0.1f. Note, that’s the OpenSSL vendor versioning system – your individual distribution, if you are using repo versions like a sane person, may have different numbering schemes. (For example, Ubuntu is vulnerable from 1.0.1-0 through 1.0.1-4ubuntu5.11.)
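On a Debian/Ubuntu-flavored box, a quick first pass looks something like this (a rough sketch; the package names are assumptions that vary by distro, and the lsof pipeline only covers running, dynamically linked processes):

# what do the packages report? (upstream 1.0.1 through 1.0.1f is vulnerable)
openssl version
dpkg -l openssl libssl1.0.0
# which running services have libssl mapped into memory right now?
sudo lsof -n | grep libssl | awk '{print $1}' | sort -u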

Examples of affected services: HTTPS, IMAPS, POP3S, SMTPS, OpenVPN. Fabulously enough, for once OpenSSH is not affected, even in versions linking to the affected OpenSSL library, since OpenSSH did not use the Heartbeat function. If you are a developer and are concerned about code that you wrote, the key here is whether your code exposed access to the Heartbeat function of OpenSSL. If it was possible for an attacker to access the TLS heartbeat functionality, your code was vulnerable. If it was absolutely not possible to check an SSL heartbeat through your application, then your application was not vulnerable even if it linked to the vulnerable OpenSSL library.

In contrast, please realize that just because your service passed an automated scanner like the one linked above doesn’t mean it was safe. Most of those scanners do not test services that use STARTTLS instead of being TLS-encrypted from the get-go, but services using STARTTLS are absolutely still affected. Similarly, none of the scanners I’ve seen will test UDP services – but UDP services are affected. In short, if you as a developer don’t absolutely know that you weren’t exposing access to the TLS heartbeat function, then you should assume that your OpenSSL-using application or service was/is exploitable until your libraries are brought up to date.

You need to update all copies of the OpenSSL library to 1.0.1g or later (or your distribution’s patched equivalent), both dynamically AND statically linked (PS: stop using static links, precisely because of things like this!), and restart any affected services. You should also, unfortunately, consider any and all credentials, passwords, certificates, keys, etc. that were used on any vulnerable servers, whether directly related to SSL or not, as compromised, and regenerate them. The Heartbleed bug allowed scanning ALL memory on any affected server and thus could be used by a sufficiently skilled attacker to extract ANY sensitive data held in server RAM. As a trivial example, as of today (2014-Apr-08) users at the Ars Technica forums are logging on as other users, using password credentials held in server RAM and exposed by publicly disclosed exploit test scripts.
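On Ubuntu/Debian-type systems, the mechanical part of that looks roughly like the following (a sketch; the package names, the lsof trick, and the list of services to restart are assumptions you’ll need to adapt to your own boxes):

sudo apt-get update
sudo apt-get install --only-upgrade openssl libssl1.0.0
# anything still holding the old, now-deleted library in memory needs a restart
sudo lsof -n | grep libssl | grep -i del
sudo service dovecot restart    # repeat for nginx, postfix, openvpn, etc.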

Completely eradicating all potential vulnerability is a STAGGERING amount of work and will involve a lot of user disruption. When estimating your paranoia level, please do remember that the bug itself has been in the wild since 2012 – the public disclosure was not until 2014-Apr-07, but we have no way of knowing how long private, possibly criminal entities have been aware of and/or exploiting the bug in the wild.

Slow performance with dovecot – mysql – roundcube

This drove me crazy forever, and Google wasn’t too helpful.  If you’re running dovecot with mysql authentication, your logins will be exceedingly slow.  This isn’t much of a problem with traditional mail clients – just an annoying bit of a hiccup you probably won’t even notice except for SASL authentication when sending mail – but it makes Roundcube webmail PAINFULLY slow in a VERY obvious way.

The issue is due to PAM authentication being enabled by default in Dovecot, and on Ubuntu at least, it’s done in a really hidden little out-of-the-way file with no easy way to forcibly override it elsewhere that I’m aware of.

Again on Ubuntu, you’ll find the file in question at /etc/dovecot/conf.d/auth-system.conf.ext, and the relevant block should be commented out COMPLETELY, like this:

# PAM authentication. Preferred nowadays by most systems.
# PAM is typically used with either userdb passwd or userdb static.
# REMEMBER: You'll need /etc/pam.d/dovecot file created for PAM
# authentication to actually work. <doc/wiki/PasswordDatabase.PAM.txt>
#passdb {
  # driver = pam
  # [session=yes] [setcred=yes] [failure_show_msg=yes] [max_requests=<n>]
  # [cache_key=<key>] [<service name>]
  #args = dovecot
#}

Once you’ve done this (and remember, we’re assuming you’re using SQL auth here, and NOT actually USING PAM!), you’ll auth immediately instead of failing PAM and then falling back to SQL auth on every auth request, and things will speed up IMMENSELY. This turns Roundcube from “painfully slow” to “blazing fast”.
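For completeness, the SQL side of this lives in 10-auth.conf on a stock Ubuntu install; the relevant includes look something like the below (a sketch based on Ubuntu’s packaging; your layout may differ, and the actual connection details go in dovecot-sql.conf.ext):

# /etc/dovecot/conf.d/10-auth.conf (excerpt)
# system auth stays included, but with its PAM passdb gutted as shown above:
!include auth-system.conf.ext
# SQL passdb/userdb, configured via dovecot-sql.conf.ext:
!include auth-sql.conf.ext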

Block common trojans in SpamAssassin

If you have a reasonably modern (>= 3.1) version of SpamAssassin, you should by default have a MIMEHeader plugin available (at least on Ubuntu).  This enables you to create a couple of custom rules that block the more pernicious “open this ZIP file” style trojans.
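Not sure what you’ve got? A quick sanity check looks something like this (the plugin path is an assumption based on Ubuntu’s packaging):

spamassassin --version
ls /usr/share/perl5/Mail/SpamAssassin/Plugin/ | grep -i mimeheader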

Put the following in /etc/spamassassin/local.cf:

loadplugin Mail::SpamAssassin::Plugin::MIMEHeader

mimeheader ZIP_ATTACHED Content-Type =~ /zip/i
describe ZIP_ATTACHED email contains a zip file attachment
score ZIP_ATTACHED 0.1

header SUBJ_PACKAGE_PICKUP Subject =~ /(parcel|package).*avail.*pickup/i
describe SUBJ_PACKAGE_PICKUP 1 of 2 for meta-rule TROJAN_PACKAGE_PICKUP
score SUBJ_PACKAGE_PICKUP 0.1

header FROM_IRS_GOV From =~ /irs\.gov/i
describe FROM_IRS_GOV 1 of 2 for meta-rule TROJAN_IRS_ZIPFILE
score FROM_IRS_GOV 0.1

meta TROJAN_PACKAGE_PICKUP (SUBJ_PACKAGE_PICKUP &&  ZIP_ATTACHED)
describe TROJAN_PACKAGE_PICKUP Nobody sends a ZIP file to say “your package is ready”.
score TROJAN_PACKAGE_PICKUP 4.0

meta TROJAN_IRS_ZIPFILE (FROM_IRS_GOV &&  ZIP_ATTACHED)
describe TROJAN_IRS_ZIPFILE If the IRS really wants to send a ZIP, they’ll have to find another way
score TROJAN_IRS_ZIPFILE 4.0

As always, spamassassin --lint to make sure the new rules work okay, and /etc/init.d/spamassassin reload to activate them in your running spamd process.

Configuring retry duration in Postfix

By default, Postfix (like most mailservers) will keep trying to deliver mail for unconscionably long periods of time – 5 days. Most users do NOT expect mail that isn’t getting delivered to just sorta hang out for 5 days before they find out it didn’t go through – the typical scenario is that a mistyped email address (or a receiving mailserver that decides to SMTP 4xx you instead of SMTP 5xx you when it really doesn’t want that email, ever) means that three or four days later, the message still hasn’t gotten there, the user has no idea anything went wrong, and then there’s a giant kerfuffle “but I sent that to you days ago!”

If you want to reconfigure Postfix to something a little more in tune with modern sensibilities, add the following to /etc/postfix/main.cf near the top:

# if you can't deliver it in an hour - it can't be delivered!
maximal_queue_lifetime = 1h
maximal_backoff_time = 15m
minimal_backoff_time = 5m
queue_run_delay = 5m
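Postfix won’t pick the new settings up until it re-reads its configuration; on most systems that’s simply:

sudo postfix reload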

Note that this will cause you problems if you need to deliver to someone with a particularly aggressive greylisting setup in place – one that requires retries at or near a full hour after the original temporary rejection it sends you. But that’s okay – greylisting is bad, and the remote admin should FEEL bad for doing it. (Alternately – adjust the numbers above as desired to 2h, 4h, or whatever will allow you to pass the greylisting on the remote end. Ultimately, you’re the one who has to answer to your users about the balance between “knowing right away that the mail didn’t go through” and “managing to survive dumb things the other mail admin may have done on his end.”)