Heartbleed SSL vulnerability

Last night (2014 Apr 7) a massive security vulnerability was publicly disclosed in OpenSSL, the library that encrypts most of the world’s sensitive traffic. The bug in question is approximately two years old – systems older than 2012 are not vulnerable – and affects the TLS “heartbeat” function, which is why the vulnerability has been nicknamed Heartbleed.

The bug allows a malicious remote user to scan arbitrary 64K chunks of the affected server’s memory. This can disclose any and ALL information in that affected server’s memory, including SSL private keys, usernames and passwords of ANY running service accepting logins, and more. Nobody knows if the vulnerability was known or exploited in the wild prior to its public disclosure last night.

If you are an end user:

You will need to change any passwords you use online unless you are absolutely sure that the servers you used them on were not vulnerable. If you are not a HIGHLY experienced admin or developer, you absolutely should NOT assume that sites and servers you use were not vulnerable. They almost certainly were. If you are a highly experienced ops or dev person… you still absolutely should not assume that, but hey, it’s your rope, do what you want with it.

Note that most sites and servers are not yet patched, meaning that changing your password right now will only expose that password as well. If you have not received any notification directly from the site or server in question, you may try a scanner like the one at http://filippo.io/Heartbleed/ to see if your site/server has been patched. Note that this script is not bulletproof, and in fact it’s less than 24 hours old as of the time of this writing, up on a free site, and under massive load.

The most important thing for end users to understand: You must not, must not, MUST NOT reuse passwords between sites. If you have been using one or two passwords for every site and service you access – your email, forums you post on, Facebook, Twitter, chat, YouTube, whatever – you are now compromised everywhere and will continue to be compromised everywhere until ALL sites are patched. Further, this will by no means be the last time a site is compromised. Criminals can and absolutely DO test credentials stolen from one site against other sites, and they reuse them when they work! You absolutely MUST use different passwords – and I don’t just mean tacking a “2” on the end instead of a “1”, or similar cheats – on different sites if you care at all about your online presence, the money and accounts attached to your online presence, etc.

If you are a sysadmin, ops person, dev, etc:

Any system, site, service, or code that you are responsible for needs to be checked for links against OpenSSL versions 1.0.1 through 1.0.1f. Note that that’s the OpenSSL vendor versioning scheme – your individual distribution, if you are using repo versions like a sane person, may use a different numbering scheme. (For example, Ubuntu is vulnerable from 1.0.1-0 through 1.0.1-4ubuntu5.11.)
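For example, on a Debian or Ubuntu box, a quick first pass might look something like this (a sketch only; package names such as libssl1.0.0 vary by release, so check what your distribution actually ships):

dpkg -l openssl libssl1.0.0     # which library package versions are installed?
openssl version -a              # what does the openssl binary itself report?
lsof -n | grep libssl           # which running services currently have libssl mapped?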

Examples of affected services: HTTPS, IMAPS, POP3S, SMTPS, OpenVPN. Fabulously enough, for once OpenSSH is not affected, even in versions linking to the affected OpenSSL library, since OpenSSH did not use the Heartbeat function. If you are a developer and are concerned about code that you wrote, the key here is whether your code exposed access to the Heartbeat function of OpenSSL. If it was possible for an attacker to access the TLS heartbeat functionality, your code was vulnerable. If it was absolutely not possible to check an SSL heartbeat through your application, then your application was not vulnerable even if it linked to the vulnerable OpenSSL library.

Conversely, please realize that just because your service passed an automated scanner like the one linked above doesn’t mean it was safe. Most of those scanners do not test services that use STARTTLS instead of being TLS-encrypted from the get-go, but services using STARTTLS are absolutely still affected. Similarly, none of the scanners I’ve seen will test UDP services – but UDP services are affected too. In short, if you as a developer don’t know for certain that you weren’t exposing access to the TLS heartbeat function, you should assume that your OpenSSL-using application or service was/is exploitable until your libraries are brought up to date.
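If you want to poke at a STARTTLS service by hand, openssl s_client will at least confirm what is actually negotiating TLS. Note this only shows that the service speaks TLS or STARTTLS; it does not test for Heartbleed itself, and the hostnames below are placeholders:

openssl s_client -connect mail.example.com:25 -starttls smtp     # SMTP with STARTTLS
openssl s_client -connect mail.example.com:143 -starttls imap    # IMAP with STARTTLS
openssl s_client -connect www.example.com:443                    # TLS from the get-go (HTTPS)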

You need to update all copies of the OpenSSL library to 1.0.1g or later (or your distribution’s equivalent), both dynamically AND statically linked (PS: stop using static linking, for exactly this kind of reason!), and restart any affected services. You should also, unfortunately, consider any and all credentials, passwords, certificates, keys, etc. that were used on any vulnerable servers, whether directly related to SSL or not, as compromised, and regenerate them. The Heartbleed bug allowed scanning ALL memory on any affected server and thus could be used by a sufficiently skilled attacker to extract ANY sensitive data held in server RAM. As a trivial example, as of today (2014-Apr-08) users at the Ars Technica forums are logging on as other users, using password credentials held in server RAM and exposed by the publicly disclosed exploit test scripts.
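On a Debian-family box, the update-and-restart cycle looks roughly like this (again, package names are a sketch; adjust for your distribution). The lsof check is a handy way to catch services still holding the old, deleted library in memory:

apt-get update && apt-get install openssl libssl1.0.0     # pull in the fixed packages
lsof -n | grep libssl | grep -E 'DEL|deleted'             # anything listed here still needs a restart
/etc/init.d/apache2 restart                               # ...and likewise for every other affected service

When in doubt, a reboot guarantees nothing is still running against the old library.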

Completely eradicating all potential vulnerability is a STAGGERING amount of work and will involve a lot of user disruption. When estimating your paranoia level, please do remember that the bug itself has been in the wild since 2012 – the public disclosure was not until 2014-Apr-07, but we have no way of knowing how long private, possibly criminal entities have been aware of and/or exploiting the bug in the wild.

Enabling core dumps on Apache2.2 on Debian

It was quite an adventure today, figuring out how to get a segfaulting Apache to give me core dumps to analyze on Debian Squeeze. What SHOULD have been easy… wasn’t. Here’s what all you must do:

First of all, you’ll need to set ulimit -c unlimited in your /etc/init.d/apache2 script’s start section.

case $1 in
        start)
                log_daemon_msg "Starting web server" "apache2"
                # set ulimit for debugging
                ulimit -c unlimited

Now make a directory for core dumps – mkdir /tmp/apache2-dumps ; chmod 777 /tmp/apache2-dumps – then you’ll need to apt-get install apache2-dbg libapr1-dbg libaprutil1-dbg to get the debugging symbols.

And, the current (Debian Squeeze, in May 2012) version of Debian does not have PIE support in the default gdb, so you’ll need to install gdb from backports. So, add deb http://backports.debian.org/debian-backports squeeze-backports main to /etc/apt/sources.list, then apt-get update && apt-get install -t squeeze-backports gdb

Now add “CoreDumpDirectory /tmp/apache2-dumps” to /etc/apache2/apache2.conf (or its own file in conf.d, whatever), then /etc/init.d/apache2 stop ; /etc/init.d/apache2 start
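If you’d rather keep the directive in its own file under conf.d, something like this works (the filename coredump is arbitrary):

echo "CoreDumpDirectory /tmp/apache2-dumps" > /etc/apache2/conf.d/coredump
/etc/init.d/apache2 stop ; /etc/init.d/apache2 start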

And once you start getting segfaults, you’ll get a core in /tmp/apache2-dumps/core.

Finally, now that you have your core, you can gdb apache2 /tmp/apache2-dumps/core, bt full, and debug to your heart’s content. WHEW.
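For reference, a typical session looks something like this; on Debian the binary itself lives at /usr/sbin/apache2:

gdb /usr/sbin/apache2 /tmp/apache2-dumps/core
# then, at the (gdb) prompt:
#   bt full          full backtrace of the crashing thread, with local variables
#   info threads     list all threads captured in the dump
#   thread 2         switch to another thread, then bt full again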

Linux Sysadmin 101

I just finished the LibreOffice Impress presentation I’ll be using when I give my first Linux Sysadmin 101 talk at IT-ology this weekend.  The hardest part is always making graphics!

A diagram of the UNIX filesystem structure of a simple webserver

It’s licensed Creative Commons non-commercial share-alike; if you’d like a copy, you can grab one here:  http://jrs-s.net/linux_sysadmin_101/linux_sysadmin_101.odp (ODP, 2.4MB)

More virtualization: multiple Win7 guests on a single Debian host

As a proof of concept for USC computer science labs, I set up eight Windows 7 VMs on the same physical host used in the Windows Server demonstration below, and recorded firing them up simultaneously and doing some light web browsing on several of them. Performance is pretty solid; you could probably cram double this many guests onto that host and still get performance as good as or better than the typical physical lab workstation.


Update: replaced the video with a somewhat more watchable version, with all eight guests tiled on one screen.

Aside from good performance and a single box to maintain, this setup offers some fairly compelling advantages over the traditional computer lab: the host also has a 2TB conventional drive in it, which is where a “gold” image of the Win7 guests is maintained. It only takes about 10 minutes total to reset all of the guests to the “gold” standard; and it would be just as easy to keep multiple gold images on the conventional drive for different classes – Linux images for one class, Windows images with Office for a basic class, Windows images with Visual Studio for another, Solaris for yet another… you get the idea.

Also, the time to “reset” the guests could be substantially faster than that, even, with a little tweaking – using .qcow files instead of whole LVM volumes would allow you to use rsync with the --inplace argument and only have to write over the (relatively few) changed blocks, for example; or in a more advanced layout a separate FreeBSD machine with a large RAIDZ array and iSCSI exports could be used to store the images. There’s still plenty of room for improvement and innovation, but even the simple proof-of-concept (which I put together in roughly half an hour) looks pretty compelling to me.
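As a rough sketch of that rsync idea (the paths and image names below are invented for illustration): note that rsync defaults to whole-file copies when both ends are local, so you need --no-whole-file alongside --inplace to get the write-only-the-changed-blocks behavior.

rsync --inplace --no-whole-file /data/gold/win7-gold.qcow2 /var/lib/libvirt/images/win7-lab01.qcow2
# repeat for each guest image, with the guest shut down first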

Virtualizing Windows Server with KVM

I’ve been surprised and pleased at just how well Windows Server 2008 runs virtualized under Debian Squeeze. I first started running virtual Windows Servers purely for the disaster recovery and portability aspects, expecting to pay with a drop in performance… but what I found was that in a lot of cases, Windows 2008’s performance is actually somewhat better when running virtually. In particular, the ever-annoying reboot cycle gets cut to a tiny, tiny fraction of what it would be if running on “the bare metal.”

It’s also pretty nice never, ever having to play “hunt-the-driver” – the virtual “hardware” is all natively supported by Windows, so a virtual install “just works” the moment it’s done, no fuss no muss. But what about that performance?

Smokin’! That exposes yet another reason to think about virtualization: being able to take advantage of Linux’s highly superior kernel RAID capabilities. The box shown above is running four Crucial C300 128GB solid state drives connected to SATA-3 6Gbps ports on an ASUS board; the Debian Squeeze host has them set up in a kernel RAID10. The resulting 250GB or so of storage is on a performance level that just has to be seen to be believed.
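For the curious, the kernel RAID10 itself is a one-liner with mdadm; the device names below are placeholders for the four SSDs:

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
cat /proc/mdstat            # watch the initial sync
mdadm --detail /dev/md0     # check array health and layout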

Note that while this IS a really “hot” machine, it’s still just one machine, running on commodity hardware – there’s no $50,000 SAN lurking in the background somewhere; that performance is ALL coming from a single machine with a price tag of WELL under $10K.

Ready to upgrade yet? =)