Today’s Apache 101 presentation at IT-Ology went well – 20-something attendees, and plenty of questions asked and answered. If you couldn’t make the presentation today – or if you did, but you want to refer to the slides later – you can find my slide deck at https://jrs-s.net/apache101/.
Understanding ctime
ctimes have got to be one of the most widely misunderstood features of POSIX filesystems. It’s very intuitive – and very wrong – to believe that just as mtime is the time the file was last modified and atime is the time the file was last accessed, ctime must be the time the file was created. Nope!
ctime is not the file creation time – it’s the inode change time. Any time the inode for a file changes, the ctime changes. That isn’t very intuitive, and it happens much more frequently than you might think. In fact, I can’t think of a case where the mtime changes without the ctime changing along with it… and in a lot of cases, the ctime will update when the mtime doesn’t! There’s a LOT of bad information floating around about this… so let’s examine it directly:
me@box:/tmp$ touch test
SHOWTIMES="%P \t ctime: %Cb %Cd %CT \t mtime: %Tb %Td %TT \t atime: %Ab %Ad %AT \n" ; export SHOWTIMES
me@box:/tmp$ find . -maxdepth 1 -name test -printf "$SHOWTIMES"
test ctime: Jul 29 15:53:51.4814403150 mtime: Jul 29 15:53:51.4814403150 atime: Jul 29 15:53:51.4814403150
me@box:/tmp$ cat test > /dev/null
me@box:/tmp$ find . -maxdepth 1 -name test -printf "$SHOWTIMES"
test ctime: Jul 29 15:53:51.4814403150 mtime: Jul 29 15:53:51.4814403150 atime: Jul 29 15:54:18.1014495830
me@box:/tmp$ touch test
me@box:/tmp$ find . -maxdepth 1 -name test -printf "$SHOWTIMES"
test ctime: Jul 29 15:54:25.6522832920 mtime: Jul 29 15:54:25.6522832920 atime: Jul 29 15:54:25.6522832920
me@box:/tmp$ chmod 777 test
me@box:/tmp$ find . -maxdepth 1 -name test -printf "$SHOWTIMES"
test ctime: Jul 29 15:54:32.7214485080 mtime: Jul 29 15:54:25.6522832920 atime: Jul 29 15:54:25.6522832920
me@box:/tmp$ mv test test2 ; mv test2 test
me@box:/tmp$ find . -maxdepth 1 -name test -printf "$SHOWTIMES"
test ctime: Jul 29 15:54:54.6322825980 mtime: Jul 29 15:54:25.6522832920 atime: Jul 29 15:54:25.6522832920
OK, from top to bottom:
1. we create a file, and we check its ctime, mtime, and atime. No surprises.
2. we access the file, by cat’ing it to /dev/null. atime updates, ctime and mtime remain the same. No surprises.
3. we modify the file, by touching it. atime, mtime, and ctime update… not what you expected, amirite?
4. we change permissions on the file. ctime updates, but mtime and atime do not. Again… not what you expected, right?
5. we mv the file a couple times. ctime updates again – mtime and atime still don’t.
This is usually the first answer that should be given to “how do I modify the ctime on a file?” (The second answer is: don’t. By design, there is no feature within a POSIX filesystem to set a ctime to anything other than the current system time, so the only way to do it is either to reset the system clock and then mv the file, or to unmount the filesystem entirely and hex-edit the metadata with a debugging tool. Neither is a good idea, and wanting to do so in the first place usually stems from a misunderstanding of what ctime is and does.)
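Incidentally, if you just want to eyeball all three timestamps for a single file without the find -printf incantation above, plain stat from GNU coreutils prints them directly – its Access, Modify, and Change lines are the atime, mtime, and ctime respectively. A minimal sketch:
me@box:/tmp$ stat test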
SouthEast Linux Fest (SELF) 2012
I had a great time at SELF in Charlotte, NC this weekend. One of the highlights, for me, was for the first time meeting some fellow BSD types in the flesh – Kris Moore (founder of the PC-BSD distribution) and Dru Lavigne (Director of the FreeBSD Foundation), no less. While discussing the pain of doing FreeBSD installs onto RAIDZ, Kris told me about the new graphical installer in PC-BSD that lets you create new RAIDZ arrays of all types and install directly to them, all without ever having to leave the installer, which I found pretty exciting. I found it even more exciting when he told me that the procedures taken by the installer were based partly on my own work at freebsdwiki.net!
I set up a new VM with three virtual disks pretty much the minute I got home, and started a new PC-BSD 9.0 install. Sure enough, although the option is a little hard to discover, I managed to figure it out without having to go in search of any documentation – and without ever leaving the installer, and with a bare minimum of blood and chicken feathers, I got a brand new RAIDZ1 across my three virtual disks set up, and PC-BSD cheerfully installed onto it. (This is testing only, of course – in production, you should only do RAIDZ onto bare metal, not onto an abstraction like Linux logical volumes or raw files accessed through a hypervisor.) Pretty heady stuff!
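For context, the heart of what the installer is automating is a single zpool command – a rough sketch only, with a made-up pool name and example disk device names, and glossing over the datasets, bootcode, and OS install that have to follow:
zpool create tank raidz da0 da1 da2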
To the right – Tux dropped by the table while Dru and Kris and I were chatting, and posed for me with BSD’s horns. How great is that?
Storing PHP sessions in memcached instead of in files
in php.ini:
session.save_handler = memcache
session.save_path = "tcp://serv01:11211,tcp://serv02:11211,tcp://serv03:11211"
Obviously, you need php5-memcache installed, replace “serv01”, “serv02”, and “serv03” with valid server address(es), and you’ll need to restart Apache after making the change.
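If you want to sanity-check that sessions really are landing in memcached after the change, one rough approach is to watch the memcached counters before and after loading a page that calls session_start() – cmd_set and curr_items should tick upward. A sketch, assuming the standard memcached text protocol on the first server from the save_path above:
printf 'stats\r\nquit\r\n' | nc serv01 11211 | grep -E 'cmd_set|curr_items'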
Why would you need to do this? Well, this week I had to deal with a web application server pool that kept slowly increasing its number of children all the way up to MaxClients, no matter what. It was unpredictable, other than being a slow creep. Eventually, stracing the pids showed that they were getting stuck in FLOCK on files in /var/lib/php5/php_sess*. This turns out to be an endemic problem with PHP that the devs don’t seem inclined to fix: PHP’s garbage collector will delete session files dirtily if a PHP process (which, in the case of mod_php, means an Apache process) violates any of the PHP limits, such as max_execution_time (among many, many others). So you end up with your PHP script trying to lock a session file (file descriptor 3) that PHP’s garbage collector already deleted, and therefore an infinitely hung process that will never go away on its own.
Changing over to using memcache to store php sessions eliminated this file lock issue and resulted in a much more stable situation – a server pool that had been creeping up to 800 children per server over the course of a couple hours has been running stable and sweet on less than 150 children per server for days now.
Enabling core dumps on Apache2.2 on Debian
It was quite an adventure today, figuring out how to get a segfaulting Apache to give me core dumps to analyze on Debian Squeeze. What SHOULD have been easy… wasn’t. Here’s what all you must do:
First of all, you’ll need to set ulimit -c unlimited in your /etc/init.d/apache2 script’s start section.
case $1 in
start)
log_daemon_msg "Starting web server" "apache2"
# set ulimit for debugging
ulimit -c unlimited
Now make a directory for core dumps – mkdir /tmp/apache2-dumps ; chmod 777 /tmp/apache2-dumps – then you’ll need to apt-get install apache2-dbg libapr1-dbg libaprutil1-dbg …
And, the current (Debian Squeeze, in May 2012) version of Debian does not have PIE support in the default gdb, so you’ll need to install gdb from backports. So, add deb http://backports.debian.org/debian-backports squeeze-backports main to /etc/apt/sources.list, then apt-get update && apt-get install -t squeeze-backports gdb …
Now add “CoreDumpDirectory /tmp/apache2-dumps” to /etc/apache2/apache2.conf (or its own file in conf.d, whatever), then /etc/init.d/apache2 stop ; /etc/init.d/apache2 start …
And once you start getting segfaults, you’ll get a core in /tmp/apache2-dumps/core.
Finally, now that you have your core, you can gdb apache2 /tmp/apache2-dumps/core, bt full, and debug to your heart’s content. WHEW.
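For reference, here’s the whole dance condensed into one copy-pasteable recap (run as root; this assumes the same paths and package names used above, plus the ulimit edit to the init script shown earlier):
mkdir /tmp/apache2-dumps ; chmod 777 /tmp/apache2-dumps
apt-get install apache2-dbg libapr1-dbg libaprutil1-dbg
echo "deb http://backports.debian.org/debian-backports squeeze-backports main" >> /etc/apt/sources.list
apt-get update && apt-get install -t squeeze-backports gdb
echo "CoreDumpDirectory /tmp/apache2-dumps" >> /etc/apache2/apache2.conf
/etc/init.d/apache2 stop ; /etc/init.d/apache2 start
# ...then, after the next segfault:
gdb apache2 /tmp/apache2-dumps/core    # and 'bt full' at the gdb prompt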
Opening up SQL Server in the Windows Server 2008 firewall
@echo ========= SQL Server Ports ===================
@echo Enabling SQLServer default instance port 1433
netsh firewall set portopening TCP 1433 "SQLServer"
@echo Enabling Dedicated Admin Connection port 1434
netsh firewall set portopening TCP 1434 "SQL Admin Connection"
@echo Enabling conventional SQL Server Service Broker port 4022
netsh firewall set portopening TCP 4022 "SQL Service Broker"
@echo Enabling Transact-SQL Debugger/RPC port 135
netsh firewall set portopening TCP 135 "SQL Debugger/RPC"
@echo ========= Analysis Services Ports ==============
@echo Enabling SSAS Default Instance port 2383
netsh firewall set portopening TCP 2383 "Analysis Services"
@echo Enabling SQL Server Browser Service port 2382
netsh firewall set portopening TCP 2382 "SQL Browser"
@echo ========= Misc Applications ==============
@echo Enabling HTTP port 80
netsh firewall set portopening TCP 80 "HTTP"
@echo Enabling SSL port 443
netsh firewall set portopening TCP 443 "SSL"
@echo Enabling port for SQL Server Browser Service's 'Browse' Button
netsh firewall set portopening UDP 1434 "SQL Browser"
@echo Allowing multicast broadcast response on UDP (Browser Service Enumerations OK)
netsh firewall set multicastbroadcastresponse ENABLE
Testing Ubuntu Precise (12.04 LTS beta)
OK, this is the coolest thing ever – I decided to download the beta for the upcoming Ubuntu Precise Pangolin (12.04 LTS) release and do some testing. And I start installing it in a VM, and *while* it’s installing, I see it populate a dialog with Ubuntu-relevant links, and curiously, I click one… and BAM, instant fully-functional, working Firefox!
So, while you’re installing your operating system, you can goof around on the internet. Seriously, how cool is that?!
Linux Sysadmin 101
I just finished the LibreOffice Impress presentation I’ll be using when I give my first Linux Sysadmin 101 talk at IT-ology this weekend. The hardest part is always making graphics!
It’s licensed Creative Commons non-commercial share-alike; if you’d like a copy, you can grab one here: https://jrs-s.net/linux_sysadmin_101/linux_sysadmin_101.odp (ODP, 2.4MB)
Hash collision DoS vulnerability and PHP 5.x
There’s a lot of fear floating around right now about the hash collision DoS vulnerability which pretty much every web application platform out there (except for Perl, which fixed the vulnerability way back in 2003!) is open to. http://thehackernews.com/2011/12/web-is-vulnerable-to-hashing-denial-of.html
And yeah, it’s a pretty big deal – if you’re vulnerable, any PHP script that so much as touches $_POST or $_GET will be vulnerable. What none of these pages seem very inclined to tell you is exactly HOW to test for vulnerability – and what may have already made this problem a non-issue for you. Spoiler: if you’re running a current LTS version of Debian or Ubuntu and you installed the LAMP stack during the initial OS install or by using tasksel install lamp-server, you’re probably fine. The Suhosin patch gets in the vulnerability’s way in the default configuration it uses on Squeeze and Lucid, and the Debian-style LAMP installation gives you Suhosin, so you’re good to go.
But what if you DIDN’T get your LAMP stack that way, or you just aren’t sure if you’re running Suhosin – or the right configuration of Suhosin – and you want to check? First, dump this little PHP script somewhere that you can access it – for most people, /var/www/hashtest.php will work fine:
<?php $test = $_POST['test']; echo "test passed!\n"; ?>
Note: it’s that useless line accessing $_POST that makes this script potentially vulnerable – without that line, this script wouldn’t actually be vulnerable to the attack, because PHP builds the super-array for $_POST and $_GET lazily. You don’t access it… PHP doesn’t create it. Anyway, now make sure that you can actually access that script using wget, like this: wget -O - http://127.0.0.1/hashtest.php
You should get a quick HTML output that says “test passed!” – which isn’t true, because we didn’t actually test it – but now you know the script will actually execute. Now, wget -qO /tmp/hashcollide.txt https://jrs-s.net/hashcollide.txt – this gives you a “payload” file with a nice set of really nasty hash collisions that will confuse a PHP application badly. Finally, you’re ready to test it out – wget -O - --post-file /tmp/hashcollide.txt http://127.0.0.1/hashtest.php – and you’re off to the races. If you’re lucky, you’ll get this:
test passed!
If you’re not lucky, you get a nice long wait (max_execution_time in php.ini, 30 seconds by default) followed by this:
me@locutus:/var/www$ wget -O - --post-file /tmp/hashcollide.txt http://127.0.0.1/hashtest.php
Connecting to 127.0.0.1:80... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2011-12-29 16:10:14 ERROR 500: Internal Server Error.
In my testing so far, FreeBSD (all versions), Ubuntu Hardy, Ubuntu Oneiric, and Debian Lenny are vulnerable. Ubuntu Lucid and Debian Squeeze were not. Again, this assumes you’ve installed the LAMP stack in the default manner; “cowboy installs” may or may not be vulnerable. Suhosin appears to be the key factor determining whether a particular machine will or will not fall prey to this vulnerability. The fix, if you are vulnerable – upgrade your OS to a current LTS version, upgrade all packages, and make sure you’re running Suhosin – then make sure you actually set the Suhosin variable suhosin.post.max_vars to 1000 or less.
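If you’re not sure whether the Suhosin extension is loaded at all, a quick first check is to ask PHP for its module list – note that this queries the CLI SAPI, while mod_php reads its own config under /etc/php5/apache2/, so the wget test above remains the authoritative answer:
php -m | grep -i suhosin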
In the following example, we discover that a stock Oneiric workstation is vulnerable, and then we fix it:
me@locutus:/var/www$ wget --post-file /tmp/hashcollide.txt -O - http://127.0.0.1/hashtest.php
--2011-12-30 10:27:43--  http://127.0.0.1/hashtest.php
Connecting to 127.0.0.1:80... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2011-12-30 10:28:43 ERROR 500: Internal Server Error.
Yup, it’s broken – so let’s fix it.
me@locutus:~$ sudo apt-get install php5-suhosin
me@locutus:~$ grep suhosin.post.max_vars /etc/php5/apache2/conf.d/suhosin.ini
;suhosin.post.max_vars = 1000
me@locutus:~$ sudo sed -i s/;suhosin\.post\.max_vars/suhosin\.post\.max_vars/ /etc/php5/apache2/conf.d/suhosin.ini
me@locutus:~$ grep suhosin.post.max_vars /etc/php5/apache2/conf.d/suhosin.ini
suhosin.post.max_vars = 1000
me@locutus:~$ wget -qO - --post-file /tmp/hashcollide.txt http://127.0.0.1/hashtest.php
test passed!
For some damn reason, even though Oneiric indicates that suhosin.post.max_vars should be set to 1000 by default, and even though the Suhosin project says they have it default to 200… in actuality, on Oneiric, it defaults to unset. If you uncomment the statement already in suhosin.ini as I did above, though, then restart Apache – you’re set.
Note: the “payload file” referenced above was lifted from https://github.com/koto/blog-kotowicz-net-examples/tree/master/hashcollision .
Review: ASUS “Transformer” Tablet
ASUS has entered the Android tablet market with a compelling new contender – the Eee TF-101 “Transformer.” Featuring an Nvidia Tegra dual-core CPU at 1.0GHz, the device feels “snappier” than most relatively high-end desktop PCs – nothing lags; when you open an app, it pops onto the screen smartly. If you’re accustomed to browsing on smartphones, you’ll feel the performance difference immediately – even notoriously heavy pages like CNN or ESPN render as quickly as they would if you were using a high-end desktop computer.
I first got my hands on a TF101 that one of my clients had purchased, sans docking station. After I’d played with it for a few minutes, I knew I wanted one, but that left the $150 question – what will it be like when it’s docked? The answer is “there’s a lot of potential here” – but there are problems to be worked through before you can give it an unqualified “hey, awesome!”
CNET complains that the TF101 feels cheap, with poorly-rounded corners and a flimsy backplate. After a week or so of ownership and something like 20 hours of active use, I do not agree on either issue. I find the tablet nicely balanced, easy to grip, and solid feeling. It weighs in at 1.6 pounds (tablet only) and 2.9 lbs (tablet and docking station), which puts it pretty much dead center in standard weight for both tablets and netbooks. However, while the weight of the docked TF101 is an ounce heavier than the weight of my Dell Mini 10v, the TF101 feels much less cumbersome – probably because even though it’s slightly heavier, it’s much, much slimmer.
Docking the tablet feels easy and intuitive; line up the edges of the tablet with the edges of the docking station, and you’re in the right position for the sockets to mate. Pressing down first gently, then firmly produces good tactile feedback for whether it’s lined up properly, and whether it’s “clicked” all the way in. The hinge itself is very solid and doesn’t feel “loose” or sloppy at all – in fact, it’s stiff enough that most people would have trouble moving it at all without the tablet already inserted. Undocking the tablet is easy; there’s a release toggle that slides to the left (marked with an arrow POINTING to the left, which is a nice touch); the release toggle also has a solid, not-too-sloppy but not-too-stiff action.
The docking station offers more than just the keyboard. There are also two USB ports (a convenience which is missing on the tablet itself), a full-size SD card slot, and an internal battery pack, roughly comparable to the battery in the tablet itself. The extra battery life is a great feature; the tablet itself gets 9 hours or so of fully active use, and the docking station roughly doubles that. In practical terms, most people will be able to go away for a long weekend with a fully-charged TF101 sans charger, use it for 4 hours a day without ever bothering to turn it off, and come home with a significant fraction of the battery left – especially if they’ve taken the time to set the “disconnect from wireless when screen is off” option in the Power settings. I used the docked tablet 2 to 4 hours a day for a full seven-day week, playing games, emailing, and browsing; at the end of the week I was at 15% charge remaining.
Polaris Office, the office suite shipped with the TF101, was a pleasant surprise – a client asked if I could display PowerPoint presentations on the tablet, and the answer turned out to be “yes, I certainly can.” I’ve only tried a few of them, none of which had any particularly fancy animations; but 40MB slideshows load and display just fine. Paired with an HDMI projector, the TF101 should make a pretty solid little presentation device, particularly since it feels just as “fast” running slideshows as it does browsing and playing games from the Market.
Moving on to the docking station itself, I quite liked the way the touchpad is integrated into the Android OS – instead of an arrow cursor, you get a translucent “bubble” roughly the size of a fingertip press, which felt much more intuitive to me. With the arrow, I tend to try to be just as precise as I would with a mouse – which can be frustrating. The “fingertip bubble” made it easier for me to relax and just “get what you want inside the circle” without trying to be overly finicky. Sensitivity for both tracking and tapping was also very good; the touchpad feels slick and responsive to use.
The keyboard, unfortunately, is a mixed bag – it’s better-suited to large hands than many netbook keyboards, but you won’t ever mistake it for the full-size keyboard on your desk. The dimensions are almost exactly the same as the keyboard on my Dell Mini 10v; but I find that it feels significantly more cramped and awkward – probably because ASUS elected to go the trendy new route of “raised keys with space between them”, where the Mini’s keys are literally edge-to-edge with one another. This should make the TF101 less likely to collect crumbs, skin flakes, and other kinds of “yuck” than the Mini 10v, but I personally would rather deal with more cleaning than less roominess.
Several applications don’t really play well with the keyboard – ConnectBot, which I use as an SSH client to operate remote servers, becomes completely unusable due to handling the shift key wrong – you can’t type anything from ! through + without resorting to re-enabling the onscreen keyboard. In the Android Browser, typing URLs in the address bar works fine, but if you do any significant amount of typing in a form – for example, writing this post in the TinyMCE control WordPress uses – the up and down arrow keys frequently map to the wrong thing. Sometimes up/down arrow would scroll through the text I was typing, sometimes they would tab me to different controls on the page, and some OTHER times they would simply scroll the entire page up and down.
In Polaris Office (the office suite shipped with the TF101), the keyboard itself worked perfectly – but the touchpad was too sensitive and placed too closely. It was difficult to type more than one sentence at a time without the heel of my hand brushing the touchpad and registering as a “tap”, causing the last half of a sentence to appear in the midst of the sentence before it.
Can these problems be mitigated? Probably. The ConnectBot issue was solved pretty simply by Googling “connectbot transformer”, which immediately leads you to a Transformer-specific fork of ConnectBot – after uninstalling the original ConnectBot, temporarily enabling off-Market app installation, and downloading and installing the fork directly from GitHub, my shift-key problems there were solved. Presumably either Google or ASUS will eventually deal with the arrow key behavior in the Android Browser. I tried using the Dolphin HD Browser in the meantime, but had no better luck with it – it is at least consistent in how it handles arrow key usage, but unfortunately it’s consistently wrong – it always scrolls the entire page up and down when you press up or down arrow keys, no matter where the focus on the page is. Finally, you can toggle the touchpad completely on or off by using a function button at the top of the keyboard – but it would be nice to simply change the sensitivity instead, or automatically disable it for half a second or so after keypresses, the way you can on a traditional (non-Android) netbook.
In the end, though, you can’t really fix all the problems by yourself with “tweaking”; some of the frustrations with the poor integration of the physical keyboard into the Android environment are going to keep ambushing you until Google itself addresses them. ASUS and individual app developers can and likely will continue working to mitigate these issues, but it will be a never-ending game of whack-a-mole until Android itself takes adapting to the “netbook” environment more seriously.
Final verdict: The tablet looks, feels, and performs incredibly well; in most cases it “feels faster” than even high-end desktop computers. Even though my Atom-powered Dell Mini 10v has a Crucial C300 SSD (Solid State Drive), the TF101 spanks it thoroughly in pretty much every performance category possible and sends it home crying. Battery life is also phenomenal, at 9-ish active hours undocked or 18-ish hours docked. It looks and feels, on first blush, like it would make a truly incredible netbook when docked – but Android 3.2 and its apps clearly haven’t come to the party well-prepared for a physical keyboard – and it shows, which knocks the initial blush well off the device as a netbook competitor. If you really need physical keyboard and conventional data entry, this is probably not going to be the device for you – at least, not until the rest of the OS and its apps evolve to support it better.
If you want a tablet, I can recommend the TF-101 without reservation. If you want a netbook, though, you should probably give the TF-101 a pass unless and until Google starts taking the idea of “Android Netbooks” seriously.