We’ve all noticed that software never seems to get any faster no matter how much faster the hardware gets. This easily observable fact is usually explained in one of two ways:
- Software devs are lazy, and refuse to optimize more than they absolutely must
- Software devs are ambitious, and use all available CPU cycles / IOPS to do as much as possible–so more cycles/IOPS available == more detailed work delivered
These things are both true, but they aren’t the root cause of the issue–they’re simply how the issue tends to get addressed, or how it tends to expose itself.
The real reason software never gets (visibly) faster, unless you install much older operating systems and applications on much newer hardware, is that humans typically aren’t comfortable with “machine-speed” interfaces.
For as long as I’ve been alive, a typical discrete human-visible task performed by a computer–opening an application, saving a file, loading a web page–has tended to take roughly 1,500ms on a typical-to-slow PC, or 500ms on a fast PC.
1,500ms is about the length of time it takes to say “hey, pass me that screwdriver, would you?” and have a reasonably attentive buddy pass you the screwdriver. 500ms is about the length of time it takes to say “Scalpel,” and have a reasonably attentive, professional surgical assistant slap the scalpel in your hand.
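If you want to put rough numbers on your own machine, here’s a minimal Python sketch that times a discrete task and buckets the result against the 500ms / 1,500ms figures above. The command being timed and the threshold labels are illustrative assumptions, not measurements from this post.

```python
import subprocess
import time

# Rough thresholds from the figures above (illustrative, not scientific).
FAST_PC_MS = 500      # "scalpel" latency
SLOW_PC_MS = 1_500    # "screwdriver" latency

def time_task(cmd: list[str]) -> float:
    """Run a discrete, human-visible task and return wall-clock time in ms."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    # Hypothetical stand-in task: cold-start a Python interpreter and exit.
    # Swap in whatever "open an application" means on your system.
    elapsed_ms = time_task(["python3", "-c", "pass"])
    if elapsed_ms <= FAST_PC_MS:
        print(f"{elapsed_ms:.0f}ms: scalpel territory (fast PC)")
    elif elapsed_ms <= SLOW_PC_MS:
        print(f"{elapsed_ms:.0f}ms: screwdriver territory (typical PC)")
    else:
        print(f"{elapsed_ms:.0f}ms: slower than a helpful human")
```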
If you’re still on the fence about this, consider “transitions.” When a simple, discrete task like opening an application starts completing in well under 500ms despite some devs being lazy and other devs being ambitious… that’s when the visual transitions start to appear.
On macOS, when you open applications or switch focus to them, they “stream” from the lower right-hand corner of the screen up to the upper-left corner of where the window will actually be, then expand out from there toward the lower-right until the app is full size. Windows expands windows upward from the taskbar. Even Linux distributions intended for end users employ graphical transitions that slow things down. Why?
Because “instantaneous” response is unsettling for most humans, whether they directly understand and can articulate that fact or not.
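To make that concrete, here’s a hedged Python sketch of the pattern those transitions amount to: an artificial floor on perceived latency, so that work finishing in 40ms still “feels” like a comfortable response. The 500ms default and the open_document example are assumptions for illustration, not how any particular OS implements its animations.

```python
import functools
import time

def min_perceived_latency(floor_ms: float = 500.0):
    """Decorator that pads a too-fast operation up to a comfortable floor.

    In spirit, this is what open/close transitions do: if the real work
    finishes in 40ms, the user still experiences roughly floor_ms of
    "response". (The 500ms default is an assumption based on the figures
    above, not a platform constant.)
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms < floor_ms:
                time.sleep((floor_ms - elapsed_ms) / 1000)
            return result
        return wrapper
    return decorator

@min_perceived_latency(floor_ms=500.0)
def open_document(path: str) -> str:
    # Hypothetical stand-in for a "discrete, human-visible task".
    with open(path) as f:
        return f.read()
```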
To be fair, I have seen this general idea–that humans aren’t comfortable with low task latency–occasionally floated around over the decades. But the part I haven’t seen is the direct comparison with the typical task latency a human assistant would provide, and I think that’s a pretty illustrative and convincing point.
If you like this theory and find it useful, feel free to just refer to it as “Screwdriver Theory”–but I wouldn’t be upset if you linked to this page to explain it. =)