WSL2, keychain, /etc/hosts and you

Unfortunately, there are still a few stumbling blocks in the way of getting a fully working virt-manager setup running under WSL2 on Windows 11.

apt install virt-manager just works, of course – but getting WSL2 to properly handle hostnames and SSH key passphrases takes a bit of tweaking.

First up, install a couple of additional packages:

apt install keychain ssh-askpass

The keychain package allows WSL2 to cache the passphrases for your SSH keys, and ssh-askpass allows virt-manager to bump requests up to you when necessary.

If you haven’t already done so, first generate yourself an SSH key and give it a passphrase:

me@my-win11:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (~/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ~/.ssh/id_rsa
Your public key has been saved in ~/.ssh/id_rsa.pub

You will also need to configure keychain itself, by adding the following to the end of your .bashrc:

# For Loading the SSH key
/usr/bin/keychain -q --nogui $HOME/.ssh/id_rsa
source $HOME/.keychain/$HOSTNAME-sh

Now, you’ll enter your SSH key passphrase once each time you open a WSL2 terminal, and keychain will remember it for SSH sessions opened via that terminal (or via apps launched from that terminal, e.g. if you type in virt-manager).
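
A quick sanity check, if you want to confirm the key really is cached before launching anything (the check itself is my addition, not a required step):

ssh-add -l

If keychain did its job, you'll see your id_rsa fingerprint listed; if you get "The agent has no identities." instead, the .bashrc snippet above isn't being sourced.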

If you like to set hostnames in /etc/hosts to make your virt-manager connections look more reasonable, there’s one more step necessary. By default, for some reason WSL2 clobbers /etc/hosts each time it’s started.

You can defang this by creating /etc/wsl.conf and inserting this stanza:

[network]
generateHosts = false
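
One more wrinkle: wsl.conf is only read when the distribution starts, so restart WSL2 (wsl --shutdown from a Windows prompt works) before testing. After that, your hand-edited entries survive. A sketch of the sort of thing I mean, with made-up names and addresses:

192.168.1.50    vmhost1
192.168.1.51    vmhost2

...and now a virt-manager connection URI like qemu+ssh://me@vmhost1/system reads a lot more pleasantly than a bare IP address.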

Presto, you can now have a nice, secure, and well-working virt-manager under your Windows 11 WSL2 instance!

screenshot of virt-manager under WSLg
I also edited this screenshot with GIMP, installed under WSL2 with apt install gimp. Because of course I did.

One final caveat: I do not recommend trying to create a shortcut in Windows to open virt-manager directly.

You can do that… but if you do, you’re liable to break things badly enough to require a Windows reboot. Windows 11 really doesn’t like launching WSL2 apps directly from a batch file, rather than from within a fully-launched WSL2 terminal!

Fixing clock drift in Windows VMs under KVM

Inside your Windows VM, open an elevated command prompt (right-click Command Prompt from the Start menu, and Run as Administrator), then issue the following command:

bcdedit /set useplatformclock true

Now, you need to restart the guest—this change is persistent, but it doesn’t actually take effect until the guest reboots! After the reboot, the guest’s clock will stop drifting.
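
If you want to double-check that the flag stuck before rebooting (my own habit, not strictly necessary), dump the current boot entry from the same elevated prompt:

bcdedit /enum {current}

You should see useplatformclock listed as Yes in the output.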

zfs set sync=disabled

While benchmarking the Ars Technica Hot Rod server build tonight, I decided to empirically demonstrate the effects of zfs set sync=disabled on a dataset.

In technical terms, sync=disabled tells ZFS “when an application requests that you sync() before returning, lie to it.” If you don’t have applications explicitly calling sync(), this doesn’t result in any difference at all. If you do, it tremendously increases write performance… but, remember, it does so by lying to applications that specifically request that a set of data be safely committed to disk before they do anything else. TL;DR: don’t do this unless you’re absolutely sure you don’t give a crap about your applications’ data consistency safeguards!
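
For reference, the commands themselves are one-liners (the pool and dataset names here are placeholders; the get is just to confirm what you've done):

zfs set sync=disabled tank/mydataset
zfs get sync tank/mydataset
zfs set sync=standard tank/mydataset    # puts things back the way ZFS ships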

In the below screenshot, we see ATTO Disk Benchmark run across a gigabit LAN to a Samba share on a RAIDz2 pool of eight Seagate Ironwolf 12TB disks. On the left: write cache is enabled (meaning, no sync() calls). In the center: write cache is disabled (meaning, a sync() call after each block written). On the right: write cache is disabled, but zfs set sync=disabled has been set on the underlying dataset.

L-R: no sync(), sync(), lying in response to sync().

The effect is clear and obvious: zfs set sync=disabled lies to applications that request sync() calls, resulting in the exact same performance as if they’d never called sync() at all.

Selectively disabling Windows UAC for individual applications

Today a client emailed me to report that since installing Quickbooks “Enterprise” (note the scare quotes there; they are used with malice), her users (who are, sensibly, not Administrators) were faced with a User Account Control prompt (“Do you want to allow the following program to make changes to your computer?”) every time they opened the new version of Quickbooks.  A little further investigation showed that “DBManagerExe.exe” was the actual file throwing the UAC dialog.  Absolutely no information from Intuit is available whatsoever about how or why this program wants Administrator privileges, ways to nerf it, etc. – apparently this “Enterprise” product is just supposed to be run in “Enterprises” by users who are allowed full Administrator privileges.  Because, you know, that’s what “Enterprises” do.  Delightful.

I chased the issue around and around trying to figure out what DBManagerExe.exe actually wanted access to, so I could just grant that to the users… but eventually I was forced to give up and just disable UAC selectively for that one program.  Luckily, while the process is rather arcane, it’s not actually HARD.  So let’s document it here.

1. Download the Microsoft Application Compatibility Toolkit.  I won’t link it here, to avoid creating stale links – just Google it, it should come right up.  Pick the latest version available (currently, 5.6).  Run the installer.

2. Start –> All Programs –> Microsoft Application Compatibility Toolkit –> Compatibility Administrator (32-bit) or Compatibility Administrator (64-bit), as appropriate. Note: just because your system is 64-bit does not necessarily mean that’s the Compatibility Administrator you want here – this needs to match the application you want to selectively allow UAC-less admin privileges for, not the system as a whole!  For DBManagerExe.exe, I needed to select 32-bit.  Further note: if you are not logged in as the actual Administrator account, you should right-click and “Run As Administrator” to open the Compatibility Administrator.  Otherwise, your “fix” won’t fix anything.

3. Click the “Fix” icon on the top toolbar.  Click “Browse” to find the executable you want to enable – for me, it was C:\Program Files (x86)\Intuit\QuickBooks Enterprise Solutions 14.0\DBManagerExe.exe.  Now, enter the name of the program and vendor in the two text boxes above the location in the dialog – this will make it easier to manage later, if you ever need to figure out what you’ve done and to whom.  Click Next.

4. Under Compatibility Modes, click none.  You don’t want this.  (Unless you do, of course, but Compatibility Modes aren’t needed for nerfing UAC dialogs, they’re for something COMPLETELY different and certainly aren’t applicable to running Quickbooks Enterprise 2014, in this case.)  Click Next.

5. Find RunAs Invoker on the list of Compatibility Fixes.  Check it.  Don’t mess with anything else.  Click Next, then click Finish.

6. Save your database (from the button on the toolbar).  Give it a name that makes sense, and save it in C:\Windows\System32.

7. File –> Install from the top menu.  You’ll get a dialog box confirming that you’ve installed your fix.  You should be done now.

Log in as an unprivileged user and test – in my case, for enabling non-Administrators to open Quickbooks “Enterprise” 2014, it worked flawlessly – no more UAC prompt; the user went straight to the new setup wizard, as they should.

Screenshot: disabling UAC selectively in Compatibility Administrator

Note: for this particular diabolically badly written application, just disabling UAC probably won’t be enough: QuickBooks also tends to fail miserably at starting its database manager service, because its service user isn’t placed into the local Administrators group. Each year of QB creates its own service user, in the form QBDataServiceUser24 or similar. If you’re here specifically for Quickbooks and you still get a nasty (this time non-Windows) “you need to be administrator” prompt when you launch QB, you’ll need to find the local service user for the year of Quickbooks in question and add it to the local Administrators group on your machine. Yay, Intuit.
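
From an elevated command prompt, that last fix is a one-liner – just substitute the service user for your particular year of QuickBooks, since QBDataServiceUser24 is only the example name from above:

net localgroup Administrators QBDataServiceUser24 /add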

Using LogMeIn remote control with Linux

In 2013, LogMeIn decided to start forcing the download of a Windows-only executable file for remote control of computers. This, of course, leaves Linux users in the lurch.

The previous Flash interface is still THERE, and in fact it’s SUPPOSED to be available with a click – if you don’t have the plugin, you’re supposed to be presented with a page offering to let you download the plugin OR click another link to go to the Flash interface. Unfortunately, on Linux (Ubuntu at least), said page just instantly flashes away and takes you back to the splash page for the computer you’re connected to.

The workaround here is to log into your LogMeIn account, click the computer you want to control and connect to it (using your login credentials), and then INSTEAD OF CLICKING REMOTE CONTROL go to your address bar and replace “/main.html” at the end of the current URL with “/remctrl.html?type=flash” instead. Hit enter, and your remote control session will start as normal.

BOO to LogMeIn for making this so freaking difficult. >=[

Benchmarking Windows Guests on KVM: I/O performance

I’ve been using KVM in production to host Windows Server guests for close to 4 years now.  I’ve always been thoroughly impressed at what a great job KVM did with accelerating disk I/O for the guests – making Windows guests perform markedly faster virtualized than they used to on the bare metal.  So when I got really, REALLY bad performance recently on a few Windows Server Standard 2012 guests – bad enough to make the entire guest seem “locked up tight” for minutes at a time – I did some delving to figure out what was going on.

Linux and KVM offer a wealth of options for handling caching and underlying subsystems of host storage… an almost embarrassing wealth, which nobody seems to have really benchmarked.  I have seen quite a few people tossing out offhanded comments about this cache mode or that cache mode being “safer” or “faster” or “better”, but no hard numbers.  So to both fix my own immediate problem and do some much-needed documentation, I spent more hours this week than I really want to think about doing some real, no-kidding, here-are-the-numbers benchmarking.

Methodology

Test system: AMD FX-8320 8-core CPU, 32GB DDR3 SDRAM, 1x WD 2TB Black (WD2002FAEX) drive, 1x Samsung 840 PRO Series 512GB SSD, Ubuntu 12.04.2-LTS fully updated, Windows Server 2008 R2 guest OS, HD Tune Pro 5.50 Windows disk benchmark suite.

The host and guest OS are both installed on the WD 2TB Black conventional disk; the Samsung 840 PRO Series SSD is attached to the guest in various configurations for benchmarking.  The guest OS is given approximately 30 seconds to “settle” after each boot and login before running any benchmarks.  No other operations are occurring on either guest or host while benchmarks are run.

Exploratory Testing

Before diving straight into “which combination works the fastest”, I really wanted to explore the individual characteristics of all the various overlapping options available.

The first thing I wanted to find out: how much of a penalty, if any, do you pay for operating a raw disk virtualized under KVM, as opposed to under Windows on the bare metal?  And how much of a boost do the VirtIO guest drivers offer over basic IDE drivers?

Chart: baseline performance

As you can see, we do pay a penalty – particularly without the VirtIO drivers, which offer a substantial increase in performance over the default IDE, even without caching.  We can also see that LVM logical volumes perform effectively identically to actual raw disks.  Nice to know!

Now that we know that “raw is raw”, and “VirtIO is significantly better than IDE”, the next burning question is: how much of a performance hit do we take if we use .qcow2 files on an actual filesystem, instead of feeding KVM a raw block device?  Actually, let’s pause that question – before that, why would we want to use a .qcow2 file instead of a raw disk or LV?  Two big answers: rsync, and state saves.  Unless you compile rsync from source with an experimental patch, you can’t use it to synchronize copies of a guest that are stored on a block device – whereas you can, with a qcow2 or raw file on a filesystem.  And you can’t save state (basically, like hibernation – only much faster, and handled by the host instead of the guest) with raw storage either – you need qcow2 for that.  As a really, really nice aside, if you’re using qcow2 and your host runs out of space… your guest pauses instead of crashing, and as soon as you’ve made more space available on your host, you can resume the guest as though nothing ever happened.  Which is nice.
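
If “state saves” sounds abstract: the qcow2-only flavor of this is what QEMU calls an internal snapshot of a running guest (my mapping of terms onto the feature, not the author’s), which virt-manager exposes in its Snapshots view and virsh exposes like so, with the guest and snapshot names as placeholders:

virsh snapshot-create-as myguest pre-upgrade
virsh snapshot-revert myguest pre-upgrade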

So, if we can afford to, we really would like to have qcow2.  Can we afford to?

Chart: VirtIO, nocache

Yes… yes we can.  There’s nothing too exciting to see here – basically, the takeaway is “there is little to no performance penalty for using qcow2 files on a filesystem instead of raw disks.”  So, performance is determined by cache settings and by the presence of VirtIO drivers in our guest… not so much by whether we’re using raw disks, or LV, or ZVOL, or qcow2 files.

One caveat: I tested using fully-allocated qcow2 files.  I did a little bit of casual testing with sparsely allocated (aka “thin provisioned”) qcow2 files, and basically, they follow the same performance curves, but with roughly half the performance on some writes.  This isn’t  that big a deal, in my opinion – you only have to do a “slow” write to any given block once.  After that, it’s a rewrite, not a new write, and you’re back to the same performance level you’d have had if you’d fully allocated your qcow2 file to start with.  So, basically, it’s a self-correcting problem, with a tolerable temporary performance penalty.  I’m more than willing to deal with that in return for not having to potentially synchronize gigabytes of slack space when I do backups and migrations.

So… since performance is determined largely by cache settings, let’s take a look at how those play out in the guest:

Chart: storage cache methods

In plain English, “writethrough” – the default – is read caching with no write cache, and “writeback” is both read and write caching.  “None” is a little deceptive – it’s not just “no caching”, it actually requires Direct I/O access to the storage medium, which not all filesystems provide.
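
For orientation – this is background, not something the benchmarks below depend on – the cache mode is a per-disk attribute in the libvirt domain XML. A minimal sketch of a VirtIO qcow2 disk using writeback, with the file path and target device as placeholders, looks roughly like this:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>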

The first thing we see here is that the default cache method, writethrough, is very very fast at reading, but painfully slow on writes – cripplingly so.  On very small writes, writethrough is capable of less than 0.2 MB/sec in some cases!  This is on a brand-new 840 Pro Series SSD... and it’s going to get even worse than this later, when we look at qcow2 storage.  Avoid, avoid, avoid.

KVM caching really is pretty phenomenal when it hits, though.  Take a look at the writeback cache method – it jumps well above bare metal performance for large reads and writes… and it’s not a small jump, either; 1MB random reads of well over 1GB / sec are completely normal.  It’s potentially a little risky, though – you could potentially lose guest data if you have a power failure or host system crash during a write.  This shouldn’t be an issue on a stable host with a UPS and apcupsd.

Finally, there’s cache=none.  It works.  It doesn’t impress.  It isn’t risky in terms of data safety.  And it generally performs somewhat better with extremely, extremely small random I/O… but without getting the truly mind-boggling wins that caching can offer.  In my personal opinion, cache=none is mostly useful when you’re limited to IDE drivers in your guest.  Also worth noting: “cache=none” isn’t available on ZFS or FUSE filesystems.

Moving on, we get to the stuff I really care about when I started this project – ZFS!  Storing guests on ZFS is really exciting, because it offers you the ability to take block-level host-managed snapshots of your guests; set and modify quotas; set and configure compression; do asynchronous replication; do block-level deduplication – the list goes on and on and on.  This is a really big deal.  But… how’s the performance?

Chart: ZFS storage methods

The performance is very, very solid… as long as you don’t use writethrough.  If you use writethrough cache and ZFS, you’re going to have a bad time.  Also worth noting: Direct I/O is not available on the ZFS filesystem – although it is available with ZFS zvols! – so there are no results here for “cache=none” and ZFS qcow2.

The big, big, big thing you need to take away from this is that abysmal write performance line for ZFS/qcow2/writethrough – well under 2MB/sec for any and all writes.  If you set a server up this way, it will look blazing quick and you’ll love it… until the first time you or a user tries to write a few hundred MB of data to it across the network, at which point the whole thing will lock up tighter than a drum for half an hour plus.  Which will make you, and your users, very unhappy.

What else can we learn here?  Well, although we’ve got the option of using a zvol – which is basically ZFS’s answer to an LVM LV – we really would like to avoid it, for the same reasons we talked about when we compared qcow to raw.  So, let’s look at the performance of that raw zvol – is it worth the hassle?  In the end, no.

But here’s the big surprise – if we set up a ZVOL, then format it with ext4 and put a .qcow2 on top of that… it performs as well, and in some cases better than, the raw zvol itself did!  As odd as it sounds, this leaves qcow2-on-ext4-on-zvol as one of our best performing overall storage methods, with the most convenient options for management.  It sounds like it’d be a horrible Rube Goldberg, but it performs like best-in-breed.  Who’d’a thunk it?
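
For the curious, the whole Rube Goldberg stack is only a handful of commands to build. This is a hedged sketch with placeholder pool, volume, and guest names, not the exact commands from the benchmark runs:

zfs create -V 100G tank/kvm-ext4
mkfs.ext4 /dev/zvol/tank/kvm-ext4
mkdir -p /var/lib/libvirt/images/kvm-ext4
mount /dev/zvol/tank/kvm-ext4 /var/lib/libvirt/images/kvm-ext4
qemu-img create -f qcow2 /var/lib/libvirt/images/kvm-ext4/guest.qcow2 80G

(Add -o preallocation=full to that last command on a reasonably recent qemu-img if you want the fully-allocated qcow2 behavior discussed earlier, rather than a sparse file.)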

There’s one more scenario worth exploring – so far, since discovering how much faster it was, we’ve almost exclusively looked at VirtIO performance.  But you can’t always get VirtIO – for example, I have a couple of particularly crotchety old P2V’ed Small Business Server images that absolutely refuse to boot under VirtIO without blue-screening.  It happens.  What are your best options if you’re stuck with IDE?

Chart: IDE performance

Without VirtIO drivers, caching does very, very little for you at best, and kills your performance at worst.  (Hello again, horrible writethrough write performance.)  So you really want to use “cache=none” if you’re stuck on IDE.  If you can’t do that for some reason (like using ZFS as a filesystem, not a zvol), writeback will perform quite acceptably… but it will also expose you to whatever added data integrity risk that the write caching presents, without giving you any performance benefits in return.  Caveat emptor.

Final Tests / Top Performers

At this point, we’ve pretty thoroughly explored how individual options affect performance, and the general ways in which they interact.  So it’s time to cut to the chase: what are our top performers?

First, let’s look at our top read performers.  My method for determining “best” read performance was to take the 4KB random read and the sequential read, then multiply the 4KB random by a factor which, when applied to the bare metal Windows performance, would leave a roughly identical value to the sequential read.  Once you’ve done this, taking the average of the two gives you a mean weighted value that makes 4KB read performance roughly as “important” as sequential read performance.  Sorting the data by these values gives us…
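
In case that description is hard to parse, here it is as plain arithmetic – my restatement, not a formula from the original write-up:

weight = bare metal sequential read / bare metal 4KB random read
weighted read = (4KB random read * weight + sequential read) / 2

In other words, a configuration only scores well if it holds up on both small random reads and big sequential ones.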

Chart: top performers, weighted read

Woah, hey, what’s that joker in the deck?  RAIDZ1…?

My primary workstation is also an FX-8320 with 32GB of DDR3, but instead of an SSD, it has a 4 drive RAIDZ1 array of Western Digital 1TB Black (WD1001FAEX) drives.  I thought it would be interesting to see how the RAIDZ1 on spinning rust compared to the 840 Pro SSD… and was pretty surprised to see it completely stomping it, across the board.  At least, that’s how it looks in these benchmarks.  The benchmarks don’t tell the whole story, though, which we’ll cover in more detail later.  For now, we just want to notice that yes, a relatively small and inexpensive RAIDZ1 array does surprisingly well compared to a top-of-the-line SSD – and that makes for some very interesting and affordable options, if you need to combine large amounts of data with high performance access.

Joker aside, the winner here is pretty obvious – qcow2 on xfs, writeback.  Wait, xfs?  Yep, xfs.  I didn’t benchmark xfs as thoroughly as I did ext4 – never tried it layered on top of a zvol, in particular – but I did do an otherwise full set of xfs benchmarks.  In general, xfs slightly outperforms ext4 across an identically shaped curve – enough of a difference to notice on a graph, but not enough to write home about.  The difference is just enough to punt ext4/writeback out of the top 5 – and even though we aren’t actually testing write performance here, it’s worth noting how much better xfs/writeback writes than the two bottom-of-the-barrel “top performers” do.

I keep harping on this, but seriously, look closely at those two writethrough entries – it’s not as bad as writethrough-on-ZFS-qcow2, but it’s still way worse than any of the other contenders, with 4KB writes under 2MB/sec.  That’s single-raw-spinning-disk territory, at best.  Writethrough does give you great read performance, and it’s “safe” as in data integrity, but it’s dangerous as hell in terms of suddenly underperforming really badly under heavy load.  That’s not just “I see a valley on the graph” levels of bad, it’s very potentially “hey IT guy, why did the server lock up?” levels of bad.  I do not recommend writethrough caching, “default” option or not.

How ’bout write performance?  I calculated “weighted write” just like “weighted read” – divide the bare metal sequential write speed by the bare metal 4KB write speed, then apply the resulting factor to all the 4KB random writes, and average them with the sequential writes.  Here are the top 5 weighted write performers:

Chart: top performers, weighted write

The first thing to notice here is that while the top 5 slots have changed, the peak read numbers really haven’t.  All we’ve really done here is kick the writethrough entries to the curb – we haven’t paid any significant penalty in read performance to do so.  Realizing that, let’s not waste too much time talking about this one… instead, let’s cut straight to the “money graph” – our top performers in average mean weighted read and write performance.  The following are, plain and simple, the best performers for any general purpose (and most fairly specialized) use cases:

Chart: top performers, mean weighted read/write

Interestingly, our “jokers” – zvol/ext4/qcow2/writeback and zfs/qcow2/writeback on my workstation’s relatively humble 4-drive RAIDZ1 – are still dominating the pack, at #1 and #2 respectively. This is because they read as well as any of the heavy lifters do, and are showing significantly better write performance – with caveats, which we’ll cover in the conclusions.

Jokers aside, xfs/qcow2/writeback is next, followed by zvol/ext4/qcow2/writeback.  You aren’t seeing any cache=none at all here – the gains some cache=none contenders make in very tiny writes just don’t offset the penalties paid in reads and in larger writes from foregoing the cache.  If you have a very heavily teeny-tiny-write-loaded workload – like a super-heavy-traffic database – cache=none may still perform better…  but you probably don’t want virtualization for a really heavy database workload in the first place, KVM or otherwise.  If you hammer your disks, rust or solid state, to within an inch of their lives… you’re really going to feel that raw performance penalty.

Conclusions

In the end, the recommendation is pretty clear – a ZFS zvol with ext4, qcow2 files, and writeback caching offers you the absolute best performance.  (Using xfs on a zvol would almost certainly perform as well, or even better, but I didn’t test that exact combination here.)  Best read performance, best write performance, qcow2 management features, ZFS snapshots / replication / data integrity / optional compression / optional deduplication / etc – there really aren’t any drawbacks here… other than the complexity of the setup itself. In the real world, I use simple .qcow2 on ZFS, no zvols required. The difference in performance between the two was measurable on graphs, but it’s not significant enough to make me want to actually maintain the added complexity.
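
In case it isn’t obvious what “simple .qcow2 on ZFS” looks like in practice, it’s just a plain dataset holding your image files – a minimal sketch, with the dataset name and compression choice being mine rather than gospel:

zfs create -o compression=lz4 tank/images
qemu-img create -f qcow2 /tank/images/guest.qcow2 80G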

If you can’t or won’t use ZFS for whatever reason (like licensing concerns), xfs is probably your next best bet – but if that scares you, just use ext4 – the difference won’t be enough to matter much in the long run.

There’s really no reason to mess around with raw disks or raw files – they add significant extra hassle, remove significant features, and don’t offer tangible performance benefits.

If you’re going to use writeback caching, you should be extra certain of power integrity – UPS on the server with apcupsd.  If you’re at all uncertain of your power integrity… take the read performance hit and go with nocache.  (On the other hand, if you’re using ZFS and taking rolling hourly snapshots… maybe it’s worth taking more risks for the extra performance.  Ultimately, you’re the admin, you get to call the shots.)

Caveats

It’s very important to note that these benchmarks do not tell the whole story about disk performance under KVM.  In particular, both the random and sequential reads used here bypass the cache considerably more heavily than most general purpose workloads would… minimizing the impact that the cache has.  And yes, it is significant.  See those > 1GB/sec peaks in the random read performance?  That kind of thing happens a lot more in a normal workload than it does in a random walk – particularly with ZFS storage, which uses the ARC (Adaptive Replacement Cache) rather than the simple FIFO cache used by other systems.  The ARC makes decisions about cache eviction based not only on the time since an object was last seen, but the frequency at which it’s seen, among other things – with the result that it kicks serious butt, especially after a good amount of time to warm up and learn the behavior patterns of the system.

So, please take these numbers with a grain of salt.  They’re quite accurate for what they are, but they’re synthetic benchmarks, not a real-world experience.  Windows on KVM is actually a much (much) better experience than the raw numbers here would have you believe – the importance of better-managed cache, and persistent cache, really can’t be over-emphasized.

The actual guest OS installation for these tests was on a single spinning disk, but it used ZFS/qcow2/writeback for the underlying storage.  I needed to reboot the guest after every single row of data – and in many cases, several times more, because I screwed something or another up.  In the end, I rebooted that Windows Server 2008 R2 guest upwards of 50 times.  And on a single spinning disk, shutdowns took about 4 seconds and boot times (from BIOS to desktop) took about 3 seconds.  You don’t get that kind of performance out of the bare metal, and you can’t see it in these graphs, either.  For most general-purpose workloads, these graphs are closer to being a “worst-case scenario” than a direct model.

That sword cuts both ways, though – the “jokers in the deck”, which both live on my RAIDZ1 of spinning rust, aren’t really quite as impressive as they look above.  I’m spitballing here, but I think a lot of the difference is that ZFS is willing to more aggressively cache writes with a RAIDZ array than it is with a single member, probably because it expects that more spindles == faster writes, and it only wants to keep so many writes in cache before it flushes them.  That’s a guess, and only a guess, but the reality is that after those jaw-dropping 1.9GB/sec 1MB random write runs, I could hear the spindles chattering for a significant chunk of time, getting all those writes committed to the rust.  That also means that if I’d had the patience for bigger write runs than 5GB, I’d have seen performance dropping significantly.  So you really shouldn’t think “hey, forget SSDs completely, spinning rust is fine for everybody!”  It’s not.  It is, however, a surprisingly good competitor on the cheap if you buy enough of it, and if your write runs come in bursts – even “bursts” ranging in the gigabytes.

 

latest signed VirtIO drivers

I never can remember how to get to the latest SIGNED VirtIO drivers for Windows guests under KVM, and it always takes me a surprising amount of time to find them.  So, here’s a bookmark for me:

http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/

Note: very much fresher drivers than the latest copy I had!  Newest drivers there at this time are 2013-04-17.  🙂

Windows Server 2012: COA label hatred


Photo: the Windows Server 2012 COA label (product key blurred)

Seriously, Microsoft?  SERIOUSLY?  You expect everybody to read their product key off of THAT FREAKING LABEL and type it in?

Note 1: on my monitor at least, that image is almost exactly at scale.  Hold up a dime and check on your own.

Note 2: WHY DID I BOTHER BLURRING THAT PRODUCT KEY OUT, NOBODY CAN READ IT ANYWAY.  Also: the actual COA label is roughly 1/3 of that sticker in the image – the odd little gray mottling surrounding it is actually text saying “THIS IS NOT A COA”.  You tear off the colorful bit, which is the entirety of the ACTUAL certificate of authenticity, and is to be pasted onto the computer licensed to run this copy of Server 2012.

Note 3: even if you CAN read the product key (I can, thank you genetics, thank you 20/15 eyesight), you still have to – somehow, from somewhere – figure out that you need to open a Powershell prompt and type in “slmgr.vbs /ipk xxxxx-xxxxx-xxxxx-xxxxx-xxxxx” to install it.  Oh yeah, and DO type the dashes, because if you don’t, it will tell you the product key is invalid.  Also: don’t forget to open up the Windows Activation screen afterwards and THEN activate.

Note 4: I KNOW YOU HAVE A QA DEPARTMENT, MICROSOFT.  DO YOU JUST IGNORE THEM, OR WHAT?  Nobody, and I mean nobody should have to tolerate this kind of crap.  I find it absolutely impossible to believe that never once did some poor schmuck do a test run through a product installation and say “hey guys, wow, this… kinda sucks.  Like REALLY badly.”
 

Windows Server 2012 slow network/SMB/CIFS problem

Add me to the list of people who had GLACIALLY slow SMB/CIFS/network file transfer performance between Server 2012 and XP or 7 clients – no idea if it would be any better with a Windows 8 client, but it was TERRIBLE (read: less than 500 KB/sec on a gigabit network with solid state storage) file server performance with XP clients.

Also add me to the list of people for whom disabling mandatory SMB signing did the trick to cure the problem.

http://mctexpert.blogspot.com/2011/02/disable-smb-signing.html

TL;DR:

Open up Group Policy Editor, and right-click-and-edit Default Domain Controller Policy.  Go to Computer Configuration/Policies/Windows Settings/Security Settings/Local Policies/Security Options, and set Domain member: Digitally encrypt or sign secure channel data (always) and Microsoft network server: Digitally sign communications (always) to Disabled.

In theory, gpupdate /force should get the job done, but in practice I had to reboot my Server 2012 instance to make it take effect.  Once I’d done so, the difference in speed was extremely obvious – even folder listings were visibly faster, and copying a 9MB file from a share to a client desktop went from taking 20+ seconds to being instantaneous, as it bloody well should be.
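
If you prefer PowerShell to clicking through the Group Policy editor for the server-side half of this (an alternative I’m adding here, not part of the original fix), the equivalent SMB server setting can be flipped directly on 2012:

Set-SmbServerConfiguration -RequireSecuritySignature $false -EnableSecuritySignature $false -Force

Bear in mind that on a domain controller, Group Policy will reassert whatever the GPO says at the next refresh, so the GPO edit above remains the authoritative fix.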

Worth noting: this shouldn’t be a problem on a non-domain-controller 2012 server, as you can see from the location of the GPs I had to edit.  Also worth noting: these settings aren’t set correctly on Windows Server Essentials 2012 either, despite the fact that WSE is always both a domain controller and a fileserver.  Way to QA your products, MS… sigh.

Windows Server 2012 / Windows 8 activation boondoggle

On my VERY FIRST activation of Windows Server 2012 Standard today, I got the incredibly unhelpful error message “The filename, directory name or volume label syntax is incorrect.”

My first reaction, of course, was “PC Load Letter?!”

My second was to google the error. Unfortunately, but probably not all that surprisingly, INCREDIBLE amounts of weird issues that have nothing to do with each other can spring this error on you. Eventually, I found the one that was actually related to activation of Windows 8, which is the same issue that Server 2012 has. The problem is that MS has configured Windows 8 and Server 2012 by default to look for a Key Management Server… which it isn’t going to find, if you aren’t in an enterprise that maintains a KMS. And it’s not bright enough to fall back to just asking you for an old-style MAK product key, which is almost certainly what you have.

The fix is to go to the command prompt. Which can ALSO be confusing… what you do is, hit the Windows key and type cmd (there’s no visible search box UNTIL you start typing. But just start typing. Yay, Metro interface.) then press enter.  This brings up the familiar old “dos” console.  From there, you want to type in:

slmgr.vbs /ipk  XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

… use your actual product key instead of all those Xs, of course.  You should very quickly get a popup Windows Script Host window that says “Installed Product Key XXXXX-XXXXX-XXXXX-XXXXX-XXXXX successfully”.  Now when you go back to activate your Server 2012 or Windows 8 installation, it will activate successfully.
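
If you’d rather not dig the Windows Activation screen back out of the Metro interface afterwards (my addition, not part of the original workaround), the same console can finish the job:

slmgr.vbs /ato

That attempts activation immediately, using the key you just installed.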