Slow domain login on Windows 7

Had a client whose domain login took upwards of 40 seconds on his Windows 7 machine.  Oddly, completely deleting the profile from his workstation didn’t fix the issue – even with a freshly created profile after the first domain login, it still took upwards of 40 seconds.  Just as oddly, other domain profiles didn’t have the same issue on his workstation – they logged in in under 10 seconds.

Never did figure out what caused it, but I did find a cure –

  • Run gpedit.msc.
  • Go to Computer Configuration.
  • Go to Administrative Templates.
  • Go to System.
  • Go to User Profiles.
  • Enable “Set maximum wait time for the network if a user has a roaming user profile or remote home directory” and set it to 0 seconds.
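
If you’d rather script the change than click through gpedit (say, to push it to several machines at once), this policy appears – as far as I can tell; verify with gpresult on your own build before trusting it – to be backed by a WaitForNetwork value under the Policies hive.  Something like this from an elevated prompt should be equivalent:

rem assumption: the GPO above writes the WaitForNetwork DWORD here -- verify before deploying
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\System" /v WaitForNetwork /t REG_DWORD /d 0 /f
gpupdate /force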

Problem solved; the user could now log on in less than ten seconds.

Troubleshooting Exchange 2007/2010: a quick guide

This is mostly intended for myself… but if it helps you, you’re welcome.

Exchange 2007/2010 with Outlook 2007 clients is a hellkitten to get right, and I do not say this affectionately.  You need to get RPC over HTTP working, or the Out-of-Office Assistant will not work, and neither will the Offline Address Book (or, very likely, the GAL).

In order to get RPC over HTTP working, you must have several virtual directories configured correctly in IIS, you must have client certificates ignored on those virtual directories, you must have both Basic AND Integrated authentication on those directories, and you must have a proper SSL certificate on the site.  On a standard Exchange setup, this will be the (Default Web Site).  On an SBS setup, this will be the (SBS Web Applications) site.
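
Before digging through IIS by hand, it may save you time to ask Exchange what it thinks its RPC-over-HTTP settings are.  This cmdlet exists in 2007 SP1 and later (exact property names can vary a little between versions):

Get-OutlookAnywhere | fl ServerName, ExternalHostname, ClientAuthenticationMethod, IISAuthenticationMethods

If IISAuthenticationMethods doesn’t list both Basic and NTLM, start there.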

Definition of “proper” SSL certificate: you must have both the internal domain name AND any external domain names ON THE SAME CERT.  If your internal domain is “domain.local” or something like that, this probably means you’re going to have to use self-signed certs (and deal with security warnings on clients outside the local domain).  If your internal domain is a real, publicly registered FQDN, you ought to be able to get everything on one UCC certificate… you will need, at a minimum, internaldomain.com, mail.internaldomain.com, externaldomain.com, and mail.externaldomain.com.  If possible, you also want autodiscover.externaldomain.com and autodiscover.internaldomain.com, but they aren’t strictly necessary.
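
If you do go the UCC route, you can generate the certificate request right from the EMS.  A minimal sketch – the domain names here are placeholders for your own, and on 2007 you can write the request out with -Path instead of piping to Set-Content:

New-ExchangeCertificate -GenerateRequest -SubjectName "cn=mail.externaldomain.com" -DomainName mail.externaldomain.com,autodiscover.externaldomain.com,mail.internaldomain.com -PrivateKeyExportable $true | Set-Content C:\mail.req

Hand the resulting request file to your certificate vendor, then import and enable whatever they send back (see the certificate tips further down).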

Here are some incredibly brief tips toward finagling the virtual directories and the certificates.  Except where specified otherwise, these are all commandlets run from the Exchange Management Shell – there is very little you can or should be doing from the Exchange Management Console for working with these issues.

Testing from Outlook:
Ctrl+right-click the Outlook icon in the system tray, and you will have options for “Connection Status…” and “Test E-Mail Autoconfiguration…” available.  Your ultimate goal here is to get the “Test E-Mail Autoconfiguration…” option working.  If you DON’T get this working, you’re not going to have a fully functional Exchange setup, regardless of what anything in the “Connection Status…” window tells you.  To get this working, you will need to have either mail.yourdomainname.com or autodiscover.yourdomainname.com both in DNS and on the SSL certificate bound to the site in IIS which hosts the virtual directories for Available Services, the OAB, UM, and OWA.  If you specified both internal and external URLs in your virtual directory setup, both of them need to work properly from inside the domain or local clients will not work; you can’t really control whether they decide to use the InternalUrl or the ExternalUrl, and in my experience, they will frequently choose the ExternalUrl, even if they’re plugged into the same switch and sitting physically right next to the Exchange server.

If your “Test E-Mail Autoconfiguration…” run comes up with failures, you’ve got problems with your certificates, your virtual directories, your settings for the URLs to your virtual directories, or all three… head to the tips below to examine and troubleshoot.
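
One more sanity check that doesn’t need Outlook at all: browse straight to the Autodiscover URL.  You’ll be prompted for credentials, and a working endpoint then answers with a small XML document reporting an ErrorCode 600 “Invalid Request” – which is the good outcome here, since it proves the service itself responded.  If you get a certificate warning at this step instead, the cert (not the virtual directory) is your problem.

https://autodiscover.yourdomainname.com/Autodiscover/Autodiscover.xml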

A word of warning about the Exchange Management Shell:
The EMS commandlets sometimes use Uri and sometimes use Url for their argument names… so be careful; even though they both mean the same thing, you have to get the right arbitrary spelling for the right arbitrary commandlets.  (Thanks for that, Microsoft…)

Another word of warning about the EMS:
you can get away with using all lower case for the commandlet names themselves; the argument names are shown in CamelCase in the examples below, and matching them as shown keeps them readable and keeps you out of trouble.

A third and final word of warning about the EMS:
The examples I’ve shown below are extremely terse, and assume that, once pointed to examples of working usage, you can figure out the gist of what they mean, what they do, and likely useful ways to do related things just from seeing the syntax.  If you don’t feel comfortable that this is the case, then for the love of working systems stop right now and hire a (more experienced) professional!

And now, on to the actual EMS usages:


test basic RPC proxy connectivity:
rpcping -t ncacn_http -s servername -o RpcProxy=proxyservername -P "user,domain,pass" -I "user,domain,pass" -H 2 -u 10 -a connect -F 3 -v 3 -E -R none
test RPC proxy through to Information Store default port on back-end:
rpcping -t ncacn_http -s servername -o RpcProxy=proxyservername -P "user,domain,pass" -I "user,domain,pass" -H 1 -F 3 -a connect -u 10 -v 3 -e 6001
test RPC proxy through to IS backend default port using Mutual auth:
rpcping -t ncacn_http -s ExchangeMBXServer -o RpcProxy=RpcProxyServer -P "user,domain,password" -I "user,domain,password" -H 1 -F 3 -a connect -u 10 -v 3 -e 6001 -B msstd:server_certificate_subject
test all web services:
Test-OutlookWebServices
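A usage note: in 2007 you can point this cmdlet at a specific mailbox to see exactly which service fails and with what error; the parameters changed somewhat in 2010, so check get-help Test-OutlookWebServices on your own box first.

Test-OutlookWebServices -Identity user@yourdomainname.com | fl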
setting the Exchange cert: (note that not all services may be installed)
enable-ExchangeCertificate -thumbprint "thumbprintfromcert" -services "IIS,IMAP,POP,SMTP,UM"
if private key is missing: get serial number from cert and…
certutil -repairstore my "serialnumberfromcert"
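To get the thumbprint (or the serial number) in the first place, list the certificates as Exchange sees them; the Services column shows what each cert is already enabled for:

Get-ExchangeCertificate | fl Thumbprint, SerialNumber, Subject, CertificateDomains, Services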
Autodiscover:
Get-ClientAccessServer | Select Name, *Internal* | fl
Set-ClientAccessServer -Identity servername -AutoDiscoverServiceInternalUri https://mydomain.com/Autodiscover/Autodiscover.xml
OAB:

in the EMC (this one’s the exception that gets done from the Console): Server Configuration -> Client Access -> select the server in the top window -> click the Offline Address Book Distribution tab in the bottom window -> click OAB properties in the right window, under Actions; set the internal and external URLs from there
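
If you’d rather stay in the shell for the OAB as well, the OAB virtual directory follows the same get/set pattern as its siblings below (adjust the identity and URL to match your server and site):

Get-OabVirtualDirectory | Select Name, *url* | fl
Set-OabVirtualDirectory -Identity "<OAB Name>" -InternalUrl https://url.domain.local/OAB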

Web Services:
Get-WebServicesVirtualDirectory | Select name, *url* | fl
Set-WebServicesVirtualDirectory -Identity "<EWS Name>" -InternalUrl https://url.domain.local/EWS/Exchange.asmx
Unified Messaging:
Get-UMVirtualDirectory | Select Name, *url* | fl
Set-UMVirtualDirectory -Identity "<UM Virtual Directory>" -InternalUrl "<URL>/UnifiedMessaging/Service.asmx"

Perspectives on Open Source: The Three P’s

Yesterday and today I attended POSSCON, the Palmetto Open Source Software Conference.  They’ve got a pretty great speaker lineup this year – Chris Wanstrath, one of the co-founders of GitHub, was particularly inspiring.  It’s also pretty interesting to me, as a consultant, to see just who shows up for this kind of thing – an open source conference in a town not particularly known for being a giant mecca for open source.  (In fact, the POSSCON speakers and promoters went out of their way to praise Columbia for opening itself up to the conference – but that’s not the same thing as being someplace like SF or NY, a lodestone that accumulates OSS devs and culture whether it likes it or not.)

So who does show up for an OSS conference in a mid-sized Southern town?  A pretty randomized mix of “suits”, hobbyists, and developers.

The thing that these three basic types of attendee have in common, of course, is that one way or another, they’re interested in open source software – and for the most part, they’re “for it”.  But the reasons vary pretty wildly, and they vary in ways that don’t necessarily match up evenly with the three “obvious” divisions that you’re most likely to see at first glance.

So, if you’re “for” open source software, and you’re interested in actively promoting it, it helps to understand not only why you like it yourself, but why others might – and how their perspectives and yours can dovetail, even if they aren’t the same.  I like to think of these perspectives as “The Three P’s”:

  • Philosophy
  • Pragmatism
  • Paranoia

First, let’s talk about philosophy.  There are a lot of folks – yours truly included – who can get pretty excited about the basic philosophy of open source.  The idea that we’re all contributing to a permanent increase in the sum of human knowledge and capability is pretty heady, and ultimately, that’s what the OSS philosophy is all about.  Proprietary software and knowledge can very easily go away and be lost forever (until somebody reinvents it all over again), but OSS is a lot more likely to survive changes in underlying technologies, organizations, and motivations to remain available for whoever might need it.  Additionally, the reduction of the barrier-to-entry to effectively nil means that a lot of people get empowered further than their monetary income or social circumstances would normally allow.  When somebody talks disparagingly (or affectionately) about “open source hippies”, this is what they’re talking about!

But maybe you don’t care about that.  Maybe you’re a hard-headed realist – and that’s where pragmatism comes into play.  There may be things that you simply can’t do with closed source software, but you’ve found open source software projects that let you do them, or let you do them more easily and cheaply.  If what you want to do is create a collaborative documentation project, you probably can’t find anything better than MediaWiki on an open source software stack.  Or perhaps you’re a developer, and you want easy access to the sheer volume of peer review, in-the-field testing, and free QA and contributions that open sourcing your project can provide.  Or maybe you’re a small business – or a small cog in a very large business – and it’s easier to find the motivation to put a project together than to get budget approval for its software licensing.  Ultimately, though, this P is about a hard-headed, realistic intention to get a job done, and OSS just happens to be the tool that makes it possible for you… or not.  People who fall into this category are the most likely to have “mixed source” infrastructure, where OSS tools sit side-by-side with closed-source, proprietary tools; whatever gives the best ROI is what gets used, period.

Finally, we have paranoia.  This one’s a little misleading; the word has negative connotations, but as the old saying goes, “you’re not paranoid if they really are out to get you.”  Someone primarily motivated by the third P is worried not just about the current situation, but about what can happen tomorrow.  They might be worried about what’s hidden in the code of proprietary applications – what if the vendor left a backdoor in there?  Can they get at my private data?  Might they disable functionality and potentially shut down my business because some automated check “thinks I’m a pirate”?  Or they might be worried about the changing motivations and viability of other organizations – anybody who started creating documents in the 80s has probably been through at least one horrified realization: “I still have the data from that old app, but I don’t have anything that will open it!”  Corporate mergers can also create some pretty nasty situations for the end-user; big orgs frequently swallow small orgs with the express purpose of getting access to the smaller org’s customer base… putting those customers in a “forced switch” situation where the app they originally installed is no longer available or supported, so now they have to migrate to something that may cost more money, may not have the desired feature set, or may for whatever reason “just not fit”.

Conversely, all three P’s can be viewed the other way: someone might think “it’s my work, I don’t want to give it away!” or “things are only as good as what you pay for them” or “how can I control it if I don’t have to budget for it?” and be philosophically in opposition.  They might believe that the documentation isn’t sufficient, or that the support structure isn’t rigid enough, etc. and be pragmatically opposed.  Or, they might cling fiercely to the idea “it’s not safe if there isn’t somebody I can sue” or “I don’t want the whole world to know intimate details of how my systems work!” and be opposed on grounds of paranoia.

It’s important to think about these “three P’s”, and how they apply to you, to others around you, and to each other.  If you’re advocating OSS and want to see it more widely used in your community, understand your own motivations for it, and understand the motivations of the folks who you’d like to spread it to.  If you’re curious about OSS and trying to figure out how or why you should use it (or care), understand your own motivations, and go from there.  And if you, or someone you’re discussing OSS with, are primarily motivated by only one or another of the three P’s, be sure to address how the other two P’s inform the one that’s the primary concern, rather than wasting your time flogging philosophy to a pragmatist, or pragmatism to a paranoiac.

Solid State Drives

If you’ve never seen a machine equipped with a good Solid State Drive (SSD)… they’re pretty impressive.  In this clip, I’m putting an Ubuntu 9.10 workstation with an Intel SSD through its paces.

Some of the reason that machine is so fast is Ubuntu – the newest release has some pretty significant disk-speed-related enhancements – but the vast majority of it is the solid state drive.  (For those of you not familiar with Linux, it might help to think of GIMP as “Photoshop” – both because it does pretty much the same job, and because both are notorious for being EXTREMELY slow to start up.)

You do have to be careful when you’re buying an SSD, though – they’re not all created equal.  In fact, some of them are absolutely atrocious, with significantly worse performance than conventional hard drives… so you need to know what you’re doing (or trust whoever you’re buying from) when you go that route.  In particular, anything with a JMicron controller in it is better taken out back and shot than put in a production machine.  You also need to be aware that you’re going to pay a lot more per megabyte for solid state – an 80GB SSD costs about as much as two 1.5 terabyte conventional hard drives.  So you probably don’t want SSDs (yet) for tasks involving large amounts of bulk storage.

But, as the video demonstrates… if what you need is performance, there’s nothing else in the same league; a few hundred bucks spent on a good SSD will give you more real-world performance benefit for most tasks than several thousand dollars spent otherwise.

It’s also worth noting that current-generation SSDs generally come in the 2.5″ form factor, meaning they fit interchangeably in notebooks, netbooks, and desktop computers.  You typically won’t see as much of the top-end performance on a notebook or netbook – their SATA controllers usually bottleneck at a third of the top-end speed of the best SSDs – but the upgrade is just as worthwhile there (if not more so), because conventional laptop HDDs perform much more poorly than full-size HDDs, so the speed boost is even more of a blessing.

Re-targeting existing .NET apps to a specific runtime version

File under “wow, I had no idea you could even do that”… also, file under “you really shouldn’t have to do that,” but that’s another story.

CPA clients are always an interesting challenge. By the standards of the IT world, their applications are really pretty simple and they don’t have a lot of data to deal with… but unfortunately, the tax codes they have to model were designed for human clerks physically stamping and filing papers in actual filing cabinets, not for modern data storage. Worse, those same tax codes vary from year to year, in arbitrary ways that seem “simple” from a human perspective but are difficult to fit into a logical framework. The end result is, your poor accountant has to deal with software that’s actually re-written once a year, every year, rather than a sane, logical framework that just gets a few variables tweaked from time to time.

Needless to say, quality control on this kind of software is frequently not all that it could be. In one particular case, a client was using an application called TaxWorks. The client has a nice, modern firm with a Windows Terminal Server for all of his accountants to do their work in. Doing it this way lets him (and me!) maintain the bewildering array of niche applications a CPA needs on a single machine, rather than having to keep them installed and updated across an entire network of workstations. For the most part, this is a great thing… but unfortunately, the folks who write TaxWorks apparently never did much testing on any Windows Server platform; the application works fine on Windows XP, and at first it worked fine on Server 2003 R2… until one day, my client called me and said TaxWorks just quit working. Which it did; the app would start but then would hang in several places. Many, many hours of troubleshooting and many, many calls to technical support later, we finally got to the root of the problem: on XP, TaxWorks works fine. But on Server 2003, TaxWorks stops working if any version of the .NET framework later than 2.0 gets installed.

For a while, we got by just avoiding installing .NET 3.0 and 3.5 from Windows Update; but eventually, of course, one of the other applications on the server actually required .NET 3.5. The good news is, it turns out that even if you’re not the developer of a particular .NET app, you can target it to a specific version of the .NET runtime after the fact. It’s not particularly well documented – unless you’re a .NET developer yourself, in which case you should have done this BEFORE your app ever got in front of a customer – but the capability is there. .NET applications have an XML “application configuration file”, usually named “(yourapp).exe.config”, in the same directory as (yourapp).exe. Inside that configuration file, there should already be a supportedRuntime directive, which targets a specific version of the .NET framework… but if there isn’t one, you can insert it (directly under the root configuration element) like this:

 <!-- JRS : attempting to force use of .NET v2 SP2 -->
 <startup>
    <supportedRuntime version="v2.0.50727" />
 </startup>

Of course, one of the challenges is figuring out what the heck runtime version corresponds to a human-readable “version” of .NET – it is extremely specific, and if you don’t get the string right, your app will then refuse to run.  After some trial and error navigating MSDN, I finally found this kb article showing the correct version strings for the various versions of the framework that are out there.  Long story short, if you drop to a cmd prompt and dir %systemroot%\Microsoft.NET\Framework, you’ll get a list of the versions of the .NET framework installed on the machine in question, in the formats you need to fill in the supportedRuntime directive in an application configuration file like the example above. With all that figured out, the code snippet above did the trick – the offending application no longer tries to reference the latest version of .NET installed; now it sticks to 2.0 (and works!) even with 3.5 also installed on the server.
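
For reference, here’s roughly what that directory listing looks like – which folders you actually see depends on which frameworks are installed, but these are the standard version strings from 1.0 through 4.0, and your supportedRuntime value has to match one of them exactly:

C:\>dir /b %systemroot%\Microsoft.NET\Framework
v1.0.3705
v1.1.4322
v2.0.50727
v3.0
v3.5
v4.0.30319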

10 PRINT “HELLO WORLD” : GOTO 10

Today I was telling my friend Chris about setting up Xrdp on Ubuntu Linux, and he said “you know, you really ought to write a blog for all the business stuff you do.” At first, it seemed redundant – I’ve been running technical wiki sites for years now – but after I thought about it for a while, it struck me as a really good idea. Wikis work well as a repository of knowledge when you already know what you’re looking for and where to look, but introducing new ideas isn’t one of the format’s strengths.

So what will you see here? Day-to-day problems and solutions, covering most of the major platforms, with an emphasis on the needs you run across servicing power users and small-to-medium businesses.

Thanks for stopping by!