Bit9

An important OpenSSL vulnerability was disclosed recently, and it’s gotten a lot of attention, deservedly so. SSL is one of the most important means of securing communications these days, for online banking and shopping, as well as web applications and services and plenty of other uses.  I must say up front that I have the highest regard for the OpenSSL team and their efforts.  However, the government’s FIPS certification process unfortunately makes OpenSSL less secure.  Here’s how.

FIPS-compliant OpenSSL libraries are not provided in shared-object or DLL form.  They must be built from source by vendors.  The build is a multi-step process, which I’ll describe in very abridged terms, skipping over tedious but required bits such as validating the source’s signatures; there are other, more technical tedious bits to get to.  A FIPS library consists of two parts: the FIPS module (or canister), and the API.  These two parts must be built from two different versions of the source (I’m not making this up), since the core FIPS module must be built from a special one-off version.  After configuration, a stub executable is built that includes the FIPS canister; its sole purpose is to hash the in-memory image of the canister and write the hash to standard output.  That hash is then placed into a C source file as data and compiled into the final DLL or shared object eventually produced.  At run time, FIPS mode is entered via a special call, FIPS_mode_set(), which recalculates the in-memory hash of the FIPS canister and compares it to the original hash generated by the stub executable at build time.  If they match, all is good and FIPS mode is enabled.

The purpose of this rigmarole is presumably to ensure that the image has not been tampered with and matches the intent of the original source signed by the folks at OpenSSL.  Instead, this exercise is fairly silly, ensures almost nothing but compliance, and may actually harm security, as follows.

There are a multitude of ways that the image could be tampered with.  Myriad tools in the toolchain are involved in translating source code to binary, and if any of them is flawed or maliciously tampered with, the output is compromised.  Yet the hash check will not detect this, because the hash is computed over the already-compromised output.  Presumably the hashing aggravation is primarily meant to protect against in-memory tampering with the FIPS canister image.  So let’s examine that scenario.

There are at least two reasons that validating the run-time image is useless and silly.  First, the only time the image is checked is when the FIPS_mode_set() call is made, meaning the image can be tampered with afterward with no protection afforded by the earlier hash check.  Second, all loaded code is already protected (on most popular platforms, anyway) by marking code sections read-only, a protection enforced by the CPU.  Assuming that this protection is somehow maliciously bypassable (a reasonable assumption), there’s no reason to believe the malicious modification will actually happen at load time – it’s far more likely to happen later, during exploitation of some software vulnerability.  In that case, all bets are off, and said malware is free to modify whatever code it desires.

Now, all this unnecessary and pedantic work hasn’t actually caused any harm, probably.  So why the claim that the FIPS process makes the end result less secure?  Here’s why.  The resulting module MUST be loaded at a fixed address.  The code is not compiled as position-independent, so loading the module anywhere else forces the loader to apply relocations; the relocations change bytes in the in-memory image, which changes the FIPS canister’s hash, which (deep breath) finally causes the hash check in FIPS_mode_set() to fail.  Since the module is not relocatable, it cannot take part in Address Space Layout Randomization (ASLR), an important part of modern exploit mitigation, which means the module is ROP fodder.

Further, I suspect (though I have not verified) that the recent OpenSSL security flaw, despite being fixed in OpenSSL, still applies to FIPS-compliant versions of OpenSSL.  If that’s true, the flaw can’t be fixed without putting the FIPS module through the certification process again – a process which takes months.  In fact, the most recent OpenSSL certification process was formally submitted on 12/23/2011, and according to the OpenSSL web site, “The current best estimate for final formal award of a FIPS 140-2 validation certificate is February 2012”, er, a couple of months ago (from http://www.openssl.org/docs/fips/fipsvalidation.html).

The root of this problem is the security-through-compliance model, a favorite pattern of government interference in information security matters.  When bureaucracy and heavy process rule the security landscape, they produce heavy, non-resilient software ecosystems that hamper things like patching best practices, and sometimes, as in this case, manage to inject vulnerabilities of their own.

Another example of this bureaucracy in action is the recent approval by the DoD of Google Android 2.2 for use in sensitive environments.  Android 2.2 vulnerabilities were discovered well before its approval by the DoD, but the process of vetting is so slow and inflexible that this was practically an unavoidable outcome.

It’s sadly ironic when our choices in the infosec space are between compliance and security, when the purpose of the former was supposedly the latter.  Mais, c’est le gouvernement!