The HSM Fallacy


Hardware Security Modules (HSMs) are the talk of the town these days when it comes to security in IoT devices.  So let's talk about what these things do, why IoT developers and projects want to use them, and why they may not actually add as much security as you think they do.

An HSM is a single-chip device with non-volatile memory and a processing unit inside the IC package.  It runs a program that exposes that non-volatile memory - through a defined interface and communication protocol over I2C and/or SPI - to a host MCU (Micro-Controller Unit) running the user application.  The HSM's non-volatile memory is where the host MCU's secrets can be kept safely.  In theory.

Secrets include things like secret symmetric encryption/decryption keys, private asymmetric signing/authentication keys, and cipher metadata like nonces, IVs, and counters that must never be repeated.  Any of these things may be required by an application - let's say your application.

Now, the standard best practice for securing your secrets goes something like this.

If your secrets are stored in non-volatile memory inside your MCU's IC package, i.e. in internal flash, you can easily protect them from most individuals and organizations by simply disabling local (e.g. device-in-hand) communication with the device through its debug port (JTAG).  With STM32 MCUs this is done by setting Readout Protection (RDP) level 2.  With JTAG permanently disabled, there is no way that most people, including yourself, can access the content of the internal flash - period.  Also, let's say your application is a pretty run-of-the-mill embedded system that doesn't offer a user-accessible shell with the ability to read memory or execute code.  Most MCU projects don't have anywhere near this capability, so let's assume your application itself is not an attack vector.
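For the curious, here is roughly what setting RDP level 2 looks like in firmware using ST's HAL.  This is a minimal sketch, assuming an STM32F4-class part and the standard HAL option-byte API - check your part's reference manual, and remember that level 2 is permanent and irreversible:

```c
#include "stm32f4xx_hal.h"

/* Permanently disable JTAG/SWD debug access by setting RDP level 2.
 * WARNING: this is one-way - there is no going back to level 0 or 1. */
void lock_device_rdp2(void)
{
    FLASH_OBProgramInitTypeDef ob = {0};

    HAL_FLASHEx_OBGetConfig(&ob);
    if (ob.RDPLevel == OB_RDP_LEVEL_2) {
        return;                      /* already locked */
    }

    HAL_FLASH_Unlock();              /* unlock flash control registers */
    HAL_FLASH_OB_Unlock();           /* unlock option-byte registers   */

    ob.OptionType = OPTIONBYTE_RDP;
    ob.RDPLevel   = OB_RDP_LEVEL_2;  /* the permanent level */
    HAL_FLASHEx_OBProgram(&ob);

    HAL_FLASH_OB_Launch();           /* apply - reloads the option bytes */
}
```

In practice you'd gate a call like this behind a production-provisioning step, so that development units stay debuggable.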

Now, I said most people cannot access flash content with JTAG disabled.  There are organizations in nondescript office complexes with salaried employees that can access the memory.  When one of these secret-extraction projects comes across the desks of the 'extraction team', your device first stops with the hardware engineer.  He or she physically disassembles the IC - known as decapping - exposing the silicon die(s) inside to direct probing.  With highly specialized equipment, the engineer can read out the flash memory directly, completely bypassing the JTAG port.  Once this process is complete, the extracted memory contents - containing your firmware application as well as any stored secrets - are thrown over the wall to the software engineer in the next cubicle.  The software engineer, possibly using custom tools he or she wrote, will execute the application in a simulated environment, examine memory and function calls, and in fairly short order (maybe two or three days including lunch, coffee breaks and that afternoon yoga session) have the secrets fully extracted and delivered to the project manager.

So there you have it: this is what any IoT MCU is vulnerable to.  Of course it costs money and time to do this, so the value of the extracted secrets to the "interested parties" must be worth it.  Not all secrets are worth this effort, and that plays into the design of your IoT security system.

Alright, so you figure your IoT project is worth it to some "bad guys" to capture one of your devices and send it to the nondescript office-complex company with a generic name in the building directory - to get your data-authentication signing key, or just to get your firmware application to clone onto their own device, or to look for weaknesses so they can attack other devices in the field and get them to mount a DDoS against Minecraft servers.  Or maybe to disable a hospital wing or water treatment plant in which your IoT devices are deeply embedded and in control.  It would certainly be worthwhile for the "bad guys" to pay the nondescript office-complex managers and employees to help attack your devices if those devices were in control of lives.

Alright, so you build and deploy some life-controlling IoT devices - or maybe you just don't want anyone to be able to clone the widget you've invested a lot of money to develop.

You want to put an HSM on your device because you heard they were good to have for security reasons.  What does that mean?  Well, it means your secret data-signing or encryption key is not stored in your MCU's internal flash but instead in the HSM's internal flash, presumably because the HSM is much harder for the nondescript office-complex company to run their extraction procedures on.  Remember, though, that an HSM is likely just another MCU under the hood.  Maybe it's got some physical/mechanical guards in place around the die(s).  We don't know exactly what it has to protect its secrets, because the HSM vendors practice security by obscurity (they don't publish what makes their devices more secure than regular MCUs).  But let's say for a moment that the nondescript office-complex salaried employee assigned to this task takes a look and says "sorry boss, this HSM is too tough for me - I've gotta pass on this one".

Okay, so the secret is safe in the HSM, for now.  But your application needs to access this secret from time to time, and here we find a problem.  The application runs on the host MCU - but the secrets are on the HSM.  To get a secret, the host MCU has to make a request over a well-known bus - I2C or SPI - to the HSM, and receive the secret over that same bus.  Well, hold on!  Can't we just attach a logic analyzer to this bus and watch the messages and the secrets flying across?  Absolutely we can.  But the HSM vendors thought of that too.  Their solution?  Encrypt the link - that ought to do it!  If the messages and responses on the I2C/SPI bus are encrypted, the nondescript office-complex salaried employee cannot see the secrets, and cracking the encryption algorithm (AES) is still widely seen as infeasible (your banking system still depends on it) if implemented properly.  Let's say the AES is implemented properly.

AES and similar ciphers are known as symmetric ciphers.  Both parties to the link - host MCU and HSM - need to know the key, which must be kept secret.  So what does this mean?  It means a secret has to be stored - back here again - in your host MCU's internal flash in order to encrypt and decrypt messages to the HSM where your real secrets are stored.  If you think this sounds a bit fishy, you would be right.  You are now back to square one.  The salaried employee at the nondescript office-complex company knows this, and immediately gets to work extracting the contents of your host MCU's flash memory.  The extraction team uses the same process as before, only this time an extra step is added: the software engineer, using the discovered AES cipher key, applies it to the communication link to the HSM and decrypts the HSM's response when it comes back, exposing your super-secret data authentication key without so much as a how-do-you-do.  Maybe this takes another day or two - with the project budget upped accordingly.  Next!
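To see the chicken-and-egg problem concretely, here is a minimal sketch of what the host side of an encrypted HSM link could look like.  Everything here is hypothetical - the READ_SLOT command, the hsm_i2c_transfer() helper and single-block ECB framing are made up for illustration, and the AES calls use the mbedTLS 2.x API - but the structural point stands: the link key must live in the host's own flash.

```c
#include <stdint.h>
#include <stddef.h>
#include "mbedtls/aes.h"

/* The link key has to be readable by the host at runtime, so it ends up
 * baked into the host MCU's internal flash - the very memory we were
 * trying to keep secrets out of in the first place. */
static const uint8_t link_key[16] = {
    0x2b, 0x7e, 0x15, 0x16, 0x28, 0xae, 0xd2, 0xa6,
    0xab, 0xf7, 0x15, 0x88, 0x09, 0xcf, 0x4f, 0x3c,
};

/* Hypothetical raw I2C exchange with the HSM - swap in your HAL calls. */
extern int hsm_i2c_transfer(const uint8_t *tx, size_t tx_len,
                            uint8_t *rx, size_t rx_len);

/* Ask the HSM for the contents of a key slot, over the encrypted link.
 * (A real protocol would use a mode with IVs/nonces; one ECB block keeps
 * the sketch short.) */
int hsm_read_secret(uint8_t slot, uint8_t secret[16])
{
    mbedtls_aes_context aes;
    uint8_t req[16] = { 0x01 /* hypothetical READ_SLOT opcode */, slot };
    uint8_t enc_req[16], enc_rsp[16];

    /* Encrypt the request with the shared link key... */
    mbedtls_aes_init(&aes);
    mbedtls_aes_setkey_enc(&aes, link_key, 128);
    mbedtls_aes_crypt_ecb(&aes, MBEDTLS_AES_ENCRYPT, req, enc_req);

    if (hsm_i2c_transfer(enc_req, sizeof enc_req,
                         enc_rsp, sizeof enc_rsp) != 0) {
        mbedtls_aes_free(&aes);
        return -1;
    }

    /* ...and decrypt the response.  Anyone who extracts link_key from
     * the host's flash can do exactly the same with a logic analyzer. */
    mbedtls_aes_setkey_dec(&aes, link_key, 128);
    mbedtls_aes_crypt_ecb(&aes, MBEDTLS_AES_DECRYPT, enc_rsp, secret);
    mbedtls_aes_free(&aes);
    return 0;
}
```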

Disclaimer.  I know nothing about how HSMs are actually made beyond what is in their published datasheets, and I know nothing about the murky workings of the security underworld.  But I have no doubt in my mind that activities like this exist.  I build projects based on STM32 microcontrollers, and I know that despite my best efforts at hiding and securing secrets, they can be extracted with enough effort, time, and cost.  I also know that adding an HSM increases the BOM cost and complexity of my project for very little actual added security, if my story above is even partially true.

So what can we do?  Well, I think proper security 'architecture' can make a big difference.  Let's look at how I implement secure firmware update with the stm32-secure-patching-bootloader product.  Secure firmware update means that the firmware update file itself is encrypted (kept confidential to protect your IP from cloning and copying by competitors) and signed.  Signing a firmware update means - with the proper implementation - that your devices will only install firmware signed by your private key, which is safely stored in your facility (e.g. on a build machine) and not anywhere on your devices.  What is stored on your devices - and therefore subject to extraction - is the secret encryption (AES) key and the public signing (ECDSA) key.  We'll look at the risks associated with storing these two keys on devices that are outside your control.
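To make the trust split explicit, here is a sketch of which key material lives where.  The names are illustrative, not the bootloader's actual symbols:

```c
#include <stdint.h>

/* Stored on every device, in flash - extractable with enough effort: */
static const uint8_t fw_aes_key[16]   = { 0 }; /* values omitted - decrypts
                                                  firmware update files     */
static const uint8_t fw_ecdsa_pub[65] = { 0 }; /* P-256 verify key - public
                                                  by definition, so no loss
                                                  if it leaks               */

/* Stored on the build machine only, never shipped on any device:
 * the ECDSA private signing key that produces the update signatures. */
```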

Obviously, storing the secret encryption key on your device exposes all your firmware update files to being decrypted should this key be extracted.  But note that if the attackers were able to extract this shared secret key, they have also already extracted your firmware - making the key itself less useful.  With the key, they could pull future updates from a public download server and decrypt them.  I help to mitigate this download-and-decrypt risk in the stm32-secure-patching-bootloader by supporting patching firmware updates, as sketched below.  With the encryption key known, the patches can be decrypted, but they do not contain a complete and functional application - making it difficult if not impossible for someone to clone firmware application version 2 knowing just the patch from version 1 to version 2.  Sure, the nondescript office-complex employees could keep one of your devices hostage and use it as a firmware update incubator - downloading the patches and re-extracting the updated firmware each time.  Or they could just apply the patch algorithm directly.  Patches are not really a defense against these guys, but if that product-wide AES secret key did get out to the general public, the average hacker could at best decrypt patches rather than complete images, since your firmware ships as patch updates, your product is protected by RDP level 2, and the hacker does not have access to the expensive and specialized equipment used by the nondescript office-complex employees.
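Here is why a decrypted patch alone is not enough, sketched as a toy delta format.  This is illustrative only - it is not the stm32-secure-patching-bootloader's actual patch format - but the property it shows is general: every COPY record pulls its bytes out of the base image, so without version 1 in hand the patch reconstructs nothing.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy delta format: a stream of records, each either copying a run of
 * bytes from the base (v1) image or inserting literal new bytes. */
enum { OP_COPY = 0, OP_INSERT = 1, OP_END = 2 };

/* Apply a patch to a base image, producing the new (v2) image.
 * Returns the number of bytes written to out. */
size_t apply_patch(const uint8_t *base, const uint8_t *patch, uint8_t *out)
{
    size_t o = 0;
    for (;;) {
        uint8_t op = *patch++;
        if (op == OP_END)
            return o;
        if (op == OP_COPY) {
            /* Most of a typical patch is COPY records: the bytes come
             * from the base image, not from the patch itself. */
            uint32_t off, len;
            memcpy(&off, patch, 4); patch += 4;
            memcpy(&len, patch, 4); patch += 4;
            memcpy(out + o, base + off, len);
            o += len;
        } else { /* OP_INSERT */
            /* Only the changed bytes travel inside the patch. */
            uint32_t len;
            memcpy(&len, patch, 4); patch += 4;
            memcpy(out + o, patch, len); patch += len;
            o += len;
        }
    }
}
```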

We've established that exposure of the secret encryption key in a firmware update system is not going to enable compromises of individual devices.  Why?  Because there is still the signature.  The AES encryption key is not used to allow or disallow an update to occur - for example, an attacker knowing your AES key cannot modify a firmware image to cause your product to do something bad and then have your product install it.  The signing key prevents this, because the attacker cannot create a valid signature that the stm32-secure-patching-bootloader will accept.  The attacker cannot create a valid signature because they do not have the secret signing key, which only you have, stowed securely away in your office, for example.  Remember that what is stored on each of your devices is the public portion of the signing key.  The public key enables verification but not signing.  Again, this is the same security that your banking system uses and relies on.

Let's say that the nondescript office-complex employee extracts the public signing key from the device on their desk.  What can they do with it?  Not much, actually.  When talking about secure firmware update, the real goal of the bad guys is to be able to inject malicious firmware onto devices.  But without the private signing key they can't do this locally or remotely (by, for example, modifying an update, signing it, placing it on a server, and then somehow tricking your device into fetching an update from there).  Even if they have the public signing key, that means nothing, because it is by definition public.  What they could do is replace the public key on that device on their desk with one of their own, corresponding to a private key that they also control.  In this case they could inject their own firmware at will - but only on that one device where they replaced the public key.  But wait - haven't they already successfully hacked that device?  Could they not just place their own application firmware on the flash directly?  Well, of course they could.

So really what it comes down to is that firmware injection on devices using signed firmware updates - like those using the stm32-secure-patching-bootloader - requires a specific physical attack on each device onto which firmware is to be injected.  This means physically removing a fielded device, bringing it into the labs of the nondescript office-complex company, and executing the attack project.  I don't know about your devices, but if one of my Measurement Earth Air Quality Monitoring devices (which use the stm32-secure-patching-bootloader) went missing, I'd know about it (missing data, a location that has moved, etc.) and I can remotely invalidate that device by disabling its account in the cloud.  This means that the security in the stm32-secure-patching-bootloader is an effective defense against a malicious party pretending to be one of your devices, or gaining control of one or more devices through firmware update.
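For reference, signature verification at update time boils down to a few calls.  Here it is sketched with the mbedTLS 2.x API over a P-256 key - an illustration of the technique, not the stm32-secure-patching-bootloader's actual code:

```c
#include <stdint.h>
#include <stddef.h>
#include "mbedtls/ecdsa.h"
#include "mbedtls/sha256.h"

/* Verify an update image against the device's embedded public key.
 * fw/fw_len:   the decrypted firmware image
 * sig/sig_len: the ASN.1/DER-encoded ECDSA signature from the update file
 * pub/pub_len: the uncompressed P-256 public key (0x04 || X || Y, 65 bytes)
 * Returns 0 when, and only when, the signature is valid. */
int verify_update(const uint8_t *fw, size_t fw_len,
                  const uint8_t *sig, size_t sig_len,
                  const uint8_t *pub, size_t pub_len)
{
    uint8_t hash[32];
    mbedtls_ecdsa_context ctx;
    int ret;

    /* Hash the image - the signature covers the SHA-256 digest. */
    mbedtls_sha256(fw, fw_len, hash, 0 /* 0 = SHA-256, not SHA-224 */);

    mbedtls_ecdsa_init(&ctx);
    ret = mbedtls_ecp_group_load(&ctx.grp, MBEDTLS_ECP_DP_SECP256R1);
    if (ret == 0)
        ret = mbedtls_ecp_point_read_binary(&ctx.grp, &ctx.Q, pub, pub_len);
    if (ret == 0)
        /* Fails unless sig was produced by the matching private key. */
        ret = mbedtls_ecdsa_read_signature(&ctx, hash, sizeof hash,
                                           sig, sig_len);
    mbedtls_ecdsa_free(&ctx);
    return ret; /* 0 = valid; anything else = reject the update */
}
```

Note the asymmetry: the device needs only the public key to run this check, while producing a signature that passes it requires the private key that never leaves your build machine.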

And all without the use of an HSM.