Introduction

Securing a piece of software embedded in a chip is not an easy task. First, you have to define the threats affecting your software, the attacker model (e.g. a remote attacker, a physical attacker, or both), the security objectives and the corresponding security requirements. Then, you have to design an architecture that eases the proper implementation of the countermeasures, implement them and finally test them. Skipping or failing to complete any of these steps invariably results in a vulnerable system that may not comply with the upcoming European Cyber Resilience Act.

Most of the time, secure software is designed to resist a remote attacker who has no physical access to the device (IEC 62443 compliant software is resistant to such an attacker). However, when parts of a system are freely accessible in the field, resistance to remote attackers might not be sufficient. This is, for example, the case for systems relying on IoT devices scattered in the field. An attacker may try to extract and reverse engineer a device's firmware, then search for vulnerabilities that could be exploited remotely, or retrieve a cryptographic key shared across several devices (which, for practical reasons, happens more often than one would think). Attackers may also simply extract sensitive customer data stored in the device.

As we will see in another article, carrying out a hardware attack is not necessarily complex or expensive. For example, the ChipWhisperer from NewAE is an inexpensive yet very powerful side-channel and fault injection tool. Therefore, embedded software developers should take care to secure their products against both remote and physical attackers. This is what MCUBOOT's developers intended to do by hardening their code. In the remainder of this article we will see how MCUBOOT has been protected against fault attacks and why these countermeasures are not as effective as one would expect.

Implemented Countermeasures

MCUBOOT is a well-known and widely used secure bootloader designed to protect the boot process of a microcontroller. MCUBOOT uses cryptography to authenticate the device firmware at each boot. In addition, it enables secure updates in the field by authenticating and decrypting new firmware images uploaded to the device.

Countermeasures against fault injection were introduced by Arm at the end of 2020. In the remainder of this article, the latest stable version of MCUBOOT is considered, namely version 1.9.0. In this version, the countermeasures consist of replacing plain C return values with complex return values composed of two shares.

/* Fault-hardened integer: the value is stored twice, once in plain
   form and once masked, so a single fault cannot silently change it. */
typedef volatile struct { 
    volatile int val;   /* plain value */
    volatile int msk;   /* masked copy of the value */
} fih_int;

Success is encoded as 0x1AAAAAAA and failure as 0x15555555; success is deliberately not the exact complement of failure, as that would be a weak construction vulnerable to fault injection. The fih_int type stores both the value and its masked representation (the constant mask is defined at build time). The same type can be used to convey any integer value, and helpers are provided to ease the conversion process. These complex values are securely handled by several comparison functions in which conditional tests are doubled.
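To make the conversion process concrete, here is a minimal sketch of what the encoding and validation helpers can look like. The helper names follow the upstream fault injection hardening header, but the bodies and the mask value below are simplified illustrations written for this article, not the code shipped in MCUBOOT.

/* Illustrative sketch (not the upstream code verbatim): the mask value
   below is an arbitrary example, MCUBOOT defines its own constant. */
#define FIH_MASK_VALUE 0xBEEF

/* Encode a plain integer into the two-share representation. */
__attribute__((always_inline)) inline fih_int fih_int_encode(int x) {
    fih_int ret = { x, x ^ FIH_MASK_VALUE };
    return ret;
}

/* Check that both shares are still consistent; a mismatch means the value
   was corrupted (for instance by a fault) and the boot must be aborted. */
__attribute__((always_inline)) inline void fih_int_validate(fih_int x) {
    if (x.msk != (x.val ^ FIH_MASK_VALUE)) {
        while (1) { } /* panic loop: never hand control back to the attacker */
    }
}

One of the doubled-test comparison functions handling these values, fih_eq, is shown below.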

/* Standard equality. If A == B then 1, else 0 */
__attribute__((always_inline)) inline int fih_eq(fih_int x, fih_int y)
{
    fih_int_validate(x);            /* both operands must be intact */
    fih_int_validate(y);
    return (x.val == y.val) &&      /* first comparison, on the plain values */
           fih_delay() &&           /* random delay to de-synchronize faults */
           (x.msk == y.msk);        /* second comparison, on the masked shares */
}

A control flow integrity (CFI) mechanism is also used to prevent function calls from being bypassed: a counter is incremented at each function call and decremented just before the called function returns, and a mismatch detected after the call triggers a panic. In addition, before each call, the variable that will store the return value is initialized to FIH_FAILURE, so that skipping the call leaves a failure value behind.

#define FIH_CALL(f, ret, ...) \
    do { \
        FIH_LABEL("FIH_CALL_START"); \
        FIH_CFI_PRECALL_BLOCK; \
        ret = FIH_FAILURE; \
        if (fih_delay()) { \
            ret = f(__VA_ARGS__); \
        } \
        FIH_CFI_POSTCALL_BLOCK; \
        FIH_LABEL("FIH_CALL_END"); \
    } while (0)
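In bootloader code, these building blocks are combined as follows. This is an illustrative usage sketch: verify_image and do_boot are hypothetical functions written for this article, while FIH_CALL, FIH_RET, FIH_PANIC, FIH_SUCCESS, FIH_FAILURE and fih_eq are constructs provided by the fault injection hardening header.

#include "bootutil/fault_injection_hardening.h"

/* Hypothetical hardened callee: returns a fih_int and exits through
   FIH_RET, which decrements the CFI counter before returning. */
fih_int verify_image(const void *img)
{
    fih_int fih_rc = FIH_FAILURE;

    /* ... signature verification of img, setting fih_rc to FIH_SUCCESS ... */

    FIH_RET(fih_rc);
}

/* Hypothetical caller. */
void do_boot(const void *img)
{
    fih_int fih_rc = FIH_FAILURE;   /* default to failure */

    /* FIH_CALL re-initializes fih_rc to FIH_FAILURE and checks the CFI
       counter around the call, as shown in the macro above. */
    FIH_CALL(verify_image, fih_rc, img);

    if (fih_eq(fih_rc, FIH_SUCCESS)) {
        /* jump to the verified image */
    } else {
        FIH_PANIC;                  /* abort the boot */
    }
}

Note that even if the call to verify_image is skipped entirely, fih_rc still holds FIH_FAILURE and the CFI counter check fails.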

Whilst four security levels are defined (OFF, LOW, MEDIUM and HIGH), only OFF, MEDIUM and HIGH are really useful.
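As an indication of what each level enables, the profile selection in the hardening header is roughly structured as follows. This is a simplified sketch written from memory: the exact profile macro names and the feature mapping should be checked against the MCUBOOT version you actually build.

/* Simplified sketch of the profile-to-feature mapping (not copied
   verbatim from the MCUBOOT header). */
#if defined(FIH_PROFILE_HIGH)
#define FIH_ENABLE_DELAY        /* random delays, requires an entropy source */
#define FIH_ENABLE_DOUBLE_VARS  /* two-share fih_int values */
#define FIH_ENABLE_GLOBAL_FAIL  /* global failure loop */
#define FIH_ENABLE_CFI          /* control flow integrity counter */
#elif defined(FIH_PROFILE_MEDIUM)
#define FIH_ENABLE_DOUBLE_VARS
#define FIH_ENABLE_GLOBAL_FAIL
#define FIH_ENABLE_CFI
#elif defined(FIH_PROFILE_LOW)
#define FIH_ENABLE_GLOBAL_FAIL
#define FIH_ENABLE_CFI
#endif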

Previous Security Assessments

The actual robustness of the countermeasures can be assessed using the continuous integration tools provided in the MCUBOOT repository. These tools leverage GDB and QEMU to simulate a device executing MCUBOOT, with GDB acting as the fault injection probe. Only the skip fault model (the execution of one instruction is skipped) is implemented. To speed up the simulation, faults are injected only in the small portion of code located between the FIH_CALL_START and FIH_CALL_END labels, which are emitted by the secure FIH_CALL macro. Hence only the security of the code located near a function call is assessed. It goes without saying that this type of security assessment is far too limited.
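For reference, the FIH_CALL_START and FIH_CALL_END labels are emitted as assembly-level symbols, which is how the GDB harness locates the region to attack in the ELF file. A minimal sketch of how such a label macro can be written (the actual FIH_LABEL definition in MCUBOOT uses its own symbol naming scheme) is:

/* Illustrative sketch: emit a uniquely named assembly label at this point
   so an external tool can find it in the symbol table. */
#define FIH_LABEL(str) \
    __asm volatile ("FIH_LABEL_" str "_%=:" ::: "memory")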

An interesting security assessment was performed in 2021 by an intern at Arm, in cooperation with Sorbonne University and ANSSI, using an unknown (probably homemade) tool implementing several fault models, namely instruction skip and registers set to 0, -1 and 1. The study shows the effect of the MCUBOOT security level and of the compiler optimization level on the actual security of the resulting binary. It concludes that the higher the security level, the lower the number of successful fault injections.

In addition, it shows that the compiler optimization level has a strong effect on the effectiveness of the countermeasures. The binary is more vulnerable when it is compiled with optimization levels O0, Og or Os. Surprisingly, few or no vulnerabilities are detected when the code is compiled with O1, O2 or O3, whether the security countermeasures are enabled or not… Considering these impressive but suspicious results, it seems useful to carry out our own tests.

The next post in this series will deal with the security assessment of MCUBOOT using our own simulation-based fault injection tool.

TrustnGo markets products and services that can help you secure your own products.
