dm00493651 - Introduction to STM32 microcontrollers security - STMicroelectronics
Application note
Introduction
This application note presents the basics of security in STM32 microcontrollers.
Security in microcontrollers encompasses several aspects including protection of firmware intellectual property, protection of
private data in the device, and guarantee of a service execution.
The context of IoT has made security even more important. The huge number of connected devices makes them an attractive
target for attackers and several remote attacks have shown the vulnerabilities of device communication channels. With IoT,
the security extends the requirements for confidentiality and authentication to communication channels, which often require
encryption.
This document is intended to help build a secure system by applying countermeasures to different types of attack.
In the first part, after a quick overview of different types of threats, examples of typical attacks are presented to show how
attackers exploit the different vulnerabilities in an embedded system.
The subsequent sections focus on the set of hardware and software protections that defend the system from these attacks.
The last sections list all security features available in the STM32 Series, and guidelines are given to build a secure system.
Applicable products: Microcontrollers - STM32F0 Series, STM32F1 Series, STM32F2 Series, STM32F3 Series, STM32F4 Series, STM32F7 Series, STM32G0 Series, STM32G4 Series, STM32H7 Series, STM32L0 Series, STM32L1 Series, STM32L4 Series, STM32L4+ Series, STM32L5 Series, STM32U5 Series, STM32WB Series, STM32WL Series
1 General information
The table below presents a non-exhaustive list of the acronyms used in this document and their definitions.
Table 2. Glossary
Term Definition
Documentation references
The reference manual of each device gives details on availability of security features and on memory protections
implementation.
A programming manual is also available for each Arm® Cortex® version and can be used for MPU (memory
protection unit) description:
• STM32 Cortex®-M33 MCUs programming manual (PM0264)
• STM32F7 Series and STM32H7 Series Cortex®-M7 processor programming manual (PM0253)
• STM32 Cortex®-M4 MCUs and MPUs programming manual (PM0214)
• STM32F10xxx/20xxx/21xxx/L1xxxx Cortex®-M3 programming manual (PM0056)
• Cortex®-M0+ programming manual for STM32L0, STM32G0, STM32WL and STM32WB Series (PM0223)
Refer to the following set of user manuals and application notes (available on www.st.com) for detailed
description of security features:
• user manual STM32 crypto library (UM1924): describes the API of the STM32 crypto library; provided with
the X‑CUBE‑CRYPTOLIB Expansion Package.
• user manual Getting started with the X-CUBE-SBSFU STM32Cube Expansion Package (UM2262): presents
the SB (secure boot) and SFU (secure firmware update) ST solutions; provided with the X‑CUBE‑SBSFU
Expansion Package.
• application notes Proprietary Code Read Out Protection on STM32xx microcontrollers (AN4246, AN4701,
AN4758, AN4968): explain how to set up and work with PCROP firmware for the respective STM32L1, F4,
L4 and F7 Series; provided with the X‑CUBE‑PCROP Expansion Package.
• application note Managing memory protection unit (MPU) in STM32 MCUs (AN4838): describes how to
manage the MPU in the STM32 products.
• application note STM32WB ST firmware upgrade services (AN5185)
Note: Arm is a registered trademark of Arm Limited (or its subsidiaries) in the US and/or elsewhere.
2 Overview
[Figure: what is at stake in a connected system - an unsecure device can corrupt the system, reach the services provider and propagate the attack to other devices. Assets and associated threats include: data (sensor data such as healthcare data or log of positions, user data such as ID, PIN, password or accounts, transaction logs, cryptographic keys) exposed to unauthorized sale of personal data, usurpation, spying and blackmail; the device itself (control of the device through the bootloader or a malicious application, device correct functionality, device/user identity) exposed to denial of service, attacks on service providers and fraudulent access to services (cloud); and intellectual property (device hardware architecture/design, user code, software patents/architecture, technology patents) exposed to device counterfeit, software counterfeit, software modification and access to secure areas.]
3 Attack types
This section presents the different types of attack that a microcontroller may have to face, from the most basic
ones to very sophisticated and expensive ones. The last part presents typical examples of attacks targeting an
IoT system.
Attacks on microcontroller are classified in one of the following types:
• Software attack: exploits software vulnerabilities (such as bug or protocol weaknesses).
• Hardware non-invasive attack: focuses on MCU interfaces and environment information.
• Hardware invasive attack: destructive attack with direct access to silicon
The table below gives an overview of the cost and techniques used for each type of attack.
Attack types overview:
• Software - Scope: remote or local. Techniques: software bugs, protocol weaknesses, Trojan horse, eavesdropping. Cost/expertise: from very low to high, depending on the security failure targeted. Objectives: access to confidential assets (code and data), usurpation, denial of service.
• Hardware non-invasive - Scope: local, board and device level. Techniques: debug port, power glitches, fault injection, side-channel analysis. Cost/expertise: quite low; needs only moderately sophisticated equipment and knowledge to implement. Objectives: access to secret data or device internal behavior (algorithm).
• Hardware invasive - Scope: local, device level. Techniques: probing, laser, FIB, reverse engineering. Cost/expertise: very expensive; needs dedicated equipment and very specific skills. Objectives: reverse engineering of the device (silicon intellectual property), access to hidden hardware and software secrets (Flash memory access).
Malware injection
There are various methods to inject a piece of code inside the system. The size of the malware depends on the
target but may be very small (a few tens of bytes). To be executed, the malware must be injected in the device
memory (RAM or Flash memory). Once injected, the challenge is to have it executed by the CPU, which means
that the PC (program counter) must branch to it.
Methods of injecting malware can be categorized as follows:
• Basic device access/"open doors":
– Debug port: JTAG or SWD interface
– Bootloader: if accessible, can be used to read/write memory content through any available interface.
– Execution from external memory
These malware injections are easy to counter with simple hardware mechanisms that are described in
Section 4 Device protections .
• Application download:
– Firmware update procedure: a malware can be transferred instead of a new FW.
– OS with capability to download new applications.
The countermeasure for this category is based on authentication between the device and the server, or directly
on code authentication. Authentication relies on cryptography algorithms.
• Weaknesses of communication ports and bugs exploitation:
– Execution of data. Sometimes it is possible to sneak the malware in as data and exploit an incorrect
boundary check to execute it.
– Stack-based buffer overflows, heap-based buffer overflows, jump-to-libc attacks and data-only attacks
This third category is by definition difficult to avoid. Most embedded system applications are coded using
low-level languages such as C/C++. These languages are considered unsafe because they can lead to
memory management errors leveraged by attackers (such as stack, heap or buffer overflows). The general
idea is to reduce as much as possible what is called the attack surface, by minimizing the untrusted or
unverified part of the firmware. One solution consists in isolating the execution and the resources of the
different processes. For example, the TF-M includes such a mechanism.
• Use of untrusted libraries with device back door
This last category is an intentional malware introduction that facilitates device corruption. Today, many
firmware developments rely on software shared on the web, and complex pieces of software can hide Trojan
horses. As in the previous category, the way to counter this threat is to reduce the attack surface by isolating
the process execution as much as possible and by protecting the critical code and data.
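The boundary-check weakness mentioned above is easiest to see in code. The sketch below (hypothetical buffer and protocol names) shows the kind of explicit length validation that closes the "execution of data" and buffer-overflow entry points:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical command buffer for a serial protocol handler.
 * The length field received from the interface is untrusted and
 * must be checked against the destination capacity before copying. */
#define CMD_BUF_SIZE 64

typedef struct {
    uint8_t data[CMD_BUF_SIZE];
    size_t  len;
} cmd_buf_t;

/* Returns 0 on success, -1 if the untrusted length would overflow
 * the destination buffer (the classic stack/heap-smashing vector). */
int cmd_receive(cmd_buf_t *dst, const uint8_t *src, size_t src_len)
{
    if (src == NULL || src_len > CMD_BUF_SIZE) {
        return -1;              /* reject instead of truncating silently */
    }
    memcpy(dst->data, src, src_len);
    dst->len = src_len;
    return 0;
}
```

Rejecting (rather than silently truncating) oversized input keeps the protocol state machine simple and makes the failure visible to the caller.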
Brute forcing
This type of attack targets the authentication based on a shared secret. A secure device may require a session
authentication before accessing services (in the cloud for example) and a human-machine interface (HMI) can be
exploited with an automatic process in order to try successive passwords exhaustively.
Interesting countermeasures are listed below:
• Limit the number of login trials with a monotonic counter (implemented with a timer, or if possible, with a
backup domain).
• Increase the delay between consecutive login attempts.
• Add a challenge-response mechanism to break automatic trials.
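The first two countermeasures can be sketched as follows. The limiter below is illustrative: on a real STM32 the failure counter would live in the backup domain or a timer-backed monotonic counter so that a reset does not clear it, and the delay would be enforced by a hardware timer.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Illustrative login limiter; fail_count is a plain variable here but
 * should be reset-persistent (backup domain) on a real device. */
#define MAX_TRIALS 5

static uint32_t fail_count;

/* Delay (in ms) to enforce before the next attempt: doubles per failure. */
uint32_t next_delay_ms(void)
{
    return 100u << (fail_count > 10 ? 10 : fail_count);
}

/* Constant-time comparison avoids leaking the match length via timing. */
static bool secret_equal(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

/* Returns true only when the PIN matches and the trial budget remains. */
bool login_attempt(const uint8_t *pin, const uint8_t *ref, size_t n)
{
    if (fail_count >= MAX_TRIALS)
        return false;            /* locked: require out-of-band recovery */
    if (secret_equal(pin, ref, n)) {
        fail_count = 0;
        return true;
    }
    fail_count++;
    return false;
}
```

Note the constant-time comparison: a naive early-exit memcmp would let the automatic process recover the secret byte by byte from response timing.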
Reverse engineering
The goal is to understand the inner structure of the device and analyze its functionality. This is quite a challenging
task with modern devices featuring millions of gates.
The first step is to create a map of the microcontroller. It can be done by using an optical microscope to produce a
high-resolution photograph of the device surface. Deeper layers can then be analyzed in a second step, after the
metal layers have been stripped off by etching the device.
Reading the data
With an electron microscope, the data, represented by an electric charge, becomes visible. It is then possible
to read the whole device memory.
Micro probing and internal fault injection
Micro probing consists in interacting with the device at metal layer level. Thin electrodes are used to establish an
electrical contact directly with the surface of the device so that the attacker can observe, manipulate, and interfere
with it while the device is running.
Device modification
More sophisticated tools can be used to perform attacks. FIB (focused ion beam) workstations, for example,
simplify the manual probing of deep metal and polysilicon lines. They also can be used to modify the device
structure by cutting existing or creating new interconnection lines and even new transistors.
[Figure: typical IoT device - an STM32 with connectivity, sensors and actuators]
Initial provisioning
The cryptographic data forming the root of trust for the chain of security must be injected into the device in a
controlled, trusted way. Whether it is a key, a certificate or an initial hash value, it must remain immutable and/or
secret. Once it is programmed inside the device, the data protection mechanisms must be enabled and only
authorized processes must have access to it.
• Risks: firmware corruption or usurpation
• Countermeasures:
– Trusted manufacturer environment
– Use of secure data provisioning services (SFI)
– Data protection mechanisms
– Secure application isolation
– Use of OTP memory
Boot modification
The purpose of this attack is to use the bootloader to access the device content. The attack aims at modifying the
boot mode and/or the boot address to preempt the user application and take control of the CPU through the
bootloader (via USB DFU, I2C or SPI), the debug port, or a firmware injected in RAM. The boot mode and
the boot address are controlled by the device configuration and/or input pins and must be protected.
• Risks: full access of the microcontroller content
• Countermeasures:
– Unique boot entry
– Bootloader and debug disabled (see Readout protection (RDP))
Firmware update
The firmware update procedure allows a product owner to propose corrected versions of the firmware to ensure
the best user experience during the device lifetime. However, a firmware update gives an attacker an opportunity
to enter the device with their own firmware or a corrupted version of the existing firmware.
The process must be secured with firmware authentication and integrity verification. A successful attack requires
an access to the cryptographic procedure and keys (refer to the Initial provisioning section at the beginning of this
chapter).
• Risk: device firmware corruption
• Countermeasure: SFU application with authentication and integrity checks. Confidentiality can also be added
by encrypting the firmware in addition to signature.
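The receiver-side checks can be sketched as follows. The header layout is hypothetical and the FNV-1a digest is only a stand-in for the integrity digest: a real SFU application uses a cryptographic hash (for example SHA-256) and an asymmetric signature over the header.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Illustrative image header; in a real design this metadata is covered
 * by the OEM signature so it cannot be forged. */
typedef struct {
    uint32_t version;        /* monotonically increasing firmware version */
    uint32_t digest;         /* expected image digest */
} fw_header_t;

/* FNV-1a, standing in for a cryptographic hash in this sketch. */
uint32_t fnv1a(const uint8_t *p, size_t n)
{
    uint32_t h = 2166136261u;
    while (n--) { h ^= *p++; h *= 16777619u; }
    return h;
}

/* Accept the image only if its digest matches the (authenticated) header
 * and its version is strictly newer than the installed one
 * (anti-rollback mechanism). */
bool fw_accept(const fw_header_t *hdr, const uint8_t *img, size_t len,
               uint32_t installed_version)
{
    if (hdr->version <= installed_version)
        return false;                       /* rollback attempt */
    return fnv1a(img, len) == hdr->digest;  /* integrity check */
}
```

The version check is what prevents an attacker from re-installing an old, signed but vulnerable firmware.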
Communication interfaces
Serial interfaces (such as SPI, I2C or USART) are used either by the bootloader or by applications to exchange
data and/or commands with the device. The interception of a communication allows an attacker to use the
interface as a device entry point. The firmware protocol can also be prone to bugs (such as overflows).
• Risk: Access to device content
• Countermeasures:
– Make physical bus hard to reach on board.
– Isolate software communication stacks to prevent them from accessing critical data and operations.
– Use cryptography for data exchange.
– Disable interface ports when not needed.
– Check inputs carefully.
Debug port
The debug port provides access to the full content of the device: core and peripheral registers, Flash memory
and SRAM content. Since it is used for application development, it may be tempting to keep it alive for
investigating future bugs. This is the first breach tried by an attacker with physical access to the device.
• Risk: full access to the device
• Countermeasure: Disable device debug capabilities (see Readout protection (RDP) feature).
SRAM
The SRAM is the device working memory. It embeds runtime buffers and variables (such as the stack or heap)
and can embed firmware and keys. While secrets may be stored encrypted in the non-volatile memory, they must
be present in plain form in the SRAM to be used. At the same time, the SRAM usually holds communication
buffers. For these two reasons, an attacker may be tempted to focus his effort on the SRAM.
At least three types of attack can be mounted against this memory: code (malware) injection, memory corruption
through buffer overflow, and retrieval of secrets from temporarily stored variables.
• Risks: buffer overflow, data theft or device control
• Countermeasures:
– Firewall
– Memory protection unit
– Secure area
Communication stack
Connectivity protocols (such as Bluetooth, Ethernet, Wi-Fi or LoRa) have complex communication firmware
stacks. These stacks, often available in open source, cannot always be considered trusted. A potential
weakness can be massively exploited.
• Risk: device access (content, control) through network
• Countermeasures:
– Communication process isolation
– Server authentication
– Secure firmware update to patch bugs
Communication eavesdrop
Data exchanges between a device and an IoT service can be eavesdropped, either directly by a compatible RF
device or through the network. A hacker may seek to retrieve data, obtain device IDs or access services.
Cryptography can be adopted by all communication protocols. Several encryption steps are often considered to
protect the communication between all the different layers (device, gateway, applications).
• Risk: observation and spoofing of network traffic
• Countermeasure: use a cryptographic version of the communication stack (such as TLS)
4 Device protections
Security protections described in this section are controlled by hardware mechanisms. They are set either by
configuring the device through option bytes, or dynamically by hardware component settings:
• Memory protection: the main security feature, as it is used to protect code and data from internal (software)
and external attacks
• Software isolation: inter-process protection to avoid internal attacks
• Interface protection: protects device entry points such as serial or debug ports
• System monitoring: detects device external tampering attempts or abnormal behaviors
In a typical firmware architecture relying on Armv8-M TrustZone, the non-secure domain executes the application
and the OS tasks, while the secure domain executes the secure application and the system root-of-trust
mechanisms.
[Figure: example of secure system architecture - bus masters (CPU1 with non-secure MPU, CPU2 with secure MPU, DMA) access the slaves (Flash interface, SRAM, AHB/APB bridge) through the AHB interconnect, under the control of a security controller.]
The figure below provides a simple view of the memory access architecture in a microcontroller.
[Figure: memory access architecture - bus masters (such as the CPU or DMA) access the internal memories of the STM32 microcontroller (Flash user memory banks 1 and 2, OTP, SRAM) and, through the FMC, external NOR/NAND Flash and SDRAM, and, through the Octo- or Quad-SPI interface with the OTFDEC, external Octo- or Quad-SPI Flash memory.]
The table below summarizes the particularities of each type of memory and the typical protection features.
• System Flash memory (internal, NVM, ROM): part of the Flash memory that embeds the device bootloader and other ST services. Cannot be updated (erased/written); a part may also be unreadable.
• User Flash memory (internal, NVM): Flash memory for the user application. Internal protections: RDP, WRP (not for SRAM), TrustZone, PCROP (not for SRAM), OTP (not in SRAM), firewall, secure hide protection (not for SRAM), MPU.
• SRAM (internal, volatile): working memory for stack, heap or buffers. Can be used to execute the firmware downloaded from internal or external non-volatile memories. Same internal protections as above, with the noted SRAM exclusions.
• NAND, NOR, Octo- or Quad-SPI Flash memory (external, NVM): additional memory for applications or data storage. Protections: cryptography, write protection (on the Flash device), TrustZone.
• SDRAM (external, volatile): additional RAM for application execution. Protections: TrustZone, cryptography.
1. The attribute protection is only for CPU accesses and is not taken into account for other bus masters (such as the DMA).
2. Reading the CPUID indicates which CPU is currently executing code. An example can be found in the
HAL_GetCurrentCPUID function.
Other serial interfaces can also be used. If the bootloader is available, the device content can be accessed
through I2C, SPI, USART or USB‑DFU. If the interface is open during the runtime, the application transfer
protocol must limit its access capabilities (such as operation mode or address access range).
Associated STM32 features:
• Read protection (RDP)
• Disable of unused ports.
• Forbid bootloader access (configured by RDP in STM32 devices).
5 Secure applications
In order to create a secure system, the hardware features must be used in a secure firmware architecture
implementation. An industry standard solution is the PSA, proposed by Arm for the IoT ecosystem. The ST
proprietary solution is Secure boot (SB) and Secure firmware update (SFU).
This section defines the root and chain of trust concept before presenting the following typical secure
applications implementing the features listed below:
• Secure boot (SB)
• Secure firmware update (SFU)
• Secure storage
• Cryptographic services
These applications have a close link with cryptography. All cryptographic schemes are based on the three
concepts of secret key, public key and hashing. Basics of cryptography are explained in Appendix A.
Cryptography - Main concepts.
Note: • The user manual Getting started with the X-CUBE-SBSFU STM32Cube Expansion Package (UM2262)
provides an implementation example of SB and SFU (www.st.com/en/product/x-cube-sbsfu).
• The user manual Getting started with STM32CubeL5 TF-M application (UM2671) describes an example of
TF-M implementation with the STM32L5 Series MCU.
• The user manual Getting started with STM32CubeU5 TF-M application (UM2851) describes an example of
TF-M implementation with the STM32U5 Series MCU.
[Figure: secure boot sequence - at reset, the secure boot application sets the security peripheral configuration (MPU, firewall or IWDG) before starting the user application.]
Architecture
An SFU transfer involves two entities: the firmware owner (OEM) and the device to be updated (see the figure
below). As the communication channel is generally considered as non-secure since it is subject to eavesdropping,
the overall security responsibility is shared between the sender (firmware owner server) and the receiver (the
device).
Application
From OEM side, a secure server is maintained that is responsible for sending the encrypted (if confidentiality is
required) and signed firmware to an authenticated device.
The SFU application running on device is in charge of the following:
• authentication and integrity checking of the loaded image before installing it
• decrypting the new firmware if confidentiality is required
• checking the new firmware version (anti-rollback mechanism)
Both solutions have different requirements and differ in architecture. They are not meant to be interchangeable or
to compete.
More details about building TF-M on STM32 devices are available in the STM32 TF-M user manual (UM2671).
A detailed comparison is available in the AN5447 application note (section X-CUBE-SBSFU vs. TF‑M
comparison).
[Table: security features for the STM32F0, STM32F1, STM32F2, STM32F3, STM32F4 and STM32F7 Series (Cortex cores M0, M3, M3, M4, M4 and M7 respectively); detailed rows lost in extraction.
1. The RDP in STM32F1 Series only protects the Flash memory. RDP is set (RDP1) or unset (RDP0); RDP Level 2 is not implemented.
2. Only XL-density parts feature the MPU.
3. "No" UBE means that the boot relies only on RDP Level 2. "Yes" means that a dedicated boot lock service exists.]
Security features for the STM32L0, STM32L1, STM32L4/L4+, STM32L5 and STM32U5 Series:
• PCROP: STM32L0: by sectors; STM32L1: by sectors; STM32L4/L4+: by area with 8-byte granularity, one area per bank; STM32L5: no; STM32U5: no
• HDP(1): STM32L0: no; STM32L1: no; STM32L4/L4+: no; STM32L5: up to two secure hide areas (HDP) inside the TrustZone secure domain; STM32U5: one secure hide area (HDP) per bank, inside the TrustZone secure domain
• Firewall: STM32L0: yes; STM32L1: no; STM32L4/L4+: yes; STM32L5: TrustZone based; STM32U5: TrustZone based
• MPU: yes for all
1. HDP is known as secure user memory, sticky area or securable memory depending on the product.
2. "No" UBE means that the boot relies only on RDP Level 2. "Yes" means that a dedicated boot lock service exists.
Table 11. Security features for STM32H7, STM32G0, STM32G4, STM32WB and STM32WL Series
• Cortex core: STM32H7 Series: M7; STM32G0: M0+; STM32G4: M4; STM32WB: M4 and M0+; STM32WL: M4 and M0+(1)
• RDP additional protection(2): STM32H72x/73x: SRAM + backup registers + OTFDEC; STM32H74x/75x: SRAM + backup registers; STM32H7Ax/Bx: SRAM + backup registers + OTFDEC; STM32G0: backup registers; STM32G4: backup registers + CCM-SRAM; STM32WB: backup registers + SRAM2; STM32WL: backup registers + SRAM2
• WRP: STM32H72x/73x and STM32H74x/75x: by sectors (128 Kbytes); STM32H7Ax/Bx: by groups of four 8-Kbyte sectors; STM32G0: by area with 2-Kbyte granularity, two areas available; STM32G4: by page (2 Kbytes or 4 Kbytes); STM32WB: by area with 4-Kbyte granularity, two areas available; STM32WL: by area with 2-Kbyte granularity, two areas available
• PCROP: STM32H7 Series: by area with 256-byte granularity, one area per bank; STM32G0: by area with 512-byte granularity, two areas available(3); STM32G4: by area with 64-bit or 128-bit granularity, up to two areas; STM32WB: by area with 2-Kbyte granularity, up to two areas; STM32WL: by area with 1-Kbyte granularity, two areas available
• HDP(4): STM32H7 Series: yes (secure user memory, with 256-byte granularity); STM32G0: yes (securable memory area); STM32G4: yes (securable memory area); STM32WB: yes (dedicated to CM0+ firmware only); STM32WL: yes
• Firewall: STM32H7 Series: no; STM32G0: no; STM32G4: no; STM32WB: no; STM32WL: yes
• MPU: STM32H7 Series: yes; STM32G0: yes; STM32G4: yes; STM32WB: yes (CM4); STM32WL: yes
• Unique boot entry(5): STM32H7 Series: yes (unique entry point in secure access); STM32G0: yes (boot lock feature); STM32G4: yes; STM32WB: no; STM32WL: yes (boot lock feature)
• Internal tamper detection: STM32H7 Series: yes; STM32G0: yes; STM32G4: yes; STM32WB: no; STM32WL: yes
• IWDG: yes for all
• Device ID (96 bits): yes for all
• Hardware crypto(2): symmetric: AES on all; asymmetric: PKA on STM32WB and STM32WL only; support: HASH and TRNG on STM32H7, TRNG on STM32G0, STM32G4, STM32WB and STM32WL
• SRAM write protection: STM32H7 Series: no; STM32G0: no; STM32G4: CCM SRAM with 1-Kbyte granularity; STM32WB: SRAM2 with 1-Kbyte granularity; STM32WL: SRAM2 with 1-Kbyte granularity
[Table: access rights per RDP level - for each protected area (debug port registers, backup registers(1), Flash memory, SRAM1/SRAM2) the read, write and erase accesses are listed for the two contexts: boot from user Flash memory, and debug or boot from SRAM or from the bootloader.
1. Backup registers/SRAM]
6.4 TrustZone
The following section describes the main features of the TrustZone architecture. For further information, refer to
the application note STM32L5 Series TrustZone® features (AN5347) and to the reference manual STM32L552xx
and STM32L562xx advanced Arm®-based 32-bit MCUs (RM0438).
The Armv8-M TrustZone architecture defines two domains at system level: secure and non-secure. The full
memory-map space is split into secure and non-secure areas. This includes all memory types (Flash, SRAM
and external memories), as well as all peripherals that can be shared (with specific context for each domain) or
dedicated to one domain or the other.
At system level, the isolation between secure and non-secure domains relies on the following hardware
mechanisms (see the figure below).
• specific core architecture (Armv8-M Cortex-M33) with banked registers for secure and non-secure
domains, and a secure attribution unit (SAU) to assert the address range security status
• implementation-defined attribution unit (IDAU), which is complementary to the SAU
• bus infrastructure that propagates the secure and privilege attributes of any transaction (AHB5)
• dedicated hardware blocks managing the split between the two domains (GTZC to define the security
attributes of internal SRAMs, external FSMC/OCTOSPI memories, and peripherals)
[Figure: TrustZone-specific implementation - the Armv8-M Cortex-M33 (with SAU/MPU) and the other AHB masters access the APB peripherals, internal SRAM and external memory through the AHB5 bus, which propagates the security attributes.]
The HDP is a static protection configured by option bytes. Once it is set, the CPU boots on the firmware embedded
in this area, whatever the boot configuration set by the boot pin or the boot address.
When should HDP be used
The HDP is suited for a code that must only be executed after reset, like secure boot for root of trust.
This protection is available in the STM32H7, STM32G0, STM32G4 and STM32L5 Series with slight differences in
implementation and name (refer to the dedicated reference manuals for details).
6.8 Firewall
The firewall is a hardware protection peripheral controlling the bus transactions and filtering accesses to three
particular areas: a code area (Flash memory), a volatile data area (SRAM) and a non-volatile data area (Flash
memory). The protected code is accessible through a single entry point (the call gate mechanism explained
below). Any attempt to execute any of the functions included in the code section without passing through the
entry point generates a system reset.
The firewall is part of the dynamic protections. It must be set at startup (for example by a SB application).
Call gate mechanism
The firewall is opened by calling a 'call-gate' mechanism: a single entry point that must be used to open the gate
and to execute the code protected by the firewall. If the protected code is accessed without passing through the
call gate mechanism, then a system reset is generated. If any instruction is fetched outside the protected area,
the firewall is closed (see the figure below).
[Figure: firewall states - once enabled after reset, the firewall leaves the idle state and is closed; it opens only through the call gate and closes again when an instruction is fetched out of the protected area; any illegal access generates a system reset.]
Since the only way to respect the call gate sequence is to pass through the single call gate entry point, a
mechanism must be provided in order to support applications calling multiple firewall-protected functions from
the unprotected code area (such as encrypt and decrypt functions). A parameter can be used to specify which
function to execute (such as CallGate(F1_ID) or CallGate(F2_ID)). According to the parameter, the right
function is internally called.
[Figure: call gate mechanism - functions in unprotected code (f1, f2, f3) reach the protected functions (f1a/f1b, f2a/f2b, f3a/f3b) in the firewall code section only through the call gate single entry point, selected by a function ID.]
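A minimal sketch of this dispatch pattern is shown below. The function IDs and services are hypothetical, and the firewall hardware configuration itself (option bytes, linker placement of the entry point) is omitted:

```c
#include <stdint.h>

/* Hypothetical service IDs exposed through the call gate. */
enum { F1_ID = 1, F2_ID = 2 };

static int32_t f1(int32_t x) { return x + 1; }   /* e.g. an encrypt service */
static int32_t f2(int32_t x) { return x - 1; }   /* e.g. a decrypt service */

/* Single entry point: in a real firewall setup this symbol is placed at
 * the call gate address, and jumping anywhere else inside the protected
 * code section triggers a system reset. */
int32_t CallGate(uint32_t id, int32_t arg)
{
    switch (id) {
    case F1_ID: return f1(arg);
    case F2_ID: return f2(arg);
    default:    return -1;       /* unknown service: reject */
    }
}
```

Routing everything through one switch keeps the attack surface to a single, auditable function.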
The table below shows the different cases supported by mixing modes and access attributes.
1. Execute Never (XN) attribute is set by region and is valid for both modes. It can be used to avoid SRAM code injection for
example.
The code executed in privileged mode can access additional specific instructions (such as MRS) and can also
access Arm core peripheral registers (such as NVIC, DWT or SCB). This is useful for OS kernels or pieces of
secure code that require access to sensitive resources that are otherwise inaccessible to unprivileged firmware.
An OS kernel can manipulate MPU attributes dynamically to grant access to specific resources depending on the
currently running task. Access right may be updated each time the OS switches from one task to another.
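This per-task reconfiguration can be modeled as follows. The structures are illustrative, and the actual MPU register programming (device specific, for example through the CMSIS MPU helpers) is omitted; the check function only models what the hardware enforces once the task's regions are applied.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* One MPU region as seen by a task (simplified: no sub-regions,
 * no execute-never attribute). */
typedef struct {
    uint32_t base;       /* region start address  */
    uint32_t size;       /* region size in bytes  */
    bool     writable;   /* false: read-only for the task */
} mpu_region_t;

/* Per-task region table that the kernel applies on a context switch. */
typedef struct {
    const mpu_region_t *regions;
    size_t              count;
} task_mpu_t;

/* Models the access check the MPU performs for the current task. */
bool mpu_access_ok(const task_mpu_t *cfg, uint32_t addr, bool is_write)
{
    for (size_t i = 0; i < cfg->count; i++) {
        const mpu_region_t *r = &cfg->regions[i];
        if (addr >= r->base && addr < r->base + r->size)
            return is_write ? r->writable : true;
    }
    return false;    /* not mapped for this task: memory fault */
}
```

On a context switch the kernel would disable the MPU, reprogram the region registers from the incoming task's table, and re-enable it before returning to unprivileged code.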
After keys have been provisioned inside the secure area, the user application can use them by calling a secure
load service with an index referencing the key, rather than with the key itself.
[Figure: customer key storage (CKS) - the wireless stack running on CPU2 loads the provisioned keys (key 0 to key n) from the CKS into the secure key register of the AES hardware block; the user application on CPU1 requests the operation through the IPCC and mailbox and only accesses the AES data register.]
The STM32 devices embed a programmable voltage detector (PVD) that can detect a drop of power. The PVD
allows the configuration of a minimum voltage threshold, below which an interrupt is generated so that the
appropriate actions may be implemented.
When should PVD be used
The PVD must be used as soon as a sensitive application is running and is likely to leave some confidential data in
the working memory (SRAM). A memory cleaning can be launched in case of power-down detection.
Note: The PVD is available on all STM32 series.
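A sketch of such a cleanup is shown below, assuming a hypothetical session_key buffer. A volatile access is used so that the compiler cannot optimize the wipe away (a plain memset on a buffer that is never read again may be removed by the optimizer):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical secret held in SRAM while the application runs. */
uint8_t session_key[16];

/* Wipe that survives optimization thanks to the volatile access. */
void secure_wipe(void *p, size_t n)
{
    volatile uint8_t *v = (volatile uint8_t *)p;
    while (n--)
        *v++ = 0;
}

/* Sketch of what a PVD interrupt handler could launch on a real device
 * when the supply voltage drops below the configured threshold. */
void PVD_IRQHandler_sketch(void)
{
    secure_wipe(session_key, sizeof session_key);
}
```

The remaining time between the PVD threshold and actual brown-out is short, so the handler should wipe only the most sensitive data first.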
6.16 Device ID
Each STM32 device has a unique 96-bit identifier providing an individual reference for any device in any context.
These bits can never be altered by the user.
The unique device identifier can be used for direct device authentication or used, for instance, to derive a unique
key from a master OEM key.
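The derivation can be sketched as follows. The mixing function is a toy stand-in: a real design uses a standardized KDF (for example an HMAC-based construction), and the UID is read from its device-specific address given in the reference manual.

```c
#include <stdint.h>

/* Illustrative derivation of a per-device key from an OEM master key
 * (128 bits) and the 96-bit unique device ID. The FNV-style mixing below
 * only shows the data flow; it is NOT a cryptographic KDF. */
void derive_device_key(const uint32_t master[4], const uint32_t uid[3],
                       uint32_t out[4])
{
    for (int i = 0; i < 4; i++) {
        uint32_t h = master[i] ^ (0x9E3779B9u * (uint32_t)(i + 1));
        for (int j = 0; j < 3; j++) {
            h ^= uid[j];
            h *= 16777619u;      /* illustrative mixing only */
        }
        out[i] = h;
    }
}
```

Because the UID differs per device, a leaked derived key compromises only one unit, never the OEM master key or the rest of the fleet.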
6.17 Cryptography
As described in Section 5, cryptography algorithms are essential to secure an embedded system. The
cryptography ensures confidentiality, integrity and authentication of data or code. For efficiently supporting these
functions, most STM32 series offer microcontroller options with embedded hardware cryptography peripherals.
These hardware blocks allow the cryptographic computations (such as hashing or symmetric algorithms) to be
accelerated. For devices with no such specific hardware acceleration, the STM32 cryptographic firmware library
(CryptoLib) provides a software implementation of a large set of cryptographic algorithms.
The OTFDEC offers the possibility to decrypt the content directly with a low latency penalty and without the need
for SRAM allocation. The OTFDEC is a hardware block that decrypts on-the-fly bus (AHB) traffic based on the
read-request address information. It is used with the Octo-SPI interface (see the figure below).
[Figure: OTFDEC position in the system - the OTFDEC decrypts the AHB traffic between the instruction and data/system caches and the OCTOSPI interface, which connects to the external SPI NOR Flash through the SPI bus at the SoC boundary.]
The OTFDEC uses the AES-128 in counter mode, with a 128-bit key to achieve a latency below 12 AHB cycles.
Up to four independent, non-overlapping encrypted regions can be defined (4-Kbyte granularity), each with its
own key.
When OTFDEC should be used
The OTFDEC is useful when an external memory is used by the system. In the case of a TrustZone-capable MCU,
the decryption keys can be made accessible through the secure mode only. See AN5281 for more details.
Note: The OTFDEC is available on STM32L5 Series and STM32H7 Series only.
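The counter-mode principle behind the OTFDEC can be illustrated with a toy model: because the keystream depends only on the key and the read address, any word can be decrypted independently, with no SRAM staging of the image. The keystream function below is a toy integer mixer, not AES.

```c
#include <stdint.h>

/* Toy keystream derived from the key and the read-request address.
 * A real OTFDEC computes an AES-128 counter-mode block here. */
static uint32_t keystream(uint32_t key, uint32_t address)
{
    uint32_t h = key ^ address;
    h ^= h >> 16; h *= 0x7FEB352Du;
    h ^= h >> 15; h *= 0x846CA68Bu;
    h ^= h >> 16;
    return h;
}

/* In counter mode, encryption and decryption are the same XOR:
 * applying the function twice with the same key/address round-trips. */
uint32_t otf_crypt_word(uint32_t key, uint32_t address, uint32_t word)
{
    return word ^ keystream(key, address);
}
```

The address-based keystream is what allows random-access reads (instruction fetches, cache line fills) straight from the encrypted external Flash.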
7 Guidelines
Secure systems may take advantage of many security-supporting hardware features. Some are useful for any
system and need little change in the application code to be activated and fully functional. This is the case for the
RDP feature, which prevents basic access to the Flash memory by disabling the debug port. Other features must
be selected depending on the user application and the required security level.
This section helps define the adapted set of security features, depending on the system use cases.
Use cases are gathered in four main groups: protection against external (1) and internal (2) threats, security
maintenance (3), and other use cases related to cryptography (4) (see the table below).
1. Device protection against external threats: RDP protection, tamper detection, device monitoring
1.1 Device configuration (option bytes, never supposed to be modified)
• Use RDP Level 2. This closes the device from any external access.
1.2 Remove debug capability for the device.
• Use RDP Level 2 for permanently disabling the debug.
1.3 Protect a device against a loss of external clock source (crystal).
• Enable clock security system (CSS).
1.4 Detect a system-level intrusion.
• Use tamper detection capability of the RTC peripheral.
1.5 Protect a device from code injection.
• Use RDP.
• Isolate communication port protocol with MPU, firewall or HDP.
• Limit communication port protocol access range.
• Use write protection on empty memory areas (Flash memory and SRAM).
2. Code protection against internal threats: TrustZone, PCROP, MPU, firewall and HDP
2.1 Protect the code against cloning.
• Use RDP Level 1 or 2 against external access.
• Use PCROP on most sensitive parts of the code against internal access.
• Use OTFDEC to secure code stored in the external memory.
2.2 Protect secret data from other processes.
• Use firewall to protect both code and data.
• Use MPU to protect secret data area from being read.
• Use HDP in case data must only be used at reset.
• Use secure domain of TrustZone if available.
2.3 Protect code and data when libraries that are not fully verified or trusted are used.
• Use PCROP to protect user most sensitive code.
• Use firewall to protect user sensitive application (code, data and execution).
• Use MPU and de-privilege the untrusted library.
• Use IWDG to avoid any deadlock.
• Use secure domain of TrustZone if available.
3. Device security check and maintenance: integrity checks, SB, SFU
3.1 Check code integrity.
• Hash firmware code at reset and compare to expected value.
• Enable ECC on the Flash memory and parity check on the SRAM.
3.2 Security checks or embedded firmware authentication
• Implement SB application with cryptography.
• Protect SB application secret data (refer to previous sections).
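The integrity check of guideline 3.1 can be sketched as follows (illustrative only; on an STM32 the digest would typically be computed with the HASH peripheral or an embedded SHA-256 implementation, and the reference digest stored in protected Flash memory):

```python
import hashlib
import hmac

def check_firmware_integrity(image: bytes, expected_digest: bytes) -> bool:
    """Hash the firmware image at reset and compare to the expected value."""
    digest = hashlib.sha256(image).digest()
    # Constant-time comparison avoids leaking how many leading bytes match.
    return hmac.compare_digest(digest, expected_digest)

firmware = b"\x00\x20\x00\x20" + b"\xaa" * 60     # hypothetical image
reference = hashlib.sha256(firmware).digest()     # stored at build time
assert check_firmware_integrity(firmware, reference)
assert not check_firmware_integrity(firmware + b"\x00", reference)
```

Any single-bit modification of the image yields a completely different digest, so the check fails.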
8 Conclusion
No system can be made secure by simply enabling security features in the hardware. Security must be rooted in
the architecture of the complete solution.
The threats must be identified, the countermeasures correctly designed and implemented in synergy with other
security features.
As security demands considerable resources, it is important to correctly evaluate the risks and spend the
resources efficiently, keeping in mind the cost of attack and the value of the protected asset.
The idea of root of trust is very helpful because it uses a more analytical approach, as opposed to a holistic
approach for which everything depends on everything else.
With the STM32 Series microcontrollers, embedded and IoT security can be very cost-effective and efficient.
The inherent weakness of these algorithms is the key sharing between both parties. It may not be an issue in
secure environments (such as manufacturing plants) but when both parties are distant, the key transfer becomes
a challenge.
Among all secret key algorithms, block-based algorithms are very common since they can be efficiently
accelerated by hardware or software parallel implementations. Typical AES (advanced encryption standard)
algorithms operate on clear blocks of 128 bits. They produce ciphered blocks of the same length using keys of
128, 192 or 256 bits. The different ways to chain consecutive blocks are called “modes of operation”. They include
cipher block chaining (CBC), counter mode (CTR) and Galois counter mode (GCM).
Since these algorithms are deterministic, they always mix the input data with a random value, known as a nonce, used
for one session only as an initialization vector.
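As an illustration of counter-mode chaining, the sketch below derives a keystream from successive nonce-plus-counter blocks. A SHA-256-based toy "block cipher" stands in for AES so that the example needs only the Python standard library; this construction is purely illustrative and must not replace a real AES-CTR implementation:

```python
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # In CTR mode, the block cipher encrypts nonce||counter blocks to
    # produce a keystream. SHA-256 stands in for AES in this sketch.
    stream = b""
    counter = 0
    while len(stream) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(16, "big")).digest()[:16]
        stream += block
        counter += 1
    return stream[:length]

def ctr_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same operation: XOR with the keystream.
    ks = ctr_keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
ct = ctr_crypt(b"sixteen-byte-key", b"unique-nonce-01", msg)
assert ct != msg
assert ctr_crypt(b"sixteen-byte-key", b"unique-nonce-01", ct) == msg
```

Because the counter changes for every block, identical plaintext blocks produce different ciphertext blocks, and reusing a nonce with the same key would reuse the keystream, which is why the nonce must be unique per session.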
• A message encrypted by the public key can only be read by the private key owner.
The difference with a classic CRC lies in the robustness: the operations are more complex and the digest is much
longer, up to 512 bits instead of 16 or 32 bits. CRCs therefore remain reserved for fast integrity checks
during data transfers. The digest length makes hash values virtually unique and ensures that no collision occurs in practice.
Typical algorithms are MD5 (128-bit digest), SHA-1 (160-bit digest), SHA-2 (224-, 256-, 384- or 512-bit digest)
and SHA-3 (224-, 256-, 384- or 512-bit digest).
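The digest lengths quoted above can be checked directly with Python's hashlib module, which implements these algorithms:

```python
import hashlib

msg = b"message to hash"

# Same message, different algorithms: the digest length depends only on the algorithm.
digests = {name: hashlib.new(name, msg).digest()
           for name in ("md5", "sha1", "sha256", "sha3_256", "sha512")}

assert len(digests["md5"]) * 8 == 128       # MD5: 128-bit digest
assert len(digests["sha1"]) * 8 == 160      # SHA-1: 160-bit digest
assert len(digests["sha256"]) * 8 == 256    # SHA-2 family
assert len(digests["sha3_256"]) * 8 == 256  # SHA-3 family
assert len(digests["sha512"]) * 8 == 512
```

Note that MD5 and SHA-1 are listed for completeness only: both are considered broken for collision resistance and should not be used in new designs.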
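The MAC generation of Figure 19, a hash computed over the message with a shared secret key, can be sketched with Python's standard hmac module; the key and message values are, of course, only illustrative:

```python
import hashlib
import hmac

key = b"shared-secret-key"          # known to both sender and receiver
message = b"temperature=21.5C"

# Sender computes the MAC and appends it to the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver recomputes the MAC over the received message and compares
# in constant time: a match proves both integrity and authenticity.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)

# Any modification of the message invalidates the MAC.
forged = hmac.new(key, b"temperature=99.9C", hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged)
```

Unlike a plain hash, the MAC cannot be recomputed by an attacker who intercepts the message, because it depends on the secret key.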
Certificate
A certificate is related to public key algorithms. It authenticates the public key in an asymmetric transfer, and is used
to counteract usurpation by an attacker who could substitute his own key for the legitimate public key. A certificate
consists of the public key signed with a certificate authority (CA) private key. This CA is considered fully trusted.
In addition to the public key, the certificate also contains version numbers, a validity period and some IDs.
Revision history
Contents
1 General information
2 Overview
2.1 Security purpose
3 Attack types
3.1 Introduction to attack types
3.2 Software attacks
3.3 Hardware attacks
3.3.1 Non-invasive attacks
3.3.2 Silicon invasive attacks
3.4 IoT system attack examples
3.5 List of attack targets
4 Device protections
4.1 TrustZone® for Armv8-M architecture
4.2 Dual-core security
4.3 Memory protections
4.3.1 System Flash memory
4.3.2 User Flash memory
4.3.3 Embedded SRAM
4.3.4 External Flash memories
4.3.5 STM32 memory protections overview
4.4 Software isolation
4.5 Debug port and other interfaces protection
4.6 Boot protection
4.7 System monitoring
5 Secure applications
5.1 Root and chain of trust
5.2 ST proprietary SBSFU solution
5.2.1 Secure boot (SB)
5.2.2 Secure firmware update (SFU)
5.3 Arm TF-M solution
5.4 Product certifications
6 STM32 security features
6.1 Security features overview
6.2 Readout protection (RDP)
List of tables
Table 1. Applicable products
Table 2. Glossary
Table 3. Assets to be protected
Table 4. Attack types and costs
Table 5. Memory types and associated protection
Table 6. Scope of STM32 embedded memories protection features
Table 7. Software isolation mechanism
Table 8. Basic feature differences
Table 9. Security features for STM32Fx Series
Table 10. Security features for STM32Lx and STM32Ux Series
Table 11. Security features for STM32H7, STM32G0, STM32G4, STM32WB and STM32WL Series
Table 12. RDP protections
Table 13. Attributes and access permission managed by MPU
Table 14. Process isolation
Table 15. Security use cases
Table 16. Document revision history
List of figures
Figure 1. Corrupted connected device threat
Figure 2. IoT system
Figure 3. Armv8-M TrustZone execution modes
Figure 4. Simplified diagram of dual-core system architecture
Figure 5. Memory types
Figure 6. Secure boot FSM
Figure 7. Secure server/device SFU architecture
Figure 8. Example of RDP protections (STM32L4 Series)
Figure 9. TrustZone implementation at system level
Figure 10. HDP protected firmware access
Figure 11. Firewall FSM
Figure 12. Firewall application example
Figure 13. Dual-core architecture with CKS service
Figure 14. Typical OTFDEC usage in a SoC
Figure 15. Symmetric cryptography
Figure 16. Signature
Figure 17. PKA encryption
Figure 18. Message hashing
Figure 19. MAC generation with secret key algorithm
Figure 20. Signature generation with public key algorithm