Embedded Linux Labs
STM32MP1 variant
Practical Labs
https://bootlin.com
Training setup
Download files and directories used in practical labs
$ cd
$ wget https://bootlin.com/doc/training/sessions/online.embedded-linux.apr2024/\
embedded-linux-labs.tar.xz
$ tar xvf embedded-linux-labs.tar.xz
Lab data are now available in an embedded-linux-labs directory in your home directory. This directory
contains directories and files used in the various practical labs. It will also be used as working space, in
particular to keep generated files separate when needed.
More guidelines
Can be useful throughout any of the labs
• Read instructions and tips carefully. Lots of people make mistakes or waste time because they missed
an explanation or a guideline.
• Always read error messages carefully, in particular the first one which is issued. Some people stumble
on very simple errors just because they specified a wrong file path and didn’t pay enough attention to
the corresponding error message.
• Never stay stuck on a strange problem for more than 5 minutes. Show your problem to your colleagues
or to the instructor.
• You should only use the root user for operations that require super-user privileges, such as: mounting
a file system, loading a kernel module, changing file ownership, configuring the network. Most regular
tasks (such as downloading, extracting sources, compiling...) can be done as a regular user.
• If you ran commands from a root shell by mistake, your regular user may no longer be able to handle
the corresponding generated files. In this case, use the chown -R command to give the new files back
to your regular user.
Example: $ sudo chown -R myuser:myuser linux/
Setup
Go to the $HOME/embedded-linux-labs/toolchain directory and install the packages needed to build Crosstool-ng:
$ sudo apt install build-essential git autoconf bison flex texinfo help2man gawk libtool-bin \
libncurses5-dev unzip
Getting Crosstool-ng
Let’s download the sources of Crosstool-ng, through its git source repository, and switch to a commit that
we have tested:
$ git clone https://github.com/crosstool-ng/crosstool-ng
$ cd crosstool-ng/
$ git checkout crosstool-ng-1.26.0
We can then either install Crosstool-ng globally on the system, or keep it locally in its download directory.
We’ll choose the latter solution. As documented at https://crosstool-ng.github.io/docs/install/#hackers-way, do:
$ ./configure --enable-local
$ make
$ ./ct-ng help
$ ./ct-ng menuconfig
$ ./ct-ng build
The toolchain will be installed by default in $HOME/x-tools/. That’s something you could have changed in
Crosstool-ng’s configuration.
And wait!
You can use the file command on your binary to make sure it has correctly been compiled for the ARM
architecture.
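For instance, assuming your test program is named hello, the check could look like this (the exact output wording depends on your toolchain):

```shell
# Should report an ELF 32-bit ARM executable rather than an x86 one:
file hello
```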
Did you know that you can still execute this binary from your x86 host? To do this, install the QEMU user
emulator, which just emulates target instruction sets, not an entire system with devices:
$ qemu-arm hello
qemu-arm: Could not open '/lib/ld-musl-armhf.so.1': No such file or directory
What’s happening is that qemu-arm is missing the shared library loader (compiled for ARM) that this binary
relies on. Let’s find it in our newly compiled toolchain:
/home/tux/x-tools/arm-training-linux-musleabihf/arm-training-linux-musleabihf/sysroot/lib/
ld-musl-armhf.so.1
We can now use the -L option of qemu-arm to let it know where shared libraries are:
$ qemu-arm -L ~/x-tools/arm-training-linux-musleabihf/arm-training-linux-musleabihf/sysroot \
hello
Hello world!
Cleaning up
Do this only if you have limited storage space. If you made a mistake in the toolchain configuration, you
may need to run Crosstool-ng again; keeping the generated files would then save a significant amount of time.
To save about 9 GB of storage space, do a ./ct-ng clean in the Crosstool-NG source directory. This will
remove the source code of the different toolchain components, as well as all the generated files that are now
useless since the toolchain has been installed in $HOME/x-tools.
As the bootloader is the first piece of software executed by a hardware platform, the installation procedure
of the bootloader is very specific to the hardware platform. There are usually two cases:
• The processor offers nothing to ease the installation of the bootloader, in which case the JTAG has to
be used to initialize flash storage and write the bootloader code to flash. Detailed knowledge of the
hardware is of course required to perform these operations.
• The processor offers a monitor, implemented in ROM, and through which access to the memories is
made easier.
The STM32MP1 SoC falls into the second category. The monitor integrated in the ROM reads the SD card
to search for a valid bootloader (the boot mode is actually configurable via a few input pins). In case no
bootloader is found, it operates in a fallback mode that allows an external tool to reflash an executable
through USB. Therefore, either by using an MMC/SD card or that fallback mode, we can start
up an STM32MP1-based board without having anything installed on it.
Setup
Go to the $HOME/embedded-linux-labs/bootloader directory.
If you run ls -l /dev/ttyACM0, you can also see that only root and users belonging to the dialout group
have read and write access to the serial console. Therefore, you need to add your user to the dialout group:
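On Ubuntu, one way to do this (assuming you want to add the current user) is:

```shell
# Add the current user to the dialout group; takes effect at next login:
sudo adduser $USER dialout
```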
Important: for the group change to be effective, you have to reboot your computer (at least on Ubuntu
22.04) and log in again. A workaround is to run newgrp dialout, but it is not global. You have to run it in
each terminal.
Run picocom -b 115200 /dev/ttyACM0, to start serial communication on /dev/ttyACM0, with a baudrate of
115200. If you wish to exit picocom, press [Ctrl][a] followed by [Ctrl][x].
Don’t be surprised if you don’t get anything on the serial console yet, even if you reset the board. That’s
because the SoC has nothing to boot on yet. We will prepare a micro SD card to boot on in the next
paragraphs.
In our case, fsbl is provided by TF-A BL2 and ssbl is provided by U-Boot.
TF-A BL2 loads U-Boot from the Firmware Image Package (FIP), which also contains the configuration
for this second stage. The FIP is generated from the TF-A sources, so we are first going to build U-Boot.
U-Boot setup
Download U-Boot:
$ git clone https://gitlab.denx.de/u-boot/u-boot
$ cd u-boot
$ git checkout v2023.04
Get an understanding of U-Boot’s configuration and compilation steps by reading the README file, and specif-
ically the Building the Software section.
1. Specify the cross-compiler prefix (the part before gcc in the cross-compiler executable name):
$ export CROSS_COMPILE=arm-linux-
2. Run $ make <NAME>_defconfig , where the list of available configurations can be found in the configs/
directory. There are multiple stm32mp15 configurations. We will use the standard one (stm32mp15).
3. Now that you have a valid initial configuration, you can run $ make menuconfig to further edit
your bootloader features.
• In the Environment submenu, we will configure U-Boot so that it stores its environment inside a
file called uboot.env in an ext4 filesystem:
– Enable Environment is in a EXT4 filesystem. Disable all other options for environment
storage (e.g. MMC, SPI, UBI)
– Device and partition for where to store the environment in EXT4: 0:4
• In the Device Drivers → Watchdog Timer Support submenu, disable IWDG watchdog driver for
STM32 MP's family, so that U-Boot doesn’t start the watchdog.
4. Finally, run
make DEVICE_TREE=stm32mp157a-dk1
which will build U-Boot. The DEVICE_TREE variable specifies the Device Tree that describes
our hardware board. You can see that in this case, U-Boot only ships a Device Tree for the board with
the previous version of the chip (stm32mp157a instead of stm32mp157d). Alternatively, if you wish to
run just make, set our board’s device tree name in the Device Tree Control → Default Device Tree
for DT Control option.
TF-A setup
Get the mainline TF-A sources:
$ cd ..
$ git clone https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git
$ cd trusted-firmware-a/
$ git checkout v2.9
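The build command is not shown here; on this platform it could look like the following sketch, where every option (platform, DTB name, BL33 paths, SD/MMC support) is an assumption to verify against the TF-A documentation for STM32MP1:

```shell
# Hypothetical TF-A build sketch for STM32MP1 (check options against the
# TF-A docs; the BL33 paths assume U-Boot was built in ../u-boot):
make ARM_ARCH_MAJOR=7 ARCH=aarch32 PLAT=stm32mp1 AARCH32_SP=sp_min \
     DTB_FILE_NAME=stm32mp157a-dk1.dtb STM32MP_SDMMC=1 \
     BL33=../u-boot/u-boot-nodtb.bin BL33_CFG=../u-boot/u-boot.dtb \
     fip all
```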
At the end of the build, the important output files generated are located in build/stm32mp1/release/. We
will find there:
• tf-a-stm32mp157a-dk1.stm32, which is TF-A BL2, serving as our first stage bootloader
• fip.bin, which is the FIP image, which itself includes U-Boot. This image will serve as the second
stage bootloader.
So, as far as bootloaders are concerned, the SD card partitioning will look like:
Number Start End Size File system Name Flags
1 2048s 4095s 2048s fsbl1
2 4096s 6143s 2048s fsbl2
3 6144s 10239s 4096s fip
4 10240s 131071s 120832s bootfs
On your workstation, plug in the SD card your instructor gave you. Type the sudo dmesg command to see
which device is used by your workstation. In case the device is /dev/mmcblk0, you will see something like
[46939.425299] mmc0: new high speed SDHC card at address 0007
[46939.427947] mmcblk0: mmc0:0007 SD16G 14.5 GiB
The device file name may be different (such as /dev/sdb if the card reader is connected to a USB bus,
either internally or through a USB card reader).
In the following instructions, we will assume that your SD card is seen as /dev/mmcblk0 by your PC work-
station.
Type the mount command to check your currently mounted partitions. If SD partitions are mounted, unmount
them:
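Assuming the card shows up as /dev/mmcblk0 and its partitions were auto-mounted, unmounting could look like:

```shell
# Unmount all partitions of the SD card:
sudo umount /dev/mmcblk0p*
```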
We will erase the existing partition table and partition contents by simply zeroing the first 128 MiB of the
SD card:
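A possible command, assuming the card is /dev/mmcblk0 (double-check the device name first, this is destructive):

```shell
# Overwrite the first 128 MiB of the card with zeros:
sudo dd if=/dev/zero of=/dev/mmcblk0 bs=1M count=128
```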
The ROM monitor handles GPT partition tables, so let’s create one. Run sudo parted /dev/mmcblk0 and type:
(parted) mklabel gpt
Then, the 4 partitions are created with:
(parted) mkpart fsbl1 0% 4095s
(parted) mkpart fsbl2 4096s 6143s
(parted) mkpart fip 6144s 10239s
(parted) mkpart bootfs 10240s 131071s
You can verify everything looks right with:
(parted) print
Model: SD SA08G (sd/mmc)
Disk /dev/mmcblk0: 7747MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
(parted)
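The bootfs partition then needs to be formatted as ext4. A possible command, assuming bootfs is the fourth partition of /dev/mmcblk0:

```shell
# Create an ext4 filesystem labeled "bootfs", without metadata checksums:
sudo mkfs.ext4 -L bootfs -O ^metadata_csum /dev/mmcblk0p4
```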
The -O ^metadata_csum option creates the filesystem without enabling metadata checksums, which
U-Boot doesn’t seem to support yet.
Now write the TF-A binary in both fsbl partitions:
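Assuming the SD card is /dev/mmcblk0, this could be done with dd (partition numbers follow the table above, and the output file location follows the TF-A build output described earlier):

```shell
# Copy TF-A BL2 into the two fsbl partitions:
sudo dd if=build/stm32mp1/release/tf-a-stm32mp157a-dk1.stm32 of=/dev/mmcblk0p1 conv=fsync
sudo dd if=build/stm32mp1/release/tf-a-stm32mp157a-dk1.stm32 of=/dev/mmcblk0p2 conv=fsync
```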
Then flash the fip partition with the Firmware Image Package containing U-Boot, the BL32 monitor and
their configuration (device tree):
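Similarly, a possible command for the fip partition:

```shell
# Copy the FIP image (U-Boot + BL32 monitor + device tree) to partition 3:
sudo dd if=build/stm32mp1/release/fip.bin of=/dev/mmcblk0p3 conv=fsync
```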
Setting up networking
The next step is to configure U-boot and your workstation to let your board download files, such as the
kernel image and Device Tree Binary (DTB), using the TFTP protocol through a network connection.
With a network cable, connect the Ethernet port of your board to the one of your computer. If your computer
already has a wired connection to the network, your instructor will provide you with a USB Ethernet adapter.
A new network interface should appear on your Linux system.
Of course, make sure that this address belongs to a separate network segment from the one of the main
company network.
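On the U-Boot command line, the board and server addresses could be set as follows (the 192.168.0.x addresses are an assumption, consistent with the NFS setup used later in these labs):

```
=> setenv ipaddr 192.168.0.100
=> setenv serverip 192.168.0.1
```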
To make these settings permanent, save the environment:
=> saveenv
Back on your workstation, identify the network interface connected to the board:
$ ip a
The network interface name is likely to be enxxx. If you have a pluggable Ethernet device, it’s easy to
identify as it’s the one that shows up after plugging in the device.
Then, instead of configuring the host IP address from NetworkManager’s graphical interface, let’s do it
through its command line interface, which is so much easier to use:
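A possible invocation, with a hypothetical interface name to replace by the one reported by ip a:

```shell
# Create a connection with a static IP on the interface facing the board:
nmcli con add type ethernet ifname <interface> ip4 192.168.0.1/24
```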
The tftp command should have downloaded the textfile.txt file from your development workstation into
the board’s memory at location 0xc2000000.
You can verify that the download was successful by dumping the contents of the memory:
=> md 0xc2000000
We will see in the next labs how to use U-Boot to download, flash and boot a kernel.
Rescue binaries
If you have trouble generating binaries that work properly, or later make a mistake that causes you to lose
your bootloader binaries, you will find working versions under data/ in the current lab directory.
• Get the kernel sources from git, using the official Linux source tree.
• Fetch the sources for the stable Linux releases, by declaring a remote tree and getting stable branches
from it.
Setup
Create the $HOME/embedded-linux-labs/kernel directory and go into it.
Since the Linux kernel git repository is huge, our goal here is to start downloading it right now, before starting
the lectures about the Linux kernel.
However, this requires downloading more than 2.7 GB of data. If you are running this command from home,
or if you have very fast access to the Internet at work (and if you are not 256 participants in the training
room), you can do it directly by connecting to https://git.kernel.org:
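The clone command for the official tree would be:

```shell
# Clone the mainline Linux kernel (a download of more than 2.7 GB):
git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
```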
If Internet access is not fast enough and if multiple people have to share it, your instructor will give you a
USB flash drive with a tar.gz archive of a recently cloned Linux source tree.
You will just have to extract this archive in the current directory, and then pull the most recent changes over
the network:
tar xf linux-git.tar.gz
cd linux
git checkout master
git pull
Of course, if you directly ran git clone, you won’t have to run git pull, as git clone already retrieved the
latest changes. You may need to run git pull in the future though, if you want to update to a newer Linux
version.
We will add this separate repository as another remote to be able to use the stable releases:
git remote add stable https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux
git fetch stable
As this still represents many git objects to download (450 MiB when 5.9 was the latest version), if you are
using an already downloaded git tree, your instructor will probably have fetched the stable branch ahead of
time for you too. You can check by running:
git branch -a
We will choose a particular stable version in the next labs.
Now, let’s continue the lectures. This will leave time for the commands that you typed to complete their
execution (if needed).
Kernel - Cross-compiling
Objective: Learn how to cross-compile a kernel for an ARM target platform.
Setup
Stay in the $HOME/embedded-linux-labs/kernel directory.
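If you have not yet switched to a 6.1 stable release, the checkout could look like this (branch name assumed from the stable tree's linux-<version>.y naming convention, using the stable remote added earlier):

```shell
git checkout stable/linux-6.1.y
```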
Check the version again using the make kernelversion command to make sure you now have a 6.1.x version.
$ export PATH=$HOME/x-tools/arm-training-linux-musleabihf/bin:$PATH
Cross compiling
You’re now ready to cross-compile your kernel. Simply run:
$ make
and wait a while for the kernel to compile. Don’t forget to use make -j<n> if you have multiple cores on your
machine!
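Note that the kernel build needs to know the target architecture; if ARCH is not already set in your environment, pass it on the command line (a sketch, assuming the toolchain prefix from the previous labs):

```shell
# Cross-build the kernel with all available CPU cores:
make ARCH=arm CROSS_COMPILE=arm-linux- -j$(nproc)
```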
Look at the kernel build output to see which file contains the kernel image.
Also look in the Device Tree Source directory to see which .dtb files got compiled. Find which .dtb file
corresponds to your board.
You should see Linux boot and finally panicking. This is expected: we haven’t provided a working root
filesystem for our device yet.
You can now automate all this every time the board is booted or reset. Reset the board, and customize
bootcmd:
=> setenv bootcmd 'tftp 0xc2000000 zImage; tftp 0xc4000000 stm32mp157a-dk1.dtb; bootz
0xc2000000 - 0xc4000000'
=> saveenv
Restart the board to make sure that booting the kernel is now automated.
Lab implementation
While developing a root filesystem for a device, a developer needs to make frequent changes to the
filesystem contents, like modifying scripts or adding newly compiled programs.
It isn’t practical at all to reflash the root filesystem on the target every time a change is made. Fortunately,
it is possible to set up networking between the development workstation and the target. Then, workstation
files can be accessed by the target through the network, using NFS.
Unless you test a boot sequence, you no longer need to reboot the target to test the impact of script or
application updates.
Setup
Go to the $HOME/embedded-linux-labs/tinysystem/ directory.
Kernel configuration
We will re-use the kernel sources from our previous lab, in $HOME/embedded-linux-labs/kernel/.
In the kernel configuration built in the previous lab, verify that you have all options needed for booting the
system using a root filesystem mounted over NFS. Also check that CONFIG_DEVTMPFS_MOUNT is enabled (we
will explain it later in this lab). If necessary, rebuild your kernel.
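For NFS root support, the options to verify typically include the following (a sketch; exact symbols may vary with the kernel version):

```
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
```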
Install the NFS server by installing the nfs-kernel-server package if you don’t have it yet. Once installed,
edit the /etc/exports file as root to add the following line, assuming that the IP address of your board will
be 192.168.0.100:
/home/<user>/embedded-linux-labs/tinysystem/nfsroot 192.168.0.100(rw,no_root_squash,
no_subtree_check)
Make sure that the path and the options are on the same line. Also make sure that there is no space between
the IP address and the NFS options, otherwise default options will be used for this IP address, causing your
root filesystem to be read-only.
$ sudo exportfs -r
Of course, you need to adapt the IP addresses to your exact network setup. Save the environment variables
(with saveenv).
Now, boot your system. The kernel should be able to mount the root filesystem over NFS:
If the kernel fails to mount the NFS filesystem, look carefully at the error messages in the console. If this
doesn’t give any clue, you can also have a look at the NFS server logs in /var/log/syslog.
However, at this stage, the kernel should stop because of the below issue:
This happens because the kernel is trying to mount the devtmpfs filesystem in /dev/ in the root filesystem.
This virtual filesystem contains device files (such as ttyS0) for all the devices known to the kernel, and with
CONFIG_DEVTMPFS_MOUNT, our kernel tries to automatically mount devtmpfs on /dev.
To address this, just create a dev directory under nfsroot and reboot.
Now, the kernel should complain for the last time, saying that it can’t find an init application:
Kernel panic - not syncing: No working init found. Try passing init= option to
kernel. See Linux Documentation/admin-guide/init.rst for guidance.
Obviously, our root filesystem being mostly empty, there isn’t such an application yet. In the next paragraph,
you will add BusyBox to your root filesystem and finally make it usable.
Now, configure BusyBox with the configuration file provided in the data/ directory (remember that the
BusyBox configuration file is .config in the BusyBox sources).
Then, you can use $ make menuconfig to further customize the BusyBox configuration. At least, keep the
setting that builds a static BusyBox. Compiling BusyBox statically in the first place makes it easy to set
up the system, because there are no dependencies on libraries. Later on, we will set up shared libraries and
recompile BusyBox.
Build BusyBox using the toolchain that you used to build the kernel.
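A possible build command, assuming the toolchain is in your PATH with the arm-linux- prefix:

```shell
# BusyBox honors the CROSS_COMPILE variable like the kernel does:
make CROSS_COMPILE=arm-linux-
```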
Going back to the BusyBox configuration interface, check the installation directory for BusyBox. Set it to
the path to your nfsroot directory.
Now run $ make install to install BusyBox in this directory.
Try to boot your new system on the board. You should now reach a command line prompt, allowing you to
execute the commands of your choice.
Virtual filesystems
Run the $ ps command. You can see that it complains that the /proc directory does not exist. The ps
command and other process-related commands use the proc virtual filesystem to get their information from
the kernel.
From the Linux command line in the target, create the proc, sys and etc directories in your root filesystem.
Now mount the proc virtual filesystem. Once /proc is available, test the ps command again.
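On the target, this is done with:

```shell
# Mount the proc virtual filesystem on /proc:
mount -t proc nodev /proc
```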
Note that you can also now halt your target in a clean way with the halt command, thanks to proc being
mounted.
When nothing is specified before the leading ::, /dev/console is used. However, while this device is fine for
a simple shell, it is not elaborate enough to support things such as job control ([Ctrl][c] and [Ctrl][z]),
allowing to interrupt and suspend jobs.
So, to get rid of the warning message, we need init to run /bin/sh in a real terminal device:
ttySTM0::askfirst:/bin/sh
Reboot the system and the message will be gone!
Going further
If you have time before the others complete their labs...
Initramfs booting
Configure your kernel to include the contents of the nfsroot directory as an initramfs.
Before doing this, you will need to create an init link in the toplevel directory to sbin/init, because the
kernel will try to execute /init.
You will also need to mount devtmpfs from the rcS script, it cannot be mounted automatically by the kernel
when you’re booting from an initramfs.
Note: you won’t need to modify your root= setting in the kernel command line. It will just be ignored if you
have an initramfs.
When this works, go back to booting the system through NFS. This will be much more convenient in the
next labs.
Goals
Now that we have access to a command line shell thanks to a working root filesystem, we can explore
existing devices and make new ones available. In particular, we will make changes to the Device Tree and
compile an out-of-tree Linux kernel module.
Setup
Go to the $HOME/embedded-linux-labs/hardware directory, which provides useful files for this lab.
However, we will go on booting the system through NFS, using the root filesystem built by the previous lab.
Exploring /dev
Start by exploring /dev on your target system. Here are a few noteworthy device files that you will see:
• Terminal devices: devices starting with tty. Terminals are user interfaces taking text as input and
producing text as output, and are typically used by interactive shells. In particular, you will find
console which matches the device specified through console= in the kernel command line. You will
also find the ttySTM0 device file.
• Pseudo-terminal devices: devices starting with pty, used when you connect through SSH for example.
Those are virtual devices, but there are so many in /dev that we wanted to give a description here.
• MMC device(s) and partitions: devices starting with mmcblk. You should here recognize the MMC
device(s) on your system and the associated partitions.
• If you have a real board (not QEMU) and a USB stick, you could plug it in and if your kernel was built
with USB host and mass storage support, you should see a new sda device appear, together with the
sda<n> devices for its partitions.
Don’t hesitate to explore /dev on your workstation too and ask any questions to your instructor.
Exploring /sys
The next thing you can explore is the Sysfs filesystem.
A good place to start is /sys/class, which exposes devices classified by the kernel frameworks which manage
them.
For example, go to /sys/class/net, and you will see all the networking interfaces on your system, whether
they are internal, external or virtual ones.
Find which subdirectory corresponds to the network connection to your host system, and then check device
properties such as:
• speed: will show you whether this is a gigabit or hundred megabit interface.
• address: will show the device MAC address. No need to get it from a complex command!
• statistics/rx_bytes will show you how many bytes were received on this interface.
Don’t hesitate to look for further interesting properties by yourself!
You can also check whether /sys/class/thermal exists and is not empty on your system. That’s the thermal
framework, and it allows accessing temperature measurements from the thermal sensors on your system.
Next, you can now explore all the buses (virtual or physical) available on your system, by checking the
contents of /sys/bus.
In particular, go to /sys/bus/mmc/devices to see all the MMC devices on your system. Go inside the directory
for the first device and check several files (for example):
• preferred_erase_size: the preferred erase block size for your device. It’s recommended that partitions
start at multiples of this size.
• name: the product name for your device. You could display it in a user interface or log file, for example.
Don’t hesitate to spend more time exploring /sys on your system and asking questions to your instructor.
Driving GPIOs
At this stage, we can only explore GPIOs through the legacy interface in /sys/class/gpio, because the
libgpiod interface commands are provided through a dedicated project which we have to build separately, and
Busybox does not provide a re-implementation for the libgpiod tools. In a later lab, we will build libgpiod
tools which use the modern /dev/gpiochipX interface.
The first thing to do is to enable this legacy interface by enabling CONFIG_GPIO_SYSFS in the kernel configu-
ration. Also make sure Debugfs is enabled (CONFIG_DEBUG_FS and CONFIG_DEBUG_FS_ALLOW_ALL).
After rebooting the new kernel, the first thing to do is to mount the Debugfs filesystem:
Then, you can check information about available GPIOs banks and which GPIOs are already in use:
# cat /sys/kernel/debug/gpio
We are now going to use one of the Arduino Uno header pins at the back of the board, which is not already
used by another device.
Take one of the M-M breadboard wires provided by your instructor and:
If you check the Pinout of the Arduino™ connectors table in the board documentation, you will see that
the ARD_D2 pin on the board is connected to the PE1 STM32 pin. PE1 is actually a GPIO pin on GPIO
bank E, and is configured as a GPIO by default (no need to change pin muxing to use this pin as a GPIO).
If you get back to the contents of /sys/kernel/debug/gpio, you’ll find that GPIO bank E corresponds to
gpiochip4 and to GPIO numbers 64 to 79. Hence, PE1, the second pin on this bank corresponds to GPIO
number 65.
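The arithmetic behind this mapping can be checked with a quick shell computation (each bank has 16 GPIOs; bank A is index 0, so bank E is index 4):

```shell
# Legacy global GPIO number = 16 * bank_index + pin_within_bank
# PE1: bank E (index 4), pin 1
echo $((16 * 4 + 1))
```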
We now have everything we need to drive this GPIO using the legacy interface. First, let’s enable it:
# cd /sys/class/gpio
# echo 65 > export
If the pin is indeed still available, a new gpio65 directory should appear in /sys/class/gpio.
We can now configure this pin as input and read its value:
# echo in > gpio65/direction
# cat gpio65/value
1
Note that you could also configure the pin as output and set its value through the value file. This way, you
could add an external LED to your board, for example.
Before moving on to the next section, you can also check /sys/kernel/debug/gpio again, and see that
gpio-65 is now in use, through the sysfs interface, and is configured as an input pin.
When you’re done, you can release the GPIO:
# echo 65 > unexport
Driving LEDs
First, make sure your kernel is compiled with CONFIG_LEDS_CLASS=y, CONFIG_LEDS_GPIO=y and CONFIG_LEDS_
TRIGGER_TIMER=y.
Then, go to /sys/class/leds to see all the LEDs that you are allowed to control.
Let’s control the LED which is called heartbeat.
Go into the directory for this LED, and check its trigger (what routine is used to drive its value):
# cat trigger
As you can see, there are many triggers to choose from, the current being heartbeat, corresponding to the
CPU activity.
You can disable all triggers by writing none to the trigger file:
# echo none > trigger
You could also use the timer trigger to blink the LED with specified time on and time off (in milliseconds):
# echo timer > trigger
# echo 500 > delay_on
# echo 500 > delay_off
# i2cdetect -l
i2c-1 i2c STM32F7 I2C(0x5c002000) I2C adapter
i2c-0 i2c STM32F7 I2C(0x40012000) I2C adapter
i2c-0 is the I2C controller with registers at 0x40012000, which is I2C1 in the STM32MP1 nomenclature.
i2c-1 is the I2C controller with registers at 0x5c002000, which is I2C4 in the STM32MP1 nomenclature.
Refer to the STM32MP1 memory map in the datasheet for details. Pay attention to the numbering difference:
i2c-0, i2c-1 is the Linux numbering, based on the registration order of enabled I2C busses. Here, because
only I2C1 and I2C4 are enabled, they are called i2c-0 and i2c-1.
Using the datasheet for the SoC (https://www.st.com/resource/en/reference_manual/dm00327659-stm32mp157-advanced-arm-based-32-bit-mpus-stmicroelectronics.pdf), we can find the base address of the registers for the
I2C5 controller: it is 0x40015000.
&i2c5 {
status = "okay";
/delete-property/ pinctrl-names;
};
As you can see, it’s also possible to include dts files, and not only dtsi ones.
Why the /delete-property/ statement? That’s because we want to see what happens when a device doesn’t
have associated pin definitions yet.
A device like an I2C controller node is typically declared in the DTSI files for the SoC, without pin settings
as these are board specific. Pin definitions are then usually defined at board level.
In our case, we don’t see such definitions, but they are actually found in the arch/arm/boot/dts/stm32mp15xx-dkx.dtsi
file, shared between multiple stm32mp15 DK boards, which is included by the toplevel Device Tree
for our board.
Modify the arch/arm/boot/dts/Makefile file to add your custom Device Tree, and then have it compiled
(make dtbs).
Reboot your board with the update.
Back to the running system, we can now see that there is one more I2C bus. We can also recognize the I2C5
address (0x40015000), now associated with the i2c-1 device name, which already existed previously but was
mapped to a different physical device:
# i2cdetect -l
i2c-1 i2c STM32F7 I2C(0x40015000) I2C adapter
i2c-2 i2c STM32F7 I2C(0x5c002000) I2C adapter
i2c-0 i2c STM32F7 I2C(0x40012000) I2C adapter
Now, let’s use i2cdetect’s capability to probe a bus for devices. Let’s start with the bus associated with i2c-2:
# i2cdetect -r 2
i2cdetect: WARNING! This program can confuse your I2C bus
Continue? [y/N] y
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- 28 -- -- -- -- -- -- --
30: -- -- -- UU -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
Two devices are detected:
• One at address 0x33, indicated by UU, which means that there is a kernel driver actively driving this
device.
• Another one at address 0x28. We just know that it’s currently not bound to a kernel driver.
You will see that the command will fail to connect to the bus. That’s because the corresponding signals are
not exposed yet to the outside connectors through pin muxing.
So, get back to your Device Tree and remove the /delete-property/ line. Recompile your Device Tree and
reboot.
# i2cdetect -r 1
i2cdetect: WARNING! This program can confuse your I2C bus
Continue? [y/N] y
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
No device is detected yet, because this bus is just used for external devices. It’s time to add one though.
Wire the device to the board: connect its SCL, SDA, PWR and GND pins to the corresponding pins of the
connector.
# i2cdetect -r 1
i2cdetect: WARNING! This program can confuse your I2C bus
Continue? [y/N] y
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- 52 -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
We will later compile an out-of-tree kernel module to support this device.
# lsusb
Bus 002 Device 002: ID 0424:2514
Bus 001 Device 001: ID 1d6b:0002
Bus 002 Device 001: ID 1d6b:0002
Now, when you plug the USB headset, a number of messages should appear on the console, and running
lsusb again should show an additional device:
# lsusb
Bus 002 Device 004: ID 0d8c:0014
Bus 002 Device 002: ID 0424:2514
Bus 001 Device 001: ID 1d6b:0002
Bus 002 Device 001: ID 1d6b:0002
The device of vendor ID 0d8c and product ID 0014 has appeared. Of course, this depends on the actual USB
audio device that you used.
The device also appears in /sys/bus/usb/devices/, in a directory whose name depends on the topology of
the USB bus. When the device is plugged in, the kernel messages show:
# cd /sys/bus/usb/devices/1-1.3
# cat idVendor
0d8c
# cat idProduct
0014
# cat manufacturer
C-Media Electronics Inc.
# cat product
USB Audio Device
However, while the USB device is detected, we currently do not have any driver for this device, so no actual
sound card is detected.
Look for the CONFIG_SND_USB_AUDIO parameter in the kernel configuration, and you should find that it is already enabled as a module.
So, instead of compiling the corresponding driver as a built-in, that’s a good opportunity to practice with
kernel modules.
So, compile your modules:
make modules
Then, following details given in the lectures, install the modules in our NFS root filesystem ($HOME/embedded-
linux-labs/tinysystem/nfsroot).
Also make sure to update the kernel image (make zImage), and reboot the board. Indeed, due to the
changes we have made to the kernel source code, the kernel version is now 6.1.<x>-dirty, the dirty keyword
indicating that the Git working tree has uncommitted changes. The modules are therefore installed in
/lib/modules/6.1.<x>-dirty/, and the version of the running Linux kernel must match this.
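The version-matching rule can be illustrated with a small, host-runnable sketch; it just shows where modprobe looks, using your host's own kernel version string:

```shell
# modprobe only searches /lib/modules/$(uname -r)/: if the directory name
# written by modules_install does not match the running kernel's version
# string exactly, the module will not be found.
kver=$(uname -r)
echo "modprobe will search: /lib/modules/${kver}/"
```

On the board, this is why the -dirty suffix of the installed directory must match the version reported by the freshly booted kernel.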
After rebooting, try to load the module that we need:
modprobe snd-usb-audio
By running lsmod, see all the module dependencies that were loaded too.
You can also see that a new USB device driver appeared in /sys/bus/usb/drivers/snd-usb-audio. This
directory shows which USB devices are bound to this driver.
You can check that /proc/asound now exists (thanks to loading modules for ALSA, the Linux sound
subsystem), and that one sound card is available:
# cat /proc/asound/cards
0 [Device ]: USB-Audio - USB Audio Device
C-Media Electronics Inc. USB Audio Device at usb-5800d000.usb-1.1, full \
speed
Check also the /dev/snd directory, which should now contain some character device files. These will be used
by the user-space libraries and applications to access the audio devices.
Modify your startup scripts so that the snd-usb-audio module is always loaded at startup.
We cannot test the sound card yet, as we will need to build some software first. Be patient, this is coming
soon.
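As a sketch, such a startup script could look like the one below. It is written to /tmp here so it can run anywhere; on the target it would be an executable file such as etc/init.d/S03modprobe, following the BusyBox init naming convention also used later in these labs:

```shell
# Create a BusyBox-style init script that loads the USB audio driver at
# boot. The /tmp path is for illustration; on the target root filesystem
# the script would live in etc/init.d/.
cat > /tmp/S03modprobe <<'EOF'
#!/bin/sh
modprobe snd-usb-audio
EOF
chmod +x /tmp/S03modprobe
cat /tmp/S03modprobe
```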
make -C $HOME/embedded-linux-labs/kernel/linux \
M=$PWD \
INSTALL_MOD_PATH=$HOME/embedded-linux-labs/tinysystem/nfsroot \
modules_install
You can see that this installs out-of-tree kernel modules under lib/modules/<version>/extra/.
Back on the target, you can now check that your custom module can be loaded:
# modprobe nunchuk
[ 4317.737978] nunchuk: loading out-of-tree module taints kernel.
See kbuild/modules in kernel documentation for details about building out-of-tree kernel modules.
However, run i2cdetect -r 1 again. You will see that the Nunchuk is still detected, but still not driven by
the kernel. Otherwise, it would be signaled by the UU character. You may also look at the nunchuk.c file and
notice a Nunchuk device probed successfully message that you didn’t see when loading the module.
That’s because the Linux kernel doesn’t know about the Nunchuk device yet, even though the driver for this
kind of device is already loaded. Our device also has to be described in the Device Tree.
You can confirm this by having a look at the contents of the /sys/bus/i2c directory. It contains two
subdirectories: devices and drivers.
In drivers, there should be a nunchuk subdirectory, but no symbolic link to a device yet. In devices you
should see some devices, but not the Nunchuk one yet.
nunchuk: joystick@52 {
compatible = "nintendo,nunchuk";
reg = <0x52>;
};
};
Here are a few notes:
• The clock-frequency property is used to configure the bus to operate at 100 kHz. This is supposed
to be required for the Nunchuk.
• The Nunchuk device is added through a child node in the I2C controller node.
• For the kernel to probe and drive our device, it’s required that the compatible string matches one of
the compatible strings supported by the driver.
• The reg property is the address of the device on the I2C bus. If it doesn’t match, the driver will probe
the device but won’t be able to communicate with it.
Recompile your Device Tree and reboot your kernel with the new binary.
You can now load your module again, and this time, you should see that the Nunchuk driver probed the
Nunchuk device:
# modprobe nunchuk
# cat /dev/input/event3 | od -x
Caution: using od directly on input event files should work but is currently broken with the Musl library.
We are investigating this issue.
We will use the Nunchuk to control audio playback in an upcoming lab.
This is necessary to create a commit with the git commit -s command, as required by the Linux kernel
contribution guidelines.
Let’s create the branch and the patch now:
Goals
After doing the A tiny embedded system lab, we are going to copy the filesystem contents to the SD card.
The storage will be split into several partitions, and your board will boot on a root filesystem on this SD
card, without using NFS anymore.
Setup
Throughout this lab, we will continue to use the root filesystem we have created in the $HOME/embedded-
linux-labs/tinysystem/nfsroot directory, which we will progressively adapt to use block filesystems.
You are now ready to modify bootcmd to boot the board from SD card. But first, save the settings for booting
from tftp:
This will be useful to switch back to tftp booting mode later in the labs.
Finally, using editenv bootcmd, adjust bootcmd so that the board starts using the kernel from the SD card.
Now, reset the board to check that it boots in the same way from the SD card.
Now, the whole system (bootloader, kernel and filesystems) is stored on the SD card. That’s very useful
for product demos, for example. You can switch demos by switching SD cards, and the system depends on
nothing else. In particular, no networking is necessary.
To illustrate how to use existing libraries and applications, we will extend the small root filesystem built in
the A tiny embedded system lab to add the ALSA libraries and tools to run basic sound support tests, and
the libgpiod library and executables to manage GPIOs. ALSA stands for Advanced Linux Sound Architecture,
and is the Linux audio subsystem.
We’ll see that manually re-using existing libraries is quite tedious, which is why more automated procedures
are necessary to make this easier. However, learning how to perform these operations manually will significantly
help you when you face issues with more automated tools.
Of course, all these libraries rely on the C library, which is not mentioned here, because it is already part
of the root filesystem built in the A tiny embedded system lab. You might wonder how to figure out this
dependency tree by yourself. Basically, there are several ways, that can be combined:
• Read the library documentation, which often mentions the dependencies;
• Read the help message of the configure script (by running ./configure --help);
• Run the configure script, try to compile, and look at the errors.
To configure, compile and install all the components of our system, we’re going to start from the bottom of
the tree with alsa-lib, then continue with alsa-utils. Then, we will also build libgpiod and ipcalc.
Preparation
For our cross-compilation work, we will need two separate spaces:
• A staging space in which we will directly install all the packages: non-stripped versions of the libraries,
headers, documentation and other files needed for the compilation. This staging space can be quite big,
but will not be used on our target, only for compiling libraries or applications;
• A target space, in which we will only copy the required files from the staging space: binaries and
libraries, after stripping, configuration files needed at runtime, etc. This target space will take a lot
less space than the staging space, and it will contain only the files that are really needed to make the
system work on the target.
To sum up, the staging space will contain everything that’s needed for compilation, while the target space
will contain only what’s needed for execution.
Create the $HOME/embedded-linux-labs/thirdparty directory, and inside, create two directories: staging
and target.
For the target, we need a basic system with BusyBox and initialization scripts. We will re-use the system
built in the A tiny embedded system lab, so copy this system in the target directory:
$ cp -a $HOME/embedded-linux-labs/tinysystem/nfsroot/* target/
Note that for this lab, a lot of typing will be required. To save time typing, we advise you to copy and paste
commands from the electronic version of these instructions.
Testing
Make sure the target/ directory is exported by your NFS server to your board by modifying /etc/exports
and restarting your NFS server.
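As a reminder of the syntax, an /etc/exports entry could look like the one printed below. The client IP address and the export options are assumptions for illustration; adapt them to your board's address, then reload the exports (for example with exportfs -r):

```shell
# Print a sample NFS export line for the target directory built in this
# lab. The <user> placeholder and the 192.168.0.100 address are
# illustrative only; substitute your own path and board IP.
echo '/home/<user>/embedded-linux-labs/thirdparty/target 192.168.0.100(rw,no_root_squash,no_subtree_check)'
```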
Make your board boot from this new directory through NFS.
alsa-lib
alsa-lib is a library that handles interaction with the ALSA subsystem. It is available at https:
//alsa-project.org. Download version 1.2.9, and extract it in $HOME/embedded-linux-labs/thirdparty/.
Tip: if the website for any of the source packages that we need to download in the next sections is down, a
great mirror that you can use is http://sources.buildroot.net/.
Back in the alsa-lib sources, look at the configure script and see that it has been generated by autoconf (the
header contains a sentence like Generated by GNU Autoconf 2.69). Most of the time, autoconf comes with
automake, which generates Makefiles from Makefile.am files. So alsa-lib uses a rather common build system.
Let’s try to configure and build it:
$ ./configure
$ make
If you look at the generated binaries, you’ll see that they are x86 ones, because we compiled the sources with
gcc, the default compiler. This is obviously not what we want, so let’s clean up the generated objects and
tell the configure script to use the ARM cross-compiler:
$ make clean
$ CC=arm-linux-gcc ./configure
Of course, the arm-linux-gcc cross-compiler must be in your PATH prior to running the configure script. The
CC environment variable is the classical name for specifying the compiler to use.
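Before configuring, you can check that the cross-compiler is actually reachable; command -v prints the resolved path when the tool is found:

```shell
# Verify that arm-linux-gcc can be found in PATH, and print a hint if not.
if command -v arm-linux-gcc >/dev/null 2>&1; then
    echo "cross-compiler found: $(command -v arm-linux-gcc)"
else
    echo "arm-linux-gcc not in PATH - add your toolchain's bin/ directory"
fi
```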
Quickly, you should get an error saying:
checking whether we are cross compiling... configure: error: in `/home/tux/embedded-linux-labs
/thirdparty/alsa-lib-1.2.9':
configure: error: cannot run C compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details
If you look at the config.log file, you can see that the configure script compiles a binary with the cross-compiler
and then tries to run it on the development workstation. This is a rather usual thing for a configure
script to do, which is why it tests so early whether it is actually doable, and bails out if not.
Obviously, this cannot work in our case, and the script exits. The job of the configure script is to test the
configuration of the system. To do so, it tries to compile and run a few sample applications to check whether
a given library is available, whether a given compiler option is supported, etc. But in our case, running the
test examples is definitely not possible.
We need to tell the configure script that we are cross-compiling, and this can be done using the --build
and --host options, as described in the help of the configure script:
System types:
--build=BUILD configure for building on BUILD [guessed]
--host=HOST cross-compile to build programs to run on HOST [BUILD]
The --build option lets you specify on which system the package is built, while the --host option lets you
specify on which system the package will run. By default, the value of the --build option is guessed,
and the value of --host is the same as the value of the --build option. The value is guessed using the
./config.guess script, which on your system should return x86_64-pc-linux-gnu. See https://www.gnu.
org/software/autoconf/manual/html_node/Specifying-Names.html for more details on these options.
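As a side note, a configuration name (triplet) has the form cpu-vendor-os; splitting the value mentioned above shows the fields configure reasons about:

```shell
# Split a GNU triplet into its cpu / vendor / os fields using plain
# POSIX parameter expansion.
triplet="x86_64-pc-linux-gnu"
cpu=${triplet%%-*}      # everything before the first dash
rest=${triplet#*-}
vendor=${rest%%-*}
os=${rest#*-}
echo "cpu=$cpu vendor=$vendor os=$os"   # -> cpu=x86_64 vendor=pc os=linux-gnu
```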
$ ./configure --host=arm-linux
The configure script should end properly now, and create a Makefile.
However, there is one subtle issue to handle: we need to tell alsa-lib to disable a feature called alsa topology.
alsa-lib itself would build fine, but we would then encounter problems when building alsa-utils. So you
should configure alsa-lib as follows:
Look at the result of compiling in src/.libs: a set of object files and a set of libasound.so* files.
The libasound.so* files are a dynamic version of the library. The shared library itself is libasound.so.2.0.0,
it has been generated by the following command line:
$ arm-linux-gcc -shared conf.o confmisc.o input.o output.o async.o error.o dlmisc.o socket.o \
shmarea.o userfile.o names.o -lm -ldl -lpthread -lrt -Wl,-soname -Wl,libasound.so.2 -o \
libasound.so.2.0.0
$ ln -s libasound.so.2.0.0 libasound.so.2
$ ln -s libasound.so.2.0.0 libasound.so
• libasound.so is used at compile time when you want to compile an application that is dynamically
linked against the library. To do so, you pass the -lLIBNAME option to the compiler, which will look for
a file named lib<LIBNAME>.so. In our case, the compilation option is -lasound and the name of the
library file is libasound.so. So, the libasound.so symlink is needed at compile time;
• libasound.so.2 is needed because it is the SONAME of the library. SONAME stands for Shared Object
Name. It is the name of the library as it will be stored in applications linked against this library. It
means that at runtime, the dynamic loader will look for exactly this name when looking for the shared
library. So this symbolic link is needed at runtime.
$ arm-linux-readelf -d libasound.so.2.0.0
and look at the (SONAME) line. You’ll also see that this library needs the C library, because of the (NEEDED)
line on libc.so.0.
The SONAME mechanism allows the library to be changed without recompiling the applications linked against
it. Let’s say that a security problem is found in the alsa-lib release that provides libasound 2.0.0, and
fixed in the next alsa-lib release, which will now provide libasound 2.0.1.
You can just recompile the library, install it on your target system, change the libasound.so.2 link so that
it points to libasound.so.2.0.1 and restart your applications. And it will work, because your applications
don’t look specifically for libasound.so.2.0.0 but for the SONAME libasound.so.2.
However, it also means that as a library developer, if you break the ABI of the library, you must change the
SONAME: change from libasound.so.2 to libasound.so.3.
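The symlink layout and the upgrade scenario described above can be reproduced with dummy files, which makes it clear that applications follow the SONAME link rather than the real file name (paths under /tmp are for illustration only):

```shell
# Recreate the shared-library naming scheme with empty files.
mkdir -p /tmp/soname-demo && cd /tmp/soname-demo
touch libasound.so.2.0.0                    # real file: initial release
ln -sf libasound.so.2.0.0 libasound.so.2    # SONAME link, used at runtime
ln -sf libasound.so.2.0.0 libasound.so      # link used at compile time (-lasound)
readlink libasound.so.2                     # -> libasound.so.2.0.0

# Upgrade to a fixed release without relinking any application: only the
# SONAME link is retargeted.
touch libasound.so.2.0.1
ln -sf libasound.so.2.0.1 libasound.so.2
readlink libasound.so.2                     # -> libasound.so.2.0.1
```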
Finally, the last step is to tell the configure script where the library is going to be installed. Most configure
scripts consider that the installation prefix is /usr/local/ (so that the library is installed in /usr/local/lib,
the headers in /usr/local/include, etc.). But in our system, we simply want the libraries to be installed in
the /usr prefix, so let’s tell the configure script about this:
For this library, this option may not change anything in the resulting binaries, but for safety, it is always
recommended to make sure that the prefix matches where your library will be running on the target system.
Do not confuse the prefix (where the application or library will run on the target system) with the
location where the application or library is installed on your host while building the root filesystem.
For example, libasound will be installed in $HOME/embedded-linux-labs/thirdparty/target/usr/lib/ be-
cause this is the directory where we are building the root filesystem, but once our target system will be
running, it will see libasound in /usr/lib.
The prefix corresponds to the path in the target system and never on the host. So, one should never pass
a prefix like $HOME/embedded-linux-labs/thirdparty/target/usr, otherwise at runtime, the application or
library may look for files inside this directory on the target system, which obviously doesn’t exist! By default,
most build systems will install the application or library in the given prefix (/usr or /usr/local), but with
most build systems (including autotools), the installation prefix can be overridden, and be different from the
configuration prefix.
We now only have the installation process left to do.
First, let’s make the installation in the staging space:
• $ cp -a staging/usr/lib/libasound.so.2* target/usr/lib
5. Measure the size of the target/usr/lib/libasound.so.2.0.0 library again after stripping. How
many unnecessary bytes were saved?
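The measurement can be scripted. The sketch below demonstrates the idea on a copy of a host binary using the host strip; in the lab you would run arm-linux-strip on target/usr/lib/libasound.so.2.0.0 and compare the sizes shown by ls -l:

```shell
# Copy a binary, record its size, strip it, and print the difference.
cp "$(command -v ls)" /tmp/strip-demo
before=$(stat -c %s /tmp/strip-demo)
strip /tmp/strip-demo 2>/dev/null || true   # keep going if strip is absent
after=$(stat -c %s /tmp/strip-demo)
echo "saved $((before - after)) bytes"
```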
Then, we need to install the alsa-lib configuration files:
$ mkdir -p target/usr/share
$ cp -a staging/usr/share/alsa target/usr/share
Now, we need to adjust one small detail in one of the configuration files. Indeed, /usr/share/alsa/alsa.conf
assumes a UNIX group called audio exists, which is not the case on our very small system. So edit this file,
and replace defaults.pcm.ipc_gid audio by defaults.pcm.ipc_gid 0 instead.
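This edit can also be done non-interactively with sed. The sketch below works on a copy so it can run anywhere; in the lab, the file to edit is target/usr/share/alsa/alsa.conf:

```shell
# Replace the audio group name with GID 0 in a copy of the configuration.
mkdir -p /tmp/alsa-demo
echo 'defaults.pcm.ipc_gid audio' > /tmp/alsa-demo/alsa.conf
sed -i 's/^defaults\.pcm\.ipc_gid audio$/defaults.pcm.ipc_gid 0/' /tmp/alsa-demo/alsa.conf
cat /tmp/alsa-demo/alsa.conf   # -> defaults.pcm.ipc_gid 0
```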
And we’re done with alsa-lib!
Alsa-utils
Download alsa-utils from the ALSA official webpage. We tested the lab with version 1.2.9.
Once uncompressed, we quickly discover that the alsa-utils build system is based on the autotools, so we will
work once again with a regular configure script.
As we’ve seen previously, we will have to provide the prefix and host options and the CC variable:
Now, we should quickly get an error in the execution of the configure script:
checking for libasound headers version >= 1.2.5 (1.2.5)... not present.
configure: error: Sufficiently new version of libasound not found.
Again, we can check in config.log what the configure script is trying to do:
configure:15855: checking for libasound headers version >= 1.2.5 (1.2.5)
configure:15902: arm-linux-gcc -c -g -O2 conftest.c >&5
conftest.c:24:10: fatal error: alsa/asoundlib.h: No such file or directory
Of course, since alsa-utils uses alsa-lib, it includes its header file! So we need to tell the C compiler where
the headers can be found: they are not in the default directory /usr/include/, but in the /usr/include
directory of our staging space. The help text of the configure script says:
CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. -I<include dir> if
you have headers in a nonstandard directory <include dir>
Let’s use it:
$ CPPFLAGS=-I$HOME/embedded-linux-labs/thirdparty/staging/usr/include \
./configure --host=arm-linux --prefix=/usr
Now, it should stop a bit later, this time with the error:
checking for snd_ctl_open in -lasound... no
configure: error: No linkable libasound was found.
The configure script tries to compile an application against libasound (as can be seen from the -lasound
option): alsa-utils uses alsa-lib, so the configure script wants to make sure this library is already installed.
Unfortunately, the ld linker doesn’t find it. So, let’s tell the linker where to look for libraries using the -L
option followed by the directory where our libraries are (in staging/usr/lib). This -L option can be passed
to the linker by using the LDFLAGS at configure time, as told by the help text of the configure script:
LDFLAGS linker flags, e.g. -L<lib dir> if you have libraries in a
nonstandard directory <lib dir>
Let’s use this LDFLAGS variable:
$ LDFLAGS=-L$HOME/embedded-linux-labs/thirdparty/staging/usr/lib \
CPPFLAGS=-I$HOME/embedded-linux-labs/thirdparty/staging/usr/include \
./configure --host=arm-linux --prefix=/usr
Once again, it should fail a bit further down the tests, this time complaining about a missing curses helper
header. curses or ncurses is a graphical framework to design UIs in the terminal. This is only used by
alsamixer, one of the tools provided by alsa-utils, that we are not going to use. Hence, we can just disable
the build of alsamixer.
Of course, if we wanted it, we would have had to build ncurses first, just like we built alsa-lib.
$ LDFLAGS=-L$HOME/embedded-linux-labs/thirdparty/staging/usr/lib \
CPPFLAGS=-I$HOME/embedded-linux-labs/thirdparty/staging/usr/include \
./configure --host=arm-linux --prefix=/usr \
--disable-alsamixer
Then, run the compilation with make. You may hit a final error:
Making all in po
make[2]: Entering directory '/home/tux/embedded-linux-labs/
thirdparty/alsa-utils-1.2.9/alsaconf/po'
mv: cannot stat 't-ja.gmo': No such file or directory
This can be fixed by disabling support for alsaconf too:
$ LDFLAGS=-L$HOME/embedded-linux-labs/thirdparty/staging/usr/lib \
CPPFLAGS=-I$HOME/embedded-linux-labs/thirdparty/staging/usr/include \
./configure --host=arm-linux --prefix=/usr \
--disable-alsamixer --disable-alsaconf
You can now run make again. It should work this time.
Let’s now begin the installation process. Before really installing in the staging directory, let’s install in a
dummy directory, to see what’s going to be installed (this dummy directory will not be used afterwards; it
only serves to verify what will be installed before polluting the staging space):
The DESTDIR variable can be used with all Makefiles based on automake. It lets you override the installation
directory: instead of being installed in the configuration prefix directory, the files are installed in DESTDIR/
configuration-prefix.
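The DESTDIR mechanism is easy to demonstrate with plain shell commands (the /tmp/destdir-demo path is for illustration):

```shell
# With prefix=/usr and DESTDIR=/tmp/destdir-demo, an "installed" file
# lands under /tmp/destdir-demo/usr/... instead of the real /usr.
prefix=/usr
DESTDIR=/tmp/destdir-demo
mkdir -p "${DESTDIR}${prefix}/bin"
echo demo > "${DESTDIR}${prefix}/bin/hello"
ls "${DESTDIR}${prefix}/bin"    # -> hello
```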
Now, let’s see what has been installed in /tmp/alsa-utils/ (run tree /tmp/alsa-utils):
/tmp/alsa-utils/
|-- lib
| `-- udev
| `-- rules.d
| `-- 90-alsa-restore.rules
|-- usr
| |-- bin
| | |-- aconnect
| | |-- alsabat
| | |-- alsaloop
| | |-- alsaucm
| | |-- amidi
| | |-- amixer
| | |-- aplay
| | |-- aplaymidi
| | |-- arecord -> aplay
| | |-- arecordmidi
| | |-- aseqdump
| | |-- aseqnet
| | |-- axfer
| | |-- iecset
| | `-- speaker-test
| |-- sbin
| | |-- alsabat-test.sh
| | |-- alsactl
| | `-- alsa-info.sh
| `-- share
| |-- alsa
| | `-- init
| | |-- 00main
| | |-- ca0106
| | |-- default
| | |-- hda
| | |-- help
| | |-- info
| | `-- test
| |-- locale
| | |-- de
| | | `-- LC_MESSAGES
| | | `-- alsa-utils.mo
| | |-- eu
| | | `-- LC_MESSAGES
| | | `-- alsa-utils.mo
| | |-- fr
| | | `-- LC_MESSAGES
| | | `-- alsa-utils.mo
| | |-- ja
| | | `-- LC_MESSAGES
| | | `-- alsa-utils.mo
| | |-- ka
| | | `-- LC_MESSAGES
| | | `-- alsa-utils.mo
| | `-- sk
| | `-- LC_MESSAGES
| | `-- alsa-utils.mo
| |-- man
| | |-- man1
| | | |-- aconnect.1
| | | |-- alsabat.1
| | | |-- alsactl.1
| | | |-- alsa-info.sh.1
| | | |-- alsaloop.1
| | | |-- amidi.1
| | | |-- amixer.1
| | | |-- aplay.1
| | | |-- aplaymidi.1
| | | |-- arecord.1 -> aplay.1
| | | |-- arecordmidi.1
| | | |-- aseqdump.1
| | | |-- aseqnet.1
| | | |-- axfer.1
| | | |-- axfer-list.1
| | | |-- axfer-transfer.1
| | | |-- iecset.1
| | | `-- speaker-test.1
| | `-- man7
| `-- sounds
| `-- alsa
| |-- Front_Center.wav
| |-- Front_Left.wav
| |-- Front_Right.wav
| |-- Noise.wav
| |-- Rear_Center.wav
| |-- Rear_Left.wav
| |-- Rear_Right.wav
| |-- Side_Left.wav
| `-- Side_Right.wav
`-- var
`-- lib
`-- alsa
30 directories, 59 files
So, we have:
• The udev rules in lib/udev
• The alsa-utils binaries in /usr/bin and /usr/sbin
• Some sound samples in /usr/share/sounds
• The various translations in /usr/share/locale
• The manual pages in /usr/share/man/, explaining how to use the various tools
• Some configuration samples in /usr/share/alsa.
Now, let’s make the installation in the staging space:
libgpiod
Compiling libgpiod
We are now going to use libgpiod (instead of the deprecated interface in /sys/class/gpio), whose executables
(gpiodetect, gpioset, gpioget...) will allow us to drive and manage GPIOs from shell scripts.
Here, we will be using the 2.0.x version of libgpiod.
As we are not starting from a release, we will need to install further development tools to generate some files
like the configure script:
./autogen.sh
Run ./configure --help, and see that this script provides an --enable-tools option which allows building
the userspace executables that we want.
As this project doesn’t have any external library dependency, let’s configure libgpiod in a similar way as
alsa-utils:
$ ./configure --host=arm-linux --prefix=/usr --enable-tools
$ make
Installation to the staging space can be done using the classical DESTDIR mechanism:
And finally, only manually install and strip the files needed at runtime in the target space:
$ cd ..
$ cp -a staging/usr/lib/libgpiod.so.3* target/usr/lib/
$ arm-linux-strip target/usr/lib/libgpiod*
$ cp -a staging/usr/bin/gpio* target/usr/bin/
$ arm-linux-strip target/usr/bin/gpio*
Testing libgpiod
First, connect GPIO PE1 (pin D2 of connector CN14) to ground (pin 7 of connector CN16), as in
the Accessing Hardware Devices lab.
Now, let’s run the gpiodetect command on the target, and check that you can list the various GPIO banks
on your system.
# gpiodetect
gpiochip0 [GPIOA] (16 lines)
gpiochip1 [GPIOB] (16 lines)
gpiochip2 [GPIOC] (16 lines)
gpiochip3 [GPIOD] (16 lines)
gpiochip4 [GPIOE] (16 lines)
gpiochip5 [GPIOF] (16 lines)
gpiochip6 [GPIOG] (16 lines)
gpiochip7 [GPIOH] (16 lines)
gpiochip8 [GPIOI] (12 lines)
gpiochip9 [GPIOZ] (8 lines)
# gpioget -c gpiochip4 1
"1"=inactive
Now, connect your wire to 3V3 (pin 2 of connector CN16). You should now read:
# gpioget -c gpiochip4 1
"1"=active
You see that you didn’t have to configure the GPIO as input. libgpiod did that for you.
If you have an LED and a small breadboard (or M-F breadboard wires), you can also try to drive the
GPIO in output mode. Connect the short pin of the LED to GND, and the long one to the GPIO. Then
the following command should light up the diode:
ipcalc
After practicing with autotools based packages, let’s build ipcalc, which uses Meson as its build system. We
won’t really need this utility in our system, but at least it has no dependencies and therefore offers an easy
way to build our first Meson based package.
So, first install the meson package:
Then, in the main lab directory, let’s check out the sources through git:
To cross-compile with Meson, we need to create a cross file. Let’s create the ../cross-file.txt file with
the below contents:
[binaries]
c = 'arm-linux-gcc'
[host_machine]
system = 'linux'
cpu_family = 'arm'
cpu = 'cortex-a7'
endian = 'little'
We also need to create a special directory for building:
$ mkdir cross-build
$ cd cross-build
We can now have meson create the Ninja build files for us:
$ ninja
$ cd ../..
$ cp staging/usr/bin/ipcalc target/usr/bin/
$ arm-linux-strip target/usr/bin/ipcalc
Note that we could have asked ninja install to strip the executable for us when installing it into the staging
directory. To do this, we would have added a strip entry in the cross file, and passed --strip to Meson.
However, it’s better to keep files unstripped in the staging space, in case we need to debug them.
You can now test that ipcalc works on the target:
# ipcalc 192.168.0.100
Address: 192.168.0.100
Address space: Private Use
Final touch
To finish this lab completely, and to be consistent with what we’ve done before, let’s strip the C library and
its loader too.
First, check the initial size of the binaries:
$ ls -l target/lib
$ chmod +w target/lib/*.so.*
$ arm-linux-strip target/lib/*.so.*
$ ls -l target/lib/
Goals
Compared to the previous lab, we are going to build a more elaborate system, still containing alsa-utils (and
of course its alsa-lib dependency), but this time using Buildroot, an automated build system.
The automated build system will also allow us to add more packages and play real audio on our system,
thanks to the Music Player Daemon (mpd, https://www.musicpd.org/) and its mpc client.
As in a real project, we will also build the Linux kernel from Buildroot, and install the kernel modules in the
root filesystem.
Setup
Go to the $HOME/embedded-linux-labs/buildroot directory.
Now check out the tag corresponding to the latest 2023.02.<n> release (Long Term Support), which we have
tested for this lab.
Several subdirectories and files are visible; the most important ones are:
• boot contains the Makefiles and configuration items related to the compilation of common bootloaders
(GRUB, U-Boot, Barebox, etc.)
• board contains board specific configurations and root filesystem overlays.
• configs contains a set of predefined configurations, similar to the concept of defconfig in the kernel.
• docs contains the documentation for Buildroot.
• fs contains the code used to generate the various root filesystem image formats
• linux contains the Makefile and configuration items related to the compilation of the Linux kernel
• Makefile is the main Makefile that we will use to drive Buildroot: everything works through Makefiles
in Buildroot;
• package is a directory that contains all the Makefiles, patches and configuration items to compile
the user space applications and libraries of your embedded Linux system. Have a look at various
subdirectories and see what they contain;
• system contains the root filesystem skeleton and the device tables used when a static /dev is used;
• toolchain contains the Makefiles, patches and configuration items to generate the cross-compiling
toolchain.
mkdir -p board/bootlin/training
cp ../../kernel/linux/.config board/bootlin/training/linux.config
cp ../../kernel/linux/0001-Custom-DTS-for-Bootlin-lab.patch \
board/bootlin/training/
Configure Buildroot
In our case, we would like to:
• Generate an embedded Linux system for ARM;
• Use an already existing external toolchain instead of having Buildroot generating one for us;
• Compile the Linux kernel and deploy its modules in the root filesystem;
• Integrate BusyBox, alsa-utils, mpd, mpc and evtest in our embedded Linux system;
• Integrate the target filesystem into a tarball
To run the configuration utility of Buildroot, simply run:
$ make menuconfig
Set the following options. Don’t hesitate to press the Help button whenever you need more details about a
given option:
• Target options
– Target Architecture: ARM (little endian)
– Target Architecture Variant: cortex-A7
– Target ABI: EABIhf
– Floating point strategy: VFPv4
• Toolchain
– Toolchain type: External toolchain
– Toolchain: Custom toolchain
– Toolchain path: use the toolchain you built: /home/<user>/x-tools/arm-training-linux-
musleabihf (replace <user> by your actual user name)
– External toolchain gcc version: 12.x
– External toolchain kernel headers series: 6.1.x or later
– External toolchain C library: musl (experimental)
– We must tell Buildroot about our toolchain configuration, so select Toolchain has SSP support?
and Toolchain has C++ support?. Buildroot will check these parameters anyway.
• Kernel
– Enable Linux Kernel
– Set Kernel version to Latest version (6.1)
– Set Custom kernel patches to board/bootlin/training/0001-Custom-DTS-for-Bootlin-lab.patch
– Set Kernel configuration to Using a custom (def)config file
– Set Configuration file path to board/bootlin/training/linux.config
– Select Build a Device Tree Blob (DTB)
– Set In-tree Device Tree Source file names to stm32mp157a-dk1-custom
• Target packages
– Keep BusyBox (default version) and keep the BusyBox configuration proposed by Buildroot;
– Audio and video applications
∗ Select alsa-utils, and in the submenu:
· Only keep speaker-test
∗ Select mpd, and in the submenu:
· Keep only alsa, vorbis and tcp sockets
∗ Select mpd-mpc.
– Hardware handling
∗ Select evtest
This userspace application allows testing events from input devices. This way, we will be able
to test the Nunchuk by getting details about which buttons were pressed.
• Filesystem images
– Select tar the root filesystem
Exit the menuconfig interface. Your configuration has now been saved to the .config file.
$ make
Buildroot will first create a small environment with the external toolchain, then download, extract, configure,
compile and install each component of the embedded system.
All the compilation has taken place in the output/ subdirectory. Let’s explore its contents:
• build, is the directory in which each component built by Buildroot is extracted, and where the build
actually takes place
• host, is the directory where Buildroot installs some components for the host. As Buildroot doesn’t
want to depend on too many things installed in the developer machines, it installs some tools needed
to compile the packages for the target. In our case it installed pkg-config (since the version of the host
may be ancient) and tools to generate the root filesystem image (genext2fs, makedevs, fakeroot).
• images, which contains the final images produced by Buildroot. In our case it contains a tarball of the
filesystem, called rootfs.tar, plus the compressed kernel and Device Tree binary. Depending on the
configuration, there could also be a bootloader binary or a full SD card image.
• staging, which contains the “build” space of the target system. All the target libraries, with headers
and documentation. It also contains the system headers and the C library, which in our case have been
copied from the cross-compiling toolchain.
• target, is the target root filesystem. All applications and libraries, usually stripped, are installed in
this directory. However, it cannot be used directly as the root filesystem, as all the device files are
missing: it is not possible to create them without being root, and Buildroot has a policy of not running
anything as root.
Add our nfsroot directory to the list of directories exported by NFS in /etc/exports.
Also update the kernel and Device Tree binaries used by your board, from the ones compiled by Buildroot
in output/images/.
Boot the board, and log in (root account, no password).
You should now reach a shell.
mkdir -p board/bootlin/training/rootfs-overlay/
Then add a custom startup script, by adding an etc/init.d/S03modprobe executable file to the overlay
directory, with the below contents:
#!/bin/sh
modprobe snd-usb-audio
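The steps above can be done in one go from the shell. A minimal sketch, assuming you run it from Buildroot's top-level source directory (where the board/ directory lives):

```shell
# Create the overlay directory and the startup script in one go.
mkdir -p board/bootlin/training/rootfs-overlay/etc/init.d
cat > board/bootlin/training/rootfs-overlay/etc/init.d/S03modprobe << 'EOF'
#!/bin/sh
modprobe snd-usb-audio
EOF
# The script must be executable for the init system to run it at startup
chmod +x board/bootlin/training/rootfs-overlay/etc/init.d/S03modprobe
```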
Then, go back to Buildroot’s configuration interface:
• System configuration
– Set Root filesystem overlay directories to board/bootlin/training/rootfs-overlay
Build your image again. This should be quick as Buildroot doesn’t need to recompile anything. It will just
apply the root filesystem overlay.
Update your nfsroot directory, reboot the board and check that the snd_usb_audio module is loaded as
expected.
You can run speaker-test to check that audio indeed works.
mkdir -p board/bootlin/training/rootfs-overlay/var/lib/mpd/music
cp ../data/music/* board/bootlin/training/rootfs-overlay/var/lib/mpd/music
Update your root filesystem. Thanks to NFS, you don’t need to restart your system.
Using the ps command, check that the mpd server was started by the system, as implemented by the /etc/
init.d/S95mpd script.
If that’s the case, you are now ready to run mpc client commands to control music playback. First, let’s make
mpd process the newly added music files. Run this command on the target:
# mpc update
You can watch the files getting indexed by displaying the contents of the /var/log/mpd.log file:
Jan 01 00:04 : exception: Failed to open '/var/lib/mpd/state': No such file or directory
Jan 01 00:15 : update: added /2-arpent.ogg
Jan 01 00:15 : update: added /6-le-baguette.ogg
Jan 01 00:15 : update: added /4-land-of-pirates.ogg
Jan 01 00:15 : update: added /3-chronos.ogg
Jan 01 00:15 : update: added /1-sample.ogg
Jan 01 00:15 : update: added /7-fireworks.ogg
Jan 01 00:15 : update: added /5-ukulele-song.ogg
You can also check the list of available files:
# mpc listall
1-sample.ogg
2-arpent.ogg
5-ukulele-song.ogg
3-chronos.ogg
7-fireworks.ogg
6-le-baguette.ogg
4-land-of-pirates.ogg
To play files, you first need to create a playlist. Let’s create a playlist by adding all music files to it:
# mpc add /
You should now be able to start playing the songs in the playlist:
# mpc play
If you find that changing the volume is not possible, you can add a custom configuration for MPD, as
the standard one provided by Buildroot doesn't allow changing the audio playback volume with all the
sound cards we have tested. We will simply add this file to our overlay:
cp ../data/mpd.conf board/bootlin/training/rootfs-overlay/etc/
Run Buildroot again and update your root filesystem. Here again, you don’t need to reboot. It’s sufficient
to restart MPD to make it read the new configuration file:
# /etc/init.d/S95mpd restart
You can now make sure that modifying the volume works.
Later, we will compile and debug a custom MPD client application.
Analyzing dependencies
It’s always useful to understand the dependencies drawn by the packages we build.
First we need to install Graphviz:
$ sudo apt install graphviz
We can then generate the graph:
$ make graph-depends
We can now study the dependency graph:
$ evince output/graphs/graph-depends.pdf
In particular, you can see that adding MPD and its client required compiling Meson for the host, and
in turn Python 3 for the host too. This substantially contributed to the build time.
$(eval $(kernel-module))
$(eval $(generic-package))
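For reference, a complete .mk file for such an out-of-tree kernel module package could look like the sketch below. The NUNCHUK_DRIVER package name and the source path are assumptions for illustration; adapt them to the names used in your lab:

```makefile
# Package name and source location are assumptions; adjust to your setup.
NUNCHUK_DRIVER_VERSION = 1.0
NUNCHUK_DRIVER_SITE = $(HOME)/embedded-linux-labs/hardware/data/nunchuk
NUNCHUK_DRIVER_SITE_METHOD = local

# kernel-module builds the sources against the Buildroot-built kernel;
# generic-package hooks the package into Buildroot's normal build flow.
$(eval $(kernel-module))
$(eval $(generic-package))
```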
Then, configure Buildroot to build your package, run Buildroot and update your root filesystem.
Can you load the nunchuk module now? If everything’s fine, add a line to /etc/init.d/S03modprobe for this
driver, and update your root filesystem once again.
# evtest
No device specified, trying to scan all of /dev/input/event*
Available devices:
/dev/input/event0: pmic_onkey
/dev/input/event1: Logitech Inc. Logitech USB Headset H340 Consumer Control
/dev/input/event2: Logitech Inc. Logitech USB Headset H340
/dev/input/event3: Wii Nunchuk
Select the device event number [0-3]:
Going further
If you finish your lab before the others
• For more music playing fun, you can install the ario or cantata MPD client on your host machine
(sudo apt install ario, sudo apt install cantata), configure it to connect to the IP address of
your target system with the default port, and you will also be able to control playback from your host
machine.
Goals
Compared to the previous lab, we continue to increase the complexity of the system, this time by using the
systemd init system, and by taking advantage of it to add a few extra features, in particular ones that will
be useful for debugging in the next lab.
Setup
Since systemd requires the GNU C library, we are going to make a new Buildroot build in a new working
directory, and using a different cross-compiling toolchain.
So, create the $HOME/embedded-linux-labs/integration directory and go inside it.
Make a new clone of Buildroot from the existing local Git repository, and checkout our bootlin-labs branch:
rm -r board/bootlin/training/rootfs-overlay/etc/init.d/
Buildroot configuration
Configure Buildroot as follows:
• Target options
– Select the same architecture and CPU settings as in the previous lab.
• Toolchain
– Toolchain type: External toolchain
– Toolchain: Bootlin toolchains
This time, we will use a Bootlin ready-made toolchain for glibc, as this is necessary for using
systemd.
– Toolchain origin: Toolchain to be downloaded and installed
– Bootlin toolchain variant: armv7-eabihf glibc bleeding-edge 2022.08-1
– Select Copy gdb server to the Target
• System configuration
– Init system: systemd
– Root filesystem overlay directories: board/bootlin/training/rootfs-overlay
• Kernel
# systemctl status
You can also check all the mounted filesystems and be impressed:
# mount
However, check the mpd.service file for our MPD server. This should help you realize how many options
systemd provides to start and control system services, while keeping the system secure and its resources
under control.
You won't be able to match this level of control and security in a "hand-made" system.
For snd_usb_audio, there are many possible matching values, so it’s not straightforward to be sure which
matched your particular device.
However, you can find in sysfs which MODALIAS was emitted for your device:
# cd /sys/class/sound/card0/device
# ls -la
# cat modalias
usb:v1B3Fp2008d0100dc00dsc00dp00ic01isc01ip00in00
With a bit of patience, you could find the matching line in the modules.alias file.
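To get a feel for how this alias matching works, here is a small, self-contained illustration using shell globbing, which is essentially what modprobe does against modules.alias. The pattern below is a hypothetical simplification for a USB audio-class interface, not the exact alias shipped with snd-usb-audio:

```shell
# The kernel emits a MODALIAS string for the device...
modalias='usb:v1B3Fp2008d0100dc00dsc00dp00ic01isc01ip00in00'
# ...and modules.alias contains glob patterns like this one
# (hypothetical, simplified pattern matching any USB audio interface):
pattern='usb:v*p*d*dc*dsc*dp*ic01isc01ip00in*'
case "$modalias" in
  $pattern) echo "match: the module would be loaded" ;;
  *)        echo "no match" ;;
esac
```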
If you want to see the information sent to Udev by the kernel when a new device is plugged in, here are a
few debugging commands.
First unplug your device and run:
# udevadm monitor
Then plug in your headset again. You will find all the events emitted by the kernel, and with the same string
(with UDEV instead of KERNEL), the time when Udev finished processing each event.
You can also see the MODALIAS values carried by these events:
Here you will recognize our Nunchuk device through its 0x52 address.
# cd 1-0052
# ls -la
# cat modalias
of:NjoystickT(null)Cnintendo,nunchuk
Here the bus is of, meaning Open Firmware, which was the former name of the Device Tree. When an event
was emitted by the kernel with this MODALIAS string, the nunchuk module got loaded by Udev thanks to the
matching alias.
This actually happened when systemd ran the coldplugging operation: at system startup, it asked the kernel
to emit hotplug events for devices already present when the system booted:
[ OK ] Finished Coldplug All udev Devices.
On non-x86 platforms, that’s typically for devices described in the Device Tree. This way, both static and
hotplugged devices can be handled in the same way, using the same Udev rules.
# mpc update
# mpc add /
# mpc play
If it doesn’t, look at the systemd logs in your serial console history. systemd should let you know about the
failing services and the commands to run to get more details.
Setup
We will continue to use the same root filesystem.
Our goal is to compile and debug our own MPD client. This client will be driven by the Nunchuk to switch
between audio tracks, and to adjust the playback volume.
However, this client will be used together with mpc, as it won’t be able to create the playlist and start the
playback. It will just be used to control the volume and switch between songs. So, you need to run mpc
commands first before trying the new client:
mpc update
mpc add /
mpc pause
$ export PATH=$HOME/embedded-linux-labs/integration/buildroot/output/host/bin:$PATH
The compiler complains about undefined references to some symbols in libmpdclient. This is normal, since
we didn’t tell the compiler to link with this library. So let’s use pkg-config to query the pkg-config database
about the list of libraries needed to build an application against libmpdclient:
(Note: output/host/bin has a special pkg-config that automatically knows where to look, so it already knows the right paths.)
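If you have never used pkg-config before, its mechanics are easy to demonstrate with a throwaway .pc file. The demo library name and paths below are made up for illustration only; libmpdclient ships a real libmpdclient.pc file that Buildroot's host pkg-config finds the same way:

```shell
# Create a minimal .pc file for an imaginary "demo" library
mkdir -p /tmp/pc-demo
cat > /tmp/pc-demo/demo.pc << 'EOF'
prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: demo
Description: imaginary library, for illustration only
Version: 1.0
Libs: -L${libdir} -ldemo
Cflags: -I${includedir}
EOF
# Ask pkg-config for the linker flags, as you would for libmpdclient
PKG_CONFIG_PATH=/tmp/pc-demo pkg-config --libs demo
```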
Copy the nunchuk-mpd-client executable to the /root directory of the root filesystem, and then strip it.
Back on the target system, try to run the program:
# /root/nunchuk-mpd-client
ERROR: didn't manage to find the Nunchuk device in /dev/input. Is the Nunchuk driver loaded?
Using strace
Let’s run the program through the strace command to find out why this happens.
You should see that it’s trying to access files that don’t exist. Once you’ve found what’s wrong, fix the code
(or ask your instructor for help if needed), then rebuild the program and run it again:
# /root/nunchuk-mpd-client
ERROR: didn't manage to find the Nunchuk device in /dev/input. Is the Nunchuk driver loaded?
Using ltrace
Let’s run the program through ltrace now. We will be able to see the shared library calls.
Take your time to study the ltrace output. That’s interesting information! Back to our issue, the last lines
of output should make the issue pretty obvious.
Fix the bug in the code, recompile the program, copy it to the target, strip it and start it again.
You should now be able to use the new client, driving the server through the following Nunchuk inputs:
• Joystick up: volume up 5%
• Joystick down: volume down 5%
• Joystick left: previous song
• Joystick right: next song
• Z (big) button: pause / play
• C (small) button: quit client
Have fun with the new client. However, you will soon notice that quitting causes the program to crash with
a segmentation fault. Let's debug this too.
$ arm-linux-gdb nunchuk-mpd-client
gdb starts and loads the debugging information from the nunchuk-mpd-client binary (in the appdev directory)
which has been compiled with -g.
Then, we need to tell gdb where to find our libraries, since they are not present in the default /lib and
/usr/lib directories on your workstation. This is done by setting the gdb sysroot variable (on one line):
Then, use gdb as usual to set breakpoints, look at the source code, run the application step by step, etc.
In our case, we'll just start the program and press the C button to quit, causing the segmentation fault:
(gdb) continue
After the segmentation fault, you can ask for a backtrace to see where this happened:
(gdb) backtrace
This will tell you that the segmentation fault occurred in a function of the libmpdclient, called by our
program. You will also get the number of the line in the program which caused this. This should help you
to find the bug in our application.
Once you found it, don’t fix it yet. We are going to make further experiments around this segmentation fault.
warning: Can't open file /root/nunchuk-mpd-client during file-backed mapping note processing
warning: Can't open file /usr/lib/libc.so.6 during file-backed mapping note processing
warning: Can't open file /usr/lib/libmpdclient.so.2.20 during file-backed mapping note processing
warning: Can't open file /usr/lib/ld-linux-armhf.so.3 during file-backed mapping note processing
In the gdb shell, set the sysroot setting as previously, and then generate a backtrace to see where the program
crashed. You can even see the value of all variables in the different function contexts of your program:
(gdb) bt full
This way, you can have a lot of information about the crash without running the program through the
debugger.
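Note that a process only produces a core dump if the core file size limit of the shell that starts it is non-zero. This is a standard POSIX shell setting, not something specific to this lab; if no core file appeared, raise the limit before running the program again:

```shell
# Allow processes started from this shell to dump core (size unlimited)
ulimit -c unlimited
# Verify the new limit
ulimit -c   # prints: unlimited
```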
If you face trouble, you can check the Dropbear logs on the target:
• prep-debug.sh: script to recompile the program, copy it to the target through SSH, and start it through
the debugger. Open this file and update the target IP and path settings if necessary.
• .vscode/c_cpp_properties.json: settings for the code editor. Modify the paths in this file according
to your setup.
• .vscode/launch.json: these are the settings for remote debugging. Again, open this file, update the
paths, and the target IP address if necessary.
$ code
The first thing to do is to make sure the C/C++ extension from Microsoft (ms-vscode.cpptools) is installed.
Do this using the Extensions vertical tab:
Then click on the nunchuk-mpd-client.c file in the left column to open it in VS Code.
Now, start by compiling your program from VS Code, copying it to the target, and running it through the
debugger by using the Terminal → Run Build Task... menu entry.
Last but not least, you can start debugging the program by clicking on the Run and Debug tab, and then on
the gdb (Launch) at the top:
In the debug console, you should see that debugging has started. The bottom line of the interface should
turn orange too:
Then, start using the Nunchuk to control playback, and when you try to quit with the C button, VS Code
should now see the segmentation fault:
You can then look at variables, the call stack, browse the code...
To stop debugging, you should use Run → Stop Debugging.
By studying the code, you should eventually find that the segmentation fault is caused by the call
to free() in the test for the C button. Remove this line, save the file through the File menu (otherwise
nothing will change), then compile and run the application again. This time, there should be no more
segmentation fault when you hit the C button.
If you are ahead of time, don’t hesitate to spend more time with VS Code, for example to add breakpoints
and execute the program step by step.
perf report
See the time spent in various kernel ([k]) and userspace ([.]) functions.
Now, let's profile the whole system. First, make sure that the system is currently playing audio. Then SSH
to your board and run perf top (it works better through SSH) to see live information about the kernel and
userspace functions consuming most CPU time.
This is interactive, but hard to analyze. You can also run perf record for about 30 seconds, followed by
perf report to have a useful summary of system wide activity for a substantial amount of time.
This was a very brief start at practising with perf, which offers many more possibilities than we could see
here.
What to remember
During this lab, we learned that...
• It’s easy to study the behavior of programs and diagnose issues without even having the source code,
thanks to strace, ltrace and perf.
• You can use perf as a system wide profiler too.
• You can leave a small gdbserver program (about 400 KB) on your target that lets you debug target
applications, using a standard gdb debugger on the development host, or a graphical IDE such as VS
Code.
• It is fine to strip applications and binaries on the target machine, as long as the programs and libraries
with debugging symbols are available on the development host.
• Thanks to core dumps, you can know where a program crashed, without having to reproduce the issue
by running the program through the debugger.
NUNCHUK_MPD_CLIENT_VERSION = 1.0
NUNCHUK_MPD_CLIENT_SITE = $(HOME)/embedded-linux-labs/appdev/nunchuk-mpd-client-1.0
NUNCHUK_MPD_CLIENT_SITE_METHOD = local
NUNCHUK_MPD_CLIENT_DEPENDENCIES = host-pkgconf libmpdclient
$(eval $(meson-package))
All you have to do now is to enable the nunchuk-mpd-client package in Buildroot’s configuration, run make,
update the root filesystem and check on the target that /usr/bin/nunchuk-mpd-client exists and runs fine.
All this was pretty straightforward, wasn’t it? Meson rocks!
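For reference, the Config.in file that accompanies this .mk could look like the sketch below. The option name and help text follow standard Buildroot conventions but are assumptions, not the exact file from the lab data:

```kconfig
config BR2_PACKAGE_NUNCHUK_MPD_CLIENT
	bool "nunchuk-mpd-client"
	select BR2_PACKAGE_LIBMPDCLIENT
	help
	  MPD client controlled by a Wii Nunchuk
	  (sketch; adapt to your lab's actual package).
```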
Congratulations, you’ve reached the end of all our labs. Try to look back, and see how much experience
you’ve gained in these last days.