The Embedded Project Cookbook: A Step-by-Step Guide for Microcontroller Projects

John T. Taylor, Covington, GA, USA
Wayne T. Taylor, Golden, CO, USA
Copyright © 2024 by The Editor(s) (if applicable) and The Author(s), under
exclusive license to APress Media, LLC, part of Springer Nature
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or
part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way,
and transmission or information storage and retrieval, electronic adaptation, computer software,
or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every occurrence of a trademarked name, logo, or image, we use the names, logos, and images only in an editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if
they are not identified as such, is not to be taken as an expression of opinion as to whether or not
they are subject to proprietary rights.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal
responsibility for any errors or omissions that may be made. The publisher makes no warranty,
express or implied, with respect to the material contained herein.
Managing Director, Apress Media LLC: Welmoed Spahr
Acquisitions Editor: Melissa Duffy
Development Editor: James Markham
Editorial Project Manager: Gryffin Winkler
Cover designed by eStudioCalamar
Cover image designed by Tom Christensen from Pixabay
Distributed to the book trade worldwide by Springer Science+Business Media New York, 1
New York Plaza, Suite 4600, New York, NY 10004-1562, USA. Phone 1-800-SPRINGER, fax (201)
348-4505, e-mail [email protected], or visit www.springeronline.com. Apress Media,
LLC is a California LLC and the sole member (owner) is Springer Science + Business Media
Finance Inc (SSBM Finance Inc). SSBM Finance Inc is a Delaware corporation.
For information on translations, please e-mail [email protected]; for reprint,
paperback, or audio rights, please e-mail [email protected].
Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook
versions and licenses are also available for most titles. For more information, reference our Print
and eBook Bulk Sales web page at http://www.apress.com/bulk-sales.
Any source code or other supplementary material referenced by the author in this book is available to readers on GitHub. For more detailed information, please visit https://www.apress.com/gp/services/source-code.
If disposing of this product, please recycle the paper
To Sally, Bailey, Kelly, and Todd.
—J.T.
Table of Contents
About the Authors ... xiii
Preface ... xvii

Chapter 1: Introduction ... 1
  Software Development Processes ... 2
  Software Development Life Cycle ... 5
  Outputs and Artifacts ... 7
  What You’ll Need to Know ... 8
  Coding in C and C++ ... 9
  What Toys You Will Need ... 9
  Regulated Industries ... 10
  What Is Not Covered ... 11
  Conclusion ... 12

Chapter 2: Requirements ... 13
  Formal Requirements ... 14
  Functional vs. Nonfunctional ... 16
  Sources for Requirements ... 16
  Challenges in Collecting Requirements ... 18
  Exiting the Requirements Step ... 19
  GM6000 ... 19
  Summary ... 22

Chapter 3: Analysis ... 25
  System Engineering ... 26
    GM6000 System Architecture ... 26
  Software Architecture ... 28
  Moving from Inputs to Outputs ... 30
    Hardware Interfaces ... 31
    Performance Constraints ... 32
    Programming Languages ... 34
    Subsystems ... 35
    Subsystem Interfaces ... 40
    Process Model ... 42
    Functional Simulator ... 45
    Cybersecurity ... 48
    Memory Allocation ... 49
    Inter-thread and Inter-process Communication ... 50
    File and Directory Organization ... 51
    Localization and Internationalization ... 52
  Requirement Traceability ... 54
  Summary ... 56

Chapter 5: Preparation ... 77
  GitHub Projects ... 78
  GitHub Wiki ... 79
  Continuous Integration Requirements ... 82
  Jenkins ... 84
  Summary ... 86

Chapter 6: Foundation ... 89
  SCM Repositories ... 90
  Source Code Organization ... 90
  Build System and Scripts ... 92
  Skeleton Applications ... 94
  CI “Build-All” Script ... 94
  Software Detailed Design ... 95
  Summary ... 98

  Separation of Concerns ... 238
  Polymorphism ... 256
  Dos and Don’ts ... 263
  Summary ... 265

Appendix G: RATT ... 437
Appendix H: GM6000 Requirements ... 449

Index ... 671
About the Authors
John Taylor has been an embedded developer
for over 30 years. He has worked as a firmware
engineer, technical lead, system engineer,
software architect, and software development
manager for companies such as Ingersoll
Rand, Carrier, Allen-Bradley, Hitachi Telecom,
Emerson, AMD, and several startup companies.
He has developed firmware for products
that include HVAC control systems, telecom
SONET nodes, IoT devices, microcode for
communication chips, and medical devices.
He is the co-author of five US patents and holds a bachelor’s degree in
mathematics and computer science.
About the Technical Reviewer
Jeff Gable is an embedded software consultant
for the medical device industry, where
he helps medical device startups develop
bullet-proof software to take their prototypes
through FDA submission and into production.
Combining his expertise in embedded
software, FDA design controls, and practical
Agile methodologies, Jeff helps existing
software teams be more effective and efficient
or handles the entire software development
and documentation effort for a new device.
Jeff has spent his entire career doing safety-critical product
development in small, cross-disciplinary teams. After stints in aerospace,
automotive, and medical, he founded Gable Technology, Inc. in 2019 to
focus on medical device startups. He also co-hosts the Agile Embedded
podcast, where he discusses how device developers don't have to choose
between time-to-market and quality.
In his spare time, Jeff enjoys rock climbing, woodworking, and
spending time with his wife and two small children.
Preface
My personal motivation for writing this cookbook is so that I never have to
start an embedded project from scratch again. I am tired of reinventing the
wheel every time I move to a new project, or new team, or new company.
I have started over many times, and every time I find myself doing all the
same things over again. This, then, is a cookbook for all the “same things”
I do—all the same things that I inevitably have to do. In a sense, these are
my recipes for success.
On my next “new project,” I plan to literally copy and paste from the
code and documentation templates I have created for this book. And for
those bits that are so different that a literal copy and paste won’t work, I
plan to use this cookbook as a “reference design” for generating the new
content. For example, suppose for my next project I need a hash table
(i.e., a dictionary) that does not use dynamic memory allocation. My
options would be

1. Copy and paste an existing, proven implementation into the new project.
2. Adapt a reference design to the needs of the new project.
3. Write it from scratch.
For me, the perfect world choice is option one—copy, paste into a new
file, and then “save as” with a new file name. Option two would be to use
the material in this book as a reference design. Start with one of the code
or documentation templates and adapt it to the needs of the new project.
And option three would be the last resort. Been there; done that; don’t
want to do it ever again.
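On the hash table example specifically: the following is an illustrative sketch of what a no-dynamic-allocation dictionary can look like, not the actual code from this book. All storage is fixed at compile time, so the container never touches the heap.

```cpp
#include <cstddef>
#include <cstring>

// Fixed-capacity dictionary using open addressing with linear probing.
// All storage lives inside the object; no heap allocation ever occurs.
template <typename V, std::size_t N>
class StaticDict {
public:
    // Returns false when the table is full instead of allocating more space.
    bool insert(const char* key, const V& value) {
        for (std::size_t i = 0; i < N; ++i) {
            std::size_t idx = (hash(key) + i) % N;
            if (!used_[idx] || std::strcmp(keys_[idx], key) == 0) {
                used_[idx]   = true;
                keys_[idx]   = key;   // caller owns the key's storage
                values_[idx] = value;
                return true;
            }
        }
        return false;
    }

    // Returns nullptr when the key is absent.
    const V* find(const char* key) const {
        for (std::size_t i = 0; i < N; ++i) {
            std::size_t idx = (hash(key) + i) % N;
            if (!used_[idx]) return nullptr;   // hit an empty slot: not present
            if (std::strcmp(keys_[idx], key) == 0) return &values_[idx];
        }
        return nullptr;
    }

private:
    static std::size_t hash(const char* s) {
        std::size_t h = 5381;                  // djb2 string hash
        while (*s) { h = h * 33 + static_cast<unsigned char>(*s++); }
        return h;
    }

    bool        used_[N]   = {};
    const char* keys_[N]   = {};
    V           values_[N] = {};
};
```

A production version would also handle removal and tombstones; the point here is only that capacity is a template parameter and exhaustion is an explicit, testable failure mode rather than a hidden allocation.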
CHAPTER 1
Introduction
The purpose of this cookbook is to enable the reader to never have to
develop a microcontroller software project from scratch. By a project,
I mean everything that is involved in releasing a commercially viable
product that meets industry standards for quality. A project, therefore,
includes noncode artifacts such as software processes, software
documentation, continuous integration, design reviews and code reviews,
etc. Of course, source code is included in this as well. And it is production-
quality source code; it incorporates essential middleware such as an OS
abstraction layer (OSAL), containers that don’t use dynamic memory,
inter-thread communication modules, a command-line console, and
support for a functional simulator.
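To give a flavor of what an OS abstraction layer involves, here is a minimal sketch. The names are hypothetical, not the book’s actual middleware API: application code targets the OSAL interface, a host build (shown here) maps it to the C++ standard library, and a target build would map the same interface onto the RTOS primitives.

```cpp
#include <functional>
#include <mutex>
#include <thread>

// Hypothetical OSAL: platform-neutral wrappers the application codes against.
namespace osal {

class Mutex {
public:
    void lock()   { m_.lock(); }
    void unlock() { m_.unlock(); }
private:
    std::mutex m_;  // an RTOS build would hold e.g. a semaphore handle instead
};

// RAII guard so a critical section cannot forget to unlock.
class Lock {
public:
    explicit Lock(Mutex& m) : m_(m) { m_.lock(); }
    ~Lock() { m_.unlock(); }
private:
    Mutex& m_;
};

// Fire-and-forget thread creation; an RTOS build would call its task-create API.
inline void startThread(std::function<void()> entry) {
    std::thread(std::move(entry)).detach();
}

} // namespace osal
```

Because the application never names the RTOS directly, the same application code links against either implementation, which is what makes a functional simulator on the host practical.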
The book is organized in the approximate chronological order of a
software development life cycle. In fact, it begins with a discussion of the
software development process and the software development life cycle.
However, the individual chapters are largely independent and can stand
alone. Or, said another way, you are encouraged to navigate the chapters in
whatever order seems most interesting to you.
The more processes and steps you add, the more sophisticated your development process becomes—and, if you add the right ones, the better the results. Figure 1-3 illustrates this continuum.
want to do.” And, yes, it’s not fun writing architecture documentation or automated unit tests and the like, but it’s the difference between being a hacker and a professional, between spit and baling wire and craftsmanship.
• Planning
• Construction
• Release
These three stages are waterfall in nature. That is, you typically don’t want to start the construction stage until the planning stage is complete. That said, work within each stage is very much iterative, so if new requirements (planning) arise in the middle of coding (construction), they can be accommodated in the next iteration through the construction stage. To some, in this day of Agile development, it might seem like a step backward to employ even a limited waterfall approach, but I would make the following counterarguments:
grade” and “production quality.” That is, everything in this book has been
used and incorporated in real-life products. Nevertheless, there are some
limitations to the GM6000 project:
1. Paraphrased from John W. Berman: “There’s never enough time to do it right, but there’s always enough time to do it over.”
• Target hardware:
  • STMicroelectronics’ NUCLEO-F413ZH development board
Regulated Industries
Most of my early career was spent working in domains with no or very
minimal regulatory requirements. But when I finally did work on medical
devices, I was pleased to discover that the best practices I had accumulated
over the years were reflected in the quality processes required by the FDA
or EMA. Consequently, the processes presented here are applicable to
both nonregulated and regulated domains. Nevertheless, if you’re working
in a regulated industry, you should compare what is presented here against
your specific circumstances and then make choices about what to adopt,
exclude, or modify to fit your project’s needs.
Additionally, while they are worthy topics for discussion, this book
only indirectly touches on the following:
• Multithreading
• Real-time scheduling
• Interrupt handling
• Algorithm design
This is not to say that the framework does not support multithreading or interrupt handling or real-time scheduling. Rather, I didn’t consider this book the right place for those discussions. To extend the cookbook metaphor a little further, I consider those topics a list of ingredients. And while ingredients are important, I’m more interested here in the recipes that detail how to prepare, combine, and bake it all together.
Conclusion
Finally, it is important to understand that this book is about how to
productize software, not a book on how to evaluate hardware or create a
proof of concept. In my experience, following the processes described in
this book will provide you and your software team with the tools to achieve
a high-quality, robust product without slowing down the project timeline.
Again, for a broader discussion of why I consider these processes best
practices, I refer you to Patterns in the Machine,2 which makes the case for
the efficiency, flexibility, and maintainability of many of these approaches
to embedded software development.
2. John Taylor and Wayne Taylor. Patterns in the Machine: A Software Engineering Guide to Embedded Development. Apress, 2021.
CHAPTER 2
Requirements
Collecting requirements is the first step in the planning stage. This is where
you and your team consolidate the user and business needs into problem
statements and then define in rough terms how that problem will be
solved. Requirements articulate product needs like
• Functions
• Capabilities
• Attributes
• Capacities
These written requirements become the inputs for the second step in
the planning phase. Most of the time, though, the analysis step needs to
start before the requirements have all been collected and agreed upon.
Consequently, don’t burden yourself with the expectation that all the
requirements need to be defined before exiting the requirements step.
Rather, identify an initial set of requirements with your team as early
as possible to ensure there’s time to complete the analysis step. The
minimum deliverable or output for the requirements step is a draft set of
requirements that can be used as input for the analysis step.
Formal Requirements
Typically, requirements are captured in table form or in a database. If the content of your requirements is presented in natural language or story form, it is often referred to as a product specification. In my experience, a product specification is a better way to communicate an overall understanding of the requirements; however, a list of formal requirements is a more efficient way to track work items and
• Availability
• Compatibility
• Reliability
• Maintainability
• Manufacturability
• Regulatory
• Scalability
The initial set of requirements coming out of the planning stage will
be a mix of MRS, PRS, and SRS requirements. This is to be expected as no
development life cycle is truly waterfall.
GM6000
Table 2-1 is the list of initial requirements for a hypothetical heater
controller that I like to call the GM6000. This list is intended to illustrate
the kinds of requirements that are available when you start to develop the
software architecture in the analysis step. As you make progress on the
software architecture, additional requirements will present themselves,
and you will need to work with your extended team to get the new
requirements included in the MRS or PRS requirements documents.
1. Commercial release, where Rel 1.0 is the initial product release.
MR-107  User interface  (Rel 1.0)
The DHC unit shall support display, LEDs, and user inputs (e.g., physical buttons, keypad membrane, etc.). The arrangement of the display and user inputs can be different between heater enclosures.

MR-108  User actions  (Rel 1.0)
The DHC display, LEDs, and user inputs shall allow the user to do the following:
• Turn the heater on and off
• Set the maximum fan speed
• Specify the temperature set point

MR-109  User information  (Rel 1.0)
The DHC display and LEDs shall provide the user with the following information:
• Current temperature
• DHC on/off state
• Active heating state
• Fan on/off state
• Alerts and failure conditions

PR-100  Sub-assemblies  (Rel 1.0)
The DHC heater enclosure shall contain the following sub-assemblies:
• Control Board (CB)
• Heating Element (HE)
• Display and User Inputs (DUI)
• Blower Assembly (BA)
• Power Supply (PS)
• Temperature Sensor (TS)

PR-101  Wireless module  (Rel 2.0)
The DHC heater enclosure shall contain the following sub-assembly:
• Wireless Module (WM)

PR-103  Heater safety  (Rel 1.0)
The Heating Element (HE) sub-assembly shall contain a hardware temperature protection circuit that forces the heating source off when it exceeds the designed safety limits.

PR-105  Heater element interface  (Rel 1.0)
The Heating Element (HE) sub-assembly shall have a proportional heating output interface to the Control Board (CB).

PR-106  Blower assembly interface  (Rel 1.0)
The Blower Assembly (BA) sub-assembly shall have a proportional speed control interface to the Control Board (CB).

PR-107  Temperature sensor  (Rel 1.0)
The Temperature Sensor (TS) sub-assembly shall use a thermistor for measuring space temperature.
Summary
The goal of the requirements step is to identify the problem statement
presented by the user and business needs. In addition, a high-level
solution is identified and proposed for the problem statement. Both the
problem statement and the high-level solution are captured in the form of
formal requirements.
INPUTS
• User needs
• Business needs
OUTPUTS
CHAPTER 3
Analysis
In the analysis step of the planning stage, you will create three artifacts:
• System architecture (SA)—The system architecture
document describes the discrete pieces or units of
functionality that will be tied together to make the
product. It consists of diagrams with boxes and lines
whose semantics usually mean “contains” and “is
connected to.”
• Software architecture (SWA) documents—The software
architecture, on the other hand, provides the designs
for the system components that describe how each unit
works. These designs usually contain diagrams that are
more sophisticated in that they may be structural or
behavioral and their lines and boxes often have more
particular meanings or rules associated with them (like
UML diagrams).
• Requirements trace matrix—The requirements trace
matrix is generally a spreadsheet that allows you to
map each requirement to the code that satisfies it and
the tests that validate it.
System Engineering
I will note here that system engineering is not a software role. And while it
is not unusual for a software engineer to fill the role of the system engineer,
a discussion about the intricacies of developing the system architecture is
outside the scope of this book. But, as it is an essential input to the software
architecture document, Appendix I, “GM6000 System Architecture,”
provides an example of a system architecture document for the GM6000.
Enclosure: The box that contains the product. The enclosure should be IP51 rated.

Control Board (CB): The board that contains the microcontroller that runs the heater controller. The CB contains circuits and other chips as needed to support the microcontroller unit (MCU) and software.

Display and User Inputs (DUI): A separate board that contains the display, buttons, and LEDs used for interaction with the user. This DUI can be located anywhere within the enclosure and is connected to the CB via a wire harness.

…
Software Architecture
There is no canonical definition of software architecture. This cookbook
defines software architecture as
Identifying the solution at a high level and defining
the rules of engagement for the subsequent design
and implementation steps.
• Hardware interfaces.
• Performance constraints.
• Programming languages.
• Subsystem interfaces.
• Process model.
• Functional simulator.
• Cybersecurity.
• Memory allocation.
Nevertheless, the inputs I provide are sufficient to create the first draft
of the software architecture.
The following sections discuss the creation of the content of the
software architecture document. While you are performing the analysis
and making architectural decisions, you’ll discover that there are still a lot of unknowns and open questions. Some of these questions may be answered
during the planning stage. Some are detailed design questions that will be
answered in the construction stage. Either way, keep track of these open
questions because they will eventually have to be answered.
Hardware Interfaces
Create a hardware block diagram in relation to the microcontroller.
That is, outline the inputs and outputs to and from the microcontroller
(see Figure 3-2). Whenever possible, omit details that are not critical to
understanding the functionality of the inputs and outputs. For example,
simply identify that there will be “external serial data storage.” Do not call
out a specific chip, storage technology (e.g., flash vs. EEPROM), or specific
type of serial bus.
[Figure 3-2. Hardware block diagram centered on the MCU, showing its connections to the heater, blower, temperature sensor, console, wireless module, and programming/debugger interfaces. A legend distinguishes bidirectional signals, analog inputs, serial buses, GPIO, and PWM.]
…

Data Storage: Serial persistent data storage for saving configuration, user settings, etc.

…
Performance Constraints
For all of the identified hardware interfaces, you will want to make an
assessment of real-time performance and bandwidth usage. Also you
should make performance assessments for any applications, driver stacks,
crypto routines, etc., that will require significant CPU usage. Since the
specifics of these interfaces are still unknown, the assessment will be an
approximation—that is, an order-of-magnitude estimate—rather than a precise value. Note that I use the term “real time” to describe contexts
where stimuli must be detected and reacted to in less than one second.
Events and actions that occur slower than 1 Hz can be achieved without
special considerations.
The following are some excerpts from the software architecture
document in Appendix J, “GM6000 Software Architecture,” that illustrate
the performance analysis.
Display
The microcontroller unit (MCU) communicates with the display controller
via a serial bus (e.g., SPI or I2C). There is a time constraint in that the physical
transfer time for an entire screen’s worth of pixel data (including color data)
must be fast enough to ensure a good user experience. There is also a RAM constraint in the MCU with respect to the display: there must be at least one off-screen frame buffer that can hold an entire screen’s worth of pixel data. The size of the pixel data is a function of the display’s resolution times its color depth. The assessments and recommendations are as follows:
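The recommendation details are omitted here, but the arithmetic behind such an assessment is easy to sketch. The resolution, color depth, and bus clock below are illustrative assumptions, not GM6000 design values:

```cpp
#include <cstdint>

// Back-of-the-envelope display assessment: frame buffer RAM and the raw
// wire time to push one full screen.  All numbers are assumptions.
constexpr uint32_t kWidth        = 320;      // pixels (assumed)
constexpr uint32_t kHeight       = 240;      // pixels (assumed)
constexpr uint32_t kBitsPerPixel = 16;       // RGB565 color depth (assumed)
constexpr uint32_t kSpiHz        = 8000000;  // 8 MHz serial clock (assumed)

// One off-screen frame buffer: resolution times color depth.
constexpr uint32_t kFrameBufferBytes = kWidth * kHeight * kBitsPerPixel / 8;

// Raw transfer time for a full screen, ignoring protocol overhead.
constexpr uint32_t kFrameBits  = kWidth * kHeight * kBitsPerPixel;
constexpr uint32_t kTransferMs =
    static_cast<uint32_t>(static_cast<uint64_t>(kFrameBits) * 1000 / kSpiHz);

static_assert(kFrameBufferBytes == 153600, "150 KB of RAM per frame buffer");
static_assert(kTransferMs == 153, "roughly 154 ms per full-screen transfer");
```

At these assumed numbers, a full-screen update takes roughly 150 ms and each buffer costs 150 KB of RAM—exactly the kind of result that drives a recommendation toward a faster bus, a smaller display, or partial-screen redraws.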
Temperature Sensor
The space temperature must be sampled and potentially filtered before
being used as an input to the control algorithm. However, controlling
space temperature is a relatively slow system (i.e., much slower than 1 Hz).
Consequently, the assessments and recommendations are as follows:
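As an illustration of the sampling-and-filtering step, a first-order exponential moving average is often adequate for a signal as slow as space temperature. The coefficient here is an arbitrary placeholder, not a tuned GM6000 value:

```cpp
// First-order IIR (exponential moving average) filter for a slowly
// changing signal such as space temperature.
class TempFilter {
public:
    // alpha in (0,1]: larger tracks faster, smaller smooths more.
    explicit TempFilter(float alpha) : alpha_(alpha) {}

    // Feed one raw sample; returns the filtered estimate.
    float update(float sample) {
        if (!primed_) { state_ = sample; primed_ = true; }  // seed on first sample
        else          { state_ += alpha_ * (sample - state_); }
        return state_;
    }

private:
    float alpha_;
    float state_  = 0.0f;
    bool  primed_ = false;
};
```

For example, with alpha = 0.5, feeding samples of 20.0 and then 22.0 yields a filtered value of 21.0.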
Threading
A Real-Time Operating System (RTOS) with many threads will be
used. Switching between threads—that is, a context switch—requires a
measurable amount of time. This becomes important when there are sub-
millisecond timing requirements and when looking at overall CPU usage.
The RTOS also adds timing overhead for maintaining its system tick timer,
which is typically interrupt based. The assessments and recommendations
are as follows:
Programming Languages
Selecting a programming language may seem like a trivial decision, but
it still needs to be an explicit decision. The experience of the developers,
regulatory considerations, performance, memory management, security,
tool availability, licensing issues, etc., all need to be considered when
selecting the programming language. The language choice should be
documented in the software architecture document as well as in the
Software Development Plan.
In most cases, I prefer to use C++. I know that not everyone agrees with
me on this, but I make the case for the advantages of using C++ in Patterns
in the Machine. I mention it here to say that you should not exclude C++
from consideration simply because you are working in a resource-
constrained environment.
Subsystems
You will want to decompose the software for your project into components
or subsystems. The number and granularity of the components are a
choice that you will have to make. Some things to consider when defining
subsystems are as follows:
The GM6000 control board software is broken down into the following
subsystems. In Figure 3-3, the subsystems in the dashed boxes represent
future or anticipated functionality.
[Figure 3-3. GM6000 subsystem diagram. The application-level subsystems sit above the OSAL, drivers, graphics library, OS, BSP, and boot loader, which in turn sit on the hardware.]
Application
The application subsystem contains the top-level business logic for the entire
application. This includes functionality such as
BSP
The Board Support Package (BSP) subsystem is responsible for abstracting
the details of the microcontroller unit (MCU) datasheet. For example, it is
responsible for
Diagnostics
The diagnostics subsystem is responsible for monitoring the software’s
health, defining the diagnostics logic, and self-testing the system. This
includes features such as power on self-tests and metrics capture.
Drivers
The driver subsystem is the collection of driver code that does not reside in
the BSP subsystem. Drivers that directly interact with hardware are required
to be separated into layers. There should be at least three layers:
Graphics Library
The graphics library subsystem is responsible for providing graphic
primitives, fonts, window management, widgets, etc. The expectation is that
the graphic library will be third-party software. The minimum requirements
for the graphics library are as follows:
Heating
The heating subsystem is responsible for the closed loop space temperature
control. This is the code for the heat control algorithm.
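As a sketch of what such a control algorithm might look like, here is a minimal proportional-integral (PI) loop with output clamping and simple anti-windup. The gains, units, and limits are placeholders for illustration, not GM6000 design values:

```cpp
// Illustrative PI controller mapping temperature error to a 0-100%
// proportional heating output.
class HeatController {
public:
    HeatController(float kp, float ki) : kp_(kp), ki_(ki) {}

    // setpoint/measured in degrees, dt in seconds; returns percent output.
    float update(float setpoint, float measured, float dt) {
        float error = setpoint - measured;
        integral_  += error * dt;
        float out   = kp_ * error + ki_ * integral_;
        if (out < 0.0f || out > 100.0f) {
            integral_ -= error * dt;             // anti-windup: undo this step
            out = (out < 0.0f) ? 0.0f : 100.0f;  // clamp to the actuator range
        }
        return out;
    }

private:
    float kp_, ki_;
    float integral_ = 0.0f;
};
```

The clamp matters because the heating element interface is a bounded proportional output; without anti-windup, the integral term would accumulate during saturation and cause overshoot when the error finally closes.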
Persistent Storage
The persistent storage subsystem provides the framework, interfaces, data
integrity checks, etc., for storing and retrieving data that is stored in local
persistent storage. The persistent storage paradigm is a RAM-cached model.
The RAM-cached model is as follows:
UI
The user interface (UI) subsystem is responsible for the business logic and
interaction with end users of the unit. This includes the LCD display screens,
screen navigation, consuming button inputs, LED outputs, etc. The UI
subsystem has a hard dependency on the graphic library subsystem. This
hard dependency is acceptable because the graphic library is platform
independent.
Subsystem Interfaces
This section is where you define how the various subsystems, components,
modules, and drivers will interact with each other. For example, it
addresses the following questions:
Interfaces
The preferred, and primary, interface for sharing data between subsystems
will be done via the data model pattern. The secondary interface will be
message-based inter-thread communications (ITC). Which mechanism
is used to share data will be determined on a case-by-case basis with the
preference being to use the data model. However, the decision can be “both”
because both approaches can co-exist within a subsystem.
[Figure: Subsystem block diagram showing the Application, Heating, Functional APIs, Diagnostics, UI, System Services, Data Model, Alert Mgmt, Crypto, Persistent Storage, Software Update, Logging, Sensor Comms, Graphics Library, Drivers, and Console subsystems and their relationships.]
Process Model
The following questions need to be answered in the architecture
document:
Process Model
The software will be implemented as a multithreaded application using
real-time preemptive scheduling. The preemptive scheduling provides for the
following:
Thread Priorities
The application shall be designed such that the relative thread priorities
between individual threads do not matter with respect to correctness.
Correctness in this context means the application would still function,
albeit sluggishly, and not crash if all the threads had the same priority. The
exception to this rule is for threads that are used exclusively as deferred
interrupt handlers.
Data Integrity
Data that is shared between threads must be implemented in a manner to
ensure data integrity. That is, read, write, read-modify-write operations
must be atomic with respect to other threads accessing the data. The
following is a list of allowed mechanisms for sharing data across threads:
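For example, a read-modify-write on a shared counter can be made atomic with a mutex. This sketch uses the standard C++ thread and mutex primitives rather than any specific RTOS or the CPL library:

```cpp
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

// A shared counter whose read-modify-write is made atomic with a mutex.
// Sketch only: an RTOS build would use the OSAL's mutex primitive instead
// of std::mutex.
class SharedCounter {
public:
    void increment() {
        std::lock_guard<std::mutex> lock(m_);  // enter critical section
        ++count_;                              // protected read-modify-write
    }
    long get() const {
        std::lock_guard<std::mutex> lock(m_);
        return count_;
    }
private:
    mutable std::mutex m_;
    long               count_ = 0;
};

// Hammer the counter from several threads; with the mutex in place,
// no increments are lost.
long runCounterDemo(int numThreads, int incrementsPerThread) {
    SharedCounter counter;
    std::vector<std::thread> threads;
    for (int t = 0; t < numThreads; ++t) {
        threads.emplace_back([&counter, incrementsPerThread] {
            for (int i = 0; i < incrementsPerThread; ++i) {
                counter.increment();
            }
        });
    }
    for (auto& th : threads) {
        th.join();
    }
    return counter.get();
}
```

Without the lock, the unprotected `++count_` would occasionally lose increments under contention, which is exactly the class of defect the data-integrity rule is meant to prevent.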
If the MCU is used, it must be clearly documented in the detailed design. The
following guidelines shall be followed for sharing data between a thread
and ISRs:
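One common mechanism is a single-producer, single-consumer handoff in which the ISR publishes a value and sets a flag. The sketch below models this on a host with C++ atomics (the ISR is simulated by a plain function call); on a single-core MCU, briefly disabling interrupts around the thread-side access is an equivalent alternative:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Single-producer/single-consumer handoff between an ISR and a thread.
// The ISR writes the sample, then publishes it with a release store; the
// thread observes the flag with an acquire load before reading the sample.
// Note: if a new sample arrives while the thread is mid-consume, it can be
// dropped; a real driver would use a queue for lossless delivery.
static std::atomic<bool> sampleReady{false};
static volatile uint32_t latestSample = 0;

// Runs in interrupt context (simulated here as a plain function call).
void adcIsrHandler(uint32_t raw) {
    latestSample = raw;
    sampleReady.store(true, std::memory_order_release);
}

// Runs in thread context; returns true if a fresh sample was consumed.
bool tryConsumeSample(uint32_t& out) {
    if (sampleReady.load(std::memory_order_acquire)) {
        out = latestSample;
        sampleReady.store(false, std::memory_order_relaxed);
        return true;
    }
    return false;
}
```

The key property is that the ISR side never blocks: it only writes and sets a flag, so there is no possibility of the interrupt handler waiting on a mutex held by a thread.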
Functional Simulator
If your project requires you to implement automated unit tests for your
project, you will find that a lot of the work that goes into creating a
functional simulator will be done while creating the automated unit tests.
The reason is because the automated unit tests impose a decoupled design
that allows components and modules to be tested as platform-
independent code. Consequently, creating a simulator that reuses
abstracted interfaces is not a lot of additional effort—if it is planned for
from the beginning of the project. The two biggest pieces of work are as
follows:
Simulator
The software architecture and design accommodate the creation of a
functional simulator. A functional simulator is the execution of production
source code on a platform that is not the target platform. A functional
simulator is expected to provide the majority of the functionality but not
necessarily the real-time performance of the actual product. Or, more
simply, functional simulation enables developers to develop, execute, and
test production code without the target hardware. Figure 3-5 illustrates what
is common and different between the software built for the target platform
and software built for the functional simulator. The architecture for the
functional simulator is on the right.
Cybersecurity
Cybersecurity may or may not be a large topic for your project.
Nevertheless, even if your initial reaction is “there are no cybersecurity
concerns for this product,” you should still take the time to document why
there are no concerns. Also note that I include the protection of personally
identifiable information (PII) and intellectual property (IP) as part of
cybersecurity analysis.
Sometimes just writing down your reasoning as to why cybersecurity is
not an issue will reveal gaps that need to be addressed. Depending on the
number and types of attack surfaces your product has, it is not uncommon
to break the cybersecurity analysis into its own document. And a separate
document is okay; what is important is that you do some analysis and
document your findings.
A discussion of the best practices and methodologies for performing
cybersecurity analysis is beyond the scope of this book. For example,
with the GM6000 project, the cybersecurity concerns are minimal. Here
is a snippet of the “Cybersecurity” section from the software architecture
document that can be found in Appendix J, “GM6000 Software Architecture.”
Cybersecurity
The software in the GM6000 is considered to be a low-risk target in that it is
easier to compromise the physical components of a GM6000 than the software.
Assuming that the software is compromised, there are no safety issues
because the HE has hardware safety circuits. The worst-case scenarios for
compromised software are along the lines of denial-of-service (DoS) attacks,
which might cause the DHC to not heat the space, yield uncomfortable
temperature control, or run constantly to incur a high energy bill.
No PII is stored in persistent storage. There are no privacy issues
associated with the purchase or use of the GM6000.
Another possible security risk is the theft of intellectual property. That is,
can a malicious bad actor steal and reverse-engineer the software in the control
board? This is considered low risk since there are no patented algorithms or
trade secrets contained within the software and the software only has value
within the company’s hardware. The considered attack surfaces are as follows:
Memory Allocation
The architecture document should define requirements, rules, and
constraints for dynamic memory allocation. Because of the nature of
embedded projects, the extensive use of dynamic memory allocation
is discouraged. When you have a device that could potentially run for
years before it is power-cycled or reset, the probability of running out of
heap memory due to fragmentation becomes a valid concern. Here is the
“Memory Allocation” section from the software architecture document
that can be found in Appendix J, “GM6000 Software Architecture.”
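One common alternative to general-purpose heap allocation is a statically allocated, fixed-size block pool, which cannot fragment and which fails in a bounded, observable way when exhausted. Here is a generic sketch (not code from the GM6000 example):

```cpp
#include <cassert>
#include <cstddef>

// A fixed-size block pool: all memory is reserved statically up front, so
// the heap cannot fragment no matter how long the device runs.
template <std::size_t BlockSize, std::size_t NumBlocks>
class BlockPool {
    union Block {
        Block*        next;               // link while the block is free
        unsigned char payload[BlockSize]; // storage while allocated
    };
public:
    BlockPool() {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i + 1 < NumBlocks; ++i) {
            blocks_[i].next = &blocks_[i + 1];
        }
        blocks_[NumBlocks - 1].next = nullptr;
        head_ = &blocks_[0];
    }

    void* allocate() {                    // O(1), never fragments
        if (!head_) return nullptr;       // pool exhausted: a hard, known limit
        Block* b = head_;
        head_ = head_->next;
        return b->payload;
    }

    void release(void* p) {               // O(1) return to the free list
        Block* b = reinterpret_cast<Block*>(p);
        b->next = head_;
        head_ = b;
    }

    std::size_t capacity() const { return NumBlocks; }

private:
    Block  blocks_[NumBlocks];
    Block* head_ = nullptr;
};
```

Because every block is the same size, allocation and release are constant time and the out-of-memory condition is deterministic, which makes it testable during development rather than a surprise years into field operation.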
Requirement Traceability
Requirement traceability refers to the ability to follow the path a
requirement takes from design all the way through to a specific test case.
There are three types of traceability: forward, backward, and bidirectional.
Forward traceability is defined as starting with a requirement and working
downward (e.g., from requirements down to test cases). Backward
traceability is the opposite (e.g., from test cases up to requirements).
Bidirectional is the ability to trace in both directions.
It is not uncommon for SDLC processes to include requirements
tracing. In my experience, forward tracing requirements to verification
tests is all part and parcel of creating the verification test plan. Forward
tracing to design documentation and source code can be more
challenging, but it doesn’t have to be.
The following steps simplify the forward tracing of requirements to
design artifacts (i.e., the software architecture and Software Detailed
Design documents) and then to source code:
1. Content section means any section that is not part of the boilerplate or the housekeeping sections. The introduction, glossary, and change log sections are examples of non-content sections.
There are similar rules for the detailed design document, which also
support forward tracing from the software architecture to detailed design
sections and then to source code. See Chapter 6, “Foundation,” for details.
To see the requirements tracing for the GM6000, see the “Software
Requirements Traced to Software Architecture” document in Appendix P,
“GM6000 Software Requirements Trace Matrix.” You will notice that the trace
matrix reveals two orphan subsystems (SWA-12 Bootloader and SWA-26
Software Update). This particular scenario exemplifies a situation where the
software team fully expects to create a feature—the ability to update software
in the field—but there is no formal requirement because the software team
hasn’t reminded the marketing team that this would be a good idea … yet.
Even when your SDLC processes do not require requirements forward
traceability to design artifacts, I still strongly recommend that you follow
the steps mentioned previously because it is an effective mechanism
for closing the loop on whether the software team implemented all the
product requirements. It also helps prevent working on features that are
not required for release.
Summary
The initial draft of the software architecture document is one of the major
deliverables for the analysis step in the planning stage. As discussed in the
“Requirement Traceability” section, there is an auxiliary deliverable to the
software architecture document, which is the trace matrix for the software
architecture.
INPUTS
OUTPUTS
CHAPTER 4
Software
Development Plan
Nothing in this step requires invention. The work here is simply to capture
the stuff that the development team normally does on a daily basis.
Creating the Software Development Plan (SDP) is simply making the
team’s informal processes formal.
The value add of the SDP is that it eliminates many misunderstandings
and miscommunications. It proactively addresses the "I didn't know
I needed to …” problems that invariably occur when you have more than
one person writing software for a project. A written and current SDP is
especially helpful when a new team member is added to the project; it is a
great tool for transmitting key tribal knowledge to a new team member.
A large percentage of the SDP decisions are independent of the
actual project. That is, they are part of your company’s existing software
development life cycle (SDLC) processes or Quality Management System
(QMS). This means that work on the SDP can begin early and even
be created in parallel with the requirements documents. That said, I
recommend that you start the software architecture document before
finalizing the first draft of the SDP. The software architecture will provide
more context and scope for the software being developed than just the
high-level requirements.
Project-Independent Processes
and Standards
I recommend that you create the following development processes and
standards for every project:
Additional Guidelines
You should also reference some additional guidelines that may not be
under the control of your software team. For example, some important
guidelines may be owned by the quality team, and you can simply include
references to those guidelines. However, if those documents don’t exist,
you will need to create them. I recommend that you develop guidelines
for the following items:
• Requirements traceability
• Regulatory concerns
1. Document name and version number
2. Overview
3. Glossary
4. Document references
6. Software items
7. Documentation outputs
8. Requirements
10. Cybersecurity
11. Tools
12. SCM
13. Testing
14. Deliverables
15. Change log
Housekeeping
Sections 1–4 and section 15 (Document name and version number,
Overview, Glossary, Document references, and Change log) are housekeeping
sections for software documentation. They are self-explanatory, and it
should be easy to fill in these sections. For example, the Overview section
is just a sentence or two that states the scope of the document. Here is an
example from Appendix K, “GM6000 Software Development Plan”:
Also note that when a person is assigned a role, it does not mean
that they are the sole author of the document or the deliverable. The
responsibility of the role is to ensure the completion of the document and
not necessarily to write it themselves.
Software Items
This section identifies what the top-level software deliverables are. For
each software item, the following items are called out:
Documentation Outputs
This section is used to call out non-source code artifacts that the software
development team will deliver. It describes who has the responsibility to
see that the artifact is completed and delivered and who, if it is a different
person, the subject matter expert (SME) is.
Not all the artifacts or documents need to be Word-style documents.
For example, Doxygen HTML pages, wiki pages, etc., are acceptable.
Depending on your project, you may need to create all these documents,
or possibly just a subset. And in some cases, you may need additional
documents that aren’t listed here. Possible artifacts are
• Software architecture
• Doxygen output
• CI setup
• Release notes
Requirements
This section specifies what requirements documentation (with respect to
software) is needed, who is responsible for the requirements, and what is
the canonical source for the requirements. The section also includes what
traceability processes need to be put in place. Basically, all of the processes
discussed in Chapter 3, “Analysis,” are captured in the SDP. Here is an
example:
Cybersecurity
This section identifies the workflows and deliverables—if any—needed to
address cybersecurity. Here is an example from the SDP.
Tools
This section identifies tools used to develop the software. Because it
is sometimes a requirement that you can go back and re-create (i.e.,
recompile) any released version of the software, it is also important to
describe how tools will be archived.
Here is an example from the SDP:
Testing
This section details how the testing—which is the responsibility
of the software team—will be done. Topics that should be covered are as
follows:
• Unit test requirements and code coverage metrics as
applicable
Deliverables
This section summarizes the various deliverables called out in the SDP
document. This is essentially the software deliverables checklist for the
entire project. Here is an example from the SDP in Appendix K, “GM6000
Software Development Plan”:
Summary
Don’t skip creating an SDP. Lacking a formal SDP simply means that a de
facto SDP will organically evolve and be inconsistently applied over the
course of the project where it will be a constant source of drama.
INPUTS
OUTPUTS
CHAPTER 5
Preparation
The preparation step is not about defining and developing your product
as much as it is about putting together the infrastructure that supports the
day-to-day work. The tools that you will be using should have been called
out in the Software Development Plan (SDP), but if you have tools that
aren’t referenced there, this is the time to add them to the SDP along with
the rationale for prescribing their use.
The SDP for the GM6000 calls for the following tools:
GitHub Projects
GitHub Projects is an adaptable, flexible tool for planning and tracking
your work.2 It’s a free tool that allows you to track issues such as bug
reports or tasks or feature requests, and it provides various views to
facilitate prioritization and tracking. For example, Figure 5-1 is an example
of the Kanban-style view of the GM6000 project. For a given project, the
1. https://github.com/johnttaylor/epc/wiki
2. https://docs.github.com/en/issues/planning-and-tracking-with-projects/learning-about-projects/about-projects
issue cards can span multiple repositories, and branches can be created
directly from the issue cards themselves. For the details of setting up
GitHub Projects, I recommend the “Creating a Project” documentation
provided by GitHub.3
GitHub Wiki
GitHub provides one free wiki per public repository. The following list
provides examples of documents you should consider capturing on a Wiki:
3. https://docs.github.com/en/issues/planning-and-tracking-with-projects/creating-projects/creating-a-project
• Coding standards
Figure 5-2 shows an example of a GitHub wiki page for the GM6000
project.
[Home](https://github.com/johnttaylor/epc/wiki/Home)
* [GM6000](https://github.com/johnttaylor/epc/wiki/GM6000)
* [GM6000 SWBOM](https://github.com/johnttaylor/epc/wiki/GM6000---Software-Bill-of-Materials-(SWBOM))
* [GitHub Projects](https://github.com/johnttaylor/epc/wiki/GitHub---Projects)
4. https://docs.github.com/en/communities/documenting-your-project-with-wikis/creating-a-footer-or-sidebar-for-your-wiki#creating-a-sidebar
* [Setup](https://github.com/johnttaylor/epc/wiki/GitHub-Projects---Setup)
* [Developer Environment](https://github.com/johnttaylor/epc/wiki/Development-Environment)
* [Install Build Tools](https://github.com/johnttaylor/epc/wiki/Developer---Install-Build-Tools)
* [GIT Install](https://github.com/johnttaylor/epc/wiki/Developer---GIT-Install)
* [Install Local Tools](https://github.com/johnttaylor/epc/wiki/Developer---Install-Local-Tools)
* [Tools](https://github.com/johnttaylor/epc/wiki/Tools)
Jenkins
Installing and configuring Jenkins for your specific CI needs is nontrivial.
But here is the good news:
5. www.jenkins.io/doc/book/installing/
Summary
The preparation step in the planning stage is all about enabling software
development. Many of the activities are either mostly completed, because
the tools are already in place from a previous project, or can be started
in parallel with the creation of the Software Development Plan (SDP). So
there should be ample time to complete this step before the construction
stage begins. But I can tell you from experience, if you don’t complete
this step before construction begins, you can expect to experience major
headaches and rework. Without a working CI server, you may have broken
builds or failing unit tests without even knowing it. And the more code that
is written before these issues are discovered, the more the work (and pain)
needed to correct them increases.
INPUTS
OUTPUTS
• User accounts have been created for all of the team members
(including nonsoftware developers) in the tracking tool.
CHAPTER 6
Foundation
The foundation step in the planning phase is about getting the day-to-day
environment set up for developers so they can start the construction phase
with a production-ready workflow. This includes tasks such as:
• Setting up the SCM repositories (e.g., setting up the
repository in GitHub)
• Defining the top-level source code organization for
the project
• Creating build scripts that can build unit tests and
application images
• Creating skeleton applications for the project, including
one for the functional simulator
• Creating CI build scripts that are fully integrated and
that build all the unit tests and application projects
using the CI build server
• Creating the outline for the Software Detailed Design
(SDD) document. Because the SDD is very much a
living document, as detailed design is done on a just-
in-time basis, nothing is set in stone at this point, and
you can expect things to change throughout the course
of the project. (See Chapter 11, “Just-in-Time Detailed
Design,” for more details.)
SCM Repositories
The SCM tool should have been “stood up” as part of the preparation step.
So for the foundation step, it is simply a matter of creating the repository
and setting up the branch structure according to the strategy you defined
in the SDP.
1. https://cmake.org/
After you have selected your build tools, you need to create scripts
that will build the skeleton applications and unit tests (both manual and
automated).
The art of constructing build scripts and using makefiles is out of the
scope of this book, simply because of the number of build tools available and
the nuances and variations that come with them. Suffice it to say, though,
that over the years I have been variously frustrated by many of the build tools
that I was either required to use or tried to use. Consequently, I built my own
tool—the NQBP2 build engine—which eschews makefiles in favor of a list of
directories to build. This tool is freely available and documented on GitHub.
The GM6000 example uses the NQBP2 build engine, and more details
about NQBP2 are provided in Appendix F, “NQBP2 Build System.”
Skeleton Applications
The skeleton projects provide the structure for all the code developed
in the construction phase. There should be a skeleton application for
all released images, including target hardware builds and functional
simulator builds for the applications. Chapter 7, “Building Applications
with the Main Pattern,” provides details about creating the skeleton
application for the target hardware and the functional simulator.
The build-all script should be architected such that it does not have
to be updated when new applications and unit tests are added to the
project.
Design that the project needs. However, all sections (except housekeeping
sections) must be traceable back to a section in the software architecture
document. If a detailed design section can’t be traced back to the software
architecture document, you should stop immediately and resolve the
disconnect.
The other sections for the SDD outline for the GM6000 are as follows:
Summary
The foundation step is the final, gating work that needs to be completed
before moving on to the construction phase. The foundation step includes
a small amount of design work and some implementation work to get
the build scripts to successfully build the skeleton applications and to
integrate these scripts with the CI server.
You may be tempted to skip the foundation steps, or to do them later
after you have started coding the interesting stuff. Don't! The value of the CI
server is to detect broken builds immediately. Until your CI server is fully
stood up, you won’t know if your develop or main branches are broken.
Additionally, skipping building the skeleton applications is simply creating
technical debt. The creation and startup code will exist at some point, so
doing it first, while planning for known permutations, is a much more
efficient and cost-effective solution than organically evolving it over time.
INPUTS
OUTPUTS
• The first draft of the outline for the Software Detailed Design
document. In addition to the outline, the SDD should also
include the design for
CHAPTER 7
Building Applications
with the Main Pattern
At this point in the process, your principal objective is to set up a process
to build your application to run on your hardware. However, you may also
want to build variations of your application to provide different features
and functionality. Additionally, you may want to create builds that will
allow your application to run on different hardware. Consequently, the
goal of this chapter is to put a structure in place that lets you build these
different versions in an automated and scalable way.
Even if you have only a single primary application running on a
single hardware platform, I recommend that you still build a functional
simulator for your application. A simulator provides the benefits of being
able to run and test large sections of your application code even before the
hardware is available. It also provides a much richer debug environment
for members of the development team.1
The additional effort to create a functional simulator is mostly
planning. The key piece of that planning is starting with a decoupled
design that can effectively isolate compiler-specific or platform-specific
code. The trick here is not to do this with #ifdefs in your function calls.
1. For an in-depth discussion of the value proposition for a functional simulator, I recommend reading Patterns in the Machine: A Software Engineering Guide to Embedded Development.
© The Editor(s) (if applicable) and The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2024
J. T. Taylor and W. T. Taylor, The Embedded Project Cookbook, https://doi.org/10.1007/979-8-8688-0327-7_7
• Mutexes
• Semaphores
You can create your own OSAL or use an open source OSAL such
as CMSIS-RTOS2 for ARM Cortex processors. In addition, many
microcontroller vendors (e.g., Microchip and TI) provide an OSAL with
their SDKs. The example code for the GM6000 uses the OSAL provided by
the CPL C++ class library.
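A homegrown OSAL can be as simple as a set of free functions that application code calls instead of the RTOS API, with each platform supplying its own implementation. The sketch below backs the OSAL with host OS primitives; the names are illustrative and are not the CPL or CMSIS-RTOS2 API:

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <thread>

// A minimal OSAL surface: application code is written against these free
// functions, and each platform supplies its own implementation (e.g.,
// FreeRTOS on target, the host OS in the simulator).
namespace Osal {

struct Mutex { std::mutex impl; };          // host-backed implementation

inline void lock(Mutex& m)   { m.impl.lock(); }
inline void unlock(Mutex& m) { m.impl.unlock(); }

// Create a thread running 'entry'; the host build maps this to std::thread.
inline std::thread createThread(std::function<void()> entry) {
    return std::thread(std::move(entry));
}

} // namespace Osal

// Application code sees only the OSAL, so it builds unchanged for the
// target and the simulator.
inline int incrementUnderLock(Osal::Mutex& m, int& value) {
    Osal::lock(m);
    int result = ++value;
    Osal::unlock(m);
    return result;
}
```

The target build would provide the same `Osal` functions implemented over the RTOS primitives; the application-side call sites never change.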
In Figure 7-3, the differences between the target application and the
functional simulator are contained in the blocks labeled:
• Main
• Platform
Implementing Main
The Main pattern only addresses runtime creation and initialization. Of
course, in practice, some of the platform-specific bindings will be done
at compile or link time. These bindings are not explicitly part of the Main
pattern.
It might be helpful at this point to briefly review how an embedded
application gets executed. The following sequence assumes the
application is running on a microcontroller:
For steps 3a and 3b, the source code is platform dependent. One
approach is to just have a single chunk of code and use an #ifdef
whenever there are platform deltas. The problem is that #ifdefs within
function calls do not scale, and your code will quickly become unreadable
and difficult to maintain. The alternative is to separate—by platform—
the source code into individual files and use link time bindings to select
the platform-specific files. This means the logic of determining what
hardware-specific and application-specific pieces make up a stand-alone
module is moved out of your source code and into your build scripts. This
approach declutters the individual source code files and is scalable to
many platform variants.
For the remaining steps, 3c, 3d, and 3e, there is value in having a
common application startup across different platforms, especially as the
amount and complexity of the application startup code increase.
[Figure: Main pattern file organization for the target and the simulator. alpha1/main.cpp initializes the BSP and the CPL library, creates a FreeRTOS thread, and calls runTheApplication() with the BSP console streams; simulator/main.cpp initializes the CPL library and calls runTheApplication() with stdin and stdout. Both include appmain.h, whose shared implementation in appmain.cpp creates the application objects (e.g., the static HeatingAlgorithm instance), starts the debug shell, creates the application thread, opens and closes the heating algorithm, and calls the platform_initialize(), platform_open(), platform_close(), and platform_exit() functions declared in platform.h. The platform-specific implementations reside in alpha1Platform.cpp (EEPROM driver; bsp_reset_MCU() on exit) and simulatorPlatform.cpp (SimEEPROM driver; exit() on exit).]
• src/Ajax/Main
• src/Ajax/Main/_app
• src/Ajax/Main/_plat_alpha1
• src/Ajax/Main/_plat_simulator
Application Variant
It may be that you have a marketing requirement for good, better, and best
versions of your application, or it may be that you just need a test-enabled
application for a certification process. But whatever the reason for creating
additional applications, each separate application will be built from a large
amount of common code (see Figure 7-5).
[Figure: Code sharing across builds, showing fully common code, common simulator code, common target-specific code, and common target2-specific code.]
Essentially there are two axes of variance: the Y axis for different
application variants and the X axis for different platforms (target
hardware, simulator, etc.). Starting with two application variants (a main
application and an engineering test application) and two platforms (target
hardware and simulator), this yields nine distinct code combinations (see
Figure 7-7).
is better to extend the Main pattern to handle these new combinations. For
the GM6000 example, the Main pattern has been extended to build two
application variants and three platform variants.
2. The GM6000 example actually supports three different platforms: the simulator and two different hardware boards. To simplify the discussion for this chapter, only two platforms—the simulator and STM32 hardware target—are used. Scaling up to more target platforms or application variants only requires adding "new" code, that is, no refactoring of the existing Main pattern code.
Here is a snippet of section SDD-33 from the SDD for the Eros application.
The Eros application shares, or extends, the Ajax Main pattern (see the
“[SDD-32] Creation and Startup (Application)” section). The following
directory structure shall be used for the Eros-specific code and extensions to
the Ajax startup/shutdown logic.
src/Eros/Main/                 // Platform/Application Specific implementation
+--- app.cpp                   // Eros Application (non-platform) implementation
+--- _plat_xxxx/               // Platform variant 1 start-up implementation
|    +--- app_platform.cpp     // Eros app + specific startup implementation
+--- _plat_yyyy/               // Platform variant 2 start-up implementation
Build Scripts
After creating the code for the largely empty Main pattern, the next step is
to set up the build scripts for the skeleton projects. For the GM6000 project,
I use the nqbp2 build system. In fact, all the build examples in the book
assume that nqbp2 is being used. However, you can use whatever collection
• projects/GM6000/Ajax/alpha1/windows/gcc-arm
• projects/GM6000/Ajax/simulator/windows/vc12
• projects/GM6000/Eros/alpha1/windows/gcc-arm
• projects/GM6000/Eros/simulator/windows/vc12
In practice, there are actually more build directories than this because
the simulator is built using both the MinGW Windows compiler and the
GCC compiler for a Linux host.
Shared libdirs.b files are created for each possible combination of
common code. These files are located under the /project/GM6000 tree in
various subdirectories.
File Contents
Figures 7-9 and 7-10 are listings of the master libdirs.b files for the
Ajax target (alpha1 HW) build and the simulator (Win32-VC) build.
# Common stuffs
../../../ajax_common_libdirs.b
../../../ajax_target_common_libdirs.b
../../../../common_libdirs.b
../../../../target_common_libdirs.b
# BSP
src/Cpl/Io/Serial/ST/M32F4
src/Bsp/Initech/alpha1/trace
src/Bsp/Initech/alpha1
src/Bsp/Initech/alpha1/MX
src/Bsp/Initech/alpha1/MX/Core/Src > freertos.c
src/Bsp/Initech/alpha1/console
# SDK
xsrc/stm32F4-SDK/Drivers/STM32F4xx_HAL_Driver/Src >
stm32f4xx_hal_timebase_rtc_alarm_template.c
stm32f4xx_hal_timebase_rtc_wakeup_template.c
stm32f4xx_hal_timebase_tim_template.c
# FreeRTOS
xsrc/freertos
xsrc/freertos/portable/MemMang
xsrc/freertos/portable/GCC/ARM_CM4F
# SEGGER SysVIEW
src/Bsp/Initech/alpha1/SeggerSysView
Preprocessor
One of the best practices that is used in the development of the GM6000
application is the LConfig pattern.3 The LConfig pattern is used to provide
project-specific configuration (i.e., magic constants and preprocessor
symbols). Each build directory has its own header file—in this case,
colony_config.h—that provides the project specifics. For example, the
LConfig pattern is used to set the buffer sizes for the debug console and
the human-readable major-minor-patch version information.
As with the build script, there are common configuration settings
across the application and platform variants. Shared colony_config.h files
are created for each possible combination.
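The mechanics behind the LConfig pattern are plain C preprocessor defaults. The sketch below is illustrative; `OPTION_CONSOLE_BUFFER_SIZE` is a hypothetical symbol name, and in a real build the per-build `colony_config.h` is a separate file selected via the compiler's include path rather than defined inline.

```cpp
// Sketch of the LConfig idea: library headers provide overridable
// defaults, and each build directory's colony_config.h supplies the
// project-specific values.

// --- contents of one build variant's colony_config.h ---
#define OPTION_CONSOLE_BUFFER_SIZE 256   // this build wants a bigger console buffer

// --- library header pattern: use the default only if not overridden ---
#ifndef OPTION_CONSOLE_BUFFER_SIZE
#define OPTION_CONSOLE_BUFFER_SIZE 128   // library default
#endif

constexpr int consoleBufferSize = OPTION_CONSOLE_BUFFER_SIZE;
```

Because the override is resolved at compile time, every build directory can tune buffer sizes, version strings, and similar constants without any run-time cost.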
Simulator
When initially creating the skeleton applications, there are very few
differences between the target build and the simulator build. The actual
differences at this point are as follows:
Summary
Creating the skeleton applications that include the functional simulator
variants is the primary deliverable for the foundation step. The Main pattern is
an architectural pattern that enables reuse of platform-independent
code across the different platforms and application variants. Constructing
the skeleton applications is mostly a boilerplate activity, assuming you have
an existing middleware package (such as the CPL C++ class library) that
provides interfaces and abstractions that decouple the underlying platform.
Avoid the temptation to create the skeleton applications in combination with
features, drivers, etc. In my experience, creating the skeletons upfront
helps identify all the possible build variants, which makes the implementation
of the Main pattern cleaner and significantly easier to maintain.
For the GM6000, an example of the initial skeleton applications
using the Main pattern with support for multiple applications and target
hardware variants can be found on the main-pattern_initial-skeleton-
projects branch in the Git repository.
INPUTS
OUTPUTS
CHAPTER 8
Continuous Integration Builds
This chapter walks through the construction of the “build-all” scripts for
the GM6000 example code. The continuous integration (CI) server invokes
the build-all scripts when performing pull requests and merging code to
stable branches such as develop and main. At a minimum, the build-all
scripts should do the following things:
• Build all unit tests, both manual and automated
• Execute all automated unit tests and report pass/fail
• Generate code coverage metrics
• Build all applications. This includes board variants,
functional simulator, debug and nondebug
versions, etc.
• Run Doxygen (assuming you are using Doxygen)
• Help with collecting build artifacts. What this entails
depends on what functionality is or is not provided by
your CI server. Some examples of artifacts are
• Application images
• Doxygen output
• Code coverage data
© The Editor(s) (if applicable) and The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2024
J. T. Taylor and W. T. Taylor, The Embedded Project Cookbook, https://doi.org/10.1007/979-8-8688-0327-7_8
The CI Server
At this point, the CI server should be up and running and should be able to
invoke scripts stored in the code repository (see Chapter 5, “Preparation”).
In addition, the skeleton applications should have been constructed
(see Chapter 6, “Foundation”), and there should be a certain number of
existing unit tests that are part of the CPL C++ class library.
The build system used for examples in this book is nqbp2 (see
Appendix F, “nqbp2 Build System”). However, the code itself has no direct
dependencies on nqbp2, which means you can use a different build
system, but you will need to create the build makefiles and scripts.
Directory Organization
The key directories at the root of the source code tree are as follows:
Naming Conventions
You can use whatever naming convention you would like. The important
thing is that you should be able to differentiate scripts for different use
cases by simply looking at the file name. For example, here are some key
use cases to differentiate:
a.exe, a.out: Single lowercase "a" prefix to denote parallel. Used for all automated unit tests that can be run in parallel with other unit tests.

aa.exe, aa.out: Dual lowercase "aa" prefix to denote not parallel. Used for all automated unit tests that cannot be run in parallel with other unit tests. An example here would be a test that uses a hard-wired TCP port number.

b.exe, b.out: Single lowercase "b" prefix to denote parallel and requires a script. Used for all automated unit tests that can be run in parallel and that require an external Python script to run. An example here would be piping a golden input file to stdin of the test executable.

bb.exe, bb.out: Dual lowercase "bb" prefix to denote not parallel and requires a script. Used for all automated unit tests that cannot be run in parallel and that require an external Python script to run.

a.py: Single lowercase "a" prefix to denote that the script runs only tests prefixed with a single lowercase "b". Python program used to execute the b.exe and b.out executables.

aa.py: Dual lowercase "aa" prefix to denote that the script runs only tests prefixed with a dual lowercase "bb". Python program used to execute the bb.exe and bb.out executables.

<all others>: Manual unit tests can use any name for the executable except for the ones listed previously.
• It builds all target unit tests (e.g., builds that use the
GCC-ARM cross compiler). Typically, these are manual unit tests.
The script is built so that adding new unit tests (automated or manual)
or new applications under the projects/ directory does not require the
script to be modified.
If you are adding a new hardware target—for example, changing from
the STM32 alpha1 board to the STM32 dev1 board—then a single edit to
the script is required to specify the new hardware target.
If new build artifacts are needed, then the script has to be edited to
generate the new artifacts and to copy the new items to the _artifacts/
directory.
Table 8-2 summarizes where changes need to be made to
accommodate changes to the build script.
New Unit Test: Create a new unit test build directory under the tests/ directory.

New Build Artifact: Edit the build-all scripts to generate the new build artifact and copy it to the _artifacts/ directory.

New Target Hardware: Edit the build-all scripts as needed. If the new target is a "next board spin," then only the _TARGET and _TARGET2 variables need to be updated.

New Application: Create the application build directory under the projects/ directory.
@echo on
:: This script is used by the CI\Build machine to build the Windows Host
:: projects
::
:: usage: build_all_windows.bat <buildNumber> [branch]
set _TOPDIR=%~dp0
set _TOOLS=%_TOPDIR%..\xsrc\nqbp2\other
set _ROOT=%_TOPDIR%..
set _TARGET=alpha1
set _TARGET2=alpha1-atmel
:: Set Build info (and force build number to zero for "non-official" builds)
set BUILD_NUMBER=%1
set BUILD_BRANCH=none
IF NOT "/%2"=="/" set BUILD_BRANCH=%2
IF "%BUILD_BRANCH%"=="none" set BUILD_NUMBER=0
echo:
echo:BUILD: BUILD_NUMBER=%BUILD_NUMBER%, BRANCH=%BUILD_BRANCH%
:: Build unit test projects (debug builds for more accurate code coverage)
cd %_ROOT%\tests
%_TOOLS%\bob.py -v4 mingw_w64 -cg --bldtime -b win32 --bldnum %BUILD_NUMBER%
IF ERRORLEVEL 1 EXIT /b 1
...
...
:: Everything worked!
:builds_done
echo:EVERYTHING WORKED
exit /b 0
# This script is used by the CI/Build machine to build the Linux projects
#
# The script ASSUMES that the working directory is the package root
#
# usage: build_linux.sh <bldnum>
#
set -e
Summary
It is critical that the CI server and the CI build scripts are fully up and
running before the construction stage begins. By not having the CI process
in place, you risk starting the project with broken builds and a potential
descent into initial merge hell.
INPUTS
OUTPUTS
• The CI builds are full featured. That is, they support and
produce the following outputs on every build.
• They tag the develop and main branches in GitHub with the
CI build number.
CHAPTER 9
Requirements Revisited
It is extremely rare that all of the requirements have been defined before
the construction phase. In fact, it is not even reasonable to expect that the
requirements will be 100% complete before starting implementation. The
reason is that unless all of the details are known—and nailing down
the details is what you do in the construction phase—it is impossible to
have all of the requirements carved in stone. So in addition to design,
implementation, and testing, there will inevitably be a continuing
amount of requirement work that needs to be addressed. This includes
refactoring and clarifying existing requirements as well as developing new
requirements. Common sources for new requirements that show up in the
construction phase are as follows:
Analysis
There are many types of analysis (e.g., FMEA) that are typically done for an
embedded product to ensure it is safe, reliable, compliant with regulatory
agencies, and secure. These tests and analyses cover all aspects of the
product—mechanical, electrical, packaging, and manufacturing—not
just software. This is where many of the edge cases are uncovered and
possible solutions are proposed. Often, these solutions take the form of
new requirements, which are also sometimes called risk control measures.
Typically, these new requirements are product requirements (PRS) or
engineering detail requirements (SRS), although sometimes they feed back
into the marketing requirements (MRS).
To mitigate this risk, the following requirement was added as a risk control
measure to the PRS:
PR-206: Heater Safety. The Heating Element (HE) sub-assembly shall provide
an indication to the Control Board (CB) when it has been shut off due to a
safety limit.
1. See https://en.wikipedia.org/wiki/Integral_windup
There is a single PRS requirement for the heating algorithm for the GM6000
example.
A choice was made to use a fuzzy logic controller (FLC) for the core temperature
control algorithm instead of simple error or proportional-integral-derivative (PID)
algorithms. The details and nuances of the FLC are nontrivial, so the algorithm
design was captured as a separate document (see Appendix N, “GM6000 Fuzzy
Logic Temperature Control”). Here is a snippet from the FLC document.
Fuzzification
There are two input membership functions: one for absolute temperature error
and one for differential temperature error. Both membership functions use
triangle membership sets as shown in Figure 9-1.
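A triangle membership set can be computed in a few lines. The function below is a generic sketch, not the code from Appendix N; the parameterization (a center point and a half-width) is an illustrative assumption.

```cpp
#include <algorithm>

// Generic triangle membership function: membership is 1.0 at 'center'
// and falls off linearly to 0.0 at center +/- halfWidth. The actual set
// boundaries used by the GM6000 FLC are defined in Appendix N.
float triangleMembership(float x, float center, float halfWidth)
{
    float distance = (x > center) ? (x - center) : (center - x);
    return std::max(0.0f, 1.0f - distance / halfWidth);
}
```

During fuzzification, each input (absolute temperature error and differential temperature error) is evaluated against every set in its membership function, yielding a degree of membership per set.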
2. See Chapter 2 for a discussion of functional vs. nonfunctional requirements.
Requirements Tracing
All functional requirements should be forward traceable to one or more
test cases in the verification test plan.3 If there are test cases that can’t
be backward traced to requirements, there is a disconnect between what
was asked for and what is expected for the release. The disconnect
needs to be reconciled, not just for the verification testing, but for the
software implementation (and the other engineering disciplines) as
well. Reconciled means that there are no orphans when forward tracing
functional requirements to test cases or when backward tracing test cases
to requirements. NFRs are not typically included in the verification tracing.
Requirements should also be traced to design outputs. Design outputs
include the software architecture and detailed design documents as well as
the source code. The importance or usefulness of tracing to design outputs
is the same as it is for verification testing; it ensures that you are building
everything that was asked for and not building stuff that is not needed. Unlike
verification tracing, NFRs should be included in the tracing to design outputs.
3. See Chapter 3, "Analysis," for a discussion of forward and backward requirements tracing.
Requirements tracing to design outputs is not the hard and fast rule
that it is for verification testing. However, with some up-front planning,
and following the SDLC process steps in order, tracing requirements to
design outputs is a straightforward exercise.
Requirements tracing can be done manually, or it can be done using
a requirement management tool (e.g., DOORS or Jama). The advantage of
using requirement management tools is that they are good at handling the
many-to-many relationships that occur when tracing requirements and
provide both forward and backward traces. The downside to these tools
is their cost and learning curve. For the GM6000 project, I manually trace
requirements to and from design outputs using a spreadsheet.
The following processes simplify the work needed to forward trace
software-related requirements down to source code files:
4. A content section is any section that is not a housekeeping section. For example, the Introduction, Glossary, and Change Log are considered housekeeping sections.
After you have completed these steps, you have effectively forward
traced the software architecture through detailed design to source code
files. The remaining steps are to forward trace requirements (i.e., MRS,
PRS, SRS) to sections in the software architecture document.
Summary
The formal requirements should avoid excessive design details or specifics
whenever possible. Design statements should be used to bridge
the gap between formal requirements and the details needed by the
software team to design and implement the solutions.
INPUTS
OUTPUTS
CHAPTER 10
Tasks
This is the start of the construction phase, and this phase is where the
bulk of the development work happens. Assuming that all of the software
deliverables from the planning phase were completed, there should now
be very few interdependencies when it comes to software work. That is,
the simulator has removed dependencies on hardware availability, the
UI has been decoupled from the application’s business logic, the control
algorithm has been decoupled from its inputs and outputs, etc. And
since there are now minimal interdependencies, most of the software
development can be done in parallel; you can have many developers
working on the same project without getting bogged down in merge hell.
The software work, then—the work involved in producing working
code—can be broken down into tasks, where each task has the following
elements:
1. Requirements
4. A code review
A single task does not have to contain all elements; however, the order
of the elements must be honored. For example, there are three types
of tasks:
The tasks as discussed in this chapter are very narrow in scope, and
they are limited primarily to generating source code. But there are many
other tasks—pieces of work or project activities such as failure analysis, CI
maintenance, bug fixing, bug tracking, integration testing, etc.—that still
need to be planned, tracked, and executed for the project to be completed.
1) Requirements
The requirements for a task obviously scope the amount of work that
has to be done. However, if there are no requirements, then stop design
and coding activities and go pull the requirements from the appropriate
stakeholders. Also remember that with respect to tasks, design statements
are considered requirements.
If there are partial or incomplete requirements for the task, split
the current task into additional tasks: some where you do have sufficient
requirements and others where the work is still undefined and ambiguous.
Save the undefined and ambiguous tasks for later when they are better and
more fully specified. Proceeding without clear requirements puts all of the
downstream work at risk.
1. Test-Driven Development: https://en.wikipedia.org/wiki/Test-driven_development
5) Merge
Never break the build.
driver is done. However, there is still more work needed to update (e.g.,
account for different pin assignments) and verify the driver after the target
hardware is received. Consequently, a new task or card should be put
into the project backlog to account for the additional work. If this isn’t
done, bad things happen when the target board is finally available and
incorporated into testing. Jim is now fully booked with other tasks and is
not “really available” to finish the EEPROM driver. A minor fire drill ensues
over priorities for when and who will finish the EEPROM driver.
While this example is admittedly a bit contrived, most software tasks
have nuances where there is still some amount of future work left when
the code is merged. You will save a lot of time and angst if the development
team (including the program manager) clearly defines what done means.
My recommendation for a definition of done is as follows:
Task Granularity
What is the appropriate granularity of a task? Or, rather, how much time
should the task take to complete? Generally, taking into account the five
elements of a coding task discussed previously, a task should not take
longer than a typical Agile sprint, or approximately two weeks. That said,
tasks can be as short as a couple of hours or as long as several weeks.
Table 10-1 shows some hypothetical examples of tasks for the GM6000
project.
SPI Driver (coding task): The requirement for the task is that the physical interface to the LCD display be an SPI bus.
1) The detailed design is created. The design is captured as one or more sections in the SDD. These sections formalize the driver structure, including how the SPI interface will be abstracted to be platform independent.
2) The new sections in the SDD are reviewed.
3) Before the code is written, a ticket is created, if there is not one already, and a corresponding branch in the repository is created for the code.
4) The code is written.
5) As this coding task requires a manual unit test to verify the SPI operation on a hardware platform, the manual unit test or tests are created and run.
6) After the tests pass, a pull request is generated, the code is reviewed, and any action items arising from the review are resolved.
7) If the CI build is successful, the code is then merged into its parent branch.

Control Algorithm Definition (requirement task): There is only one formal requirement (PR-207) with respect to the heater control algorithm. The scope of the task is to define the actual control algorithm and document the algorithm as design statements. The documented algorithm design is then reviewed by the appropriate SMEs.
• Design
• Design Review
• Requirements
• Design
• Design Review
• Unit Test
• Code
Summary
Software tasks are a tool and process for decomposing the software effort
into small, well-defined units that can be managed using Agile tools
or more traditional Gantt chart–based schedulers. Within a task, there
is a waterfall process that starts with requirements, followed by detailed
design, and then by creating the unit tests and coding.
INPUTS
OUTPUTS
• Working code
• Unit tests
• Design reviews
• Code reviews
CHAPTER 11
Just-in-Time Detailed Design
While defining the overall software architecture is a waterfall process
performed during the planning phase, detailed software design is made up of
many individual activities that are done on a just-in-time basis. As work
progresses during the construction phase, the Software Detailed Design is
updated along the way. At the start of the construction phase, the state of
the Software Detailed Design document should consist of at least
As you put together the detailed design components, you can write for
a knowledgeable reader who is
Examples
I can’t tell you how to do detailed design. There is no one-size-fits-all
recipe for doing it, and at the end of the day, it will always be a function of
your experience, abilities, and domain knowledge. But I can give examples
of what I do. Here are some examples from this design process that I used
when creating the GM6000 project.
Subsystem Design
As defined in the software architecture of the GM6000, persistent storage is
one of the top-level subsystems identified. It is responsible for “framework,
data integrity checks, etc., for storing and retrieving data that is stored
in local persistent storage.” The snippets that follow illustrate how the
subsystem design was documented in the SDD (Appendix M, “GM6000
Software Detailed Design (Final Draft)”).
[SDD-55] Records
Here is the design for how persistent storage records are structured and
how data integrity and power-failure use cases will be handled.
The application data is persistently stored using records. A record is the
unit of atomic read/write operations from or to persistent storage. The CPL
C++ class library’s persistent storage framework is used. In addition, the
library’s Model Point records are used, where each record contains one or
more model points.
All records are mirrored—two copies of the data are stored—to ensure
that no data is lost if there is power failure during a write operation.
All of the records use a 32-bit CRC for detecting data corruption.
Instead of a single record, separate records are used to insulate the data
against data corruption. For example, as the Metrics record is updated
multiple times per hour, it has a higher probability for data corruption than
the Personality record that is written once.
The application data is broken down into the following records:
• User Settings
• Metrics
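The mirroring-plus-CRC scheme described above can be illustrated with a small sketch. This is a hedged stand-in, not the CPL framework's actual record classes: `MirroredRecord` and the in-memory byte vectors (representing two EEPROM regions) are hypothetical.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Standard reflected CRC-32 (polynomial 0xEDB88320), bitwise variant
static uint32_t crc32(const uint8_t* data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return ~crc;
}

struct MirroredRecord
{
    std::vector<uint8_t> copyA, copyB;   // two stored copies of the record
    uint32_t             crcA = 0, crcB = 0;

    void write(const uint8_t* data, size_t len)
    {
        copyA.assign(data, data + len);  // update copy A first...
        crcA = crc32(data, len);
        copyB = copyA;                   // ...then copy B; a power loss in
        crcB = crcA;                     // between leaves copy B's old data intact
    }

    bool copyAIsValid() const
    {
        return !copyA.empty() && crc32(copyA.data(), copyA.size()) == crcA;
    }
};
```

On a read, a failed CRC check on one copy means the other copy (with its own CRC) is used instead, which is how a power failure mid-write never loses the entire record.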
The concrete Record instances are defined per project variant (e.g., Ajax
vs. Eros), and the source code files are located at
src/Ajax/Main
src/Eros/Main
The concrete driver for the STM32 microcontroller family uses the ST
HAL I2C interfaces for the underlying I2C bus implementation.
src/Driver/I2C
src/Driver/I2C/STM32
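The decoupling implied by this directory split can be sketched as an abstract I2C master interface plus a mock for simulator and unit-test builds. The class names below are illustrative assumptions, not the actual Driver::I2C API; on the target, a concrete subclass would delegate to the ST HAL.

```cpp
#include <cstdint>
#include <cstddef>

// Abstract I2C master: the application and the EEPROM driver code
// against this interface and never touch the vendor SDK directly.
class I2cMaster
{
public:
    virtual ~I2cMaster() = default;
    // Write 'len' bytes to the 7-bit device address; returns true on success
    virtual bool write(uint8_t deviceAddr, const uint8_t* data, size_t len) = 0;
};

// Mock variant for simulator/unit-test builds: records the requested
// transfer so a test can assert on what the driver attempted.
class MockI2cMaster : public I2cMaster
{
public:
    uint8_t lastAddr = 0;
    size_t  lastLen  = 0;

    bool write(uint8_t deviceAddr, const uint8_t* data, size_t len) override
    {
        (void) data;        // a fuller mock would store the payload too
        lastAddr = deviceAddr;
        lastLen  = len;
        return true;
    }
};
```

This is what lets the same NV/EEPROM driver code build for both the simulator and the STM32 target: only the concrete `I2cMaster` implementation changes per platform.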
src/Driver/Button
src/Driver/Button/STM32
Graphics Library
The Graphics Library is a top-level subsystem identified in the software
architecture document. The software architecture calls out the Graphics
Library subsystem as third-party code.
Note: This decision should be revisited for future GM6000 derivatives that
have a different display or a more sophisticated UI. The Pimoroni library
has the following limitations that argue against using it as a "foundational"
component:
xsrc/pimoroni
src/Ajax/ScreenMgr
Design Reviews
Always perform design reviews and always perform them before coding
the component. That is, there should be a design review for each task
(see Chapter 10, “Tasks”), and you shouldn’t start coding until the design
review has been completed. In my experience, design reviews have a
bigger return on investment (for the effort spent) than code reviews. The
perennial problem with code reviews is that they miss the forest for the trees.
That is, code reviews tend to focus on the syntax of the code and don’t
really look at the semantics or the overall design of the software. Code
reviews can catch implementation errors, but they don’t often expose
design flaws. And design flaws are much harder and more expensive to fix
than implementation errors.
Do not skip the design review step even when the design content is just a
few sentences or paragraphs. Holding design reviews is a great early
feedback loop between the developers and the software lead.
Review Artifacts
Design reviews can be very informal. For example, the software lead
reviews the updated SDD and gives real-time feedback to the developer.
However, depending on your QMS or regulatory environment, you may
be required to have review artifacts (i.e., a paper trail) that evidence that
reviews were held and that action items were identified and resolved.
When formal review artifacts are required, it is tempting to hold one or
two design reviews at the end of projects. Don’t do this. This is analogous
to having only a single code review of all the code after you have released
the software. Reviews of any kind only add value when performed in real
time before coding begins. And, of course, action items coming out of the
review should be resolved before coding begins as well.
Summary
The software detailed design process is about decoupling problem-solving
from the act of writing source code. When detailed design creates
solutions, then “coding” becomes a translation activity of transforming the
design into source code. The detailed design process should include the
following:
INPUTS
1. https://en.wikipedia.org/wiki/Rubber_duck_debugging
OUTPUTS
CHAPTER 12
You can now begin the coding and unit testing process. This involves
the following:
Chapter 12 Coding, Unit Tests, and Pull Requests
Check-In Strategies
After the implementation is completed, and the desired test coverage has
been achieved, the next step is to check everything into the repository. One
advantage of doing the work on a branch is that the developer can check
in their work in progress without impacting others on the team. I highly
recommend checking in (and pushing) source code multiple times a day.
This provides the advantage of being able to revert or back out changes
if your latest work goes south. It also ensures that your code is stored on
more than one hard drive.
Pull Requests
Use pull requests (PRs) to merge the source on the temporary branch to its
parent branch (e.g., develop or main). Other source control tools provide
similar functionality, but for this discussion, my examples will be GitHub
specific.1 The pull request mechanism has the following advantages:
1. If you are not using Git, you may have to augment your SCM's merge functionality with external processes for code reviews and CI integration.
Granularity
I strongly recommend the best practice of implementing new components
in isolation without integrating the component into the application.
That is, I recommend that you first go through the coding process—
steps 1 through 5—and then create a second ticket to integrate the new
component into the actual application or applications. Things are simpler
if the scope of the changes is restricted to the integration changes only.
Additionally, there are the following benefits:
Examples
The sections that follow provide examples from the GM6000 project using
the coding and unit testing process.
I2C Driver
The I2C driver is needed for serial communications to the off-board
EEPROM IC used for the persistent storage in the GM6000 application.
The class diagram in Figure 12-1 illustrates that there are different
implementations for each target platform.
Before the coding and unit testing began, the following steps were
completed:
Table 12-1. Process and work summary for the I2C driver
Step Work
Branch: 1. Create a branch off of develop for the work. The branch name should contain the ticket card number in its name.
Test Project: 1. Create the unit test project and build scripts that are specific to the target hardware. The new unit test project is located under the existing tests/Driver directory tree at tests/Driver/I2C/_0test/master-eeprom/NUCLEO-F413ZH-alpah1/windows/gcc-arm.
a. Typically, I clone a similar test project directory to start. For the I2C driver's test project, I cloned the tests/Driver/DIO/_0test/out_pwm/NUCLEO-F413ZH-alpah1/windows/gcc-arm. Then I modified the copied files to create an I2C unit test using these steps:
2. Compile, link, and download the unit tests. Then verify that the test code passes. Iterate to fix any bugs found or to extend the test coverage.
a. Notify the code reviewers that the code is ready for review. This is
done automatically by the Git server when the PR owner selects or
assigns reviewers as part of creating the PR.
4. If there are CI build failures, commit and push code fixes to the PR
(which triggers a new CI build).
5. Resolve all code review comments and action items. And again, any
changes you make to files in the PR trigger a new CI build.
6. After all review comments have been resolved, and the last pushed
commit has built successfully, the branch can be merged to develop.
The merge will trigger a CI build for the develop branch.
7. Delete the PR branch as it is no longer needed.
At this point, the I2C driver exists and has been verified, but it has not
been integrated into the actual GM6000 application (i.e., Ajax or Eros).
Additional tickets or tasks are needed to integrate the driver into the
Application build.
The I2C driver is an intermediate driver in the GM6000 architecture in
that it is used by the Driver::NV::Onsemi::CAT24C512 driver. In turn, the
NV driver is used by the Cpl::Dm persistent storage framework for reading
and writing persistent application records. The remaining tasks would be
the following:
Screen Manager
The UI consists of a set of screens. At any given time, only one screen is
active. Each screen is responsible for displaying content and reacting to UI
events (e.g., button presses) or model point change notifications from the
application. The Screen Manager component is responsible for managing
the navigation between screens as well as handling the special use cases
such as the splash and UI halted screens.
The Screen Manager itself does not perform screen draw operations,
so it is independent of the Graphics library as well as the hardware.
Before the coding and unit testing began, the following steps
should have been completed:
• A ticket was created for the Screen Manager.
• The requirements were identified. In this case, these are the
UI wireframes (see the EPC wiki).
• The detailed design was completed and reviewed.
From the detailed design, the Screen Manager is
responsible for
• Screen navigation
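As an illustrative sketch only (the class and method names below are invented, not the GM6000's actual Screen Manager API), a navigation stack is one common way to implement the screen-navigation responsibility:

```cpp
#include <cassert>
#include <vector>

// Hypothetical screen interface: the Screen Manager only needs enter/exit
// hooks; actual drawing is delegated to the graphics library elsewhere.
struct Screen
{
    virtual void enter() {}   // called when the screen becomes active
    virtual void exit()  {}   // called when the screen is deactivated
    virtual ~Screen() = default;
};

// Minimal Screen Manager: tracks the single active screen with a stack.
class ScreenMgr
{
public:
    void push( Screen& next )     // navigate "forward" to a new screen
    {
        if ( !stack_.empty() ) stack_.back()->exit();
        stack_.push_back( &next );
        next.enter();
    }
    void pop()                    // navigate "back" to the previous screen
    {
        if ( stack_.size() < 2 ) return;   // never pop the last screen
        stack_.back()->exit();
        stack_.pop_back();
        stack_.back()->enter();
    }
    Screen* active() const { return stack_.empty() ? nullptr : stack_.back(); }

private:
    std::vector<Screen*> stack_;
};
```

Special cases such as the splash or UI halted screens could then be modeled as screens pushed on top of whatever is currently active.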
At this point, I was ready to start the process of creating the Screen
Manager. Table 12-2 summarizes the work.
Table 12-2. Process and work summary for the Screen Manager
Step Work
Branch 1. Create a Git branch (off of develop) for the work. The branch
name should contain the ticket number.
Test Project 1. Create the unit test projects. Since it is an automated unit test, the
test needs to be built with three different compilers to eliminate
issues that could potentially occur when using the target cross
compiler. These are the three compilers I use:
• Microsoft Visual Studio compiler—Used because it provides
the richest debugging environment
• MinGW compiler—Used to generate code coverage metrics
• GCC for Linux—Used to ensure the code is truly platform
independent
2. I recommend that you build and test with the compiler toolchain
that is the easiest to debug with. After all tests pass, then build
and verify them with the other compilers. The unit test project
directories are as follows:
tests/Ajax/ScreenMgr/_0test/windows/vc12
tests/Ajax/ScreenMgr/_0test/windows/mingw_w64
tests/Ajax/ScreenMgr/_0test/linux/mingw_gcc
3. Compile, link, and execute the unit tests and verify that the test
code passes with the targeted code coverage.
4. Iterate through the process to extend the test coverage and fix
any bugs.
Pull Request 1. Run the top/run_doxygen.py script to verify that there are
no Doxygen errors in the new files that were added. This step is
needed because the CI builds will fail if there are Doxygen errors
present in the code base.
a. Notify the code reviewers that the code is ready for review.
This is done automatically by the Git server when the PR
author selects or assigns reviewers as part of creating the PR.
4. If there are CI build failures, commit and push code fixes to the
PR (which will trigger new CI builds).
5. Resolve all code review comments and action items. And again,
any changes you make to files in the PR trigger a new CI build.
6. After all review comments have been resolved and the last
pushed commit has built successfully, the branch can be merged
to develop. The merge will trigger a CI build for the develop
branch.
7. Delete the PR branch as it is no longer needed.
At this point, the Screen Manager code exists and has been verified,
but it has not been integrated into the actual GM6000 application (i.e.,
Ajax or Eros). A second ticket is needed to actually integrate the Screen
Manager into the application build. For the GM6000 code, I created a
second ticket that included the following:
• Creating a stub for a Home screen for the Ajax and Eros
applications.
Summary
If you wait until after the Software Detailed Design has been completed
before writing source code and unit tests, you effectively decouple
problem-solving from the act of writing source code. The coding, unit test,
and pull request process should include the following:
INPUTS
OUTPUTS
CHAPTER 13
Integration Testing
Integration testing is where multiple components are combined into
a single entity and tested. In theory, all of the components have been
previously tested in isolation with unit tests, and the goal of the integration
testing is to verify that the combined behavior meets the specified
requirements. Integration testing also serves as incremental validation of
the product as more and more functionality becomes available.
Integration testing is one of those things that is going to happen
whether you explicitly plan for it or not. The scope and effort of the
integration testing effort can range from small to large, and the testing
artifacts that are generated can be formal or informal. Here are some
examples of integration testing:
1. Plan
5. Report results
• It sets a timeline.
Based on the current test results, the software and hardware leads
determine if the goals of the testing have been reached. If yes, the software
lead sends out an email with test results. Note that the testing goals can be
met without all the test cases passing. For example, if the RF range tests
fail, and a new antenna design is needed to resolve the issue, testing can be
paused until new hardware is received, and then the integration test plan
can be re-executed.
¹ A formal build is defined as a build of a "stable" branch (e.g., develop or main)
performed on the CI build server where the source code has been tagged and
labeled. It is imperative that all non-unit testing be performed on formal builds
because the provenance of a formal build is known and labeled in the SCM
repository (as opposed to a private build performed on a developer's computer).
Smoke Tests
Smoke or sanity tests are essentially integration tests that are continually
executed. Depending on the development organization, smoke tests
are defined and performed by the software test team, the development
team, or both. In addition, smoke tests can be automated or manual. The
automated tests can be executed as part of the CI build or run on demand.
If the tests are manual, it is essential that the test cases and steps be
sufficiently documented so that the actual test coverage is repeatable over
time even when different engineers perform the testing.
One downside to smoke tests is that they can be easily broken as
requirements and implementations evolve over time. This means that
Simulator
A functional simulator can be a no-cost platform for performing
automated smoke tests that can be run as part of the CI builds. These
automated simulator tests can be simple or complex. In my experience,
the only limitation for simulator-based tests that run as part of the CI build
is the amount of time they add to the overall CI build time.
² https://pypi.org/project/pexpect/
Summary
Integration testing performed by the software team occurs throughout a
project. How formal or informal the integration testing should be depends
on what is being integrated. Generally, a more formal integration testing
effort is required when integrating components across teams. However,
a minimum level of formality should be that the pass/fail criteria are
written down.
Continual execution of integration tests, for example, smoke tests or
sanity tests, provides an initial quality check of a build. Ideally, these tests
would be incorporated in the CI build process.
INPUTS
• Source code
OUTPUTS
CHAPTER 14
Board Support
Package
The Board Support Package (BSP) is a layer of software that allows
applications and operating systems to run on the hardware platform.
Exactly what is included in a BSP depends on the hardware platform, the
targeted operating system (if one is being used), and potential third-party
packages that may be available for the hardware (e.g., graphics libraries).
The BSP for a Raspberry Pi running Linux is much more complex than a
BSP for a stand-alone microcontroller. In fact, I have worked on numerous
microcontroller-based projects that had no explicit concept of a BSP in
their design. So while there is no one-size-fits-all definition or template for
BSPs in the microcontroller hardware space, ideally a microcontroller BSP
encapsulates the following:
Compiler Toolchain
The compiler toolchain is all of the glue and magic that has to be in place
from when the MCU’s reset vector executes up until the application’s
main() function is called. This includes items such as:
1. The vector table—The list of function pointers that
are called when a hardware interrupt occurs.
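As a hedged illustration (the names and layout below are simplified and invented; a real Cortex-M vector table has many more entries and is placed at a fixed address by the linker script), the vector table is essentially an array of function pointers whose first slot holds the initial stack pointer:

```cpp
#include <cstdint>

// Illustrative only: a drastically abbreviated Cortex-M style vector table.
using IsrHandler = void (*)();

extern "C" void Reset_Handler();          // entry point after reset
static void Default_Handler() {}          // placeholder; real code would trap

// On real hardware this struct would be pinned to the start of flash (or to
// the vector-table offset register) via a dedicated linker section.
struct VectorTable
{
    const void* initialStackPointer;      // slot 0: initial stack pointer value
    IsrHandler  reset;                    // slot 1: reset vector
    IsrHandler  nmi;                      // slot 2: NMI handler
    IsrHandler  hardFault;                // slot 3: HardFault handler
};

static uint8_t simulatedStack[256];       // host-side stand-in for RAM
extern "C" void Reset_Handler()
{
    // Real startup code: copy .data from flash, zero .bss, then call main()
}

const VectorTable g_vectorTable = {
    simulatedStack + sizeof( simulatedStack ),  // stacks grow downward
    Reset_Handler,
    Default_Handler,
    Default_Handler
};
```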
BSPs in Practice
As previously mentioned, there is no one size fits all when it comes to
microcontroller BSPs. There are two BSPs for the GM6000 project: one is
for the ST NUCLEO-F413ZH evaluation board, and the second is for the
Adafruit Grand Central M4 Express board. The two BSPs are structured
completely differently, and the only thing they have in common is that each
BSP has Api.h and Api.cpp files. The BSP for the Nucleo board uses ST's
HAL library and Cube MX IDE for low-level hardware functionality. The
BSP for the Adafruit Grand Central board leverages the Arduino framework
for abstracting the hardware. To illustrate the differences, the following are
summaries of the file and directory structure of the two BSPs.
ST NUCLEO-F413ZH BSP
src/Bsp/Initech/alpha1/ // Root directory for the BSP
+--- Api.h // Public BSP interface
+--- Api.cpp // Implementation for Api.h
+--- ...
+--- stdio.cpp // C Library support
+--- syscalls.c // C Library support
+--- ...
+--- mini_cpp.cpp // C++ support when not linking against stdlibc++
+--- MX/ // Root directory for the ST Cube MX tool
| +--- MX.ioc // MX project file
| +--- startup_stm32f413xx.s // Initializes the Data & BSS segments
| +--- STM32F413ZHTx_FLASH.ld // Linker script
| +--- ...
| \--- Core/ // Contains the MX tool's auto generated code.
// The dir contains the vector table, clock cfg, etc.
|
+--- console/ // Support for the CPL usage of the debug UART
| +--- Output.h // Cpl::Io stream instance for the debug UART
| \--- Output.cpp // Cpl::Io stream instance for the debug UART
+--- SeggerSysView/ // Run time support for Segger's SysView tool
\--- trace/ // Support for CPL tracing
\--- Output.cpp // Tracing using C library stdio and Cpl::Io streams
Structure
The structure for BSPs that I recommend is minimal because each BSP is
conceptually unique since it is compiler, MCU, and board specific. This
structure is a single Api.h header file that uses the LHeader pattern and
exposes all of the BSP’s public interfaces. An in-depth discussion of the
LHeader pattern can be found in Appendix D, “LHeader and LConfig
Patterns.” However, here is a summary of how I used the LHeader
pattern in conjunction with BSPs.
not a BSP. For example, when migrating from using an evaluation board to
the first in-house designed board, the existing driver source code does not
have to be updated when changing to the in-house board.
• Keep the BSPs small since they are not reuse friendly.
Typically, this means implementing drivers outside of
the BSP whenever possible.
• BSPs are fairly static in nature. That is, after they are
working, they require very little care and feeding. Doing
a lot of refactoring or maintenance on existing BSP
functionality is a potential indication that there are
underlying architecture or design issues.
Bootloader
The discussion so far has omitted any mention of using a bootloader with
the MCU. The reason is that designing and implementing a bootloader
is outside the scope of this book. However, many microcontroller projects
include a bootloader so that the firmware can be upgraded after the device
has been deployed. Conceptually a bootloader does the following:
the owner of the MCU’s vector table. However, the changes are relatively
isolated (e.g., an alternate linker script) and usually do not disrupt the
existing BSP structure.
Summary
The goal of the Board Support Package is to encapsulate the low-level details
of the MCU hardware, the board schematic, and compiler hardware
support into a single layer or component. The design of a BSP should
decouple the concrete implementation from being directly referenced (i.e.,
#include statements) by the drivers and application code that consume
the BSP’s public interfaces. This allows the client source code to be
independent of concrete BSPs. The decoupling of a BSP’s public interfaces
can be done by using the LHeader pattern.
INPUTS
OUTPUTS
• Design reviews
• Code reviews
CHAPTER 15
Drivers
This chapter is about how to design drivers that are decoupled
from a specific hardware platform so that they can be reused on different
microcontrollers. In this context, reuse means reuse across teams,
departments, or your company. I am not advocating designing general-
purpose drivers that work on any and all platforms; just design for the
platforms you are actively using today.
Writing decoupled drivers does not take more effort or time to
implement than a traditional platform-specific driver. This is especially
true once you have implemented a few decoupled drivers. The only extra
effort needed is a mental shift in thinking about what functionality the
driver needs to provide as opposed to getting bogged down in the low-level
hardware details. I am not saying the hardware details don’t matter; they
do. But defining a driver in terms of a specific microcontroller register set
only pushes hardware details into your application’s business logic.
A decoupled driver requires the following:
Binding Times
A Hardware Abstraction Layer is created using late bindings. The general
idea behind late binding is that you want to wait as long as possible
before binding data or functions to names. Here are the four types of name
bindings. However, only the last three are considered late bindings.
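As a rough sketch of the distinction (names invented for illustration, and assuming the usual compile-time/link-time/run-time categories): a macro is bound when the source is compiled, an undefined external function is bound when a particular object file is linked, and a virtual method is bound when the call is made:

```cpp
#include <cassert>

// Compile-time binding: the value is fixed the moment this file is compiled.
#define TX_BUFFER_SIZE 64

// Link-time binding: the driver only declares the function; whichever
// platform-specific object file is linked in supplies the implementation.
extern "C" int platformGetClockMHz();

// Run-time binding: the concrete method is selected through a vtable when
// the call executes, not when the code is compiled or linked.
struct HalPort
{
    virtual int readPin() const { return 0; }
    virtual ~HalPort() = default;
};
struct SimulatedPort : HalPort
{
    int readPin() const override { return 1; }
};

// Stand-in for the "platform" object file (normally a separate .cpp per target)
extern "C" int platformGetClockMHz() { return 100; }
```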
Public Interface
Defining a public interface for a driver is a straightforward task. What
makes it complicated is trying to do it without referencing any underlying
hardware-specific data types. And interacting with the MCU’s registers
(and its SDK) always involves data types of some kind. For example, when
configuring a Pulse Width Modulation (PWM) output signal using the ST
HAL library for the STM32F4x, the MCU requires a pointer to a timer block
(TIM_HandleTypeDef) and a channel number (of type uint32_t). When
using the Arduino framework with the ATSAMD51 MCU, only a simple int
is needed. So how do you define a handle that can be used to configure a
PWM output signal that is platform independent?
#include "colony_map.h"
// Note: no path specified
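The snippet above hints at the LHeader answer: the driver header names the handle type with a macro and defers the concrete definition to a per-project colony_map.h that is found via the compiler's include path. A compressed, single-file sketch of the idea follows (the DriverPwmHandle_T names are invented for illustration; in a real project each commented region is its own header):

```cpp
#include <cassert>

// --- The project's colony_map.h (selected by the include path) ---
// An STM32 project might map the handle to TIM_HandleTypeDef*, while an
// Arduino project maps it to a simple int. Here we simulate the latter:
#define DriverPwmHandle_T_MAP int

// --- Driver/Pwm/Api.h would contain: ---
// #include "colony_map.h"   // Note: no path specified
#define DriverPwmHandle_T DriverPwmHandle_T_MAP

// The driver's public interface is written only in terms of the macro name,
// so the same header compiles unchanged on every platform:
unsigned short pwmStart( DriverPwmHandle_T handle )
{
    return static_cast<unsigned short>( handle );  // illustrative body only
}
```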
Facade
A facade driver design is one where the public interface is defined and
then each supported platform has its own unique implementation. That
is, there is no explicit HAL defined. A simple and effective approach for a
facade design is to use a link-time binding. This involves declaring a set of
functions (i.e., the driver’s public interface) and then having platform-
specific implementations for that set of functions. In this way, each
platform gets its own implementation. The PWM driver in the CPL class
library is an example of an HAL that uses link-time binding. This driver
generates a PWM output signal at a fixed frequency with a variable duty
cycle controlled by the application.
Figure 15-2 illustrates how a client of the PWM driver is decoupled
from the target platform by using link-time bindings.
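A compressed sketch of the facade approach follows (function names are illustrative, not the CPL driver's actual API; in a real source tree the declarations and each implementation live in separate files, and the build scripts select exactly one implementation per target):

```cpp
#include <cassert>

// --- Pwm.h: the public interface; no hardware types appear here ---
void           pwmSetDutyCycle( unsigned channel, unsigned short dutyCycle );
unsigned short pwmGetDutyCycle( unsigned channel );

// --- Pwm_simulated.cpp: one platform-specific implementation. The linker
// binds the names above to whichever such file is included in the build;
// an STM32 build would instead link a file that programs the timer registers.
static unsigned short duty_[4];

void pwmSetDutyCycle( unsigned channel, unsigned short dutyCycle )
{
    duty_[channel % 4] = dutyCycle;
}
unsigned short pwmGetDutyCycle( unsigned channel )
{
    return duty_[channel % 4];
}
```

Because the client only ever sees the declarations, swapping platforms is purely a build-script decision.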
In the following three sections, you will see the implementation of the
PWM functionality for each of these platforms:
Separation of Concerns
The “separation of concerns” approach to driver design is to separate
the business logic of the driver from the hardware details. This involves
creating an explicit HAL interface definition in a header file that is
separate from the driver’s public interface. The HAL interface specifies
basic hardware actions. Or, said another way, the HAL interface should
be limited to encapsulating access to the MCU's registers or SDK function calls.
Figure 15-10 shows a snippet from the HAL header file (located at
src/Driver/Button/Hal.h). Note the following:
¹ https://en.wikipedia.org/wiki/Factory_method_pattern
In the following three sections, you will see the implementation of the
button driver functionality for each of these platforms:
• STM32—See Figures 15-11, 15-12, and 15-13.
LHeader Caveats
At this point, I should note that the LHeader pattern has an implementation
weakness. It breaks down in situations where interface A defers a type
definition using the LHeader pattern, and interface B also defers a type
definition using the LHeader pattern, and interface B has a dependency on
interface A. This use case results in a “cyclic header include scenario,” and
the compile will fail. This problem can be solved by adding an additional
header latch using the header latch symbol defined in the HAL header file to
the platform-specific mappings header file. Figure 15-12 shows an example
of the additional header latch from STM32 mapping header file for the
Button driver, and Appendix D, “LHeader and LConfig Patterns,” provides
additional details.
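A compressed, single-file sketch of the fix follows (symbol and function names are invented; the real symbols are in the CPL source and Appendix D). The platform mapping header wraps its Button-specific mappings in the latch symbol that only the Button HAL header defines, so those mappings are expanded only when they are actually needed, which breaks the include cycle:

```cpp
#include <cassert>

// --- Driver/Button/Hal.h would contain: ---
#define DRIVER_BUTTON_HAL_H_       // the header latch symbol
// #include "colony_map.h"         // pulls in the platform mappings below

// --- The platform mapping header (e.g., the STM32 colony_map.h) ---
// The Button mappings are guarded by the latch symbol; if colony_map.h is
// included on behalf of some other interface first, this block is simply
// skipped and no cyclic include of Button types occurs:
#ifdef DRIVER_BUTTON_HAL_H_
static bool stm32ButtonGetRawPressed() { return true; }   // stand-in for an SDK call
#define driverButtonHalGetRawPressed_MAP stm32ButtonGetRawPressed
#endif

// --- Back in Hal.h: the HAL call resolves through the mapping ---
bool driverButtonHalGetRawPressed()
{
    return driverButtonHalGetRawPressed_MAP();
}
```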
Unit Testing
All drivers should have unit tests. These unit tests are typically manual
unit tests because they are built and executed on a hardware target.² One
advantage of using the separation of concerns paradigm for a driver is
that you can write automated unit tests that run as part of CI builds. For
example, with the Button driver, there are two separate unit tests:
² In my experience, test automation involving target hardware is an exception
because of the effort and expenses involved.
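A rough sketch of the automated flavor follows (all names and the debounce rule are invented, not the CPL Button driver's): because the driver reaches the hardware only through its HAL functions, a test build can link in a scripted mock instead of real GPIO code:

```cpp
#include <cassert>

// Mock HAL: the automated test build links this instead of real GPIO code.
static bool g_mockRawPressed = false;
bool driverButtonHalGetRawPressed() { return g_mockRawPressed; }

// Stand-in for the driver's platform-independent business logic: report a
// press only after the raw input has been stable for three samples.
class ButtonDriver
{
public:
    void sample()
    {
        if ( driverButtonHalGetRawPressed() )
        {
            if ( ++stableCount_ >= 3 ) pressed_ = true;
        }
        else
        {
            stableCount_ = 0;
            pressed_     = false;
        }
    }
    bool isPressed() const { return pressed_; }

private:
    int  stableCount_ = 0;
    bool pressed_     = false;
};
```

The same business-logic code is then re-linked against the real HAL implementation for the manual on-target test.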
Polymorphism
A polymorphic design is similar to a facade design, except that it
uses run-time bindings for selecting the concrete hardware-specific
implementation. A polymorphic design is best suited when using C++.³
The I2C driver is an example of a polymorphic Hardware Abstraction
Layer. It encapsulates an I2C data transfer from an I2C master to an I2C
slave device.
Figure 15-20 is a class diagram of the I2C driver's abstract interface and
concrete child classes.
³ It is possible to implement run-time polymorphism in C. For example, the
original CFront C++ compiler translated the C++ source code into C code and then
passed it to the C compiler. I recommend that you use C++ instead of
hand-crafting the equivalent of the vtables in C.
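A compressed sketch of the run-time binding idea follows (class and method names are invented, not the CPL driver's actual API): clients hold a reference to the abstract interface, and the concrete child class is selected when the object is created:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Abstract I2C master interface: the run-time binding point.
class I2cMaster
{
public:
    virtual bool writeToDevice( uint8_t        device7BitAddr,
                                const uint8_t* data,
                                size_t         numBytes ) = 0;
    virtual ~I2cMaster() = default;
};

// One concrete child class; a real project would have STM32, Arduino, etc.
// siblings. A mock like this one also enables automated unit tests.
class MockI2cMaster : public I2cMaster
{
public:
    std::vector<uint8_t> lastWrite;
    bool writeToDevice( uint8_t, const uint8_t* data, size_t numBytes ) override
    {
        lastWrite.assign( data, data + numBytes );
        return true;
    }
};

// Client code (e.g., an EEPROM driver) depends only on the abstraction:
bool writeControlByte( I2cMaster& bus, uint8_t addr, uint8_t value )
{
    return bus.writeToDevice( addr, &value, 1 );
}
```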
Dos and Don'ts
There is no silver bullet when it comes to driver design. That said,
I recommend the following best practices:
• When designing a new driver, base your design on
your current hardware platform, BSP, and application
needs. This minimizes the amount of glue logic that
is needed for the current project. It also avoids having
to “guesstimate” the nuances of future hardware
platforms and requirements. Don’t spend a lot of effort
in coming up with an all-encompassing interface or
HAL definition. Don’t overdesign your driver, and
don’t include functionality that is not needed by the
application. For example, if your application needs a
UART driver that never needs to send or receive a break
character,⁴ do not include logic for supporting break
characters in your UART driver.
• Always include a start() or initialize() function
even if your initial implementation does not require
it. In my experience, as a driver and the application
mature, you typically end up needing some
initialization logic. I also recommend always including
a stop() or shutdown() function as well.
• If you have experience writing the driver on other
platforms, leverage that knowledge into current
driver design.
⁴ A break character consists of all zeros and must persist for a minimum of 11 bit
times before the next character is received. A break character can be used as an
out-of-band signal, for example, to signal the beginning of a data packet.
Summary
A decoupled driver design allows reuse of drivers across multiple hardware
platforms. A driver is decoupled from the hardware by creating a Hardware
Abstraction Layer (HAL) for its run-time usage. The following late binding
strategies can be used for the construction of the HAL interfaces:
• Facade—Using link-time bindings
INPUTS
OUTPUTS
• Unit tests
• Design reviews
• Code reviews
CHAPTER 16
Release
For most of the construction stage, you are focused on writing and testing
and optimizing your software. And as the product gets better and better
and the bugs get fewer and fewer, you start to have the sense that you’re
almost done. However, even when you finally reach the point when you
can say “the software is done,” there are still some last-mile tasks that need
to be completed in order to get the software to your customers. These
mostly non-coding tasks make up the release activities of the project.
The release stage of the project overlaps the end-of-construction
stage. That is, about the time you start thinking about creating a release
candidate, you should be starting your release activities. Ideally, if you’ve
followed the steps in this cookbook, your release activities will simply
involve collecting, reviewing, and finalizing reports and documentation
that you already have. If not, you’ll get to experience the angst and drama
of a poorly planned release: fighting feature creep, trying to locate licenses,
fighting through installation and update issues, etc. If you find yourself
in the position of struggling with the logistics of releasing your software,
it means you probably “cheated” during the planning and construction
stages and built up technical debt that now has to be retired before
shipping the software. My recommendation is that if you are working with an
Agile methodology, you practice "releasing" the software at the end of
each sprint. That is, when you estimate you have about three sprints left to
finish the project, go through all the release activities as if you were going
to release the product.
When discussing releases, note that all release builds must be formal builds
and have a human-friendly version identifier. Generally, there are four different
types of releases:
across all the stakeholders, which results in there always being one more
software change request. The CCB process (formal or informal) provides
the discipline to stop changes so the software can finally ship.
It should be obvious that the SBOM should be created as you go. That
is, the process should start when external packages are first incorporated
into the code base, not as a “paperwork” item during the release stage.
For non-open-source licensing, purchasing software licenses takes time
and money, and you need to make sure that money for those purchases is
budgeted. Your company’s legal department needs to weigh in on what is
or is not acceptable with respect to proprietary and open source licenses.
Or, said another way, make sure you know, and preferably have some
documentation on, your company’s software licensing policies. Don’t
assume something is okay because it is widely used in or around your
company because nothing is as frustrating as the last-minute scramble to
redesign (and retest) your application because one of your packages has
unacceptable licensing terms.
The SBOM is not difficult to create since it is essentially just a table that
identifies which third-party packages are used along with their pertinent
information. My recommendations for creating and maintaining the
SBOM are as follows:
Anomalies List
By definition, your bug tracking tool is the canonical source for all known
defects. The anomalies list is simply a snapshot in time of the known
defects and issues for a given release. In theory, the anomalies list for any
given release could be extracted from the bug tracking tool, but having
an explicit document that enumerates all of the known defects for a given
release simplifies communications within the cross-functional team and
with the leadership team. When an anomalies list needs to be generated
should be defined by your QMS process; however, as it is a key tool in
determining if a release is ready for early access or GA, you may find
yourself generating the list more frequently at the end of the project.
Release Notes
Internal release notes should be generated every time a formal build is
deployed to anyone outside of the immediate software team. This means
you need release notes for formal builds that go to testers, hardware
engineers, QA, etc. Simply put, the internal release notes summarize what is
and what is not in the release. It is important to remember that when writing
the internal release notes, the target audience is everyone on the
team—not just the software developers. The software lead or the “build
master” is typically responsible for generating the internal release notes.
There should be a line or bullet item for every change (from the
previous release) that is included in the current release; that is, an item
for every pull request that was merged into the release branch.
Internal release notes can be as simple as enumerating all of the work
item tickets and bug fixes—preferably with titles and hyperlinks—that
went into the release. Remember, if you are following the cookbook,
there will be a work item or bug ticket created for each pull request. One
advantage of referencing work or bug tickets is that individual tickets will
• Performance issues
Deployment
In most cases, the embedded software you create is sold with your
company’s custom hardware. This means that deploying your software
requires making the images available to your company’s manufacturing
process. Companies that manufacture physical products typically track
customer-facing releases in a Product Lifecycle Management (PLM) tool
such as Windchill or SAP that is used to manage all of the drawings and
bills of materials for the hardware. With respect to embedded software,
the software images and their respective source files are bill-of-material
line items or sub-assemblies tracked by the PLM tool. Usually, the PLM
tool contains an electronic copy of the release files. As PLM tools have
a very strict process for adding or updating material items, the process
discourages frequent updates, so you don’t want to put every working
release or candidate release into the PLM system—just the alpha, beta, and
gold releases.
These processes are company and PLM tool specific. For example,
at one company I worked for, the PLM tool was Windchill, but since the
software images were zipped snapshots of the source code (i.e., very large
binary files), the electronic copies of the files were formally stored in a
different system and only a cover sheet referencing the storage location
was placed in Windchill.
Typically, the following information is required for each release into
the PLM system:
The same care and due diligence that went into updating the PLM
system for manufacturing with a new release should be applied when
releasing images to the OTA server. The last thing you want is a self-
made crisis of releasing the wrong or bad software to the field. Since
almost all embedded software interacts with the physical world, a
bad release can have negative real-world consequences for a period
of time before a fixed release can be deployed (e.g., no heating for an
entire building for days or weeks in the middle of winter). Or, worst
case, the OTA release “bricks” the hardware, requiring field service
personnel to resurrect the device.
QMS Deliverables
While the Quality Management System (QMS) deliverables do not
technically gate the building and testing of a release, the required
processes can delay shipping the software to customers. What is involved
in the QMS deliverables is obviously specific to your company’s defined
QMS processes. On one end of the spectrum, there are startup companies
that have no QMS processes, and all that matters is shipping the software.
On the other end, there are the regulated industries, such as medical
devices, that will have quite verbose QMS processes. If you have no, or
minimal, QMS processes defined, I recommend the following process be
followed and the following artifacts be generated for each gold release.
• System architecture
• Software architecture
• CI setup
² Archiving a build server can take many forms. For example, if the build server is
a VM, then creating a backup or snapshot of the VM is sufficient. If the build server is
a Docker container, then backing up the Docker file or Docker image is sufficient.
For a physical box, creating images of the box (using tools like Ghost, AOMEI,
or True Image) could be an option. Just make sure that the tool you use has the
ability to restore your image to a machine that has different hardware than the
original box.
Summary
The start of the release stage overlaps the end of the construction phase
and finishes when a gold release is approved. The release stage has several
deliverables in addition to the release software images. Ideally, the bulk of
the effort during the release stage is focused on collecting the necessary
artifacts and quality documents, and not the logistics and actual work of
creating and completing them.
Ensure that you have a functioning Change Control Board in place for
the end-game sprints in order to prevent feature creep and never-ending
loops of just-one-more-change and to reduce your regression testing
efforts.
Finally, put all end-customer release images and source code into
your company’s PLM system to be included as part of the assembled end
product.
INPUTS
OUTPUTS
• Software architecture
• Doxygen output