Test-Driven Development
for Embedded Software
Manual for industrial developers and academic instructors.
Piet Cordemans, Sille Van Landschoot, Jeroen Boydens

P. Cordemans and S. Van Landschoot are scientific staff members at KHBO funded by IWT-090191.
J. Boydens is a professor in Software Engineering at KHBO.
J. Boydens is an affiliated researcher with the Department of Computer Science,
K.U.Leuven - Celestijnenlaan 200 A, B-3001 Leuven, Belgium.
KHBO Dept. Industrial Sciences & Technology, Zeedijk 101, B-8400 Oostende, Belgium
version 1.0
http://ep.khbo.be/TDD4ES
User committee:
DSP Valley, E.D.&A., FMTC, K.d.G., K.U.Leuven, Marelec, Newtec, Q-star test,
This work is licensed under the Creative Commons Attribution-NonCommercial License. To view
a copy of this license, visit http://creativecommons.org/licenses/by-nc/3.0/
Preface
This manual was produced in the context of the IWT Tetra research project 090191: TDD4ES - Test
Driven Development for Embedded Software, which started October 2009 and ended September 2011.
Tetra stands for Technology Transfer, which means that research was specifically oriented towards
applicability in the industry. To achieve this goal, the project was co-financed by a number of industrial
partners who became part of the user committee. Other members included scientific and valorization
partners, respectively supporting research and dissemination of results. User committee meetings were
held regularly to assess preliminary results and provide valuable feedback.
The strategies, patterns and methodologies described in this manual have been applied in three
case studies as a proof of concept. One of these case studies was delivered by a member of the user
committee, who provided an industrial embedded hardware system, as well as a legacy software code
base. Note that the project and this manual specifically focus on the practice of TDD. Other Agile[55]
or eXtreme Programming[15] practices, such as Continuous Integration and pair programming, were
not considered.
These practices might prove their worth, especially when combined with TDD or in the context of embedded development. For more detailed information, references are provided in the bibliography.
Furthermore, nearing the end of the TDD4ES project, James Grenning published a book on the
subject of Test-Driven Development for Embedded C[36]. This book provides a lot of information on
TDD for embedded software, which is quite similar to our own experiences with the subject. Therefore, the choice
has been made to give one specific subject, namely TDD for C, less attention in this manual. Instead,
for those who are interested in the C-specific issues of TDD, Grenning's book is recommended.
Nevertheless, this manual covers a number of topics which are novel ideas. So, even for those who are only
developing in C, a number of ideas in this manual might prove to be interesting.
Although most examples in this manual are written in C++, they should be simple enough to be comprehensible with only a minimal understanding of object-oriented features.
For those interested in the general motivation, start with chapter 1, Introduction. For those who want to
know about Test-Driven Development, start with chapter 2, Test-Driven Development. For those who know
TDD and want to start immediately, begin with chapter 3, Test-Driven Development for embedded software.
Finally, for those interested in the advanced concepts, chapters 4 and 5 deal with advanced topics, such
as legacy code and patterns.
Contents

1 Introduction
  1.1 Why Test-Driven Development?
  1.2 Automated testing
  1.3 Incremental development
  1.4 Further reading
2 Test-Driven Development
  2.1 TDD mantra
    2.1.1 Create test
    2.1.2 Red bar
    2.1.3 Green bar
    2.1.4 Refactor
  2.2 Advantages
  2.3 Limitations
  2.4 Unit testing frameworks
  2.5 Test coverage
  2.6 Further reading
3 Test-Driven Development for embedded software
  3.1 Embedded constraints
  3.2 Test on target
    3.2.1 Implementation
    3.2.2 Process
    3.2.3 Code example
  3.3 Test on host
    3.3.1 Implementation
    3.3.2 Mock replacement
    3.3.3 Process
    3.3.4 Code example
  3.4 Remote testing
    3.4.1 Implementation
    3.4.2 Process
    3.4.3 Code example
    3.4.4 Remote prototyping
    3.4.5 Code example
  3.5 Overview
  3.6 Further reading
4 Legacy code
  4.1 Test on target - host
  4.2
  4.3 Further reading
5 Patterns
  5.1 3-tier TDD
    5.1.1 Hardware-independent code
    5.1.2 Hardware-aware code
    5.1.3 Hardware-specific code
  5.2 Testing patterns
  5.3 Embedded patterns
  5.4 Further reading
6 In conclusion
  6.1 Future work
  6.2 Conclusion
  6.3 Summary of contributions
Chapter 1
Introduction
The relative cost to fix an error exponentially increases in a later phase of the project.
Barry Boehm[16]
The term embedded system covers a wide range of electronic applications. These can be simple systems such as a controller for a microwave oven, or complex systems like a digital camera. However, all
these systems have one thing in common, which is that they are designed with one specific application
in mind. The embedded system looks like a small computer with dedicated hardware, but the main
difference with a computer system is that its goal does not change during its lifetime. An embedded
system for a digital camera will always remain the system for a digital camera, it will not change into a
system for a microwave oven. It should be noted that multiple embedded systems might be combined
in one appliance. A cell phone could for instance integrate an mp3-player, as well as a digital camera.
These underlying embedded systems can be fully integrated, or kept completely separate.
Embedded systems have a number of key properties: (1) their restricted price, (2) their restricted
size, (3) their required performance and (4) their restricted power consumption. More advanced
properties are their (5) reactiveness to events and their (6) time-critical behavior. These properties
make embedded system design a very specific co-design of hardware and software. Once the hardware
is deployed, correcting the software becomes difficult, as firmware updates or patches are hard or even
impossible to apply. Thorough testing of embedded software is essential to assure the desired functionality
has been achieved[17]. Even though testing does not prove the absence of bugs[21], it can confirm certain
expectations, which boosts the confidence in the quality of the code.
As indicated, testing is essential for the quality of embedded software; however, the current techniques are ad hoc, end-to-end testing and debugging, focusing only on the current issue[42]. Also typical for embedded system development, testing is postponed until after the integration of software and hardware[57].
1.1 Why Test-Driven Development?
As embedded systems are becoming more and more complex, the importance of their software
component rises. Furthermore, due to the definite deployment of embedded software once it is released,
it is unaffordable to deliver faulty software. Thorough testing is essential to minimize software bugs.
The design of embedded software is strongly dependent on the underlying hardware. Co-design of
hardware and software is essential for a successful embedded system design. At design time, the hardware
might not always be available, so software testing is often considered to be impossible. Therefore testing
is mostly postponed until after hardware development, and testing is typically limited to debugging or
ad-hoc testing. Moreover, as it is the last phase in the process, it might be shortened when the deadline
is nearing. Integrating tests from the start of the development process is essential for a meticulous
testing of the code. In fact, these tests can drive the development of software, hence Test-Driven
Development.
It is crucial for embedded systems that they are tested very thoroughly, since the cost of repair grows
exponentially once the system is taken into production, as illustrated in figure 1.1, which depicts the law of
Boehm. However, the embedded system can only be tested once the complete development process is
finished. Most embedded systems are developed using the waterfall process for their software. First the
user requirements are gathered. Next, these requirements are translated into functional specifications.
Once all specifications are formally written down, the global technical design phase can start. After
the global design comes the detailed technical design, as a basis for the next phase of programming.
Finally, the system can be tested. If the hardware is still not available at this point, simulation tools
and instruction set simulators are used to verify the behavior of the software component. Thorough
testing can only be done when the hardware is fully configured. In the current strategy for developing
embedded systems, the testing phase is generally done manually. This ad-hoc testing is mostly heuristic
and only focuses on one specific scenario. At this point debugging facilities are very handy to inspect
the inner workings of the software component under test. As noted, the testing is done late in
the development cycle, with all due disadvantages. When we want to start testing as early as possible,
a number of problems arise. One problem is the unavailability of the hardware; another is the strong
dependency of the software on that hardware.
Figure 1.1: The law of Boehm: the cost of repair (on a logarithmic scale) grows exponentially over the lifecycle phases specification, design, programming and production.
As illustrated in figure 1.2, the development phase of TDD takes longer than that of conventional
development. A reason for this is the extra effort it takes to create tests during development. However, a
quality assurance phase is necessary when creating software using conventional development processes.
During quality assurance, bugs and misconceptions are fixed in the software. This phase is generally
known as the beta test phase. This quality assurance phase is spread out during TDD, hence the
longer development time. The investment in testing early on in TDD results in a non-negligible life
cycle benefit.
Figure 1.2: Investment and net return of conventional development versus TDD: the quality assurance phase is spread out over TDD development, and the early investment in testing yields a life cycle benefit (after Müller & Padberg, 2003).
1.2 Automated testing
Testing manually is time-consuming and error-prone. Test-Driven Development demands that tests are
run frequently during development, so test automation is indispensable. Without automated
tests, testing the code will occur less frequently, which results in larger chunks of code being tested at once,
which will inevitably contain more bugs, slowing the process even further. Eventually this will drag the
development cycle to a halt. An automated test suite is a necessity in TDD; however, automating tests
in an embedded development environment is not trivial, primarily because hardware dependencies are
fundamental to embedded software.
Developing using the TDD strategy should be done with caution. The broken window syndrome[38]
states that one must be careful with broken tests. The basic idea of broken windows comes from windows
in an abandoned building: once one test starts to fail, shortly after the number of failing tests will grow
gradually, ultimately leading to all tests being broken. Tests can break for different reasons: the expected
behavior has changed, or the implementation of the business code has changed. In the former case the
developer adjusts the test code to correctly test the updated expected behavior. In the latter case the
implemented business code does not support the original expected behavior. To solve this problem, the
business code is adjusted so the test succeeds.
Another potential pitfall is test cancer[28]. It is a bad habit to leave out a failing test from the test
scenario. One could think that by leaving out this test, the scenario now completely succeeds. But
when the failing test is not corrected, more and more tests will fail and be left out from the scenario.
This way the failing tests spread like cancer through the developed code.
1.3 Incremental development

To support the development of modern embedded systems, the waterfall process and other sequential
models fail to deliver. Incremental development is a necessity to avoid integration issues early on. As
indicated by the law of Boehm, earlier detection of problems reduces their costs. Acknowledging this fact,
traditional software development methodologies have shifted from waterfall to Agile practices.
Engineering in general lends itself towards incremental development, hence software development
follows this same lead. But since software engineering is an extremely agile discipline, other development
models emerge in the embedded world. On the one hand, this is because of the ease of changing
and duplicating software. On the other hand, a fast feedback cycle, supported by automated tests,
can be provided, which is needed to guide the development process.
However, regardless of complications, three benefits are obtained when incrementally developing
embedded software. First, testing is introduced early on. Next, developing incrementally pushes the design
towards a thorough modularization. Finally, as the hardware platform will inevitably change, hardware
abstraction becomes a necessity. This improves software reusability across multiple platforms and across
different iterations.
The software development process of Test-Driven Development (TDD) finds its origin in general
software development. TDD's methodology originated in the late eighties, but since the general acceptance of cyclic development processes such as eXtreme Programming (XP), SCRUM and the Unified
Process (UP), it has received more attention.
1.4 Further reading
George and Williams[30] have conducted research on the effects of TDD on development time and
test coverage. Siniaalto[56] provides an overview of the experiments regarding TDD and productivity.
Nagappan[50] describes the effects of TDD in four industrial case studies. Note that research on the
alleged benefits of TDD for embedded software is limited to several experience reports, such as those
written by Schooenderwoert[52, 53] and Greene[32].
Chapter 2
Test-Driven Development
By writing a test before implementing the item under test, attention is focused on the
item's interface and observable behavior.
Kent Beck[14]
Test-Driven Development (TDD) is a fast-paced incremental software development strategy, based
on automated unit tests. First, this section describes the core of TDD (2.1), using a simple example.
Next, the advantages (2.2) of developing by TDD are discussed and the limitations (2.3) of the strategy
are indicated. Finally, an overview of unit testing frameworks (2.4) is given.
2.1 TDD mantra
Test-Driven Development consists of a number of steps, sometimes called the TDD mantra. In TDD,
before a feature is implemented, a test is written to describe the intended behavior of the feature.
Next, a minimal implementation is provided to get the test passing. Once the test passes, code can
be refactored. Refactoring is restructuring code without altering its external behavior or adding new
features to it. When the quality of code meets an acceptable level, the cycle starts over again, as visually
represented in figure 2.1. Red and green refer to the colors sometimes used in a unit test framework,
respectively indicating test failure and success.
Figure 2.1: The TDD cycle: create test, red bar, green bar, refactor.
TDD reverses the conventional consecutive order of steps, as tests should be written before the code
itself is written. Starting with a failing test gives an indication that the scope of the test encompasses
new and unimplemented behavior. Moreover, if no production code is written without an accompanying
test, one can assure that most of the code will be covered by tests.
Also fundamental to the concept is that every step is supported by executing a suite of automated
unit tests. These tests are executed to detect regression faults either due to adding functionality or
refactoring code.
TEST(KelvinTest)
{
    Temperature myTemperature;
    double aValue = 273.15;
    myTemperature.setKelvin(aValue);
    CHECK_EQUAL(aValue, myTemperature.getKelvin());
}
When a test is created, it might call non-existing functions, which results in compilation errors. Subsequently, these compilation errors must be dealt with before proceeding. However, one should only
implement the minimum amount of code needed to get the system to compile. Effectively, TDD
states that no code is ever written without a covering test. Writing more code than needed to proceed to
the following step should be avoided. In effect, a programmer should not try to anticipate
additional features.
In order to continue with the Temperature example, listing 2.2 provides a skeleton of code to lead
to a failing KelvinTest. Note that no implementation is given to the methods.
class Temperature
{
public:
    Temperature();
    virtual ~Temperature();
    double getKelvin() const;
    void setKelvin(double k);
};

double Temperature::getKelvin() const
{
    return 0;
}

void Temperature::setKelvin(double k)
{
}
Once the test and accompanying code get through compilation it becomes possible to run the test
and see it failing. A failing test indicates that a new feature is about to be implemented. Writing a failing test
also proves that the test can fail. If a test never fails, it is worthless and might lead to false
assumptions about the code under test. The code already provided should be enough to proceed to the
next step.
Listing 2.3: Providing a fake result in order to pass the first test
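The fake can be as small as hard-coding the value expected by KelvinTest; a minimal sketch of such a fake (the original listing may differ) is:

double Temperature::getKelvin() const
{
    return 273.15;   /* fake it: return the constant the first test expects */
}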
Adding a second test with a different value then yields a failing test, which proves the current implementation is not sufficient. This example might be too simple to
prove its use, but in a realistic development process one encounters functionality which might be too
complicated or unclear at that moment to directly write a good implementation. Faking the first test
and subsequently writing a second test might provide more information on the problem, ultimately
leading to a more realistic solution.
However, faking the result is a very small step in the TDD cycle. When possible, one can immediately provide the obvious implementation. Continuing the example, listing 2.4 adds a private field
kelvin and implements the accessor and mutator accordingly.
Listing 2.4: Providing the actual implementation in order to pass the first test
class Temperature
{
public:
    Temperature();
    virtual ~Temperature();
    double getKelvin() const;
    void setKelvin(double k);
private:
    double kelvin;
};

double Temperature::getKelvin() const
{
    return kelvin;
}

void Temperature::setKelvin(double k)
{
    kelvin = k;
}
Writing the obvious implementation is the ideal development strategy, but sometimes it is impossible to just do that. In that situation, TDD allows changing gears and working according to the fake-it or
some other strategy. In fact, when starting to implement a feature and failing to immediately provide
the correct code, one can easily erase previous changes and start over, taking smaller steps this time.
It is important that code written in this phase only deals with getting the test passing.
This means no assumptions should be made towards future features, as future features should be
covered by writing new tests.
At this point, one can start over, picking a new feature to implement, or refactor the current
implementation should it be necessary. For instance, the Temperature class can be extended with the
feature of setting and retrieving the temperature in Celsius. Following the TDD strategy, a test is
created first, next it is made to compile and proven to fail. Finally, a minimal implementation is provided
to get the test passing, ultimately leading to the following code (for conciseness, listing 2.5 does not show
the Kelvin related code).
TEST(CelsiusTest)
{
    Temperature myTemperature;
    double aValue = 18.5;
    myTemperature.setCelsius(aValue);
    CHECK_CLOSE(aValue, myTemperature.getCelsius(), 0.01);
}

class Temperature
{
public:
    Temperature();
    virtual ~Temperature();
    double getCelsius() const;
    void setCelsius(double c);
private:
    double kelvin;
};

double Temperature::getCelsius() const
{
    return kelvin - 273.15;
}

void Temperature::setCelsius(double c)
{
    kelvin = c + 273.15;
}
Now the code works, as proven by running the tests and seeing them pass, thus reaching the so-called
green bar state. However, internally duplication has shown up. The constant 273.15, which is
the offset between Celsius and Kelvin, occurs twice and does not explain its intent. Therefore, before
creating a new test, refactoring is in order.
2.1.4 Refactor
Refactoring is changing code to improve readability, remove duplication or improve software design, without
changing or adding behavior. This becomes necessary because providing the minimal implementation will
lead to code that is suboptimal, effectively less manageable. Moreover, as TDD focuses on one
feature at a time during the first steps of its cycle, one could lose the general overview. Maintaining
focus on a single feature has its merits, but at some point one needs to tend to the quality of the
software.
Conveniently, regularly running the automated unit tests will ensure that no behavior is altered while
refactoring. Running these tests frequently will indicate when refactoring breaks the system, providing
important feedback.
In the thermometer example, listing 2.6, the hard-coded value is replaced by a static const. Rerunning the tests proves that the behavior remains the same, as expected.
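A sketch of such a refactoring, assuming a placeholder name for the constant (the name in the original listing 2.6 may differ), could be:

class Temperature
{
public:
    /* ... interface as before ... */
private:
    static const double KELVIN_OFFSET;   /* placeholder name for the 273.15 constant */
    double kelvin;
};

const double Temperature::KELVIN_OFFSET = 273.15;

double Temperature::getCelsius() const
{
    return kelvin - KELVIN_OFFSET;
}

void Temperature::setCelsius(double c)
{
    kelvin = c + KELVIN_OFFSET;
}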
Fundamental to the concept of TDD is that refactoring and adding new behavior are strictly
separated activities. On the one hand, when refactoring breaks a test, the problem should be solved
quickly or the changes must be reverted. On the other hand, when adding new functionality, the focus
should stay on the current issue, only conducting the refactorings when all tests are passing. Refactoring
can and should be applied to the tests themselves as well. In that situation the implementation stays
the same, which reassures that the test does not change its own scope.
2.2 Advantages

The main advantages of TDD are those which result from incrementally developing an automated test suite. TDD also allows for
a steady and measurable progression. Finally, TDD forces a programmer to focus on three important
aspects.
As TDD imposes frequently running a test suite, four particular advantages result from it. First, the
tests provide a safety net when refactoring, alerting the programmer when a refactoring went wrong,
effectively altering the behavior of the software. Next, running tests frequently will detect regression when
code for a new feature interferes with other functionality. Furthermore, when encountering a bug later
on (TDD cannot guarantee the absence of bugs in code), a test can be written to detect the bug. This
test should fail first, so one can be sure it tests the code where the bug resides. After that, making the
test pass will fix the bug and leave a test in place to detect regression. Moreover, tests will
ensure software modules can run in isolation, which improves their reusability. Finally, a test suite will
indicate the state of the code, and when all tests are passing, programmers can be more confident in
their code.
Next to the automated test suite, TDD also allows for a development rate which is steady and
measurable. Each feature can be covered by one or more tests. When the tests are passing, it indicates
that the feature is implemented. Moreover, it becomes possible to adjust the development rate: it can go
fast when the implementation is obvious, or slower when it becomes difficult. Either way, progression is
assured.
Finally, TDD is attributed with putting the focus on three fundamental issues. First, focus is placed on
the current issue, which ensures that a programmer can concentrate on one thing at a time. Next, TDD
puts the focus on the interface and external behavior of software, rather than its implementation. By
testing its own software, TDD forces a programmer to think about how software functionality will be
offered to the external world. This resembles Design by Contract, where a software module is approached
by a test case instead of formal assertions. Lastly, TDD moves the focus from debugging code to testing.
When a problem arises, one can revert to an old state, write a new test concerning an assumption and
see if it holds. This is a more effective way of working as opposed to relying on a debugger.
2.3 Limitations

TDD has a number of imperfections, which mainly concern the overhead introduced by testing, the
thoroughness of testing and the difficulties of automating particularly hard-to-test code.
Writing tests covering all development code doubles the amount of code that needs to be written.
Moreover, the tests that are written need to be maintained just like production code. Furthermore,
setting up a test environment might require additional effort, especially when multiple platforms are
targeted.
Next, a critical remark has to be made on the effectiveness of the tests written in TDD. First, they
are written by the same person who writes the code under test. This tends to result in narrowly
focused tests, which only expose problems known to the programmer. In effect, having a large
suite of unit tests does not take away the need for integration and system tests. On the other hand,
code coverage is not guaranteed. It is the responsibility of the programmer to diverge from the happy
path and also test corner cases. Additionally, tests for TDD specifically focus on black box unit testing,
because these tests tend to be less brittle than tests which also test the internals of a module. However,
for functional code coverage, glass box tests are also necessary. These are focused towards
a specific implementation and must therefore be changed each time the internals of the module
change; hence they are called brittle and incur an additional overhead on test maintenance.
Lastly, TDD is specifically effective for testing library code. This is code which is not directly involved
with the outside world, for instance user interfaces, databases or hardware. However, when developing
code related to the outside world, one has to fall back on software mocks. This introduces an additional
overhead, as well as assumptions on how the outside world will react. Therefore it becomes vital to
do some extra tests which verify these assumptions.
2.4 Unit testing frameworks

Most unit testing frameworks are based upon an archetypal framework known as xUnit, from which various ports exist, like JUnit for Java
and CppUnit for C++. In fact, for C and C++ more than 40 ports exist to date and most of them are
open source. Regardless of the specific implementation, most of these share a common structure,
which is shown in figure 2.2.
Figure 2.2: Common structure of an xUnit-style framework: a library providing assertions, and a test runner that executes the unit tests of a suite with setup and teardown, and reports the results.
On the one hand, the library mostly provides some specific assertions, like checking equality for various types. Optionally, it might
also check for exceptions, timing, memory leaks, etc. On the other hand, the test runner calls the unit
tests, setup, teardown and reports to the programmer. Setup and teardown in combination with a test
is called a test fixture. First, setup provides the necessary environment for the unit test to execute.
After test execution, teardown cleans up the environment. Together they guarantee test isolation.
Instead of halting execution when it encounters a failing assertion, the test
runner will gracefully emit a message which contains valuable information about the failed assertion. That
way all tests can run and a report is composed of failing and passing tests. For organizational reasons,
tests might be grouped into suites, which can be executed independently.
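As an illustration, a fixture in the style of UnitTest++, which the earlier listings resemble, groups setup and teardown around the tests; the suite, fixture and test names below are made up for this sketch:

SUITE(TemperatureSuite)
{
    struct TemperatureFixture
    {
        TemperatureFixture()  { /* setup: runs before each test in the fixture */ }
        ~TemperatureFixture() { /* teardown: runs after each test */ }
        Temperature myTemperature;
    };

    TEST_FIXTURE(TemperatureFixture, SetAndGetKelvin)
    {
        myTemperature.setKelvin(300.0);
        CHECK_EQUAL(300.0, myTemperature.getKelvin());
    }
}

The fixture's constructor and destructor play the role of setup and teardown, so each test starts from a freshly constructed Temperature object.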
Choosing a unit test framework depends on three criteria.

Portability. Considering the different environments for embedded system applications, portability
and adaptability are a main requisite for a unit test framework. It should have a small memory
footprint, allow its output to be adapted easily and not rely on many libraries.

Overhead of writing a new test. Writing tests in TDD is a common occurrence and therefore
it should be made easy to do so. However, some frameworks demand a lot of boilerplate code,
especially when it is obligatory to register each test.

Minor features, such as timing assert functionality, memory leak detection and handling of crashes
introduced by the code under test, are not essential, but can provide a nice addition to the
framework. Furthermore, some unit testing frameworks can be extended with a complementary
mocking framework, which facilitates the creation of mocks.

Deciding on a unit test framework is a matter of the specifications of the target platform. An incomplete
and concise overview of possible candidates is given below.
MinUnit[6] is the smallest C framework possible. It consists of two C macros, which provide
a minimal runner implementation and one assertion. This framework can be ported anywhere,
but it needs to be extended to be of any use and it takes much boilerplate code to implement
the tests.

Unity[8] is similar to Embunit and additionally contains a lot of embedded specific assertions. It
is also part of a Rake-based tool suite, which includes a mocking framework and code generation
tools written in Ruby to deal with boilerplate code.

UnitTest++[44] is C++ based and can be ported to all but the smallest embedded platforms.
Usability of the framework is paramount, but it requires some work to adapt the framework to
specific needs.

CppUTest[1] is one of the latest C++ testing frameworks; it also has a complementary mocking
framework, called CppUMock.

GoogleTest[4] is the most full-blown C++ unit test framework to date. It provides integration
with its mocking framework, GoogleMock. However, it is not specifically targeted at embedded
systems and is not easily ported.
2.5 Test coverage

As mentioned in section 2.3, unit tests written in the TDD process cannot replace the testing phase.
Nevertheless, they may complement and reduce the amount of testing, when the following remarks are
considered.
First, unit tests will only cover as much as the programmer deemed necessary. Corner cases tend
to be untested, as they will mostly cover redundant paths through the code. Redundant tests,
which would cover the same path with different values, are prohibited by the rule that a test should fail
first. This rule is stated with good reason, as redundant tests tend to lead to multiple failing tests
if a regression is introduced, hence obfuscating the bug. Testing corner case values should be done
separately from the activity of programming according to TDD. An extra test suite, which is not part
of the development cycle, allows for a minimal effort to deal with corner case values. Should one of
these tests detect a bug, the test can easily be migrated to the TDD test suite to fix the problem and
detect regression.
On the other hand, programmers become responsible for adhering strictly to the rules of TDD and
only implementing the minimum of code necessary to get a passing test. Especially in conditional code,
one could easily introduce extra untested cases, for instance when an if statement is written while no
test exercises the corresponding else branch.
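A small, hypothetical illustration: the single test mentioned in the comment below drives the minimal implementation, yet nothing ever exercises the else branch, so its behavior remains unverified until a second test is added.

// The only test so far: CHECK_EQUAL(100.0, limitTemperature(150.0));
double limitTemperature(double requested)
{
    if (requested > 100.0)
        return 100.0;
    else
        return requested;   // untested branch: no test uses a value below the limit
}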
2.6 Further reading

An extensive introduction to TDD is given in the seminal book by Kent Beck, Test-Driven Development: By Example[14]. The F.I.R.S.T. acronym was coined by Robert Martin[45]. Refactoring[25]
gives an overview of a great number of refactorings. An introduction to Design by Contract is given
by Mitchell and McKim[49]. Hamill[37] provides a language independent overview of unit test frameworks, while Llopis[43] has given an overview of some older C++ unit test frameworks.
17
Chapter 3
Test-Driven Development for
embedded software
Quality is free, but only to those who are willing to pay for it.
Tom DeMarco[20]
Ideally, Test-Driven Development is used to develop code which does not have any external dependencies. This kind of code suits TDD well, as it can be developed fast, in isolation, and does not
require a complicated setup. However, when dealing with embedded software, the embedded environment complicates development. Four typical constraints influence embedded software development and
have their effect on TDD. To deal with these issues, three strategies have been defined, which tackle
one or more of these constraints. Each of these strategies leads to a specific setup and influences the
software development process. However, none of these strategies is the ideal solution and typically
a choice needs to be made depending on the platform and type of application.
3.1 Embedded constraints

Embedded systems encompass a range of electronic systems, which only have a microprocessor in
common. These embedded platforms might range from a simple 6-pin, 8-bit microcontroller to a
complex system-on-chip. Nevertheless, they share a number of
characteristics. First, the development platform differs from the execution
platform, also known as host and target respectively. Next, all embedded systems are designed to be
cost-effective, resulting in limited memory and processing power. Finally, at least a part of the software
on an embedded system is closely related to the hardware.
1) Development speed. The fast feedback cycle of TDD
results in frequently compiling and running tests. However, when the target for test execution is not
the same as the host for developing software, a delay is introduced into development. For instance,
this is the time to flash the embedded memory and transmit test data back to the host machine.
Considering that a cycle of TDD minimally consists of two test runs and will likely take several
more, this delay becomes a bottleneck in development according to TDD. A considerable delay will
result in running the test suite less frequently, which in turn results in taking larger steps in development.
Moreover, this will introduce more failures, leading to more delays, which in turn will further reduce the
number of test runs, and so on.
Furthermore, due to the specific nature of embedded systems, software is developed concurrently
with hardware. This implies that hardware might be (partially) unavailable during software development. As TDD is a fast iterative cycle, delays introduced in co-designing the embedded system should
not interrupt software development. In TDD's case, both the development and testing platform should
remain available.
2) Memory footprint. Memory is typically scarce on an
embedded system. Rather than solely the program code residing in target memory, the tests and the
testing framework are also added. This results in at least doubling the memory footprint needed.
3) Cross-compilation issues. At first sight,
developing and testing on a host system solves the previously described problems. However, the target
platform will differ from the host system, either in processor architecture or in build tool chain. These
differences could lead to incompatibilities between the host and target build. Comparable to other bugs,
an incompatibility has a less significant impact should it be detected early on. In
fact, building portable software is a merit on its own, as software migration between target platforms
improves code reuse.
4) Hardware dependencies. Dependencies on the hardware complicate the automation of tests.
Furthermore, hardware might not be available during software development. Regardless of the reason,
in order to successfully program according to TDD, tests need to run frequently. This implies that executing tests should not depend on the target platform. Finally, in order to effectively use an external
dependency in a test, setup and teardown will get considerably more complicated.
3.2 Test on target

In the Test on target strategy, TDD issues raised by the target platform are not dealt with. Nevertheless, Test on target is a fundamental strategy as a means of verification. First, executing tests on
target delivers feedback as part of an on-host development strategy. Moreover, during the development
of system, integration or real-time tests, the effort in mocking specific hardware aspects is too labor
intensive. Finally, writing validation tests when adopting TDD in a legacy code based system provides
a self-validating, unambiguous way to verify existing behavior.
3.2.1 Implementation

Fundamental to Test on target is the choice of a unit testing framework that fits within the constraints of the target platform.
Secondly, some specific on-target test functionalities, like timed asserts or memory leak detection, are
interesting features to include. Finally, it is important to consider the ease of adapting the framework
when no standard output is available.
Tests are placed in the program memory of the target system, alongside the code under test itself.
Generally, the test report is transmitted to the host system, to review the results of the test run.
However, on extremely limited targets it is even possible to indicate passing or failing tests with
a single LED. Yet this implies that all other debug information is lost and therefore this should be
considered as a final resort on very limited embedded systems.
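How the report is rerouted depends entirely on the framework at hand; as a rough sketch, assuming the framework lets a user-supplied character output function be plugged in, and using made-up register names and addresses:

/* Hypothetical UART registers; real addresses depend on the target. */
#define UART0_STATUS (*(volatile unsigned int *)0x4000C014)
#define UART0_DATA   (*(volatile unsigned int *)0x4000C000)
#define TX_READY     (1u << 5)

/* Route each character of the test report over the serial line to the host. */
void reportChar(char c)
{
    while ((UART0_STATUS & TX_READY) == 0)
        ;                       /* wait until the transmit buffer is free */
    UART0_DATA = (unsigned int)c;
}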
Figure 3.1: Test on target: the tests and the software under development are uploaded from the development host to the target.
3.2.2 Process

Test on target suffers from slow upload and execution cycles on the target. Still, it complements embedded TDD in three typical situations.
First, it extends the regular TDD cycle on host, in order to detect cross-platform issues, which
is shown in the embedded TDD cycle, figure 3.2. Complementary to the TDD cycle on host, three
additional steps are taken to discover incompatibilities between host and target. First, after tests are
passing on the host system, the target compiler is invoked to statically detect compile-time errors.
Next, once a day, if all compile-time errors are resolved, the automated on-target tests are executed.
Finally, every few days, provided that all automated tests are passing, manual system or acceptance
tests are done. Note that the time indications depend on an equilibrium between finding cross-platform
issues early and the overhead of these additional verification steps.
Figure 3.2: The embedded TDD cycle (after J. Grenning, 2004). On the host: create test, red bar, green bar, refactor (a cycle of roughly 10 minutes). On the target: compile for target and fix (every few hours), run the unit tests on target and fix (daily), manual tests and fix (every few days).
Second, Test on target is applied for functionality that cannot be meaningfully tested on the host.
For instance, memory management operations, real-time execution, on-target library functionality and
IO-bound driver functions are impossible to test accurately on a host system. In these situations,
forcing TDD on host will only delay development, as mock implementations are needed to solve some
of these cases, resulting in tests that only test the mock.
TDD should only be applied to software that is useful to test. When external software is encountered,
minimize, isolate and consolidate its behavior.
Finally, Test on target is valuable when introducing TDD in a legacy code base.
Changing existing software without tests giving feedback on its behavior is undesirable. After all, this
is the main reason to introduce TDD in the first place. However, chances are that legacy software does
not have an accompanying test suite. In that case, a set of validation tests
capturing the system's fundamental behavior is essential to safely conduct the necessary changes.
TEST(RepeatButtonTest)
{
    Button *button = new Button(&IOPIN0, 7);

    button->setCurrentState(RELEASED);
    CHECK(button->getState() == RELEASED);

    button->setCurrentState(PRESSED);
    CHECK(button->getState() == PRESSED);

    button->setCurrentState(RELEASED);
    button->setCurrentState(PRESSED);
    CHECK(button->getState() == REPEAT);

    delete button;
}
This test creates a button object and checks whether its state logic functions correctly,
namely that two consecutive high states should result in REPEAT. Now, one way to test a button is to
press it repeatedly and see what happens. Yet this requires manual interaction, and it is not feasible to
manually test the button every time a code change is made. However, events related to hardware can
be automated in software. In this case an additional method is added to the button class,
setCurrentState, which allows pressing and releasing the button in software.
Two remarks are generally put forward when adding methods for testing purposes. On the one
hand, these methods will litter production code. A possible solution is to inherit from the
original class and add these methods in a test subclass (a). On the other hand, when a hardware event
is mocked by some software, that software might contain bugs of its own. Furthermore, there is no guarantee that
the mock software is a good representation of the hardware event it is replacing. Finally, is it the actual
code under test or rather the mock code that is tested this way?
These remarks indicate that manual testing is never ruled out entirely. In the case of automating
tests and adding software for this purpose, a general rule of thumb is to test both manually and
automatically. If both tests indicate the same behavior, the mock software can be considered as good as the
original hardware event. The added value of the automated test will pay back its investment when
refactoring the code under test or extending its behavior.
In the RepeatButtonTest the state of the button is changed almost instantaneously to simulate a
glitch caused by the mechanical contact of the button. In this button, such behavior should be ignored.
Usually the compiler will complain about some missing definitions, therefore listing 3.2 provides the
code to satisfy the compiler and allow the hex file to be uploaded to the target.
(a) In a programming language which is not object-oriented, a solution for hiding testing methods requires a bit more work, as will be demonstrated in section 3.3.2.
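A sketch of such a compiler-satisfying skeleton, with names inferred from the test and from listing 3.4 (the declaration in the original listing 3.2 may differ), is:

enum ButtonState { RELEASED, PRESSED, REPEAT };

class Button
{
public:
    Button(volatile unsigned long *portAddress, int pinNumber);
    void setCurrentState(ButtonState state);
    ButtonState getState();
};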
The implementation of each method is omitted, but these can be considered to be empty or returning a constant RELEASED value in the case of getState(). Running the test on target will fail on
the second assert, which allows proceeding to the green bar step. Listing 3.3 provides the correct implementation,
which will make the test pass. Note that two private fields are added to the Button class, namely
currentState and previousState.
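One possible implementation along these lines, assuming currentState and previousState have been added as private ButtonState fields (the state transition details of the original listing 3.3 may differ):

Button::Button(volatile unsigned long *portAddress, int pinNumber)
    : currentState(RELEASED), previousState(RELEASED)
{
    /* the port address and pin number only become relevant in listing 3.5 */
}

void Button::setCurrentState(ButtonState state)
{
    currentState = state;
}

ButtonState Button::getState()
{
    ButtonState output = RELEASED;
    if (currentState == PRESSED && previousState == RELEASED)
        output = PRESSED;
    else if (currentState == PRESSED && previousState == PRESSED)
        output = REPEAT;        /* two consecutive presses observed */
    previousState = currentState;
    return output;
}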
Now the test passes, yet the software button is not dealing with any hardware at the moment.
Namely, the currentState is not fetched from the hardware register. Listing 3.4 adds a private method,
which will fetch the currentState value. This can be a simple accessor for testing purposes, since in
that case the currentState will be set by the setCurrentState method.
Listing 3.4: A small refactoring is in order to allow a future coupling to the hardware.
ButtonState Button::getCurrentState()
{
    return currentState;
}

ButtonState Button::getState()
{
    ButtonState output = RELEASED;
    currentState = getCurrentState();
    if(currentState == PRESSED && previousState == RELEASED)
    /* The rest of getState() remains the same */
}
This refactoring can be validated by uploading the code to the target and running the test. Now, the final
step is to actually fetch the value from the hardware register of the port, as shown in listing 3.5. This
will of course invalidate the test, so a manual verification of correct behavior is in order.
Listing 3.5: Changing the getCurrentState method to access the hardware register.
ButtonState Button::getCurrentState()
{
    return (((*((volatile unsigned long *) portAddress)) & (1 << pinNumber))
            ? PRESSED : RELEASED);
}
The actual implementation of getCurrentState is irrelevant. However, manually replacing this code
with the mock implementation of getCurrentState can be regarded as bad practice. This problem
can be solved in a multitude of ways, which will be discussed in section 3.3.2. Still, since automating
the test required developing code which is only loosely coupled to the hardware, the need to run tests on
target can be reduced even further until the code can be tested on host.
3.3 Test on host

Ideally, program code and tests reside in the memory of the programmer's development computer. This
situation guarantees the fastest feedback cycle in addition to independence from target availability. Furthermore, developing in isolation from the target hardware improves modularity between application code
and drivers. Finally, as the host system has virtually unlimited resources, a state-of-the-art unit testing
framework can be used.
In the Test on host strategy, development starts with tests and program code on the host system.
However, calls to the effective hardware are irrelevant on the host system. A software substitute
replaces the hardware-related functions, mocking the expected behavior. This is called a mock, i.e. a
fake implementation is provided for testing purposes. A mock represents the developer's assumptions
about hardware behavior. Once the developed code is migrated to the effective hardware system, these
assumptions can be verified.
3.3.1 Implementation

The Test on host strategy typically consists of two build configurations, as shown in figure 3.3. Regardless of the level of abstraction of the hardware, the underlying components can be mocked in the host
build. This enables the developer to run tests on the host system, regardless of any dependency on the
target platform. However, cross-platform issues might arise, and these are impossible to detect when
no reference build or deployment model is available. Only executing the tests on the target
platform will identify these issues. Although, running the cross-compiler or deploying to a development
board could already identify some issues before the actual target is available.
Figure 3.3: Test on host: the tests and the software under development migrate between host and target, while the driver code is substituted by a mock in the host build.
3.3.2 Mock replacement

Hardware mocks must be swapped with the real implementation without breaking the system
or performing elaborate actions to set up the switch. Several techniques have been identified, based either
upon object-oriented principles or on plain C, which facilitate the process.
The first object-oriented technique is interface-based mock replacement: the effective hardware driver and the mock are addressed through a unified abstract class, which forms the interface
of the hardware driver. Calls are directed to the interface, thus both the mock and the effective hardware
driver provide an implementation. The interface should encompass all methods of the hardware driver
to ensure compatibility. Optionally, the mock could extend the interface for test modularity purposes.
This enables customizing the mock on a test-per-test basis, reducing duplication in the test suite.
It should be noted that the interface could provide a partial implementation for the hardware
independent methods. However, this would indicate that hardware dependencies are mixed with
hardware independent logic. In this situation a refactoring is in order to isolate hardware dependent
code.
Inheriting from the same interface guarantees compatibility between mock and real hardware driver,
as any inconsistency will be detected at compile time. Regarding future changes, extending the real
driver should be appropriately reflected in the interface.
The main reason for concern with this approach is the introduction of late binding, which inevitably
slows down the system in production. However, it should be noted that such an indirection is acceptable
in most cases.
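As an illustration of this interface-based approach (the class names are invented for this sketch and do not come from the case studies):

class TemperatureSensor                       // unified abstract interface
{
public:
    virtual ~TemperatureSensor() {}
    virtual double readCelsius() = 0;
};

class OnChipTemperatureSensor : public TemperatureSensor   // effective hardware driver
{
public:
    virtual double readCelsius()
    {
        /* would read the actual sensor on the target here */
        return 0.0;
    }
};

class MockTemperatureSensor : public TemperatureSensor     // used in host-side tests
{
public:
    MockTemperatureSensor() : fakeValue(0.0) {}
    virtual double readCelsius() { return fakeValue; }
    double fakeValue;            // the test sets the reading it expects
};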
The second object-oriented technique is inheritance-based mock replacement. When the mock
driver inherits from the real target driver, it is possible to switch them according to their respective
environment. However, it requires that all hardware-related methods are identified and at least declared
virtual, and that the test can reach members which would otherwise be private.
However, these issues can be worked around with some macro preprocessing. Private
members can be adjusted to public access solely for testing purposes. Also, the virtual keyword can be
removed in the target build, as shown in listing 3.6. Note that guarding the scope of the macro definitions
with #undef is essential for safe usage. Furthermore, since these code snippets can be frequently used,
injecting them with a file include directive reduces code duplication, which would otherwise obfuscate
the business logic.
Listing 3.6: Preprocessor directives to enable inheritance-based mocking without late binding overhead.
#ifdef TESTONHOST
#define private public
#else
#define virtual
#endif

class RealHardware{
private:
    virtual void hardwareCall();
    ...
};

#ifdef TESTONHOST
#undef private
#else
#undef virtual
#endif
Inheritance-based mock introduction is more about managing testability of code than actual testable
design. That being said, all overhead of testability can easily be removed in production code. However, ensuring that the macro definitions do not wreak havoc outside the file is fundamental in this
approach. Nonetheless, also when dealing with legacy code this approach is preferable. Considering
the amount of refactoring which is necessary to extract the hardware independent interface in the
interface-based approach, the adjustments for inheritance-based mock replacement can be introduced
without an unnecessary layer of indirection.
The third object-oriented technique is dependency injection: a dependency must be
installed in the first place, in order to address either hardware or mock functionality without the overhead
of managing dependencies. With constructor injection, the member object reference is injected during
object construction of the composite object. Listing 3.7 shows a reference design of constructor injection.
class ConstructorInjection {
public:
    ConstructorInjection(HardwareDependency* hw) { m_hw = hw; }
    virtual ~ConstructorInjection();
private:
    HardwareDependency* m_hw;
};
Next is setter injection, as shown in listing 3.8, which performs the same operation as constructor
injection. Yet, instead of the constructor, a setter method is used to register the reference. This
introduces the possibility to change the reference at run-time without creating a new composite object.
On the other hand, when compared to constructor injection, it requires an additional overhead in test
management, namely the call of the setter method itself. Forgetting to do so will lead to incorrectly
initialized objects under test. Moreover, setter injection will introduce additional overhead in the setup
of the system in production. For instance during a system setup, which is not time-critical,
this overhead can be neglected. Yet, considering real-time systems or repeated run-time creation and
cleanup of objects, the overhead becomes critical, especially on resource constrained
systems like embedded processors. Therefore, a sound advice is to only use setter injection when its
flexibility is required and otherwise use constructor injection by default.
class SetterInjection {
public:
    SetterInjection();
    virtual ~SetterInjection();
    void injectDependency(HardwareDependency* hw) { m_hw = hw; }
private:
    HardwareDependency* m_hw;
};
Finally, interface injection registers the dependency by inheriting from an abstract class, which
contains a setter method. Two approaches can be followed with interface injection. On the one hand,
a specific interface class can be made for each type of object to be injected. On the other hand, a generic
interface class can be provided, which allows injecting objects of all types. The latter mechanism
will be discussed in chapter 5.
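A brief sketch of the first variant, with hypothetical names, is shown below.

class HardwareInjectable                     // abstract injector interface
{
public:
    virtual ~HardwareInjectable() {}
    virtual void inject(HardwareDependency* hw) = 0;
};

class SensorLogic : public HardwareInjectable
{
public:
    SensorLogic() : m_hw(0) {}
    virtual void inject(HardwareDependency* hw) { m_hw = hw; }
private:
    HardwareDependency* m_hw;
};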
The previous techniques were based on object-oriented features; however, when speed considerations
are critical, embedded software is written in C. The following techniques do not use OO features, yet
allow switching between real and mock code unnoticed, at least in production. The first, link-time mock
replacement, declares the hardware-related functions in a header file and uses the linker script or IDE
to indicate which implementation file corresponds to it, i.e. the file containing the actual implementation
or a similar file containing the mock implementation. Correspondingly, the host build will refer to the
mock files and the target build to the real implementation files, as visually represented in figure 3.6.
Figure 3.6: Host and target build refer to the mock and real implementation file respectively.
Practically, the linker script (or IDE) will refer to three different subfolders. First is the common
folder, which contains all platform independent logic as well as the header files containing the hardware
dependent function declarations. Next is the host folder, which will include the mocks, and finally
the target folder with the corresponding real implementations. Should a hardware implementation or
mock file be missing, the linker will return an error message as a reminder. A practical example of the
link-time based configuration is not given, considering the multitude of build systems and IDEs.
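The underlying source layout can nevertheless be sketched independently of any particular build system; the folder, file and function names below are hypothetical:

/* common/driver.h : declaration shared by both builds */
void ledWrite(int on);

/* target/driver.c : real implementation, listed only in the target build */
void ledWrite(int on)
{
    /* write the LED bit in the port register here */
}

/* host/mockdriver.c : mock implementation, listed only in the host build */
static int lastLedValue;
void ledWrite(int on)
{
    lastLedValue = on;   /* a test can verify the value that was written */
}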
Whereas link-time based replacement involves delivering the desired source code files to the linker, the macro preprocessed alternative involves
preprocessor directives, which manipulate the source code itself. For instance, listing 3.9 achieves an
almost identical effect to its link-time based alternative.
#ifdef TESTONHOST
#include "mockdriver.h"
#else
#include "realdriver.h"
#endif
However, macro replacement also allows intervening inside a source file, as illustrated by listing 3.10.
First, the function call to be mocked is replaced by a new function definition. Second, the unit test
framework used to implement the tests related to the code under test is injected into the file itself.
#ifdef TESTONHOST
#define functionMock(arg1, arg2) function(arg1, arg2)
void functionMock(int arg1, int arg2) {}
#endif

/* code containing function to be mocked */

#ifdef TESTONHOST
#include "unittestframework.h"
int main () {
    /* run unit tests & report */
}
#endif
Although macros are commonly regarded negatively, the macros shown in the previous two listings
are generally safe and will not lead to bugs which are hard to find. However, the macro statements
will pollute the source code, which leads to less readable and thus less maintainable code. The main
advantage of macro preprocessed mock replacement is in dealing with legacy code. Capturing the
behavior of legacy code in tests is something that should be done with the least refactoring, because
in legacy code, tests are lacking to provide feedback on the safety of the refactoring operations. Using
macros effectively allows leaving the production code unchanged, while setting up the necessary tests.
Conversely, when developing new applications, link-time based mock replacement is preferred, as it
does not have any consequences on the production code.
A final alternative mimics C++-style dynamic dispatch in C by manually building the vtable. However, implementing dynamic dispatch in C is not preferred when comparing it to either the preprocessing or the link-time solution. Introducing the vtable in code results in an execution time overhead, which can be critical for the typical kind of embedded C application. Furthermore, constructing the vtable in C code is not preferred when the C++ alternative is available. On the one hand, while C++ compilers can do extensive optimization on virtual functions whose actual type can be discovered at compile time, this cannot be done by a C compiler. On the other hand, there is a manifold overhead in code management to implement the vtable system in C when compared to the native OO solution. In conclusion, the abstraction created by C++ makes it easy to forget the overhead introduced by late binding, yet also permits improving code maintainability.
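For illustration only, a manually built dispatch table in C boils down to a struct of function pointers; the names below are illustrative, not taken from the case studies.

/* A hand-built 'vtable' for a temperature sensor */
typedef struct
{
    int  (*read)(void);
    void (*reset)(void);
} temp_sensor_ops;

static int  mock_read(void)  { return 42; }
static void mock_reset(void) { }

static const temp_sensor_ops mock_sensor = { mock_read, mock_reset };

/* Code under test only sees the table and does not care which implementation is behind it */
int read_after_reset(const temp_sensor_ops* sensor)
{
    sensor->reset();
    return sensor->read();
}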
3.3.3 Process
In order to deal with the slow cycle of uploading embedded program code, executing tests on target and reporting, Test on host is presented as the main solution. In order to do so, the assumption is made that any algorithm can be developed and tested in isolation on the host platform. Isolation from hardware-related behavior is critical, its purpose being to dynamically delegate calls to either the real or the mock implementation.
Considering the differences between host and target platform, verification of correspondence between target and host implementation is essential. These differences are:
- Cross-compilation issues, which occur because compilers can generate different machine code from the same source code. Also, functions called from libraries on each platform might lead to different results, as there is no guarantee regarding the correspondence of both libraries.
- Mock assumptions: when testing on host, the mocks represent assumptions made about hardware behavior. This requires an in-depth knowledge of the hardware specifications. Furthermore, as the hardware platform for embedded systems can evolve, these specifications are not as solid or verified as is the case with a host system.
- Execution issues concerning the different platforms. These concern differences in data representation, i.e. little or big endian, word size, overflow, concurrency memory model, etc., and speed, i.e. memory access times, clocking differences, etc. These issues can only be uncovered when the tests are executed on the target platform (a small example of such checks follows this list).
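Purely as an illustration of what such checks could look like, a small test can document some platform assumptions explicitly, using the same framework macros as the other listings; the particular values checked here are assumptions.

TEST(PlatformAssumptions)
{
    /* word size assumed by the rest of the code; the value 4 is an assumption */
    CHECK_EQUAL(4u, sizeof(unsigned int));
    /* wrap-around behavior of a 16-bit unsigned type */
    unsigned short counter = 0xFFFF;
    counter = counter + 1;
    CHECK_EQUAL(0u, counter);
}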
Test on host is the primary step in the embedded TDD cycle, as shown in figure 3.2. This cycle employs the technique of dual targeting, which is a combination of Test on host and Test on target. In effect, development in this process is an activity executed entirely according to Test on host, as a reasonable development speed can be achieved. To verify correspondence between host and target, Test on target techniques are applied at regular intervals. Specifically, time-intensive activities are executed less frequently, which allows balancing development time against verification activities. The embedded TDD cycle prescribes regularly compiling with the target compiler and subsequently solving any cross-compilation issues. Next, automated tests can be ported to the target environment, executed there, and any problems that arise solved. Yet, as this is a time-intensive activity, it should be executed less frequently. Finally, some manual tests, which are the most labor-intensive, should only be carried out every couple of days.
Listing 3.11: A test with a mock register to test a temperature sensor initialization.
TEST(ResetTempSensorTest)
{
    unsigned int IOaddresses[8];   /* IO mapped memory representation on host */
    unsigned int *IODIRmock;       /* register to mock */
    IODIRmock = IOaddresses + 7;   /* map it on the desired position in the array */
    unsigned int pinNumber = 4;
    /* mock external reset */
    *IODIRmock = 0xFF;
    TemperatureSensor *tempSensor = new TemperatureSensor(IOaddresses, pinNumber);
    tempSensor->reset();
    CHECK_EQUAL(*IODIRmock, 0xEF); /* test the change of direction */
}
IODIR is a register which sets the port direction of the microcontroller. It is part of the general set of IO registers, which are represented on host by the array IOaddresses. With constructor injection, the temperature sensor can be assigned to a specific pin on a port, or, when the test is executed on host, the mock can be injected the same way. After the test is developed, a minimal set of definitions is needed to get through compilation. This listing is omitted, since the definitions can be deduced from the test. Once the compiler has assembled the executable on host, it is run to indicate that the test has at least failed once. Provided that it fails, a minimal implementation can be developed to get a passing test, which has been done in listing 3.12.
Listing 3.12: Temperature sensor constructor injection and reset method implementation.
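A minimal sketch of such an implementation, sufficient to satisfy the test in listing 3.11, could look as follows; the member names m_IODIR and m_pinNumber are assumptions.

/* Constructor injection of the IO register block and the pin number */
TemperatureSensor::TemperatureSensor(unsigned int* IOaddresses, unsigned int pinNumber)
{
    m_IODIR = IOaddresses + 7;   /* IODIR is the eighth register in the block */
    m_pinNumber = pinNumber;
}

void TemperatureSensor::reset()
{
    *m_IODIR &= ~(1u << m_pinNumber);   /* change the direction of the sensor pin */
}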
After running the test again and assuring that it passes, it is time to evaluate whether refactoring is in order. In fact, a small amount of duplication has crept in between test and driver code: the location of the IODIR register is fixed and could be declared as a global static const variable. Afterwards, when more tests have been added to develop a functional reset, a migration to target might be appropriate to verify whether the code effectively runs on target.
This example is an illustration of how TDD influences code quality and low-level design, and how Test on host amplifies this effect. The resulting driver code is only loosely coupled to the hardware, which makes it easier to reuse in case the temperature sensor is placed on a different pin or if the code needs to be migrated to a different platform.
3.4 Remote testing
Test on host in conjunction with Test on target provides a complete development process with which TDD can be successfully applied. Yet it introduces a significant overhead to maintain two separate builds and to write the hardware mocks. Remote testing is an alternative which eliminates both of these disadvantages.
Remote testing is based on the principle that tests and code under test do not need to be implemented in the same environment. It follows from the observation that TDD requires a significant number of uploads to the target system, i.e. flashes.
3.4.1 Implementation
Remote testing is based on the technology of remoting, for instance Remote Procedure Calls (RPC), Remote Method Invocation (RMI), the Simple Object Access Protocol (SOAP) or the Common Object Request Broker Architecture (CORBA). Remoting allows executing subroutines in another address space without manual intervention of the programmer. When this is applied to TDD for embedded software, remoting allows tests on host to call the code under test, which is located in the target environment. Subsequently, the results of the subroutine on target are returned to the test on host for evaluation.
[Figure 3.7: Remote testing infrastructure — tests and stubs reside on the host, the software under development and its skeletons on the target, connected by a broker on either side.]
Regardless of the specific technology, a broker is required which will set up the necessary infrastructure to support remoting. In homogeneous systems, such as networked computing, the broker on either side is the same. However, because of the specific nature of embedded systems, a fundamental platform difference between the target and the host broker exists.
On the one hand, the broker on target has a threefold function. First, it maintains the communication between host and target platform on the target side. Next, it contains a list of available subroutines which are remotely addressable. Finally, it keeps a list of references to memory chunks which were remotely created or are remotely accessible. These chunks are also called skeletons.
On the other hand, the broker on host serves a similar but slightly different function. For one thing, it maintains the communication with the target. Also, it tracks the stubs on host, which are interfaces on host corresponding to the skeletons in the target environment. These stubs provide an addressable interface for tests, as if the effective subroutine were available in the host system. Rather than executing the called function's implementation, a stub merely redirects the call to the target and delivers a return value as if the function had been called locally.
As the testing framework solely exists in the host environment, there is practically no limitation on it. Even the programming language on host can differ completely from the target's programming language; in a low-level language such as C, tests might require a larger amount of boilerplate code than strictly necessary. Writing tests in another language is a convenience which can be exploited with Remote testing.
Unfortunately, the use of remoting technology introduces an overhead into software development. Setting up the broker infrastructure and ensuring subroutines are remotely accessible requires a couple of additional actions. On the target side, each remotely accessible subroutine must be registered with the broker through a mechanism called marshaling. This mechanism will allow the broker to invoke the subroutine when a call from host marshals such an action. On the host side, a corresponding interface must be provided which is effectively identical to the interface on target. Invoking the subroutine on host will marshal the request, thus triggering the subroutine on target, barring communication issues between host and target.
Some remoting technologies, for instance CORBA, incorporate the use of an Interface Description Language (IDL). An IDL allows defining an interface in a language-neutral manner to bridge the gap between otherwise incompatible platforms. Strictly speaking, an IDL is not essential to remoting. However, the specifications describing the interfaces are typically used to automatically generate the correct serialization format. Such a format is used between brokers to manage data and calls. As serialization issues concern the low level mechanics of remoting, an IDL provides a high level format, which relieves some burden from the programmer.
3.4.2 Process
The Remote testing development cycle changes the conventional TDD cycle in the first step. When creating a test, the interface of the called subroutine under test must be remotely defined. This results in the creation of a stub on host, which makes the defined interface available on the host platform, while the corresponding skeleton on target must also be created. Subsequent steps are straightforward, following the traditional TDD cycle.
1. Create a test
2. Define an interface on host
(a) Call the subroutine with test values
(b) Assert the outcome
(c) Make it compile
(d) If the subroutine is newly created: add a corresponding skeleton on target
(e) Run the test, which should result in a failing test
3. Red bar
(a) Add an implementation to the target code
(b) Flash to the target
(c) Run the test, which should result in a passing test
4. Green bar: either refactor or add a new test
Considering the rather low return on investment inherent to Remote testing, an adaptation of the process is made, which results in a new process called Remote prototyping, described below. First, however, an example of the Remote testing cycle is given. As shown in listing 3.13, the RemoteTemperatureSensorTest addresses the methods as if the temperature sensor were available in the test execution environment.
Listing 3.13: A typical Remote testing cycle starts with writing a test on host.

/*Host*/
static Broker broker; /* Initialization of Broker on host */
TEST(RemoteTemperatureSensorTest)
{
    TemperatureSensor* TempSensor = new TemperatureSensor(port, pin);
    TempSensor->reset();
    CHECK_CLOSE(TempSensor->read(), 42, 10);
    delete TempSensor;
}
Next, in order to get the test compiling, a stub must also be defined on host. For this example, the remote infrastructure code is given in listing 3.14. However, many remoting technologies permit automatically generating stubs and skeletons. Moreover, the code related to the remoting process is based upon a very simple broker implementation. Other remoting related code will have a different syntax depending on the library or framework used, yet the principles remain the same.
/*Host*/
class TemperatureSensor : public ITemperatureSensor
{
public:
    TemperatureSensor(Broker* broker);
    ~TemperatureSensor();
    void reset();
    int read();
private:
    int id; /* ID received from the target broker, binding this stub to its skeleton */
};

void TemperatureSensor::reset()
{
    broker.call(id, "reset");
}

int TemperatureSensor::read()
{
    broker.call(id, "read");
    return broker.intReturn();
}
The constructor of the stub will keep a reference to the host broker, and it will also keep track of the ID received from the target broker. Furthermore, each public method of the TemperatureSensor interface which needs to be remotely available must also be given a stub implementation. In order to keep the interfaces on host and on target consistent, it is possible to define a single abstract class and inherit from it on both sides. The stub implementation merely marshals the call and returns the resulting return value.
/*Target*/
class TemperatureSensor : public IBroker, public ITemperatureSensor
{
public:
TemperatureSensor(int port, int pin);
~TemperatureSensor();
void reset();
int read();
string invoke(const char* functionAndParameters);
};
The TemperatureSensor class on target contains the expected constructor, destructor and called methods, yet also has an invoke method, which dispatches incoming remote calls to the corresponding methods. The skeleton inherits from the IBroker abstract class, whose interface is provided in listing 3.16.
/*Target*/
class IBroker
{
public:
int ib_ID;
virtual string invoke(const char*) = 0;
};
Inheriting from the IBroker abstract class provides the skeleton with an ID, which binds instances of skeletons on target and stubs on host to each other. Moreover, the pure virtual invoke method ensures an invoke is provided in every skeleton. While the broker on target will automatically assign an ID, the invoke method must be implemented by hand.
The final part of setting up the remote infrastructure is the registration of the remotely addressable methods in the invoke method of the skeleton. Listing 3.17 shows this registration for the TemperatureSensor class.
Listing 3.17: Registration of the remotely addressable methods in the invoke method.
/*Target*/
string TemperatureSensor::invoke(const char* functionAndParameters)
{
    string result = "";
    if(ib_parsePrototype(functionAndParameters)) /* splits the stream into return type, name and arguments */
    {
        if(ib_functionName.compare("reset")==0)
        {
            reset();
            return "";
        }
        if(ib_functionName.compare("read")==0)
        {
            result = parseReturnInt(read());
            return result;
        }
    }
    return result;
}
In the invoke method a parser is called, which splits the stream between the brokers into a return value type, a function name and a list of arguments. Registration of the methods is in fact a comparison of the function name and dispatching to the required function. If a return value is required, the value itself must be converted again to be sent back to the host.
In summary,
1. Write a test
2. Generate or manually write the stub
3. Write the skeleton without an implementation
4. Generate or manually register the remotely addressable functions in the invoke method
5. Upload to target
6. If the remote infrastructure is set up correctly, a failing test should be the result
Now, the normal TDD cycle can be resumed.
[Figure 3.8: Remote prototyping — tests and software under development on host address the stable code base on target through stubs, skeletons and the brokers on host and target.]
Remote prototyping is effective under the assumption that software under development is evolving, but that once the software has been thoroughly tested, a stable state is reached. In that case the code base can be instrumented to be remotely addressable. Subsequently it is programmed into the target system and thoroughly tested again to detect cross-compilation issues. Once these issues have been solved, the new code on target can be remotely addressed with the aim of continuing development on the host system.
An overview of the remote prototyping process applied to an object oriented implementation, for instance C++, is given in figure 3.9. A fundamental difference exists between configurations in which all objects can be statically allocated and those that require dynamic creation of objects.
[Figure 3.9: Remote prototyping with dynamic allocation — in a setup phase the host broker instructs the target broker to create skeletons from its class list and binds the returned IDs to stubs on host; in the execute phase tests and software under development call the stable code on target through those stubs.]
In a configuration in which the target environment can be statically created, setup of the target system can be executed at compile time. The broker system is not involved in constructing the required objects, yet keeps a reference to the statically created objects. Effectively, the host system does not need to configure the target system and treats it as a black box. Conversely, the process of remote prototyping with dynamic allocation requires additional configuration: the target is approached as a glass box system. This incurs an additional overhead for managing the on-target components, yet allows dynamically reconfiguring the target system without wasting a program upload cycle.
The dynamic remote prototyping strategy starts with initializing the broker on both the target and the host side. Next, a test is executed, which initializes the environment. This involves setting up the desired initial state on the target environment, in anticipation of the calls which the software under development will conduct. For instance, to create an object on the target, the following steps are performed, as illustrated in figure 3.9 (a code sketch follows the list).
1. The test will call the stub constructor, which provides the same interface as the actual class.
2. The stub delegates the call to the broker on host.
3. The broker on host translates the constructor call into a platform independent command and
transmits it to the target broker.
4. The broker on target interprets the command and calls the constructor of the respective skeleton
and in the meanwhile assigns an ID to the skeleton reference.
5. This ID is transmitted in an acknowledge message to the broker on host, which assigns the ID
to the stub object.
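As a minimal sketch of how these steps could look in the stub, assuming the simple broker used earlier offers a createObject call that forwards the constructor call and returns the ID assigned by the target broker (this call is an assumption, not part of the earlier listings):

/*Host*/
TemperatureSensor::TemperatureSensor(int port, int pin)
{
    /* steps 1-5: the constructor call is marshaled to the target broker,
       which constructs the skeleton and returns its ID */
    id = broker.createObject("TemperatureSensor", port, pin);
}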
From then on, the software under development and the tests call methods on the stub object, which are delegated to the effective code on target. Likewise, any return values are delivered to the stub. Optionally, another test run can be done without rebooting the target system. A cleanup phase is in order after each test has executed, otherwise the embedded system would eventually run out of memory. Deleting objects on target is as transparent as on host, with the addition that the stub must be cleaned up as well.
Remote prototyping deals with certain constraints inherent to embedded systems. However, some issues can be encountered when implementing and using the remoting infrastructure.
Embedded constraints
The footprint, in terms of memory and processing power, of the remoting infrastructure on the embedded system is minimal. Of course it introduces some overhead to systems which do not need to incorporate the infrastructure for application needs. On the other hand, remote prototyping enables conducting unit tests with a real target reference. Porting a unit test framework and running the tests in target memory as an alternative would introduce a larger overhead than the remoting infrastructure and lead to unacceptable delays in an iterative development process.
Next, the embedded infrastructure does not always provide all conventional communication peripherals, for instance Ethernet, which could limit the applicability of remote prototyping. However, if an IDL is used, the effective communication layer is abstracted. Moreover, the minimal specifications needed to set up remote prototyping are limited, as throughput is small and no timing constraints need to be met.
Finally, remote prototyping requires that hardware and a minimal hardware interface are available. This could be an issue when the hardware still needs to be developed, when hardware is unavailable, or when deploying code still under development might be dangerous. Lastly, a minimal software interface wrapping the hardware interaction and implementing the remoting infrastructure is needed to enable remote prototyping.
Issues
The issues encountered when implementing and using remote prototyping can be classified into three types. The first type relates to platform differences between host and target. A second concern arises when dynamic memory allocation on the target side is considered. Thirdly, the translation of function calls into common, architecture-independent commands introduces additional issues.
Differences between host and target platform can lead to erratic behavior, such as unexpected overflows or data misrepresentation. Dedicated tests executed on target can expose such misrepresentation issues. Likewise, border problems can be discovered by introducing some boundary condition tests.
Next, on-target memory management is an additional consideration, which is a side-effect of remote prototyping. Considering the limited memory available on target and the single instantiation of most driver components, dynamic memory allocation is not desired in embedded software. Yet remote prototyping requires dynamic memory allocation to allow flexible usage of the target system. This introduces the responsibility to manage memory, namely creation, deletion and avoiding fragmentation. By all means this only affects the development process and unit verification of the system, as in production this flexibility is no longer required.
Finally, timing information between target and host is lost because of the asynchronous communication system, which can be troublesome when dealing with a real-time application. Furthermore, to unburden the communication channel, exchanging simple data types is preferred over serializing complex data.
Tests
The purpose of Remote prototyping is to introduce a fast feedback cycle in the development of embedded software. Introducing tests can identify execution differences between the host and target platform. In order to do so, the code under test needs to be ported from the host system to the target system. By instrumenting the code under test, the remote prototyping infrastructure can be reused to execute the tests on host, while delegating the effective calls to the code on target.
Listing 3.18: A typical Remote prototyping cycle starts with writing a test on host.

/*Host*/
static Broker broker; /* Initialization of Broker on host */
TEST(RemoteThermometerTest)
{
    Thermometer* thermometer = new Thermometer();
    /* check if temperature is near room temperature +/- 5 degrees (Celsius) */
    CHECK_CLOSE(thermometer->getTemperature(), 20, 5);
    delete thermometer;
}
First, the compiler will complain about the non-existent definition of the Thermometer class. Therefore, to satisfy the compiler, see listing 3.19.
/*Host*/
class Thermometer
{
public:
Thermometer();
virtual ~Thermometer();
double getTemperature();
};
A failing test indicates that the test has at least failed once and allows proceeding to the following step, namely the implementation of the code under test (listing 3.20).
Listing 3.20: After the test has failed, a correct implementation of getTemperature() is given.
/*Host*/
Thermometer::Thermometer()
{
/*tempSensor is a private TemperatureSensor pointer,
whose definition has been omitted for brevity*/
this->tempSensor = new TemperatureSensor(port, pin);
}
double Thermometer::getTemperature()
{
tempSensor->reset();
int magnitude = tempSensor->read();
int sign = tempSensor->read();
/* conversion details are omitted */
return currentTemperature;
}
This results in a green bar, which indicates a passing test, thus concluding this example. Effectively this example demonstrates the value of Remote prototyping: all development happened on host, even though the code under test effectively ran on target, and, more importantly, no additional code was needed to run the test on the host system. Granted, some work was already done in the Remote testing example, but once the remoting infrastructure is available, developing embedded software becomes as easy as developing PC applications.
3.5 Overview
Test on target, Test on host, Remote testing and Remote prototyping have been defined as strategies to develop in a TDD fashion for embedded software. All of these strategies have advantages and disadvantages when they are compared, and because of those disadvantages each strategy excels in a particular embedded environment. In this section the strategies are compared and an overview is given of how development in a project can be composed of combinations of these strategies.
The baseline of this comparison is Test on target, for the particular reason that, when the number of code uploads to target is the only consideration, Test on target is the worst strategy to choose. It is possible to demonstrate this when the classical TDD cycle is considered, as in figure 3.10.
(The hardware function calls in example 3.20 are the TemperatureSensor constructor call, TemperatureSensor.read() and TemperatureSensor.reset().)
[Figure 3.10: The classical TDD cycle — create test, red, green, refactor.]
When TDD is strictly applied in Test on target, every step will require a code upload to target. Considering the iterative nature of TDD, each step will be run through frequently. Moreover, since a target upload is a very slow act, this will bring the development cycle to a grinding halt. Thus reducing the number of uploads is critical in order to successfully apply TDD for embedded software. Nonetheless, Test on target still has its merit, namely when the code under development interacts with the hardware of the embedded system so intimately that it is too convoluted for Test on host mocking and downright impossible for the remoting strategies.
Remote testing
When considering the effect of Remote testing on TDD for embedded software, at least two flashes remain necessary per new subroutine: one to prove the test is effectively failing and a second one, which contains the implementation to make the test pass. Note that this is under the assumption that the correct code is immediately implemented and no refactorings are needed. If it takes multiple tries to find the correct implementation, or when refactoring, the number of flashes rises.
In order to decrease the number of required uploads, tests can be implemented in the host environment, i.e. Remote testing. Effectively this reduces the number of flashes by the number of tests per subroutine minus one. One is subtracted because a new subroutine will require flashing the empty skeleton to the target. Therefore the benefit of remote testing as a way to apply TDD to embedded software is limited, as demonstrated in table 3.1. The ideal case is when a test passes after the first implementation is tried.
                T : C : R        Benefit
Worst case      1 : 1 : 1        0%
                2 : 2 : 1        25%
                3 : 3 : 1        33%
                X : X : 1        Max = 49,99...%
General case    T : C : R        (T - R) / (T + C) * 100%

Table 3.1: Remote testing benefit when only target uploads are considered
Consider that tests have a relatively low complexity when compared to production code. This observation implies that a test is less likely to change than the effective code under test, which indicates an effective reduction of the benefits of remote testing. The possibility of changing code to reach a green bar, namely during the implementation or refactoring phase, is higher than the (re)definition of the tests. Effectively this will reduce the ratio of tests versus code under test requiring an update of code on target. When only the number of uploads is taken into account, Remote testing will never be harmful to the development process. Yet considering the higher complexity of production code and refactoring, which mostly involves changing the code under test, the benefit of Remote testing diminishes rapidly. When other costs are taken into account, this strategy is suboptimal when compared to Test on host. However, as a pure testing strategy, Remote testing might have its merit, though the application of Remote testing in this context was not explored further.
Remote prototyping
A further improvement of the development process is made when code is also developed on host, i.e. Remote prototyping. Remote prototyping only requires a limited number of remotely addressable subroutines to start with. Furthermore, once code under development is stable, its public subroutines can be ported and made remotely addressable in turn. This is typically when an attempt can be made to integrate newly developed code into the target system. At that moment these subroutines can be addressed by new code on host, which is of course developed according to the Remote prototyping principle.
Where Remote prototyping is concerned, it is possible to imagine a situation which is in fact in complete accordance with Remote testing: when a new remote subroutine is added on target, this conforms to the idea of executing a test on host while the code under test resides on target. However, code which is developed on host will reduce the number of uploads that would normally be expected in a typical Test on target fashion. Each action which would otherwise have provoked an additional upload adds to the obtained benefit of Remote prototyping.
Yet the question remains how the Remote testing part of the process relates to the added benefit of Remote prototyping. An estimate can be made by comparing the amounts of related source code: on the one hand the number of Lines Of Code (LOC) related to the remoting infrastructure, added to the LOC of subroutines which were not developed according to Remote prototyping but rather with Remote testing; on the other hand the LOC of code and tests developed on host. These assumptions ultimately lead to table 3.2.
[Table 3.2: Remote prototyping benefit when only target uploads are considered. The columns distinguish the code and tests that must reside on target (CD : R) from the code and tests developed on host (CH); the benefit grows with the share developed on host, from roughly 12,5% in the least favorable case shown toward a theoretical maximum just below 100%.]
Table 3.2 indicates a net improvement of Remote prototyping when compared with Remote testing. Furthermore, it also guarantees an improvement when compared to Test on target. Nevertheless, it also shows the necessity of developing code and tests on host, as the major benefit is obtained when the tests and the code developed on host are large compared with the code and infrastructure that must still be uploaded to target.
Test on host
Finally, a comparison can be made with the Test on host strategy. When only uploads to target are considered, Test on host provides the best theoretical maximum performance, as it only requires one upload to the target for final verification. In practice, a single upload definitely contradicts the incremental aspect of TDD; typically a verification upload to target is a recurrent, irregular task, executed at the discretion of the programmer. Furthermore, Test on host and the remoting strategies have another fundamental difference. Namely, while setting up remoting infrastructure is only necessary when a certain subroutine needs to be remotely addressable, Test on host requires a mock for the hardware dependencies. Although there are mocking frameworks which reduce the burden of manually writing mocks, at least some manual adaptation is still required. When the effort to develop and maintain mocks is ignored, a mathematical expression similar to the previous expressions can be composed, as shown in table 3.3. However, it should be noted that this expression does not consider that very effort of developing and maintaining the mocks.
[Table 3.3: Test on host benefit when only target uploads are considered — ranging from a minimum close to 0% up to a theoretical maximum just below 100%, depending on the number of tests (T).]
In comparison
In the previous sections, the only metric considered was the number of code uploads. When considering the Test on target case, tests and a testing framework add to the required memory footprint of the program; moreover, the processing power of the target system is also limited, so a great number of tests on target will slow down the execution of the test suite. Another metric to consider is the hardware dependencies, namely how much effort it requires to write tests (and mocks) for hardware-related code. Finally, there is the development overhead required to enable each strategy: for Test on target this is the porting of the testing framework, Test on host requires the development and maintenance of hardware mocks, and Remote prototyping requires the remoting infrastructure.
Table 3.4 provides a qualitative overview of the three strategies compared to each other when these four metrics are considered.
                        Test on target           Test on host            Remote prototyping
Slow upload             - - -  (on target)       +++  (on host)          ++  (broker on target)
Restricted resources    - - -  (target memory)   +++  (host memory)      +/-  (host memory)
Hardware dependencies   +++  (real hardware)     - - -  (mock hardware)  +/-  (intermediate format)
Overhead                - - -  (test framework)  - -  (mocks)            (remoting infrastructure)

Table 3.4: Test on target, Test on host and Remote prototyping in comparison
The overview in table 3.4 is flawed in the sense that it does not take the properties of the embedded system into account. Yet the range of embedded systems is too extensive to include this information in a decision matrix. For instance, for a large number of embedded systems, resources are not an issue, which makes Test on target much more attractive. Another example is the case where remoting infrastructure is already available, so that applying Remote prototyping does not have any overhead at all. Likewise, when an application is developed on embedded Linux, one can develop the application on a PC Linux system with only minimal mocking needed, making Test on host the ideal choice. Moreover, in this overview no consideration is given to legacy code, yet the incorporation of legacy code can prohibit the use of the Test on host strategy.
When deciding which strategy is preferable, no definite answer can be given. In general, Test on target is less preferred than Test on host and Remote prototyping, while Remote prototyping is strictly better than Remote testing. Yet beyond these statements all comparisons are case-specific. For instance, when comparing Test on host versus Remote prototyping, it is impossible to make a sound decision without considering the embedded target system and the availability of drivers, application software, etc. Rather, this matrix is a general guide in the decision process.
3.6 Further reading
The embedded TDD cycle is extensively described by Grenning[33, 34, 35, 36]. Meszaros[47] identifies six different types of mocks or, using his nomenclature, test doubles. Among them are:
- Test dummy: has no actual implementation, but is used as a placeholder to satisfy the linker.
- Test stub: replaces the real implementation and returns a predefined value.
- Mock object: records the sequence of called methods and asserts on the basis of that sequence.
Fowler[26] gave the three principles of Dependency Injection their respective names. Link-time polymorphism was first identified by Koss and Langr[41]. A reference design and implementation of manually building a vtable in C is provided in chapter 11 of Grenning's book on TDD for embedded C[36]. The CORBA specification[10] is an international standard maintained by the Object Management Group (OMG)[7]; a concise overview is given by McHale[46]. Two reduced profiles of the specification target embedded systems, namely CORBA/e Compact and CORBA/e Micro[11]. The former is intended for execution on high-end embedded platforms, while the latter has been further reduced for resource-constrained embedded platforms. LDRA provides a tool which implements the idea of Remote testing[5]. In a case study on remote testing and prototyping, the mbed platform was used with their RPC library[13].
Chapter 4
Legacy code
Modules should be both open and closed.
Bertrand Meyer[48]
The following chapters provide various topics related to Test-Driven Development for embedded software. First, the topic of dealing with legacy code is addressed.
Applying TDD to embedded software development has already been dealt with extensively. However, those techniques only described the process starting from a clean slate. In practice, professionally developed code rarely starts anew; usually an existing legacy code base is present. In this context, legacy code generally means code that was not developed according to TDD. A less strict definition is code which is not accompanied by a test suite.
The main problem of applying TDD to a legacy code base is getting started with including tests. Code not developed with unit testing in mind is less testable: it is unlikely to provide clean interfaces to small pieces of code. Rather, it is more probable to be convoluted and to interweave a lot of code which could have been separated. Of course legacy code can be refactored to make it more testable, but there are no tests which can indicate whether the refactoring broke any functionality. So, legacy code is best left untouched unless tests can be added to apply refactorings safely. This is a typical circular cause-and-consequence problem, which leads to leaving legacy code mostly untouched.
In order to break the circular reference, tests can be introduced in code seams. A seam is a place in code where it is possible to introduce a test without changing the code itself. There are three types of seams, namely preprocessing, link and object seams. The first uses preprocessor directives, similar to those in listings 3.9 and 3.10. The second is similar to the concept of link-time based mock replacement as described in section 3.3.2. Object seams mostly involve overriding the desired method with another method provided by a test (sub)class, as demonstrated in the inheritance based mock replacement of figure 3.5. Seams are closely related to mock replacement techniques, but there is a subtle difference: a seam is a location in code, and its three types merely refer to the moment in the build or execution process at which the substitution takes place.
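As a sketch of an object seam, assuming an illustrative legacy class in which the hardware access happens to be overridable; all names below are hypothetical.

/* The test subclass overrides the hardware-dependent method, so the rest of
   LegacyController can be exercised without modifying it. */
class LegacyController
{
public:
    virtual ~LegacyController() {}
    int step() { return process(readSensor()); }   /* legacy logic under test */
protected:
    virtual int readSensor() { return 0; }          /* would talk to the real hardware */
    int process(int raw) { return raw / 2; }
};

class TestableController : public LegacyController
{
protected:
    virtual int readSensor() { return 84; }         /* the seam: fixed value for tests */
};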
4.1 Test on host
First the principle of dealing with legacy code in Test on host is discussed.
However to start it is
assumed that no host build is available and that porting a legacy code base to host is not a simple
operation.
In that case dealing with legacy code starts with adding tests on target to capture and
preserve the legacy code's behavior. These tests run on target in order to get meaningful feedback on
the state of the legacy system. Ultimately, large refactorings or adding behavior to legacy software will be unmanageable while developing according to Test on target. Yet migrating to host first introduces the risk of misinterpreting cross-platform issues as effective on-target bugs.
Adding tests prior to refactoring, which in turn enables developing new code, gives an assurance that the changes do not break the code. By initiating this cycle on an ad hoc basis, attention is immediately shifted to the regions of code which are likely to change the most in the near future. This leads to the cycle shown in figure 4.1.
Figure 4.1: Refactoring and migrating legacy software to apply Test on host
In effect, software proven to be reliable and stable in the past is kept unchanged. Still, each refactoring should lead to a state of the code which allows introducing more tests. Migrating a part of the legacy system to host is only justified to conduct major refactoring operations or to extend its functionality. Once the (sub)system is migrated, development can continue according to the conventional Test on host strategy.
4.2 Remote prototyping
An alternative to the gradual migration of legacy code to host is Remote testing. Actually, the relevance of Remote testing is largely attributed to its application in dealing with legacy code. The idea resembles the conventional Remote testing cycle, yet it starts with instrumenting an existing subroutine with remoting infrastructure. This allows devising a test for the subroutine on host, which in turn enables a safe refactoring phase. Furthermore, the principle of remote prototyping can also be applied in a gradual manner, where calls related to legacy software are marshaled by the remoting infrastructure. A visual representation of this process is shown in figure 4.2.
Figure 4.2: Adding remote infrastructure to legacy code to apply Remote prototyping
The resemblance of both practices in dealing with legacy code is striking. The principal idea is to adopt either Test on host, in the first case, or Remote prototyping, in the latter, as soon as possible. When evaluating both processes, the moment of adopting the desired strategy is a key metric: the sooner this adoption takes place, the more effective the strategy. In this respect, Remote prototyping is superior to Test on host when dealing with legacy code. In the ideal case Remote prototyping can be applied immediately, whereas Test on host will always require refactoring of target code. When a real legacy system is considered, this contrast between both strategies is even more substantial: Remote prototyping can be applied to a single subroutine, while Test on host requires more effort, since either a subroutine is completely isolated on host with mocks or a substantial part of the code needs to be migrated to host.
4.3 Further reading
Feathers[23] wrote the seminal book on the subject of legacy code, which introduces the definition of seams and provides an overview of refactorings, similar to Fowler[25] but applied to legacy code. A paper by Shihab ea.[54] describes a system to prioritize the addition of unit tests in legacy software systems. In their conclusion they state that modification frequency and fix frequency in the respective code history, along with function size, are the most important metrics to decide where to write unit tests.
Chapter 5
Patterns
Program to an interface, not an implementation
Erich Gamma ea.[29]
The following sections deal with patterns in general related to Test-Driven Development for embedded software.
5.1 3-tier TDD
In dealing with TDD for embedded software, three levels of difficulty to develop according to TDD are distinguished: hardware independent, hardware-aware and hardware-specific code. Each of these levels implies its specific problems with each TDD4ES strategy.
Hardware independent code does not rely on the underlying hardware at all; for instance, code converting a raw reading into degrees Celsius need not know anything about the actual temperature sensor. Yet it would not be surprising to expect some sort of getTemperature subroutine.
Hardware-aware code will typically offer a high level interface to a hardware component, yet it only presents an abstraction of the component itself, which allows changing the underlying implementation.
Finally, hardware-specific code is the code which interacts with the target-specific registers. It is low level driver code, which is dependent on the platform, namely register size, endianness, addressing of the specific ports, etc. It fetches and stores data from the registers and delivers or receives data in a human-readable type, for instance string or int. An example of a hardware-specific driver is a 1-Wire driver, which implements the protocol and allows setting or getting a single byte over 1-Wire.
When developing these types of code, the difficulty to apply TDD increases, as shown in figure 5.1.
[Figure 5.1: The level of complexity for TDD increases from hardware independent, over hardware aware, to hardware specific code.]
Even hardware independent code can behave differently on host and on target; the issues that can only be detected by running the tests on target are:
- Type-size related issues, like rounding errors or unexpected overflows. Although most of these problems are typically dealt with by redefining all types to a common size across the platforms, it should still be noted that unit tests on host will not detect any anomalies. This is the reason to also run the tests on target for hardware independent code.
- Issues associated with the execution environment. For instance, execution on target might miss deadlines or perform strangely after an incorrect context switch. The target environment is not likely to have the same operating system, provided it has an OS, as the host environment.
- Differences in compiler optimizations. Compilers might have a different effect on the same code, especially when optimizations are considered. Also, problems with the volatile keyword can be placed in this category. Running the compiler on host with low and high optimization might catch some additional errors.
Most importantly, developing hardware independent code according to TDD requires no additional considerations for any of the strategies. The only exception is Test on target, where development according to this strategy is painstakingly slow. Furthermore, hardware independent code does not impose any limitations on either Remote prototyping or Test on host, so there should be no reason to develop it according to Test on target.
Developing hardware-aware code, on the other hand, requires an initial investment. When developing according to Test on host this investment will be a mock low level driver. The complexity of this mock depends on the expected behavior of the driver. This particular approach has two distinct advantages. First, it allows intercepting expected non-deterministic behavior of the driver, which would otherwise complicate the test. For instance, hardware-aware temperature sensor code might effectively call a digital temperature sensor, as shown in listing 5.1. Yet the value it receives will depend on the measured room temperature, which might not be stable. An example implementation is given in listing 5.2.
/* Non-deterministic test */
TEST(getTemperatureTest)
{
    TemperatureSensorDriver realTempSensor;
    Thermometer t(realTempSensor);
    /* check if temperature is near room temperature +/- 5 degrees (Celsius) */
    CHECK_CLOSE(t.getTemperature(), 20, 5);
}
class Thermometer
{
public:
Thermometer(TemperatureSensor tempSensor);
virtual ~Thermometer();
double getTemperature();
private:
TemperatureSensor tempSensor;
};
Thermometer::Thermometer(TemperatureSensor tempSensor)
{
/*Constructor injection*/
this->tempSensor = tempSensor;
}
double Thermometer::getTemperature()
{
int rawValue = tempSensor.read(); /* Call low level driver */
/* ...
Code under test:
rawValue conversion to temperatureInCelsius*/
return temperatureInCelsius;
}
However, when a mock is called, the returned value can be fixed, as shown in listing 5.3.
TEST(getTemperatureTest)
{
    TemperatureSensorMock mockTempSensor;
    Thermometer t(mockTempSensor);
    /* check if the temperature conversion is correct */
    CHECK_EQUAL(t.getTemperature(), 20.5);
}
class TemperatureSensorMock : public TemperatureSensor
{
public:
TemperatureSensorMock();
virtual ~TemperatureSensorMock();
int read();
};
int TemperatureSensorMock::read()
{
return 0x147F;
}
The previous code example also provided an indication of the second advantage of using mocks to isolate hardware-aware code for testing purposes. A consequence of the three-tier architecture is that unit tests for hardware-aware code will typically test from the hardware independent tier. This has two reasons. On the one hand, a unit test typically approaches the unit under test as a black box. On the other hand, implementation details of hardware-aware and hardware-specific code are encapsulated, which means only the public interface is available for testing purposes. In order to deal with these unit test limitations, breaking encapsulation for testing is not considered an option: it is not only a harmful practice, but also superfluous, as mocks enable testing the man in the middle, i.e. the hardware-aware code. An overview of this pattern is shown in figure 5.2.
Figure 5.2: Isolating the hardware-aware tier with a mock hardware component.
Fundamental to the idea of using mocks is that the actual assertion logic is provided in the mock
instead of in the test.
Whereas tests can test the publicly available subroutines, a mock, which is
put into the lower tier can assert some internals of the code under test, which would otherwise be
inaccessible. Listing 5.4 provides an example.
TEST(TestTempSensorCalls)
{
    TemperatureSensorMock* mockTempSensor = new TemperatureSensorMock();
    mockTempSensor->expectedCalls(2); /* prepare the mock what to expect */
    Thermometer* thermometer = new Thermometer(mockTempSensor);
    /* method under test which will in turn call the mock methods */
    thermometer->getTemperature();
    CHECK(mockTempSensor->verify()); /* verify the mock */
}

class Thermometer {
public:
    Thermometer(TemperatureSensor* tempSensor);
    virtual ~Thermometer();
    double getTemperature();
private:
    TemperatureSensor* tempSensor;
};

Thermometer::Thermometer(TemperatureSensor* tempSensor)
{
    this->tempSensor = tempSensor;
}

double Thermometer::getTemperature()
{
    tempSensor->reset();
    tempSensor->write();
    return tempSensor->read();
}

class TemperatureSensorMock : public TemperatureSensor, public Mock
{
public:
    TemperatureSensorMock();
    virtual ~TemperatureSensorMock();
    double read()  { this->record(0); return 0; } /* records the call sequence */
    void write() { this->record(1); }
    void reset() { this->record(2); }
};
In the test of example 5.4 a mock object is created and instructed what to expect. In this case a sequence of calls is expected, which will be indicated by the values 2, 1 and 0. Only if the entire sequence is called in that order will the test pass. This is the most common application of mock objects, but it is also possible to register a minimal number of calls on the same subroutine, or even time how long a subroutine takes to execute. Once the mock is prepared, it is injected, through constructor injection, in place of the hardware-specific tier.
Next, the method under test is executed, but rather than checking its result, the verify method of the mock is called. This method returns whether or not the sequence of calls was executed in the right order.
Note that the mock inherits both the interface of the hardware component, in this case the TemperatureSensor class, and the implementation of the Mock class. The respective hardware component type is inherited so the mock can be called as if it were a real driver, whereas the Mock class provides boilerplate code to record and verify calls upon the mock driver. Although multiple inheritance is used in this example, it is by no means a prerequisite to set up mocks. Furthermore, as long as the driver interface is an abstract class, any multiple-inheritance-related diamond problems are avoided.
The other preferred approach to develop hardware-aware code is Remote prototyping. As a strategy, Remote prototyping is optimized to deal with hardware-aware code: developing a part of code which is loosely coupled to a hardware driver only requires the hardware driver functions to be remotely addressable. Once this condition is fulfilled, it enables developing the rest of the code on host without the need to develop mocks.
Yet when considering testing hardware-aware code as a black box, the addition of mocks allowed testing in a bottom-up fashion. As Remote prototyping does not require including mocks, it appears to be limited to the typical top-down testing style. To make matters worse, injecting mocks with Remote prototyping is a convoluted process, which is not recommended.
Nevertheless mocks, or at least similar functionality, can be introduced in a typical Remote prototyping process. Instead of injecting a mock, the respective stub can be enhanced with the aforementioned assertion logic. This creates a mock/stub hybrid, which on the one hand delegates calls to target and on the other hand records and validates the calls from the code under test. Figure 5.3 presents this mock/stub hybrid in the context of remote prototyping hardware-aware code.
Figure 5.3: Remote prototyping with a mock/stub hybrid, which can assert the call order of the code under test while delegating the calls to the target.
When the read method of the temperature sensor stub is called, the call is relayed through the broker; it is then checked whether the remote call was successful. In the positive case the call is registered and a fixed value is returned. Otherwise an error value is returned, which ensures the test will fail.
/*Host*/
int TemperatureSensor::read()
{
broker.call(id, "read");
if (broker.intReturn() != -1)
{
this->record(0);
return 40;
}
return -1;
}
The hardware-specific tier is the hardest to develop test-first, as it requires elaborate mocks to obtain hardware abstraction. Although it can be accomplished for hardware-specific code, as demonstrated in an earlier listing, developing strictly according to this strategy can be a very time-absorbing activity. This would lead to a diminishing return on investment and could downright turn into a loss when compared to traditional development methods. As hardware-specific code is the least portable, setting up tests with special directives for either platform could be an answer; however, these usually litter the code and are only a suboptimal solution.
Optimally, the amount of hardware-specific code is reduced to a minimum and isolated as much as possible, to be called by hardware-aware code. A pragmatic approach for hardware-specific development is to develop the low-level drivers with a traditional method and test this code afterwards. For both Test on host and Remote prototyping this results in a different development cycle.
Test on host allows the concurrent development of hardware independent and hardware-specific code, as shown in figure 5.4. As previously indicated, hardware independent code naturally lends itself to test-first development, while hardware-specific driver code can be tested afterwards. Once a minimal interface and implementation of the hardware-specific code is available, hardware-aware code development can be started. Hardware mocks resembling hardware-specific behavior are used to decouple the code from the target and enable running the tests on host. Once both the hardware-specific and the hardware-aware code of the driver have reached a stable state, the hardware-aware part can be migrated to target, and instead of the mock the real hardware-specific code can be used. At that moment hardware independent code can be integrated with mocks which provide the hardware-aware interface. Finally, all code is combined on the target system to perform the system tests.
[Figure 5.4: Test on host flow — hardware independent code, the hardware-aware driver and the applications are developed test-first with unit tests, hardware mocks and driver mocks; driver verification and system tests follow on target.]
[Figure 5.5: Remote prototyping flow — hardware independent code and applications are developed test-first on host, addressing stable drivers on target through remote calls; code migration and system tests conclude on target.]
In the Remote prototyping flow, hardware-aware code is developed on host once the hardware-specific drivers have been tested with Remote testing. Once a stable state has been reached, the hardware-aware code is migrated so that it is incorporated in the remotely addressable target system.
5.2 Testing patterns
As demonstrated in earlier examples, Dependency Injection is a commonly used design pattern for testing purposes. Other patterns can be useful in a testing context as well. For instance, two classic design patterns prove to be useful in dealing with testability issues raised by the embedded cross-platform development environment.
On the one hand there is the Adapter pattern, also known as Wrapper, which is used in two situations. First, it is applied as a class adapter in remoting or mocking instrumentation, when unrelated testing concerns need to be provided in a uniform way to the classes under test. Moreover, an object adapter can be used to address specific mocked methods when an inheritance based mock principle is used.
Figure 5.6: A class diagram of an object adapter to solve an access modifier issue. (The protected getPin() and getPort() members of TempSensor cannot be called from a test; TempSensorAdapter exposes them publicly.)
In figure 5.6, part of a class diagram for a test case is shown in which the particular access modifiers of members of the code under test present problems. Accessing the private methods of a module for testing purposes is a typical case of glass-box testing, which is avoided in TDD. However, it is reasonable to exercise a bit more control to deal with the non-deterministic nature of hardware-related functions. Nevertheless, accessing specific internal methods leads to brittle tests: should the specific internal member change afterwards, this will typically break the test, which incurs an additional overhead to fix the test in question. So, a rule of thumb in applying this particular variation of the adapter pattern is to use it only once the code under test has reached a certain stable state.
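A sketch of such an adapter, assuming TempSensor takes its port and pin in the constructor as suggested by figure 5.6 (the constructor signature is an assumption):

/* Promotes the protected getPin() and getPort() of TempSensor to public,
   so a test can inspect them without touching TempSensor itself. */
class TempSensorAdapter : public TempSensor
{
public:
    TempSensorAdapter(int port, int pin) : TempSensor(port, pin) {} /* assumed constructor */
    using TempSensor::getPin;
    using TempSensor::getPort;
};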
Another design pattern which can be used to enable embedded software testing is the State pattern. This pattern allows an object to change one of its methods when the context state changes. In fact, the concrete instance of the object in question is maintained by the context, as this pattern relies on polymorphism. The State pattern is used for the same reason as Dependency Injection, but a fundamental difference exists. Whereas Dependency Injection allows setting a desired composition of objects from the perspective of a test, the State pattern incorporates this capability in production code. Namely, instead of a testing feature, it becomes a feature of the production code, which can be exploited at run-time at the cost of some performance.
Figure 5.7: A class diagram of the State pattern, which allows more flexibility in testing at the cost of performance. (The hardware dependency of TempSensor is replaced by a TempSensorState object with a read() method, implemented by TempSensorStateReal and by TempSensorStateMock, which adds setSign(int) and setMagnitude(int).)
An example is given in figure 5.7: TempSensor will call the desired read method, either the mock or the real one, depending on its state.
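A minimal sketch of this structure, following the names in figure 5.7 with assumed method bodies:

class TempSensorState
{
public:
    virtual int read() = 0;
    virtual ~TempSensorState() {}
};

class TempSensorStateMock : public TempSensorState
{
public:
    TempSensorStateMock() : magnitude(0) {}
    void setMagnitude(int m) { magnitude = m; }
    virtual int read() { return magnitude; }   /* deterministic value for tests */
private:
    int magnitude;
};

class TempSensor
{
public:
    TempSensor(TempSensorState* initial) : state(initial) {}
    void setState(TempSensorState* s) { state = s; }   /* switch between real and mock */
    int getTemperature() { return state->read(); }
private:
    TempSensorState* state;
};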
5.3 Embedded patterns
This section describes patterns, which provide the solution for specic embedded issues related to
TDD. The 3-tier TDD pattern is in fact a specic adaption of the Layered pattern, which proscribes
to separate code into a hierarchical organization based on its level of abstraction.
However, embedded issues regarding TDD encompass more than organizational problems. Another
important aspect of embedded is its precarious memory management. The memory problems have a
twofold classication.
Either these problems are related to the limited size of embedded memory.
Otherwise embedded programs are expected to run indenitely or practically whenever the hardware
fails. This means that no memory fragmentation is allowed, especially when spare memory is limited.
A typical solution to this problem is the Static Allocation pattern, which prescribes allocating all memory at start-up, provided the worst-case amount of needed memory fits into the available memory. This solution only suits a subset of embedded programs, mostly embedded control programs which only need a fixed history of events.
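The following listing sketches the Static Allocation idea for a hypothetical temperature log; the class name and sizes are illustrative assumptions. The point is that the worst-case amount of memory is reserved once, so no allocation, and hence no fragmentation, occurs at run time.

#include <cstddef>

const std::size_t kMaxSamples = 128;          // worst-case history of events

class TemperatureLog {
public:
    TemperatureLog() : count(0) {}
    void add(int sample) {
        if (count < kMaxSamples) {            // beyond the worst case nothing is stored
            samples[count++] = sample;
        }
    }
    std::size_t size() const { return count; }
private:
    int samples[kMaxSamples];                 // reserved at start-up, no new/malloc later
    std::size_t count;
};

// A single, statically allocated instance; all memory is claimed before main() runs.
static TemperatureLog temperatureLog;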
Considering the limited memory available on target and the single instantiation of most driver components, dynamic memory allocation is not desired in embedded software. Yet, TDD requires dynamic memory allocation to allow flexible tests on the target system. This introduces the responsibility to manage memory, namely creation, deletion and avoiding fragmentation. By all means this only affects the development process and unit verification of the system, as in production this flexibility is no longer required. However, patterns like Dependency Injection and strategies like Test on host and Remote prototyping typically influence the software design to such a degree that static allocation for production code is no longer an option.
In this case two embedded memory management patterns can be used to deal with this issue; minimal sketches of both follow the list.
1. Fixed Sized Buffer deals with memory fragmentation by dynamically allocating memory chunks from a fixed set of sizes. Part of the memory might be wasted when applying this pattern, but it ensures that a program will never crash because of memory fragmentation issues.
2. Garbage Compactor is a garbage collector which solves memory fragmentation problems. Instead of a regular mark-and-sweep action, the garbage compactor copies all live objects to another heap. This action can be done atomically, however it requires twice the memory space and also a way to update pointer references while the system is running. As the objects are reallocated, they can be put in a contiguous region of memory, which solves the fragmentation problem. The major complications with this pattern are the non-determinism introduced by the compactor interruptions and the computationally intensive operation of copying all live objects.
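A minimal sketch of the Fixed Sized Buffer pattern is given below, assuming a single block size; a real implementation would keep one such pool per size in the fixed set. All names in the listing are illustrative.

#include <cstddef>

// Pool of BlockCount blocks of BlockSize bytes each; allocation and release
// are O(1) and can never fragment the memory.
template <std::size_t BlockSize, std::size_t BlockCount>
class FixedSizePool {
public:
    FixedSizePool() : freeList(0) {
        // Thread every block into the free list at start-up.
        for (std::size_t i = 0; i < BlockCount; ++i) {
            storage[i].next = freeList;
            freeList = &storage[i];
        }
    }
    void* allocate() {
        if (freeList == 0) return 0;          // pool exhausted
        Block* b = freeList;
        freeList = b->next;
        return b;
    }
    void release(void* p) {                   // return a block to the pool
        Block* b = static_cast<Block*>(p);
        b->next = freeList;
        freeList = b;
    }
private:
    union Block {
        Block* next;                          // link while the block is free
        unsigned char payload[BlockSize];     // raw storage while in use
    };
    Block storage[BlockCount];
    Block* freeList;
};

A request of, for instance, 20 bytes would be served from a FixedSizePool<32, 16>, wasting 12 bytes, which is the trade-off mentioned in the first item. The second listing sketches only the copy-and-compact step of the Garbage Compactor, under strong simplifying assumptions: objects are referenced through a handle table instead of raw pointers, and allocation as well as liveness tracking are omitted.

#include <cstddef>
#include <cstring>

const std::size_t kHeapSize = 1024;
const std::size_t kMaxObjects = 32;

class GarbageCompactor {
public:
    GarbageCompactor() : from(heapA), to(heapB) {
        for (std::size_t i = 0; i < kMaxObjects; ++i) {
            handles[i] = 0;
            sizes[i] = 0;
        }
    }
    // Copy every live object to the spare heap, contiguously and in order,
    // then swap the heaps; each handle is updated as its object is moved.
    void compact() {
        std::size_t offset = 0;
        for (std::size_t i = 0; i < kMaxObjects; ++i) {
            if (handles[i] != 0) {
                std::memcpy(to + offset, handles[i], sizes[i]);
                handles[i] = to + offset;     // update the reference
                offset += sizes[i];
            }
        }
        unsigned char* tmp = from;            // the old heap becomes the spare one
        from = to;
        to = tmp;
    }
private:
    unsigned char heapA[kHeapSize];
    unsigned char heapB[kHeapSize];
    unsigned char* from;                      // heap currently in use
    unsigned char* to;                        // spare heap, used during compaction
    unsigned char* handles[kMaxObjects];      // one slot per live object
    std::size_t sizes[kMaxObjects];
};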
5.4 Further reading
An alternative for 3-tier TDD is the MCH pattern by Karlesky et al.[24, 39, 40], which is shown in figure 5.8. It consists of a Model, which represents the internal state of the hardware. Next is the Hardware, which represents the drivers. Finally the Conductor contains the control logic, which gets or sets the state of the Model and sends commands to or receives triggers from the Hardware. As this system is decoupled, it is possible to replace each component with a mock for testing purposes.
[Figure 5.8: The Model-Conductor-Hardware pattern (after Atomic Object, 2007). The (Mock) Conductor sets and reads the current state of the (Mock) Model, and sends commands to and receives triggers from the (Mock) Hardware.]
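The following listing is a minimal sketch of the MCH decomposition; apart from the names Model, Conductor and Hardware, the interfaces and the integer-based state, command and trigger encoding are illustrative assumptions.

class Model {
public:
    virtual ~Model() {}
    virtual void setState(int state) = 0;     // internal state of the hardware
    virtual int getState() const = 0;
};

class Hardware {
public:
    virtual ~Hardware() {}
    virtual void command(int cmd) = 0;        // command sent by the conductor
};

// The Conductor contains the control logic and only depends on the two
// interfaces, so a test can pass in mock implementations of Model and Hardware.
class Conductor {
public:
    Conductor(Model& m, Hardware& h) : model(m), hardware(h) {}
    void trigger(int event) {                 // called when the hardware raises a trigger
        model.setState(event);
        hardware.command(model.getState());
    }
private:
    Model& model;
    Hardware& hardware;
};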
Gamma et al.[29] wrote the first book on design patterns, which is a selection of the most commonly used design patterns, such as Adapter and State. Other literature describes embedded-related patterns, for instance the Layered, Static Allocation, Fixed Sized Buffer and Garbage Compactor patterns. Beck[14] concludes his seminal book on TDD with a number of patterns which describe practices related to TDD in general. Meszaros[47] provides a pattern language, i.e. strategy patterns, design patterns and coding idioms, based on xUnit testing automation.
Chapter 6
In conclusion
Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.
Edsger Dijkstra[21]
This manual is the transcript of a two-year research project. In this time some topics were identified which could not be properly addressed due to time limitations. Section 6.1 provides an overview of these topics. Section 6.2 summarizes the most important guidelines to conduct Test-Driven Development for embedded software. Finally, a summary of contributions is given in section 6.3; these contributions feature predated or early forms of the concepts in this manual.
6.1 Future work
1) Hardware mocking
No further elaboration is given on the subject here, but since hardware mocks are extensively used in the Test on host strategy, automating their creation could lift a part of the work from the programmer.
On the other hand, other techniques to mock hardware could be considered, for instance simulators, emulators or complex hardware models. At first glance these techniques seem too complex to be incorporated in a TDD style of development. However, comprehensive hardware simulators might prove their worth in other stages of software development, namely during integration or specification.
2) Behavior-Driven Development
Behavior-Driven Development (BDD) expresses the desired functionality as an executable specification. In turn this specification can be executed, indicating whether the desired functionality has been implemented. BDD for embedded has some very specific issues, since functionality or features in embedded systems are mostly a combination of hardware and software.
3) Code generation for remoting
Applying the Remote strategies requires a considerable effort. However, the boilerplate code which is needed for the remoting infrastructure could be generated automatically once the interface has been defined on host.
4) Evaluation model
To compare the development strategies, a qualitative evaluation model has been developed (section 3.5). This model allows conducting quantitative case studies in a uniform and standardized manner. Since the model is a simplified representation of the actual process, it must be validated first; a number of quantitative case studies could indicate whether the model is correct. These would also allow further refinement of the model, so that it incorporates additional parameters. Finally, an attempt could be made to generalize the model to the development of other types of software.
5) Testing
TDD is strongly involved with testing; however, as a testing strategy it does not suffice. Real-time execution of embedded software is in some cases a fundamental property of the embedded system. However, it is impossible to test this property while developing, as premature optimization leads to a degenerative development process. Such tests are better placed later in the development process. Continuous Integration (CI) is a practice which advocates building a working system as often as possible. A build which fails overnight indicates potential problems. Adding real-time specification tests to a nightly build might be able to detect some issues, yet some reservation towards the value of these tests on a low level must be kept. Testing concurrent software is another issue which cannot be covered by tests devised in a TDD process. As multi-core processors are getting incorporated in embedded systems, these issues will only gain in importance.
6.2 Conclusion
Test-Driven Development has proven to be a viable alternative to traditional development, even for embedded software. Yet a number of considerations have to be made. Most importantly, TDD is a fast cycle, yet embedded software uploads are inherently slow. To deal with this, as shown in the strategies, it is of fundamental importance to develop as much as possible on host. Therefore Remote prototyping or Test on host is preferred. Choosing between the former and the latter is entirely dependent on the target embedded system, tool availability and personal preference. Should the overhead of one of these strategies be greatly reduced, the balance may shift in its favor. At the moment of writing, Test on host is the most popular, but Remote prototyping might present a worthy alternative.
Besides Remote testing and prototyping, the main contribution of this manual and the research it describes is 3-tier TDD. This pattern allows isolating hardware and non-deterministic behavior, which are both prerequisites for test automation. It presents a main guideline which is not only applicable to development with TDD, but generally relevant for all embedded software development. Namely, minimizing the hardware-specific layer leads to a modular design, loosely coupled to the hardware system. Such a design is more testable, thus its quality can be assured. Furthermore, the software components can be reused over different hardware platforms. This is not only a benefit in the long run, when hardware platform upgrades are to be expected; it also helps the hardware and software integration phase. In this phase, unexpected differences in hardware specifications can be more easily resolved by changing the software component. Moreover, the existing tests guarantee that changing hardware-specific code to fit the integration does not break any higher-tier functionality, or at least that any breakage will be detected by a red bar.
6.3 Summary of contributions
J. Boydens. TDD4ES: Testgedreven ontwikkeling van embedded software. Presented at Academia-to-Business Forum, DSP Valley, 2009.
J. Boydens. Test driven development: About turning your requirements into executable code. Presented at Technology Seminar: software, 2009.
J. Boydens and P. Cordemans. Embedded Software Development Driven by Tests. DSP Valley
Newsletter, issue 3, volume 11, pp. 6 - 7, June - July 2010.
bedded software. Proceedings of the Ninth International Conference and Workshop on Ambient
Intelligence and Embedded Systems, 2010.
Electronics-ET, 2010.
P. Cordemans. Test-Driven Development of software for embedded systems. Presented at Onderzoekstreffen, KATHO-KHBO, 2011.
J. Boydens. Presented at
Glossary
This glossary presents the definitions of common technology-related terms used in the TDD4ES project. When applicable, a reference to a standard literature source is provided.
A
Acceptance test: Validation test with respect to user needs, requirements, and business processes
conducted to determine whether or not to accept the system. [31]
Agile: Software development methodology, based upon four values and twelve principles as described in the Agile Manifesto. [9]
B
Black-box testing: Testing, either functional or non-functional, without reference to the internal
structure of the component or system. [31]
Broker: A mechanism for invoking operations on a procedure in a remote process. [46]
C
Co-design: Process of enabling concurrent development of hardware and software of an embedded
system. [57]
Continuous Integration (CI): Software development practice where members of a team integrate their work frequently, usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. [27]
Co-verification: Process of verifying that embedded system software runs correctly on the hardware design before the design is committed for fabrication. [42]
D
Design pattern: A general, reusable solution to a commonly occurring problem in software design.
E
Embedded system: Combination of hardware and software designed to perform one or a limited
set of dedicated functions.
H
Host: Desktop environment to develop embedded software, by means of (partial) simulation of the
embedded system. [17]
I
Integration test: Test to expose defects in the interfaces and interaction between integrated components. [31]
Invariant: A condition, which must be satisfied before, during and after the execution of a subroutine. [48]
L
Legacy code: Source code, in which the functionality is desired for continued use, but changes to
that source code are hard or next to impossible, because it is a legacy from an older project, third
party or older technology.
M
Mock: A fake component in the system that decides whether the unit test has passed or failed. It
does so by verifying whether the component under test interacted as expected with the fake component.
[51]
Model (ambiguous): (1) Formal: A system description at a higher level of abstraction. [42]
Model (ambiguous): (2) MCH pattern: Representation of the internal state of the hardware. [39]
P
Postcondition: A condition, which must be satisfied after the execution of a subroutine. [48]
Precondition: A condition, which must be satisfied before the execution of a subroutine. [48]
R
Refactor: Process of changing a software system in such a way that it does not alter the external
behavior of the code yet improves its internal structure. [25]
Regression test: Test to ensure that defects have not been introduced or uncovered in unchanged
areas of the software as a result of the changes made. [31]
Remoting: Inter-process communication that allows a computer program to cause a subroutine or
procedure to execute in another address space without the programmer explicitly coding the details
for this remote interaction. [12]
S
Scrum: Incremental and iterative agile software development process. [19]
Seam: Place where you can alter the behavior in your program without editing in that place. [23]
Sprint: Thirty-day iteration, resulting in a potentially shippable product increment. [19]
Stub (ambiguous): Replacement of the behavior of a component on which the component under test depends. [31] In this manual we refer to this stub definition as a mock.
Stub (ambiguous): Interface of a component, which uses an inter-process communication mechanism to transmit the call to a broker.[46]
T
Target: The embedded system. [17]
Test-Driven Development (TDD): Write a failing automated test before changing any code. [15]
U
Unit test: Test of a minimal software item that can be tested in isolation. [31]
User story: Description of functionality that is valuable to a user. The details of these stories are
covered by tests that can be used to determine whether a story is complete. [19]
V
V-Model: A framework to describe the linear software development life cycle activities from requirements specifications to maintenance. The V-model illustrates how testing activities can be integrated
into each phase of the software development life cycle. [31]
W
Waterfall model: Sequential software development process, where software is first fully designed, then built, tested and finally made ready as a final product. [17]
White-box testing: Testing based on an analysis of the internal structure of a component or system.
[31]
X
eXtreme Programming (XP): Discipline of the business of software development that focuses the
whole team on common, reachable goals.
Bibliography
[1] CppUTest, http://www.cpputest.org/.
[2] CxxTest, http://cxxtest.tigris.org/.
[3] Embedded unit testing framework for embedded C, http://embunit.sourceforge.net/embunit/ch01.html.
[4] GoogleTest: Google C++ testing framework, http://code.google.com/p/googletest/.
[5] Host / target testing with the LDRA tool suite, http://www.ldra.com/host_trg.asp.
[6] JTN002 - MinUnit - minimal unit testing framework for C, http://www.jera.com/techinfo/jtns/jtn002.html.
[7] Object Management Group, http://www.omg.org/.
[8] Unity - test framework for C, http://sourceforge.net/apps/trac/unity/wiki.
[9] Manifesto for agile software development, 2001.
[10] Common Object Request Broker Architecture (CORBA) specification, version 3.1, 2008.
[11] CORBA/e: http://www.corba.org/corba-e/corba-e_flyer_v2.pdf, 2008.
[12] Remote procedure call protocol specification version 2, 2009.
[13] Mbed microcontroller cookbook: using-rpc, 2011.
[14] K. Beck. Test-Driven Development: By Example. Addison-Wesley, 2003.
Addison-Wesley, 2005.
[20] T. DeMarco.
Systems.
Addison-Wesley, 2004.
Addison-Wesley, 2003.
[23] M. Feathers. Working Effectively with Legacy Code. Prentice Hall, 2004.
[24] M. Fletcher, W. Bereza, M. Karlesky, and G. Williams. Evolving into embedded development. In
[25] M. Fowler. Refactoring: Improving the Design of Existing Code. Addison-Wesley, 1999.
[26] M. Fowler. Inversion of control containers and the dependency injection pattern, http://martinfowler.com/articles/injection.html, 2004.
[27] M. Fowler. Continuous integration. Technical report, ThoughtWorks, 2006.
[28] M. Fowler. Testcancer, http://martinfowler.com/bliki/testcancer.html, 2007.
[29] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.
Information
2004.
[33] J. Grenning. Test-driven development for embedded C++ programmers. Technical report, Renaissance Software Consulting, 2002.
[34] J. Grenning. Progress before hardware.
O'Reilly, 2004.
Addison-Wesley, 2000.
[39] M. Karlesky, W. Bereza, and C. Erickson. Effective test driven development for embedded software. In
[40] M. Karlesky, W. Bereza, G. Williams, and M. Fletcher. Mocking the embedded world: Test-driven
development, continuous integration, and design patterns. In
[41] R. Koss and J. Langr. Test driven development in c. In
[42] J. Labrosse, J. Ganssle, R. Oshana, C. Walls, K. Curtis, J. Andrews, D. Katz, R. Gentile, K. Hyder, and B. Perrin. Elsevier, 2008.
[43] N. Llopis. Games from within: Exploring the C++ unit testing framework jungle, http://gamesfromwithin.com/exploring-the-c-unit-testing-framework-jungle.
[44] N. Llopis and C. Nicholson. Unittest++, http://unittest-cpp.sourceforge.net/.
[45] R. Martin.
[46] C. McHale. www.CiaranMcHale.com, 2007.
[47] G. Meszaros. xUnit Test Patterns: Refactoring Test Code. Addison-Wesley, 2007.
[48] B. Meyer.
Addison-Wesley, 2002.
[50] N. Nagappan, M. Maximilien, T. Bhat, and L. Williams. Realizing quality improvement through test driven development: results and experiences of four industrial teams. Empirical Software Engineering.
[51] R. Osherove. The Art of Unit Testing. Manning, 2009.
Embedded agile: An experience report. In ESC Embedded Systems Conference, Boston, 2004.
[54] E. Shihab, Z. Jiang, B. Adams, and A. Hassan. Prioritizing the creation of unit tests in legacy
software systems. In
O'Reilly, 2007.
[56] M. Siniaalto. Test driven development: empirical body of evidence. Technical report, ITEA, 2006.
[57] F. Vahid and T. Givargis. Embedded System Design: A Unified Hardware/Software Introduction. John Wiley & Sons, 2002.