Control Systems PDF
en.wikibooks.org
December 26, 2019
On the 28th of April 2012 the contents of the English as well as German Wikibooks and Wikipedia
projects were licensed under Creative Commons Attribution-ShareAlike 3.0 Unported license. A
URI to this license is given in the list of figures on page 345. If this document is a derived work
from the contents of one of these projects and the content was still licensed by the project under
this license at the time of derivation this document has to be licensed under the same, a similar or a
compatible license, as stated in section 4b of the license. The list of contributors is included in chapter
Contributors on page 337. The licenses GPL, LGPL and GFDL are included in chapter Licenses on
page 355, since this book and/or parts of it may or may not be licensed under one or more of these
licenses, and thus require inclusion of these licenses. The licenses of the figures are given in the list of
figures on page 345. This PDF was generated by the LaTeX typesetting software. The LaTeX source
code is included as an attachment (source.7z.txt) in this PDF file. To extract the source from
the PDF file, you can use the pdfdetach tool included in the poppler suite, or the http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ utility. Some PDF viewers may also let you save
the attachment to a file. After extracting it from the PDF file you have to rename it to source.7z.
To uncompress the resulting archive we recommend the use of http://www.7-zip.org/. The LaTeX
source itself was generated by a program written by Dirk Hünniger, which is freely available under
an open source license from http://de.wikibooks.org/wiki/Benutzer:Dirk_Huenniger/wb2pdf.
Contents
1 Preface 3
2 Introduction 5
2.1 This Wikibook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 What are Control Systems? . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Classical and Modern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 Who is This Book For? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.5 What are the Prerequisites? . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.6 How is this Book Organized? . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.7 Differential Equations Review . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.8 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.9 Branches of Control Engineering . . . . . . . . . . . . . . . . . . . . . . . 12
2.10 MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.11 About Formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 System Identification 17
3.1 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 System Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Initial Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.4 Additivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.5 Homogeneity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.6 Linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.7 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.8 Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.9 Time-Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.10 LTI Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.11 Lumpedness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.12 Relaxed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.13 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.14 Inputs and Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5 System Metrics 33
5.1 System Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.2 Standard Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.3 Steady State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5.4 Target Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.5 Rise Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.6 Percent Overshoot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.7 Steady-State Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.8 Settling Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.9 System Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.10 System Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.11 Visually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6 System Modeling 43
6.1 The Control Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.2 External Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.3 Internal Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
6.4 Complex Descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6.5 Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6.6 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6.7 Modeling Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6.8 Manufacture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7 Transforms 49
7.1 Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.2 Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.3 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
7.4 Complex Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.5 Euler's Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.6 MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
8 Transfer Functions 61
8.1 Transfer Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
8.2 Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
8.3 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
8.4 Convolution Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.5 Using the Transfer Function . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.6 Frequency Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
10 System Delays 89
10.1 Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
10.2 Time Shifts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
10.3 Delays and Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
10.4 Delay Margin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
10.5 Transform-Domain Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
10.6 Modified Z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
12 State-Space Equations 97
12.1 Time-Domain Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
12.2 State-Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
12.3 State Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
12.4 Multi-Input, Multi-Output . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
12.5 State-Space Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
12.6 Obtaining the State-Space Equations . . . . . . . . . . . . . . . . . . . . . 103
12.7 State-Space Representation . . . . . . . . . . . . . . . . . . . . . . . . . . 105
12.8 Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
12.9 Note on Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
12.10 MATLAB Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
19 Gain 149
19.1 What is Gain? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
19.2 Responses to Gain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
19.3 Gain and Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
25 Stability 211
25.1 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
25.2 BIBO Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
25.3 Determining BIBO Stability . . . . . . . . . . . . . . . . . . . . . . . . . . 212
25.4 Poles and Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
25.5 Poles and Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
25.6 Transfer Functions Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . 214
25.7 State-Space and Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
25.8 Marginal Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
47 Contributors 337
48 Licenses 355
48.1 GNU GENERAL PUBLIC LICENSE . . . . . . . . . . . . . . . . . . . . . 355
48.2 GNU Free Documentation License . . . . . . . . . . . . . . . . . . . . . . . 356
48.3 GNU Lesser General Public License . . . . . . . . . . . . . . . . . . . . . . 357
1 Preface
This book will discuss the topic of Control Systems, which is an interdisciplinary engineer-
ing topic. Methods considered here will consist of both ”Classical” control methods, and
”Modern” control methods. Also, discretely sampled systems (digital/computer systems)
will be considered in parallel with the more common analog methods. This book will not
focus on any single engineering discipline (electrical, mechanical, chemical, etc.), although
readers should have a solid foundation in the fundamentals of at least one discipline.
This book will require prior knowledge of linear algebra, integral and differential calculus,
and at least some exposure to ordinary differential equations. In addition, prior knowledge
of integral transforms, specifically the Laplace and Z transforms, will be very beneficial.
Also, prior knowledge of the Fourier Transform will shed more light on certain subjects.
Wikibooks with information on calculus topics or transformation topics required for this
book will be listed below:
• Calculus1
• Linear Algebra2
• Signals and Systems3
• Digital Signal Processing4
1 https://en.wikibooks.org/wiki/Calculus
2 https://en.wikibooks.org/wiki/Linear%20Algebra
3 https://en.wikibooks.org/wiki/Signals%20and%20Systems
4 https://en.wikibooks.org/wiki/Digital%20Signal%20Processing
2 Introduction
This book was written at Wikibooks, a free online community where people write open-
content textbooks. Any person with internet access is welcome to participate in the creation
and improvement of this book. Because this book is continuously evolving, there are no
finite ”versions” or ”editions” of this book. Permanent links to known good versions of the
pages may be provided.
This simple example can be complex to both users and designers of the motor system. It
may seem obvious that the motor should start at a higher voltage, so that it accelerates
faster. Then we can reduce the supply back down to 10 volts once it reaches ideal speed.
This is clearly a simplistic example, but it illustrates an important point: we can add
special ”Controller units” to preexisting systems, to improve performance and meet new
system specifications.
1 https://en.wikipedia.org/wiki/Control%20system
2 https://en.wikipedia.org/wiki/Control%20engineering
Here are some formal definitions of terms used throughout this book:
Control System
A Control System is a device, or a collection of devices that manage the behavior of
other devices. Some devices are not controllable. A control system is an interconnection
of components connected or related in such a manner as to command, direct, or regulate
itself or another system.
A control system can also be viewed as a conceptual framework for designing systems with
regulation and/or tracking capabilities that give a desired performance. For this there must
be a set of measurable signals that indicate the performance, another set of measurable
signals that can influence the evolution of the system in time, and a third set of signals
which are not measurable but which disturb the evolution.
Controller
A controller is a control system that manages the behavior of another device or system.
Compensator
A Compensator is a control system that regulates another system, usually by condi-
tioning the input or the output to that system. Compensators are typically employed
to correct a single design flaw, with the intention of affecting other aspects of the design
in a minimal manner.
There are essentially two methods to approach the problem of designing a new control
system: the Classical Approach, and the Modern Approach.
The names "Classical" and "Modern" are somewhat misleading, because the group of
techniques labeled "Modern" actually dates back to the 1950s, while the "Classical"
techniques remain in everyday use. In terms of developing control systems, Modern methods
have been used to great effect more recently, while the Classical methods have been gradually
falling out of favor. Most recently, it has been shown that Classical and Modern methods
can be combined to exploit their respective strengths and compensate for their respective weaknesses.
Classical Methods, which this book will consider first, are methods involving the Laplace
Transform domain. Physical systems are modeled in the so-called "time domain", where
the response of a given system is a function of the various inputs, the previous system values,
and time. As time progresses, the state of the system and its response change. However,
time-domain models of systems frequently involve high-order differential equations, which
can be impossibly difficult for humans to solve and which modern computer systems sometimes
cannot solve efficiently. To counteract this problem, integral transforms, such as the Laplace
Transform and the Fourier Transform, can be employed to change an Ordinary Differential
Equation (ODE) in the time domain into an algebraic equation in the transform domain.
Once a given system has been converted into the transform domain, it can be manipulated
with greater ease and analyzed quickly by humans and computers alike.
Modern Control Methods, instead of changing domains to avoid the complexities of time-
domain ODE mathematics, convert the differential equations into a system of lower-order
time-domain equations called State Equations, which can then be manipulated using
techniques from linear algebra. This book will consider Modern Methods second.
A third distinction that is frequently made in the realm of control systems is to divide analog
methods (classical and modern, described above) from digital methods. Digital Control
Methods were designed to try and incorporate the emerging power of computer systems
into previous control methodologies. A special transform, known as the Z-Transform,
was developed that can adequately describe digital systems, but at the same time can be
converted (with some effort) into the Laplace domain. Once in the Laplace domain, the
digital system can be manipulated and analyzed in a very similar manner to Classical analog
systems. For this reason, this book will not make a hard and fast distinction between Analog
and Digital systems, and instead will attempt to study both paradigms in parallel.
Understanding of the material in this book will require a solid mathematical foundation.
This book does not currently explain, nor will it ever try to fully explain most of the
necessary mathematical tools used in this text. For that reason, the reader is expected to
have read the following wikibooks, or have background knowledge comparable to them:
Algebra3
Calculus4
The reader should have a good understanding of differentiation and integration. Partial
differentiation, multiple integration, and functions of multiple variables will be used occa-
sionally, but the students are not necessarily required to know those subjects well. These
advanced calculus topics could better be treated as a co-requisite instead of a pre-requisite.
Linear Algebra5
State-space system representation draws heavily on linear algebra techniques. Students
should know how to operate on matrices. Students should understand basic matrix op-
erations (addition, multiplication, determinant, inverse, transpose). Students would also
benefit from a prior understanding of Eigenvalues and Eigenvectors, but those subjects are
covered in this text.
Ordinary Differential Equations6
All linear systems can be described by a linear ordinary differential equation. It is bene-
ficial, therefore, for students to understand these equations. Much of this book describes
methods to analyze these equations. Students should know what a differential equation
is, and they should also know how to find the general solutions of first and second order
ODEs.
Engineering Analysis7
This book reinforces many of the advanced mathematical concepts used in the Engineering
Analysis8 book, and we will refer to the relevant sections in the aforementioned text for
further information on some subjects. This is essentially a math book, but with a focus
on various engineering applications. It relies on a previous knowledge of the other math
books in this list.
Signals and Systems9
The Signals and Systems10 book will provide a basis in the field of systems theory, of
which control systems is a subset. Readers who have not read the Signals and Systems11
book will be at a severe disadvantage when reading this book.
This book will be organized following a particular progression. First this book will discuss
the basics of system theory, and it will offer a brief refresher on integral transforms.
3 https://en.wikibooks.org/wiki/Algebra
4 https://en.wikibooks.org/wiki/Calculus
5 https://en.wikibooks.org/wiki/Linear%20Algebra
6 https://en.wikibooks.org/wiki/Ordinary%20Differential%20Equations
7 https://en.wikibooks.org/wiki/Engineering%20Analysis
8 https://en.wikibooks.org/wiki/Engineering%20Analysis
9 https://en.wikibooks.org/wiki/Signals%20and%20Systems
10 https://en.wikibooks.org/wiki/Signals%20and%20Systems
11 https://en.wikibooks.org/wiki/Signals%20and%20Systems
Section 2 will contain a brief primer on digital information, for students who are not necessarily
familiar with them. This is done so that digital and analog signals can be considered in
parallel throughout the rest of the book. Next, this book will introduce the state-space
method of system description and control. After section 3, topics in the book will use
state-space and transform methods interchangeably (and occasionally simultaneously). It is
important, therefore, that these three chapters be well read and understood before venturing
into the later parts of the book.
After the ”basic” sections of the book, we will delve into specific methods of analyzing and
designing control systems. First we will discuss Laplace-domain stability analysis techniques
(Routh-Hurwitz, root-locus), and then frequency methods (Nyquist Criterion, Bode Plots).
After the classical methods are discussed, this book will then discuss Modern methods of
stability analysis. Finally, a number of advanced topics will be touched upon, depending
on the knowledge level of the various contributors.
As the subject matter of this book expands, so too will the prerequisites. For instance,
when this book is expanded to cover nonlinear systems, a basic background knowledge
of nonlinear mathematics will be required.
2.6.1 Versions
This wikibook has been expanded to include multiple versions12 of its text, differentiated
by the material covered, and the order in which the material is presented. Each different
version is composed of the chapters of this book, included in a different order. This book
covers a wide range of information, so if you don't need all the information that this book
has to offer, perhaps one of the other versions would be right for you and your educational
needs.
Each separate version has a table of contents outlining the different chapters that are in-
cluded in that version. Also, each separate version comes complete with a printable version,
and some even come with PDF versions as well.
Take a look at the All Versions Listing Page13 to find the version of the book that is
right for you and your needs.
Implicit in the study of control systems is the underlying use of differential equations. Even
if they aren't visible on the surface, all of the continuous-time systems that we will be
looking at are described in the time domain by ordinary differential equations (ODE), some
of which are relatively high-order.
12 https://en.wikibooks.org/wiki/Control%20Systems%2FAll%20Versions
13 https://en.wikibooks.org/wiki/Control%20Systems%2FAll%20Versions
Let's review some differential equation basics. Consider the topic of interest from a bank.
The amount of interest accrued on a given principal balance (the amount of money you
put into the bank) P, is given by:
dP/dt = rP

Where dP/dt is the interest (rate of change of the principal), and r is the interest rate.
Notice in this case that P is a function of time (t), and can be rewritten to reflect that:
dP(t)/dt = rP(t)
To solve this basic, first-order equation, we can use a technique called ”separation of
variables”, where we move all instances of the letter P to one side, and all instances of
t to the other:
dP(t)/P(t) = r dt
And integrating both sides gives us:
ln |P (t)| = rt + C
This is all fine and good, but generally we like to get rid of the logarithm by exponentiating
both sides (raising e to the power of each side):

P(t) = e^(rt+C)
Where we can separate out the constant as such:
D = e^C
P(t) = De^(rt)
D is a constant that represents the initial conditions of the system, in this case the
starting principal.
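As a quick numerical check of this result, the following minimal Python sketch (assuming NumPy is available; the interest rate and starting principal are made-up values, not taken from the text) integrates dP/dt = rP with a simple forward-Euler step and compares the result against the closed-form solution P(t) = De^(rt).

```python
import numpy as np

r = 0.05          # made-up interest rate
P0 = 1000.0       # made-up starting principal (this is D, the initial condition)
dt = 0.001        # Euler integration step
t_end = 10.0

# Forward-Euler integration of dP/dt = r * P
P = P0
t = 0.0
while t < t_end:
    P += r * P * dt
    t += dt

# Closed-form solution P(t) = D * exp(r*t)
P_exact = P0 * np.exp(r * t_end)
print(P, P_exact)   # the two values should agree closely for a small step size
```

Shrinking the step size dt brings the numerical result arbitrarily close to the analytic solution, which is exactly what the separation-of-variables derivation above predicts.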
14 https://en.wikibooks.org/wiki/Calculus
2.8 History
The field of control systems started essentially in the ancient world. Early civilizations,
notably the Greeks and the Arabs, were heavily preoccupied with the accurate measurement
of time, the result of which were several "water clocks" that were designed and implemented.
However, there was very little in the way of actual progress made in the field of engineering
until the beginning of the Renaissance in Europe. Leonhard Euler (for whom Euler's
Formula is named) discovered a powerful integral transform, and Pierre-Simon Laplace
later used the transform (now called the Laplace Transform) to solve complex problems in
probability theory.
Joseph Fourier was a court mathematician in France under Napoleon I. He created a spe-
cial function decomposition called the Fourier Series, that was later generalized into an
integral transform, and named in his honor (the Fourier Transform).
Figure 1: Pierre-Simon Laplace (1749-1827). Figure 2: Joseph Fourier (1768-1830).
The "golden age" of control engineering occurred between 1910 and 1945, when mass commu-
nication methods were being created and two world wars were fought. During this
period, some of the most famous names in controls engineering were doing their work:
Nyquist and Bode.
Hendrik Wade Bode and Harry Nyquist, especially in the 1930s while working at
Bell Laboratories, created the bulk of what we now call "Classical Control Methods". These
methods were based on the results of the Laplace and Fourier Transforms, which had been
previously known, but were made popular by Oliver Heaviside around the turn of the
century. Before Heaviside, the transforms were neither widely used nor respected as math-
ematical tools.
Bode is remembered for his analysis of closed-loop feedback systems and for the logarithmic
plotting technique that still bears his name (Bode plots). Harry Nyquist did extensive
research in the field of system stability and information theory. He created a powerful
stability criterion that has been named for him (the Nyquist Criterion).
Modern control methods were introduced in the early 1950s, as a way to bypass some
of the shortcomings of the classical methods. Rudolf Kalman is famous for his work in
modern control theory, and the optimal state estimator known as the Kalman Filter is named
in his honor. Modern control methods became increasingly popular after 1957, driven by
the growing power of digital computers and the start of the space program. Computers created the
need for digital control methodologies, and the space program required the creation of some
"advanced" control techniques, such as "optimal control", "robust control", and "nonlinear
control". These last subjects, and several more, are still active areas of study among research
engineers.
Here we are going to give a brief listing of the various different methodologies within
the sphere of control engineering. Oftentimes, the lines between these methodologies are
blurred, or even erased completely.
Classical Controls
Control methodologies where the ODEs that describe a system are transformed using the
Laplace, Fourier, or Z Transforms, and manipulated in the transform domain.
Modern Controls
Methods where high-order differential equations are broken into a system of first-order
equations. The input, output, and internal states of the system are described by vectors
called ”state variables”.
Robust Control
Control methodologies where arbitrary outside noise/disturbances are accounted for, as
well as internal inaccuracies caused by the heat of the system itself, and the environment.
Optimal Control
In a system, performance metrics are identified, and arranged into a ”cost function”. The
cost function is minimized to create an operational system with the lowest cost.
Adaptive Control
In adaptive control, the control changes its response characteristics over time to better
control the system.
Nonlinear Control
The youngest branch of control engineering, nonlinear control encompasses systems that
cannot be adequately described by linear equations, and for which there is often very little
supporting theory available.
Game Theory
Game Theory is a close relative of control theory, and especially robust control and optimal
control theories. In game theory, the external disturbances are not considered to be random
noise processes, but instead are considered to be ”opponents”. Each player has a cost
function that they attempt to minimize, and that their opponents attempt to maximize.
This book will definitely cover the first two branches, and will hopefully be expanded to
cover some of the later branches, if time allows.
2.10 MATLAB
MATLAB ® is a programming tool that is commonly used in the field of control engi-
neering. We will discuss MATLAB in specific sections of this book devoted to that purpose.
MATLAB will not appear in discussions outside these specific sections, although MATLAB
may be used in some example problems. An overview of the use of MATLAB in control
engineering can be found in the appendix at: Control Systems/MATLAB15 .
For more information on MATLAB in general, see: MATLAB Programming16 .
Nearly all textbooks on the subject of control systems, linear systems, and system analysis
will use MATLAB as an integral part of the text. Students who are learning this subject
at an accredited university will certainly have seen this material in their textbooks, and are
likely to have had MATLAB work as part of their classes. It is from this perspective that
the MATLAB appendix is written.
In the future, this book may be expanded to include information on Simulink ®, as well
as MATLAB.
There are a number of other software tools that are useful in the analysis and design
of control systems. Additional information can be added in the appendix of this book,
depending on the experience and prior knowledge of contributors.
15 https://en.wikibooks.org/wiki/Control%20Systems%2FMATLAB
16 https://en.wikibooks.org/wiki/MATLAB%20Programming
Mathematical equations will be labeled with the {{eqn}} template, to give them names.
Equations that are labeled in such a manner are important, and should be taken special
note of. For instance, notice the label to the right of this equation:
[Inverse Laplace Transform]

f(t) = L⁻¹{F(s)} = (1/(2πi)) ∫_{c−i∞}^{c+i∞} e^{st} F(s) ds
Equations that are named in this manner will also be copied into the List of Equations
Glossary17 at the end of the book, for easy reference.
Italics will be used for English variables, functions, and equations that appear in the main
text. For example e, j, f(t) and X(s) are all italicized. Wikibooks contains a LaTeX math-
ematics formatting engine, although an attempt will be made not to employ formatted
mathematical equations inline with other text because of the difference in size and font.
Greek letters, and other non-English characters will not be italicized in the text unless
they appear in the midst of multiple variables which are italicized (as a convenience to the
editor).
Scalar time-domain functions and variables will be denoted with lower-case letters, along
with a t in parentheses, such as: x(t), y(t), and h(t). Discrete-time functions will be written
in a similar manner, except with an [n] instead of a (t).
Fourier, Laplace, Z, and Star transformed functions will be denoted with capital letters
followed by the appropriate variable in parentheses. For example: F(s), X(jω), Y(z), and
F*(s).
Matrices will be denoted with capital letters. Matrices which are functions of time will be
denoted with a capital letter followed by a t in parentheses. For example: A(t) is a matrix,
a(t) is a scalar function of time.
Transforms of time-variant matrices will be displayed in uppercase bold letters, such as
H(s).
Math equations rendered using LaTeX will appear on separate lines, and will be indented
from the rest of the text.
17 https://en.wikibooks.org/wiki/Control%20Systems%2FList%20of%20Equations
Information which is tangent or auxiliary to the main text will be placed in these ”side-
box” templates.
Examples will appear in TextBox templates, which show up as large grey boxes filled
with text and equations.
Important Definitions
Will appear in TextBox templates as well, except we will use this formatting to show
that it is a definition.
Notes of interest will appear in ”infobox” templates. These notes will often be used to
explain some nuances of a mathematical derivation or proof.
Warning
Warnings will appear in these ”warning” boxes. These boxes will point out common
mistakes, or other items to be careful of.
3 System Identification
3.1 Systems
Systems, in one sense, are devices that take input and produce an output. A system can be
thought to operate on the input to produce the output. The output is related to the input
by a certain relationship known as the system response. The system response usually
can be modeled with a mathematical relationship between the system input and the system
output.
The initial time of a system is the time before which there is no input. Typically, the initial
time of a system is defined to be zero, which will simplify the analysis significantly. Some
techniques, such as the Laplace Transform, require that the initial time of the system be
zero. The initial time of a system is typically denoted by t0.

1 http://wikis.controltheorypro.com/index.php?title=Introduction_to_System_Identification
2 http://wikis.controltheorypro.com/index.php?title=Introduction_to_Parameter_Identification
The value of any variable at the initial time t0 will be denoted with a 0 subscript. For
instance, the value of variable x at time t0 is given by:
x(t0 ) = x0
Likewise, any time t with a positive subscript is a point in time after t0, in ascending order:
t0 ≤ t1 ≤ t2 ≤ · · · ≤ tn
So t1 occurs after t0, and t2 occurs after both points. In a similar fashion, a variable
with a positive subscript (unless it specifies an index into a vector) also occurs at that point
in time:
x(t1 ) = x1
x(t2 ) = x2
3.4 Additivity
A system satisfies the property of additivity if a sum of inputs results in a sum of outputs.
By definition: an input of x3 (t) = x1 (t) + x2 (t) results in an output of y3 (t) = y1 (t) + y2 (t).
To determine whether a system is additive, use the following test:
Given a system f that takes an input x and outputs a value y, assume two inputs (x1 and
x2 ) produce two outputs:
y1 = f (x1 )
y2 = f (x2 )
Now, create a composite input that is the sum of the previous inputs:
x3 = x1 + x2
Now observe the output produced by this composite input. The system is additive if the
composite input produces the sum of the two original outputs:

y3 = f(x3) = f(x1 + x2) = y1 + y2
Systems that satisfy this property are called additive. Additive systems are useful because
a sum of simple inputs can be used to analyze the system response to a more complex input.
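As an informal illustration of the additivity test, here is a minimal Python sketch (assuming NumPy is available; the two example systems below are made up for illustration and are not taken from the text). It compares f(x1 + x2) against f(x1) + f(x2) for one pair of inputs; note that such a numerical check can disprove additivity, but passing it for one pair of inputs does not prove the property in general.

```python
import numpy as np

def f_linear(x):
    # A simple gain system: clearly additive
    return 3.0 * x

def f_saturating(x):
    # A saturating (clipping) system: not additive
    return np.clip(x, -1.0, 1.0)

def is_additive(f, x1, x2, tol=1e-9):
    """Numerical version of the additivity test: f(x1 + x2) == f(x1) + f(x2)."""
    return np.allclose(f(x1 + x2), f(x1) + f(x2), atol=tol)

t = np.linspace(0, 1, 100)
x1, x2 = np.sin(2 * np.pi * t), 0.8 * np.ones_like(t)

print(is_additive(f_linear, x1, x2))      # True
print(is_additive(f_saturating, x1, x2))  # False for these inputs
```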
3.5 Homogeneity
A system satisfies the property of homogeneity if an input scaled by a constant factor
produces an output scaled by that same factor. To test for homogeneity, stimulate the
system f with an arbitrary input x to produce an output y:

y = f(x)

Now, scale the same input by an arbitrary constant C and observe the new output:

y1 = f(Cx)

Then, for the system to be homogeneous, the following equation must be true:

y1 = f(Cx) = Cf(x) = Cy
Systems that are homogeneous are useful in many applications, especially applications with
gain or amplification.
Exercise:
Prove that additivity implies homogeneity, but that homogeneity does not imply addi-
tivity.
3.6 Linearity
A system is called linear if it satisfies the properties of both additivity and homogeneity.
To test for linearity, stimulate the system f with two arbitrary inputs to produce two outputs:

y1 = f(x1)
y2 = f(x2)

Now, a linear combination of the inputs should produce the same linear combination of the outputs:

f(c1 x1 + c2 x2) = c1 y1 + c2 y2
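A minimal Python sketch of this superposition test follows (NumPy assumed available; the example systems and the particular coefficients c1 and c2 are made up for illustration). As with the additivity check, one numerical test can only fail to disprove linearity, not prove it.

```python
import numpy as np

def superposition_holds(f, x1, x2, c1=2.0, c2=-0.5, tol=1e-9):
    """Check f(c1*x1 + c2*x2) == c1*f(x1) + c2*f(x2) for one choice of inputs."""
    return np.allclose(f(c1 * x1 + c2 * x2), c1 * f(x1) + c2 * f(x2), atol=tol)

t = np.linspace(0, 1, 200)
x1, x2 = np.cos(2 * np.pi * t), t

print(superposition_holds(lambda x: 5.0 * x, x1, x2))   # pure gain: True
print(superposition_holds(lambda x: x + 1.0, x1, x2))   # constant offset: False
print(superposition_holds(lambda x: x ** 2, x1, x2))    # squaring: False
```

Note how the constant-offset system fails the test even though its graph is a straight line; linearity in the systems sense requires that zero input produce zero output.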
3.7 Memory
A system is said to have memory if the output from the system is dependent on past inputs
(or future inputs!) to the system. A system is called memoryless if the output is only
dependent on the current input. Memoryless systems are easier to work with, but systems
with memory are more common in digital signal processing applications.
Systems that have memory are called dynamic systems, and systems that do not have
memory are static systems.
3.8 Causality
Causality is a property that is very similar to memory. A system is called causal if it is only
dependent on past and/or current inputs. A system is called anti-causal if the output of
the system is dependent only on future inputs. A system is called non-causal if the output
depends on past and/or current and future inputs.
A system design that is not causal cannot be physically implemented (to operate in
real time). If the system can't be built, the design is generally worthless. However,
there are applications for non-causal systems, e.g. when a system does not need to
operate in real time and already has the signals stored in its memory (as in sound and image
compression).
3.9 Time-Invariance
A system is called time-invariant if the system relationship between the input and output
signals is not dependent on the passage of time. If the input signal x(t) produces an output
y(t), then any time-shifted input, x(t + δ), results in a time-shifted output y(t + δ). This
property can be satisfied if the transfer function of the system is not a function of time
except as expressed by the input and output. If a system is time-invariant then the system
block is commutative with an arbitrary delay. This facet of time-invariant systems will be
discussed later.
To determine if a system f is time-invariant, perform the following test:
Apply an arbitrary input x to a system and produce an arbitrary output y:
y(t) = f (x(t))
Now, assign x1 to be equal to the first input x, time-shifted by a given constant value δ:
x1(t) = x(t − δ)

Apply this shifted input to the system and observe the resulting output. The system is
time-invariant if that output is simply the original output shifted by the same amount:

y1(t) = y(t − δ)
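The same kind of numerical spot-check can be applied to time-invariance on sampled signals: shift the input by k samples, run it through the system, and compare against the original output shifted by k samples. The sketch below is a minimal Python illustration (NumPy assumed available); the moving-average and time-varying-gain systems are made-up examples, not from the text.

```python
import numpy as np

def moving_average(x):
    # y[n] = (x[n] + x[n-1]) / 2, a time-invariant system (zero initial condition)
    return 0.5 * (x + np.concatenate(([0.0], x[:-1])))

def time_varying_gain(x):
    # y[n] = n * x[n], explicitly dependent on the time index: not time-invariant
    return np.arange(len(x)) * x

def is_time_invariant(f, x, k=3, tol=1e-9):
    shifted_in = np.concatenate((np.zeros(k), x[:-k]))      # x[n - k]
    out_of_shifted = f(shifted_in)                          # f applied to shifted input
    shifted_out = np.concatenate((np.zeros(k), f(x)[:-k]))  # y[n - k]
    return np.allclose(out_of_shifted, shifted_out, atol=tol)

x = np.sin(0.3 * np.arange(50))
print(is_time_invariant(moving_average, x))     # True
print(is_time_invariant(time_varying_gain, x))  # False
```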
3.11 Lumpedness
A system is said to be lumped if one of the two following conditions is satisfied:
1. There are a finite number of states that the system can be in.
2. There are a finite number of state variables.
The concept of ”states” and ”state variables” are relatively advanced, and they will be
discussed in more detail in the discussion about modern controls.
Systems which are not lumped are called distributed. A simple example of a distributed
system is a system with delay, that is, A(s)y(t) = B(s)u(t − τ ), which has an infinite number
of state variables (Here we use s to denote the Laplace variable). However, although
distributed systems are quite common, they are very difficult to analyze in practice, and
there are few tools available to work with such systems. Fortunately, in most cases, a
delay can be sufficiently modeled with the Padé approximation. This book will not discuss
distributed systems much.
3.12 Relaxed
A system is said to be relaxed if the system is causal, and at the initial time t0 the output
of the system is zero, i.e., there is no stored energy in the system.
y(t0 ) = f (x(t0 )) = 0
In terms of differential equations, a relaxed system is said to have ”zero initial state”. Sys-
tems without an initial state are easier to work with, but systems that are not relaxed can
frequently be modified to approximate relaxed systems.
3.13 Stability
Control Systems engineers will frequently say that an unstable system has ”exploded”.
Some physical systems actually can rupture or explode when they go unstable.
Stability is a very important concept in systems, but it is also one of the hardest system
properties to prove. There are several different criteria for system stability, but the most
common requirement is that the system must produce a finite output when subjected to
a finite input. For instance, if 5 volts is applied to the input terminals of a given circuit,
it would be best if the circuit output didn't approach infinity, and the circuit itself didn't
melt or explode. This type of stability is often known as ”Bounded Input, Bounded
Output” stability, or BIBO.
There are a number of other types of stability, most of which are based on the concept of
BIBO stability. Because stability is such an important and complicated topic, an entire
section of this text is devoted to its study.
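For linear time-invariant systems described by rational transfer functions, a common practical check (developed formally in the chapter on Stability) is to look at where the poles lie: if every pole has a negative real part, the system is BIBO stable. The sketch below is a minimal Python illustration of that idea using NumPy's polynomial root finder; the two denominators are made-up examples.

```python
import numpy as np

def is_bibo_stable(denominator_coeffs):
    """Return True if all poles (roots of the denominator polynomial,
    coefficients given in descending powers of s) have negative real parts."""
    poles = np.roots(denominator_coeffs)
    return bool(np.all(poles.real < 0))

print(is_bibo_stable([1, 3, 2]))   # s^2 + 3s + 2 -> poles at -1, -2: stable
print(is_bibo_stable([1, 0, 4]))   # s^2 + 4     -> poles at +/-2j: not BIBO stable
```

The second system has poles on the imaginary axis; a bounded sinusoid at that frequency would drive its output without bound, so the function correctly reports it as not BIBO stable.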
Systems can also be categorized by the number of inputs and the number of outputs the
system has. Consider a television as a system, for instance. The system has two inputs:
the power wire and the signal cable. It has one output: the video display. A system with
one input and one output is called single-input, single-output, or SISO. A system with
multiple inputs and multiple outputs is called multi-input, multi-output, or MIMO.
These systems will be discussed in more detail later.
Exercise:
Based on the definitions of SISO and MIMO, above, determine what the acronyms SIMO
and MISO mean.
4 Digital and Analog
There is a significant distinction between an analog system and a digital system, in the
same way that there is a significant difference between analog and digital data. This book is
going to consider both analog and digital topics, so it is worth taking some time to discuss
the differences, and to display the different notations that will be used with each.
Figure 3
A signal is called discrete-time if it is only defined for particular points in time. A discrete-
time system takes discrete-time input signals, and produces discrete-time output signals.
The following image shows the difference between an analog waveform and the sampled
discrete time equivalent:
Figure 4
4.1.3 Quantized
A signal is called quantized if it can only take on certain values, and cannot take other values.
This concept is best illustrated with examples:
1. Students with a strong background in physics will recognize this concept as being the
root word in ”Quantum Mechanics”. In quantum mechanics, it is known that energy
comes only in discrete packets. An electron bound to an atom, for example, may
occupy one of several discrete energy levels, but not intermediate levels.
2. Another common example is population statistics. For instance, a common statistic
is that a household in a particular country may have an average of ”3.5 children”, or
some other fractional number. Actual households may have 3 children, or they may
have 4 children, but no household has 3.5 children.
3. People with a computer science background will recognize that integer variables are
quantized because they can only hold certain integer values, not fractions or decimal
points.
The last example concerning computers is the most relevant, because quantized systems
are frequently computer-based. Systems that are implemented with computer software and
hardware will typically be quantized.
Here is an example waveform of a quantized signal. Notice how the magnitude of the wave
can only take certain values, and that creates a step-like appearance. This image is discrete
in magnitude, but is continuous in time:
Figure 5
4.2 Analog
By definition:
Analog
A signal is considered analog if it is defined for all points in time and if it can take any
real magnitude value within its range.
An analog system is a system that represents data using a direct conversion from one form
to another. In other words, an analog system is a system that is continuous in both time
and magnitude.
If we have a given motor, we can show that the output of the motor (rotation in units of
radians per second, for instance) is a function of the voltage that is input to the motor.
We can show the relationship as such:
Θ(v) = f (v)
Where Θ is the output in terms of rad/s, and f(v) is the motor's conversion function
between the input voltage (v) and the output. For any value of v we can calculate
specifically what the rotational speed of the motor should be.
Consider a standard analog clock, which represents the passage of time through the
angular position of the clock hands. We can denote the angular position of the hands of
the clock with the system of equations:
ϕh = fh (t)
ϕm = fm (t)
ϕs = fs (t)
Where φh is the angular position of the hour hand, φm is the angular position of the
minute hand, and φs is the angular position of the second hand. The positions of all
the different hands of the clock are dependent on functions of time.
Different positions on a clock face correspond directly to different times of the day.
4.3 Digital
Digital
A signal or system is considered digital if it is both discrete-time and quantized.
Digital data always have a certain granularity, and therefore there will almost always be an
error associated with using such data, especially if we want to account for all real numbers.
The tradeoff, of course, to using a digital system is that our powerful digital computers,
with their ever-faster microprocessors, can be instructed to operate on digital data
only. This benefit more than makes up for the shortcomings of a digital representation
system.
Discrete systems will be denoted inside square brackets, as is a common notation in texts
that deal with discrete values. For instance, we can denote a discrete data set of ascending
numbers, starting at 1, with the following notation:
x[n] = [1 2 3 4 5 6 ...]
n, or other letters from the central area of the alphabet (m, i, j, k, l, for instance) are
commonly used to denote discrete time values. Analog, or "non-discrete", values are denoted
in ordinary function notation, using parentheses. Here is an example of an analog waveform
and the digital equivalent. Notice that the digital waveform is discrete in both time and
magnitude:
Figure 6 Figure 7
As a common example, let's consider a digital clock: The digital clock represents time
with binary electrical data signals of 1 and 0. The 1's are usually represented by a
positive voltage, and a 0 is generally represented by zero voltage. Counting in binary,
we can show that any given time can be represented by a base-2 numbering system:
Hybrid Systems are systems that have both analog and digital components. Devices
called samplers are used to convert analog signals into digital signals, and devices called
reconstructors are used to convert digital signals into analog signals. Because of the use
of samplers, hybrid systems are frequently called sampled-data systems.
Most modern automobiles have integrated computer systems that monitor certain
aspects of the car, and actually help to control the performance of the car. The speed
of the car, and the rotational speed of the transmission are analog values, but a sampler
converts them into digital values so the car computer can monitor them. The digital
computer will then output control signals to other parts of the car, to alter analog
systems such as the engine timing, the suspension, the brakes, and other parts. Because
the car has both digital and analog components, it is a hybrid system.
Note:
We are not using the word ”continuous” here in the sense of continuously differentiable,
as is common in math texts.
A system is considered continuous-time if the signal exists for all time. Frequently, the
terms ”analog” and ”continuous” will be used interchangeably, although they are not strictly
the same.
Discrete systems can come in three flavors:
1. Discrete time (sampled)
2. Discrete magnitude (quantized)
3. Discrete time and magnitude (digital)
Discrete magnitude systems are systems where the signal value can only have certain
values. Discrete time systems are systems where signals are only available (or valid) at
particular times. Computer systems are discrete in the sense of (3), in that data is only
read at specific discrete time intervals, and the data can have only a limited number of
discrete values.
A discrete-time system has a sampling time value associated with it, such that each
discrete value occurs at multiples of the given sampling time. We will denote the sampling
time of a system as T. We can equate the square-brackets notation of a system with the
continuous definition of the system as follows:
x[n] = x(nT )
Notice that the two notations show the same thing, but the first one is typically easier to
write, and it shows that the system in question is a discrete system. This book will use
the square brackets to denote discrete systems by the sample number n, and parentheses to
denote continuous-time functions.
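A short Python sketch of the relationship x[n] = x(nT): sampling a continuous-time function at integer multiples of the sampling time T produces the discrete-time sequence. The signal and the sampling time below are made-up values chosen for illustration (NumPy assumed available).

```python
import numpy as np

T = 0.1                        # made-up sampling time, in seconds
n = np.arange(0, 20)           # sample indices

def x_continuous(t):
    # An example continuous-time signal x(t)
    return np.exp(-t) * np.cos(2 * np.pi * t)

x_discrete = x_continuous(n * T)   # x[n] = x(nT)
print(x_discrete[:5])
```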
The process of converting analog information into digital data is called ”Sampling”. The
process of converting digital data into an analog signal is called ”Reconstruction”. We will
talk about both processes in a later chapter. For more information on the topic than is
available in this book, see the Analog and Digital Conversion1 wikibook. Here is an example
of a reconstructed waveform. Notice that the reconstructed waveform here is quantized
because it is constructed from a digital signal:
Figure 8
1 https://en.wikibooks.org/wiki/Analog%20and%20Digital%20Conversion
5 System Metrics
When a system is being designed and analyzed, it doesn't make any sense to test the system
with all manner of strange input functions, or to measure all sorts of arbitrary performance
metrics. Instead, it is in everybody's best interest to test the system with a set of standard,
simple reference functions. Once the system is tested with the reference functions, there
are a number of different metrics that we can use to determine the system performance.
It is worth noting that the metrics presented in this chapter represent only a small number
of possible metrics that can be used to evaluate a given system. This wikibook will present
other useful metrics along the way, as their need becomes apparent.
Note:
All of the standard inputs are zero before time zero. All the standard inputs are causal.
There are a number of standard inputs that are considered simple enough and universal
enough that they are considered when designing a system. These inputs are known as a
unit step, a ramp, and a parabolic input.
Unit Step
A unit step function is defined piecewise as such:
[Unit Step Function]
u(t) = { 0, t < 0
         1, t ≥ 0
The unit step function is a highly important function, not only in control systems engi-
neering, but also in signal processing, systems analysis, and all branches of engineering.
If the unit step function is input to a system, the output of the system is known as the
step response. The step response of a system is an important tool, and we will study
step responses in detail in later chapters.
Figure 9
Ramp
A unit ramp is defined in terms of the unit step function, as such:
[Unit Ramp Function]
r(t) = tu(t)
It is important to note that the unit ramp function is simply the integral of the unit
step function (equivalently, the unit step is the derivative of the unit ramp):

r(t) = ∫ u(t) dt = t u(t)
This definition will come in handy when we learn about the Laplace Transform.
Figure 10
Steady State
Parabolic
A unit parabolic input is similar to a ramp input:
[Unit Parabolic Function]

p(t) = (1/2) t² u(t)
Notice also that the unit parabolic input is equal to the integral of the ramp function:
p(t) = ∫ r(t) dt = ∫ t u(t) dt = (1/2) t² u(t) = (1/2) t r(t)
Again, this result will become important when we learn about the Laplace Transform.
Figure 11
Also, sinusoidal and exponential functions are considered basic, but they are too difficult
to use in initial analysis of a system.
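The standard inputs are easy to generate numerically. The sketch below builds the unit step, ramp, and parabolic signals on a time grid (a minimal Python example with NumPy assumed available; the time range is arbitrary).

```python
import numpy as np

t = np.linspace(-1.0, 5.0, 601)        # arbitrary time grid spanning t < 0 and t >= 0

u = (t >= 0).astype(float)             # unit step     u(t)
r = t * u                              # unit ramp     r(t) = t u(t)
p = 0.5 * t**2 * u                     # unit parabola p(t) = (1/2) t^2 u(t)

# Spot-check the relationships between the inputs at a single point:
print(u[300], r[300], p[300])          # values at t = 2.0: 1.0, 2.0, 2.0
```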
Note:
To be more precise, we should have taken the limit as t approaches infinity. However,
as a shorthand notation, we will typically say ”t equals infinity”, and assume the reader
understands the shortcut that is being used.
When a unit-step function is input to a system, the steady-state value of that system is
the output value at time t = ∞. Since it is impractical (if not completely impossible) to wait
till infinity to observe the system, approximations and mathematical calculations are used
to determine the steady-state value of the system. Most system responses are asymptotic,
that is that the response approaches a particular value. Systems that are asymptotic are
typically obvious from viewing the graph of that response.
The step response of a system is most frequently used to analyze systems, and there is
a large amount of terminology involved with step responses. When exposed to the step
input, the system will initially have an undesirable output period known as the transient
response. The transient response occurs because a system is approaching its final output
value. The steady-state response of the system is the response after the transient response
has ended.
The amount of time it takes for the system output to reach the desired value (before the
transient response has ended, typically) is known as the rise time. The amount of time it
takes for the transient response to end and the steady-state response to begin is known as
the settling time.
It is common for a systems engineer to try and improve the step response of a system. In
general, it is desired for the transient response to be reduced, the rise and settling times to
be shorter, and the steady-state to approach a particular desired ”reference” output.
Figure 12: An arbitrary step function with x(t) = M u(t). Figure 13: A step response graph of input x(t) to a made-up system.
The target output value is the value that our system attempts to obtain for a given input.
This is not the same as the steady-state value, which is the actual value that the system
does obtain. The target value is frequently referred to as the reference value, or the
”reference function” of the system. In essence, this is the value that we want the system to
produce. When we input a ”5” into an elevator, we want the output (the final position of
the elevator) to be the fifth floor. Pressing the ”5” button is the reference input, and is the
expected value that we want to obtain. If we press the ”5” button, and the elevator goes to
the third floor, then our elevator is poorly designed.
Rise time is the amount of time that it takes for the system response to reach the target
value from an initial state of zero. Many texts on the subject define the rise time as being
the time it takes to rise between the initial position and 80% of the target value. This is
because some systems never rise to 100% of the expected, target value, and therefore they
would have an infinite rise-time. This book will specify which convention to use for each
individual problem. Rise time is typically denoted tr , or trise .
Rise time is not the amount of time it takes to achieve steady-state, only the amount
of time it takes to reach the desired target value for the first time.
Underdamped systems frequently overshoot their target value initially. This initial surge
is known as the ”overshoot value”. The ratio of the amount of overshoot to the target
steady-state value of the system is known as the percent overshoot. Percent overshoot
represents an overcompensation of the system, and can produce dangerously large output
signals that can damage a system. Percent overshoot is typically denoted with the term
PO.
Example: Refrigerator
Consider an ordinary household refrigerator. The refrigerator has cycles where it is on
and when it is off. When the refrigerator is on, the coolant pump is running, and the
temperature inside the refrigerator decreases. The temperature decreases to a much
lower level than is required, and then the pump turns off.
When the pump is off, the temperature slowly increases again as heat is absorbed into
the refrigerator. When the temperature gets high enough, the pump turns back on.
Because the pump cools down the refrigerator more than it needs to initially, we can say
that it ”overshoots” the target value by a certain specified amount.
Example: Refrigerator
Another example involving a refrigerator concerns the electrical demand of the heat
pump when it first turns on. The pump is an inductive mechanical motor, and when
the motor first activates, a special counter-acting force known as ”back EMF” resists
the motion of the motor, and causes the pump to draw more electricity until the motor
reaches its final speed. During the startup time for the pump, lights on the same electrical
circuit as the refrigerator may dim slightly, as electricity is drawn away from the lamps,
and into the pump. This initial draw of electricity is a good example of overshoot.
After the initial rise time of the system, some systems will oscillate and vibrate for an
amount of time before the system output settles on the final value. The amount of time it
takes to reach steady state after the initial rise time is known as the settling time. Notice
that damped oscillating systems may never settle completely, so we will define settling time
as being the amount of time for the system to reach, and stay in, a certain acceptable
range. The acceptable range for settling time is typically determined on a per-problem
basis, although common values are 10%, 5%, or 2% of the target value. The settling time
will be denoted as ts.
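As a concrete illustration of these step-response metrics, the sketch below simulates the unit-step response of a made-up underdamped second-order system, y'' + 2ζωn y' + ωn² y = ωn² u(t), and estimates the percent overshoot and the 5% settling time directly from the samples (a minimal Python example with NumPy assumed available; the parameter values are arbitrary).

```python
import numpy as np

# Made-up second-order system: y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u(t)
zeta, wn = 0.3, 2.0
dt, t_end = 0.001, 20.0
t = np.arange(0.0, t_end, dt)

y, ydot = 0.0, 0.0
response = np.empty_like(t)
for i in range(len(t)):
    response[i] = y
    yddot = wn**2 * 1.0 - 2 * zeta * wn * ydot - wn**2 * y   # unit step input
    ydot += yddot * dt
    y += ydot * dt

final_value = response[-1]                        # approximate steady-state value
overshoot = (response.max() - final_value) / final_value * 100.0

# 5% settling time: the last instant the response is outside the +/-5% band
outside = np.abs(response - final_value) > 0.05 * final_value
settling_time = t[outside][-1] if outside.any() else 0.0

print(f"Percent overshoot: {overshoot:.1f}%")     # roughly 37% for zeta = 0.3
print(f"5% settling time:  {settling_time:.2f} s")
```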
The order of the system is defined by the number of independent energy storage elements
in the system, and intuitively by the highest order of the linear differential equation that
describes the system. In a transfer function representation, the order is the highest exponent
in the transfer function. In a proper system, the system order is defined as the degree of
the denominator polynomial. In a state-space equation, the system order is the number of
state-variables used in the system. The order of a system will frequently be denoted with
an n or N, although these variables are also used for other purposes. This book will make
a clear distinction on the use of these variables.
A proper system is a system where the degree of the denominator is larger than or equal
to the degree of the numerator polynomial. A strictly proper system is a system where
the degree of the denominator polynomial is larger than (but never equal to) the degree
of the numerator polynomial. A biproper system is a system where the degree of the
denominator polynomial equals the degree of the numerator polynomial.
It is important to note that only proper systems can be physically realized. In other words,
a system that is not proper cannot be built. It makes no sense to spend a lot of time
designing and analyzing imaginary systems.
G(s) = \frac{1+s}{1+s+s^2}

The highest exponent in the denominator is s^2, so the system is order 2. Also, since the
denominator is a higher degree than the numerator, this system is strictly proper.
In the above example, G(s) is a second-order transfer function because in the denominator
one of the s variables has an exponent of 2. Second-order functions are the easiest to work
with.
Let's say that we have a process transfer function (or combination of functions, such as a
controller feeding in to a process), all in the forward branch of a unity feedback loop. Say
that the overall forward branch transfer function is in the following generalized form (known
as pole-zero form):
[Pole-Zero Form]
G(s) = \frac{K \prod_i (s - s_i)}{s^M \prod_j (s - s_j)}
Poles at the origin are called integrators, because they have the effect of performing
integration on the input signal.
We call the parameter M the system type. Note that an increased system type number
corresponds to a larger number of poles at s = 0. More poles at the origin generally have
a beneficial effect on the system, but they increase the order of the system, and make it
increasingly difficult to implement physically. System type will generally be denoted with a
letter like N, M, or m. Because these variables are typically reused for other purposes, this
book will make a clear distinction when they are employed.
Now, we will define a few terms that are commonly used when discussing system type.
These new terms are Position Error, Velocity Error, and Acceleration Error. These
names are throwbacks to physics terms where acceleration is the derivative of velocity, and
velocity is the derivative of position. Note that none of these terms are meant to deal with
movement, however.
Position Error
The position error is denoted by the position error constant Kp . It is the amount
of steady-state error of the system when stimulated by a unit step input. We define the
position error constant as follows:
[Position Error Constant]
K_p = \lim_{s \to 0} G(s)
Velocity Error
The velocity error is the amount of steady-state error of the system when stimulated by a
unit ramp input. We define the velocity error constant as follows:
[Velocity Error Constant]

K_v = \lim_{s \to 0} s G(s)
Acceleration Error
The acceleration error is the amount of steady-state error when the system is stimulated
with a parabolic input. We define the acceleration error constant to be:
[Acceleration Error Constant]
K_a = \lim_{s \to 0} s^2 G(s)
Now, this table shows briefly the relationship between the system type, the kind of input
(step, ramp, parabolic), and the steady-state error of the system in unity feedback:

    Input            Type 0          Type 1        Type 2
    Unit step        1/(1 + Kp)      0             0
    Unit ramp        ∞               1/Kv          0
    Unit parabola    ∞               ∞             1/Ka
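As a rough illustration of how these constants might be computed, here is a minimal
MATLAB sketch. It assumes the Symbolic Math Toolbox, and the forward-path transfer
function G below is a hypothetical type-1 system invented only for this example:

syms s
G = 10*(s + 2) / (s*(s + 3)*(s + 5));   % hypothetical type-1 forward-path G(s)
Kp = limit(G, s, 0)                     % position error constant: infinite for a type-1 system
Kv = limit(s*G, s, 0)                   % velocity error constant: 20/15 = 4/3
Ka = limit(s^2*G, s, 0)                 % acceleration error constant: zero for a type-1 system
ess_ramp = 1/Kv                         % steady-state error to a unit ramp in unity feedback

The results agree with the table: a type-1 system has zero steady-state error to a step, a
finite error to a ramp, and an unbounded error to a parabola.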
Likewise, we can show that the system type can be found from the following generalized
transfer function in the Z domain:

G(z) = \frac{K \prod_i (z - z_i)}{(z-1)^M \prod_j (z - z_j)}
Where the constant M is the type of the digital system. Now, we will show how to find the
various error constants in the Z-Domain:
[Z-Domain Error Constants]

K_p = \lim_{z \to 1} G(z)

K_v = \frac{1}{T}\lim_{z \to 1} (z - 1) G(z)

K_a = \frac{1}{T^2}\lim_{z \to 1} (z - 1)^2 G(z)

where T is the sampling period.
5.11 Visually
Here is an image of the various system metrics, acting on a system in response to a step
input:
Figure 14
The target value is the value of the input step response. The rise time is the time at which
the waveform first reaches the target value. The overshoot is the amount by which the
waveform exceeds the target value. The settling time is the time it takes for the system
to settle into a particular bounded region. This bounded region is denoted with two short
dotted lines above and below the target value.
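As a rough, non-authoritative illustration, these step-response metrics can also be computed
numerically in MATLAB. This sketch assumes the Control System Toolbox and uses a
hypothetical underdamped second-order system chosen only for the example:

sys = tf(1, [1 0.6 1]);      % hypothetical underdamped system 1/(s^2 + 0.6s + 1)
S = stepinfo(sys);           % structure of step-response metrics
S.RiseTime                   % rise time (MATLAB's default convention is 10% to 90%)
S.Overshoot                  % percent overshoot
S.SettlingTime               % settling time (default band is 2% of the final value)
step(sys)                    % plot the step response showing these features

Note that the default rise-time and settling bands used by stepinfo differ from some of the
conventions mentioned above, so the convention in use should always be stated alongside
the number.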
6 System Modeling
It is the job of a control engineer to analyze existing systems, and to design new systems
to meet specific needs. Sometimes new systems need to be designed, but more frequently a
controller unit needs to be designed to improve the performance of existing systems. When
designing a system, or implementing a controller to augment an existing system, we need
to follow some basic steps:
1. Model the system mathematically
2. Analyze the mathematical model
3. Design system/controller
4. Implement system/controller and test
The vast majority of this book is going to be focused on (2), the analysis of the mathematical
systems. This chapter alone will be devoted to a discussion of the mathematical modeling
of the systems.
An external description of a system relates the system input to the system output with-
out explicitly taking into account the internal workings of the system. The external de-
scription of a system is sometimes also referred to as the Input-Output Description of
the system, because it only deals with the inputs and the outputs to the system.
Figure 15
If the system can be represented by a mathematical function h(t, r), where t is the time
that the output is observed and r is the time that the input is applied, then we can relate
the system function h(t, r) to the input x and the output y through the use of an integral:
[General System Description]
y(t) = \int_{-\infty}^{\infty} h(t, r) x(r)\, dr
This integral form holds for all linear systems, and every linear system can be described by
such an equation.
If a system is causal (i.e. an input at t=r affects system behaviour only for t ≥ r) and there
is no input of the system before t=0, we can change the limits of the integration:
y(t) = \int_0^t h(t, r) x(r)\, dr

If the system is additionally time-invariant, the response depends only on the difference
t - r, and the description becomes:

y(t) = \int_0^t h(t - r) x(r)\, dr
This equation is known as the convolution integral, and we will discuss it more in the
next chapter.
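As a minimal numerical sketch of this integral (with an invented impulse response
h(t) = e^(-t) and a unit step input, chosen only for illustration), the convolution can be
approximated in MATLAB as follows:

dt = 0.01;                   % integration step for the approximation
t  = 0:dt:10;                % time vector
h  = exp(-t);                % assumed impulse response h(t) = e^(-t)
x  = ones(size(t));          % unit step input x(t) = u(t)
y  = conv(x, h) * dt;        % discrete approximation of the convolution integral
y  = y(1:length(t));         % keep the samples aligned with t
plot(t, y)                   % approaches 1, the step response of H(s) = 1/(s+1)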
Every Linear Time-Invariant (LTI) system can be used with the Laplace Transform,
a powerful tool that allows us to convert an equation from the time domain into the S-
Domain, where many calculations are easier. Time-variant systems cannot be used with
the Laplace Transform.
If a system is linear and lumped, it can also be described using a system of equations known
as state-space equations. In state-space equations, we use the variable x to represent the
internal state of the system. We then use u as the system input, and we continue to use
y as the system output. We can write the state-space equations as such:

x'(t) = A x(t) + B u(t)

y(t) = C x(t) + D u(t)
We will discuss the state-space equations more when we get to the section on modern
controls.
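As a small sketch of the notation (assuming the Control System Toolbox, with A, B, C, and
D matrices invented purely for illustration), a state-space model can be entered and
converted as follows:

A = [0 1; -5 -2];            % assumed system matrix
B = [0; 1];                  % assumed input matrix
C = [1 0];                   % assumed output matrix
D = 0;                       % direct feed-through term
sys = ss(A, B, C, D);        % state-space model object
G   = tf(sys)                % equivalent transfer function: 1/(s^2 + 2s + 5)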
Systems which are LTI and Lumped can also be described using a combination of the state-
space equations, and the Laplace Transform. If we take the Laplace Transform of the state
equations that we listed above, we can get a set of functions known as the Transfer Matrix
Functions. We will discuss these functions in a later chapter.
6.5 Representations
To recap, we will prepare a table with the various system properties, and the available
methods for describing the system:

    System Properties                        Available Descriptions
    Linear, Time-Variant, Distributed        General (input-output) description
    Linear, Time-Variant, Lumped             General description, State-Space Equations
    Linear, Time-Invariant, Distributed      General description, Laplace Transform
    Linear, Time-Invariant, Lumped           General description, Laplace Transform,
                                             State-Space Equations, Transfer Matrix
We will discuss all these different types of system representation later in the book.
6.6 Analysis
Once a system is modeled using one of the representations listed above, the system needs to
be analyzed. We can determine the system metrics and then we can compare those metrics
to our specification. If our system meets the specifications we are finished with the design
process. However if the system does not meet the specifications (as is typically the case),
then suitable controllers and compensators need to be designed and added to the system.
Once the controllers and compensators have been designed, the job isn't finished: we need
to analyze the new composite system to ensure that the controllers work properly. Also, we
need to ensure that the systems are stable: unstable systems can be dangerous.
For proposals, early-stage designs, and quick-turnaround analyses, a frequency domain model
is often superior to a time domain model. Frequency domain models take disturbance PSDs
(Power Spectral Densities) directly, use transfer functions directly, and produce output or
residual PSDs directly. The result is a steady-state response. Often the controller is targeting
zero, so the steady-state response is also the residual error that becomes the analysis output
or reported metric. For a linear system driven by a stationary random input, the output PSD
is related to the input PSD through the squared magnitude of the transfer function:

S_{out}(\omega) = |G(j\omega)|^2 \, S_{in}(\omega)
Note some texts will state that this is only valid for random processes which are station-
ary. Other texts suggest stationary and ergodic while still others state weakly stationary
processes. Some texts do not distinguish between strictly stationary and weakly stationary.
From practice, the rule of thumb is if the PSD of the input process is the same from hour
to hour and day to day then the input PSD can be used and the above equation is valid.
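A minimal sketch of this kind of frequency-domain analysis (with an invented disturbance
PSD and an invented transfer function, and assuming the Control System Toolbox) might
look like the following:

w    = logspace(-2, 2, 400);            % frequency grid in rad/s
Sin  = 1e-4 ./ (1 + (w/0.5).^2);        % assumed input disturbance PSD
G    = tf(10, [1 2 10]);                % assumed transfer function from disturbance to output
mag  = squeeze(abs(freqresp(G, w))).';  % |G(jw)| evaluated on the grid
Sout = (mag.^2) .* Sin;                 % output (residual) PSD, using the relation above
loglog(w, Sout), grid on                % steady-state residual spectrum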
Notes
1 Sun, Jian-Qiao (2006). Stochastic Dynamics and Control, Volume 4. Amsterdam: Elsevier Science. ISBN 0444522301.
2 http://wikis.controltheorypro.com/index.php?title=Frequency_Domain_Modeling
6.8 Manufacture
Once the system has been properly designed we can prototype our system and test it.
Assuming our analysis was correct and our design is good, the prototype should work as
expected. Now we can move on to manufacture and distribute our completed systems.
3 http://wikis.controltheorypro.com/index.php?title=Helicopter_Hover_Example
4 http://wikis.controltheorypro.com/index.php?title=Reaction_Cancellation_Example
5 http://wikis.controltheorypro.com/index.php?title=Category:Examples
7 Transforms
7.1 Transforms
There are a number of transforms that we will be discussing throughout this book, and the
reader is assumed to have at least a small prior knowledge of them. It is not the intention of
this book to teach the topic of transforms to an audience that has had no previous exposure
to them. However, we will include a brief refresher here to refamiliarize people who may
not remember the topic perfectly. If you do not know what the Laplace Transform or
the Fourier Transform are yet, it is highly recommended that you use this page as a
simple guide, and look the information up on other sources. Specifically, Wikipedia1 has
lots of information on these subjects.
A transform is a mathematical tool that converts an equation from one variable (or one
set of variables) into a new variable (or a new set of variables). To do this, the transform
must remove all instances of the first variable, the ”Domain Variable”, and add a new ”Range
Variable”. Integrals are excellent choices for transforms, because the limits of the definite
integral will be substituted into the domain variable, and all instances of that variable will
be removed from the equation. An integral transform that converts from a domain variable
a to a range variable b will typically be formatted as such:
T[f(a)] = F(b) = \int_C f(a)\, g(a, b)\, da
Where the function f(a) is the function being transformed, and g(a,b) is known as the
kernel of the transform. Typically, the only difference between the various integral trans-
forms is the kernel.
1 https://en.wikipedia.org/wiki/
2 https://en.wikipedia.org/wiki/Laplace%20transform
The Laplace Transform converts an equation from the time-domain into the so-called ”S-
domain”, or the Laplace domain, or even the ”Complex domain”. These are all different
names for the same mathematical space and they all may be used interchangeably in this
book and in other texts on the subject. The Transform can only be applied under the
following conditions:
1. The system or signal in question is analog.
2. The system or signal in question is Linear.
3. The system or signal in question is Time-Invariant.
4. The system or signal in question is causal.
The transform is defined as such:
[Laplace Transform]
F(s) = L[f(t)] = \int_0^{\infty} f(t) e^{-st}\, dt
Laplace transform results have been tabulated extensively. More information on the Laplace
transform, including a transform table can be found in the Appendix3 .
If we have a linear differential equation in the time domain, for example

\frac{d^2 y(t)}{dt^2} + a\frac{dy(t)}{dt} + b\, y(t) = x(t)

then with zero initial conditions, we can take the Laplace transform of the equation as such:

s^2 Y(s) + a\, s\, Y(s) + b\, Y(s) = X(s)

The Inverse Laplace Transform is defined as follows:
f(t) = L^{-1}\{F(s)\} = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} e^{st} F(s)\, ds
The inverse transform converts a function from the Laplace domain back into the time
domain.
3 https://en.wikibooks.org/wiki/Control%20Systems%2FTransforms%20Appendix
The Laplace Transform can be used on systems of linear equations in an intuitive way. Let's
say that we have a system of linear equations:
y_1(t) = a_1 x_1(t)

y_2(t) = a_2 x_2(t)

We can combine these equations into a single matrix equation:

y(t) = A x(t)

Taking the Laplace transform of both sides gives Y(s) = A X(s), which is the same as taking
the transform of each individual equation in the system of equations.
a https://en.wikibooks.org/wiki/Circuit%20Theory
V(t) = L \frac{dI(t)}{dt}
Figure 17 Circuit diagram for the RL circuit example problem. VL is the voltage
over the inductor, and is the quantity we are trying to find.
Let's say that we have a 1st order RL series electric circuit. The resistor has resistance
R, the inductor has inductance L, and the voltage source has input voltage Vin . The
system output of our circuit is the voltage over the inductor, Vout . In the time domain,
we have the following first-order differential equations to describe the circuit:
V_{out}(t) = V_L(t) = L \frac{dI(t)}{dt}
However, since the circuit is essentially acting as a voltage divider, we can put the
output in terms of the input as follows:
V_{out}(t) = \frac{L \frac{dI(t)}{dt}}{R I(t) + L \frac{dI(t)}{dt}}\, V_{in}(t)
This is a very complicated equation, and will be difficult to solve unless we employ the
Laplace transform:
V_{out}(s) = \frac{Ls}{R + Ls} V_{in}(s)
We can divide top and bottom by L, and move Vin to the other side:
\frac{V_{out}}{V_{in}} = \frac{s}{\frac{R}{L} + s}
And using a simple table look-up, we can solve this for the time-domain relationship
between the circuit input and the circuit output:
\frac{V_{out}}{V_{in}} = \frac{d}{dt}\left(e^{-\frac{R}{L}t}\right) u(t)
a https://en.wikibooks.org/wiki/Calculus
Laplace transform pairs are extensively tabulated, but frequently we have transfer functions
and other equations that do not have a tabulated inverse transform. If our equation is a
fraction, we can often utilize Partial Fraction Expansion (PFE) to create a set of simpler
terms that will have readily available inverse transforms. This section is going to give a
brief reminder about PFE, for those who have already learned the topic. This refresher will
be in the form of several examples of the process, as it relates to the Laplace Transform.
People who are unfamiliar with PFE are encouraged to read more about it in Calculus4 .
For example, suppose we want to expand the following function into partial fractions:

F(s) = \frac{2s + 1}{(s+1)(s+2)} = \frac{A}{s+1} + \frac{B}{s+2}

This looks impossible, because we have a single equation with 3 unknowns (s, A, B),
but in reality s can take any arbitrary value, and we can ”plug in” values for s to solve
for A and B, without needing other equations. For instance, in the above equation, we
can multiply through by the denominator, and cancel terms:
(2s + 1) = A(s + 2) + B(s + 1)
Now, when we set s → -2, the A term disappears, and we are left with B → 3. When
we set s → -1, we can solve for A → -1. Putting these values back into our original
equation, we have:
F(s) = \frac{-1}{s+1} + \frac{3}{s+2}
Remember, since the Laplace transform is a linear operator, the following relationship
holds true:
L^{-1}[F(s)] = L^{-1}\left[\frac{-1}{s+1} + \frac{3}{s+2}\right] = L^{-1}\left[\frac{-1}{s+1}\right] + L^{-1}\left[\frac{3}{s+2}\right]
Finding the inverse transform of these smaller terms should be an easier process than
finding the inverse transform of the whole function. Partial fraction expansion is a
useful, and oftentimes necessary tool for finding the inverse of an S-domain equation.
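For rational functions with numeric coefficients, MATLAB's built-in residue function
performs this expansion automatically. A minimal sketch, re-using the example above:

num = [2 1];                    % numerator 2s + 1
den = conv([1 1], [1 2]);       % denominator (s + 1)(s + 2) = s^2 + 3s + 2
[r, p, k] = residue(num, den)   % residues r, poles p, direct term k (empty here)
% The residue 3 pairs with the pole at -2 and the residue -1 with the pole at -1,
% i.e. F(s) = -1/(s+1) + 3/(s+2), matching the expansion found by hand above.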
4 https://en.wikibooks.org/wiki/Calculus
F(s) = \frac{A(s+10)^3 + Bs + Cs(s+10) + Ds(s+10)^2}{s(s+10)^3}

F(s) = A\frac{1}{s} + B\frac{1}{(s+10)^3} + C\frac{1}{(s+10)^2} + D\frac{1}{s+10}
When the solution of the denominator is a complex number, we use a complex repre-
sentation A + iB, like 3+i4 as opposed to the use of a single letter (e.g. D) - which is
for real numbers:
As + B = 7s + 26
A=7
B = 26
We will need to reform it into two fractions that look like this (without changing its
value):
e^{-\alpha t} \cos(\omega t) \cdot u(t) \to \frac{s + \alpha}{(s+\alpha)^2 + \omega^2}

e^{-\alpha t} \sin(\omega t) \cdot u(t) \to \frac{\omega}{(s+\alpha)^2 + \omega^2}

A\frac{(s-40)}{(s-40)^2 + 9^2} + \frac{B + 40A}{9} \cdot \frac{9}{(s-40)^2 + 9^2}
As another example, consider expanding

F(s) = \frac{90s^2 - 1110}{s(s-3)(s^2 - 12s + 37)} = \frac{A}{s} + \frac{B}{s-3} + \frac{Cs + D}{s^2 - 12s + 37}

Multiplying through by the denominator and setting the numerators equal gives:

A(s-3)(s^2 - 12s + 37) + Bs(s^2 - 12s + 37) + (Cs + D)s(s-3) = 90s^2 - 1110
Comparing coefficients:
A+B+C=0
-15A - 12B - 3C + D = 90
73A + 37B - 3D = 0
-111A = -1110
Now, we can solve for A, B, C and D:
A = 10
B = -10
C=0
D = 120
And now for the ”fitting”:
The roots of s^2 - 12s + 37 are 6 + j and 6 - j

A\frac{1}{s} + B\frac{1}{s-3} + C\frac{s-6}{(s-6)^2 + 1^2} + D\frac{1}{(s-6)^2 + 1^2}
No need to fit the fraction of D, because it is complete; no need to bother fitting the
fraction of C, because C is equal to zero.
10\frac{1}{s} - 10\frac{1}{s-3} + 0\frac{s-6}{(s-6)^2 + 1^2} + 120\frac{1}{(s-6)^2 + 1^2}
The Final Value Theorem allows us to determine the value of the time domain equation,
as the time approaches infinity, from the S domain equation. In Control Engineering,
the Final Value Theorem is used most frequently to determine the steady-state value of a
system. The real part of the poles of the function must be <0.
[Final Value Theorem (Laplace)]

x(\infty) = \lim_{s \to 0} s X(s)
From our chapter on system metrics, you may recognize the value of the system at time
infinity as the steady-state value of the system. The difference between the steady-state
value and the expected output value is the steady-state error of the system.
Using the Final Value Theorem, we can find the steady-state value and the steady-state
error of the system in the Complex S domain.
For example, applying the theorem to the transfer function T(s) = \frac{1+s}{1+2s+s^2}:

\lim_{s \to 0} s\,\frac{1+s}{1+2s+s^2} = 0 \cdot \frac{1+0}{1+2\cdot 0+0^2} = 0 \cdot 1 = 0
Akin to the final value theorem, the Initial Value Theorem allows us to determine the
initial value of the system (the value at time zero) from the S-Domain Equation. The initial
value theorem is used most frequently to determine the starting conditions, or the ”initial
conditions” of a system.
[Initial Value Theorem (Laplace)]

x(0) = \lim_{s \to \infty} s X(s)
We will now show you the transforms of the three functions we have already learned about:
The unit step, the unit ramp, and the unit parabola. The transform of the unit step function
is given by:
L[u(t)] = \frac{1}{s}
And since the unit ramp is the integral of the unit step, we can multiply the above result
times 1/s to get the transform of the unit ramp:
L[r(t)] = \frac{1}{s^2}
Again, we can multiply by 1/s to get the transform of the unit parabola:
L[p(t)] = \frac{1}{s^3}
The Fourier Transform is very similar to the Laplace transform. The Fourier transform
uses the assumption that any finite time-domain signal can be broken into an infinite sum
of sinusoidal (sine and cosine wave) signals. Under this assumption, the Fourier Transform
converts a time-domain signal into its frequency-domain representation, as a function of the
radial frequency ω. The Fourier Transform is defined as such:
[Fourier Transform]
F(j\omega) = \mathcal{F}[f(t)] = \int_0^{\infty} f(t) e^{-j\omega t}\, dt
We can now show that the Fourier Transform is equivalent to the Laplace transform, when
the following condition is true:
s = jω
Because the Laplace and Fourier Transforms are so closely related, it does not make much
sense to use both transforms for all problems. This book, therefore, will concentrate on
the Laplace transform for nearly all subjects, except those problems that deal directly with
frequency values. For frequency problems, it makes life much easier to use the Fourier
Transform representation.
Like the Laplace Transform, the Fourier Transform has been extensively tabulated. Prop-
erties of the Fourier transform, in addition to a table of common transforms is available in
the Appendix6 .
5 https://en.wikipedia.org/wiki/Fourier%20Transform
6 https://en.wikibooks.org/wiki/Control%20Systems%2FTransforms%20Appendix
Complex Plane
Figure 18
Using the above equivalence, we can show that the Laplace transform is always equal to
the Fourier Transform, if the variable s is an imaginary number. However, the Laplace
transform is different if s is a real or a complex variable. As such, we generally define s to
have both a real part and an imaginary part, as such:
s = σ + jω
There is an important result from calculus that is known as Euler's Formula, or ”Euler's
Relation”. This formula relates the fundamental values e, j, π, 1 and 0:
e^{j\pi} + 1 = 0
This formula will be used extensively in some of the chapters of this book, so it is important
to become familiar with it now.
7.6 MATLAB
The MATLAB symbolic toolbox contains functions to compute the Laplace and Fourier
transforms automatically. The function laplace, and the function fourier can be used
to calculate the Laplace and Fourier transforms of the input functions, respectively. For
instance, the code:
t = sym('t');
fx = 30*t^2 + 20*t;
laplace(fx)
ans =
60/s^3+20/s^2
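A few more calls in the same spirit (still assuming the Symbolic Math Toolbox) show the
inverse transform and the fourier function; the Gaussian input below is simply an
illustrative choice, not an example from the text:

syms t s
fx = 30*t^2 + 20*t;
F  = laplace(fx);            % 60/s^3 + 20/s^2, as above
ilaplace(F)                  % recovers 30*t^2 + 20*t
fourier(exp(-t^2))           % pi^(1/2)*exp(-w^2/4), the transform of a Gaussian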
7 https://en.wikibooks.org/wiki/Control%20Systems%2FMATLAB
8 https://en.wikibooks.org/wiki/Digital%20Signal%20Processing%2FContinuous-Time%20Fourier%20Transform
9 https://en.wikibooks.org/wiki/Signals%20and%20Systems%2FAperiodic%20Signals
10 https://en.wikibooks.org/wiki/Circuit%20Theory%2FLaplace%20Transform
8 Transfer Functions
A Transfer Function is the ratio of the output of a system to the input of a system, in
the Laplace domain considering its initial conditions and equilibrium point to be zero. This
assumption is relaxed for systems observing transience. If we have an input function of
X(s), and an output function Y(s), we define the transfer function H(s) to be:
[Transfer Function]
H(s) = \frac{Y(s)}{X(s)}
Readers who have read the Circuit Theory1 book will recognize the transfer function as
being the impedance, admittance, impedance ratio of a voltage divider or the admittance
ratio of a current divider.
Figure 19
Note:
Time domain variables are generally written with lower-case letters. Laplace-Domain,
and other transform domain variables are generally written using upper-case letters.
1 https://en.wikibooks.org/wiki/Circuit%20Theory
For comparison, we will consider the time-domain equivalent to the above input/output
relationship. In the time domain, we generally denote the input to a system as x(t), and the
output of the system as y(t). The relationship between the input and the output is denoted
as the impulse response, h(t).
We define the impulse response as being the relationship of the system output to its
input. We can use the following equation to define the impulse response:
h(t) = \frac{y(t)}{x(t)}
It would be handy at this point to define precisely what an ”impulse” is. The Impulse
Function, denoted with δ(t) is a special function defined piece-wise as follows:
[Impulse Function]
\delta(t) = \begin{cases} 0, & t < 0 \\ \text{undefined}, & t = 0 \\ 0, & t > 0 \end{cases}
The impulse function is also known as the delta function because it's denoted with the
Greek lower-case letter δ. The delta function is typically graphed as an arrow towards
infinity, as shown below:
Figure 20
It is drawn as an arrow because it is difficult to show a single point at infinity in any other
graphing method. Notice how the arrow only exists at location 0, and does not exist for any
other time t. The delta function works with regular time shifts just like any other function.
For instance, we can graph the function δ(t - N) by shifting the function δ(t) to the right,
as such:
Figure 21
An examination of the impulse function will show that it is related to the unit-step function
as follows:
\delta(t) = \frac{du(t)}{dt}
and
u(t) = \int \delta(t)\, dt
The impulse function is not defined at point t = 0, but the impulse must always satisfy the
following condition, or else it is not a true impulse function:
\int_{-\infty}^{\infty} \delta(t)\, dt = 1
The response of a system to an impulse input is called the impulse response. Now, to
get the Laplace Transform of the impulse function, we take the derivative of the unit step
function, which means we multiply the transform of the unit step function by s:
L[u(t)] = U(s) = \frac{1}{s}

L[\delta(t)] = sU(s) = \frac{s}{s} = 1
Similar to the impulse response, the step response of a system is the output of the system
when a unit step function is used as the input. The step response is a common analysis
tool used to determine certain metrics about a system. Typically, when a new system is
designed, the step response of the system is the first characteristic of the system to be
analyzed.
8.3 Convolution
However, the impulse response cannot be used to find the system output from the system
input in the same manner as the transfer function. If we have the system input and the
impulse response of the system, we can calculate the system output using the convolution
operation as such:

y(t) = h(t) * x(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau)\, d\tau
2 https://en.wikibooks.org/wiki/Control%20Systems%2FTransforms%20Appendix%23Table%20of%20Laplace%20Transforms
(The variable τ (Greek tau) is a dummy variable for integration). This operation can be
difficult to perform. Therefore, many people prefer to use the Laplace Transform (or another
transform) to convert the convolution operation into a multiplication operation, through
the Convolution Theorem.
If the system in question is time-invariant, then the general description of the system can be
replaced by a convolution integral of the system's impulse response and the system input.
We can call this the convolution description of a system, and define it below:
[Convolution Description]
y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau)\, d\tau
This method of solving for the output of a system is quite tedious, and in fact it can waste a
large amount of time if you want to solve a system for a variety of input signals. Luckily, the
Laplace transform has a special property, called the Convolution Theorem, that makes
the operation of convolution easier:
Convolution Theorem
Convolution in the time domain becomes multiplication in the complex Laplace domain.
Multiplication in the time domain becomes convolution in the complex Laplace domain.
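As a quick, hedged numerical check of the theorem (using two invented first-order systems
and assuming the Control System Toolbox is available):

dt = 0.001;  t = 0:dt:10;
h1 = exp(-t);                                  % impulse response of 1/(s+1)
h2 = exp(-2*t);                                % impulse response of 1/(s+2)
hc = conv(h1, h2) * dt;                        % convolution in the time domain
hc = hc(1:length(t));
hp = impulse(tf(1, conv([1 1], [1 2])), t);    % impulse response of the product 1/((s+1)(s+2))
max(abs(hc(:) - hp(:)))                        % small: product of transforms matches the convolution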
The Transfer Function fully describes a control system. The Order, Type and Frequency
response can all be taken from this specific function. Nyquist and Bode plots can be drawn
from the open loop Transfer Function. These plots show the stability of the system when
the loop is closed. Using the denominator of the transfer function, called the characteristic
equation, roots of the system can be derived.
For all these reasons and more, the Transfer function is an important aspect of classical
control systems. Let's start out with the definition:
Transfer Function
The Transfer function of a system is the relationship of the system's output to its input,
represented in the complex Laplace domain.
If the complex Laplace variable is s, then we generally denote the transfer function of a
system as either G(s) or H(s). If the system input is X(s), and the system output is Y(s),
then the transfer function can be defined as such:
H(s) = \frac{Y(s)}{X(s)}
If we know the input to a given system, and we have the transfer function of the system,
we can solve for the system output by multiplying:
[Transfer Function Description]
Y (s) = H(s)X(s)
From a Laplace transform table, we know that the Laplace transform of the impulse
function, δ(t) is:
L[δ(t)] = 1
So, when we plug this result into our relationship between the input, output, and
transfer function, we get:
Y (s) = X(s)H(s)
Y (s) = (1)H(s)
Y (s) = H(s)
In other words, the ”impulse response” is the output of the system when we input an
impulse function.
From the Laplace Transform table, we can also see that the transform of the unit step
function, u(t) is given by:
L[u(t)] = \frac{1}{s}
Plugging that result into our relation for the transfer function gives us:
Y(s) = X(s)H(s)

Y(s) = \frac{1}{s} H(s)

Y(s) = \frac{H(s)}{s}
And we can see that the step response is simply the impulse response divided by s.
Use MATLAB to find the step response of the following transfer function:
F(s) = \frac{79s^2 + 916s + 1000}{s(s+10)^3}
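One possible way to approach this exercise (a sketch only, assuming the Control System
Toolbox) is:

num = [79 916 1000];                                    % 79s^2 + 916s + 1000
den = conv([1 0], conv([1 10], conv([1 10], [1 10]))); % s(s+10)^3 = s^4 + 30s^3 + 300s^2 + 1000s
F   = tf(num, den);
step(F)          % note the free integrator in F(s): the step response ramps up without settling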
The Frequency Response is similar to the Transfer function, except that it is the re-
lationship between the system output and input in the complex Fourier Domain, not the
Laplace domain. We can obtain the frequency response from the transfer function, by using
the following change of variables:
s = jω
Frequency Response
The frequency response of a system is the relationship of the system's output to its
input, represented in the Fourier Domain.
Figure 22
Because the frequency response and the transfer function are so closely related, typically
only one is ever calculated, and the other is gained by simple variable substitution. How-
ever, despite the close relationship between the two representations, they are both useful
individually, and are each used for different purposes.
9 Sampled Data Systems
In this chapter, we are going to introduce the ideal sampler and the Star Transform.
First, we need to introduce (or review) the Geometric Series infinite sum. The results of
this sum will be very useful in calculating the Star Transform, later.
Consider a sampler device that operates as follows: every T seconds, the sampler reads the
current value of the input signal at that exact moment. The sampler then holds that value
on the output for T seconds, before taking the next sample. We have a generic input to this
system, f(t), and our sampled output will be denoted f*(t). We can then show the following
relationship between the two signals:
f^*(t) = f(0)\left[u(t) - u(t-T)\right] + f(T)\left[u(t-T) - u(t-2T)\right] + \cdots + f(nT)\left[u(t-nT) - u(t-(n+1)T)\right] + \cdots
Note that the value of f * at time t = 1.5 T is the same as at time t = T. This relationship
works for any fractional value.
Taking the Laplace Transform of this infinite sequence will yield us with a special result
called the Star Transform. The Star Transform is also occasionally called the ”Starred
Transform” in some texts.
Before we talk about the Star Transform or even the Z-Transform, it is useful for us to
review the mathematical background behind solving infinite series. Specifically, because of
the nature of these transforms, we are going to look at methods to solve for the sum of a
geometric series.
A geometric series is a sum of values with increasing exponents, as such:

\sum_{k=0}^{n} ar^k = ar^0 + ar^1 + ar^2 + ar^3 + \cdots + ar^n
1 https://en.wikipedia.org/wiki/Geometric%20progression
In the equation above, notice that each term in the series has a coefficient value, a. We can
optionally factor out this coefficient, if the resulting equation is easier to work with:
a\sum_{k=0}^{n} r^k = a\left(r^0 + r^1 + r^2 + r^3 + \cdots + r^n\right)
Once we have an infinite series in either of these formats, we can conveniently solve for the
total sum of this series using the following equation:
a\sum_{k=0}^{n} r^k = a\,\frac{1 - r^{n+1}}{1 - r}
Let's say that we start our series off at an index that isn't zero. Let's say for instance that
we start our series off at k = m, where m might be 1 or 100. Let's see:
\sum_{k=m}^{n} ar^k = ar^m + ar^{m+1} + ar^{m+2} + ar^{m+3} + \cdots + ar^n

\sum_{k=m}^{n} ar^k = \frac{a(r^m - r^{n+1})}{1 - r}
With that result out of the way, now we need to worry about making this series converge.
In the above sum, we know that n is approaching infinity (because this is an infinite sum).
Therefore, any term that contains the variable n is a matter of worry when we are trying to
make this series converge. If we examine the above equation, we see that there is one term
in the entire result with an n in it, and from that, we can set a fundamental inequality to
govern the geometric series.
r^{n+1} < \infty

|r| < 1

Therefore, we come to the final result: The geometric series converges if and only if
the magnitude of r is less than one.
The Star Transform
F^*(s) = L^*[f(t)] = \sum_{k=0}^{\infty} f(kT) e^{-skT}
The Star Transform depends on the sampling time T and is different for a single signal
depending on the frequency at which the signal is sampled. Since the Star Transform is
defined as an infinite series, it is important to note that some inputs to the Star Transform
will not converge, and therefore some functions do not have a valid Star Transform. Also, it
is important to note that the Star Transform may only be valid under a particular region
of convergence. We will cover this topic more when we discuss the Z-transform.
a https://en.wikibooks.org/wiki/Complex%20Analysis%2FResidue%20Theory
The Laplace Transform and the Star Transform are clearly related, because we obtained
the Star Transform by using the Laplace Transform on a time-domain signal. However,
the method to convert between the two results can be a slightly difficult one. To find the
Star Transform of a Laplace function, we must take the residues of the Laplace equation,
as such:
X^*(s) = \sum \left[\text{residues of } X(\lambda)\,\frac{1}{1 - e^{-T(s-\lambda)}}\right]_{\text{at poles of } X(\lambda)}
This math is advanced for most readers, so we can also use an alternate method, as follows:
X^*(s) = \frac{1}{T}\sum_{n=-\infty}^{\infty} X(s + jn\omega_s) + \frac{x(0)}{2}
Neither one of these methods are particularly easy, however, and therefore we will not discuss
the relationship between the Laplace transform and the Star Transform any more than is
absolutely necessary in this book. Suffice it to say, however, that the Laplace transform
and the Star Transform are related mathematically.
2 https://en.wikipedia.org/wiki/Star%20transform
In some systems, we may have components that are both continuous and discrete in nature.
For instance, our feedback loop might consist of an Analog-To-Digital converter, followed by
a computer (for processing), and then a Digital-To-Analog converter. In this case, the
computer is acting on a digital signal, but the rest of the system is acting on continuous
signals. Star transforms can interact with Laplace transforms in some of the following ways:
Given: Y(s) = X^*(s)H(s)
Then: Y^*(s) = X^*(s)H^*(s)

Given: Y(s) = X(s)H(s)
Then: Y^*(s) = \overline{XH}^*(s), and Y^*(s) ≠ X^*(s)H^*(s)

Where \overline{XH}^*(s) is the Star Transform of the product X(s)H(s).
The Star Transform is defined as being an infinite series, so it is critically important that
the series converge (not reach infinity), or else the result will be nonsensical. Since the
Star Transform is a geometric series (for many input signals), we can use geometric series
analysis to show whether the series converges, and even under what particular conditions
the series converges. The restrictions on the star transform that allow it to converge are
known as the region of convergence (ROC) of the transform. Typically a transform must
be accompanied by the explicit mention of the ROC.
Let us say now that we have a discrete data set that is sampled at regular intervals. We
can call this set x[n]:
3 https://en.wikipedia.org/wiki/Z-transform
We can utilize a special transform, called the Z-transform, to make dealing with this set
easier:

Note: the definition below is also known as the Bilateral Z-Transform. We will only
discuss this version of the transform in this book.
[Z Transform]
X(z) = Z\{x[n]\} = \sum_{n=-\infty}^{\infty} x[n] z^{-n}
a https://en.wikibooks.org/wiki/Control%20Systems%2FTransforms%20Appendix
Like the Star Transform the Z Transform is defined as an infinite series and therefore we
need to worry about convergence. In fact, there are a number of instances that have identical
Z-Transforms, but different regions of convergence (ROC). Therefore, when talking about
the Z transform, you must include the ROC, or you are missing valuable information.
Like the Laplace Transform, in the Z-domain we can use the input-output relationship of
the system to define a transfer function.
Figure 23
The transfer function in the Z domain operates exactly the same as the transfer function
in the S Domain:
H(z) = \frac{Y(z)}{X(z)}
Z{h[n]} = H(z)
Similarly, the value h[n] which represents the response of the digital system is known as the
impulse response of the system. It is important to note, however, that the definition of
an ”impulse” is different in the analog and digital domains.
The inverse Z-Transform is given by the following contour integral:

x[n] = Z^{-1}\{X(z)\} = \frac{1}{2\pi j} \oint_C X(z) z^{n-1}\, dz

Where C is a counterclockwise closed path encircling the origin and entirely in the region
of convergence (ROC). The contour or path, C, must encircle all of the poles of X(z).
There is more information about complex integrals in the book Engineering Analysisa .
a https://en.wikibooks.org/wiki/Engineering%20Analysis
This math is relatively advanced compared to some other material in this book, and there-
fore little or no further attention will be paid to solving the inverse Z-Transform in this
manner. Z transform pairs are heavily tabulated in reference texts, so many readers can
consider that to be the primary method of solving for inverse Z transforms. There are a
number of Z-transform pairs available in table form in The Appendix4 .
Like the Laplace Transform, the Z Transform also has an associated final value theorem:
[Final Value Theorem (Z)]

x[\infty] = \lim_{z \to 1} (z - 1) X(z)
This equation can be used to find the steady-state response of a system, and also to calculate
the steady-state error of the system.
9.5 Star ↔ Z
The Z transform is related to the Star transform though the following change of variables:
z = esT
4 Chapter 7 on page 49
Notice that in the Z domain, we don't maintain any information on the sampling period, so
converting to the Z domain from a Star Transformed signal loses that information. When
converting back to the star domain however, the value for T can be re-inserted into the
equation, if it is still available.
Also of some importance is the fact that the Z transform is bilateral, while the Star Transform
is unilateral. This means that we can only convert between the two transforms if the sampled
signal is zero for all values of n < 0.
Because the two transforms are so closely related, it can be said that the Z transform is
simply a notational convenience for the Star Transform. With that said, this book could
easily use the Star Transform for all problems, and ignore the added burden of Z trans-
form notation entirely. A common example of this is Richard Hamming's book ”Numerical
Methods for Scientists and Engineers” which uses the Fourier Transform for all problems,
considering the Laplace, Star, and Z-Transforms to be merely notational conveniences.
However, the Control Systems wikibook is under the impression that the correct utilization
of different transforms can make problems more easy to solve, and we will therefore use a
multi-transform approach.
9.5.1 Z plane
Note:
The lower-case z is the name of the variable, and the upper-case Z is the name of the
Transform and the plane.
z is a complex variable with a real part and an imaginary part. In other words, we can
define z as such:
z = Re(z) + j Im(z)
Since z can be broken down into two independent components, it often makes sense to graph
the variable z on the Z-plane. In the Z-plane, the horizontal axis represents the real part
of z, and the vertical axis represents the magnitude of the imaginary part of z.
Notice also that if we define z in terms of the star-transform relation:
z = esT
s = σ + jω
Through Euler's formula, we can separate out the complex exponential as such:
M = eσT
ϕ = ωT
z = M cos(ϕ) + jM sin(ϕ)
Which is clearly a polar representation of z, with the magnitude of the polar function (M)
based on the real-part of s, and the angle of the polar function (φ) is based on the imaginary
part of s.
To best teach the region of convergence (ROC) for the Z-transform, we will do a quick
example.
For example, let us find the Z-transform and region of convergence of the signal
x[n] = e^{-2n} u[n]:

X(z) = \sum_{n=-\infty}^{\infty} e^{-2n} u[n] z^{-n}

Note that we can remove the unit step function, and change the limits of the sum:
X(z) = \sum_{n=0}^{\infty} e^{-2n} z^{-n}

This is because the series is 0 for all n < 0. If we try to combine the n terms, we get the
following result:

X(z) = \sum_{n=0}^{\infty} (e^2 z)^{-n}
Once we have our series in this term, we can break this down to look like our geometric
series:
a=1
r = (e2 z)−1
And finally, we can find our final value, using the geometric series formula:
a\sum_{k=0}^{n} r^k = a\,\frac{1 - r^{n+1}}{1 - r} = 1 \cdot \frac{1 - \left((e^2 z)^{-1}\right)^{n+1}}{1 - (e^2 z)^{-1}}
Again, we know that to make this series converge, we need to make the r value less
than 1:

\left|(e^2 z)^{-1}\right| = \left|\frac{1}{e^2 z}\right| \leq 1

|e^2 z| \geq 1

And finally we obtain the region of convergence for this Z-transform:

|z| \geq \frac{1}{e^2}
z and s are complex variables, and therefore we need to take the magnitude in our
ROC calculations. The ”Absolute Value symbols” are actually the ”magnitude
calculation”, which is defined as follows: if x = A + jB, then

|x| = \sqrt{A^2 + B^2}
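As a cross-check of this result (assuming the Symbolic Math Toolbox; the ztrans function
computes the unilateral sum starting at n = 0, which matches this example):

syms n z
x = exp(-2*n);           % the signal e^(-2n) u[n] from the example above
X = ztrans(x, n, z)      % returns z/(z - exp(-2)), i.e. 1/(1 - (e^2 z)^-1), valid for |z| > 1/e^2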
9.5.3 Laplace ↔ Z
There are no easy, direct ways to convert between the Laplace transform and the Z trans-
form. Nearly all methods of conversion reproduce some aspects of the original
equation faithfully, and incorrectly reproduce other aspects. For some of the main mapping
techniques between the two, see the Z Transform Mappings Appendix5 .
However, there are some topics that we need to discuss. First and foremost, conversions
between the Laplace domain and the Z domain are not linear; this leads to some of the
following problems:
1. L[G(z)H(z)] ≠ G(s)H(s)
2. Z[G(s)H(s)] ≠ G(z)H(z)
This means that when we combine two functions in one domain multiplicatively, we must
find a combined transform in the other domain. Here is how we denote this combined
transform:
Z[G(s)H(s)] = \overline{GH}(z)
5 https://en.wikibooks.org/wiki/Control%20Systems%2FZ%20Transform%20Mappings
Notice that we use a horizontal bar over top of the multiplied functions, to denote that
we took the transform of the product, not of the individual pieces. However, if we have a
system that incorporates a sampler, we can show a simple result. If we have the following
format:
Y (s) = X ∗ (s)H(s)
then, as shown above, Y^*(s) = X^*(s)H^*(s), and once we are in the star domain, we can
do a direct change of variables to reach the Z domain:

Y(z) = X(z)H(z)
Note that we can only make this equivalence relationship if the system incorporates an ideal
sampler, and therefore one of the multiplicative terms is in the star domain.
9.5.4 Example
Let's say that we have the following equation in the Laplace domain:
Y (s) = A∗ (s)B(s) + C(s)D(s)
And because we have a discrete sampler in the system, we want to analyze it in the Z
domain. We can break up this equation into two separate terms, and transform each:
Z[A∗ (s)B(s)] → Z[A∗ (s)B ∗ (s)] = A(z)B(z)
And
Z[C(s)D(s)] = CD(z)
And when we add them together, we get our result:
Y (z) = A(z)B(z) + CD(z)
9.6 Z ↔ Fourier
By substituting variables, we can relate the Star transform to the Fourier Transform as
well:
esT = ejω
e(σ+jω)T = ejω
If we assume that T = 1, we can relate the two equations together by setting the real part
of s to zero. Notice that the relationship between the Laplace and Fourier transforms is
mirrored here, where the Fourier transform is the Laplace transform with no real-part to
the transform variable.
There are a number of discrete-time variants to the Fourier transform as well, which are
not discussed in this book. For more information about these variants, see Digital Signal
Processing6 .
9.7 Reconstruction
Some of the easiest reconstruction circuits are called ”Holding circuits”. Once a signal has
been transformed using the Star Transform (passed through an ideal sampler), the signal
must be ”reconstructed” using one of these hold systems (or an equivalent) before it can be
analyzed in a Laplace-domain system.
If we have a sampled signal denoted by the Star Transform X ∗ (s), we want to
reconstruct that signal into a continuous-time waveform, so that we can manipulate it
using Laplace-transform techniques.
Let's say that we have the sampled input signal, a reconstruction circuit denoted G(s), and
an output denoted with the Laplace-transform variable Y(s). We can show the relationship
as follows:
Y (s) = X ∗ (s)G(s)
Reconstruction circuits then, are physical devices that we can use to convert a digital,
sampled signal into a continuous-time domain, so that we can take the Laplace transform
of the output signal.
6 https://en.wikibooks.org/wiki/Digital%20Signal%20Processing
A zero-order hold circuit is a circuit that essentially inverts the sampling process: The
value of the sampled signal at time t is held on the output for T time. The output waveform
of a zero-order hold circuit therefore looks like a staircase approximation to the original
waveform.
The transfer function for a zero-order hold circuit, in the Laplace domain, is written as
such:
[Zero Order Hold]
G_{h0}(s) = \frac{1 - e^{-Ts}}{s}
The Zero-order hold is the simplest reconstruction circuit, and (like the rest of the circuits
on this page) assumes zero processing delay in converting from digital to analog.
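In MATLAB (assuming the Control System Toolbox), a continuous model can be discretized
under a zero-order-hold assumption with c2d; the plant and sampling period below are
invented for illustration:

G  = tf(1, [1 1]);            % assumed continuous-time plant 1/(s+1)
T  = 0.1;                     % assumed sampling period in seconds
Gd = c2d(G, T, 'zoh')         % discrete equivalent, assuming a ZOH on the input
step(G, 'b', Gd, 'r--')       % the discrete response follows the continuous one in staircase steps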
Figure 25 A continuous input signal (gray) and the sampled signal with a zero-order
hold (red)
The zero-order hold creates a step output waveform, but this isn't always the best way to
reconstruct the circuit. Instead, the First-Order Hold circuit takes the derivative of the
waveform at the time t, and uses that derivative to make a guess as to where the output
waveform is going to be at time (t + T). The first-order hold circuit then ”draws a line”
from the current position to the expected future position, as the output of the waveform.
[First Order Hold]
G_{h1}(s) = \frac{1 + Ts}{T}\left[\frac{1 - e^{-Ts}}{s}\right]^2
Keep in mind, however, that the next value of the signal will probably not be the same as
the expected value of the next data point, and therefore the first-order hold may have a
number of discontinuities.
Figure 27 An input signal (grey) and the first-order hold circuit output (red)
The Zero-Order hold outputs the current value onto the output, and keeps it level through-
out the entire bit time. The first-order hold uses the function derivative to predict the next
value, and produces a series of ramp outputs to produce a fluctuating waveform. Some-
times however, neither of these solutions are desired, and therefore we have a compromise:
Fractional-Order Hold. Fractional order hold acts like a mixture of the other two holding
circuits, and takes a fractional number k as an argument. Notice that k must be between 0
and 1 for this circuit to work correctly.
[Fractional Order Hold]
G_{hk}(s) = \frac{1 - e^{-Ts}}{s}\left(1 - ke^{-Ts}\right) + \frac{k}{Ts^2}\left(1 - e^{-Ts}\right)^2
This circuit is more complicated than either of the other hold circuits, but sometimes added
complexity is worth it if we get better performance from our reconstruction circuit.
Further reading
Figure 29 An input signal (grey) and the output signal through a linear approximation
circuit
7 https://en.wikibooks.org/wiki/Digital%20Signal%20Processing%2FZ%20Transform
8 https://en.wikibooks.org/wiki/Complex%20Analysis%2FResidue%20Theory
9 https://en.wikibooks.org/wiki/Analog%20and%20Digital%20Conversion
10 System Delays
10.1 Delays
A system can be built with an inherent delay. Delays are units that cause a time-shift in
the input signal, but that don't affect the signal characteristics. An ideal delay is a delay
system that doesn't affect the signal characteristics at all, and that delays the signal for
an exact amount of time. Some delays, like processing delays or transmission delays, are
unintentional. Other delays however, such as synchronization delays, are an integral part
of a system. This chapter will talk about how delays are utilized and represented in the
Laplace Domain. Once we represent a delay in the Laplace domain, it is an easy matter,
through change of variables, to express delays in other domains.
An ideal delay causes the input function to be shifted forward in time by a certain specified
amount of time. Systems with an ideal delay cause the system output to be delayed by a
finite, predetermined amount of time.
Figure 30
Let's say that we have a function in time that is time-shifted by a certain constant time
period T. For convenience, we will denote this function as x(t - T). Now, we can show that
the Laplace transform of x(t - T) is the following:

L\{x(t - T)\} = e^{-sT} X(s)
What this demonstrates is that time-shifts in the time-domain become exponentials in the
complex Laplace domain.
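As a small sketch (assuming the Control System Toolbox), an ideal delay can be attached to
a model directly; the plant and the 0.5 second delay below are invented for illustration:

G = tf(1, [1 1], 'InputDelay', 0.5);   % 1/(s+1) preceded by an ideal 0.5 s delay, i.e. e^(-0.5s)/(s+1)
step(G)                                % the response is simply shifted 0.5 s later in time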
Since we know the following general relationship between the Z Transform and the Star
Transform:
z ⇔ esT
We can show what a time shift of n_s samples in a discrete time domain becomes in the Z
domain:

Z\{x[n - n_s]\} = z^{-n_s} X(z)
A time-shift in the time domain becomes multiplication by an exponential in the Laplace domain.
This would seem to show that a time shift can have an effect on the stability of a system,
and occasionally can cause a system to become unstable. We define a new parameter called
the time margin as the amount of time that we can shift an input function before the
system becomes unstable. If the system can survive any arbitrary time shift without going
unstable, we say that the time margin of the system is infinite.
When speaking of sinusoidal signals, it doesn't make sense to talk about ”time shifts”, so
instead we talk about ”phase shifts”. Therefore, it is also common to refer to the time margin
as the phase margin of the system. The phase margin denotes the amount of phase shift
that we can apply to the system input before the system goes unstable.
We denote the phase margin for a system with a lowercase Greek letter φ (phi). Phase
margin is defined as such for a second-order system:
[Delay Margin]
\phi_m = \tan^{-1}\left[\frac{2\zeta}{\left(\sqrt{4\zeta^4 + 1} - 2\zeta^2\right)^{1/2}}\right]

\phi_m \approx 100\zeta
The Greek letter zeta (ζ) is a quantity called the damping ratio, and we discuss this
quantity in more detail in the next chapter.
Transform-Domain Delays
The ordinary Z-Transform does not account for a system which experiences an arbitrary
time delay, or a processing delay. The Z-Transform can, however, be modified to account for
an arbitrary delay. This new version of the Z-transform is frequently called the Modified
Z-Transform, although in some literature (notably in Wikipedia), it is known as the
Advanced Z-Transform.
To demonstrate the concept of an ideal delay, we will show how the star transform responds
to a time-shifted input with a specified delay of time T. The function X^*(s, \Delta) is the
delayed star transform with a delay parameter ∆. The delayed star transform is defined in
terms of the star transform as such:
[Delayed Star Transform]
Since we know that the Star Transform is related to the Z Transform through the following
change of variables:
z = e^{sT}
We can interpret the above result to show how the Z Transform responds to a delay:
Z(x[t - T]) = X(z) z^{-T}

X(z, \Delta) = Z\{x(t - \Delta)\} = Z\left\{X(s) e^{-\Delta T s}\right\}
And finally:
[Delayed Z Transform]
Z(x[n], \Delta) = X(z, \Delta) = \sum_{n=-\infty}^{\infty} x[n - \Delta] z^{-n}
The Delayed Z-Transform has some uses, but mathematicians and engineers have decided
that a more useful version of the transform was needed. The new version of the Z-Transform,
which is similar to the Delayed Z-transform with a change of variables, is known as the
Modified Z-Transform. The Modified Z-Transform is defined in terms of the delayed Z
transform as follows:
X(z, m) = X(z, \Delta)\big|_{\Delta \to 1-m} = Z\left\{X(s) e^{-\Delta T s}\right\}_{\Delta \to 1-m}
X(z, m) = Z(x[n], m) = \sum_{n=-\infty}^{\infty} x[n + m - 1] z^{-n}
1 https://en.wikipedia.org/wiki/Advanced%20Z-transform
11 Poles and Zeros
Poles and Zeros of a transfer function are the frequencies for which the value of the de-
nominator and numerator of transfer function becomes zero respectively. The values of the
poles and the zeros of a system determine whether the system is stable, and how well the
system performs. Control systems, in the most simple sense, can be designed simply by
assigning specific values to the poles and zeros of the system.
Physically realizable control systems must have a number of poles greater than or equal to
the number of zeros. Systems that satisfy this relationship are called Proper. We will elaborate on
this below.
For example, consider a transfer function with three poles:

H(s) = \frac{a}{(s - l)(s - m)(s - n)}
The poles are located at s = l, m, n. Now, we can use partial fraction expansion to separate
out the transfer function:
H(s) = \frac{a}{(s - l)(s - m)(s - n)} = \frac{A}{s - l} + \frac{B}{s - m} + \frac{C}{s - n}
Using the inverse transform on each of these component fractions (looking up the transforms
in our table), we get the following:

h(t) = A e^{lt} u(t) + B e^{mt} u(t) + C e^{nt} u(t)

But, since s is a complex variable, l, m, and n can all potentially be complex numbers, with
a real part (σ) and an imaginary part (jω). If we just look at the first term, and expand the
complex exponential with Euler's formula:

A e^{lt} u(t) = A e^{(\sigma_l + j\omega_l)t} u(t) = A e^{\sigma_l t}\left(\cos(\omega_l t) + j\sin(\omega_l t)\right) u(t)
1 https://en.wikipedia.org/wiki/Euler%27s%20identity
If a complex pole is present it is always accompanied by another pole that is its complex
conjugate. The imaginary parts of their time domain representations thus cancel and we
are left with 2 of the same real parts. Assuming that the complex conjugate pole of the
first term is present, we can take 2 times the real part of this equation and we are left with
our final result:

2 A e^{\sigma_l t} \cos(\omega_l t) u(t)
We can see from this equation that every pole will have an exponential part, and a sinusoidal
part to its response. We can also go about constructing some rules:
1. if σl = 0, the response of the pole is a perfect sinusoid (an oscillator)
2. if ωl = 0, the response of the pole is a perfect exponential.
3. if σl < 0, the exponential part of the response will decay towards zero.
4. if σl > 0, the exponential part of the response will rise towards infinity.
From the last two rules, we can see that all poles of the system must have negative real
parts, and therefore they must all have the form (s + l) for the system to be stable. We
will discuss stability in later chapters.
H(s) = \frac{N(s)}{D(s)}
Where N(s) and D(s) are simple polynomials. Zeros are the roots of N(s) (the numerator
of the transfer function) obtained by setting N(s) = 0 and solving for s.
The polynomial order of a function is the value of the highest exponent in the poly-
nomial.
Poles are the roots of D(s) (the denominator of the transfer function), obtained by setting
D(s) = 0 and solving for s. Because of our restriction above, that a transfer function must
not have more zeros than poles, we can state that the polynomial order of D(s) must be
greater than or equal to the polynomial order of N(s).
Effects of Poles and Zeros
11.3.1 Example
We define N(s) and D(s) to be the numerator and denominator polynomials, as such:
N (s) = s + 2
D(s) = s^2 + 0.25
We set N(s) to zero, and solve for s:
N (s) = s + 2 = 0 → s = −2
So we have a zero at s → -2. Now, we set D(s) to zero, and solve for s to obtain the
poles of the equation:
D(s) = s^2 + 0.25 = 0 \to s = +i\sqrt{0.25},\ -i\sqrt{0.25}
And simplifying this gives us poles at: -i/2 , +i/2. Remember, s is a complex variable,
and it can therefore take imaginary and real values.
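The same answer can be checked numerically in MATLAB; this sketch assumes the Control
System Toolbox for the pole-zero plot:

N = [1 2];                 % N(s) = s + 2
D = [1 0 0.25];            % D(s) = s^2 + 0.25
roots(N)                   % zero at s = -2
roots(D)                   % poles at s = +0.5i and s = -0.5i
pzmap(tf(N, D))            % pole-zero map of the transfer function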
As s approaches a zero, the numerator of the transfer function (and therefore the transfer
function itself) approaches the value 0. When s approaches a pole, the denominator of the
transfer function approaches zero, and the value of the transfer function approaches infinity.
An output value of infinity should raise an alarm bell for people who are familiar with BIBO
stability. We will discuss this later.
As we have seen above, the locations of the poles, and the values of the real and imaginary
parts of the pole determine the response of the system. Real parts correspond to exponen-
tials, and imaginary parts correspond to sinusoidal values. Addition of poles to the transfer
function has the effect of pulling the root locus to the right, making the system less stable.
Addition of zeros to the transfer function has the effect of pulling the root locus to the left,
making the system more stable.
A second-order transfer function is often written in the following standard form:

H(s) = \frac{K\omega^2}{s^2 + 2\zeta\omega s + \omega^2}
Where K is the system gain, ζ is called the damping ratio of the function, and ω is
called the natural frequency of the system. If ζ and ω are known exactly for a second-order
system, the time responses can be easily plotted and stability can easily be checked. More
information on second order systems can be found here2 .
The damping ratio of a second-order system, denoted with the Greek letter zeta (ζ), is
a real number that defines the damping properties of the system. More damping has the
effect of less percent overshoot, and slower settling time. Damping is the inherent ability
of the system to oppose the oscillatory nature of the system's transient response. Larger
values of the damping coefficient or damping factor produce transient responses that are
less oscillatory.
ω → ωn
We will omit the subscript when it is clear that we are talking about the natural frequency,
but we will include the subscript when we are using other values for the variable ω. Also,
ω = ωn when ζ = 0.
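For instance, a small Python sketch (with assumed example values of ζ and ω, not from the text) relating this standard form to the pole locations:

import numpy as np
K, zeta, omega = 1.0, 0.5, 2.0                 # assumed example values
den = [1.0, 2.0 * zeta * omega, omega ** 2]    # s^2 + 2*zeta*omega*s + omega^2
poles = np.roots(den)
print(poles)                                    # -zeta*omega +/- j*omega*sqrt(1 - zeta^2)
p = poles[0]
print(abs(p), -p.real / abs(p))                 # recovers omega and zeta from a pole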
2 http://wikis.controltheorypro.com/index.php?title=Second_Order_Systems
12 State-Space Equations
The ”Classical” method of controls (what we have been studying so far) has been based
mostly in the transform domain. When we want to control the system in general we use the
Laplace transform (Z-Transform for digital systems) to represent the system, and when we
want to examine the frequency characteristics of a system, we use the Fourier Transform.
The question arises, why do we do this?
Let's look at a basic second-order Laplace Transform transfer function:
Y(s)/X(s) = G(s) = (1 + s) / (1 + 2s + 5s²)
And we can decompose this equation in terms of the system inputs and outputs:
(1 + 2s + 5s²)Y(s) = (1 + s)X(s)
Now, when we take the inverse Laplace transform of our equation, we can see that:
5y″(t) + 2y′(t) + y(t) = x′(t) + x(t)
The Laplace transform obscures the fact that we are really dealing with second-order dif-
ferential equations: it moves a system out of the time domain and into the complex frequency
domain, so that we can study and manipulate our systems as algebraic polynomials instead
of linear ODEs. Given the complexity of differential equations, why would we ever
want to work in the time domain?
It turns out that if we decompose our higher-order differential equations into multiple first-
order equations, we can find a new method for easily manipulating the system without
having to use integral transforms. The solution to this problem is state variables. By
taking our multiple first-order differential equations, and analyzing them in vector form,
we can not only do the same things we were doing in the time domain using simple matrix
algebra, but now we can easily account for systems with multiple inputs and multiple out-
puts, without adding much unnecessary complexity. This demonstrates why the ”modern”
state-space approach to controls has become popular.
12.2 State-Space
State
Central to the state-space notation is the idea of a state. A state of a system is the
current value of the internal elements of the system, which change separately from (but not
completely unrelated to) the output of the system. In essence, the state of a system is an explicit account
of the values of the internal system components. Here are some examples:
Consider an electric circuit with both an input and an output terminal. This circuit
may contain any number of inductors and capacitors. The state variables may represent
the magnetic and electric fields of the inductors and capacitors, respectively.
Consider a spring-mass-dashpot system. The state variables may represent the com-
pression of the spring, or the acceleration at the dashpot.
Consider a chemical reaction where certain reagents are poured into a mixing container,
and the output is the amount of the chemical product produced over time. The state
variables may represent the amounts of un-reacted chemicals in the container, or other
properties such as the quantity of thermal energy in the container (that can serve to
facilitate the reaction).
1 https://en.wikipedia.org/wiki/State%20space%20%28controls%29
State Variables
When modeling a system using a state-space equation, we first need to define three vectors:
Input variables
A SISO (Single Input Single Output) system will only have a single input value, but a
MIMO system may have multiple inputs. We need to define all the inputs to the system,
and we need to arrange them into a vector.
Output variables
This is the system output value, and in the case of MIMO systems, we may have several.
Output variables should be independent of one another, and only dependent on a linear
combination of the input vector and the state vector.
State Variables
The state variables represent values from inside the system, that can change over time.
In an electric circuit, for instance, the node voltages or the mesh currents can be state
variables. In a mechanical system, the forces applied by springs, gravity, and dashpots can
be state variables.
We denote the input variables with u, the output variables with y, and the state variables
with x. In essence, we have the following relationship:
y = f (x, u)
Where f(x, u) is our system. Also, the state variables can change with respect to the current
state and the system input:
x′ = g(x, u)
Where x' is the rate of change of the state variables. We will define f(u, x) and g(u, x) in
the next chapter.
In the Laplace domain, if we want to account for systems with multiple inputs and multiple
outputs, we are going to need to rely on the principle of superposition to create a system
of simultaneous Laplace equations for each output and each input. For such systems,
the classical approach not only doesn't simplify the situation, but because the systems of
equations need to be transformed into the frequency domain first, manipulated, and then
transformed back into the time domain, they can actually be more difficult to work with.
However, the Laplace domain technique can be combined with the State-Space techniques
discussed in the next few chapters to bring out the best features of both techniques. We
will discuss MIMO systems in the MIMO Systems Chapter2 .
Note:
If x'(t) and y(t) are not linear combinations of x(t) and u(t), the system is said to be
nonlinear. We will attempt to discuss non-linear systems in a later chapter.
The first equation shows that the system state change is dependent on the previous system
state, the initial state of the system, the time, and the system inputs. The second equation
shows that the system output is dependent on the current system state, the system input,
and the current time.
If the system state change x'(t) and the system output y(t) are linear combinations of the
system state and input vectors, then we can say the systems are linear systems, and we can
rewrite them in matrix form:
[State Equation]
x′ = A(t)x(t) + B(t)u(t)
[Output Equation]
y(t) = C(t)x(t) + D(t)u(t)
The State Equation shows the relationship between the system's current state and its
input, and the future state of the system. The Output Equation shows the relationship
between the system state and its input, and the output. These equations show that in a
given system, the current output is dependent on the current input and the current state.
The future state is also dependent on the current state and the current input.
It is important to note at this point that the state space equations of a particular system
are not unique, and there are an infinite number of ways to represent these equations
by manipulating the A, B, C and D matrices using row operations. There are a number
of ”standard forms” for these matrices, however, that make certain computations easier.
Converting between these forms will require knowledge of linear algebra.
12.5.1 Matrices: A B C D
We've bolded several quantities to try and reinforce the fact that they can be vectors, not
just scalar quantities. If these systems are time-invariant, we can simplify them by removing
the time variables:
Now, if we take the partial derivatives of these functions with respect to the input and the
state vector at time t0 , we get our system matrices:
A = gx [x(0), u(0)]
B = gu [x(0), u(0)]
C = hx [x(0), u(0)]
D = hu [x(0), u(0)]
In our time-invariant state space equations, we write these matrices and their relationships
as:
x′(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
We have four constant matrices: A, B, C, and D. We will explain these matrices below:
Matrix A
Matrix A is the system matrix, and relates how the current state affects the state change
x'. If the state change is not dependent on the current state, A will be the zero matrix.
The exponential of the state matrix, eAt is called the state transition matrix, and is an
important function that we will describe below.
Matrix B
Matrix B is the control matrix, and determines how the system input affects the state
change. If the state change is not dependent on the system input, then B will be the zero
matrix.
Matrix C
Matrix C is the output matrix, and determines the relationship between the system state
and the system output.
Matrix D
Matrix D is the feed-forward matrix, and allows for the system input to affect the
system output directly. A basic feedback system like those we have previously considered
do not have a feed-forward element, and therefore for most of the systems we have already
considered, the D matrix is the zero matrix.
Because we are adding and multiplying multiple matrices and vectors together, we need to
be absolutely certain that the matrices have compatible dimensions, or else the equations
will be undefined. For integer values p, q, and r, the dimensions of the system matrices and
vectors are defined as follows:
Vectors:
• x : p × 1
• x′ : p × 1
• u : q × 1
• y : r × 1
Matrices:
• A : p × p
• B : p × q
• C : r × p
• D : r × q
Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q
If the matrix and vector dimensions do not agree with one another, the equations are invalid
and the results will be meaningless. Matrices and vectors must have compatible dimensions
or they cannot be combined using matrix operations.
For the rest of the book, we will be using the small template on the right as a reminder
about the matrix dimensions, so that we can keep a constant notation throughout the book.
The state equations and the output equations of systems can be expressed in terms of
matrices A, B, C, and D. Because the form of these equations is always the same, we can
use an ordered quadruplet to denote a system. We can use the shorthand (A, B, C, D) to
denote a complete state-space representation. Also, because the state equation is very
important for our later analysis, we can write an ordered pair (A, B) to refer to the state
equation:
(A, B) → x′ = Ax + Bu
(A, B, C, D) → { x′ = Ax + Bu,  y = Cx + Du }
The beauty of state equations is that they can be used to transparently describe systems
that are both continuous and discrete in nature. Some texts will differentiate notation
between discrete and continuous cases, but this text will not make such a distinction. In-
stead we will opt to use the generic coefficient matrices A, B, C and D for both continuous
and discrete systems. Occasionally this book may employ the subscript C to denote a
continuous-time version of the matrix, and the subscript D to denote the discrete-time ver-
sion of the same matrix. Other texts may use the letters F, H, and G for continuous systems
and Γ, and Θ for use in discrete systems. However, if we keep track of our time-domain
system, we don't need to worry about such notations.
Let's say that we have a general 3rd order differential equation in terms of input u(t) and
output y(t):
d³y(t)/dt³ + a2 d²y(t)/dt² + a1 dy(t)/dt + a0 y(t) = u(t)
Now, we can define the state vector x in terms of the individual x components, and we
can create the future state vector as well:
x = [x1; x2; x3],   x′ = [x′1; x′2; x′3]
And with that, we can assemble the state-space equations for the system:
x′ = [0 1 0; 0 0 1; −a0 −a1 −a2] x(t) + [0; 0; 1] u(t)
y(t) = [1 0 0] x(t)
Granted, this is only a simple example, but the method should become apparent to
most readers.
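For instance, with assumed example coefficients (not given in the text), these matrices can be entered and simulated directly; a Python sketch using scipy:

import numpy as np
from scipy.signal import StateSpace, step
a0, a1, a2 = 2.0, 3.0, 4.0                     # assumed example coefficients
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a0, -a1, -a2]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[0.0]])
t, y = step(StateSpace(A, B, C, D))            # step response of y(t)
print(y[-1])                                    # approaches 1/a0 = 0.5 for this stable choice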
The method of obtaining the state-space equations from the Laplace domain transfer func-
tions are very similar to the method of obtaining them from the time-domain differential
equations. We call the process of converting a system description from the Laplace domain
to the state-space domain realization. We will discuss realization in more detail in a later
chapter. In general, let's say that we have a transfer function of the form:
T(s) = (s^m + a_{m−1}s^{m−1} + · · · + a0) / (s^n + b_{n−1}s^{n−1} + · · · + b0)
A = [ 0    1    0   · · ·  0
      0    0    1   · · ·  0
      ⋮    ⋮    ⋮    ⋱     ⋮
      0    0    0   · · ·  1
     −b0  −b1  −b2  · · · −b_{n−1} ]
B = [0; 0; ⋮; 0; 1]
C = [a0 a1 · · · a_{m−1}]
D = 0
This form of the equations is known as the controllable canonical form of the system
matrices, and we will discuss this later.
Notice that to perform this method, the denominator and numerator polynomials must be
monic; that is, the coefficient of the highest-order term must be 1. If the coefficient of the
highest-order term is not 1, you must divide your equation by that coefficient to make it 1.
As an important note, remember that the state variables x are user-defined and therefore
are arbitrary. There are any number of ways to define x for a particular problem, each of
which will lead to a different set of state-space equations.
Note: There are an infinite number of equivalent ways to represent a system using state-
space equations. Some ways are better than others. Once these state-space equations
are obtained, they can be manipulated to take a particular form if needed.
Consider the previous continuous-time example. We can rewrite the equation in the form
d/dt [ d²y(t)/dt² + a2 dy(t)/dt + a1 y(t) ] + a0 y(t) = u(t)
x1 = y(t)
x2 = dy(t)/dt
x3 = d²y(t)/dt² + a2 dy(t)/dt + a1 y(t)
with first-order derivatives
x′1 = dy(t)/dt = x2
x′2 = d²y(t)/dt² = −a1 x1 − a2 x2 + x3
(this follows from the definition of x3 above, since d²y(t)/dt² = x3 − a2 x2 − a1 x1)
however, that the variables α and δ do correspond to physical values, and cannot be
changed.
12.8 Discretization
If we have a system (A, B, C, D) that is defined in continuous time, we can discretize the
system so that an equivalent process can be performed using a digital computer. We can
use the definition of the derivative, as such:
x′(t) = lim_{T→0} [x(t + T) − x(t)] / T
And substituting this into the state equation with some approximation (and ignoring the
limit for now) gives us:
lim_{T→0} [x(t + T) − x(t)] / T = Ax(t) + Bu(t)
We are able to remove that limit because in a discrete system, the time interval between
samples is positive and non-negligible. By definition, a discrete system is only defined at
certain time points, and not at all time points as the limit would have indicated. In a
discrete system, we are interested only in the value of the system at discrete points. If those
points are evenly spaced by every T seconds (the sampling time), then the samples of the
system occur at t = kT, where k is an integer. Substituting kT for t into our equation above
gives us:
Or, using the square-bracket shorthand that we've developed earlier, we can write:
In this form, the state-space system can be implemented quite easily into a digital computer
system using software, not complicated analog hardware. We will discuss this relationship
and digital systems more specifically in a later chapter.
We will write out the discrete-time state-space equations as:
x[k + 1] = Ax[k] + Bu[k]
y[k] = Cx[k] + Du[k]
The variable T is a common variable in control systems, especially when talking about the
beginning and end points of a continuous-time system, or when discussing the sampling
time of a digital system. However, another common use of the letter T is to signify the
transpose operation on a matrix. To alleviate this ambiguity, we will denote the transpose
of a matrix with a prime:
Aᵀ → A′ (and similarly Aᴴ → A′ for the conjugate transpose)
This notation is common in other literature, and raises no obvious ambiguities here.
MATLAB Representation
Systems created in this way can be manipulated in the same way that the transfer function
descriptions (described earlier) can be manipulated. To convert a transfer function to a
state-space representation, we can use the tf2ss function:
And to perform the opposite operation, we can use the ss2tf function:
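For readers following along in Python, scipy.signal provides rough counterparts to these MATLAB commands; a sketch using the transfer function from the previous chapter (the numeric values here are only an example):

from scipy.signal import tf2ss, ss2tf
num = [1.0, 1.0]            # numerator 1 + s, highest power first
den = [5.0, 2.0, 1.0]       # denominator 5s^2 + 2s + 1
A, B, C, D = tf2ss(num, den)     # transfer function -> state-space
print(A, B, C, D)
num2, den2 = ss2tf(A, B, C, D)   # state-space -> transfer function
print(num2, den2)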
13 Solutions for Linear Systems
The solutions in this chapter are heavily rooted in prior knowledge of Ordinary Differ-
ential Equationsa . Readers should have a prior knowledge of that subject before reading
this chapter.
a https://en.wikibooks.org/wiki/Ordinary%20Differential%20Equations
The state equation is a first-order linear differential equation, or (more precisely) a system
of linear differential equations. Because this is a first-order equation, we can use results
from Ordinary Differential Equations1 to find a general solution to the equation in terms
of the state-variable x. Once the state equation has been solved for x, that solution can be
plugged into the output equation. The resulting equation will show the direct relationship
between the system input and the system output, without the need to account explicitly
for the internal state of the system. The sections in this chapter will discuss the solutions
to the state-space equations, starting with the easiest case (Time-invariant, no input), and
ending with the most difficult case (Time-variant systems).
x′ = Ax(t) + Bu(t)
We can see that this equation is a first-order differential equation, except that the variables
are vectors, and the coefficients are matrices. However, because of the rules of matrix
calculus, these distinctions don't matter. We can ignore the input term (for now), and
rewrite this equation in the following form:
dx(t)/dt = Ax(t)
1 https://en.wikibooks.org/wiki/Ordinary%20Differential%20Equations
dx(t)/x(t) = A dt
Integrating both sides, and raising both sides to a power of e, we obtain the result:
x(t) = e^{At+C}
Where C is a constant. We can assign D = eC to make the equation easier, but we also
know that D will then be the initial conditions of the system. This becomes obvious if we
plug the value zero into the variable t. The final solution to this equation then is given as:
x(t) = e^{At}x(0)
We call the matrix exponential eAt the state-transition matrix, and calculating it, while
difficult at times, is crucial to analyzing and manipulating systems. We will talk more about
calculating the matrix exponential below.
If, however, our input is non-zero (as is generally the case with any interesting system), our
solution is a little bit more complicated. Notice that now that we have our input term in
the equation, we will no longer be able to separate the variables and integrate both sides
easily.
We subtract to get the Ax(t) term on the left side, and then we do something curious; we
premultiply both sides by the inverse state transition matrix:
e^{−At}x′(t) − e^{−At}Ax(t) = e^{−At}Bu(t)
The rationale for this last step may seem fuzzy at best, so we will illustrate the point with
an example:
13.3.1 Example
Consider the product of two functions:
f(t)g(t)
If we differentiate this product with respect to t, then the result is:
f(t)g′(t) + f′(t)g(t)
If we set our functions accordingly:
f (t) = e−At f ′ (t) = −Ae−At
g(t) = x(t) g ′ (t) = x′ (t)
Then the output result is:
e−At x′ (t) − e−At Ax(t)
If we look at this result, it is the same as from our equation above.
Using the result from our example, we can condense the left side of our equation into a
derivative:
d(e^{−At}x(t))/dt = e^{−At}Bu(t)
Now we can integrate both sides, from the initial time (t0 ) to the current time (t), using a
dummy variable τ , we will get closer to our result. Finally, if we premultiply by eAt , we get
our final result:
[General State Equation Solution]
x(t) = e^{A(t−t0)} x(t0) + ∫_{t0}^{t} e^{A(t−τ)} Bu(τ) dτ
y(t) = C e^{A(t−t0)} x(t0) + C ∫_{t0}^{t} e^{A(t−τ)} Bu(τ) dτ + Du(t)
This is the general Time-Invariant solution to the state space equations, with non-zero
input. These equations are important results, and students who are interested in a further
study of control systems would do well to memorize these equations.
a https://en.wikibooks.org/wiki/Engineering%20Analysis
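As a sketch (with an assumed example system and a unit-step input, neither taken from the text), this solution can be evaluated numerically and compared against a standard simulator:

import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid
from scipy.signal import lsim, StateSpace
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example system
B = np.array([[0.0], [1.0]])
C = np.eye(2)
D = np.zeros((2, 1))
x0 = np.array([1.0, 0.0])
t_end = 2.0
taus = np.linspace(0.0, t_end, 2001)
# x(t) = e^{At} x(0) + integral_0^t e^{A(t - tau)} B u(tau) dtau, with u = 1
integrand = np.array([expm(A * (t_end - tau)) @ B[:, 0] for tau in taus])
x_direct = expm(A * t_end) @ x0 + trapezoid(integrand, taus, axis=0)
# compare against scipy's built-in simulator
_, _, x_hist = lsim(StateSpace(A, B, C, D), U=np.ones_like(taus), T=taus, X0=x0)
print(x_direct)
print(x_hist[-1])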
The state transition matrix, eAt , is an important part of the general state-space solutions
for the time-invariant cases listed above. Calculating this matrix exponential function is
one of the very first things that should be done when analyzing a new system, and the
results of that calculation will tell important information about the system in question.
The matrix exponential can be calculated directly by using a Taylor-Series expansion:
e^{At} = Σ_{n=0}^{∞} (At)^n / n!
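A short sketch (with an assumed example matrix) that sums the first terms of this series and compares the result against a library matrix exponential:

import numpy as np
from scipy.linalg import expm
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example matrix
t = 0.5
S = np.eye(2)          # n = 0 term of the series
term = np.eye(2)
for n in range(1, 20):
    term = term @ (A * t) / n   # now equals (A t)^n / n!
    S = S + term
print(S)
print(expm(A * t))     # library result, for comparison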
More information about diagonal matrices and Jordan-form matrices can be found
in:
Engineering Analysisa
a https://en.wikibooks.org/wiki/Engineering%20Analysis
Also, we can attempt to diagonalize the matrix A into a diagonal matrix or a Jordan
Canonical matrix. The exponential of a diagonal matrix is simply the diagonal elements
individually raised to that exponential. The exponential of a Jordan canonical matrix
is slightly more complicated, but there is a useful pattern that can be exploited to find
the solution quickly. Interested readers should read the relevant passages in Engineering
Analysis2 .
The state transition matrix, and matrix exponentials in general are very important tools in
control engineering.
If a matrix is diagonal, the state transition matrix can be calculated by exponentiating each
diagonal entry individually: each diagonal entry of e^{At} is e raised to the corresponding diagonal entry of At.
If the A matrix is in the Jordan Canonical form, then the matrix exponential can be
generated quickly using the following formula:
e^{Jt} = e^{λt} [ 1  t  t²/2!  · · ·  t^n/n!
                  0  1  t      · · ·  t^{n−1}/(n−1)!
                  ⋮  ⋮  ⋮       ⋱     ⋮
                  0  0  0      · · ·  1 ]
Where λ is the eigenvalue (the value on the diagonal) of the Jordan canonical matrix.
2 https://en.wikibooks.org/wiki/Engineering%20Analysis
We can calculate the state-transition matrix (or any matrix exponential function) by taking
the following inverse Laplace transform:
e^{At} = L⁻¹[(sI − A)⁻¹]
If we know all the eigenvalues of A, we can create our transition matrix T, and our inverse
transition matrix T⁻¹. These matrices will be the matrices of the right and left eigenvectors,
respectively. If we have both the left and the right eigenvectors, we can calculate the
state-transition matrix as:
[Spectral Decomposition]
e^{At} = Σ_{i=1}^{n} e^{λi t} vi wi′
Note that wi ' is the transpose of the ith left-eigenvector, not the derivative of it. We will
discuss the concepts of ”eigenvalues”, ”eigenvectors”, and the technique of spectral decom-
position in more detail in a later chapter.
a https://en.wikibooks.org/wiki/Engineering%20Analysis
The Cayley-Hamilton Theorem can also be used to find a solution for a matrix expo-
nential. For any eigenvalue of the system matrix A, λ, we can show that the two equations
are equivalent:
Once we solve for the coefficients of the equation, a, we can then plug those coefficients into
the following equation:
The reader is encouraged to perform the multiplications, and attempt to derive this
result.
With the freely available python library 'sympy' we can very easily calculate the
state-transition matrix automatically:
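For example, a minimal sympy sketch (assuming the system matrix A = [[0, 1], [−1, 0]], which produces the result shown below) might look like:

import sympy as sp
t = sp.symbols('t')
A = sp.Matrix([[0, 1], [-1, 0]])        # assumed example system matrix
eAt = sp.simplify((A * t).exp())        # matrix exponential e^(A t); simplify collapses complex exponentials
sp.pprint(eAt)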
⎡cos(t) sin(t)⎤
⎢ ⎥
⎣-sin(t) cos(t)⎦
Using the symbolic toolbox in MATLAB, we can write MATLAB code to auto-
matically generate the state-transition matrix for a given input matrix A. Here is an
example of MATLAB code that can perform this task:
Use this MATLAB function to find the state-transition matrix for the following matrices
(warning, calculation may take some time):
1. A1 = [2 0; 0 2]
2. A2 = [0 1; −1 0]
3. A3 = [2 1; 0 2]
Matrix 1 is a diagonal matrix, Matrix 2 has complex eigenvalues, and Matrix 3 is
Jordan canonical form. These three matrices should be representative of some of
the common forms of system matrices. The following code snippets are the input
commands into MATLAB to produce these matrices, and the output results:
Matrix A1
There are multiple methods in MATLAB to compute the state transition matrix, from a
scalar (time-invariant) matrix A. The following methods are all going to rely on the Sym-
bolic Toolbox to perform the equation manipulations. At the end of each code snippet,
the variable eAt contains the state-transition matrix of matrix A.
Direct Method
t = sym('t'); eAt = expm(A * t);
Laplace Transform Method
s = sym('s'); [n,n] = size(A); in = inv(s*eye(n) - A); eAt = ilaplace(in);
Spectral Decomposition
t = sym('t'); [n,n] = size(A); [V, e] = eig(A); W = inv(V);
eAt = sym(zeros(n));  % accumulate the sum of e^(lambda_i t) v_i w_i'
for I = 1:n, eAt = eAt + expm(e(I,I)*t)*V(:,I)*W(I,:); end
All three of these methods should produce the same answers. The student is encouraged
to verify this.
14 Time-Variant System Solutions
The state-space equations can be solved for time-variant systems, but the solution is signif-
icantly more complicated than the time-invariant case. Our time-variant state equation is
given as follows:
x′(t) = A(t)x(t) + B(t)u(t)
We can say that the general solution to the time-variant state equation is defined as:
[Time-Variant General Solution]
x(t) = ϕ(t, t0)x(t0) + ∫_{t0}^{t} ϕ(t, τ)B(τ)u(τ) dτ
Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q
The function ϕ is called the state-transition matrix, because it (like the matrix expo-
nential from the time-invariant case) controls the change for states in the state equation.
However, unlike the time-invariant case, we cannot define this as a simple exponential. In
fact, ϕ can't be defined in general, because it will actually be a different function for every
system. However, the state-transition matrix does follow some basic properties that we can
use to determine the state-transition matrix.
In a time-variant system, the general solution is obtained when the state-transition matrix
is determined. For that reason, the first thing (and the most important thing) that we need
to do here is find that matrix. We will discuss the solution to that matrix below.
Note:
The state transition matrix ϕ is a matrix function of two variables (we will say t and τ ).
Once the form of the matrix is solved, we will plug in the initial time, t0 in place of the
variable τ . Because of the nature of this matrix, and the properties that it must satisfy,
this matrix typically is composed of exponential or sinusoidal functions. The exact form
of the state-transition matrix is dependent on the system itself, and the form of the
system's differential equation. There is no single ”template solution” for this matrix.
The state transition matrix ϕ is not completely unknown, it must always satisfy the following
relationships:
∂ϕ(t, t0)/∂t = A(t)ϕ(t, t0)
ϕ(τ, τ) = I
ϕ(t, t0) = e^{A(t−t0)}
The reader can verify that this solution for a time-invariant system satisfies all the properties
listed above. However, in the time-variant case, there are many different functions that may
satisfy these requirements, and the solution is dependent on the structure of the system.
The state-transition matrix must be determined before analysis on the time-varying solution
can continue. We will discuss some of the methods for determining this matrix below.
As the most basic case, we will consider the case of a system with zero input. If the system
has no input, then the state equation is given as:
x′ (t) = A(t)x(t)
And we are interested in the response of this system in the time interval T = (a, b). The
first thing we want to do in this case is find a fundamental matrix of the above equation.
The fundamental matrix is related to the homogeneous equation
x′(t) = A(t)x(t)
The solutions to this equation form an n-dimensional vector space in the interval T = (a,
b). Any set of n linearly-independent solutions {x1 , x2 , ..., xn } to the equation above is
called a fundamental set of solutions.
Readers who have a background in Linear Algebraa may recognize that the fundamental
set is a basis set for the solution space. Any basis set that spans the entire solution
space is a valid fundamental set.
a https://en.wikibooks.org/wiki/Linear%20Algebra
A fundamental matrix X is formed by arranging the n linearly-independent solutions as its columns:
X = [x1 x2 · · · xn]
Also, any matrix that solves this equation can be a fundamental matrix if and only if the
determinant of the matrix is non-zero for all time t in the interval T. The determinant must
be non-zero, because we are going to use the inverse of the fundamental matrix to solve for
the state-transition matrix.
Once we have the fundamental matrix of a system, we can use it to find the state transition
matrix of the system:
ϕ(t, t0) = X(t)X⁻¹(t0)
The inverse of the fundamental matrix exists, because we specify in the definition above that
it must have a non-zero determinant, and therefore must be non-singular. The reader should
note that this is only one possible method for determining the state transition matrix, and
we will discuss other methods below.
There are other methods for finding the state transition matrix besides having to find the
fundamental matrix.
Method 1
If A(t) is triangular (upper or lower triangular), the state transition matrix can be deter-
mined by sequentially integrating the individual rows of the state equation.
Method 2
If for every τ and t, the state matrix commutes as follows:
A(t) [∫_τ^t A(ζ) dζ] = [∫_τ^t A(ζ) dζ] A(t)
then the state-transition matrix is given by:
ϕ(t, τ) = e^{∫_τ^t A(ζ) dζ}
The state transition matrix will commute as described above if any of the following con-
ditions are true:
Method 3
If the state matrix can be decomposed as the following sum, where the Mi are constant matrices that commute with one another and the fi(t) are scalar functions,
A(t) = Σ_{i=1}^{n} Mi fi(t)
then the state-transition matrix can be written as a product of matrix exponentials:
ϕ(t, τ) = Π_{i=1}^{n} e^{Mi ∫_τ^t fi(θ) dθ}
It will be left as an exercise for the reader to prove that if A(t) is time-invariant, the
equation in method 2 above reduces to the state-transition matrix e^{A(t−τ)}.
Use method 3, above, to compute the state-transition matrix for the system if the system
matrix A is given by:
A = [t 1; −1 t]
We can decompose this matrix as follows:
A = [1 0; 0 1] t + [0 1; −1 0]
Where f1 (t) = t, and f2 (t) = 1. Using the formula described above gives us:
ϕ(t, τ) = e^{M1 ∫_τ^t θ dθ} e^{M2 ∫_τ^t dθ}
e^{[0 t−τ; −(t−τ) 0]} = e^{[0 1; −1 0](t−τ)} = [cos(t−τ) sin(t−τ); −sin(t−τ) cos(t−τ)]
The final solution is given as:
ϕ(t, τ) = [e^{½(t²−τ²)} 0; 0 e^{½(t²−τ²)}] [cos(t−τ) sin(t−τ); −sin(t−τ) cos(t−τ)]
        = [e^{½(t²−τ²)} cos(t−τ)   e^{½(t²−τ²)} sin(t−τ);
           −e^{½(t²−τ²)} sin(t−τ)  e^{½(t²−τ²)} cos(t−τ)]
If the input to the system is not zero, it turns out that all the analysis that we performed
above still holds. We can still construct the fundamental matrix, and we can still represent
the system solution in terms of the state transition matrix ϕ.
We can show that the general solution to the state-space equations is actually the solution:
x(t) = ϕ(t, t0)x(t0) + ∫_{t0}^{t} ϕ(t, τ)B(τ)u(τ) dτ
15 Digital State-Space
For digital systems, we can write similar equations, using discrete data sets:
We can derive the digital version of this equation that we discussed above. We take the
Laplace transform of our equation:
Now, taking the inverse Laplace transform gives us our time-domain system, keeping in
mind that the inverse Laplace transform of the (sI - A) term is our state-transition matrix,
Φ:
x(t) = L⁻¹(X(s)) = Φ(t − t0)x(0) + ∫_{t0}^{t} Φ(t − τ)Bu(τ) dτ
Now, we apply a zero-order hold on our input, to make the system digital. Notice that we
set our start time t0 = kT, because we are only interested in the behavior of our system
during a single sample period:
x(t) = Φ(t, kT)x(kT) + [∫_{kT}^{t} Φ(t, τ)B dτ] u(kT)
We were able to remove u(kT) from the integral because it did not rely on τ . We now define
a new function, Γ, as follows:
Γ(t, t0) = ∫_{t0}^{t} Φ(t, τ)B dτ
Inserting this new expression into our equation, and setting t = (k + 1)T gives us:
Now Φ(T) and Γ(T) are constant matrices, and we can give them new names. The
d subscript denotes that they are digital versions of the coefficient matrices:
Ad = Φ((k + 1)T, kT )
Bd = Γ((k + 1)T, kT )
We can use these values in our state equation, converting to our bracket notation instead:
Continuous and discrete systems that perform similarly can be related
together through a set of relationships. It should come as no surprise that a discrete system
and a continuous system will have different characteristics and different coefficient matrices.
If we consider that a discrete system is the same as a continuous system, except that it is
sampled with a sampling time T, then the relationships below will hold. The process of
converting an analog system for use with digital hardware is called discretization. We've
given a basic introduction to discretization already, but we will discuss it in more detail
here.
1 https://en.wikipedia.org/wiki/Discretization
x(kT) = e^{AkT} x(0) + ∫_0^{kT} e^{A(kT−τ)} Bu(τ) dτ
x[k] = e^{AkT} x[0] + ∫_0^{kT} e^{A(kT−τ)} Bu(τ) dτ
Now, if we want to analyze the k+1 term, we can solve the equation again:
x[k + 1] = e^{A(k+1)T} x[0] + ∫_0^{(k+1)T} e^{A((k+1)T−τ)} Bu(τ) dτ
Separating out the variables, and breaking the integral into two parts gives us:
x[k + 1] = e^{AT} e^{AkT} x[0] + e^{AT} ∫_0^{kT} e^{A(kT−τ)} Bu(τ) dτ + ∫_{kT}^{(k+1)T} e^{A(kT+T−τ)} Bu(τ) dτ
Comparing this equation to our regular solution gives us a set of relationships for converting
the continuous-time system into a discrete-time system. Here, we will use ”d” subscripts to
denote the system matrices of a discrete system, and we will use a ”c” subscript to denote
the system matrices of a continuous system.
Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q
Ad = e^{Ac T}
Bd = (∫_0^T e^{Ac τ} dτ) Bc
Cd = Cc
Dd = Dc
This operation can be performed using this MATLAB command: c2d
If the Ac matrix is nonsingular, then we can find its inverse and instead define Bd as:
Bd = Ac⁻¹(Ad − I)Bc
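As a sketch of these relationships in Python (the matrices and sampling time below are arbitrary example values, not from the text), scipy's cont2discrete performs the zero-order-hold conversion, and the formulas above can be checked directly:

import numpy as np
from scipy.linalg import expm
from scipy.signal import cont2discrete
Ac = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed continuous-time example
Bc = np.array([[0.0], [1.0]])
Cc = np.array([[1.0, 0.0]])
Dc = np.array([[0.0]])
T = 0.1
Ad, Bd, Cd, Dd, _ = cont2discrete((Ac, Bc, Cc, Dc), T, method='zoh')
# the same Ad and Bd from the formulas above (Ac is nonsingular here)
Ad_formula = expm(Ac * T)
Bd_formula = np.linalg.inv(Ac) @ (Ad_formula - np.eye(2)) @ Bc
print(np.allclose(Ad, Ad_formula), np.allclose(Bd, Bd_formula))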
The differences in the discrete and continuous matrices are due to the fact that the un-
derlying equations that describe our systems are different. Continuous-time systems are
represented by linear differential equations, while the digital systems are described by dif-
ference equations. High order terms in a difference equation are delayed copies of the signals,
while high order terms in the differential equations are derivatives of the analog signal.
If we have a complicated analog system, and we would like to implement that system in a
digital computer, we can use the above transformations to make our matrices conform to
the new paradigm.
15.3.2 Notation
Because the coefficient matrices for the discrete systems are computed differently from the
continuous-time coefficient matrices, and because the matrices technically represent different
things, it is not uncommon in the literature to denote these matrices with different variables.
For instance, the following variables are used in place of A and B frequently:
Ω = Ad
R = Bd
These substitutions would give us a system defined by the ordered quadruple (Ω, R, C,
D) for representing our equations.
As a matter of notational convenience, we will use the letters A and B to represent these
matrices throughout the rest of this book.
Now, let's say that we have a 3rd order difference equation, that describes a discrete-time
system:
Solving for x[n]
We can find a general time-invariant solution for the discrete time difference equations. Let
us start working up a pattern. We know the discrete state equation:
With a little algebraic trickery, we can reduce this pattern to a single equation:
[General State Equation Solution]
x[n] = A^n x[n0] + Σ_{m=0}^{n−1} A^{n−1−m} Bu[m]
y[n] = CA^n x[n0] + Σ_{m=0}^{n−1} CA^{n−1−m} Bu[m] + Du[n]
If the system is time-variant, we have a general solution that is similar to the continuous-
time case:
x[n] = ϕ[n, n0]x[n0] + Σ_{m=n0}^{n−1} ϕ[n, m + 1]B[m]u[m]
y[n] = C[n]ϕ[n, n0]x[n0] + C[n] Σ_{m=n0}^{n−1} ϕ[n, m + 1]B[m]u[m] + D[n]u[n]
Where φ, the state transition matrix, is defined in a similar manner to the state-
transition matrix in the continuous case. However, some of the properties in the discrete
time are different. For instance, the inverse of the state-transition matrix does not need to
exist, and in many systems it does not exist.
The discrete time state transition matrix is the unique solution of the equation:
ϕ[k + 1, k0 ] = A[k]ϕ[k, k0 ]
ϕ[k0 , k0 ] = I
From this definition, an obvious way to calculate this state transition matrix presents itself:
ϕ[k, k0] = A[k − 1]A[k − 2] · · · A[k0]
Or, equivalently,
130
MATLAB Calculations
∏0
k−k
ϕ[k, k0 ] = A[k − m]
m=1
MATLAB is a computer program, and therefore calculates all systems using digital methods.
The MATLAB function lsim is used to simulate a continuous system with a specified input.
This function works by calling the c2d, which converts a system (A, B, C, D) into the
equivalent discrete system. Once the system model is discretized, the function passes control
to the dlsim function, which is used to simulate discrete-time systems with the specified
input.
Because of this, simulation programs like MATLAB are subjected to round-off errors asso-
ciated with the discretization process.
16 Eigenvalues and Eigenvectors
The eigenvalues and eigenvectors of the system matrix play a key role in determining the
response of the system. It is important to note that only square matrices have eigenvalues
and eigenvectors associated with them. Non-square matrices cannot be analyzed using the
methods below.
The word ”eigen” comes from German and means ”own”, while it is the Dutch word for
”characteristic”, and so this chapter could also be called ”Characteristic values and char-
acteristic vectors”. The terms ”Eigenvalues” and ”Eigenvectors” are most commonly used.
Eigenvalues and Eigenvectors have a number of properties that make them valuable tools in
analysis, and they also have a number of valuable relationships with the matrix from which
they are derived. Computing the eigenvalues and the eigenvectors of the system matrix is
one of the most important things that should be done when beginning to analyze a system
matrix, second only to calculating the matrix exponential of the system matrix.
The eigenvalues and eigenvectors of the system determine the relationship between the
individual system state variables (the members of the x vector), the response of the system
to inputs, and the stability of the system. Also, the eigenvalues and eigenvectors can be used
to calculate the matrix exponential of the system matrix (through spectral decomposition).
The remainder of this chapter will discuss eigenvalues and eigenvectors, and the ways that
they affect their respective systems.
Av = λv
Where λ are scalar values called the eigenvalues, and v are the corresponding eigenvec-
tors. To solve for the eigenvalues of a matrix, we can take the following determinant:
|A − λI| = 0
To solve for the eigenvectors, we can then add an additional term, and solve for v:
(A − λI)v = 0
Another value worth finding are the left eigenvectors of a system, defined as w in the
modified characteristic equation:
[Left-Eigenvector Equation]
wA = λw
For more information about eigenvalues, eigenvectors, and left eigenvectors, read the ap-
propriate sections in the following books:
• Linear Algebra1
• Engineering Analysis2
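A small numerical sketch (with an arbitrary example matrix, not taken from the text) showing how the eigenvalues and the right and left eigenvectors can be computed, for instance with scipy:

import numpy as np
from scipy.linalg import eig
A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # assumed example system matrix
lam, W, V = eig(A, left=True, right=True)
print(lam)     # eigenvalues (here -1 and -2)
print(V)       # columns are right eigenvectors: A v = lam v
print(W)       # columns are left eigenvectors:  w' A = lam w'
for i in range(2):
    print(np.allclose(A @ V[:, i], lam[i] * V[:, i]))
    print(np.allclose(W[:, i].conj() @ A, lam[i] * W[:, i].conj()))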
16.2.1 Diagonalization
Note:
The transition matrix T should not be confused with the sampling time of a discrete
system. If needed, we will use subscripts to differentiate between the two.
If the matrix A has a complete set of distinct eigenvalues, the matrix can be diagonalized.
A diagonal matrix is a matrix that only has entries on the diagonal, and all the rest of the
entries in the matrix are zero. We can define a transformation matrix, T, that satisfies
the diagonalization transformation:
A = T DT −1
eAt = T eDt T −1
The right-hand side of the equation may look more complicated, but because D is a diagonal
matrix here (not to be confused with the feed-forward matrix from the output equation),
the calculations are much easier.
We can define the transition matrix, and the inverse transition matrix in terms of the
eigenvectors and the left eigenvectors:
1 https://en.wikibooks.org/wiki/Linear%20Algebra
2 https://en.wikibooks.org/wiki/Engineering%20Analysis
T = [v1 v2 v3 · · · vn]
T⁻¹ = [w′1; w′2; w′3; ⋮; w′n]
a https://en.wikibooks.org/wiki/Engineering%20Analysis%2FSpectral%20Decomposition
A matrix exponential can be decomposed into a sum of the eigenvectors, eigenvalues, and
left eigenvectors, as follows:
e^{At} = Σ_{i=1}^{n} e^{λi t} vi wi′
Notice that this equation only holds in this form if the matrix A has a complete set of n
distinct eigenvalues. Since w′i is a row vector, and x(0) is a column vector of the initial
system states, we can combine those two into a scalar coefficient αi = w′i x(t0):
e^{At} x(t0) = Σ_{i=1}^{n} αi e^{λi t} vi
Since the state transition matrix determines how the system responds to an input, we can
see that the system eigenvalues and eigenvectors are a key part of the system response. Let
us plug this decomposition into the general solution to the state equation:
[State Equation Spectral Decomposition]
x(t) = Σ_{i=1}^{n} αi e^{λi t} vi + Σ_{i=1}^{n} ∫_0^t e^{λi(t−τ)} vi w′i Bu(τ) dτ
As we can see from the above equation, the individual elements of the state vector
x(t) cannot take arbitrary values, but they are instead related by weighted sums of multiples
of the system's right-eigenvectors.
16.3.2 Decoupling
For people who are familiar with linear algebra, the left-eigenvector of the matrix A
must be in the null space of the matrix B to decouple the system.
If a system can be designed such that the following relationship holds true:
wi′ B = 0
then the system response from that particular eigenvalue will not be affected by the system
input u, and we say that the system has been decoupled. Such a thing is difficult to do in
practice.
With every matrix there is associated a particular number called the condition number of
that matrix. The condition number tells a number of things about a matrix, and it is worth
calculating. The condition number, k, is defined as:
[Condition Number]
ki = ∥wi∥ ∥vi∥ / |w′i vi|
Systems with smaller condition numbers are better, for a number of reasons:
1. Large condition numbers lead to a large transient response of the system
2. Large condition numbers make the system eigenvalues more sensitive to changes in
the system.
We will discuss the issue of eigenvalue sensitivity more in a later section.
16.3.4 Stability
We will talk about stability at length in later chapters, but this is a good time to point out a
simple fact concerning the eigenvalues of the system. Notice that if the eigenvalues of the
system matrix A are positive, or (if they are complex) that they have positive real parts, that
the system state (and therefore the system output, scaled by the C matrix) will approach
infinity as time t approaches infinity. In essence, if the eigenvalues are positive, the system
will not satisfy the condition of BIBO stability, and will therefore become unstable.
Another factor that is worth mentioning is that a manufactured system never exactly
matches the system model, and there will always be inaccuracies in the specifications of the
component parts used, within a certain tolerance. As such, the system matrix will be
slightly different from the mathematical model of the system (although good systems will
not be severely different), and therefore the eigenvalues and eigenvectors of the system will
not be the same values as those derived from the model. These facts give rise to several
results:
1. Systems with high condition numbers may have eigenvalues that differ by a large
amount from those derived from the mathematical model. This means that the system
response of the physical system may be very different from the intended response of
the model.
2. Systems with high condition numbers may become unstable simply as a result of in-
accuracies in the component parts used in the manufacturing process.
For those reasons, the system eigenvalues and the condition number of the system matrix
are highly important variables to consider when analyzing and designing a system. We will
discuss the topic of stability in more detail in later chapters.
The decomposition above only works if the matrix A has a full set of n distinct eigenvalues
(and corresponding eigenvectors). If A does not have n distinct eigenvectors, then a set
of generalized eigenvectors need to be determined. The generalized eigenvectors will
produce a similar matrix that is in Jordan canonical form, not the diagonal form we
were using earlier.
(A − λI)vn+1 = vn
if d is the number of times that a given eigenvalue is repeated, and p is the number of
unique eigenvectors derived from those eigenvalues, then there will be q = d - p generalized
eigenvectors. Generalized eigenvectors are developed by plugging in the regular eigenvectors
into the equation above (vn ). Some regular eigenvectors might not produce any non-trivial
generalized eigenvectors. Generalized eigenvectors may also be plugged into the equation
above to produce additional generalized eigenvectors. It is important to note that the
generalized eigenvectors form an ordered series, and they must be kept in order during
analysis or the results will not be correct.
a https://en.wikibooks.org/wiki/Engineering%20Analysis%2FMatrix%20Forms
If a matrix has a complete set of distinct eigenvectors, the transition matrix T can be
defined as the matrix of those eigenvectors, and the resultant transformed matrix will be
a diagonal matrix. However, if the eigenvectors are not unique, and there are a number
of generalized eigenvectors associated with the matrix, the transition matrix T will consist
of the ordered set of the regular eigenvectors and generalized eigenvectors. The regular
eigenvectors that did not produce any generalized eigenvectors (if any) should be first in
the order, followed by the eigenvectors that did produce generalized eigenvectors, and the
generalized eigenvectors that they produced (in appropriate sequence).
Once the T matrix has been produced, the matrix can be transformed by it and its inverse:
A = T JT⁻¹
The J matrix will be a Jordan block matrix. The format of the Jordan block matrix will
be as follows:
J = [D 0 · · · 0; 0 J1 · · · 0; ⋮ ⋮ ⋱ ⋮; 0 0 · · · Jn]
Where D is the diagonal block produced by the regular eigenvectors that are not associated
with generalized eigenvectors (if any). The Jn blocks are standard Jordan blocks with a size
corresponding to the number of eigenvectors/generalized eigenvectors in each sequence. In
each Jn block, the eigenvalue associated with the regular eigenvector of the sequence is on
the main diagonal, and there are 1's on the super-diagonal.
x̄ = P x
Where:
Ā = P AP −1
B̄ = P B
C̄ = CP −1
D̄ = D
We call the matrix P the equivalence transformation between the two sets of equations.
It is important to note that the eigenvalues of the matrix A (which are of primary impor-
tance to the system) do not change under the equivalence transformation. The eigenvectors
of A, and the eigenvectors of Ā are related by the matrix P.
If the A matrix is time-invariant, we can construct the matrix V from the eigenvectors of
A. The V matrix can be used to transform the A matrix to a diagonal matrix. Our new
system becomes:
Since our system matrix is now diagonal (or Jordan canonical), the calculation of the state-
transition matrix is simplified:
e^{V⁻¹AV} = Λ
17 MIMO Systems
Systems with more than one input and/or more than one output are known as Multi-Input
Multi-Output systems, or they are frequently known by the abbreviation MIMO. This
is in contrast to systems that have only a single input and a single output (SISO), like we
have been discussing previously.
See the Formatting Sectiona in the introduction if the notation in this page is confusing.
Let's say that we have two outputs, y1 and y2 , and two inputs, u1 and u2 . These are
related in our system through the following system of differential equations:
y1′′ + a1 y1′ + a0 (y1 + y2 ) = u1 (t)
y2′ + a2 (y2 − y1 ) = u2 (t)
now, we can assign our state variables as such, and produce our first-order differential
equations:
x1 = y1
x4 = y2
x′1 = y1′ = x2
x′2 = −a1 x2 − a0 (x1 + x4 ) + u1 (t)
x′4 = −a2 (x4 − x1 ) + u2 (t)
And finally we can assemble our state space equations:
x′ = [0 1 0 0; −a0 −a1 0 −a0; 0 0 0 1; a2 0 0 −a2] x + [0 0; 1 0; 0 0; 0 1] [u1; u2]
[y1; y2] = [1 0 0 0; 0 0 0 1] x(t)
If the system is LTI and Lumped, we can take the Laplace Transform of the state-space
equations, as follows:
Where X(0) is the initial conditions of the system state vector in the time domain. If
the system is relaxed, we can ignore this term, but for completeness we will continue the
derivation with it.
We can separate out the variables in the state equation as follows:
And then we can multiply both sides by the inverse of [sI - A] to give us our state equation:
Now, if we plug in this value for X(s) into our output equation, above, we get a more
complicated equation:
Now, if the system is relaxed, and therefore X(0) is 0, the first term of this equation becomes
0. In this case, we can factor out a U(s) from the remaining two terms:
We can make the following substitution to obtain the Transfer Function Matrix, or more
simply, the Transfer Matrix, H(s):
[Transfer Matrix]
H(s) = C(sI − A)⁻¹B + D
And rewrite our output equation in terms of the transfer matrix as follows:
[Transfer Matrix Description]
Y(s) = H(s)U(s)
If Y(s) and U(s) are 1 × 1 (a SISO system), then this reduces to the external description
we derived earlier, Y(s) = H(s)X(s). Since the inputs and outputs are the same in both
descriptions, the transfer matrix must equal the scalar transfer function: these are simply
two different ways to describe the same exact equation, the same exact system.
17.3.1 Dimensions
If our system has q inputs, and r outputs, our transfer function matrix will be an r ×
q matrix.
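As an illustrative sketch (using a small, assumed two-input, two-output example rather than the larger system above), the transfer matrix can be formed symbolically from C(sI − A)⁻¹B + D:

import sympy as sp
s = sp.symbols('s')
A = sp.Matrix([[0, 1], [-2, -3]])   # assumed example matrices
B = sp.eye(2)
C = sp.eye(2)
D = sp.zeros(2, 2)
H = sp.simplify(C * (s * sp.eye(2) - A).inv() * B + D)   # r x q matrix of transfer functions
sp.pprint(H)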
For SISO systems, the Transfer Function matrix will reduce to the transfer function as
would be obtained by taking the Laplace transform of the system response equation.
For MIMO systems, with n inputs and m outputs, the transfer function matrix will contain
n × m transfer functions, where each entry is the transfer function relationship between
each individual input, and each individual output.
Through this derivation of the transfer function matrix, we have shown the equivalency
between the Laplace methods and the State-Space method for representing systems. Also,
we have shown how the Laplace method can be generalized to account for MIMO systems.
Through the rest of this explanation, we will use the Laplace and State Space methods
interchangeably, opting to use one or the other where appropriate.
In the discrete case, we end up with similar equations, except that the X(0) initial conditions
term is preceded by an additional z variable:
If X(0) is zero, that term drops out, and we can derive a Transfer Function Matrix in the
Z domain as well:
Y(z) = H(z)U(z)
For digital systems, it is frequently a good idea to write the pulse response equation,
from the state-space equations:
x[k + 1] = Ax[k] + Bu[k]
y[k] = Cx[k] + Du[k]
We can combine these two equations into a single difference equation using the coef-
ficient matrices A, B, C, and D. To do this, we find the ratio of the system output
vector, Y[n], to the system input vector, U[n]:
Y(z)/U(z) = H(z) = C(zI − A)⁻¹B + D
So the system response to a digital system can be derived from the pulse response
equation by:
Y (z) = H(z)U (z)
And we can set U(z) to a step input through the following Z transform:
u(t) ⇔ U (z) = z
z−1
Plugging this into our pulse response we get our step response:
( )
Y (z) = (C(zI − A)−1 B + D) z
z−1
( )
z
Y(z) = H(z) z−1
18 System Realization
18.1 Realization
Note:
Discrete systems G(z) are also realizable if these conditions are satisfied.
• A transfer function G(s) is realizable if and only if the system can be described by a
finite-dimensional state-space equation.
• (A B C D), an ordered set of the four system matrices, is called a realization of the
system G(s). If the system can be expressed as such an ordered quadruple, the system is
realizable.
• A system G is realizable if and only if the transfer matrix G(s) is a proper rational matrix.
In other words, every entry in the matrix G(s) (only 1 for SISO systems) is a rational
polynomial, and the degree of the denominator is greater than or equal to the degree of
the numerator.
We've already covered the method for realizing a SISO system, the remainder of this chapter
will talk about the general method of realizing a MIMO system.
We can decompose a transfer matrix G(s) into a strictly proper transfer matrix:
G(s) = G(∞) + Gsp(s)
Where Gsp (s) is a strictly proper transfer matrix. Also, we can use this to find the value of
our D matrix:
D = G(∞)
We can define d(s) to be the lowest common denominator polynomial of all the entries in
G(s):
Remember, q is the number of inputs, p is the number of internal system states, and r is
the number of outputs.
Gsp(s) = (1/d(s)) N(s)
Where
A = [ −a1 Ip  −a2 Ip  · · ·  −a_{r−1} Ip  −ar Ip
       Ip      0      · · ·   0            0
       0       Ip     · · ·   0            0
       ⋮       ⋮       ⋱      ⋮            ⋮
       0       0      · · ·   Ip           0 ]
B = [Ip; 0; 0; ⋮; 0]
C = [Ip 0 0 · · · 0]
19 Gain
Gain is a proportional value that shows the relationship between the magnitude of the input
to the magnitude of the output signal at steady state. Many systems contain a method by
which the gain can be altered, providing more or less ”power” to the system. However,
increasing gain or decreasing gain beyond a particular safety zone can cause the system to
become unstable.
Consider the given second-order system:
T(s) = 1 / (s² + 2s + 1)
We can include an arbitrary gain term, K in this system that will represent an amplification,
or a power increase:
T(s) = K / (s² + 2s + 1)
The gain term can also be inserted into other places in the system, and in those cases the
equations will be slightly different.
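For instance, a short Python sketch (an assumed example, not from the text) showing how the gain K scales the response of this particular open-loop system:

from scipy.signal import TransferFunction, step
for K in (1.0, 5.0, 20.0):
    t, y = step(TransferFunction([K], [1.0, 2.0, 1.0]))   # K / (s^2 + 2s + 1)
    print(K, y[-1])   # the steady-state output scales with K for this open-loop system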
Figure 31
Here are some good examples of arbitrary gain values being used in physical systems:
Volume Knob
On your stereo there is a volume knob that controls the gain of your amplifier circuit.
Higher levels of volume (turning the volume ”up”) corresponds to higher amplification
of the sound signal.
Gas Pedal
The gas pedal in your car is an example of gain. Pressing harder on the gas pedal
causes the engine to receive more gas, and causes the engine to output higher RPMs.
Brightness Buttons
Most computer monitors come with brightness buttons that control how bright the
screen image is. More brightness causes more power to be output to the screen.
As the gain to a system increases, generally the rise-time decreases, the percent overshoot
increases, and the settling time increases. However, these relationships are not always the
same. A critically damped system, for example, may decrease in rise time while not
experiencing any effects of percent overshoot or settling time.
If the gain increases to a high enough extent, some systems can become unstable. We will
examine this effect in the chapter on Root Locus. Increasing the gain does, however,
decrease the steady-state error.
Systems that are stable for some gain values, and unstable for other values are called
conditionally stable systems. The stability is conditional upon the value of the gain, and
often the threshold where the system becomes unstable is important to find.
20 Block Diagrams
When designing or analyzing a system, often it is useful to model the system graphically.
Block Diagrams are a useful and simple method for analyzing a system graphically. A
”block” looks on paper exactly like what it means:
When two or more systems are in series, they can be combined into a single representative
system, with a transfer function that is the product of the individual systems.
Figure 32
If we have two systems, f(t) and g(t), we can put them in series with one another so that
the output of system f(t) is the input to system g(t). Now, we can analyze them depending
on whether we are using our classical or modern methods.
If we define the output of the first system as h(t), we can define h(t) as:

h(t) = x(t) ∗ f(t)

Where x(t) is the input to the series combination. Now, we can define the system output y(t)
in terms of h(t) as:

y(t) = h(t) ∗ g(t)
Figure 33
If two or more systems are in series with one another, the total transfer function of the
series is the product of all the individual system transfer functions.
Figure 34
But, in the frequency domain we know that convolution becomes multiplication, so we can
re-write this as:

Y(s) = X(s)F(s)G(s)
Figure 35
If we have two systems in series (say system F and system G), where the output of F is the
input to system G, we can write out the state-space equations for each individual system.
System 1:
x′F = AF xF + BF u

yF = CF xF + DF u

System 2:

x′G = AG xG + BG yF

yG = CG xG + DG yF

And we can substitute these equations together to form the complete response of the combined
system H, which has input u and output yG:

[Series state equation]

[ x′G ]   [ AG   BG CF ] [ xG ]   [ BG DF ]
[ x′F ] = [ 0    AF    ] [ xF ] + [ BF    ] u

[Series output equation]

[ yG ]   [ CG   DG CF ] [ xG ]   [ DG DF ]
[ yF ] = [ 0    CF    ] [ xF ] + [ DF    ] u
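As a quick check of these block formulas, the following is a minimal NumPy sketch (an
illustration, not part of the original text) that assembles the combined matrices for two
arbitrary subsystems F and G connected in series, with the state vector ordered as [xG; xF]
and the output taken as yG:

import numpy as np

def series_connection(AF, BF, CF, DF, AG, BG, CG, DG):
    """System F feeds system G; returns (A, B, C, D) for input u and output yG."""
    nF, nG = AF.shape[0], AG.shape[0]
    A = np.block([[AG, BG @ CF],
                  [np.zeros((nF, nG)), AF]])   # [[AG, BG*CF], [0, AF]]
    B = np.vstack([BG @ DF, BF])               # [BG*DF; BF]
    C = np.hstack([CG, DG @ CF])               # [CG, DG*CF]
    D = DG @ DF
    return A, B, C, D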
Figure 36
Blocks may not be placed in parallel without the use of an adder. Blocks connected by an
adder as shown above have a total transfer function of:

Y(s) = X(s)[F(s) + G(s)]

Since the Laplace transform is linear, we can easily transfer this to the time domain by
converting the multiplication to convolution:

y(t) = x(t) ∗ f(t) + x(t) ∗ g(t)
Figure 37
The state-space equations, with non-zero A, B, C, and D matrices conceptually model the
following system:
Figure 38

In this image, the strange-looking block in the center is either an integrator or an ideal
delay, and can be represented in the transfer domain as:

1/s or 1/z
Figure 39
The state space model of the above system, if A, B, C, and D are transfer functions A(s),
B(s), C(s) and D(s) of the individual subsystems, and if U(s) and Y(s) represent a single
input and output, can be written as follows:
Y(s)/U(s) = B(s) (1/(s − A(s))) C(s) + D(s)
We will explain how we got this result, and how we deal with feedforward and feedback
loop structures in the next chapter.
Some systems may have dedicated summation or multiplication devices that automatically
add or multiply the transfer functions of multiple systems together.
Block diagrams can be systematically simplified. Note that this table is from Schaum's
Outline: Feedback and Control Systems by DiStefano et al.

Transformation                                              Equation              Figures
2  Combining Blocks in Parallel                             Y = P1·X ± P2·X       Figure 43
4  Eliminating a Feedback Loop                              Y = P1·(X ∓ P2·Y)     Figure 46
6  Rearranging Summing Junctions                            Z = W ± X ± Y         Figures 48, 49, 50
7  Moving a Summing Junction in front of a Block            Z = P·X ± Y           Figures 51, 52
8  Moving a Summing Junction beyond a Block                 Z = P·(X ± Y)         Figures 53, 54
9  Moving a Takeoff Point in front of a Block               Y = P·X               Figures 55, 56
10 Moving a Takeoff Point beyond a Block                    Y = P·X               Figures 57, 58
11 Moving a Takeoff Point in front of a Summing Junction    Z = W ± X             Figures 59, 60
12 Moving a Takeoff Point beyond a Summing Junction         Z = X ± Y             Figures 61, 62
1 http://wikis.controltheorypro.com/index.php?title=Block_Diagram_Quick_Reference
21 Feedback Loops
21.1 Feedback
A feedback loop is a common and powerful tool when designing a control system. Feed-
back loops take the system output into consideration, which enables the system to adjust
its performance to meet a desired output response.
When talking about control systems it is important to keep in mind that engineers typically
are given existing systems such as actuators, sensors1 , motors, and other devices with set
parameters, and are asked to adjust the performance of those systems. In many cases, it may
not be possible to open the system (the ”plant”) and adjust it from the inside: modifications
need to be made external to the system to force the system response to act as desired. This
is performed by adding controllers, compensators, and feedback structures to the system.
Figure 63 framed
w:Feedback2 This is a basic feedback structure. Here, we are using the output value of
the system to help us prepare the next output value. In this way, we can create systems
that correct errors. Here we see a feedback loop with a value of one. We call this a unity
feedback.
Here is a list of some relevant vocabulary, that will be used in the following
sections:
Plant
1 https://en.wikibooks.org/wiki/Sensory%20Systems%2FVisual%20Signal%20Processing
2 https://en.wikipedia.org/wiki/Feedback
The term ”Plant” is a carry-over term from chemical engineering to refer to the main system
process. The plant is the preexisting system that does not (without the aid of a controller
or a compensator) meet the given specifications. Plants are usually given ”as is”, and are
not changeable. In the picture above, the plant is denoted with a P.
Controller
A controller, or a ”compensator” is an additional system that is added to the plant to
control the operation of the plant. The system can have multiple compensators, and they
can appear anywhere in the system: Before the pick-off node, after the summer, before or
after the plant, and in the feedback loop. In the picture above, our compensator is denoted
with a C.
Summer
A summer is a symbol on a system diagram (denoted above with parentheses) that con-
ceptually adds two or more input signals, and produces a single sum output signal.
Pick-off node
A pickoff node is simply a fancy term for a split in a wire.
Forward Path
The forward path in the feedback loop is the path after the summer, that travels through
the plant and towards the system output.
Reverse Path
The reverse path is the path after the pick-off node, that loops back to the beginning of
the system. This is also known as the ”feedback path”.
Unity feedback
When the multiplicative value of the feedback path is 1.
It turns out that negative feedback is almost always the most useful type of feedback. When
we subtract the value of the output from the value of the input (our desired value), we get
a value called the error signal. The error signal shows us how far off our output is from
our desired input.
Positive feedback has the property that signals tend to reinforce themselves, and grow larger.
In a positive feedback system, noise from the system is added back to the input, and that in
turn produces more noise. As an example of a positive feedback system, consider an audio
amplification system with a speaker and a microphone. Placing the microphone near the
speaker creates a positive feedback loop, and the result is a sound that grows louder and
louder. Because the majority of noise in an electrical system is high-frequency, the sound
output of the system becomes high-pitched.
Figure 64

Now, we will derive the I/O relationship and the state-space equations from this diagram.
If we examine the inner-most feedback loop, we can see that the forward path has an
integrator system, 1/s, and the feedback loop has the matrix value A. If we take the
transfer function only of this loop, we get:

Tinner(s) = (1/s) / (1 − (1/s)A) = 1/(s − A)
We can see that the upper path (D) and the lower path Tlower are added together to
produce the final result:

Ttotal(s) = B (1/(s − A)) C + D
Now, for an alternate method, we can assume that x′ is the value of the inner feedback
loop, right before the integrator. This makes sense, since the integral of x′ should be
x (which we can see from the diagram it is). Solving for x′, with an input of u, we get:

x′ = Ax + Bu

This is because the value coming from the feedback branch is equal to the value x times
the feedback loop matrix A, and the value coming from the left of the summer is the
input u times the matrix B.
If we keep things in terms of x and u, we can see that the system output is the sum of
u times the feed-forward value D, and the value of x times the value C:
y = Cx + Du
These last two equations are precisely the state-space equations of our system.

We can solve for the output of the system by using a series of equations:

E(s) = X(s) − Y(s)

Y(s) = G(s)E(s)

Y(s) = (Gp(s) / (1 + Gp(s))) X(s)
The reader is encouraged to use the above equations to derive the result by themselves.
The function E(s) is known as the error signal. The error signal is the difference between
the system input (X(s)) and the system output (Y(s)). Notice that the error signal is now
the direct input to the system G(s). X(s) is now called the reference input. The purpose
of the negative feedback loop is to make the system output equal to the system input, by
identifying large differences between X(s) and Y(s) and correcting for them.
There is an elevator in a certain building with 5 floors. Pressing button ”1” will take
you to the first floor, and pressing button ”5” will take you to the fifth floor, etc. For
reasons of simplicity, only one button can be pressed at a time.
Pressing a particular button is the reference input of the system. Pressing ”1” gives
the system a reference input of 1, pressing ”2” gives the system a reference input of 2,
etc. The elevator system then, tries to make the output (the physical floor location of
the elevator) match the reference input (the button pressed in the elevator). The error
signal, e(t), represents the difference between the reference input x(t), and the physical
location of the elevator at time t, y(t).
Let's say that the elevator is on the first floor, and the button ”5” is pressed at time t0 .
The reference input then becomes a step function:
x(t) = 5u(t − t0 )
Where we are measuring in units of ”floors”. At time t0, the error signal is:

e(t0) = x(t0) − y(t0) = 5 − 1 = 4 floors
In the state-space representation, the plant is typically defined by the state-space equations:

x′ = Ax + Bu

y = Cx + Du
The plant is considered to be pre-existing, and the matrices A, B, C, and D are considered
to be internal to the plant (and therefore unchangeable). Also, in a typical system, the state
variables are either fictional (in the sense of dummy-variables), or are not measurable. For
these reasons, we need to add external components, such as a gain element, or a feedback
element to the plant to enhance performance.
Consider the addition of a gain matrix K installed at the input of the plant, and a negative
feedback element F that is multiplied by the system output y, and is added to the input
signal of the plant. There are two cases:
1. The feedback element F is subtracted from the input before multiplication of the K
gain matrix.
2. The feedback element F is subtracted from the input after multiplication of the K
gain matrix.
In case 1, the feedback element F is subtracted from the input before the multiplicative gain
is applied to the input. If v is the input to the entire system, then we can define u as:

u = K(v − Fy)
In case 2, the feedback element F is subtracted from the input after the multiplicative gain
is applied to the input. If v is the input to the entire system, then we can define u as:

u = Kv − Fy
Figure 65
Let's say that we have the generalized system shown above. The top part, Gp(s), represents
all the systems and all the controllers on the forward path. The bottom part, Gb(s), repre-
sents all the feedback processing elements of the system. The letter ”K” at the beginning of
the system is called the Gain. We will talk about the gain more in later chapters. We can
define the Closed-Loop Transfer Function as follows:

[Closed-Loop Transfer Function]

Hcl(s) = KGp(s) / (1 + Gp(s)Gb(s))
If we ”open” the loop, and break the feedback node, we can define the Open-Loop Transfer
Function, as:

[Open-Loop Transfer Function]

Hol(s) = Gp(s)Gb(s)

We can redefine the closed-loop transfer function in terms of this open-loop transfer function:

Hcl(s) = KGp(s) / (1 + Hol(s))
These results are important, and they will be used without further explanation or derivation
throughout the rest of the book.
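Because the closed-loop numerator and denominator are just products and sums of the Gp
and Gb polynomials, the closed-loop transfer function is easy to compute numerically. Below
is a minimal sketch (not part of the original text) that does this with NumPy polynomial
arithmetic; the particular choice Gp(s) = 1/(s² + 2s + 1), unity feedback Gb(s) = 1, and
K = 10 is an arbitrary example:

import numpy as np

K = 10.0
Np, Dp = np.array([1.0]), np.array([1.0, 2.0, 1.0])   # Gp = Np/Dp
Nb, Db = np.array([1.0]), np.array([1.0])             # Gb = Nb/Db (unity feedback)

# Hcl = K*Np*Db / (Dp*Db + Np*Nb)
num = K * np.polymul(Np, Db)
den = np.polyadd(np.polymul(Dp, Db), np.polymul(Np, Nb))
print("numerator:", num)                 # [10.]
print("denominator:", den)               # [1. 2. 2.]
print("closed-loop poles:", np.roots(den))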
There are a number of different places where we could place an additional controller.
Figure 66
Each location has certain benefits and problems, and hopefully we will get a chance to talk
about all of them.
The general expression of the transfer function of a second order system is given as:

ωn² / (s² + 2ζωn s + ωn²)
where ζ and ωn are damping ratio and natural frequency of the system respectively.
The damping ratio is denoted by the symbol ζ. The damping ratio gives us an idea
about the nature of the transient response, detailing the amount of overshoot and oscillation
that the system will undergo. This is completely independent of time scaling.
If :
• ζ = zero, the system is undamped;
• ζ < 1, the system is underdamped;
• ζ = 1, the system is critically damped;
• ζ > 1, the system is overdamped.
ζ is used in conjunction with the natural frequency to determine system properties. To find
the value of ζ, you must first find the natural response.
The natural frequency, denoted by ωn, is defined as the frequency with which the system
would oscillate if it were not damped, and we define the damping ratio as ζ = σ/ωn, where σ
is the exponential decay rate of the response.
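As a small illustration (not part of the original text), the damping classification above can
be computed directly from the denominator coefficients; the helper below assumes the
denominator is written as a·s² + b·s + c:

import numpy as np

def classify(den):
    a, b, c = den                      # a*s^2 + b*s + c
    wn = np.sqrt(c / a)
    zeta = b / (2.0 * a * wn)
    if zeta == 0:
        kind = "undamped"
    elif zeta < 1:
        kind = "underdamped"
    elif zeta == 1:
        kind = "critically damped"
    else:
        kind = "overdamped"
    return wn, zeta, kind

print(classify([1.0, 2.0, 1.0]))       # (1.0, 1.0, 'critically damped')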
22 Signal Flow Diagrams
w:Signal-flow graph1 Signal-flow graphs are another method for visually representing
a system. Signal Flow Diagrams are especially useful, because they allow for particular
methods of analysis, such as Mason's Gain Formula.
Signal flow diagrams typically use curved lines to represent wires and systems, instead of
using lines at right-angles, and boxes, respectively. Every curved line is considered to have a
multiplier value, which can be a constant gain value, or an entire transfer function. Signals
travel from one end of a line to the other, and lines that are placed in series with one another
have their total multiplier values multiplied together (just like in block diagrams).
Signal flow diagrams help us to identify structures called ”loops” in a system, which can be
analyzed individually to determine the complete response of the system.
1 https://en.wikipedia.org/wiki/Signal-flow%20graph
A forward path is a path in the signal flow diagram that connects the input to the output
without touching any single node or path more than once. A single system can have multiple
forward paths.
22.1.2 Loops
A loop is a structure in a signal flow diagram that leads back to itself. A loop does not
contain the beginning and ending points of the diagram, and the end of the loop is the same
node as its beginning.
Loops are said to touch if they share a node or a line in common.
The Loop gain is the total gain of the loop, as you travel from one point, around the loop,
back to the starting point.
Δ = 1 − A + B − C + D − E + F − · · ·
Where:
• A is the sum of all individual loop gains
• B is the sum of the products of all the pairs of non-touching loops
• C is the sum of the products of all the sets of 3 non-touching loops
• D is the sum of the products of all the sets of 4 non-touching loops
• et cetera.
If the given system has no pairs of non-touching loops, for instance, then B and all additional
terms after B will be zero.
Mason's rule is a rule for determining the gain of a system. Mason's rule can be used with
block diagrams, but it is most commonly (and most easily) used with signal flow diagrams.
w:Mason's rule2
If we have computed our delta values (above), we can then use Mason's Gain Rule to
find the complete gain of the system:
[Mason's Rule]
M = yout / yin = (1/Δ) ∑_{k=1}^{N} Mk Δk
Where M is the total gain of the system, represented as the ratio of the output gain (yout)
to the input gain (yin) of the system. Mk is the gain of the kth forward path, and Δk is the
value of Δ computed for the portion of the graph that does not touch the kth forward path.
22.2 Examples
This example shows how a system of five equations in five unknowns is solved using sys-
tematic reduction rules. The independent variable is xin . The dependent variables are x1 ,
x2 , x3 , x4 , xout . The coefficients are labeled a, b, c, d, e. Here is the starting flowgraph:
2 https://en.wikipedia.org/wiki/Mason%27s%20rule
Figure 68
x1 = xin + ex3
x2 = bx1 + ax4
x3 = cx2
x4 = dx3
xout = x4
Figure 69
x1 = xin + ex3
x2 = bx1 + ax4
x3 = cx2
x3 = c(bx1 + ax4 )
x3 = bcx1 + cax4
x4 = dx3
xout = x4
Figure 70
Figure 71
Figure 72
Figure 73
x1 = xin + ex3
x1 = xin + e(bcx1 + cax4 )
x1 = xin + bcex1 + acex4
x2 = bx1 + ax4
x3 = bcx1 + cax4
x4 = dx3
xout = x4
Figure 74
Figure 75
Node x3 has no outflows and is not a node of interest. It is deleted along with its inflows.
Figure 76
Removing self-loop at x1
Figure 77
Figure 78
Removing self-loop at x4
Figure 79
x1 = (1/(1 − bce)) xin + (ace/(1 − bce)) x4
x4 = bcd·x1 + acd·x4
x4(1 − acd) = bcd·x1
x4 = (bcd/(1 − acd)) x1
xout = x4
Figure 80
Figure 81
x1 = (1/(1 − bce)) xin + (ace/(1 − bce)) x4
x1 = (1/(1 − bce)) xin + (ace/(1 − bce)) × (bcd/(1 − acd)) x1
x4 = (bcd/(1 − acd)) x1
xout = x4
x1 = (1/(1 − bce)) xin + (ace/(1 − bce)) × (bcd/(1 − acd)) x1
xout = (bcd/(1 − acd)) x1
x4 's outflow is then eliminated: xout is connected directly to x1 using the product of the
gains from the two edges replaced.
x4 is not a variable of interest; thus, its node and its inflows are eliminated.
Figure 82
Eliminating self-loop at x1
x1 = (1/(1 − bce)) xin + (ace/(1 − bce)) × (bcd/(1 − acd)) x1
x1 (1 − (ace/(1 − bce)) × (bcd/(1 − acd))) = (1/(1 − bce)) xin
x1 = xin / [(1 − bce) × (1 − (ace/(1 − bce)) × (bcd/(1 − acd)))]
xout = (bcd/(1 − acd)) x1
Figure 83
Figure 84
Figure 85
x1 = xin / [(1 − bce) × (1 − (ace/(1 − bce)) × (bcd/(1 − acd)))]
xout = (bcd/(1 − acd)) x1
xout = (bcd/(1 − acd)) × xin / [(1 − bce) × (1 − (ace/(1 − bce)) × (bcd/(1 − acd)))]
Figure 86

xout = (−bcd / (bce + acd − 1)) xin

Figure 87
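Mason's Gain Formula gives the same answer much more quickly: the only forward path has
gain bcd, the two loops bce and acd touch each other (so B = 0), Δ = 1 − bce − acd, and
Δ1 = 1, which reproduces xout/xin = bcd/(1 − bce − acd). The SymPy sketch below (not
part of the original text) confirms the result by solving the five node equations directly:

import sympy as sp

xin, x1, x2, x3, x4, xout = sp.symbols("xin x1 x2 x3 x4 xout")
a, b, c, d, e = sp.symbols("a b c d e")

sol = sp.solve(
    [sp.Eq(x1, xin + e * x3),
     sp.Eq(x2, b * x1 + a * x4),
     sp.Eq(x3, c * x2),
     sp.Eq(x4, d * x3),
     sp.Eq(xout, x4)],
    [x1, x2, x3, x4, xout], dict=True)[0]

# equivalent to b*c*d*xin / (1 - a*c*d - b*c*e)
print(sp.simplify(sol[xout]))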
This example shows how a system of three equations in three unknowns is solved using sys-
tematic reduction rules. The independent variables are y1 , y2 , y3 . The dependent variables
are x1 , x2 , x3 . The coefficients are labeled cjk . The steps for solving x1 follow:
Figure 88
Figure 89
Figure 90
Figure 91
Figure 92
Figure 93
Figure 94
Figure 95
Figure 96
Figure 97
Figure 98
Figure 99
Figure 101
This illustration shows the physical connections of the circuit. Independent voltage source
S is connected in series with a resistor R and capacitor C. The example is developed from
the physical circuit equations and solved using signal-flow graph techniques. Polarity is
important:
• S is a source with the positive terminal at N1 and the negative terminal at N3
• R is a resistor with the positive terminal at N1 and the negative terminal at N2
• C is a capacitor with the positive terminal at N2 and the negative terminal at N3 .
The unknown variable of interest is the voltage across capacitor C.
Approach to the solution:
• Find the set of equations from the physical network. These equations are acausal in
nature.
• Branch equations for the capacitor and resistor. The equations will be developed as
transfer functions using Laplace transforms.
• Kirchhoff's voltage and current laws
• Build a signal-flow graph from the equations.
• Solve the signal-flow graph.
Figure 102
VC(t) = QC(t)/C = (1/C) ∫_{t0}^{t} IC(τ) dτ + VC(t0)

VC(t) = QC(t)/C = (1/C) ∫_{t0}^{t} IC(τ) dτ
Taking the derivative of this and multiplying by C yields the derivative form:
IC (s) = VC (s)sC
Figure 103
This circuit has only one independent loop. Its equation in the time domain is:

VS(t) = VR(t) + VC(t)

The circuit has three nodes, thus three Kirchhoff's current equations (expressed here as the
currents flowing from the nodes):

IS = IR (node N1),   IR = IC (node N2),   IC = IS (node N3)
A set of independent equations must be chosen. For the current laws, it is necessary to
drop one of these equations. In this example, let us choose KCL1 , KCL2 .
We then look at the inventory of equations, and the signals that each equation relates:
Equation   Signals
BC         VC, IC
BR         VR, IR
KVL1       VR, VC, VS
KCL1       IS, IR
KCL2       IR, IC
The next step consists in assigning to each equation a signal that will be represented as a
node. Each independent source signal is represented in the signal-flow graph as a source
node, therefore no equation is assigned to the independent source VS . There are many
possible valid signal flow graphs from this set of equations. An equation must only be used
once, and the variables of interest must be represented.
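As a side check (not part of the original text), the same equation inventory can be handed to
a computer algebra system. The sketch below assumes the branch and Kirchhoff equations
read as stated above (BR: VR = R·IR, BC: IC = sC·VC, KVL1: VS = VR + VC, KCL2:
IR = IC) and solves for the capacitor voltage:

import sympy as sp

s, R, C = sp.symbols("s R C", positive=True)
VS, VR, VC, IR, IC = sp.symbols("V_S V_R V_C I_R I_C")

sol = sp.solve(
    [sp.Eq(VR, R * IR),          # resistor branch equation (BR)
     sp.Eq(IC, s * C * VC),      # capacitor branch equation (BC)
     sp.Eq(VS, VR + VC),         # KVL around the single loop
     sp.Eq(IR, IC)],             # KCL at node N2
    [VR, VC, IR, IC], dict=True)[0]

print(sp.simplify(sol[VC] / VS))   # 1/(R*C*s + 1)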
Figure 104
Figure 105
Figure 106 Angular position servo and signal flow graph. θC = desired angle
command, θL = actual load angle, KP = position loop gain, VωC = velocity command,
VωM = motor velocity sense voltage, KV = velocity loop gain, VIC = current command,
VIM = current sense voltage, KC = current loop gain, VA = power amplifier output
voltage, LM = motor inductance, VM = voltage across motor inductance, IM = motor
current, RM = motor resistance, RS = current sense resistance, KM = motor torque
constant (Nm/amp), T = torque, M = moment of inertia of all rotating components, α
= angular acceleration, ω = angular velocity, β = mechanical damping, GM = motor back
EMF constant, GT = tachometer conversion gain constant. There is one forward path
(shown in a different color) and six feedback loops. The drive shaft is assumed to be stiff
enough not to be treated as a spring. Constants are shown in black and variables in purple.
23 Bode Plots
A Bode Plot is a useful tool that shows the gain and phase response of a given LTI system
for different frequencies. Bode Plots are generally used with the Fourier Transform of a
given system.
Figure 107 An example of a Bode magnitude and phase plot set. The Magnitude plot
is typically on the top, and the Phase plot is typically on the bottom of the set.
The frequency response in a Bode plot is plotted against a logarithmic frequency axis. Every
tickmark on the frequency axis represents a power of 10 times the previous value. For
instance, on a standard Bode plot, the values of the markers go from (0.1, 1, 10, 100, 1000,
...). Because each tickmark is a power of 10, they are referred to as a decade. Notice that
the ”length” of a decade decreases as you move to the right on the graph (note that this
description doesn't match the chart above; there are 10 tickmarks per decade, not one, but
since it is a log chart they are not evenly spaced).
The Bode magnitude plot measures the system input/output ratio in special units called
decibels. The Bode phase plot measures the phase shift in degrees (typically, but radians
are also used).
23.1.1 Decibels
dB = 20 log(A / B)

dB = 20 log(C)

If we have a system transfer function T(s), we can separate it into a numerator polynomial
N(s) and a denominator polynomial D(s). We can write this as follows:

T(s) = N(s) / D(s)

To get the magnitude gain plot, we must first convert the transfer function into the frequency
response by using the change of variables:

s = jω

From here, we can say that our frequency response is a composite of two parts, a real part
R and an imaginary part X:

T(jω) = R(ω) + jX(ω)
The Bode magnitude and phase plots can be quickly and easily approximated by using a
series of straight lines. These approximate graphs can be generated by following a few short,
simple rules (listed below). Once the straight-line graph is determined, the actual Bode plot
is a smooth curve that follows the straight lines, and travels through the breakpoints.
We say that the values for all zn and pm are called break points of the Bode plot. These
are the values where the Bode plots experience the largest change in direction.
Break points are sometimes also called ”break frequencies”, ”cutoff points”, or ”corner points”.
Bode Gain Plots, or Bode Magnitude Plots display the ratio of the system gain at
each input frequency.
|T(jω)| = √(R² + X²)

If we convert both sides to decibels, the logarithms from the decibel calculations convert
multiplication of the arguments into additions, and the divisions into subtractions:

Gain = ∑n 20 log(|jω + zn|) − ∑m 20 log(|jω + pm|)
And calculating out the gain of each term and adding them together will give the gain of
the system at that frequency.
The slope of a straight line on a Bode magnitude plot is measured in units of dB/Decade,
because the units on the vertical axis are dB, and the units on the horizontal axis are
decades.
The value ω = 0 is infinitely far to the left of the bode plot (because a logarithmic scale
never reaches zero), so finding the value of the gain at ω = 0 essentially sets that value to
be the gain for the Bode plot from all the way on the left of the graph up till the first break
point. The value of the slope of the line at ω = 0 is 0 dB/Decade.
From each pole break point, the slope of the line decreases by 20 dB/Decade. The line is
straight until it reaches the next break point. From each zero break point the slope of the
line increases by 20 dB/Decade. Double, triple, or higher amounts of repeat poles and zeros
affect the gain by multiplicative amounts. Here are some examples:
• 2 poles: -40 dB/Decade
• 10 poles: -200 dB/Decade
• 5 zeros: +100 dB/Decade
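To see how good the straight-line approximation is, it helps to compare it against an exact
magnitude curve. The sketch below (not part of the original text) uses scipy.signal.bode on
the arbitrary example T(s) = 100/((s + 1)(s + 10)), whose straight-line plot is flat at 20 dB
up to the pole at ω = 1, falls at −20 dB/Decade until the pole at ω = 10, and at
−40 dB/Decade afterwards:

import numpy as np
from scipy import signal

T = signal.TransferFunction([100.0], np.polymul([1.0, 1.0], [1.0, 10.0]))
w = np.logspace(-2, 3, 500)            # 0.01 to 1000 rad/s
w, mag, phase = signal.bode(T, w)      # mag in dB, phase in degrees

print(mag[0])                          # close to 20*log10(10) = 20 dB at low frequency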
Bode phase plots are plots of the phase shift to an input waveform dependent on the fre-
quency characteristics of the system input. Again, the Laplace transform does not account
for the phase shift characteristics of the system, but the Fourier Transform can. The phase
of a complex function, in ”real+imaginary” form is given as:
∠T(jω) = tan⁻¹(X / R)
23.5 Examples
Draw the bode plot of an amplifier system, with a constant gain increase of 6dB.
Because the gain value is constant, and is not dependent on the frequency, we know
that the value of the magnitude graph is constant at all places on the graph. There are
no break points, so the slope of the graph never changes. We can draw the graph as a
straight, horizontal line at 6dB:
Figure 108
Draw the bode plot of a perfect integrator system given by the transfer function:
T(s) = 2 / s

Figure 109

T(jω) = 2 / (jω)
Figure 110

T(jω) = −21.4 / (jω + 144)
Figure 111
Further reading
1 http://www.onmyphd.com/?p=bode.plot
2 https://en.wikibooks.org/wiki/Circuit%20Theory%2FBode%20Plots
3 http://wikis.controltheorypro.com/index.php?title=Bode_Plot
24 Nichols Charts
This page will talk about the use of Nichols charts to analyze frequency-domain character-
istics of control systems.
w:Nichols plot1
1 https://en.wikipedia.org/wiki/Nichols%20plot
25 Stability
25.1 Stability
When a system is unstable, the output of the system may be infinite even though the input
to the system was finite. This causes a number of practical problems. For instance, a robot
arm controller that is unstable may cause the robot to move dangerously. Also, systems
that are unstable often incur a certain amount of physical damage, which can become costly.
Nonetheless, many systems are inherently unstable - a fighter jet, for instance, or a rocket
at liftoff, are examples of naturally unstable systems. Although we can design controllers
that stabilize the system, it is first important to understand what stability is, how it is
determined, and why it matters.
The chapters in this section are heavily mathematical, and many require a background
in linear differential equations. Readers without a strong mathematical background might
want to review the necessary chapters in the Calculus1 and Ordinary Differential Equations2
books (or equivalent) before reading this material.
For most of this chapter we will be assuming that the system is linear, and can be represented
either by a set of transfer functions or in state space. Linear systems have an associated
characteristic polynomial, and this polynomial tells us a great deal about the stability of
the system. If any coefficient of the characteristic polynomial is zero or negative, then the
system is either unstable or, at best, marginally stable. It is important to note, though, that even if
all of the coefficients of the characteristic polynomial are positive the system may still be
unstable. We will look into this in more detail below.
A system is defined to be BIBO Stable if every bounded input to the system results in a
bounded output over the time interval [t0, ∞). This must hold for all initial times t0. So
long as we don't input infinity to our system, we won't get infinity output.
A system is defined to be uniformly BIBO Stable if there exists a positive constant
k that is independent of t0 such that for all t0 the following conditions:
∥u(t)∥ ≤ 1

t ≥ t0

implies that

∥y(t)∥ ≤ k

1 https://en.wikibooks.org/wiki/Calculus
2 https://en.wikibooks.org/wiki/Ordinary%20Differential%20Equations
There are a number of different types of stability, and keywords that are used with the
topic of stability. Some of the important words that we are going to be discussing in this
chapter, and the next few chapters are: BIBO Stable, Marginally Stable, Condition-
ally Stable, Uniformly Stable, Asymptotically Stable, and Unstable. All of these
words mean slightly different things.
−M < x ≤ M
We apply the input x, and the arbitrary boundaries M and -M to the system to produce
three outputs:
yx = f (x)
yM = f (M )
y−M = f (−M )
Now, all three outputs should be finite for all possible values of M and x, and they should
satisfy the following relationship:
y−M ≤ yx ≤ yM
25.3.1 Example

We can apply our test to the system y = f(x) = 2/x, selecting an arbitrarily large finite
constant M, and an arbitrary input x such that M > x > −M.
As M approaches infinity (but does not reach infinity), we can show that:
y−M = limM→∞ 2/(−M) = 0−

And:

yM = limM→∞ 2/M = 0+

So now, we can write out our inequality:

y−M ≤ yx ≤ yM

0− ≤ yx ≤ 0+

And this inequality should be satisfied for all possible values of x. However, we can see
that when x is zero, we have the following:

yx = limx→0 2/x = ∞
Which means that x is between -M and M, but the value yx is not between y-M and
yM . Therefore, this system is not stable.
When the poles of the closed-loop transfer function of a given system are located in the
right-half of the S-plane (RHP), the system becomes unstable. When the poles of the
system are located in the left-half plane (LHP) and the system is not improper, the system
is shown to be stable. A number of tests deal with this particular facet of stability: The
Routh-Hurwitz Criteria, the Root-Locus, and the Nyquist Stability Criteria all
test whether there are poles of the transfer function in the RHP. We will learn about all
these tests in the upcoming chapters.
If the system is a multivariable, or a MIMO system, then the system is stable if and
only if every pole of every transfer function in the transfer function matrix has a negative
real part and every transfer function in the transfer function matrix is not improper. For
these systems, it is possible to use the Routh-Hurwitz, Root Locus, and Nyquist methods
described later, but these methods must be performed once for each individual transfer
function in the transfer function matrix.
Note:
Every pole of G(s) is an eigenvalue of the system matrix A. However, not every eigenvalue
of A is a pole of G(s).
The poles of the transfer function, and the eigenvalues of the system matrix A are related.
In fact, we can say that the eigenvalues of the system matrix A are the poles of the transfer
function of the system. In this way, if we have the eigenvalues of a system in the state-space
domain, we can use the Routh-Hurwitz, and Root Locus methods as if we had our system
represented by a transfer function instead.
On a related note, eigenvalues and all methods and mathematical techniques that use
eigenvalues to determine system stability only work with time-invariant systems. In systems
which are time-variant, the methods using eigenvalues to determine system stability fail.
We are going to have a brief refresher here about transfer functions, because several of the
later chapters will use transfer functions for analyzing system stability.
Let us remember our generalized feedback-loop transfer function, with a gain element of
K, a forward path Gp(s), and a feedback of Gb(s). We write the transfer function for this
system as:
Hcl(s) = KGp(s) / (1 + Hol(s))
Where Hcl is the closed-loop transfer function, and Hol is the open-loop transfer function.
Again, we define the open-loop transfer function as the product of the forward path and
the feedback elements, as such:
Hol(s) = KGp(s)Gb(s)

(Note that this definition differs from the one used in the Feedback Loops chapter, where the
gain K multiplies the reference input and therefore sits outside the loop.)

Now, we can define F(s) to be the characteristic equation. F(s) is simply the denominator
of the closed-loop transfer function, and can be defined as such:

[Characteristic Equation]

F(s) = 1 + Hol(s)
We can say conclusively that the roots of the characteristic equation are the poles of the
transfer function. Now, we know a few simple facts:
1. The locations of the poles of the closed-loop transfer function determine if the system
is stable or not
2. The zeros of the characteristic equation are the poles of the closed-loop transfer func-
tion.
3. The characteristic equation is always a simpler equation than the closed-loop transfer
function.
These facts combined show us that we can focus our attention on the characteristic
equation, and find the roots of that equation.
As we have discussed earlier, the system is stable if the eigenvalues of the system matrix
A have negative real parts. However, there are other stability issues that we can analyze,
such as whether a system is uniformly stable, asymptotically stable, or otherwise. We will
discuss all these topics in a later chapter.
When the poles of the system in the complex S-Domain exist on the complex frequency
axis (the vertical axis), or when the eigenvalues of the system matrix are imaginary (no real
part), the system exhibits oscillatory characteristics, and is said to be marginally stable. A
marginally stable system may become unstable under certain circumstances, and may be
perfectly stable under other circumstances. It is impossible to tell by inspection whether a
marginally stable system will become unstable or not.
We will discuss marginal stability more in the following chapters.
26 Introduction to Digital Controls
The stability analysis of a discrete-time or digital system is similar to the analysis for a
continuous time system. However, there are enough differences that it warrants a separate
chapter.
An LTI causal system is uniformly BIBO stable if there exists a positive constant L such
that the following conditions:
x[n0 ] = 0
∥u[n]∥ ≤ k

k ≥ 0

imply that

∥y[n]∥ ≤ L

G[n] = Cϕ[n, n0]B   if k > 0
G[n] = 0            if k ≤ 0

A digital system is BIBO stable if and only if there exists a positive constant L such that
for all non-negative k:

∑_{n=0}^{k} ∥G[n]∥ ≤ L
A MIMO discrete-time system is BIBO stable if and only if every pole of every transfer
function in the transfer function matrix has a magnitude less than 1. All poles of all
transfer functions must exist inside the unit circle on the Z plane.
There is a discrete version of the Lyapunov stability theorem that applies to digital systems.
Given the discrete Lyapunov equation:
[Digital Lyapunov Equation]

Aᵀ M A − M = −N

We can use this version of the Lyapunov equation to define a condition for stability in
discrete-time systems: if, for any positive definite matrix N, there exists a positive definite
solution M to the equation above, then the system is asymptotically stable.

Every pole of G(z) is an eigenvalue of the system matrix A. Not every eigenvalue of A is
a pole of G(z). Like the poles of the transfer function, all the eigenvalues of the system
matrix must have magnitudes less than 1. Mathematically:

√(Re(z)² + Im(z)²) < 1
If the magnitude of the eigenvalues of the system matrix A, or the poles of the transfer
functions are greater than 1, the system is unstable.
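In practice this check is one line of linear algebra. The sketch below (not part of the original
text) tests an arbitrary 2×2 system matrix:

import numpy as np

A = np.array([[0.5, 0.2],
              [0.0, 0.9]])                       # an illustrative system matrix

eigenvalues = np.linalg.eigvals(A)
print(np.abs(eigenvalues))                       # [0.5, 0.9]
print(bool(np.all(np.abs(eigenvalues) < 1.0)))   # True -> stable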
Digital computer systems have an inherent problem because implementable computer sys-
tems have finite wordlengths to deal with. Some of the issues are:
1. Real numbers can only be represented with a finite precision. Typically, a computer
system can only accurately represent a number to a finite number of decimal points.
2. Because of the fact above, computer systems with feedback can compound errors with
each program iteration. Small errors in one step of an algorithm can lead to large
errors later in the program.
3. Integer numbers in computer systems have finite lengths. Because of this, integer
numbers will either roll-over, or saturate, depending on the design of the computer
system. Both situations can create inaccurate results.
27 Routh-Hurwitz Criterion
The Routh-Hurwitz stability criterion provides a simple algorithm to decide whether or not
the zeros of a polynomial are all in the left half of the complex plane (such a polynomial
is called at times ”Hurwitz”). A Hurwitz polynomial is a key requirement for a linear
continuous-time time-invariant system to be stable (all bounded inputs produce bounded outputs).
Necessary stability conditions
Conditions that must hold for a polynomial to be Hurwitz.
If any of them fails - the polynomial is not stable. However, they may all hold without
implying stability.
Sufficient stability conditions
Conditions that if met imply that the polynomial is stable. However, a polynomial may
be stable without implying some or any of them.
The Routh criterion provides conditions that are both necessary and sufficient for a polynomial
to be Hurwitz.
The Routh-Hurwitz criterion consists of three separate tests that must be satisfied. If
any single test fails, the system is not stable and further tests need not be performed. For
this reason, the tests are arranged in order from the easiest to determine to the hardest.
The Routh Hurwitz test is performed on the denominator of the transfer function, the
characteristic equation. For instance, in a closed-loop transfer function with G(s) in the
forward path, and H(s) in the feedback loop, we have:
T(s) = G(s) / (1 + G(s)H(s))

If we simplify this equation, we will have an equation with a numerator N(s), and a denom-
inator D(s):

T(s) = N(s) / D(s)
Here are the three tests of the Routh-Hurwitz Criteria. For convenience, we will use N as
the order of the polynomial (the value of the highest exponent of s in D(s)). The equation
D(s) can be represented generally as follows:
D(s) = a0 + a1 s + a2 s2 + · · · + aN sN
Rule 1
All the coefficients ai must be present (non-zero)
Rule 2
All the coefficients ai must be positive (equivalently all of them must be negative, with
no sign change)
Rule 3
If Rule 1 and Rule 2 are both satisfied, then form a Routh array from the coef-
ficients ai . There is one pole in the right-hand s-plane for every sign change of the
members in the first column of the Routh array (any sign changes, therefore, mean the
system is unstable).
The Routh array is formed by taking all the coefficients ai of D(s), and staggering them in
array form. The final columns for each row should contain zeros:
s^N      | aN     aN−2   · · ·   0
s^(N−1)  | aN−1   aN−3   · · ·   0

Therefore, if N is odd, the top row will be all the odd coefficients. If N is even, the top row
will be all the even coefficients. We can fill in the remainder of the Routh Array as follows:

s^N      | aN     aN−2   · · ·   0
s^(N−1)  | aN−1   aN−3   · · ·   0
s^(N−2)  | bN−1   bN−3   · · ·
s^(N−3)  | cN−1   cN−3   · · ·
  ⋮
s^0      | · · ·
Now, we can define all our b, c, and other coefficients, until we reach row s0 . To fill them
in, we use the following formulae:
bN−1 = (−1 / aN−1) · det [ aN     aN−2 ]
                         [ aN−1   aN−3 ]

And

bN−3 = (−1 / aN−1) · det [ aN     aN−4 ]
                         [ aN−1   aN−5 ]
For each row that we are computing, we call the left-most element in the row directly above
it the pivot element. For instance, in row b, the pivot element is aN-1 , and in row c, the
pivot element is bN-1 and so on and so forth until we reach the bottom of the array.
To obtain any element, we negate the determinant of the following matrix, and divide by
the pivot element:
[ k   m ]
[ l   n ]

Where:
• k is the left-most element two rows above the current row.
• l is the pivot element.
• m is the element two rows up, and one column to the right of the current element.
• n is the element one row up, and one column to the right of the current element.
In terms of k l m n, our equation is:

v = (lm − kn) / l
To calculate the value CN-3 , we must determine the values for k l m and n:
• k is the left-most element two rows up: aN-1
• l the pivot element, is the left-most element one row up: bN-1
• m is the element from one-column to the right, and up two rows: aN-5
• n is the element one column right, and one row up: bN-5
Plugging this into our equation gives us the formula for CN-3 :
cN−3 = (−1 / bN−1) · det [ aN−1   aN−5 ]  =  (aN−1 bN−5 − bN−1 aN−5) / (−bN−1)
                         [ bN−1   bN−5 ]
Consider the characteristic polynomial D(s) = s³ + 2s² + 4s + 3. Using the first two
requirements, we see that all the coefficients are non-zero, and all of the coefficients are
positive. We will proceed then to construct the Routh array:

s³ | 1      4      0
s² | 2      3      0
s¹ | bN−1   bN−3   0
s⁰ | cN−1   cN−3   0

And we can calculate out all the coefficients:

bN−1 = ((2)(4) − (1)(3)) / 2 = 5/2
bN−3 = ((2)(0) − (0)(1)) / 2 = 0
cN−1 = ((5/2)(3) − (2)(0)) / (5/2) = 3
cN−3 = ((5/2)(0) − (2)(0)) / (5/2) = 0

And filling these values into our Routh Array, we can determine whether the system is
stable:

s³ | 1     4   0
s² | 2     3   0
s¹ | 5/2   0   0
s⁰ | 3     0   0
From this array, we can clearly see that all of the signs of the first column are positive,
there are no sign changes, and therefore there are no poles of the characteristic equation
in the RHP.
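The array construction is mechanical enough to automate. The following is a minimal sketch
(not part of the original text) that builds the Routh array for the example polynomial above
and counts sign changes in the first column; it does not handle the special cases (a zero in
the first column or a row of all zeros) discussed next:

import numpy as np

def routh_array(coeffs):
    """coeffs are [aN, aN-1, ..., a0], highest power first."""
    n = len(coeffs)
    cols = (n + 1) // 2
    rows = np.zeros((n, cols))
    rows[0, :len(coeffs[0::2])] = coeffs[0::2]   # aN, aN-2, ...
    rows[1, :len(coeffs[1::2])] = coeffs[1::2]   # aN-1, aN-3, ...
    for i in range(2, n):
        pivot = rows[i - 1, 0]
        for j in range(cols - 1):
            rows[i, j] = (rows[i - 1, 0] * rows[i - 2, j + 1]
                          - rows[i - 2, 0] * rows[i - 1, j + 1]) / pivot
    return rows

R = routh_array([1.0, 2.0, 4.0, 3.0])            # D(s) = s^3 + 2s^2 + 4s + 3
first_col = R[:, 0]
sign_changes = np.sum(np.diff(np.sign(first_col)) != 0)
print(R)                                         # first column: 1, 2, 2.5, 3
print(sign_changes)                              # 0 -> no right-half-plane poles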
If, while calculating our Routh-Hurwitz, we obtain a row of all zeros, we do not stop, but
can actually learn more information about our system.
If we have a row of all zeros, the row directly above it is known as the Auxiliary Poly-
nomial, and can be very helpful. The roots of the auxiliary polynomial give us the precise
locations of complex conjugate roots that lie on the jω axis. However, one important point
to notice is that if there are repeated roots on the jω axis, the system is actually unsta-
ble. Therefore, we must use the auxiliary polynomial to determine whether the roots are
repeated or not.
The auxiliary polynomial is differentiated with respect to s, and the coefficients of the
resulting equation replace the all-zero row. The Routh array can then be calculated further
using these new values.
In this special case, there is a zero in the first column of the Routh Array, but the other
elements of that row are non-zero. Like the above case, we can replace the zero with a
small variable epsilon (ε) and use that variable to continue our calculations. After we have
constructed the entire array, we can take the limit as epsilon approaches zero to get our
final values. If the sign coefficient above the (ε) is the same as below it, this indicates a
pure imaginary root.
28 Jury's Test
Because of the differences in the Z and S domains, the Routh-Hurwitz criteria can not be
used directly with digital systems. This is because digital systems and continuous-time
systems have different regions of stability. However, there are some methods that we can
use to analyze the stability of digital systems. Our first option (and arguably not a very
good option) is to convert the digital system into a continuous-time representation using
the bilinear transform. The bilinear transform converts an equation in the Z domain
into an equation in the W domain, that has properties similar to the S domain. Another
possibility is to use Jury's Stability Test. Jury's test is a procedure similar to the RH
test, except it has been modified to analyze digital systems in the Z domain directly.
One common, but time-consuming, method of analyzing the stability of a digital system
in the z-domain is to use the bilinear transform to convert the transfer function from the
z-domain to the w-domain. The w-domain is similar to the s-domain in the following ways:
• Poles in the right-half plane are unstable
• Poles in the left-half plane are stable
• Poles on the imaginary axis are partially stable
The w-domain is warped with respect to the s domain, however, and except for the relative
position of poles to the imaginary axis, they are not in the same places as they would be in
the s-domain.
Remember, however, that the Routh-Hurwitz criterion can tell us whether a pole is unstable
or not, and nothing else. Therefore, it doesn't matter where exactly the pole is, so long as
it is in the correct half-plane. Since we know that stable poles are in the left-half of the
w-plane and the s-plane, and that unstable poles are on the right-hand side of both planes,
we can use the Routh-Hurwitz test on functions in the w domain exactly like we can use it
on functions in the s-domain.
There are other methods for mapping an equation in the Z domain into an equation in the
S domain, or a similar domain. We will discuss these different methods in the Appendix1 .
1 https://en.wikibooks.org/wiki/Control%20Systems%2FZ%20Transform%20Mappings
Jury's test is a test that is similar to the Routh-Hurwitz criterion, except that it can be
used to analyze the stability of an LTI digital system in the Z domain. To use Jury's test to
determine if a digital system is stable, we must check our z-domain characteristic equation
against a number of specific rules and requirements. If the function fails any requirement,
it is not stable. If the function passes all the requirements, it is stable. Jury's test is a
necessary and sufficient test for stability in digital systems.
Again, we call D(z) the characteristic polynomial of the system. It is the denominator
polynomial of the Z-domain transfer function. Jury's test will focus exclusively on the
Characteristic polynomial. To perform Jury's test, we must perform a number of smaller
tests on the system. If the system fails any test, it is unstable.
D(z) = a0 + a1 z + a2 z 2 + · · · + aN z N
The following tests determine whether this system has any poles outside the unit circle (the
instability region). These tests will use the value N as being the degree of the characteristic
polynomial.
The system must pass all of these tests to be considered stable. If the system fails any
test, you may stop immediately: you do not need to try any further tests.
Rule 1
If z is 1, the system output must be positive:
D(1) > 0
Rule 2
If z is -1, then the following relationship must hold:

(−1)^N D(−1) > 0
Rule 3
The absolute value of the constant term (a0 ) must be less than the value of the highest
coefficient (aN ):
|a0 | < aN
If Rule 1, Rule 2, and Rule 3 are all satisfied, construct the Jury Array (discussed
below).

Rule 4
Once the Jury Array has been formed, all the following relationships must be satisfied
until the end of the array:

|b0| > |bN−1|
|c0| > |cN−2|
|d0| > |dN−3|

And so on until the last row of the array. If all these conditions are satisfied, the system
is stable.
While you are constructing the Jury Array, you can be making the tests of Rule 4. If
the Array fails Rule 4 at any point, you can stop calculating the array: your system is
unstable. We will discuss the construction of the Jury Array below.
The Jury Array is constructed by first writing out a row of coefficients, and then writing
out another row with the same coefficients in reverse order. For instance, if your polynomial
is a third order system, we can write the first two rows of the Jury Array as follows:

        z⁰     z¹     z²     z³    . . .   zN
        a0     a1     a2     a3    . . .   aN
        aN     . . .  a3     a2    a1      a0

Now, once we have the first row of our coefficients written out, we add another row of coef-
ficients (we will use b for this row, and c for the next row, as per our previous convention),
and we will calculate the values of the lower rows from the values of the upper rows. Each
new row that we add will have one fewer coefficient than the row before it:

1)      a0     a1     a2     a3    . . .   aN
2)      aN     . . .  a3     a2    a1      a0
3)      b0     b1     b2     . . . bN−1
4)      bN−1   . . .  b2     b1    b0
        ⋮
2N−3)   v0     v1     v2
Note: The last row is the (2N−3)rd row, and it always has 3 elements. This test doesn't
make sense if N=1, but in this case you already know the pole!
Once we get to a row with 2 members, we can stop constructing the array.
To calculate the values of the odd-number rows, we can use the following formulae. The
even number rows are equal to the previous row in reverse order. We will use k as an
arbitrary subscript value. These formulae are reusable for all elements in the array:
bk = det [ a0     aN−k ]
         [ aN     ak   ]

ck = det [ b0     bN−1−k ]
         [ bN−1   bk     ]

dk = det [ c0     cN−2−k ]
         [ cN−2   ck     ]
This pattern can be carried on to all lower rows of the array, if needed.
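The same pattern can be rolled into a small program. The sketch below (not part of the
original text) applies Rules 1–3 and then builds successive rows with the determinant pattern
above, checking the |b0| > |bN−1|-style condition for each new row; Rule 2 is taken here as
(−1)^N·D(−1) > 0, and the pole magnitudes are also printed as an independent cross-check:

import numpy as np

def jury_stable(a):
    """a = [a0, a1, ..., aN], constant term first. True if the test passes
    (no special handling for marginal, borderline-zero cases)."""
    a = np.asarray(a, dtype=float)
    N = len(a) - 1
    D = np.poly1d(a[::-1])                 # D(z), highest power first
    if D(1) <= 0:                          # Rule 1
        return False
    if ((-1) ** N) * D(-1) <= 0:           # Rule 2 (as assumed here)
        return False
    if abs(a[0]) >= abs(a[-1]):            # Rule 3
        return False
    row = a
    while len(row) > 3:                    # build rows down to 3 elements
        rev = row[::-1]
        row = np.array([row[0] * row[k] - row[-1] * rev[k]
                        for k in range(len(row) - 1)])
        if abs(row[0]) <= abs(row[-1]):    # Rule 4 pattern
            return False
    return True

coeffs = [0.2, -0.5, 1.0]                  # D(z) = z^2 - 0.5z + 0.2
print(jury_stable(coeffs), np.abs(np.roots(coeffs[::-1])))   # True, poles ~0.447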
Give the equation for member e5 of the jury array (assuming the original polynomial is
sufficiently large to require an e5 member).
Going off the pattern we set above, we can have this equation for a member e:
ek = det [ d0     dN−R−k ]
         [ dN−R   dk     ]

Where we are using R as the subtractive element from the above equations. Since row
c had R → 1, and row d had R → 2, we can follow the pattern and for row e set R →
3. Plugging this value of R into our equation above gives us:

ek = det [ d0     dN−3−k ]
         [ dN−3   dk     ]

And since we want e5 we know that k is 5, so we can substitute that into the equation:

e5 = det [ d0     dN−3−5 ]  =  det [ d0     dN−8 ]
         [ dN−3   d5     ]         [ dN−3   d5   ]
When we take the determinant, we get the following equation:
e5 = d0 d5 − dN −8 dN −3
We will discuss the bilinear transform, and other methods to convert between the Laplace
domain and the Z domain in the appendix:
Further reading
• Z Transform Mappings2
2 https://en.wikibooks.org/wiki/Control%20Systems%2FZ%20Transform%20Mappings
29 Root Locus
Consider a system like a radio. The radio has a ”volume” knob, that controls the amount
of gain of the system. High volume means more power going to the speakers, low volume
means less power to the speakers. As the volume value increases, the poles of the transfer
function of the radio change, and they might potentially become unstable. We would like to
find out if the radio becomes unstable, and if so, we would like to find out what values of the
volume cause it to become unstable. Our current methods would require us to plug in each
new value for the volume (gain, ”K”), and solve the open-loop transfer function for the roots.
This process can be a long one. Luckily, there is a method called the root-locus method,
that allows us to graph the locations of all the poles of the system for all values of gain, K.
29.2 Root-Locus
As we change gain, we notice that the system poles and zeros actually move around in the
S-plane1 . This fact can make life particularly difficult, when we need to solve higher-order
equations repeatedly, for each new gain value. The solution to this problem is a technique
known as Root-Locus graphs. Root-Locus allows you to graph the locations of the poles
and zeros for every value of gain, by following several simple rules. (A fan's speed switch is
another everyday example of an adjustable gain.)
Let's say we have a closed-loop transfer function for a particular system:
N(s) / D(s) = KG(s) / (1 + KG(s)H(s))
Where N is the numerator polynomial and D is the denominator polynomial of the transfer
functions, respectively. Now, we know that to find the poles of the equation, we must set
the denominator to 0, and solve the characteristic equation. In other words, the locations
of the poles of a specific equation must satisfy the following relationship:
D(s) = 1 + KG(s)H(s) = 0
1 https://en.wikibooks.org/wiki/S-plane
1 + KG(s)H(s) = 0
KG(s)H(s) = −1
∠KG(s)H(s) = 180◦
Now we have 2 equations that govern the locations of the poles of a system for all gain
values:
[The Magnitude Equation]
1 + KG(s)H(s) = 0
[The Angle Equation]
∠KG(s)H(s) = 180◦
The same basic method can be used for considering digital systems in the Z-domain:
N(z) / D(z) = KG(z) / (1 + KGH(z))
D(z) = 1 + KGH(z) = 0
1 + KGH(z) = 0
KGH(z) = −1
We can now convert this to polar coordinates, and take the angle of the polynomial:
∠KGH(z) = 180◦
1 + KGH(z) = 0
[The Angle Equation]
∠KGH(z) = 180◦
If you will compare the two, the Z-domain equations are nearly identical to the S-domain
equations, and act exactly the same. For the remainder of the chapter, we will only consider
the S-domain equations, with the understanding that digital systems operate in nearly the
same manner.
Note:
In this section, the rules for the S-Plane and the Z-plane are the same, so we won't refer
to the differences between them.
In the transform domain (see note at right), when the gain is small, the poles start at the
poles of the open-loop transfer function. When gain becomes infinity, the poles move to
overlap the zeros of the system. This means that on a root-locus graph, all the poles move
towards a zero. Only one pole may move towards one zero, and this means that there must
be the same number of poles as zeros.
If there are fewer zeros than poles in the transfer function, there are a number of implicit
zeros located at infinity, that the poles will approach.
First thing, we need to convert the magnitude equation into a slightly more convenient
form:
KG(s)H(s) + 1 = 0   →   G(s)H(s) = −1/K
Note:
We generally use capital letters for functions in the frequency domain, but a(s) and
b(s) are unimportant enough to be lower-case.
Now, we can assume that G(s)H(s) is a fraction of some sort, with a numerator and a
denominator that are both polynomials. We can express this equation using arbitrary
functions a(s) and b(s), as such:
a(s) / b(s) = −1 / K
We will refer to these functions a(s) and b(s) later in the procedure.
We can start drawing the root-locus by first placing the roots of b(s) on the graph with an
'X'. Next, we place the roots of a(s) on the graph, and mark them with an 'O'.
Poles are marked on the graph with an 'X' and zeros are marked with an 'O' by
common convention. These letters have no particular meaning.
Next, we examine the real axis. Starting from the right-hand side of the graph and traveling
to the left, we draw a root-locus line on the real-axis at every point to the left of an odd
number of poles or zeros on the real-axis. This may sound tricky at first, but it becomes
easier with practice.
Now, a root-locus line starts at every pole. Therefore, any place that two poles appear to
be connected by a root locus line on the real-axis, the two poles actually move towards each
other, and then they ”break away”, and move off the axis. The point where the poles break
off the axis is called the breakaway point. From here, the root locus lines travel towards
the nearest zero.
It is important to note that the s-plane is symmetrical about the real axis, so whatever is
drawn on the top-half of the S-plane, must be drawn in mirror-image on the bottom-half
plane.
Once a pole breaks away from the real axis, they can either travel out towards infinity
(to meet an implicit zero), or they can travel to meet an explicit zero, or they can re-join
the real-axis to meet a zero that is located on the real-axis. If a pole is traveling towards
infinity, it always follows an asymptote. The number of asymptotes is equal to the number
of implicit zeros at infinity.
Here is the complete set of rules for drawing the root-locus graph. We will use p and z
to denote the number of poles and the number of zeros of the open-loop transfer function,
respectively. We will use Pi and Zi to denote the location of the ith pole and the ith zero,
respectively. Likewise, we will use ψi and ρi to denote the angle from a given point to the
ith pole and zero, respectively. All angles are given in radians (π denotes π radians).
There are 11 rules that, if followed correctly, will allow you to create a correct root-locus
graph.
Rule 1
There is one branch of the root-locus for every root of b(s).
Root Locus Rules
Rule 2
The roots of b(s) are the poles of the open-loop transfer function. Mark the roots of
b(s) on the graph with an X.
Rule 3
The roots of a(s) are the zeros of the open-loop transfer function. Mark the roots of
a(s) on the graph with an O. There should be a number of O's less than or equal to the number of X's. There are p − z zeros located at infinity. These zeros at infinity are called "implicit zeros". All branches of the root-locus will move from a pole to a zero (some branches, therefore, may travel towards infinity).
Rule 4
A point on the real axis is a part of the root-locus if it is to the left of an odd number
of poles and zeros.
Rule 5
The gain at any point on the root locus can be determined by the inverse of the absolute
value of the magnitude equation.
|b(s)/a(s)| = |K|
Rule 6
The root-locus diagram is symmetric about the real-axis. All complex roots are conju-
gates.
Rule 7
Two roots that meet on the real-axis will break away from the axis at certain break-away
points. If we set s → σ (no imaginary part), we can use the following equation:
K = −b(σ)/a(σ)
The break-away points are located where:
dK/dσ = d/dσ [−b(σ)/a(σ)] = 0
Rule 8
The breakaway lines of the root locus are separated by angles of π/α, where α is the number of poles intersecting at the breakaway point.
Rule 9
The breakaway root-loci follow asymptotes that intersect the real axis at angles φω
given by:
ϕω = (π + 2Nπ)/(p − z),   N = 0, 1, ... , p − z − 1
The origin of these asymptotes, OA, is given as the sum of the pole locations, minus
the sum of the zero locations, divided by the difference between the number of poles
and zeros:
OA = (ΣPi − ΣZi)/(p − z)
where the first sum is taken over all p poles and the second over all z zeros.
The angle of departure from a complex pole, ϕd, and the angle of arrival at a complex zero, ϕa, are found from the angle condition:
Σp ψi + Σz ρi + ϕd = π
Σz ρi + Σp ψi + ϕa = π
Note that the sum of the angles of all the poles and zeros must equal 180°.
Root Locus Equations
If the number of explicit zeros of the system is denoted by Z (uppercase z), and the number
of poles of the system is given by P, then the number of asymptotes (Na ) is given by:
[Number of Asymptotes]
Na = P − Z
ϕk = (2k + 1)π/(P − Z)
The angles for the asymptotes are measured from the positive real-axis
The breakaway points are located at the roots of the following equation:
[Breakaway Point Locations]
d[G(s)H(s)]/ds = 0   or   d[GH(z)]/dz = 0
Once you solve this equation (in s or z as appropriate), the real roots give you the breakaway/reentry points. Complex roots correspond to a lack of breakaway/reentry.
The breakaway point equation can be difficult to solve, so many times the actual location
is approximated.
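As a rough numerical sketch (not part of the original procedure), the breakaway candidates can be found directly from the polynomials a(s) and b(s) defined above, since dK/dσ = 0 is equivalent to b′(σ)a(σ) − b(σ)a′(σ) = 0. The example system below is hypothetical, chosen only so the snippet runs:
% Candidate breakaway/reentry points for G(s)H(s) = a(s)/b(s)
a = 1;                                   % a(s): numerator (no finite zeros here)
b = conv([1 0], conv([1 1], [1 2]));     % b(s) = s(s+1)(s+2): denominator
da = polyder(a);  db = polyder(b);
p1 = conv(db, a);                        % b'(s)*a(s)
p2 = conv(b, da);                        % b(s)*a'(s)
n  = max(length(p1), length(p2));        % pad to equal length, then subtract
p1 = [zeros(1, n-length(p1)) p1];
p2 = [zeros(1, n-length(p2)) p2];
cand = roots(p1 - p2);                   % solutions of dK/dsigma = 0
cand = cand(abs(imag(cand)) < 1e-9)      % keep only the real candidates
Only the real candidates that actually lie on a real-axis segment of the locus are true breakaway or reentry points.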
The root locus procedure should produce a graph of where the poles of the system are for
all values of gain K. When any or all of the roots of D are in the unstable region, the
system is unstable. When any of the roots are in the marginally stable region, the system
is marginally stable (oscillatory). When all of the roots of D are in the stable region, then
the system is stable.
It is important to note that a system that is stable for gain K1 may become unstable for
a different gain K2 . Some systems may have poles that cross over from stable to unstable
multiple times, giving multiple gain values for which the system is unstable.
Here is a quick refresher: in the S-domain, the left half-plane is the stable region, the imaginary axis is the marginally stable region, and the right half-plane is the unstable region.
29.7 Examples
If we look at the characteristic equation, we can quickly solve for the single pole of the
system:
D(s) = 1 + 2s = 0
s = −1/2
We plot that point on our root-locus graph, and everything on the real axis to the left of
that single point is on the root locus (from the rules, above). Therefore, the root locus of
our system looks like this:
Figure 112
From this image, we can see that for all values of gain this system is stable.
We are given a system with three real poles, shown by the transfer function:
T(s) = 1 / [(s + 1)(s + 2)(s + 3)]
Figure 113
We can see that for low values of gain the system is stable, but for higher values of gain,
the system becomes unstable.
Find the root-locus graph for the following system transfer function:
T(s) = K (s² + 4.5s + 5.625) / [s(s + 1)(s + 2)]
If we look at the denominator, we have poles at the origin, -1, and -2. Following Rule 4,
we know that the real-axis between the first two poles, and the real axis after the third
pole are all on the root-locus. We also know that there is going to be a breakaway point
between the first two poles, so that they can approach the complex conjugate zeros. If we
use the quadratic equation on the numerator, we can find that the zeros are located at:
s = (−2.25 + j0.75), (−2.25 − j0.75)
If we draw our graph, we get the following:
Figure 114
We can see from this graph that the system is stable for all values of K.
To generate a root-locus plot in MATLAB, suppose we are given a system with the numerator and denominator polynomials:
N(s) = s² + 7s + 12
D(s) = s² + 3s + 2
Now, we can generate the coefficient vectors from the numerator and denominator:
num = [0 1 7 12];
den = [0 1 3 2];
rlocus(num, den);
Figure 115
30 Nyquist Criterion
The Nyquist stability criterion is a test for system stability, just like the Routh-Hurwitz1 test or the Root-Locus2 methodology. However, the Nyquist criterion can also give us additional information about a system. Routh-Hurwitz and root-locus can tell us where the poles of the system are for particular values of gain. By altering the gain of the system, we can determine if any of the poles move into the RHP, and therefore become unstable. The Nyquist criterion, however, can tell us things about the frequency characteristics of the system. For instance, some systems with constant gain might be stable for low-frequency inputs, but become unstable for high-frequency inputs.
Also, the Nyquist Criteria can tell us things about the phase of the input signals, the
time-shift of the system, and other important information.
30.2 Contours
When we have our contour, Γ, we transform it into ΓF (s) by plugging every point of
the contour into the function F(s), and taking the resultant value to be a point on
the transformed contour.
Let's say, for instance, that Γ is a unit square contour in the complex s plane. The
vertices of the square are located at points I,J,K,L, as follows:
I = 1+j
J = 1−j
K = −1 − j
L = −1 + j
We must also specify the direction of our contour, and we will say (arbitrarily)
that it is a clockwise contour (travels from I to J to K to L). We will also define
our transform function, F(s), to be the following:
F (s) = 2s + 1
We can see that F(s) has a single zero, at s = −0.5, and no poles. Plotting this root on the same graph as our contour, we see clearly that it lies within the contour. Since s is a complex variable, defined with real and imaginary parts as:
s = σ + jω
We know that F(s) must also be complex. We will say, for reasons of simplicity,
that the axes in the F(s) plane are u and v, and are related as such:
F (s) = u + vj = 2(σ + jω) + 1
From this relationship, we can define u and v in terms of σ and ω:
u = 2σ + 1
v = 2ω
Now, to transform Γ, we will plug every point of the contour into F(s), and the
resultant values will be the points of ΓF (s) . We will solve for complex values u
and v, and we will start with the vertices, because they are the simplest examples:
u + vj = F (I) = 3 + 2j
3 https://en.wikipedia.org/wiki/Argument%20principle
u + vj = F (J) = 3 − 2j
u + vj = F (K) = −1 − 2j
u + vj = F (L) = −1 + 2j
We can take the lines in between the vertices as a function of s, and plug the
entire function into the transform. Luckily, because we are using straight lines,
we can simplify very much:
• Line from I to J: σ = 1, u = 3, v = 2ω
• Line from J to K: ω = −1, u = 2σ + 1, v = −2
• Line from K to L: σ = −1, u = −1, v = 2ω
• Line from L to I: ω = 1, u = 2σ + 1, v = 2
And when we graph these functions, from vertex to vertex, we see that the
resultant contour in the F(s) plane is a square, but not centered at the origin,
and larger in size. Notice how the contour encircles the origin of the F(s) plane
one time. This will be important later on.
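As a quick numerical check of this mapping (not part of the original text), the sketch below samples the four sides of the unit square Γ, pushes every sample through F(s) = 2s + 1, and plots the transformed contour ΓF(s); all variable names are arbitrary:
% Unit square contour I -> J -> K -> L -> I, traversed clockwise
I = 1+1j; J = 1-1j; K = -1-1j; L = -1+1j;
t = linspace(0, 1, 100).';                 % parameter along each side
Gamma = [I + t*(J-I); J + t*(K-J); K + t*(L-K); L + t*(I-L)];
F = @(s) 2*s + 1;                          % the transform function F(s)
GammaF = F(Gamma);                         % the transformed contour
plot(real(GammaF), imag(GammaF)); grid on;
xlabel('u'); ylabel('v');                  % a square spanning u = -1..3, v = -2..2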
This is a little more difficult now, because we need to simplify this whole expression, and separate it out into real and imaginary parts. There are two methods of doing this, neither of which is short or easy enough to demonstrate here in its entirety:
a) We convert the numerator and denominator polynomials into a polar representation in terms of r and θ, then perform the division, and then convert back into rectangular format.
b) We plug each segment of our contour into this equation, and simplify numerically.
Analog Systems
The Nyquist contour for analog systems is an infinite semi-circle that encircles the entire
right-half of the s plane. The semicircle travels up the imaginary axis from negative infinity
to positive infinity. From positive infinity, the contour breaks away from the imaginary
axis, in the clock-wise direction, and forms a giant semicircle.
Digital Systems
The Nyquist contour in digital systems is a counter-clockwise encirclement of the unit
circle.
w:Nyquist plot5
In other words, if P is zero then N must equal zero. Otherwise, N must equal P. Essentially,
we are saying that Z must always equal zero, because Z is the number of zeros of the
characteristic equation (and therefore the number of poles of the closed-loop transfer
function) that are in the right-half of the s plane.
Keep in mind that we don't necessarily know the locations of all the zeros of the characteristic equation. So if we find, using the Nyquist criterion, that the number of encirclements N is not equal to P, then we know that there must be a zero in the right-half plane, and that therefore the system is unstable.
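In MATLAB or Octave (assuming the Control System Toolbox or the Octave control package is available), the Nyquist plot of an open-loop transfer function can be drawn directly with the nyquist command; the transfer function below is a made-up example, and counting the encirclements of the −1 point is still done by inspecting the plot:
% Hypothetical open-loop transfer function KG(s)H(s) = 5 / (s^2 + 4s + 5)
num = 5;
den = [1 4 5];
sys = tf(num, den);          % build the transfer function object
nyquist(sys);                % draw the Nyquist diagram
% Compare the number of encirclements of -1 + 0j with the number of
% open-loop poles in the RHP to decide closed-loop stability.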
4 https://en.wikipedia.org/wiki/Nyquist%20stability%20criterion
5 https://en.wikipedia.org/wiki/Nyquist%20plot
Nyquist ↔ Bode
31 State-Space Stability
Unstable
A system is said to be unstable if the system response approaches infinity as time approaches infinity. If our system is G(t), then we can say a system is unstable if:
limt→∞ ∥G(t)∥ = ∞
Also, a key concept when we are talking about stability of systems is the concept of an
equilibrium point:
Equilibrium Point
Given a system f such that:
x′ (t) = f (x(t))
A particular state xe is called an equilibrium point if
f (xe ) = 0 for all time t in the interval [t0 , ∞), where t0 is the starting time of the
system.
The definitions below typically require that the equilibrium point be zero. If we have an equilibrium point xe = a, then we can use the following change of variables to move the equilibrium point to zero:
x̄ = x − a
We will also see below that a system's stability is defined in terms of an equilibrium point.
Related to the concept of an equilibrium point is the notion of a zero state:
Zero State
The zero state of a system is the state in which all the state variables are zero, x = 0. The stability definitions below are stated with respect to the equilibrium at the zero state.
The equilibrium x = 0 of the system is stable if and only if the solutions of the zero-input
state equation are bounded. Equivalently, x = 0 is a stable equilibrium if and only if for
every initial time t0 , there exists an associated finite constant k(t0 ) such that:
sup_{t≥t0} ∥ϕ(t, t0)∥ = k(t0) < ∞
Where sup is the supremum, or "maximum" value of the equation. The maximum value of this equation must never exceed the arbitrary finite constant k (and therefore it may not be infinite at any point).
Uniform Stability
The system is defined to be uniformly stable if it is stable for all initial values of t0 .
Uniform stability is a more general, and more powerful form of stability than was previously
provided.
Asymptotic Stability
A system is defined to be asymptotically stable if:
limt→∞ ∥ϕ(t, t0)∥ = 0
A time-invariant system is asymptotically stable if all the eigenvalues of the system matrix
A have negative real parts. If a system is asymptotically stable, it is also BIBO stable.
However the inverse is not true: A system that is BIBO stable might not be asymptotically
stable.
Exponential Stability
A system is defined to be exponentially stable if the system response decays expo-
nentially towards zero as time approaches infinity.
For linear systems, uniform asymptotic stability is the same as exponential stability.
This is not the case with non-linear systems.
Here we will discuss some rules concerning systems that are marginally stable. Because we
are discussing eigenvalues and eigenvectors, these theorems only apply to time-invariant
systems.
1. A time-invariant system is marginally stable if and only if all the eigenvalues of the
system matrix A are zero or have negative real parts, and those with zero real parts
are simple roots of the minimal polynomial of A.
2. The equilibrium x = 0 of the state equation is uniformly stable if all eigenvalues of
A have non-positive real parts, and there is a complete set of distinct eigenvectors
associated with the eigenvalues with zero real parts.
3. The equilibrium x = 0 of the state equation is exponentially stable if and only if all
eigenvalues of the system matrix A have negative real parts.
Recall that the transfer function of a state-space system has the denominator det(sI − A). Let's look at this denominator (which we will now call D(s)) more closely. The poles of the system are the values of s that satisfy:
D(s) = |sI − A| = 0
And if we substitute λ for s, we see that this is actually the characteristic equation of
matrix A! This means that the values for s that satisfy the equation (the poles of our
transfer function) are precisely the eigenvalues of matrix A. In the S domain, it is required
that all the poles of the system be located in the left-half plane, and therefore all the
eigenvalues of A must have negative real parts.
We can define the Impulse response matrix, G(t, τ ) in order to define further tests for
stability:
[Impulse Response Matrix]
G(t, τ) = C(t)ϕ(t, τ)B(τ)   if t ≥ τ
G(t, τ) = 0                 if t < τ
The system is uniformly stable if and only if there exists a finite positive constant L such
that for all time t and all initial conditions t0 with t ≥ t0 the following integral is satisfied:
∫_{t0}^{t} ∥G(t, τ)∥ dτ ≤ L
In other words, the above integral must have a finite value, or the system is not uniformly
stable.
In the time-invariant case, the impulse response matrix reduces to:
G(t) = Ce^{At}B   if t ≥ 0
G(t) = 0          if t < 0
In a time-invariant system, we can use the impulse response matrix to determine if the
system is uniformly BIBO stable by taking a similar integral:
∫_{0}^{∞} ∥G(t)∥ dt ≤ L
Where L is a finite constant.
These terms are important, and will be used in further discussions on this topic.
• f(x) is positive definite if f(x) > 0 for all x ≠ 0, and f(x) = 0 only if x = 0.
• f(x) is positive semi-definite if f(x) ≥ 0 for all x.
• f(x) is negative definite if f(x) < 0 for all x ≠ 0, and f(x) = 0 only if x = 0.
• f(x) is negative semi-definite if f(x) ≤ 0 for all x.
A Hermitian matrix X is positive definite if all its principal minors are positive. Equivalently, a Hermitian matrix X is positive definite if all its eigenvalues are positive. These two tests may be used interchangeably.
Positive definiteness is a very important concept. So much so that the Lyapunov stability
test depends on it. The other categorizations are not as important, but are included here
for completeness.
Lyapunov Stability
Notice that for the Lyapunov Equation to be satisfied, the matrices must be compatible
sizes. In fact, matrices A, M, and N must all be square matrices of equal size. Alternatively,
we can write:
MA + A′M + N = 0
If a positive definite matrix M can be calculated in this manner (for a given positive definite N), the system is asymptotically stable.
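As a sketch (assuming the Control System Toolbox, or the Octave control package, is available), the Lyapunov equation can be solved numerically with the lyap command. Note that lyap(A, Q) solves AX + XA′ + Q = 0, so the transpose of A is passed in below; the example matrices are hypothetical:
A = [0 1; -2 -3];        % made-up system matrix
N = eye(2);              % any positive definite N
M = lyap(A', N);         % solves A'M + MA + N = 0
stable = all(eig(M) > 0) % asymptotically stable if M is positive definite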
32 Controllability and Observability
In the world of control engineering, there are a slew of systems available that need to
be controlled. The task of a control engineer is to design controller and compensator
units to interact with these pre-existing systems. However, some systems simply cannot
be controlled (or, more often, cannot be controlled in specific ways). The concept of
controllability refers to the ability of a controller to arbitrarily alter the functionality of
the system plant.
The state-variable of a system, x, represents the internal workings of the system that can
be separate from the regular input-output relationship of the system. This also needs to
be measured, or observed. The term observability describes whether the internal state
variables of the system can be externally measured.
32.2 Controllability
Complete state controllability (or simply controllability if no other context is given) de-
scribes the ability of an external input to move the internal state of a system from any
initial state to any other final state in a finite time interval.
We will start off with the definitions of the term controllability, and the related terms
reachability and stabilizability.
Controllability
A system with internal state vector x is called controllable if and only if the system
states can be changed by changing the system input.
Reachability
A particular state x1 is called reachable if there exists an input that transfers the state
of the system from the initial state x0 to x1 in some finite time interval [t0 , t).
Stabilizability
A system is Stabilizable if all states that cannot be reached decay to zero asymptoti-
cally.
A state x1 is called reachable at time t1 if for some finite initial time t0 there exists
an input u(t) that transfers the state x(t) from the origin at t0 to x1 .
A system is reachable at time t1 if every state x1 in the state-space is reachable at
time t1 .
A state x0 is controllable at time t0 if for some finite time t1 there exists an input
u(t) that transfers the state x(t) from x0 to the origin at time t1 .
A system is called controllable at time t0 if every state x0 in the state-space is con-
trollable.
For LTI (linear time-invariant) systems, a system is reachable if and only if its controlla-
bility matrix, ζ, has a full row rank of p, where p is the dimension of the matrix A, and
p × q is the dimension of matrix B.
[Controllability Matrix]
ζ = [B  AB  A²B  ⋯  A^{p−1}B] ∈ R^{p×pq}
A system is controllable or ”Controllable to the origin” when any state x1 can be driven to
the zero state x = 0 in a finite number of steps.
A system is controllable when the rank of the system matrix A is p, and the rank of the
controllability matrix is equal to:
Rank(ζ) = Rank(A^{−1}ζ) = p
If the second equation is not satisfied, the system is not controllable.
MATLAB allows one to easily create the controllability matrix with the ctrb command.
To create the controllability matrix ζ simply type
ζ = ctrb(A, B)
where A and B are mentioned above. Then in order to determine if the system is control-
lable or not one can use the rank command to determine if it has full rank.
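Putting the two commands together, a minimal controllability check might look like the sketch below; the A and B matrices are made up purely so the example runs:
A = [0 1; -2 -3];            % hypothetical system matrix (p = 2)
B = [0; 1];
zeta = ctrb(A, B);           % controllability matrix [B AB]
p = size(A, 1);
if rank(zeta) == p
    disp('System is controllable (full rank).');
else
    disp('System is NOT controllable.');
end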
If Rank(A) < p, then controllability does not imply reachability.
• Reachability always implies controllability.
• Controllability only implies reachability when the state transition matrix is nonsingular.
32.2.3 Gramians
Gramians are complicated mathematical functions that can be used to determine
specific things about a system. For instance, we can use gramians to determine
whether a system is controllable or reachable. Gramians, because they are more com-
plicated than other methods, are typically only used when other methods of analyzing
a system fail (or are too difficult).
All the gramians presented on this page are matrices with dimension p × p (the same size as the system matrix A).
All the gramians presented here will be described using the general case of Linear time-
variant systems. To change these into LTI (time-invariant equations), the following
substitutions can be used:
ϕ(t, τ) → e^{A(t−τ)}
ϕ′(t, τ) → e^{A′(t−τ)}
Where we are using the notation X' to denote the transpose of a matrix X (as opposed to
the traditional notation XT ).
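For LTI systems there is also a shortcut: the gram command in the Control System Toolbox (and the Octave control package) computes the controllability and observability gramians directly, provided the A matrix is stable. The matrices below are made up purely for illustration:
A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];  D = 0;
sys = ss(A, B, C, D);        % build a state-space model
Wc = gram(sys, 'c');         % controllability gramian (p x p)
Wo = gram(sys, 'o');         % observability gramian (p x p)
% positive definite gramians indicate a controllable / observable system
controllable = all(eig(Wc) > 0);
observable   = all(eig(Wo) > 0);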
32.3 Observability
The state-variables of a system might not be able to be measured for any of the following
reasons:
1. The location of the particular state variable might not be physically accessible (a
capacitor or a spring, for instance).
2. There are no appropriate instruments to measure the state variable, or the state-
variable might be measured in units for which there does not exist any measurement
device.
3. The state-variable is a derived ”dummy” variable that has no physical meaning.
If things cannot be directly observed, for any of the reasons above, it can be necessary
to calculate or estimate the values of the internal state variables, using only the
input/output relation of the system, and the output history of the system from the
starting time. In other words, we must ask whether or not it is possible to determine
what the inside of the system (the internal system states) is like, by only observing the
outside performance of the system (input and output)? We can provide the following
formal definition of mathematical observability:
Observability
A system with an initial state, x(t0 ) is observable if and only if the value of
the initial state can be determined from the system output y(t) that has been
observed through the time interval t0 < t < tf . If the initial state cannot be so
determined, the system is unobservable.
Complete Observability
State-Observability
A system is completely state-observable at time t0 or the pair (A, C) is ob-
servable at t0 if the only state that is unobservable at t0 is the zero state x =
0.
32.3.1 Constructability
A state x is unconstructable at a time t1 if for every finite time t < t1 the zero input
response of the system is zero for all time t.
A system is completely state constructable at time t1 if the only state x that is
unconstructable at t0 is x = 0.
If a system is observable at an initial time t0 , then it is constructable at some later time t1 > t0 .
Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q
x′ (t) = Ax(t)
y(t) = Cx(t)
Therefore, we can show that the observability of the system is dependent only on the
coefficient matrices A and C. We can show precisely how to determine whether a system
is observable, using only these two matrices. If we have the observability matrix Q:
[Observability Matrix]
Q = [C
     CA
     CA²
     ⋮
     CA^{p−1}]
we can show that the system is observable if and only if the Q matrix has a rank of p.
Notice that the Q matrix has the dimensions pr × p.
MATLAB allows one to easily create the observability matrix with the obsv command.
To create the observability matrix Q simply type
Q=obsv(A,C)
where A and C are mentioned above. Then in order to determine if the system is observable
or not one can use the rank command to determine if it has full rank.
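As with controllability, the whole test fits in a few lines; the A and C matrices below are hypothetical:
A = [0 1; -2 -3];            % hypothetical system matrix (p = 2)
C = [1 0];
Q = obsv(A, C);              % observability matrix [C; CA]
p = size(A, 1);
if rank(Q) == p
    disp('System is observable (full rank).');
else
    disp('System is NOT observable.');
end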
Duality Principle
Notice that the constructability and observability gramians are very similar, and typically
they can both be calculated at the same time, only substituting in different values into the
state-transition matrix.
The concepts of controllability and observability are very similar. In fact, there is a concrete relationship between the two. We can say that a system (A, B) is controllable if and only if the dual system, with A′ in place of A and B′ in place of C, is observable. This fact can be proven by plugging A′ in for A, and B′ in for C, into the observability Gramian. The resulting equation will exactly mirror the formula for the controllability Gramian, implying that the two results are the same.
33 System Specifications
There are a number of different specifications that might need to be met by a new system
design. In this chapter we will talk about some of the specifications that systems use, and
some of the ways that engineers analyze and quantify systems.
33.3 Sensitivity
The sensitivity of a system is a parameter that is specified in terms of a given output and
a given input. The sensitivity measures how much change is caused in the output by small
changes to the reference input. Sensitive systems have very large changes in output in
response to small changes in the input. The sensitivity of system H to input X is denoted
as:
S_H^X(s)
All physically-realized systems have to deal with a certain amount of noise and disturbance.
The ability of a system to reject the noise is known as the disturbance rejection of the
system.
The control effort is the amount of energy or power necessary for the controller to perform
its duty.
34 Controllers and Compensators
34.1 Controllers
There are a number of different standard types of control systems that have been studied
extensively. These controllers, specifically the P, PD, PI, and PID controllers are very
common in the production of physical systems, but as we will see they each carry several
drawbacks.
Proportional controllers are simply gain values. These are essentially multiplicative coeffi-
cients, usually denoted with a K. A P controller can only force the system poles to a spot
on the system's root locus. A P controller cannot be used for arbitrary pole placement.
We refer to this kind of controller by a number of different names: proportional controller,
gain, and zeroth-order controller.
In the Laplace domain, we can show the derivative of a signal using the following notation:
D(s) = L {f ′ (t)} = sF (s) − f (0)
Since most systems that we are considering have zero initial condition, this simplifies to:
D(s) = L {f ′ (t)} = sF (s)
The derivative controllers are implemented to account for future values, by taking the
derivative, and controlling based on where the signal is going to be in the future. Deriva-
tive controllers should be used with care, because even a small amount of high-frequency
noise can cause very large derivatives, which appear like amplified noise. Also, derivative
controllers are difficult to implement perfectly in hardware or software, so frequently solu-
tions involving only integral controllers or proportional controllers are preferred over using
derivative controllers.
Notice that derivative controllers are not proper systems, in that the order of the numerator
of the system is greater than the order of the denominator of the system. This quality
of being a non-proper system also makes certain mathematical analysis of these systems
difficult.
We won't derive this equation here, but suffice it to say that the following equation in the
Z-domain performs the same function as the Laplace-domain derivative:
D(z) = (z − 1)/(Tz)
Integral Controllers
Integral controllers of this type add up the area under the curve for past time. In this
manner, a PI controller (and eventually a PID) can take account of the past performance
of the controller, and correct based on past errors.
The integral controller can be implemented in the Z domain using the following equation:
D(z) = (z + 1)/(z − 1)
PID controllers are combinations of the proportional, derivative, and integral controllers.
Because of this, PID controllers have large amounts of flexibility. We will see below that
there are definite limits on PID control.
The transfer function for a standard PID controller is an addition of the Proportional, the
Integral, and the Differential controller transfer functions (hence the name, PID). Also,
we give each term a gain constant, to control the weight that each factor has on the final
output:
[PID]
D(s) = Kp + Ki/s + Kd·s
Notice that we can write the transfer function of a PID controller in a slightly different
way:
D(s) = (A0 + A1s)/(B0 + B1s)
This form of the equation will be especially useful to us when we look at polynomial design.
The process of selecting the various coefficient values to make a PID controller perform
correctly is called PID Tuning. There are a number of different methods for determining
these values:1
1) Direct Synthesis (DS) method
2) Internal Model Control (IMC) method
1 Seborg, Dale E.; Edgar, Thomas F.; Mellichamp, Duncan A. (2003). Process Dynamics and Control, Second Edition. John Wiley & Sons, Inc. ISBN 0471000779.
In the Z domain, the PID controller has the following transfer function:
[Digital PID]
D(z) = Kp + Ki(T/2)[(z + 1)/(z − 1)] + Kd[(z − 1)/(Tz)]
And we can convert this into a canonical equation by manipulating the above equation to
obtain:
D(z) = (a0 + a1z^{−1} + a2z^{−2}) / (1 + b1z^{−1} + b2z^{−2})
Where:
a0 = Kp + KiT/2 + Kd/T
a1 = −Kp + KiT/2 − 2Kd/T
a2 = Kd/T
b1 = −1
b2 = 0
Once we have the Z-domain transfer function of the PID controller, we can convert it into
the digital time domain:
y[n] = x[n]a0 + x[n − 1]a1 + x[n − 2]a2 − y[n − 1]b1 − y[n − 2]b2
And finally, from this difference equation, we can create a digital filter structure to imple-
ment the PID.
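A minimal sketch of that difference equation as a digital PID routine is shown below; the gains, sample period, and the made-up error signal are assumptions chosen only for illustration:
Kp = 2; Ki = 1; Kd = 0.1; T = 0.01;   % hypothetical gains and sample period
a0 = Kp + Ki*T/2 + Kd/T;              % coefficients from the canonical form above
a1 = -Kp + Ki*T/2 - 2*Kd/T;
a2 = Kd/T;
b1 = -1;  b2 = 0;
x = [zeros(1,50) ones(1,150)];        % made-up error signal (a step)
y = zeros(size(x));                   % controller output
for n = 3:length(x)
    y(n) = a0*x(n) + a1*x(n-1) + a2*x(n-2) - b1*y(n-1) - b2*y(n-2);
end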
For more information about digital filter structures, see Digital Signal Processing2
Despite the low-brow sounding name of the Bang-Bang controller, it is a very useful tool
that is only really available using digital methods. A better name perhaps for a bang-bang
controller is an on/off controller, where a digital system makes decisions based on target
and threshold values, and decides whether to turn the controller on or off. Bang-bang controllers are a non-linear style of control.
Consider the example of a household furnace. The oil in a furnace burns at a specific
temperature—it can't burn hotter or cooler. To control the temperature in your house
then, the thermostat control unit decides when to turn the furnace on, and when to turn
the furnace off. This on/off control scheme is a bang-bang controller.
34.7 Compensation
There are a number of different compensation units that can be employed to help fix
certain system metrics that are outside of a proper operating range. Most commonly, the
phase characteristics are in need of compensation, especially if the magnitude response is
to remain constant. There are four major types of compensation:
1. Lead compensation
2. Lag compensation
3. Lead-lag compensation
4. Lag-lead compensation
A lead compensator has one zero at s = z and one pole at s = p:
[Lead Compensator]
Tlead(s) = (s − z)/(s − p)
To make the compensator work correctly, the following property must be satisfied:
|z| < |p|
And both the pole and zero location should be close to the origin, in the LHP. Because
there is only one pole and one zero, they both should be located on the real axis.
Phase lead compensators help to shift the poles of the transfer function to the left, which
is beneficial for stability purposes.
The transfer function for a lag compensator is the same as the lead-compensator, and is
as follows:
[Lag Compensator]
Tlag(s) = (s − z)/(s − p)
However, in the lag compensator, the location of the pole and zero should be swapped:
|p| < |z|
Both the pole and the zero should be close to the origin, on the real axis.
The Phase lag compensator helps to improve the steady-state error of the system. The
poles of the lag compensator should be very close together to help prevent the poles of the
system from shifting right, and therefore reducing system stability.
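As a quick sketch (the pole and zero locations below are arbitrary illustrations, not recommended design values), lead and lag compensators can be built and compared with the tf and bode commands in MATLAB or Octave:
% Lead compensator: |z| < |p|, both in the LHP
z_lead = -1;   p_lead = -10;
C_lead = tf([1 -z_lead], [1 -p_lead]);   % (s - z)/(s - p)
% Lag compensator: |p| < |z|, both close to the origin
z_lag = -0.1;  p_lag = -0.01;
C_lag = tf([1 -z_lag], [1 -p_lag]);
bode(C_lead, C_lag);   % lead adds phase; lag boosts low-frequency gain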
w:Lag-lead compensator3
The transfer function of a Lag-lead compensator is simply a multiplication of the lead
and lag compensator transfer functions, and is given as:
[Lag-lead Compensator]
TLag-lead(s) = [(s − z1)(s − z2)] / [(s − p1)(s − p2)]
3 https://en.wikipedia.org/wiki/Lag-lead%20compensator
4 http://wikis.controltheorypro.com/index.php?title=Standard_Controller_Forms
5 http://wikis.controltheorypro.com/index.php?title=PID_Control
6 http://wikis.controltheorypro.com/index.php?title=PI_Control
35 Nonlinear Systems
We can prove that this is the general solution to the above equation because when we differentiate both sides we get the original equation back.
The general solution to a nonlinear system can be found through a method of infinite
iteration. We will define xn as being an iterative family of indexed variables. We can
define them recursively as such:
xn(t) = x0 + ∫_{t0}^{t} f(τ, xn−1(τ)) dτ
x1(t) = x0
We can show that the following relationship is true:
x(t) = limn→∞ xn (t)
The xn series of equations will converge on the solution to the equation as n approaches
infinity.
1 https://en.wikipedia.org/wiki/Non-linear%20control
1. Intentional non-linearity: The non-linear elements that are added into a system.
Eg: Relay
2. Incidental non-linearity: The non-linear behavior that is already present in the
system. Eg: Saturation
35.2 Linearization
Nonlinear systems are difficult to analyze, and for that reason one of the best methods
for analyzing those systems is to find a linear approximation to the system. Frequently,
such approximations are only good for certain operating ranges, and are not valid
beyond certain bounds. The process of finding a suitable linear approximation to a
nonlinear system is known as linearization.
Figure 121
This image shows a linear approximation (dashed line) to a non-linear system response
(solid line). This linear approximation, like most, is accurate within a certain range,
but becomes more inaccurate outside that range. Notice how the curve and the linear
approximation diverge towards the right of the graph.
36 Common Nonlinearities
There are some nonlinearities that happen so frequently in physical systems that they
are called ”Common nonlinearities”. These common nonlinearities include Hysteresis,
Backlash, and Dead-zone.
36.1 Hysteresis
Continuing with the example of a household thermostat, let's say that your thermostat
is set at 70 degrees (Fahrenheit). The furnace turns on, and the house heats up to
70 degrees, and then the thermostat dutifully turns the furnace off again. However,
there is still a large amount of residual heat left in the ducts, and the hot air from the
vents on the ground may not all have risen up to the level of the thermostat. This
means that after the furnace turns off, the house may continue to get hotter, maybe
even to uncomfortable levels.
So the furnace turns off, the house heats up to 80 degrees, and then the air conditioner
turns on. The temperature of the house cools down to 70 degrees again, and the A/C
turns back off. However, the house continues to cool down, and then it gets too cold,
and the furnace needs to turn back on.
As we can see from this example, a bang-bang controller, if poorly designed, can cause
big problems, and it can waste lots of energy. To avoid this, we implement the idea
of Hysteresis, which is a set of threshold values that allow for overflow outputs.
Implementing hysteresis, our furnace now turns off when we get to 65 degrees, and
the house slowly warms up to 75 degrees, and doesn't turn on the A/C unit. This is
a far preferable solution.
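A toy simulation of this idea is sketched below; the temperature thresholds, heating and cooling rates, and time step are all invented purely to show the on/off logic with hysteresis:
T_low = 65; T_high = 75;       % furnace turns on below 65, off above 75
temp = 60; furnace = true;     % made-up initial conditions
history = zeros(1, 500);
for k = 1:500
    if furnace
        temp = temp + 0.1;                  % furnace heats the house
        if temp >= T_high, furnace = false; end
    else
        temp = temp - 0.05;                 % house slowly cools
        if temp <= T_low, furnace = true; end
    end
    history(k) = temp;
end
plot(history); xlabel('Time step'); ylabel('Temperature (F)');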
36.2 Backlash
E.g.: Mechanical gear.
36.3 Dead-Zone
A dead-zone is a kind of non-linearity in which the system does not respond to the given input until the input reaches a particular level. It can also refer to a condition in which the output becomes zero when the input crosses a certain limiting value.
37 Noise Driven Systems
The topics in this chapter will rely heavily on topics from a calculus-based
background in probability theory. There currently are no wikibooks available
that contain this information. The reader should be familiar with the following
concepts: Gaussian Random Variables, Mean, Expectation Operation.
Example: Consider a moving automobile. The control signals for the automobile
consist of acceleration (gas pedal) and deceleration (brake pedal) inputs acting
on the wheels of the vehicle, and working to create forward motion. The noise
inputs to the system can consist of wind pushing against the vertical faces of the
automobile, rough pavement (or even dirt) under the tires, bugs and debris hitting
the front windshield, etc. As we can see, the control inputs act on the wheels of the
vehicle, while the noise inputs can act on multiple sides of the vehicle, in different
ways.
37.2.1 Expectation
The expectation operator, E, is used to find the expected, or mean value of a given
random variable. The expectation operator is defined as:
E[x] = ∫_{−∞}^{∞} x f_x(x) dx
If we have two variables that are independent of one another, the expectation of their product is the product of their expectations; in particular, if either variable is zero-mean, the expectation of their product is zero.
37.2.2 Covariance
The covariance matrix, Q, is the expectation of a random vector times its transpose:
E[x(t)x′ (t)] = Q(t)
If we take the value of the x transpose at a different point in time, we can calculate out
the covariance as:
E[x(t)x′ (s)] = Q(t)δ(t − s)
Where δ is the impulse function.
We would like to find out how our system will respond to the new noisy input. Every
system iteration will have a different response that varies with the noise input, but the
average of all these iterations should converge to a single value.
For the system with zero control input, we have:
x′ (t) = A(t)x(t) + B(t)v(t)
For which we know our general solution is given as:
x(t) = ϕ(t, t0)x0 + ∫_{t0}^{t} ϕ(t, τ)B(τ)v(τ) dτ
If we take the expected value of this function, it should give us the expected value of
the output of the system. In other words, we would like to determine what the expected
output of our system is going to be by adding a new, noise input.
E[x(t)] = E[ϕ(t, t0)x0] + E[∫_{t0}^{t} ϕ(t, τ)B(τ)v(τ) dτ]
In the second term of this equation, neither ϕ nor B are random variables, and therefore they can come outside of the expectation operation. Since v is zero-mean, its expectation is zero. Therefore, the second term is zero. In the first term, ϕ is not a random variable, but x0 does create a dependency on the output of x(t), and we need to take the expectation of it. This means that:
E[x(t)] = ϕ(t, t0 )E[x0 ]
In other words, the expected output of the system is, on average, the value that the output
would be if there were no noise. Notice that if our noise vector v was not zero-mean, and
if it was not gaussian, this result would not hold.
We are now going to analyze the covariance of the system with a noisy input. We multiply
our system solution by its transpose, and take the expectation: (this equation is long and
might break onto multiple lines)
E[x(t)x′(t)] = E[(ϕ(t, t0)x0 + ∫_{t0}^{t} ϕ(t, τ)B(τ)v(τ)dτ)(ϕ(t, t0)x0 + ∫_{t0}^{t} ϕ(t, τ)B(τ)v(τ)dτ)′]
If we multiply this out term by term, and cancel out the expectations that have a zero-
value, we get the following result:
E[x(t)x′ (t)] = ϕ(t, t0 )E[x0 x′0 ]ϕ′ (t, t0 ) = P
We call this result P, and we can find the first derivative of P by using the chain-rule:
P′(t) = A(t)ϕ(t, t0)P0ϕ′(t, t0) + ϕ(t, t0)P0ϕ′(t, t0)A′(t)
Where
P0 = E[x0 x′0 ]
We can reduce this to:
P ′ (t) = A(t)P (t) + P (t)A′ (t) + B(t)Q(t)B ′ (t)
In other words, we can analyze the system without needing to calculate the state-transition
matrix. This is a good thing, because it can often be very difficult to calculate the state-
transition matrix.
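A rough numerical sketch of this idea is shown below: the matrix differential equation P′(t) = AP + PA′ + BQB′ is stepped forward with simple Euler integration, for made-up constant A, B, Q, and P0 matrices:
A = [0 1; -2 -3];                % hypothetical system matrix
B = [0; 1];
Q = 0.5;                         % intensity of the scalar noise v(t)
P = eye(2);                      % P0 = E[x0 x0']
dt = 0.001;
for k = 1:5000                   % crude Euler integration of P'(t)
    Pdot = A*P + P*A' + B*Q*B';
    P = P + dt*Pdot;
end
P                                % approximate covariance at t = 5 seconds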
We can run into a problem because in a Gaussian distribution, especially in systems with high (or infinite) variance, the value of v can momentarily become unbounded (approach infinity), which will cause the value of x to likewise become unbounded at certain points. This is unacceptable, and makes further analysis of this problem difficult.
Let us look again at our original equation, with zero control input:
x′(t) = A(t)x(t) + B(t)v(t)
We can define a new differential, dw(t), which is an infinitesimal function of time, as:
dw(t) = v(t)dt
This new term, dw, is a random process known as a Wiener process, which is the result of transforming a Gaussian process in this manner. Multiplying our system equation through by dt gives:
dx(t) = A(t)x(t)dt + B(t)dw(t)
Now, we can integrate both sides of this equation:
x(t) = x(t0) + ∫_{t0}^{t} A(τ)x(τ)dτ + ∫_{t0}^{t} B(τ)dw(τ)
However, this leads us to an unusual place, and one for which we are (probably) not prepared to continue further: in the third term on the right-hand side, we are attempting to integrate with respect to a function, not a variable. In this instance, the standard Riemann integrals that we are all familiar with cannot solve this equation. There are advanced techniques, known as Ito Calculus1, that can solve this equation, but these methods are currently outside the scope of this book.
1 https://en.wikipedia.org/wiki/Ito%20Calculus
38 Appendix: Physical Models
This page will serve as a refresher for various different engineering disciplines on how
physical devices are modeled. Models will be displayed in both time-domain and Laplace-
domain input/output characteristics. The only information that is going to be displayed
here will be the ones that are contributed by knowledgeable contributors.
For more information about electric circuits and circuit elements, see the following
books:
Circuit Theorya
Electronicsb
a https://en.wikibooks.org/wiki/Circuit%20Theory
b https://en.wikibooks.org/wiki/Electronics
39 Appendix: Z Transform Mappings
There are a number of different mappings that can be used to convert a system from the
complex Laplace domain into the Z-Domain. None of these mappings are perfect, and
every mapping requires a specific starting condition, and focuses on a specific aspect to
reproduce faithfully. One such mapping that has already been discussed is the bilinear
transform, which, along with prewarping, can faithfully map the various regions in the
s-plane into the corresponding regions in the z-plane. We will discuss some other potential
mappings in this chapter, and we will discuss the pros and cons of each.
The Bilinear transform converts from the Z-domain to the complex W domain. The W
domain is not the same as the Laplace domain, although there are some similarities. Here
are some of the similarities between the Laplace domain and the W domain:
1. Stable poles are in the Left-Half Plane
2. Unstable poles are in the right-half plane
3. Marginally stable poles are on the vertical, imaginary axis
With that said, the bilinear transform can be defined as follows:
[Bilinear Transform]
w = (2/T)·(z − 1)/(z + 1)
Figure 122
39.2.1 Prewarping
The W domain is not the same as the Laplace domain, but if we employ the process of
prewarping before we take the bilinear transform, we can make our results match more
closely to the desired Laplace Domain representation.
Using prewarping, we can show the effect of the bilinear transform graphically:
Figure 123
The shape of the graph before and after prewarping is the same as it is without prewarping. However, the destination domain is the S-domain, not the W-domain.
Matched Z-Transform
If we have a function in the laplace domain that has been decomposed using partial fraction
expansion, we generally have an equation in the form:
Y(s) = A/(s + α1) + B/(s + α2) + C/(s + α3) + ...
And once we are in this form, we can make a direct conversion between the s and z planes
using the following mapping:
[Matched Z Transform]
s + α = 1 − z^{−1}e^{−αT}
Pro
A good direct mapping in terms of s and a single coefficient
Con
requires the Laplace-domain function be decomposed using partial fraction expansion.
[Simpson's Rule]
s = (3/T)·(z² − 1)/(z² + 4z + 1)
Con
Essentially multiplies the order of the transfer function by a factor of 2. This makes things difficult when you are trying to physically implement the system. It has been shown that this transform produces unstable roots (outside of the unit circle).
Pro
Directly maps a function in terms of z and s, into a function in terms of only z.
Con
Requires a function that is already in terms of s, z and α.
39.6 Z-Forms
40 Appendix: Transforms
w:Laplace Transform1
When we talk about the Laplace transform, we are actually talking about the version of the Laplace transform known as the unilateral Laplace Transform. The other version, the bilateral Laplace Transform (not related to the Bilinear Transform, below) is not used in this book.
The Laplace Transform is defined as:
[Laplace Transform]
F(s) = L[f(t)] = ∫_{0}^{∞} f(t)e^{−st} dt
1 https://en.wikipedia.org/wiki/Laplace%20Transform
Property: Definition
Linearity: L{af(t) + bg(t)} = aF(s) + bG(s)
Differentiation: L{f′} = sL{f} − f(0−)
L{f′′} = s²L{f} − sf(0−) − f′(0−)
L{f^{(n)}} = s^{n}L{f} − s^{n−1}f(0−) − ⋯ − f^{(n−1)}(0−)
Where:
f (t) = L−1 {F (s)}
g(t) = L−1 {G(s)}
s = σ + jω
Fourier Transform
w:Fourier Transform2
The Fourier Transform is used to break a time-domain signal into its frequency domain
components. The Fourier Transform is very closely related to the Laplace Transform, and
is only used in place of the Laplace transform when the system is being analyzed in a
frequency context.
The Fourier Transform is defined as:
[Fourier Transform]
F(jω) = F[f(t)] = ∫_{0}^{∞} f(t)e^{−jωt} dt
And the Inverse Fourier Transform is defined as:
[Inverse Fourier Transform]
f(t) = F^{−1}{F(jω)} = (1/2π) ∫_{−∞}^{∞} F(jω)e^{jωt} dω
2 https://en.wikipedia.org/wiki/Fourier%20Transform
Notes:
1. sinc(x) = sin(πx)/(πx)
2. rect(t/τ) is the rectangular pulse function of width τ
3. u(t) is the Heaviside step function
4. δ(t) is the Dirac delta function
40.3 Z-Transform
w:Z-transform3
The Z-transform is used primarily to convert discrete data sets into a continuous represen-
tation. The Z-transform is notationally very similar to the star transform, except that the
Z transform does not take explicit account for the sampling period. The Z transform has
a number of uses in the field of digital signal processing, and the study of discrete signals
in general, and is useful because Z-transform results are extensively tabulated, whereas
star-transform results are not.
The Z Transform is defined as:
[Z Transform]
X(z) = Z[x[n]] = Σ_{n=−∞}^{∞} x[n]z^{−n}
3 https://en.wikipedia.org/wiki/Z-transform
4 https://en.wikipedia.org/wiki/Advanced%20Z%20Transform
Star Transform
w:Star Transform5
The Star Transform is a discrete transform that has similarities between the Z transform
and the Laplace Transform. In fact, the Star Transform can be said to be nearly analogous
to the Z transform, except that the Star transform explicitly accounts for the sampling
time of the sampler.
The Star Transform is defined as:
[Star Transform]
F*(s) = L*[f(t)] = Σ_{k=0}^{∞} f(kT)e^{−skT}
Star transform pairs can be obtained by plugging z = esT into the Z-transform pairs, above.
The bilinear transform is used to convert an equation in the Z domain into the arbitrary
W domain, with the following properties:
1. roots inside the unit circle in the Z-domain will be mapped to roots on the left-half
of the W plane.
2. roots outside the unit circle in the Z-domain will be mapped to roots on the right-half
of the W plane
3. roots on the unit circle in the Z-domain will be mapped onto the vertical axis in the
W domain.
The bilinear transform can therefore be used to convert a Z-domain equation into a
form that can be analyzed using the Routh-Hurwitz criteria. However, it is important
to note that the W-domain is not the same as the complex Laplace S-domain. To
make the output of the bilinear transform equal to the S-domain, the signal must be
prewarped, to account for the non-linear nature of the bilinear transform.
The Bilinear transform can also be used to convert an S-domain system into the Z
domain. Again, the input system must be prewarped prior to applying the bilinear
transform, or else the results will not be correct.
The Bilinear transform is governed by the following variable transformations:
[Bilinear Transform]
z = ((T/2) + w) / ((T/2) − w),   w = (2/T)·(z − 1)/(z + 1)
5 https://en.wikipedia.org/wiki/Star%20Transform
This relationship is called the frequency warping characteristic of the bilinear trans-
form. To counter-act the effects of frequency warping, we can pre-warp the Z-domain
equation using the inverse warping characteristic. If the equation is prewarped before it
is transformed, the resulting poles of the system will line up more faithfully with those in
the s-domain.
[Bilinear Frequency Prewarping]
ω = (2/T)·arctan(ωa·T/2)
Applying these transformations before applying the bilinear transform actually enables di-
rect conversions between the S-Domain and the Z-Domain. The act of applying one of these
frequency warping characteristics to a function before transforming is called prewarping.
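As a hedged example (assuming the relevant toolboxes are installed; the transfer function, sample rate, and match frequency below are arbitrary), both conversions are available in MATLAB: the Signal Processing Toolbox bilinear function accepts an optional match frequency that performs the prewarping, and the Control System Toolbox c2d command offers a Tustin method with a prewarp frequency:
num = [10];  den = [1 3 10];   % hypothetical H(s) = 10 / (s^2 + 3s + 10)
fs = 100;                      % sample rate in Hz
fp = 0.5;                      % frequency (Hz) to match exactly via prewarping
[numd, dend] = bilinear(num, den, fs, fp);   % bilinear transform with prewarping
sysc = tf(num, den);           % the same conversion through c2d
sysd = c2d(sysc, 1/fs, c2dOptions('Method', 'tustin', 'PrewarpFrequency', 2*pi*fp));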
• w:Laplace transform6
• w:Fourier transform7
• w:Z-transform8
• w:Star transform9
• w:Bilinear transform10
6 https://en.wikipedia.org/wiki/Laplace%20transform
7 https://en.wikipedia.org/wiki/Fourier%20transform
8 https://en.wikipedia.org/wiki/Z-transform
9 https://en.wikipedia.org/wiki/Star%20transform
10 https://en.wikipedia.org/wiki/Bilinear%20transform
41 System Representations
General Description
Time-Invariant, Non-causal: y(t) = ∫_{−∞}^{∞} h(t − r)x(r) dr
Time-Invariant, Causal: y(t) = ∫_{0}^{t} h(t − r)x(r) dr
Time-Variant, Non-Causal: y(t) = ∫_{−∞}^{∞} h(t, r)x(r) dr
Time-Variant, Causal: y(t) = ∫_{0}^{t} h(t, r)x(r) dr
State-Space Equations
Time-Invariant x′ (t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
Time-Variant x′ (t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)
These are the digital versions of the equations listed above. All the variables have the
same meanings, except that the systems are digital.
[Digital State Equations]
State-Space Equations
Time-Invariant x′ [t] = Ax[t] + Bu[t]
y[t] = Cx[t] + Du[t]
Time-Variant x′ [t] = A[t]x[t] + B[t]u[t]
y[t] = C[t]x[t] + D[t]u[t]
Transfer Function
Y (s) = H(s)X(s)
Transfer Function
Y (z) = H(z)X(z)
Transfer Matrix
Y(s) = H(s)X(s)
Transfer Matrix
Y(z) = H(z)X(z)
42 Matrix Operations
For more about this subject, see:
Linear Algebraa
and
Engineering Analysisb
a https://en.wikibooks.org/wiki/Linear%20Algebra
b https://en.wikibooks.org/wiki/Engineering%20Analysis
Addition
To add two matrices, they must have the same dimensions. Matrix addition is commutative:
A + B = B + A
Multiplication
Matrices must have the same inner dimensions (the number of columns of the first matrix
must equal the number of rows in the second matrix). For instance, if matrix A is n × m,
and matrix B is m × k, then we can multiply:
AB = C
AB ̸= BA
42.3 Determinant
42.4 Inverse
The inverse of a matrix A, which we will denote here by ”B” is any matrix that satisfies
the following equation:
AB = BA = I
Matrices that have such a companion are known as "invertible" or "non-singular" matrices. Matrices which do not have an inverse that satisfies this equation are called "singular" or "non-invertible".
An inverse can be computed in a number of different ways:
1. Append the matrix A with the Identity matrix of the same size. Use row-reductions
to make the left side of the matrix an identity. The right side of the appended matrix
will then be the inverse:
[A|I] → [I|B]
2. The inverse matrix is given by the adjoint matrix divided by the determinant. The
adjoint matrix is the transpose of the cofactor matrix.
A^{−1} = adj(A)/|A|
42.5 Eigenvalues
The eigenvalues of a matrix, denoted by the Greek letter lambda λ, are the solutions
to the characteristic equation of the matrix:
|X − λI| = 0
Eigenvalues only exist for square matrices. Non-square matrices do not have eigenvalues.
If the matrix X is a real matrix, the eigenvalues will either be all real, or else there will be
complex conjugate pairs.
42.6 Eigenvectors
The eigenvectors of a matrix are the nullspace solutions of the characteristic equation:
(X − λi I)vi = 0
There is at least one distinct eigenvector for every distinct eigenvalue. Multiples of an eigenvector are also themselves eigenvectors. However, eigenvectors that are not linearly independent are called "non-distinct" eigenvectors, and can be ignored.
42.7 Left-Eigenvectors
Left eigenvectors are the left nullspace solutions of the characteristic equation:
wi(A − λiI) = 0
These are also the rows of the inverse transition matrix.
In the case of repeated eigenvalues, there may not be a complete set of n distinct eigenvec-
tors (right or left eigenvectors) associated with those eigenvalues. Generalized eigenvectors
can be generated as follows:
(A − λI)vn+1 = vn
Because generalized eigenvectors are formed in relation to another eigenvector or generalized eigenvector, they constitute an ordered set, and should not be used outside of this order.
The transformation matrix is the matrix of all the eigenvectors, or the ordered sets of
generalized eigenvectors:
T = [v1 v2 · · · vn ]
The inverse transition matrix is the matrix of the left-eigenvectors:
T^{−1} = [w1′
          w2′
          ⋮
          wn′]
A matrix can be diagonalized by multiplying by the transition matrix:
A = T DT −1
Or:
T −1 AT = D
If the matrix has an incomplete set of eigenvectors, and therefore a set of generalized eigen-
vectors, the matrix cannot be diagonalized, but can be converted into Jordan canonical
form:
T −1 AT = J
42.10 MATLAB
The MATLAB programming environment was specially designed for matrix algebra and
manipulation. The following is a brief refresher about how to manipulate matrices in
MATLAB:
Addition
To add two matrices together, use a plus sign (”+”): C = A + B;
Multiplication
To multiply two matrices together use an asterisk (”*”): C = A * B;
If your matrices are not the correct dimensions, MATLAB will issue an error.
Transpose
To find the transpose of a matrix, use the apostrophe (” ' ”): C = A';
Determinant
To find the determinant, use the det function: d = det(A);
Inverse
To find the inverse of a matrix, use the function inv: C = inv(A);
Eigenvalues and Eigenvectors
To find the eigenvalues and eigenvectors of a matrix, use the eig command: [V, D] = eig(A);
Where D is a square matrix with the eigenvalues of A in the diagonal entries, and V is the matrix comprised of the corresponding eigenvectors (one per column). If the eigenvalues are not distinct, the eigenvectors will be repeated. MATLAB will not calculate the generalized eigenvectors.
Left Eigenvectors
To find the left eigenvectors, assuming there is a complete set of distinct right-eigenvectors, we can take the inverse of the eigenvector matrix: [V, D] = eig(A); C = inv(V); The rows
of C will be the left-eigenvectors of the matrix A.
For more information about MATLAB, see the wikibook MATLAB Programming1 .
1 https://en.wikibooks.org/wiki/MATLAB%20Programming
43 Appendix: MATLAB
B Warning
This page would highly benefit from some screenshots of various systems. Users who
have MATLAB or Octave available are highly encouraged to produce some
screenshots for the systems here.
43.1 MATLAB
This page assumes a prior knowledge of the fundamentals of MATLAB. For more
information about MATLAB, see MATLAB Programminga .
a https://en.wikibooks.org/wiki/MATLAB%20Programming
First, let's take a look at the classical approach, with the following system:
G(s) = (5s + 10)/(s² + 4s + 5)
This system can effectively be modeled as two vectors of coefficients, NUM and DEN:
NUM = [5, 10]; DEN = [1, 4, 5];
Now, we can use the MATLAB step command to produce the step response to this system:
step(NUM, DEN, t);
Where t is a time vector. If no results on the left-hand side are supplied by you, the step
function will automatically produce a graphical plot of the step response. If, however, you
use the following format:
[y, x, t] = step(NUM, DEN, t);
Then MATLAB will not produce a plot automatically, and you will have to produce one
yourself.
Here is a sample screenshot:
Now, let's look at the modern, state-space approach. If we have the matrices A, B, C and
D, we can plug these into the step function, as shown:
step(A, B, C, D);
or, we can optionally include a vector for time, t:
step(A, B, C, D, t);
Again, if we supply output arguments on the left-hand side, MATLAB will not automatically
produce a plot for us.
If we didn't get an automatic plot, and we want to produce our own, we type:
[y, x, t] = step(NUM, DEN, t);
And then we can create a graph using the plot command:
plot(t, y);
y is the output magnitude of the step response, while x is the internal state of the system
from the state-space equations:
x′ = Ax + Bu
y = Cx + Du
MATLAB contains features that can be used to automatically convert to the state-space
representation from the Laplace representation. This function, tf2ss, is used as follows:
[A, B, C, D] = tf2ss(NUM, DEN);
Where NUM and DEN are the coefficient vectors of the numerator and denominator of
the transfer function, respectively.
In a similar vein, we can convert from the state-space representation back to a transfer
function in the Laplace domain using the ss2tf function, as such:
[NUM, DEN] = ss2tf(A, B, C, D);
Or, if the system has more than one input, we select which input to use by passing its index
as a fifth argument:
[NUM, DEN] = ss2tf(A, B, C, D, iu);
The input index iu must be provided when the system has more than one input, but it does
not need to be provided if we have only one input. This form of the command produces the
transfer functions from the selected input to every output: NUM becomes a 2-D matrix, with
one row of numerator coefficients for each output, and DEN holds the common denominator
coefficients.
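A minimal sketch tying these conversions together, reusing the example system from above (the ss object form is used when calling step, which newer releases expect):

NUM = [5, 10];
DEN = [1, 4, 5];

% Transfer function -> state-space matrices.
[A, B, C, D] = tf2ss(NUM, DEN);

% Step response directly from the state-space model.
step(ss(A, B, C, D));               % no output arguments, so a plot is drawn

% State-space -> transfer function. The final argument selects the input
% (only needed for systems with more than one input; 1 is used here).
[NUM2, DEN2] = ss2tf(A, B, C, D, 1);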
Let us now consider a digital system with the following generic transfer function in the Z
domain:
H(z) = n(z)/d(z)
Where n(z) and d(z) are the numerator and denominator polynomials of the transfer
function, respectively. The filter command can be used to apply an input vector x to the
filter. The output, y, can be obtained from the following code:
y = filter(n, d, x);
The word "filter" may be a bit of a misnomer in this case, but the fact remains that this
is the method to apply an input to a digital system. Once we have the output magnitude
vector, we can plot it using our plot command:
plot(y);
To get the step response of the digital system, we must first create a step function using
the ones command:
u = ones(1, N);
Where N is the number of samples that we want to take in our digital system (not to
be confused with "n", our vector of numerator coefficients). Once we have produced our unit step
function, we can pass this function through our digital filter as such:
y = filter(n, d, u);
And we can plot y:
plot(y);
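As a sketch of this procedure, the following uses an arbitrarily chosen first-order filter H(z) = 0.5/(1 − 0.5z^{-1}); the coefficients and the sample count N are assumptions made purely for illustration:

n = 0.5;                % numerator coefficients of H(z)
d = [1, -0.5];          % denominator coefficients of H(z)

N = 50;                 % number of samples to simulate
u = ones(1, N);         % discrete unit step input

y = filter(n, d, u);    % pass the step through the digital system

stem(0:N-1, y);         % stem is a common choice for discrete-time sequences
xlabel('Sample k');
ylabel('y[k]');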
Likewise, we can analyze a digital system in the state-space representation. If we have the
following digital state relationship:
x[k + 1] = Ax[k] + Bu[k]
y[k] = Cx[k] + Du[k]
We can convert this automatically to a transfer function representation using the ss2tf
function that we used above:
[NUM, DEN] = ss2tf(A, B, C, D);
Then, we can filter it with our prepared unit-step sequence vector, u:
y = filter(NUM, DEN, u);
This will give us the step response of the digital system in the state-space representation.
MATLAB supplies a useful, automatic tool for generating the root-locus graph from a
transfer function: the rlocus command. In the transfer function domain, or the state
space domain respectively, we have the following uses of the function:
rlocus(num, den);
And:
rlocus(A, B, C, D);
These functions will automatically produce root-locus graphs of the system. However, if
we provide left-hand parameters:
[r, K] = rlocus(num, den);
Or:
[r, K] = rlocus(A, B, C, D);
The function won't produce a graph automatically, and you will need to produce one
yourself. There is also an optional additional parameter for gain, K, that can be supplied:
rlocus(num, den, K);
Or:
rlocus(A, B, C, D, K);
If K is not supplied, MATLAB will supply an automatic gain value for you.
Once we have our values [r, K], we can plot a root locus:
plot(r);
The rlocus command cannot be used with MIMO systems, so if your system is a MIMO
system, you must separate out your coefficient matrices to isolate each individual input-
output pair, and graph each individually.
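For instance, a root-locus sketch for the example system used earlier looks like this (the tf object form is assumed; the older rlocus(num, den) calling form shown above behaves the same way):

sys = tf([5, 10], [1, 4, 5]);

rlocus(sys);                % no output arguments: the graph is drawn automatically

% With output arguments, the closed-loop pole locations r are returned for
% each gain in K, and no graph is produced; plot them manually instead.
[r, K] = rlocus(sys);
plot(r.', 'x');             % complex data: real part vs. imaginary part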
Creating a root-locus diagram for a digital system is exactly the same as it is for a contin-
uous system. The only difference is the interpretation of the results, because the stability
region for digital systems is different from the stability region for continuous systems. The
same rlocus function can be used, in the same manner as is used above.
MATLAB also offers a number of tools for examining the frequency response characteristics
of a system, both using Bode plots, and using Nyquist charts. To construct a Bode plot
from a transfer function, we use the following command:
[mag, phase, omega] = bode(NUM, DEN, omega);
Or:
[mag, phase, omega] = bode(A, B, C, D, u, omega);
Where "omega" is the frequency vector at which the magnitude and phase of the response are
evaluated. If we want to convert the magnitude data into decibels, we can use the following
conversion:
magdb = 20 * log10(mag);
This conversion should be known well enough by now that it doesn't require explanation.
When talking about Bode plots in decibels, it makes the most sense (and is the most
common occurrence) to also use a logarithmic frequency scale. To create such a logarithmic
sequence in omega, we use the logspace command, as such:
omega = logspace(a, b, n);
This command produces n points, spaced logarithmically, from 10^a up to 10^b.
If we use the bode command without left-hand arguments, MATLAB will produce a graph
of the bode phase and magnitude plots automatically.
The bode command, if used with a MIMO system, will use subplots to produce all the
input-output relationship graphs on a single plot window. For a system with multiple inputs
and multiple outputs, this can become difficult to see clearly. In these cases, it is typically
better to separate out your coefficient matrices to isolate each individual input-output
pair.
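Here is a short sketch of a Bode plot built this way, again using the example system from above; the frequency range, number of points, and plot layout are arbitrary choices, and the tf object form is assumed:

omega = logspace(-2, 2, 100);       % 100 points from 10^-2 to 10^2 rad/s
sys = tf([5, 10], [1, 4, 5]);

[mag, phase] = bode(sys, omega);    % with output arguments, no plot is drawn
mag = squeeze(mag);                 % collapse the SISO singleton dimensions
phase = squeeze(phase);

magdb = 20 * log10(mag);            % convert the magnitude to decibels

subplot(2, 1, 1);
semilogx(omega, magdb);             % logarithmic frequency axis
ylabel('Magnitude (dB)');
subplot(2, 1, 2);
semilogx(omega, phase);
ylabel('Phase (degrees)');
xlabel('Frequency (rad/s)');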
In addition to the bode plots, we can create nyquist charts by using the nyquist command.
The nyquist command operates in a similar manner to the bode command (and other
commands that we have used so far):
[real, imag, omega] = nyquist(NUM, DEN, omega);
Or:
[real, imag, omega] = nyquist(A, B, C, D, u, omega);
Here, "real" and "imag" are vectors that contain the real and imaginary parts of each point
of the Nyquist diagram. If we don't supply any left-hand output arguments, the nyquist
command automatically produces a Nyquist plot for us.
Like the bode command, the nyquist command will use subplots to display the input-
output relations of MIMO systems on a single plot window. If there are multiple input-
output pairs, it can be difficult to see the individual graphs.
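A corresponding Nyquist sketch for the same example system (again assuming the tf object form):

sys = tf([5, 10], [1, 4, 5]);

nyquist(sys);                       % no output arguments: the chart is drawn

% With output arguments, the real and imaginary parts are returned instead.
[re, im, omega] = nyquist(sys);
plot(squeeze(re), squeeze(im));
xlabel('Real axis');
ylabel('Imaginary axis');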
43.11 Controllability
A controllability matrix can be constructed using the ctrb command. The controllability
gramian can be constructed using the gram command.
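A minimal sketch, assuming the state-space matrices obtained from tf2ss for the example system above (the system is stable, which the gram command requires):

[A, B, C, D] = tf2ss([5, 10], [1, 4, 5]);

Co = ctrb(A, B);            % controllability matrix [B, A*B, A^2*B, ...]
rank(Co)                    % equals the number of states if controllable

Wc = gram(ss(A, B, C, D), 'c');     % controllability gramian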
43.12 Observability
Empirical gramians can be computed for linear and also nonlinear control systems. The
empirical gramian framework emgr allows the computation of the controllability, observ-
ability and cross gramian; it is compatible with MATLAB and OCTAVE and does not
require the control systems toolbox.
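For the standard (non-empirical) observability checks, the Control System Toolbox commands obsv and gram can be used in the same way; a brief sketch with the same example system:

[A, B, C, D] = tf2ss([5, 10], [1, 4, 5]);

Ob = obsv(A, C);            % observability matrix [C; C*A; C*A^2; ...]
rank(Ob)                    % equals the number of states if observable

Wo = gram(ss(A, B, C, D), 'o');     % observability gramian (stable system required)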
• Ogata, Katsuhiko, Solving Control Engineering Problems with MATLAB, Prentice Hall,
New Jersey, 1994. ISBN 0130459070
• MATLAB Programming2
• http://octave.sourceforge.net/
• MATLAB Category on ControlTheoryPro.com3
• Empirical Gramian Framework4
2 https://en.wikibooks.org/wiki/MATLAB%20Programming
3 http://wikis.controltheorypro.com/index.php?title=Category:MATLAB
4 http://gramian.de
44 Glossary and List of Equations
The following is a listing of some of the most important terms from the book, along with
a short definition or description.
44.1 A, B, C
Acceleration Error
The amount of steady state error of the system when stimulated by a unit parabolic input.
Acceleration Error Constant
A system metric that determines the amount of acceleration error in the system.
Adaptive Control
A branch of control theory where controller systems are able to change their response
characteristics over time, as the input characteristics to the system change.
Adaptive Gain
when control gain is varied depending on system state or condition, such as a disturbance
Additivity
A system is additive if a sum of inputs results in a sum of outputs.
Analog System
A system that is continuous in time and magnitude.
ARMA
Autoregressive Moving Average, see http://en.wikipedia.org/wiki/Autoregressive_
moving_average_model
ATO
Analog Timed Output. Control loop output is correlated to a timed contact closure.
A/M
Auto-Manual. Control modes, where auto typically means output is computer-driven,
calculated while manual can be field-driven or merely using a static setpoint.
Bilinear Transform
a variant of the Z-transform, see http://en.wikibooks.org/wiki/Digital_Signal_
Processing/Bilinear_Transform
Block Diagram
A visual way to represent a system that displays individual system components as boxes,
and connections between systems as arrows.
Bode Plots
A set of two graphs, a "magnitude" and a "phase" graph, that are both plotted on log scale
paper. The magnitude graph is plotted in decibels versus frequency, and the phase graph
is plotted in degrees versus frequency. Used to analyze the frequency characteristics of the
system.
Bounded Input, Bounded Output
BIBO. If the input to the system is finite, then the output must also be finite. A condition
for stability.
Cascade
When the output of a control loop is fed to/from another loop.
Causal
A system whose output does not depend on future inputs. All physical systems must be
causal.
Classical Approach
See Classical Controls.
Classical Controls
A control methodology that uses the transform domain to analyze and manipulate the
Input-Output characteristics of a system.
Closed Loop
a controlled system using feedback or feedforward
Compensator
A Control System that augments the shortcomings of another system.
Condition Number
Conditional Stability
A system with variable gain is conditionally stable if it is BIBO stable for certain values
of gain, but not BIBO stable for other values of gain.
Continuous-Time
A system or signal that is defined at all points t.
Control Rate
the rate at which control is computed and any appropriate output sent. Lower bound is
sample rate.
Control System
44.2 D, E, F
Damping Ratio
A constant that determines the damping properties of a system.
Deadtime
time shift between the output change and the related effect (typ. at least one control
sample). One sees "Lag" used for this action sometimes.
Digital
A system that is both discrete-time, and quantized.
Direct action
target output increase is required to bring the process variable (PV) to setpoint (SP) when
PV is below SP. Thus, PV increases with output increase directly.
Discrete magnitude
See quantized.
Discrete time
A system or signal that is only defined at specific points in time.
Distributed
A system is distributed if it has both an infinite number of states, and an infinite number
of state variables. See Lumped.
Dynamic
A system is dynamic if its current output depends on past inputs or states; that is, a system
with memory. See Instantaneous, Memory.
Fourier Transform
An integral transform, similar to the Laplace Transform, that analyzes the frequency
characteristics of a system. See http://en.wikibooks.org/wiki/Waves/Fourier_
Transforms
44.3 G, H, I
Game Theory
A branch of study that is related to control engineering, and especially optimal control.
Multiple competing entities, or "players", attempt to minimize their own cost, and maximize
the cost of the opponents.
Gain
A constant multiplier in a system that is typically implemented as an amplifier or attenu-
ator. Gain can be changed, but is typically not a function of time. Adaptive control can
use time-adaptive gains that change with time.
General Description
An external description of a system that relates the system output to the system input,
the system response, and a time constant through integration.
Hendrik Wade Bode
Electrical Engineer, did work in control theory and communications. Is primarily remem-
bered in control engineering for his introduction of the bode plot.
Harry Nyquist
Electrical Engineer, did extensive work in controls and information theory. Is remembered
in this book primarily for his introduction of the Nyquist Stability Criterion.
Homogeneity
Property of a system whose scaled input results in an equally scaled output.
Hybrid Systems
Systems which have both analog and digital components.
Impulse
A function denoted δ(t), that is the derivative of the unit step.
Impulse Response
The system output when the system is stimulated by an impulse input. The Inverse
Laplace Transform of the transfer function of the system.
Initial Conditions
The conditions of the system at time t = t0 , where t0 is the first time the system is
stimulated.
Initial Value Theorem
A theorem that allows the initial conditions of the system to be determined from the
Transfer function.
Input-Output Description
See external description.
Instantaneous
A system is instantaneous if the system doesn't have memory, and if the current output
of the system is only dependent on the current input. See Dynamic, Memory.
Integrated Absolute Error (IAE)
absolute error (ideal vs actual performance) is integrated over the analysis period.
Integrated Squared Error (ISE)
squared error (ideal vs actual performance) is integrated over the analysis period.
Integrators
A system pole at the origin of the S-plane. Has the effect of integrating the system input.
Inverse Fourier Transform
An integral transform that converts a function from the frequency domain into the time-
domain.
Inverse Laplace Transform
An integral transform that converts a function from the S-domain into the time-domain.
Inverse Z-Transform
An integral transform that converts a function from the Z-domain into the discrete time
domain.
44.4 J, K, L
Lag
The observed process impact from an output is slower than the control rate.
Laplace Transform
An integral transform that converts a function from the time domain into a complex
frequency domain.
Laplace Transform Domain
A complex domain where the Laplace Transform of a function is graphed. The imaginary
part of s is plotted along the vertical axis, and the real part of s is plotted along the
horizontal axis.
Left Eigenvectors
Left-hand nullspace solutions to the characteristic equation of a matrix for given eigenval-
ues. The rows of the inverse transition matrix.
Linear
A system that satisfies the superposition principle. See Additive and Homogeneous.
Linear Time-Invariant
LTI. See Linear, and Time-Invariant.
Low Clamp
User-applied lower bound on control output signal.
L/R
Local/Remote operation.
LQR
Linear Quadratic Regulator.
Lumped
A system with a finite number of states, or a finite number of state variables.
44.5 M, N, O
Magnitude
the gain component of frequency response. This is often all that is considered in saying a
discrete filter's response is well matched to the analog's. At zero frequency it equals the DC gain.
Marginal Stability
A system has an oscillatory response, as determined by having imaginary poles or imagi-
nary eigenvalues.
Mason's Rule
see http://en.wikipedia.org/wiki/Mason's_rule
MATLAB
Commercial software having a Control Systems toolbox. Also see Octave.
Memory
A system has memory if its current output is dependent on previous and current inputs.
MFAC
Model Free Adaptive Control.
MIMO
A system with multiple inputs and multiple outputs.
Modern Approach
see modern controls
Modern Controls
A control methodology that uses the state-space representation to analyze and manipulate
the Internal Description of a system.
Modified Z-Transform
A version of the Z-Transform, expanded to allow for an arbitrary processing delay.
MPC
Model Predictive Control.
MRAC
Model Reference Adaptive Control.
MV
can denote Manipulated variable or Measured variable (not the same)
Natural Frequency
The fundamental frequency of the system, the frequency for which the system's frequency
response is largest.
Negative Feedback
A feedback system where the output signal is subtracted from the input signal, and the
difference is input to the plant.
The Nyquist Criteria
A necessary and sufficient condition of stability that can be derived from Bode plots.
Nonlinear Control
A branch of control engineering that deals exclusively with non-linear systems. We do not
cover nonlinear systems in this book.
OCTAVE
Open-source software having a Control Systems toolbox. Also see MATLAB.
Offset
The discrepancy between desired and actual value after settling. P-only control can give
offset.
Oliver Heaviside
Electrical Engineer, Introduced the Laplace Transform as a tool for control engineering.
Open Loop
when the system is not closed, its behavior has a free-running component rather than
controlled
Optimal Control
A branch of control engineering that deals with the minimization of system cost, or max-
imization of system performance.
Order
The order of a polynomial is the highest exponent of the independent variable in that
polynomial. The order of a system is the order of the Transfer Function's denominator
polynomial.
Output equation
An equation that relates the current system input, and the current system state to the
current system output.
Overshoot
measures the extent of system response against desired (setpoint tracking).
44.6 P, Q, R
Parabolic
A parabolic input is defined by the equation (1/2)t^2 u(t).
Partial Fraction Expansion
A method by which a complex fraction is decomposed into a sum of simple fractions.
Percent Overshoot
PO, the amount by which the step response overshoots the reference value, in percentage
of the reference value.
Phase
the directional component of frequency response, typically not well matched between a
discrete filter and the analog filter it approximates, especially as frequency approaches the
Nyquist limit. The final value in the limit drives system stability, and stems from the
poles and zeros of the characteristic equation.
PID
Proportional-Integral-Derivative
Plant
A central system which has been provided, and must be analyzed or controlled.
PLC
Programmable Logic Controller
Pole
A value for s that causes the denominator of the transfer function to become zero, and
therefore causes the transfer function itself to approach infinity.
Pole-Zero Form
The transfer function is factored so that the locations of all the poles and zeros are clearly
evident.
Position Error
The amount of steady-state error of a system stimulated by a unit step input.
Position Error Constant
A constant that determines the position error of a system.
Positive Feedback
A feedback system where the system output is added to the system input, and the sum is
input into the plant.
PSD
The power spectral density which shows the distribution of power in the spectrum of a
particular signal.
Pulse Response
The response of a digital system to a unit step input, in terms of the transfer matrix.
PV
Process variable
Quantized
A system is quantized if it can only output certain discrete values.
Quarter-decay
the time or number of control rates required for process overshoot to be limited to within
1/4 of the maximum peak overshoot (PO) after a SP change. If the PO is 25% at sample
time N, this would be time N+k when subsequent PV remains < SP*1.0625, presuming
the process is settling.
Raise-Lower
Output type that works from present position rather than as a completely new computed
spanned output. For R/L, the % change should be applied to the working clamps i.e.
5%(hi clamp-lo clamp).
Ramp
A ramp is defined by the function tu(t).
Reconstructors
A system that converts a digital signal into an analog signal.
Reference Value
The target value that the system output is intended to track; the setpoint.
44.7 S, T, U, V
Samplers
A system that converts an analog signal into a digital signal.
Sampled-Data Systems
See Hybrid Systems.
Sampling Time
In a discrete system, the sampling time is the amount of time between samples. Reflects
the lower bound for Control rate.
SCADA
Supervisory Control and Data Acquisition.
S-Domain
The domain of the Laplace Transform of a signal or system.
Second-order System
Settling Time
The amount of time it takes for the system's oscillatory response to be damped to within
a certain band of the steady-state value. That band is typically 10%.
Signal Flow Diagram
A method of visually representing a system, using arrows to represent the direction of
signals in the system.
SISO
Single input, single output; a system with one input and one output.
System Identification
method of trying to identify the system characterization, typically through least-squares
analysis of input, output and noise data vectors. May use an ARMA type framework.
System Type
The number of ideal integrators in the system.
Time-Invariant
A system is time-invariant if an input time-shifted by an arbitrary delay produces an
output shifted by that same delay.
Transfer Function
The ratio of the system output to its input, in the S-domain. The Laplace Transform of
the function's impulse response.
Transfer Function Matrix
The Laplace transform of the state-space equations of a system, that provides an external
description of a MIMO system.
Uniform Stability
Also ”Uniform BIBO Stability”, a system where an input signal in the range [0, 1] results
in a finite output from the initial time until infinite time. See http://en.wikibooks.
org/wiki/Control_Systems/Stability.
Unit Step
An input defined by u(t). Practically, a setpoint change.
Unity Feedback
A feedback system where the feedback loop element H has a transfer function of 1.
Velocity Error
The amount of steady-state error when the system is stimulated by a ramp input.
Velocity Error Constant
A constant that determines the amount of velocity error in a system.
44.8 W, X, Y, Z
W-plane
Reference plane used in the bilinear transform.
Wind-up
when the numerics of computed control adjustment can "wind up", yielding control correc-
tion with an inappropriate component unless prevented. An example is the "I" contribution
of PID if the output has been disconnected during the PID calculation.
Zero
A value for s that causes the numerator of the transfer function to become zero, and
therefore causes the transfer function itself to become zero.
Zero Input Response
The response of a system with zero external input. Relies only on the value of the system
state to produce output.
Zero State Response
The response of the system with zero system state. The output of the system depends
only on the system input.
ZOH
Zero order hold.
Z-Transform
An integral transform that is related to the Laplace transform through a change of
variables. The Z-Transform is used primarily with digital systems. See http://en.
wikibooks.org/wiki/Digital_Signal_Processing/Z_Transform
45 List of Equations
[Euler's Formula]
e^{jω} = cos(ω) + j sin(ω)
[Convolution]
(a ∗ b)(t) = ∫_{−∞}^{∞} a(τ)b(t − τ) dτ
[Convolution Theorem]
L[f (t) ∗ g(t)] = F (s)G(s)
L[f (t)g(t)] = F (s) ∗ G(s)
[Characteristic Equation]
|A − λI| = 0
Av = λv
wA = λw
[Decibels]
dB = 20 log(C)
[Convolution Description]
y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ
45.5 Feedback Loops
[Mason's Rule]
M = y_out/y_in = (Σ_{k=1}^{N} M_k ∆_k)/∆
45.6 Transforms
[Laplace Transform]
F(s) = L[f(t)] = ∫_0^∞ f(t) e^{−st} dt
[Inverse Laplace Transform]
f(t) = L^{−1}{F(s)} = (1/(2πi)) ∫_{c−i∞}^{c+i∞} e^{st} F(s) ds
[Fourier Transform]
F(jω) = F[f(t)] = ∫_0^∞ f(t) e^{−jωt} dt
[Inverse Fourier Transform]
f(t) = F^{−1}{F(jω)} = (1/(2π)) ∫_{−∞}^{∞} F(jω) e^{jωt} dω
[Star Transform]
F*(s) = L*[f(t)] = Σ_{i=0}^{∞} f(iT) e^{−siT}
[Z Transform]
X(z) = Z{x[n]} = Σ_{n=−∞}^{∞} x[n] z^{−n}
[Inverse Z Transform]
x[n] = Z^{−1}{X(z)} = (1/(2πj)) ∮_C X(z) z^{n−1} dz
[Modified Z Transform]
X(z, m) = Z(x[n], m) = Σ_{n=−∞}^{∞} x[n + m − 1] z^{−n}
N_a = P − Z
[Angle of Asymptotes]
ϕ_k = (2k + 1) π/(P − Z)
[Origin of Asymptotes]
σ_0 = (ΣP − ΣZ)/(P − Z)
[Lyapunov Equation]
MA + A^T M = −N
[PID]
D(s) = K_p + K_i/s + K_d s
D(z) = K_p + K_i (T/2) [(z + 1)/(z − 1)] + K_d [(z − 1)/(Tz)]
46 Resources and Further Reading
46.1 Wikibooks
A number of wikibooks exist on topics that are (a) prerequisites to this book, (b) companion
pieces to and references for this book, and (c) of further interest to people who have
completed reading this book. Below is a listing of such books, ordered according to
the categories listed above.
• Linear algebra1
• Linear Algebra with Differential Equations2
• Complex Numbers3
• Calculus4
• Signals and Systems5
• Engineering Analysis6
• Engineering Tables7
• Analog and Digital Conversion8
• MATLAB Programming9
• Signal Processing10
• Digital Signal Processing11
1 https://en.wikibooks.org/wiki/Linear%20algebra
2 https://en.wikibooks.org/wiki/Linear%20Algebra%20with%20Differential%20Equations
3 https://en.wikibooks.org/wiki/Algebra%2FComplex%20Numbers
4 https://en.wikibooks.org/wiki/Calculus
5 https://en.wikibooks.org/wiki/Signals%20and%20Systems
6 https://en.wikibooks.org/wiki/Engineering%20Analysis
7 https://en.wikibooks.org/wiki/Engineering%20Tables
8 https://en.wikibooks.org/wiki/Analog%20and%20Digital%20Conversion
9 https://en.wikibooks.org/wiki/MATLAB%20Programming
10 https://en.wikibooks.org/wiki/Signal%20Processing
11 https://en.wikibooks.org/wiki/Digital%20Signal%20Processing
• Communication Systems12
• Embedded Control Systems Design13
46.2 Wikiversity
v:Control Systems14
The Wikiversity project also contains a number of collaborative learning efforts in the
field of control systems, and related subjects. As best as possible, we will attempt to list
those efforts here:
• v:Control Systems15
Wikiversity is also a place to host learning materials, such as assignments, tests, and
reading plans. It is the goal of the authors of this book to create such materials for use
in conjunction with this book. As such materials are added to wikiversity, they will be
referenced here.
46.3 Wikipedia
There are a number of Wikipedia articles on the topics covered in this book, and those
articles will be linked to from the appropriate pages of this book. However, some of the
articles that are of general use to the book are:
• w:Control theory16
• w:Control engineering17
• w:Process control18
A complete listing of all Wikipedia articles related to this topic can be found at:
• w:Category:Control theory19 .
46.4 Software
46.4.1 Root Locus
Root-Locus is a free program that was used to create several of the images in this book.
That software can be obtained from the following web address:
http://web.archive.org/20041230124431/www.geocities.com/aseldawy/root_locus.html
Explicit permission has been granted by the author of the program to include screenshots
on wikibooks. Images generated from the Root-Locus program should be included in
Category:Root Locus Images20 , and appropriately tagged as a screenshot of a free software
program.
12 https://en.wikibooks.org/wiki/Communication%20Systems
13 https://en.wikibooks.org/wiki/Embedded%20Control%20Systems%20Design
14 https://en.wikiversity.org/wiki/Control%20Systems
15 https://en.wikiversity.org/wiki/Control%20Systems
16 https://en.wikipedia.org/wiki/Control%20theory
17 https://en.wikipedia.org/wiki/Control%20engineering
18 https://en.wikipedia.org/wiki/Process%20control
19 https://en.wikipedia.org/wiki/Category%3AControl%20theory
20 https://en.wikibooks.org/wiki/%3ACategory%3ARoot%20Locus%20Images
46.4.2 MATLAB
MATLAB, Simulink, the Control Systems Toolbox and the Symbolic Toolbox are trade-
marks of The MathWorks, Inc. Other product or brand names are trademarks or registered
trademarks of their respective holders. For more information about MATLAB, or to pur-
chase a copy, visit:
http://www.themathworks.com
For information about the proper way to refer to MATLAB, please see:
http://www.mathworks.com/company/pressroom/editorial_guidelines.html
All MATLAB code appearing in this book has been released under the terms of the GFDL
by the respective authors. All screenshots, graphs, and images relating to MATLAB have
been produced in Octave, with changes to the original MATLAB code made as necessary.
46.4.3 Octave
The following are some common vendors of control-related hardware and software. These
links are for personal interest only, and do not constitute an official endorsement of the
companies by Wikibooks.
• Systems Technology, Inc21
• SimApp - Dynamic Simulation Made Easy22
21 http://www.programcc.com/about.asp
22 http://www.simapp.com
• Hamming, Richard, Numerical Methods for Scientists and Engineers, 2nd edition, Dover,
1987. ISBN 0486652416
• Kalman, R. E., When is a linear control system optimal, ASME Transactions, Journal of
Basic Engineering, 1964
• Kalman, R. E., On the General Theory of Control Systems, IRE Transactions on Auto-
matic Control, Volume 4, Issue 3, p110, 1959. ISSN 0096199X
• Ogata, Katsuhiko, Solving Control Engineering Problems with MATLAB, Prentice Hall,
New Jersey, 1994. ISBN 0130459070
• Phillips and Nagle, Digital Control System Analysis and Design, 3rd Edition, Prentice
Hall, 1995. ISBN 013309832X
The following books and resources are suitable for further reading.
• DiStefano, Stubberud, Williams, Schaum's Outline Series Feedback and Control Systems,
2nd Edition, 1997. ISBN 0070170479
• Franklin, Powell, Workman, Digital Control of Dynamic Systems, 3rd Edition, 1997.
ISBN 9780201820546
• Brosilow, Joseph, Techniques of Model-Based Control, 2002. ISBN 013028078X
23 http://www.ieeecss.org/
24 http://wikis.ControlTheoryPro.com
25 http://controls.engin.umich.edu/wiki/index.php/Main_Page
47 Contributors
Edits User
2 19891
1 1997kB2
1 1sfoerster3
1 Abletried~enwikibooks4
16 Adrignola5
1 Agentx3r~enwikibooks6
1 Ah3kal7
1 Alej278
2 Amorini9
2 Anith 55510
2 Anon171311
1 Antonysigma12
1 Arjunazadi13
5 Atcovi14
25 Avicennasis15
1 Avijit52516
3 Az156817
4 Bestable18
66 Billymac0019
2 Bo Dorku~enwikibooks20
1 https://en.wikibooks.org/wiki/User:1989
2 https://en.wikibooks.org/wiki/User:1997kB
3 https://en.wikibooks.org/wiki/User:1sfoerster
https://en.wikibooks.org/w/index.php%3ftitle=User:Abletried~enwikibooks&action=edit&
4
redlink=1
5 https://en.wikibooks.org/wiki/User:Adrignola
https://en.wikibooks.org/w/index.php%3ftitle=User:Agentx3r~enwikibooks&action=edit&
6
redlink=1
7 https://en.wikibooks.org/wiki/User:Ah3kal
8 https://en.wikibooks.org/w/index.php%3ftitle=User:Alej27&action=edit&redlink=1
9 https://en.wikibooks.org/w/index.php%3ftitle=User:Amorini&action=edit&redlink=1
10 https://en.wikibooks.org/w/index.php%3ftitle=User:Anith_555&action=edit&redlink=1
11 https://en.wikibooks.org/w/index.php%3ftitle=User:Anon1713&action=edit&redlink=1
12 https://en.wikibooks.org/w/index.php%3ftitle=User:Antonysigma&action=edit&redlink=1
13 https://en.wikibooks.org/w/index.php%3ftitle=User:Arjunazadi&action=edit&redlink=1
14 https://en.wikibooks.org/wiki/User:Atcovi
15 https://en.wikibooks.org/wiki/User:Avicennasis
16 https://en.wikibooks.org/w/index.php%3ftitle=User:Avijit525&action=edit&redlink=1
17 https://en.wikibooks.org/wiki/User:Az1568
18 https://en.wikibooks.org/w/index.php%3ftitle=User:Bestable&action=edit&redlink=1
19 https://en.wikibooks.org/w/index.php%3ftitle=User:Billymac00&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Bo_Dorku~enwikibooks&action=edit&
20
redlink=1
1 Cbarlog21
1 Chazz22
1 Chih WANG23
1 CommonsDelinker24
1 Congquaner25
1 Constant31426
1 DSP-user27
1 Dagonet Iestwyn28
1 Danny B.29
1 DannyS71230
1 Davetcoleman31
1 DavidCary32
1 Deepcyan33
2 Derby County FC34
8 Dirk Hünniger35
1 Discostu536
1 Doleszki~enwikibooks37
1 Druzhnik38
2 Ducleotide39
1 Edwardvear40
1 Empato41
6 Ennui~enwikibooks42
9 Esj8843
1 Esteban1644
21 https://en.wikibooks.org/w/index.php%3ftitle=User:Cbarlog&action=edit&redlink=1
22 https://en.wikibooks.org/wiki/User:Chazz
23 https://en.wikibooks.org/w/index.php%3ftitle=User:Chih_WANG&action=edit&redlink=1
24 https://en.wikibooks.org/wiki/User:CommonsDelinker
25 https://en.wikibooks.org/w/index.php%3ftitle=User:Congquaner&action=edit&redlink=1
26 https://en.wikibooks.org/w/index.php%3ftitle=User:Constant314&action=edit&redlink=1
27 https://en.wikibooks.org/wiki/User:DSP-user
https://en.wikibooks.org/w/index.php%3ftitle=User:Dagonet_Iestwyn&action=edit&
28
redlink=1
29 https://en.wikibooks.org/wiki/User:Danny_B.
30 https://en.wikibooks.org/wiki/User:DannyS712
31 https://en.wikibooks.org/w/index.php%3ftitle=User:Davetcoleman&action=edit&redlink=1
32 https://en.wikibooks.org/wiki/User:DavidCary
33 https://en.wikibooks.org/w/index.php%3ftitle=User:Deepcyan&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Derby_County_FC&action=edit&
34
redlink=1
35 https://en.wikibooks.org/wiki/User:Dirk_H%25C3%25BCnniger
36 https://en.wikibooks.org/wiki/User:Discostu5
https://en.wikibooks.org/w/index.php%3ftitle=User:Doleszki~enwikibooks&action=edit&
37
redlink=1
38 https://en.wikibooks.org/w/index.php%3ftitle=User:Druzhnik&action=edit&redlink=1
39 https://en.wikibooks.org/w/index.php%3ftitle=User:Ducleotide&action=edit&redlink=1
40 https://en.wikibooks.org/w/index.php%3ftitle=User:Edwardvear&action=edit&redlink=1
41 https://en.wikibooks.org/w/index.php%3ftitle=User:Empato&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Ennui~enwikibooks&action=edit&
42
redlink=1
43 https://en.wikibooks.org/w/index.php%3ftitle=User:Esj88&action=edit&redlink=1
44 https://en.wikibooks.org/wiki/User:Esteban16
3 Federhalter45
2 Feraudyh46
1 Fishpi47
1 Foreign1~enwikibooks48
2 Fragapanagos49
1 Frigotoni50
1 Glaisher51
2 Hagindaz52
2 Hammer of Moradin53
1 Helptry54
2 Herbythyme55
1 Herbythyme is the Antichrist of Mumfum56
1 HethrirBot57
1 Hippias58
1 Hyiltiz59
3 Hypergeek1460
8 Inductiveload61
1 Istevie62
5 JackBot63
3 JackPotte64
1 Jakec65
1 Javalenok66
1 Jawnsy67
2 JenVan68
1 Jfmantis69
45 https://en.wikibooks.org/wiki/User:Federhalter
46 https://en.wikibooks.org/wiki/User:Feraudyh
47 https://en.wikibooks.org/w/index.php%3ftitle=User:Fishpi&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Foreign1~enwikibooks&action=edit&
48
redlink=1
49 https://en.wikibooks.org/w/index.php%3ftitle=User:Fragapanagos&action=edit&redlink=1
50 https://en.wikibooks.org/wiki/User:Frigotoni
51 https://en.wikibooks.org/wiki/User:Glaisher
52 https://en.wikibooks.org/wiki/User:Hagindaz
https://en.wikibooks.org/w/index.php%3ftitle=User:Hammer_of_Moradin&action=edit&
53
redlink=1
54 https://en.wikibooks.org/w/index.php%3ftitle=User:Helptry&action=edit&redlink=1
55 https://en.wikibooks.org/wiki/User:Herbythyme
https://en.wikibooks.org/w/index.php%3ftitle=User:Herbythyme_is_the_Antichrist_of_
56
Mumfum&action=edit&redlink=1
57 https://en.wikibooks.org/wiki/User:HethrirBot
58 https://en.wikibooks.org/w/index.php%3ftitle=User:Hippias&action=edit&redlink=1
59 https://en.wikibooks.org/wiki/User:Hyiltiz
60 https://en.wikibooks.org/w/index.php%3ftitle=User:Hypergeek14&action=edit&redlink=1
61 https://en.wikibooks.org/wiki/User:Inductiveload
62 https://en.wikibooks.org/w/index.php%3ftitle=User:Istevie&action=edit&redlink=1
63 https://en.wikibooks.org/wiki/User:JackBot
64 https://en.wikibooks.org/wiki/User:JackPotte
65 https://en.wikibooks.org/wiki/User:Jakec
66 https://en.wikibooks.org/w/index.php%3ftitle=User:Javalenok&action=edit&redlink=1
67 https://en.wikibooks.org/w/index.php%3ftitle=User:Jawnsy&action=edit&redlink=1
68 https://en.wikibooks.org/w/index.php%3ftitle=User:JenVan&action=edit&redlink=1
69 https://en.wikibooks.org/wiki/User:Jfmantis
25 Jguk70
2 Jianhui6771
1 Jmah72
4 Joe Schmedley73
12 Jomegat74
2 Josephkiran75
1 Kayau76
2 Kevinp277
5 Leaderboard78
3 Leisuresuitwally79
2 Livingthingdan80
11 Lpkeys81
4 Macsdev~enwikibooks82
1 Markcc83
1 Martin Urbanec84
1 Mike.lifeguard85
1 Minorax86
2 Mintz l87
1 Mreiki88
4 Mrjulesd89
3 Mstachowsky90
1 Murughendra91
1 N.masarone92
1 Napalm Llama93
1 Neils5194
70 https://en.wikibooks.org/wiki/User:Jguk
71 https://en.wikibooks.org/wiki/User:Jianhui67
72 https://en.wikibooks.org/w/index.php%3ftitle=User:Jmah&action=edit&redlink=1
73 https://en.wikibooks.org/wiki/User:Joe_Schmedley
74 https://en.wikibooks.org/wiki/User:Jomegat
75 https://en.wikibooks.org/w/index.php%3ftitle=User:Josephkiran&action=edit&redlink=1
76 https://en.wikibooks.org/wiki/User:Kayau
77 https://en.wikibooks.org/w/index.php%3ftitle=User:Kevinp2&action=edit&redlink=1
78 https://en.wikibooks.org/wiki/User:Leaderboard
https://en.wikibooks.org/w/index.php%3ftitle=User:Leisuresuitwally&action=edit&
79
redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Livingthingdan&action=edit&redlink=
80
1
81 https://en.wikibooks.org/w/index.php%3ftitle=User:Lpkeys&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Macsdev~enwikibooks&action=edit&
82
redlink=1
83 https://en.wikibooks.org/w/index.php%3ftitle=User:Markcc&action=edit&redlink=1
84 https://en.wikibooks.org/wiki/User:Martin_Urbanec
85 https://en.wikibooks.org/wiki/User:Mike.lifeguard
86 https://en.wikibooks.org/wiki/User:Minorax
87 https://en.wikibooks.org/wiki/User:Mintz_l
88 https://en.wikibooks.org/w/index.php%3ftitle=User:Mreiki&action=edit&redlink=1
89 https://en.wikibooks.org/wiki/User:Mrjulesd
90 https://en.wikibooks.org/w/index.php%3ftitle=User:Mstachowsky&action=edit&redlink=1
91 https://en.wikibooks.org/w/index.php%3ftitle=User:Murughendra&action=edit&redlink=1
92 https://en.wikibooks.org/w/index.php%3ftitle=User:N.masarone&action=edit&redlink=1
93 https://en.wikibooks.org/w/index.php%3ftitle=User:Napalm_Llama&action=edit&redlink=1
94 https://en.wikibooks.org/w/index.php%3ftitle=User:Neils51&action=edit&redlink=1
1 Ninjastrikers95
5 Nithinvgeorge96
1 Nonstandard97
2 Nostraticispeak98
14 Nrs13599
1 Ojan100
1 Okashi no kuni101
8 Panic2k4102
1 Pedro Fonini103
9 Pi zero104
34 Pierre5018105
45 PokestarFan106
18 QuiteUnusual107
2 Razr Nation108
19 Recent Runes109
2 Redberry76110
1 Renamed user wZdBmxEPfs111
1 Rmaax112
2 Ro890Z113
1 Roman123~enwikibooks114
1 Rotlink115
1 SPat116
3 Sagie117
2 Satyabrata118
95 https://en.wikibooks.org/wiki/User:Ninjastrikers
96 https://en.wikibooks.org/w/index.php%3ftitle=User:Nithinvgeorge&action=edit&redlink=1
97 https://en.wikibooks.org/w/index.php%3ftitle=User:Nonstandard&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Nostraticispeak&action=edit&
98
redlink=1
99 https://en.wikibooks.org/w/index.php%3ftitle=User:Nrs135&action=edit&redlink=1
100 https://en.wikibooks.org/w/index.php%3ftitle=User:Ojan&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Okashi_no_kuni&action=edit&redlink=
101
1
102 https://en.wikibooks.org/wiki/User:Panic2k4
103 https://en.wikibooks.org/wiki/User:Pedro_Fonini
104 https://en.wikibooks.org/wiki/User:Pi_zero
105 https://en.wikibooks.org/w/index.php%3ftitle=User:Pierre5018&action=edit&redlink=1
106 https://en.wikibooks.org/wiki/User:PokestarFan
107 https://en.wikibooks.org/wiki/User:QuiteUnusual
108 https://en.wikibooks.org/wiki/User:Razr_Nation
109 https://en.wikibooks.org/wiki/User:Recent_Runes
110 https://en.wikibooks.org/wiki/User:Redberry76
https://en.wikibooks.org/w/index.php%3ftitle=User:Renamed_user_wZdBmxEPfs&action=
111
edit&redlink=1
112 https://en.wikibooks.org/w/index.php%3ftitle=User:Rmaax&action=edit&redlink=1
113 https://en.wikibooks.org/w/index.php%3ftitle=User:Ro890Z&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Roman123~enwikibooks&action=edit&
114
redlink=1
115 https://en.wikibooks.org/w/index.php%3ftitle=User:Rotlink&action=edit&redlink=1
116 https://en.wikibooks.org/wiki/User:SPat
117 https://en.wikibooks.org/w/index.php%3ftitle=User:Sagie&action=edit&redlink=1
118 https://en.wikibooks.org/w/index.php%3ftitle=User:Satyabrata&action=edit&redlink=1
1 Sberkane119
3 ScaAr120
1 Scruff323121
1 Sdayal122
1 Shaffers21123
4 Simoneau124
1 Sonia125
1 Sonicwave32126
10 Spradlig127
1 Strange quark128
4 Stïnger129
1 Supermackin~enwikibooks130
2 Syum90131
1 Tdenewiler132
6 Texvc2LaTeXBot133
4 Thenub314134
1 Tickle55135
4 Tim.greatrex136
2 Tropicalkitty137
1 Trunks ishida138
1 Turkmen139
3 Ubigene140
1 Upul141
1 Van der Hoorn142
1 Vito Genovese143
119 https://en.wikibooks.org/w/index.php%3ftitle=User:Sberkane&action=edit&redlink=1
120 https://en.wikibooks.org/w/index.php%3ftitle=User:ScaAr&action=edit&redlink=1
121 https://en.wikibooks.org/w/index.php%3ftitle=User:Scruff323&action=edit&redlink=1
122 https://en.wikibooks.org/w/index.php%3ftitle=User:Sdayal&action=edit&redlink=1
123 https://en.wikibooks.org/w/index.php%3ftitle=User:Shaffers21&action=edit&redlink=1
124 https://en.wikibooks.org/wiki/User:Simoneau
125 https://en.wikibooks.org/wiki/User:Sonia
126 https://en.wikibooks.org/wiki/User:Sonicwave32
127 https://en.wikibooks.org/wiki/User:Spradlig
128 https://en.wikibooks.org/wiki/User:Strange_quark
129 https://en.wikibooks.org/wiki/User:St%25C3%25AFnger
130 https://en.wikibooks.org/wiki/User:Supermackin~enwikibooks
131 https://en.wikibooks.org/w/index.php%3ftitle=User:Syum90&action=edit&redlink=1
132 https://en.wikibooks.org/w/index.php%3ftitle=User:Tdenewiler&action=edit&redlink=1
133 https://en.wikibooks.org/wiki/User:Texvc2LaTeXBot
134 https://en.wikibooks.org/wiki/User:Thenub314
135 https://en.wikibooks.org/w/index.php%3ftitle=User:Tickle55&action=edit&redlink=1
136 https://en.wikibooks.org/w/index.php%3ftitle=User:Tim.greatrex&action=edit&redlink=1
137 https://en.wikibooks.org/wiki/User:Tropicalkitty
138 https://en.wikibooks.org/w/index.php%3ftitle=User:Trunks_ishida&action=edit&redlink=1
139 https://en.wikibooks.org/wiki/User:Turkmen
140 https://en.wikibooks.org/w/index.php%3ftitle=User:Ubigene&action=edit&redlink=1
141 https://en.wikibooks.org/wiki/User:Upul
142 https://en.wikibooks.org/wiki/User:Van_der_Hoorn
143 https://en.wikibooks.org/wiki/User:Vito_Genovese
1 Wargo144
1 Warlock31415145
1823 Whiteknight146
1 WikiBayer147
4 Wknight8111148
1 XMollioTKs149
7 Xania150
5 Xris151
1 YMS152
1 Yeus153
1 Z35~enwikibooks154
3 Zhaopatrick155
3 Zoomzoom~enwikibooks156
2 ~riley157
144 https://en.wikibooks.org/wiki/User:Wargo
145 https://en.wikibooks.org/w/index.php%3ftitle=User:Warlock31415&action=edit&redlink=1
146 https://en.wikibooks.org/wiki/User:Whiteknight
147 https://en.wikibooks.org/wiki/User:WikiBayer
148 https://en.wikibooks.org/wiki/User:Wknight8111
149 https://en.wikibooks.org/w/index.php%3ftitle=User:XMollioTKs&action=edit&redlink=1
150 https://en.wikibooks.org/wiki/User:Xania
151 https://en.wikibooks.org/w/index.php%3ftitle=User:Xris&action=edit&redlink=1
152 https://en.wikibooks.org/wiki/User:YMS
153 https://en.wikibooks.org/w/index.php%3ftitle=User:Yeus&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Z35~enwikibooks&action=edit&
154
redlink=1
155 https://en.wikibooks.org/w/index.php%3ftitle=User:Zhaopatrick&action=edit&redlink=1
https://en.wikibooks.org/w/index.php%3ftitle=User:Zoomzoom~enwikibooks&action=edit&
156
redlink=1
157 https://en.wikibooks.org/wiki/User:~riley
List of Figures
159 https://en.wikibooks.org/wiki/User:Whiteknight
160 https://en.wikibooks.org/wiki/
161 https://en.wikibooks.org/wiki/User:Whiteknight
162 https://en.wikibooks.org/wiki/
163 https://en.wikibooks.org/wiki/User:Whiteknight
164 https://en.wikibooks.org/wiki/
165 https://en.wikibooks.org/wiki/User:Whiteknight
166 https://en.wikibooks.org/wiki/
167 https://en.wikibooks.org/wiki/User:Whiteknight
168 https://en.wikibooks.org/wiki/
169 https://en.wikibooks.org/wiki/User:Whiteknight
170 https://en.wikibooks.org/wiki/
171 https://en.wikibooks.org/wiki/User:Whiteknight
172 https://en.wikibooks.org/wiki/
173 https://en.wikibooks.org/wiki/User:Whiteknight
174 https://en.wikibooks.org/wiki/
175 https://en.wikibooks.org/wiki/User:Whiteknight
176 https://en.wikibooks.org/wiki/
177 https://en.wikibooks.org/wiki/User:Whiteknight
178 https://en.wikibooks.org/wiki/
179 https://en.wikibooks.org/wiki/User:Whiteknight
180 https://en.wikibooks.org/wiki/
181 https://en.wikibooks.org/wiki/User:Whiteknight
182 https://en.wikibooks.org/wiki/
183 http://commons.wikimedia.org/wiki/User:Inductiveload
184 https://commons.wikimedia.org/wiki/User:Inductiveload
185 https://en.wikibooks.org/wiki/en:User:Spradlig
186 https://en.wikibooks.org/wiki/User:Whiteknight
187 https://en.wikibooks.org/wiki/
188 https://en.wikibooks.org/wiki/User:Whiteknight
189 https://en.wikibooks.org/wiki/
190 https://en.wikibooks.org/wiki/User:Whiteknight
191 https://en.wikibooks.org/wiki/
192 https://en.wikibooks.org/wiki/User:Whiteknight
193 https://en.wikibooks.org/wiki/
194 https://en.wikibooks.org/wiki/User:Whiteknight
195 https://en.wikibooks.org/wiki/
196 https://en.wikibooks.org/wiki/User:Whiteknight
197 https://en.wikibooks.org/wiki/
198 http://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
199 https://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
200 https://en.wikipedia.org/wiki/User:Petr.adamek
201 https://en.wikipedia.org/wiki/User:Rbj
202 http://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
203 https://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
204 http://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
205 https://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
206 http://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
207 https://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
208 http://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
209 https://commons.wikimedia.org/w/index.php?title=User:Rbj&action=edit&redlink=1
210 https://en.wikibooks.org/wiki/User:Whiteknight
211 https://en.wikibooks.org/wiki/
212 https://en.wikibooks.org/wiki/User:Whiteknight
213 https://en.wikibooks.org/wiki/
214 http://commons.wikimedia.org/wiki/User:Inductiveload
215 https://commons.wikimedia.org/wiki/User:Inductiveload
33 Inductiveload216, Inductiveload217
34 Inductiveload218, Inductiveload219
35 Inductiveload220, Inductiveload221
36 Inductiveload222, Inductiveload223
37 Inductiveload224, Inductiveload225
38 Inductiveload226, Inductiveload227
39 Inductiveload228, Inductiveload229
40 Inductiveload230, Inductiveload231
41 Inductiveload232, Inductiveload233
42 Inductiveload234, Inductiveload235
43 Inductiveload236, Inductiveload237
44 Inductiveload238, Inductiveload239
45 Inductiveload240, Inductiveload241
46 Inductiveload242, Inductiveload243
47 Inductiveload244, Inductiveload245
48 Inductiveload246, Inductiveload247
49 Inductiveload248, Inductiveload249
216 http://commons.wikimedia.org/wiki/User:Inductiveload
217 https://commons.wikimedia.org/wiki/User:Inductiveload
218 http://commons.wikimedia.org/wiki/User:Inductiveload
219 https://commons.wikimedia.org/wiki/User:Inductiveload
220 http://commons.wikimedia.org/wiki/User:Inductiveload
221 https://commons.wikimedia.org/wiki/User:Inductiveload
222 http://commons.wikimedia.org/wiki/User:Inductiveload
223 https://commons.wikimedia.org/wiki/User:Inductiveload
224 http://commons.wikimedia.org/wiki/User:Inductiveload
225 https://commons.wikimedia.org/wiki/User:Inductiveload
226 http://commons.wikimedia.org/wiki/User:Inductiveload
227 https://commons.wikimedia.org/wiki/User:Inductiveload
228 http://commons.wikimedia.org/wiki/User:Inductiveload
229 https://commons.wikimedia.org/wiki/User:Inductiveload
230 http://commons.wikimedia.org/wiki/User:Inductiveload
231 https://commons.wikimedia.org/wiki/User:Inductiveload
232 http://commons.wikimedia.org/wiki/User:Inductiveload
233 https://commons.wikimedia.org/wiki/User:Inductiveload
234 http://commons.wikimedia.org/wiki/User:Inductiveload
235 https://commons.wikimedia.org/wiki/User:Inductiveload
236 http://commons.wikimedia.org/wiki/User:Inductiveload
237 https://commons.wikimedia.org/wiki/User:Inductiveload
238 http://commons.wikimedia.org/wiki/User:Inductiveload
239 https://commons.wikimedia.org/wiki/User:Inductiveload
240 http://commons.wikimedia.org/wiki/User:Inductiveload
241 https://commons.wikimedia.org/wiki/User:Inductiveload
242 http://commons.wikimedia.org/wiki/User:Inductiveload
243 https://commons.wikimedia.org/wiki/User:Inductiveload
244 http://commons.wikimedia.org/wiki/User:Inductiveload
245 https://commons.wikimedia.org/wiki/User:Inductiveload
246 http://commons.wikimedia.org/wiki/User:Inductiveload
247 https://commons.wikimedia.org/wiki/User:Inductiveload
248 http://commons.wikimedia.org/wiki/User:Inductiveload
249 https://commons.wikimedia.org/wiki/User:Inductiveload
50 Inductiveload250, Inductiveload251
51 Inductiveload252, Inductiveload253
52 Inductiveload254, Inductiveload255
53 Inductiveload256, Inductiveload257
54 Inductiveload258, Inductiveload259
55 Inductiveload260, Inductiveload261
56 Inductiveload262, Inductiveload263
57 Inductiveload264, Inductiveload265
58 Inductiveload266, Inductiveload267
59 Inductiveload268, Inductiveload269
60 Inductiveload270, Inductiveload271
61 Inductiveload272, Inductiveload273
62 Inductiveload274, Inductiveload275
63 en:User:Ap276
64 Dicklyon, JarektBot, Leyo, Ma-Lik, Maksim, McPot
65 The original uploader was Whiteknight277 at English Wikibooks278.
66 Whiteknight279 at English Wikibooks280
67 Nithinvgeorge281 at English Wikibooks282
250 http://commons.wikimedia.org/wiki/User:Inductiveload
251 https://commons.wikimedia.org/wiki/User:Inductiveload
252 http://commons.wikimedia.org/wiki/User:Inductiveload
253 https://commons.wikimedia.org/wiki/User:Inductiveload
254 http://commons.wikimedia.org/wiki/User:Inductiveload
255 https://commons.wikimedia.org/wiki/User:Inductiveload
256 http://commons.wikimedia.org/wiki/User:Inductiveload
257 https://commons.wikimedia.org/wiki/User:Inductiveload
258 http://commons.wikimedia.org/wiki/User:Inductiveload
259 https://commons.wikimedia.org/wiki/User:Inductiveload
260 http://commons.wikimedia.org/wiki/User:Inductiveload
261 https://commons.wikimedia.org/wiki/User:Inductiveload
262 http://commons.wikimedia.org/wiki/User:Inductiveload
263 https://commons.wikimedia.org/wiki/User:Inductiveload
264 http://commons.wikimedia.org/wiki/User:Inductiveload
265 https://commons.wikimedia.org/wiki/User:Inductiveload
266 http://commons.wikimedia.org/wiki/User:Inductiveload
267 https://commons.wikimedia.org/wiki/User:Inductiveload
268 http://commons.wikimedia.org/wiki/User:Inductiveload
269 https://commons.wikimedia.org/wiki/User:Inductiveload
270 http://commons.wikimedia.org/wiki/User:Inductiveload
271 https://commons.wikimedia.org/wiki/User:Inductiveload
272 http://commons.wikimedia.org/wiki/User:Inductiveload
273 https://commons.wikimedia.org/wiki/User:Inductiveload
274 http://commons.wikimedia.org/wiki/User:Inductiveload
275 https://commons.wikimedia.org/wiki/User:Inductiveload
276 https://en.wikipedia.org/wiki/User:Ap
277 https://en.wikibooks.org/wiki/User:Whiteknight
278 https://en.wikibooks.org/wiki/
279 https://en.wikibooks.org/wiki/User:Whiteknight
280 https://en.wikibooks.org/wiki/
281 https://en.wikibooks.org/wiki/User:Nithinvgeorge
282 https://en.wikibooks.org/wiki/
68 Pierre5018283, Pierre5018284
69 Pierre5018285, Pierre5018286
70 Pierre5018287, Pierre5018288
71 Pierre5018289, Pierre5018290
72 Pierre5018291, Pierre5018292
73 Pierre5018293, Pierre5018294
74 Pierre5018295, Pierre5018296
75 Pierre5018297, Pierre5018298
76 Pierre5018299, Pierre5018300
77 Pierre5018301, Pierre5018302
78 Pierre5018303, Pierre5018304
79 Pierre5018305, Pierre5018306
80 Pierre5018307, Pierre5018308
81 Pierre5018309, Pierre5018310
82 Pierre5018311, Pierre5018312
83 Pierre5018313, Pierre5018314
84 Pierre5018315, Pierre5018316
283 http://commons.wikimedia.org/wiki/User:Pierre5018
284 https://commons.wikimedia.org/wiki/User:Pierre5018
285 http://commons.wikimedia.org/wiki/User:Pierre5018
286 https://commons.wikimedia.org/wiki/User:Pierre5018
287 http://commons.wikimedia.org/wiki/User:Pierre5018
288 https://commons.wikimedia.org/wiki/User:Pierre5018
289 http://commons.wikimedia.org/wiki/User:Pierre5018
290 https://commons.wikimedia.org/wiki/User:Pierre5018
291 http://commons.wikimedia.org/wiki/User:Pierre5018
292 https://commons.wikimedia.org/wiki/User:Pierre5018
293 http://commons.wikimedia.org/wiki/User:Pierre5018
294 https://commons.wikimedia.org/wiki/User:Pierre5018
295 http://commons.wikimedia.org/wiki/User:Pierre5018
296 https://commons.wikimedia.org/wiki/User:Pierre5018
297 http://commons.wikimedia.org/wiki/User:Pierre5018
298 https://commons.wikimedia.org/wiki/User:Pierre5018
299 http://commons.wikimedia.org/wiki/User:Pierre5018
300 https://commons.wikimedia.org/wiki/User:Pierre5018
301 http://commons.wikimedia.org/wiki/User:Pierre5018
302 https://commons.wikimedia.org/wiki/User:Pierre5018
303 http://commons.wikimedia.org/wiki/User:Pierre5018
304 https://commons.wikimedia.org/wiki/User:Pierre5018
305 http://commons.wikimedia.org/wiki/User:Pierre5018
306 https://commons.wikimedia.org/wiki/User:Pierre5018
307 http://commons.wikimedia.org/wiki/User:Pierre5018
308 https://commons.wikimedia.org/wiki/User:Pierre5018
309 http://commons.wikimedia.org/wiki/User:Pierre5018
310 https://commons.wikimedia.org/wiki/User:Pierre5018
311 http://commons.wikimedia.org/wiki/User:Pierre5018
312 https://commons.wikimedia.org/wiki/User:Pierre5018
313 http://commons.wikimedia.org/wiki/User:Pierre5018
314 https://commons.wikimedia.org/wiki/User:Pierre5018
315 http://commons.wikimedia.org/wiki/User:Pierre5018
316 https://commons.wikimedia.org/wiki/User:Pierre5018
85 Pierre5018317, Pierre5018318
86 Pierre5018319, Pierre5018320
87 Pierre5018321, Pierre5018322
88 Pierre5018323, Pierre5018324
89 Pierre5018325, Pierre5018326
90 Pierre5018327, Pierre5018328
91 Pierre5018329, Pierre5018330
92 Pierre5018331, Pierre5018332
93 Pierre5018333, Pierre5018334
94 Pierre5018335, Pierre5018336
95 Pierre5018337, Pierre5018338
96 Pierre5018339, Pierre5018340
97 Pierre5018341, Pierre5018342
98 Pierre5018343, Pierre5018344
99 Pierre5018345, Pierre5018346
100 Pierre5018347, Pierre5018348
101 Pierre5018349, Pierre5018350
317 http://commons.wikimedia.org/wiki/User:Pierre5018
318 https://commons.wikimedia.org/wiki/User:Pierre5018
319 http://commons.wikimedia.org/wiki/User:Pierre5018
320 https://commons.wikimedia.org/wiki/User:Pierre5018
321 http://commons.wikimedia.org/wiki/User:Pierre5018
322 https://commons.wikimedia.org/wiki/User:Pierre5018
323 http://commons.wikimedia.org/wiki/User:Pierre5018
324 https://commons.wikimedia.org/wiki/User:Pierre5018
325 http://commons.wikimedia.org/wiki/User:Pierre5018
326 https://commons.wikimedia.org/wiki/User:Pierre5018
327 http://commons.wikimedia.org/wiki/User:Pierre5018
328 https://commons.wikimedia.org/wiki/User:Pierre5018
329 http://commons.wikimedia.org/wiki/User:Pierre5018
330 https://commons.wikimedia.org/wiki/User:Pierre5018
331 http://commons.wikimedia.org/wiki/User:Pierre5018
332 https://commons.wikimedia.org/wiki/User:Pierre5018
333 http://commons.wikimedia.org/wiki/User:Pierre5018
334 https://commons.wikimedia.org/wiki/User:Pierre5018
335 http://commons.wikimedia.org/wiki/User:Pierre5018
336 https://commons.wikimedia.org/wiki/User:Pierre5018
337 http://commons.wikimedia.org/wiki/User:Pierre5018
338 https://commons.wikimedia.org/wiki/User:Pierre5018
339 http://commons.wikimedia.org/wiki/User:Pierre5018
340 https://commons.wikimedia.org/wiki/User:Pierre5018
341 http://commons.wikimedia.org/wiki/User:Pierre5018
342 https://commons.wikimedia.org/wiki/User:Pierre5018
343 http://commons.wikimedia.org/wiki/User:Pierre5018
344 https://commons.wikimedia.org/wiki/User:Pierre5018
345 http://commons.wikimedia.org/wiki/User:Pierre5018
346 https://commons.wikimedia.org/wiki/User:Pierre5018
347 http://commons.wikimedia.org/wiki/User:Pierre5018
348 https://commons.wikimedia.org/wiki/User:Pierre5018
349 http://commons.wikimedia.org/wiki/User:Pierre5018
350 https://commons.wikimedia.org/wiki/User:Pierre5018
351 http://commons.wikimedia.org/wiki/User:Pierre5018
352 https://commons.wikimedia.org/wiki/User:Pierre5018
353 http://commons.wikimedia.org/wiki/User:Pierre5018
354 https://commons.wikimedia.org/wiki/User:Pierre5018
355 http://commons.wikimedia.org/wiki/User:Pierre5018
356 https://commons.wikimedia.org/wiki/User:Pierre5018
357 http://commons.wikimedia.org/wiki/User:Pierre5018
358 https://commons.wikimedia.org/wiki/User:Pierre5018
359 http://commons.wikimedia.org/wiki/User:Constant314
360 https://commons.wikimedia.org/wiki/User:Constant314
361 http://commons.wikimedia.org/wiki/User:Netnet
362 https://commons.wikimedia.org/wiki/User:Netnet
363 http://commons.wikimedia.org/wiki/User:Netnet
364 https://commons.wikimedia.org/wiki/User:Netnet
365 http://commons.wikimedia.org/wiki/User:Netnet
366 https://commons.wikimedia.org/wiki/User:Netnet
367 http://commons.wikimedia.org/wiki/User:Netnet
368 https://commons.wikimedia.org/wiki/User:Netnet
369 http://commons.wikimedia.org/wiki/User:Netnet
370 https://commons.wikimedia.org/wiki/User:Netnet
371 https://en.wikibooks.org/wiki/User:Whiteknight
372 https://en.wikibooks.org/wiki/
373 https://en.wikibooks.org/wiki/User:Whiteknight
374 https://en.wikibooks.org/wiki/
375 https://en.wikibooks.org/wiki/User:Whiteknight
376 https://en.wikibooks.org/wiki/
http://commons.wikimedia.org/w/index.php?title=User:Warlock31415&action=edit&redlink=
377
1
https://commons.wikimedia.org/w/index.php?title=User:Warlock31415&action=edit&
378
redlink=1
379 http://commons.wikimedia.org/wiki/User:Netnet
380 https://commons.wikimedia.org/wiki/User:Netnet
381 http://commons.wikimedia.org/wiki/User:Netnet
382 https://commons.wikimedia.org/wiki/User:Netnet
383 http://commons.wikimedia.org/wiki/User:Netnet
384 https://commons.wikimedia.org/wiki/User:Netnet
385 https://sk.wikipedia.org/wiki/user:robo
386 https://en.wikibooks.org/wiki/User:Whiteknight
387 https://en.wikibooks.org/wiki/
388 https://en.wikibooks.org/wiki/User:Whiteknight
389 https://en.wikibooks.org/wiki/
390 https://en.wikibooks.org/wiki/User:Whiteknight
391 https://en.wikibooks.org/wiki/
48 Licenses