Nonlinear and Adaptive Control Systems
An adaptive system for linear systems with unknown parameters is a nonlinear system. The analysis of such adaptive systems requires similar techniques to analyse nonlinear systems. Therefore it is natural to treat adaptive control as a part of nonlinear control systems.

Nonlinear and Adaptive Control Systems treats nonlinear control and adaptive control in a unified framework, presenting the major results at a moderate mathematical level, suitable for MSc students and engineers with undergraduate degrees. Topics covered include introduction to nonlinear systems; state space models; describing functions for common nonlinear components; stability theory; feedback linearization; adaptive control; nonlinear observer design; backstepping design; disturbance rejection and output regulation; and control applications, including harmonic estimation and rejection in power distribution systems, observer and control design for circadian rhythms, and discrete-time implementation of continuous-time nonlinear control laws.

Zhengtao Ding is a Senior Lecturer in Control Engineering and Director for MSc in Advanced Control and Systems Engineering at the Control Systems Centre, School of Electrical and Electronic Engineering, The University of Manchester, UK. His research interests focus on nonlinear and adaptive control design. He pioneered research in asymptotic rejection of general periodic disturbances in nonlinear systems and produced a series of results to systematically solve this problem in various situations. He also made significant contributions in output regulation and adaptive control of nonlinear systems, with some more recent results on observer design and output feedback control as well. Dr Ding has been teaching 'Nonlinear and Adaptive Control Systems' to MSc students for 9 years, and he has accumulated tremendous experience in explaining difficult control concepts to students.
Zhengtao Ding
The Institution of Engineering and Technology
www.theiet.org
978-1-84919-574-4
Other volumes in this series:
Volume 8 A history of control engineering, 1800–1930 S. Bennett
Volume 18 Applied control theory, 2nd Edition J.R. Leigh
Volume 20 Design of modern control systems D.J. Bell, P.A. Cook and N. Munro (Editors)
Volume 28 Robots and automated manufacture J. Billingsley (Editor)
Volume 33 Temperature measurement and control J.R. Leigh
Volume 34 Singular perturbation methodology in control systems D.S. Naidu
Volume 35 Implementation of self-tuning controllers K. Warwick (Editor)
Volume 37 Industrial digital control systems, 2nd Edition K. Warwick and D. Rees (Editors)
Volume 39 Continuous time controller design R. Balasubramanian
Volume 40 Deterministic control of uncertain systems A.S.I. Zinober (Editor)
Volume 41 Computer control of real-time processes S. Bennett and G.S. Virk (Editors)
Volume 42 Digital signal processing: principles, devices and applications N.B. Jones and
J.D.McK. Watson (Editors)
Volume 44 Knowledge-based systems for industrial control J. McGhee, M.J. Grimble and A. Mowforth
(Editors)
Volume 47 A history of control engineering, 1930–1956 S. Bennett
Volume 49 Polynomial methods in optimal control and filtering K.J. Hunt (Editor)
Volume 50 Programming industrial control systems using IEC 1131-3 R.W. Lewis
Volume 51 Advanced robotics and intelligent machines J.O. Gray and D.G. Caldwell (Editors)
Volume 52 Adaptive prediction and predictive control P.P. Kanjilal
Volume 53 Neural network applications in control G.W. Irwin, K. Warwick and K.J. Hunt (Editors)
Volume 54 Control engineering solutions: a practical approach P. Albertos, R. Strietzel and N. Mort
(Editors)
Volume 55 Genetic algorithms in engineering systems A.M.S. Zalzala and P.J. Fleming (Editors)
Volume 56 Symbolic methods in control system analysis and design N. Munro (Editor)
Volume 57 Flight control systems R.W. Pratt (Editor)
Volume 58 Power-plant control and instrumentation: the control of boilers and HRSG systems
D. Lindsley
Volume 59 Modelling control systems using IEC 61499 R. Lewis
Volume 60 People in control: human factors in control room design J. Noyes and M. Bransby (Editors)
Volume 61 Nonlinear predictive control: theory and practice B. Kouvaritakis and M. Cannon (Editors)
Volume 62 Active sound and vibration control M.O. Tokhi and S.M. Veres
Volume 63 Stepping motors, 4th edition P.P. Acarnley
Volume 64 Control theory, 2nd Edition J.R. Leigh
Volume 65 Modelling and parameter estimation of dynamic systems J.R. Raol, G. Girija and J. Singh
Volume 66 Variable structure systems: from principles to implementation A. Sabanovic, L. Fridman
and S. Spurgeon (Editors)
Volume 67 Motion vision: design of compact motion sensing solution for autonomous systems
J. Kolodko and L. Vlacic
Volume 68 Flexible robot manipulators: modelling, simulation and control M.O. Tokhi and
A.K.M. Azad (Editors)
Volume 69 Advances in unmanned marine vehicles G. Roberts and R. Sutton
(Editors)
Volume 70 Intelligent control systems using computational intelligence techniques A. Ruano
(Editor)
Volume 71 Advances in cognitive systems S. Nefti and J. Gray (Editors)
Volume 72 Control theory: a guided tour, 3rd Edition J.R. Leigh
Volume 73 Adaptive sampling with Mobile WSN K. Sreenath, M.F. Mysorewala, D.O. Popa
and F.L. Lewis
Volume 74 Eigenstructure control algorithms: applications to aircraft/rotorcraft handling
qualities design S. Srinathkumar
Volume 75 Advanced control for constrained processes and systems F. Garelli, R.J. Mantz and
H. De Battista
Volume 76 Developments in control theory towards glocal control L. Qiu, J. Chen, T. Iwasaki and
H. Fujioka
Volume 77 Further advances in unmanned marine vehicles G.N. Roberts and R. Sutton (Editors)
Volume 78 Frequency-domain control design for high-performance systems J. O’Brien
This publication is copyright under the Berne Convention and the Universal Copyright
Convention. All rights reserved. Apart from any fair dealing for the purposes of research or
private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act
1988, this publication may be reproduced, stored or transmitted, in any form or by any means,
only with the prior permission in writing of the publishers, or in the case of reprographic
reproduction in accordance with the terms of licences issued by the Copyright Licensing
Agency. Enquiries concerning reproduction outside those terms should be sent to the
publisher at the undermentioned address:
www.theiet.org
While the author and publisher believe that the information and guidance given in
this work are correct, all parties must rely upon their own skill and judgement when
making use of them. Neither the author nor the publisher assumes any liability to
anyone for any loss or damage caused by any error or omission in the work, whether
such an error or omission is the result of negligence or any other cause. Any and all
such liability is disclaimed.
The moral rights of the author to be identified as author of this work have been
asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Preface ix
3 Describing functions 25
3.1 Fundamentals 26
3.2 Describing functions for common nonlinear components 29
3.3 Describing function analysis of nonlinear systems 34
4 Stability theory 41
4.1 Basic definitions 41
4.2 Linearisation and local stability 45
4.3 Lyapunov’s direct method 46
4.4 Lyapunov analysis of linear time-invariant systems 51
This book is intended for use as a textbook at MSc and senior undergraduate
level in control engineering and related disciplines such as electrical, mechanical,
chemical and aerospace engineering and applied mathematics. It can also be used as
a reference book by control engineers in industry and research students in automation
and control. It is largely, although not entirely, based on the course unit bearing the
same name as the book title that I have been teaching for several years for the MSc
course at Control Systems Centre, School of Electrical and Electronic Engineering,
The University of Manchester. The beginning chapters cover fundamental concepts
in nonlinear control at moderate mathematical level suitable for students with a first
degree in engineering disciplines. Simple examples are used to illustrate important
concepts, such as the difference between exponential stability and asymptotic stability.
Some advanced and recent stability concepts such as input-to-state stability are also
included, mainly as an introduction at a less-demanding mathematical level compared
with their normal descriptions in the existing books, to research students who may
encounter those concepts in literature. Most of the theorems in the beginning chapters
are introduced with the proofs, and some of the theorems are simplified to less
general scope, but without loss of rigour. The later chapters cover several topics
which are closely related to my own research activities, such as nonlinear observer
design and asymptotic disturbance rejection of nonlinear systems. They are included
to demonstrate the applications of fundamental concepts in nonlinear and adaptive
control to MSc and research students, and to bridge the gap between a normal textbook
treatment of control concepts and that of research articles published in academic
journals. They can also be used as references for the students who are working on
the related topics. At the end of the book, applications to less traditional areas such
as control of circadian rhythms are also shown, to encourage readers to explore new
applied areas of nonlinear and adaptive control.
This book aims at a unified treatment of adaptive and nonlinear control.
It is well known that the dynamics of an adaptive control system for a linear dynamic
system with unknown parameters are nonlinear. The analysis of such adaptive sys-
tems requires techniques similar to those used for nonlinear systems. Some more
recent control design techniques, such as backstepping, rely on Lyapunov functions
to establish stability, and they can be directly extended to adaptive control of non-
linear systems. These techniques further reduce the traditional gap between adaptive
control and nonlinear control. Therefore, it is now natural to treat adaptive control
as a part of nonlinear control systems. The foundation for linear adaptive control
and nonlinear adaptive control is the positive real lemma, which is related to passive
systems in nonlinear control and Lyapunov analysis. I have chosen to use the positive
real lemma and related results in adaptive control and nonlinear control as the
main theme of the book, together with Lyapunov analysis. Other important results
such as circle criterion and backstepping are introduced as extensions and further
developments from this main theme.
For a course unit of 15 credits on nonlinear and adaptive control at the Control
Systems Centre, I normally cover Chapters 1–4, 6 and 7, and most of the contents of
Chapter 5, and about half of the materials in Chapter 9. Most of the topics covered
in Chapters 8, 10 and 11 have been used as MSc dissertation projects and some of
them as PhD projects. The contents may also be used for an introductory course
on nonlinear control systems, by including Chapters 1–5, 8 and the first half of
Chapter 9, and possibly Chapter 6. For a course on adaptive control of nonlinear
systems, an instructor may include Chapters 1, 2, 4, 5, 7 and 9. Chapter 8 may be
used alone as a brief introduction course to nonlinear observer design. Some results
shown in Chapters 8, 10 and 11 are recently published, and can be used as references
for the latest developments in related areas.
Nonlinear and adaptive control is still a very active research area in automation
and control, with many new theoretical results and applications continuing to emerge.
I hope that the publication of this work will have a good impact, however small,
on students' interest in the subject. I have benefited from my students, both
undergraduate and MSc, through my teaching and other interactions with
them, in particular their questions asking me to explain many of the topics covered
in this book in simple language and with examples. My research collaborators and
PhD students have contributed to several topics covered in the book through joint
journal publications, whose names may be found in the references cited at the end of
the book. I would like to thank all the researchers in the area who contributed to the
topics covered in the book, who are the very people that make this subject fascinating.
Chapter 1
Introduction to nonlinear and adaptive systems
Nonlinearity is ubiquitous, and almost all systems are nonlinear. Many
of them can be approximated by linear dynamic systems, and a significant amount of
analysis and control design tools can then be applied. However, there are intrinsic
nonlinear behaviours which cannot be described using linear systems, and analysis
and control are necessarily based on nonlinear systems. Even for a linear system,
if there are uncertainties, nonlinear control strategies such as adaptive control may
have to be used. In the last two decades, there have been significant developments in
nonlinear system analysis and control design. Some of them are covered in this book.
In this chapter, we will discuss typical nonlinearities and nonlinear behaviours, and
introduce some basic concepts for nonlinear system analysis and control.
ẋ = Ax + Bu,
y = Cx + Du,        (1.1)
ẋ = Ax + Bσ(u),
y = Cx + Du,        (1.2)
The only difference between the systems (1.2) and (1.1) is the saturation function σ .
It is clear that the saturation function σ is a nonlinear function, and therefore this
system is a nonlinear system. Indeed, it can be seen that the superposition principle
does not apply, because after the input saturation, any increase in the input amplitude
does not change the system response at all.
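This failure of superposition can be checked directly with a one-line saturation function. In this sketch, sat() is a stand-in for the saturation σ with a unit limit; the limit value and test inputs are illustrative assumptions, not values from the text.

```python
# A quick check that the saturation nonlinearity violates superposition.
# The unit limit and the inputs u1, u2 are illustrative assumptions.

def sat(u, limit=1.0):
    """Saturation: clip u to the interval [-limit, limit]."""
    return max(-limit, min(limit, u))

# Superposition would require sat(u1 + u2) == sat(u1) + sat(u2):
u1, u2 = 0.8, 0.8
print(sat(u1 + u2))        # clipped to 1.0
print(sat(u1) + sat(u2))   # 1.6, so the two responses differ
```

Once the combined input exceeds the limit, further increases in input amplitude no longer change the output, exactly as described above.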
A general nonlinear system is often described by
ẋ = f(x, u, t),
y = h(x, u, t),        (1.4)
where x ∈ Rn , u ∈ Rm and y ∈ Rs are the system state, input and output respectively,
and f : Rn × Rm × R → Rn and h : Rn × Rm × R → Rs are nonlinear functions.
Nonlinearities of a dynamic system are described by nonlinear functions. We may
roughly classify nonlinear functions in nonlinear dynamic systems into two types.
The first type of nonlinear functions are analytical functions such as polynomials,
sinusoidal functions and exponential functions, or composition of these functions.
The derivatives of these functions exist, and their Taylor series can be used to obtain
good approximations at any points. These nonlinearities may arise from physical
modelling of actual systems, such as nonlinear springs and nonlinear resistors, or due
to nonlinear control design, such as nonlinear damping and parameter adaptation law
for adaptive control. There are nonlinear control methods, such as backstepping, which
require the existence of derivatives up to certain orders.
Consider, for example, the first-order linear system
ẋ = ax + u,
where the parameter a is unknown. If an upper bound a⁺ ≥ a is known, the control law
u = −cx − a⁺x
gives the stable closed-loop system
ẋ = −cx + (a − a⁺)x.
When no such bound is available, an estimate â of a can be used instead, with the control law and parameter adaptation law
u = −cx − âx,
â˙ = x².
With the estimation error ã = a − â, the closed-loop system becomes
ẋ = −cx + ãx,
ã˙ = −x².
This adaptive system is nonlinear, even though the original uncertain system is linear.
This adaptive system is stable, but establishing its stability requires the theory
introduced later in Chapter 7 of the book.
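The behaviour of this adaptive loop can be sketched with a short forward-Euler simulation. The numerical values here (true parameter a = 2, gain c = 1, x(0) = 1, â(0) = 0, step size, horizon) are illustrative assumptions, not from the text.

```python
# Forward-Euler simulation of the adaptive loop above with assumed values.

def simulate(a=2.0, c=1.0, x0=1.0, dt=1e-3, t_end=20.0):
    x, a_hat = x0, 0.0                 # a_hat is the estimate of the unknown a
    for _ in range(int(t_end / dt)):
        u = -c * x - a_hat * x         # adaptive control law
        x_dot = a * x + u              # plant dynamics (unstable for a > 0)
        a_hat_dot = x ** 2             # parameter adaptation law
        x += dt * x_dot
        a_hat += dt * a_hat_dot
    return x, a_hat

x_final, a_hat_final = simulate()
print(abs(x_final))     # driven to essentially zero
print(a_hat_final)      # settles at a finite value, not necessarily at a
```

Note that the estimate need not converge to the true parameter; it only needs to grow large enough to stabilise the state, after which adaptation stops because x² vanishes.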
ẋ = −x + x².
This system has two equilibria at x = 0 and x = 1. The behaviours around these equi-
librium points are very different, and they cannot be described by a single linearised
model.
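The differing behaviour near the two equilibria can be confirmed numerically. A minimal forward-Euler sketch follows; the initial conditions, step size and horizon are illustrative choices, not values from the text.

```python
# Forward-Euler integration of dx/dt = -x + x**2 from several initial conditions.

def f(x):
    return -x + x ** 2

def evolve(x0, dt=1e-3, t_end=5.0):
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * f(x)
    return x

print(evolve(0.5))               # below 1: decays towards the equilibrium 0
print(evolve(1.0))               # exactly at the equilibrium 1: stays put
print(evolve(1.05, t_end=2.0))   # above 1: moves away from both equilibria
```

The linearisation at x = 0 has eigenvalue −1 (stable) and at x = 1 has eigenvalue +1 (unstable), which matches what the simulation shows: no single linearised model captures both behaviours.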
Limit cycles are a phenomenon in which periodic solutions exist and attract nearby
trajectories in positive or negative time. Closed curve solutions may exist for linear
systems, such as the solutions of harmonic oscillators, but they do not attract nearby
trajectories and are not robust to disturbances. Heartbeats of the human body can be
modelled as limit cycles of nonlinear systems.
High-order harmonics and subharmonics can occur in the system output when it is
subject to a harmonic input. For linear systems, if the input is a harmonic function, the
output is a harmonic function, with the same frequency, but different amplitude
and phase. For nonlinear systems, the output may even have harmonic functions with
fractional frequency of the input or multiples of the input frequency. This phenomenon
is common in power distribution networks.
Finite time escape can happen in a nonlinear system, i.e., the system state tends
to infinity at a finite time. This will never happen for linear systems. Even for an
unstable linear system, the system state can only grow at an exponential rate. The
finite time escape can cause a problem in nonlinear system design, as a trajectory
may not exist.
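The classic illustration of finite time escape, used here as an assumed example rather than one from the text, is ẋ = x², whose solution x(t) = x₀/(1 − x₀t) tends to infinity at t = 1/x₀. A short simulation shows the blow-up:

```python
# Forward-Euler integration of dx/dt = x**2, which escapes at t = 1/x0.
# Step size and horizons are illustrative choices.

def escape_state(x0=1.0, dt=1e-4, t_end=0.999):
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * x * x
    return x

print(escape_state(t_end=0.9))    # already large well before the escape time t = 1
print(escape_state(t_end=0.999))  # grows without bound as t approaches 1
```

Contrast this with an unstable linear system ẋ = x, whose state at any finite time is merely eᵗx₀, always finite.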
Finite time convergence to an equilibrium point can happen to nonlinear systems.
Indeed, we can design nonlinear systems in this way to achieve fast convergence.
This, again, cannot happen for linear systems, as the convergence rate can only be
exponential, i.e., a linear system can only converge to its equilibrium asymptotically.
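An assumed example of finite time convergence (not from the text) is ẋ = −sign(x)√|x|, which reaches the origin at exactly t = 2√x(0), whereas the linear system ẋ = −x only approaches it asymptotically:

```python
import math

# dx/dt = -sign(x)*sqrt(|x|) reaches x = 0 at t = 2*sqrt(x0); simulated here
# with forward Euler from x0 = 1 (illustrative step size and horizon).

def finite_time_state(x0=1.0, dt=1e-4, t_end=2.5):
    x = x0
    for _ in range(int(t_end / dt)):
        x -= dt * math.copysign(math.sqrt(abs(x)), x)
    return x

print(abs(finite_time_state()))  # essentially zero shortly after t = 2
print(math.exp(-2.5))            # the linear decay e**(-t) is still about 0.082
```

The non-Lipschitz right-hand side at the origin is exactly what makes a finite convergence time possible.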
Chaos can only happen in nonlinear dynamic systems. For some classes of non-
linear systems, the trajectories are bounded but do not converge to any equilibrium
or limit cycle. They may have quasi-periodic solutions, and the behaviour is very
difficult to predict.
There are other nonlinear behaviours such as bifurcation, etc., which cannot
happen in linear systems. Some of the nonlinear behaviours are covered in detail in
this book, such as limit cycles and high-order harmonics. Limit cycles and chaos are
discussed in Chapter 2, and limit cycles also appear in other problems considered
in this book. High-order harmonics are discussed in disturbance rejection. When the
disturbance is a harmonic signal, the internal model for disturbance rejection has to
consider the high-order harmonics generated due to nonlinearities.
People often start with linearisation of a nonlinear system. If the control design
based on a linearised model works, then there is no need to worry about nonlinear
control design. Linearised models depend on operating points, and a switching strat-
egy might be needed to move from one operating point to another. Gain scheduling
and linear parameter variation (LPV) methods are also closely related to linearisation
around operating points.
Linearisation can also be achieved for certain classes of nonlinear systems through
a nonlinear state transformation and feedback. This linearisation is very much differ-
ent from linearisation around operating points. As shown later in Chapter 6, a number
of geometric conditions must be satisfied for the existence of such a nonlinear trans-
formation. The linearisation obtained in this way works globally in the state space,
not just at one operating point. Once the linearised model is obtained, further control
design can be carried out using design methods for linear systems.
Nonlinear functions can be approximated using artificial neural networks, and
fuzzy systems and control methods have been developed using these approximation
methods. The stability analysis of such systems is often similar to Lyapunov function-
based design methods and adaptive control. We will not cover them in this book. Other
nonlinear control design methods such as bang–bang control and sliding mode control
are also not covered.
In the last two decades, there have been developments of some more systematic
control design methods, such as backstepping and forwarding. They require the system
to have certain structures so that these iterative control designs can be carried out.
Among them, the backstepping method is perhaps the most popular one. As shown in
Chapter 9 of the book, it requires the system state space functions to be in a lower-
triangular form so that at each step a virtual control input can be designed. A significant
amount of coverage of this topic can be found in this book. Forwarding control design
can be interpreted as a counterpart of backstepping in principle, but it is not covered
in this book.
When there are parametric uncertainties, adaptive control can be introduced to
tackle the uncertainty. As shown in a simple example earlier, an adaptive control sys-
tem is nonlinear, even for a linear system. Adaptive technique can also be introduced
together with other nonlinear control design methods, such as backstepping method.
In such a case, people often give it a name: adaptive backstepping. Adaptive control
for linear systems and adaptive backstepping for nonlinear systems are covered in
detail in Chapter 7 and Chapter 9 of this book.
Similar to linear control system design, nonlinear control design methods can
also be grouped as state-feedback control design and output-feedback control design.
The difference is that the separation principle is not valid for nonlinear control design
in general; that is, if we replace the state in the control input by its estimate, we cannot
in general guarantee the stability of the closed-loop system under the state estimate.
Often state estimation must be integrated in the control design, such as observer
backstepping method.
State estimation is an important topic for nonlinear systems on its own. Over the
last three decades, various observer design methods have been introduced. Some of
them may have their counterparts in control design. Design methods are developed
for different nonlinearities. One of them is for systems with Lipschitz nonlinearity,
as shown in Chapter 8. A very neat nonlinear observer design is the observer design
with output injection, which can be applied to a class of nonlinear systems whose
nonlinearities depend only on the system output.
In recent years, the concept of semi-global stability has been gaining popularity.
Semi-global stability is not as strong as global stability, but the domain of attraction
can be made as large as specified. Relaxing the requirement of a global domain of attraction
gives the control design more freedom in choosing control laws. One common strategy is
to use high-gain control together with saturation. We will not cover it in this book,
but the design methods in semi-global stability can be easily followed once a reader
is familiar with the control design and analysis methods introduced in this book.
Chapter 2
State space models
The nonlinear systems under consideration in this book are described by differential
equations. In the same way as for linear systems, we have system state variables, inputs
and outputs. In this chapter, we will provide basic definitions for state space models of
nonlinear systems, and tools for preliminary analysis, including linearisation around
operating points. Typical nonlinear behaviours such as limit cycles and chaos will
also be discussed with examples.
‖f(x1, t) − f(x2, t)‖ ≤ γ‖x1 − x2‖,
with γ > 0.
Note that the Lipschitz condition implies continuity with respect to x. The existence
and uniqueness of a solution for (2.1) are guaranteed by the function f being Lipschitz
in x and continuous with respect to t.
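A standard illustration of what can go wrong without the Lipschitz condition (the example system is an assumption, not from the text): f(x) = 3x^(2/3) is continuous but not Lipschitz at x = 0, and the initial value problem ẋ = f(x), x(0) = 0 has at least two solutions, x(t) ≡ 0 and x(t) = t³. A numerical check that both candidates satisfy the equation:

```python
# f(x) = 3*x**(2/3) is continuous but not Lipschitz at x = 0, so uniqueness
# fails from x(0) = 0: both x(t) = 0 and x(t) = t**3 solve dx/dt = f(x).

def f(x):
    return 3.0 * x ** (2.0 / 3.0)   # valid for x >= 0

def solution_residual(x_of_t, t, h=1e-6):
    # Compare a numerical derivative of a candidate solution against f(x(t)).
    dxdt = (x_of_t(t + h) - x_of_t(t - h)) / (2 * h)
    return abs(dxdt - f(x_of_t(t)))

print(solution_residual(lambda t: 0.0, 0.5))      # the identically zero solution
print(solution_residual(lambda t: t ** 3, 0.5))   # x = t**3 also satisfies the ODE
```

Both residuals are negligibly small, confirming that two distinct solutions start from the same initial condition.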
Remark 2.1. The continuity of f with respect to t and the state variable x might be
stronger than what we have in real applications. For example, a step function is not
continuous in time. In the case that there is a finite number of discontinuities in a given
interval, we can solve the equation for a solution in each continuous region and
join the solutions together. There are also situations of discontinuity with respect to the
state variable, such as an ideal relay. In such a case, the uniqueness of the solution can
be an issue. Further discussion on this is beyond the scope of this book. For the systems
considered in the book, we assume that there is no problem with the uniqueness of
a solution.
The system state contains the complete information about the system's behaviour. However,
for a particular application, only a subset of the state variables or a function of state
variables is of interest, which can be denoted as y = h(x, u, t) with h : Rn × Rm ×
R → Rs , normally with s < n. We often refer to y as the output of the system. To
write them together with the system dynamics, we have
ẋ = f (x, u, t), x(0) = x0
y = h(x, u, t).
In this book, we mainly deal with time-invariant systems. Hence, we can drop the
variable t in f and h and write the system as
ẋ = f(x, u),
y = h(x, u),        (2.3)
where x ∈ Rn is the state of the system, y ∈ Rs and u ∈ Rm are the output and the
input of the system respectively, and f : Rn × Rm → Rn and h : Rn × Rm → Rs are
continuous functions.
Nonlinear system dynamics are much more complex than linear systems in
general. However, when the state variables are subject to small variations, we would
expect the behaviours for small variations to be similar to linear systems, based on
the fact that
f(x + δx, u + δu) ≈ f(x, u) + (∂f/∂x)(x, u) δx + (∂f/∂u)(x, u) δu,
when δx and δu are very small.
An operating point at (xe , ue ) is taken with x = xe and u = ue being constants
such that f(xe, ue) = 0. A linearised model around the operating point can then be
obtained. Let
x̄ = x − xe ,
ū = u − ue ,
ȳ = h(x, u) − h(xe , ue ),
ai,j = (∂fi/∂xj)(xe, ue),
bi,j = (∂fi/∂uj)(xe, ue),
ci,j = (∂hi/∂xj)(xe, ue),
di,j = (∂hi/∂uj)(xe, ue).
The linearised model is then given by
x̄˙ = Ax̄ + Bū,
ȳ = C x̄ + Dū.
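The partial-derivative entries defined above can be approximated by central finite differences when an analytical Jacobian is inconvenient. In this sketch the pendulum-like model f and the operating point are illustrative assumptions, not taken from the text.

```python
import math

# Numerical approximation of A (df/dx) and B (df/du) at an operating point,
# using central differences.  The model below is an assumed example.

def f(x, u):
    # Damped pendulum with torque input: x1'' = -sin(x1) - x1' + u
    return [x[1], -math.sin(x[0]) - x[1] + u]

def jacobians(f, xe, ue, h=1e-6):
    n = len(xe)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] for _ in range(n)]
    for j in range(n):                    # columns of A: perturb each state
        xp, xm = list(xe), list(xe)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp, ue), f(xm, ue)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * h)
    fp, fm = f(xe, ue + h), f(xe, ue - h)  # column of B: perturb the input
    for i in range(n):
        B[i][0] = (fp[i] - fm[i]) / (2 * h)
    return A, B

A, B = jacobians(f, [0.0, 0.0], 0.0)
print(A)  # approximately [[0, 1], [-1, -1]]
print(B)  # approximately [[0], [1]]
```

For this model the analytical Jacobians at the origin are A = [0 1; −cos(0) −1] and B = [0; 1], which the finite differences reproduce to high accuracy.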
Remark 2.2. For a practical system, a control input can keep the state in an equilib-
rium point, i.e., at a point such that ẋ = 0, and therefore it is natural to look at the
linearisation around this point. However, we can obtain linearised model at points that
are not at equilibrium. If (xe, ue) is not an equilibrium point, we have f(xe, ue) ≠ 0.
We can carry out the linearisation in the same way, but the resultant linearised system
is given by
x̄˙ = Ax̄ + Bū + d
ȳ = C x̄ + Dū,
where d = f(xe, ue) is a constant term.
ẋ = f (x), (2.5)
Remark 2.3. It is easy to see that for a system in (2.3), if the control input remains
constant, then it is an autonomous system. We only need to re-define the function f as
fa(x) := f(x, uc), where uc is a constant input. Even if the inputs are polynomial and
sinusoidal functions of time, we can convert the system to an autonomous system by
modelling the sinusoidal and polynomial functions as the state variables of a linear
dynamic system, and integrating this system into the original system. The augmented
system is then an autonomous system.
[Figure: phase portrait of a stable node, axes x1 and x2]
Unstable node, for λ1 > 0, λ2 > 0. This singular point is unstable, and the trajectories
diverge from the point but do not spiral around it.
[Figure: phase portrait of an unstable node, axes x1 and x2]
Saddle point, for λ1 < 0, λ2 > 0. With one positive and one negative eigenvalue, the
corresponding surface in three dimensions may look like a saddle. Some trajectories converge
to the singular point, and others diverge, depending on the directions of approaching
the point.
[Figure: phase portrait of a saddle point, axes x1 and x2]
Stable focus, for λ1,2 = μ ± jν, (μ < 0). With a negative real part for a pair of con-
jugate poles, the singular point is stable. Trajectories converge to the singular point,
spiralling around it. In the time domain, the solutions are similar to decaying sinusoidal
functions.
[Figure: phase portrait of a stable focus, axes x1 and x2]
Unstable focus, for λ1,2 = μ ± jν, (μ > 0). The real part is positive, and therefore
the singular point is unstable, with the trajectories spiralling out from the singular
point.
[Figure: phase portrait of an unstable focus, axes x1 and x2]
Centre, for λ1,2 = ±jν, (ν > 0). For the linearised model, when the real part is zero,
the norm of the state is constant. In phase portrait, there are closed orbits around the
singular point.
[Figure: phase portrait of a centre, axes x1 and x2]
To draw a phase portrait, the analysis of singular points is the first step. Based
on the classification of the singular points, the behaviours in neighbourhoods of these
points are more or less determined. For other regions, we can calculate the directions
of the movement from the derivatives.
At any point, the slope of the trajectory can be computed by
dx2/dx1 = f2(x1, x2)/f1(x1, x2).
With enough points in the plane, we should be able to sketch phase portraits connect-
ing the points in the directions determined by the slopes.
Indeed, we can even obtain curves with constant slopes, which are called
isoclines. An isocline is a curve on which f2(x1, x2)/f1(x1, x2) is constant. This, again,
can be useful in sketching a phase portrait for a second-order nonlinear system.
It should be noted that modern computer simulation can provide accurate solu-
tions to many nonlinear differential equations. For this reason, we will not go into
further detail on drawing phase portraits based on calculating slopes of trajectories
and isoclines.
Example 2.1. Consider the second-order system
ẋ1 = x2
ẋ2 = −x2 − 2x1 + x1².
Setting
0 = x2 ,
0 = −x2 − 2x1 + x1²,
we obtain two singular points (0, 0) and (−2, 0).
Linearised system matrix for the singular point (0, 0) is obtained as
A = [  0   1
      −1  −2 ],
and the eigenvalues are obtained as λ1 = λ2 = −1. Hence, this singular point is a
stable node.
For the singular point (−2, 0), the linearised system matrix is obtained as
A = [ 0   1
      2  −1 ],
and the eigenvalues are λ1 = −2 and λ2 = 1. Hence, this singular point is a saddle
point. It is useful to obtain the corresponding eigenvectors to determine which direc-
tion is converging and which is diverging. The eigenvectors v1 and v2, for λ1 = −2
and λ2 = 1, are obtained as
v1 = [1, −2]ᵀ,  v2 = [1, 1]ᵀ.
This suggests that along the direction of v1 , relative to the singular point, the state
converges to the singular point, while along v2 , the state diverges. We can clearly
see from Figure 2.1 that there is a stable region near the singular point (0, 0).
However, in the neighbourhood of (−2, 0) one part of it is stable, and the other part
is unstable.
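The eigenvalue and eigenvector claims for the saddle point can be verified numerically; for a 2 × 2 matrix the characteristic polynomial gives the eigenvalues in closed form. This is a sketch of that check:

```python
import math

# Verify the saddle-point analysis for A = [[0, 1], [2, -1]]:
# eigenvalues solve l**2 - tr(A)*l + det(A) = 0, i.e. l**2 + l - 2 = 0.

A = [[0.0, 1.0], [2.0, -1.0]]
tr = A[0][0] + A[1][1]                        # trace = -1
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = -2
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2
print(lam1, lam2)   # -2.0 and 1.0

def is_eigenvector(A, v, lam, tol=1e-9):
    Av = [A[0][0] * v[0] + A[0][1] * v[1],
          A[1][0] * v[0] + A[1][1] * v[1]]
    return abs(Av[0] - lam * v[0]) < tol and abs(Av[1] - lam * v[1]) < tol

print(is_eigenvector(A, [1.0, -2.0], -2.0))  # v1 = (1, -2) for lambda1 = -2
print(is_eigenvector(A, [1.0, 1.0], 1.0))    # v2 = (1, 1) for lambda2 = 1
```

One negative and one positive eigenvalue confirms the saddle classification, and the eigenvector check confirms the converging and diverging directions stated above.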
[Figure 2.1: phase portrait of the system, axes x1 and x2]
Example 2.2. Consider the dynamics of a synchronous generator described by
H δ̈ = Pm − Pe sin δ,
where H is the inertia, δ is the rotor angle, Pm is the mechanical power and Pe is the
maximum electrical power generated. We may view Pm as the input and Pe sin δ as
the output. For the convenience of presentation, we take H = 1, Pm = 1 and Pe = 2.
The state space model is obtained by letting x1 = δ and x2 = δ̇ as
ẋ1 = x2
ẋ2 = 1 − 2 sin (x1 ).
Note that there are an infinite number of singular points, as x1e = 2kπ + π/6 and
x1e = 2kπ + 5π/6 are also solutions for any integer value of k.
Let us concentrate on the analysis of the two singular points (π/6, 0) and (5π/6, 0).
The linearised system matrix is obtained as
A = [ 0            1
      −2 cos(x1e)  0 ].
For (π/6, 0), the eigenvalues are λ1,2 = ±3^(1/4) j, and therefore this singular point is a
centre.
For (5π/6, 0), the eigenvalues are λ1 = −3^(1/4) and λ2 = 3^(1/4). Hence, this singular
point is a saddle point. The eigenvectors v1 and v2, for λ1 = −3^(1/4) and λ2 = 3^(1/4),
are obtained as
v1 = [1, −3^(1/4)]ᵀ,  v2 = [1, 3^(1/4)]ᵀ.
[Figure 2.2: phase portrait of the generator model, axes x1 and x2]
Figure 2.2 shows a phase portrait obtained from computer simulation. The centre at
(π/6, 0) and the saddle point at (5π/6, 0) are clearly shown in the figure. The directions
of the flow can be determined from the eigenvectors of the saddle point. For example,
the trajectories starting from the points around (5, −3) move upwards and to the left,
along the direction pointed by the eigenvector v1, towards the saddle point. Along the
direction pointed by v2, the trajectories depart from the saddle point.
Example 2.3. One form of the van der Pol oscillator is described by the following
differential equation:
ÿ − ε(1 − y²)ẏ + y = 0,        (2.8)
where ε is a positive real constant.
If we take x1 = y and x2 = ẏ, we obtain the state space equation
ẋ1 = x2
ẋ2 = −x1 + ε(1 − x1²)x2.
From this state space realisation, it can be seen that when ε = 0, the van der Pol oscillator
is the same as a harmonic oscillator. For small values of ε, one would expect it to
behave like a harmonic oscillator.
A more revealing state transformation for large values of ε is given by
x1 = y,
x2 = ẏ/ε + f(y),
dx2 x1
(x2 − f (x1 )) = 2 (2.10)
dx1
From the eigenvalues of A, we can see that this singular point is either an unstable
node or an unstable focus, depending on the value of ε. The phase portrait of the van der
Pol oscillator with ε = 1 is shown in Figure 2.3 for two trajectories, one with an initial
condition outside the limit cycle and one from inside. The broken line shows x2 =
f (x1). Figure 2.4 shows the phase portrait with ε = 10. It is clear from Figure 2.4 that
the trajectory closely follows the line x2 = f (x1) along the outside and then moves almost
horizontally to the other side, as predicted in the analysis earlier.
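The limit cycle itself is easy to check numerically. The sketch below is a minimal simulation of (2.8) in the state space form given earlier; the parameter value ε = 1 is an arbitrary choice for illustration, and the trajectory is started inside the limit cycle and checked against the well-known steady amplitude of about 2 in x1:

```python
import math

EPS = 1.0  # epsilon in (2.8); chosen here only for illustration

def f(x):
    # van der Pol oscillator in state space form
    return [x[1], -x[0] + EPS * (1.0 - x[0] ** 2) * x[1]]

def rk4_step(x, h):
    k1 = f(x)
    k2 = f([x[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

x = [0.1, 0.0]           # start inside the limit cycle
h, steps = 0.001, 60000  # 60 seconds, enough to converge
amplitude = 0.0
for n in range(steps):
    x = rk4_step(x, h)
    if n > steps // 2:   # measure only after the transient
        amplitude = max(amplitude, abs(x[0]))
```

Trajectories started outside the limit cycle converge to the same closed orbit, as the phase portraits show.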
Limit cycles also exist in high-order nonlinear systems. As seen later in Chapter 11,
circadian rhythms can also be modelled as limit cycles of nonlinear dynamic
systems. For second-order autonomous systems, limit cycles are very typical trajectories.
The following theorem, the Poincaré–Bendixson theorem, describes the features
of trajectories of second-order systems, from which a condition on the existence
of a limit cycle can be drawn.
[Figure 2.3: phase portrait of the van der Pol oscillator with ε = 1; axes x1 and x2]
[Figure 2.4: phase portrait of the van der Pol oscillator with ε = 10; axes x1 and x2]
For high-order nonlinear systems, there are more complicated features if the
trajectories remain in a bounded region. For the asymptotic behaviours of dynamic
systems, we define positive limit sets.
Definition 2.5. The positive limit set of a trajectory is the set of all the points to which
the trajectory converges as t → ∞.
Positive limit sets are also referred to as ω-limit sets, as ω is the last letter of
the Greek alphabet. Similarly, we can define negative limit sets, and they are called α-limit
sets accordingly. Stable limit cycles are positive limit sets, and so are stable equilibrium
points. The dimension of an ω-limit set is zero for a singular point and one for a
limit cycle.
Strange limit sets are those limit sets which may or may not be asymptotically
attractive to the neighbouring trajectories. The trajectories they contain may be locally
divergent from each other, within the attracting set. Their dimensions might be frac-
tional. Such structures are associated with the quasi-random behaviour of solutions
called chaos.
Example 2.4. The Lorenz attractor. This is one of the most widely studied examples
of strange behaviour in ordinary differential equations, which originated from
Lorenz's studies of turbulent convection. The equation is in the form
ẋ1 = σ (x2 − x1 )
ẋ2 = (1 + λ − x3 )x1 − x2 (2.11)
ẋ3 = x1 x2 − bx3 ,
where σ, λ and b are positive constants. There are three equilibrium points (0, 0, 0),
(√(bλ), √(bλ), λ) and (−√(bλ), −√(bλ), λ). The linearised system matrix around the origin
is obtained as

A = [ −σ      σ     0
      λ + 1   −1    0
      0       0     −b ],
and its eigenvalues are obtained as λ1,2 = (−(σ + 1) ± √((σ + 1)² + 4σλ))/2 and
λ3 = −b. Since the first eigenvalue is positive, this equilibrium is unstable. It can
be shown that the other equilibrium points are unstable when the parameters satisfy

σ > b + 1,
λ > (σ + 1)(σ + b + 1)/(σ − b − 1).
[Figures: Lorenz attractor from computer simulation — a three-dimensional view in (x1, x2, x3), projections onto the (x1, x2) and (x1, x3) planes, and the time history of x1]
In classical control, frequency response is a powerful tool for analysis and control
design of linear dynamic systems. It provides graphical presentation of system dynam-
ics and often can reflect certain physical features of engineering systems. The basic
concept of frequency response is that for a linear system, if the input is a sinusoidal
function, the steady-state response will still be a sinusoidal function, but with a
different amplitude and a different phase. The ratio of the output and input amplitudes
and the difference in the phase angles are determined by the system dynamics.
When there is a nonlinear element in a control loop, frequency response methods
cannot be directly applied. When a nonlinear element is a static component, i.e., the
input and output relationship can be described by an algebraic function, its output
to any periodic function will be a periodic function, with the same period as the
input signal. Hence, the output of a static nonlinear element is a periodic function
when the input is a sinusoidal function. It is well known that any piecewise-continuous
periodic function has a Fourier series, which consists of a constant bias, a sinusoidal
component with the same period, or frequency, as the input, and further sinusoidal
components at integer multiples of that frequency. If we take the term with
the fundamental frequency, i.e., the same frequency as the input, as an approximation,
the performance of the entire dynamic system may be analysed using frequency
response techniques. Describing functions are the frequency response functions of
nonlinear components with their fundamental frequency terms as their approximate
outputs. In this sense, describing functions are first-order approximations in the frequency
domain. They can also be viewed as a linearisation method in the frequency domain for
nonlinear components.
Describing function analysis remains an important tool for the analysis of nonlinear
systems with static components, despite several more recent developments in
nonlinear control and design. It is relatively easy to use, and closely related to
frequency response analysis of linear systems. It is often used to predict the existence
of limit cycles in a nonlinear system, and it can also be used to predict subharmonics
and jump phenomena of nonlinear systems. In this chapter, we will present
the basic concept of describing functions, the calculation of describing functions of common
nonlinear elements, and how to use describing functions to predict the existence of
limit cycles.
3.1 Fundamentals
For a nonlinear component described by a nonlinear function f : R → R, its
output

w(t) = f (A sin(ωt))

to a sinusoidal input A sin(ωt) is a periodic function, although it may not be sinusoidal
in general. Assuming that the function f is piecewise continuous, w(t) is a
piecewise-continuous periodic function with the same period as the input signal. A
piecewise-continuous periodic function can be expanded in a Fourier series
w(t) = a0/2 + Σ_{n=1}^{∞} (an cos(nωt) + bn sin(nωt)),   (3.1)

where

a0 = (1/π) ∫_{−π}^{π} w(t) d(ωt),
an = (1/π) ∫_{−π}^{π} w(t) cos(nωt) d(ωt),
bn = (1/π) ∫_{−π}^{π} w(t) sin(nωt) d(ωt).
Remark 3.1. For a piecewise-continuous function w(t), the Fourier series on the
right-hand side of (3.1) converges to w(t) at any continuous point, and to the average
of the two one-sided limits at a discontinuous point. If we truncate the series up to
order k,

wk(t) = a0/2 + Σ_{n=1}^{k} (an cos(nωt) + bn sin(nωt)),
signal A sin(ωt), is a sinusoidal function in (3.2) with the Fourier coefficients a1 and
b1 shown in (3.1). Hence, we can analyse the frequency response of this nonlinear
component.
We can rewrite w1 in (3.2) as
Remark 3.2. A clear difference between the describing function of a nonlinear ele-
ment and the frequency response of a linear system is that the describing function
depends on the input amplitude. This reflects the nonlinear nature of the describing
function.
we have

b1 = A + (3/8)A³.

Therefore, the describing function is

N (A, ω) = N (A) = b1/A = 1 + (3/8)A².
Alternatively, we can also use the identity sin³(ωt) = (3/4) sin(ωt) − (1/4) sin(3ωt)
to obtain

w(t) = A sin(ωt) + (A³/2) sin³(ωt)
     = A sin(ωt) + (A³/2)[(3/4) sin(ωt) − (1/4) sin(3ωt)]
     = [A + (3/8)A³] sin(ωt) − (1/8)A³ sin(3ωt).

Hence, we obtain b1 = A + (3/8)A³ from the first term.
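The coefficient b1 can also be checked numerically. The sketch below, assuming the nonlinearity f (x) = x + x³/2 implied by the expansion above, approximates the Fourier integral for b1 with a simple midpoint sum and compares it with A + (3/8)A³:

```python
import math

def f(x):
    # Static nonlinearity assumed from the expansion above
    return x + 0.5 * x ** 3

def b1_numeric(A, n=200000):
    # b1 = (1/pi) * integral over one period of f(A sin(theta)) * sin(theta)
    s, h = 0.0, 2.0 * math.pi / n
    for i in range(n):
        theta = -math.pi + (i + 0.5) * h
        s += f(A * math.sin(theta)) * math.sin(theta) * h
    return s / math.pi

A = 1.7
assert abs(b1_numeric(A) - (A + 3.0 * A ** 3 / 8.0)) < 1e-6
```

The same numeric integral also confirms that a1 = 0 and a0 = 0 for this odd nonlinearity.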
From the above discussion, describing functions are well defined for nonlinear
components whose input–output relationships can be described by piecewise-continuous
functions. These functions are time-invariant, i.e., the properties of the
nonlinear elements do not vary with time. This is in line with the assumption for
frequency response analysis, which can only be applied to time-invariant linear systems.
We treat describing functions as approximations at the fundamental frequency,
and therefore in our analysis we require a0 = 0, which is guaranteed when the nonlinear
components are odd functions. With the describing function of a nonlinear component,
we can then apply frequency response analysis to the entire system. For the
convenience of this kind of analysis, we often assume that the nonlinear component
whose behaviour the describing function approximates is the only
Describing functions 29
[Figure 3.1: a feedback loop with reference r, error x, a nonlinear element f (x) with output w, and a linear transfer function G(s) with output y]
nonlinear component in the system, as shown in Figure 3.1. Hence, in the remain-
ing part of this chapter, we use the following assumptions for describing function
analysis:
[Figure 3.2: saturation function]

The output to the input A sin(ωt), for A > a, is symmetric over quarters of a period,
and in the first quarter,

w(t) = { kA sin(ωt),   0 ≤ ωt ≤ γ,
         ka,           γ < ωt ≤ π/2,     (3.6)
where γ = sin⁻¹(a/A). The function is odd, hence we have a1 = 0, and the symmetry
of w(t) implies that
b1 = (4/π) ∫_0^{π/2} w(t) sin(ωt) d(ωt)
   = (4/π) ∫_0^{γ} kA sin²(ωt) d(ωt) + (4/π) ∫_γ^{π/2} ka sin(ωt) d(ωt)
   = (2kA/π)[γ − (1/2) sin(2γ)] + (4ka/π) cos(γ)
   = (2kA/π)[γ − (a/A) cos(γ)] + (4ka/π) cos(γ)
   = (2kA/π)[γ + (a/A) cos(γ)]
   = (2kA/π)[γ + (a/A)√(1 − a²/A²)].
Note that we have used sin γ = a/A and cos γ = √(1 − a²/A²). Therefore, the
describing function is given by

N (A) = b1/A = (2k/π)[sin⁻¹(a/A) + (a/A)√(1 − a²/A²)].   (3.7)
Example 3.3. Ideal relay. The output from the ideal relay shown in Figure 3.3
(signum function) is described by, with M > 0,
w(t) = { −M,   −π ≤ ωt < 0,
         M,     0 ≤ ωt < π.     (3.8)
[Figure 3.3: ideal relay, output M for x > 0 and −M for x < 0]
[Figure 3.4: dead zone]
Example 3.4. Dead zone. A dead zone is a complement to saturation. A dead zone
shown in Figure 3.4 can be described by a nonlinear function
f (x) = { k(x − a),   for x > a,
          0,           for |x| < a,     (3.10)
          k(x + a),    for x < −a.
The output to the input A sin(ωt), for A > a, is symmetric over quarters of a
period, and in the first quarter,
w(t) = { 0,                  0 ≤ ωt ≤ γ,
         k(A sin(ωt) − a),   γ < ωt ≤ π/2,     (3.11)

where γ = sin⁻¹(a/A). The function is odd, hence we have a1 = 0, and the symmetry
of w(t) implies that
b1 = (4/π) ∫_0^{π/2} w(t) sin(ωt) d(ωt)
   = (4/π) ∫_γ^{π/2} k(A sin(ωt) − a) sin(ωt) d(ωt)
   = (2kA/π)[π/2 − γ + (1/2) sin(2γ)] − (4ka/π) cos(γ)
   = kA − (2kA/π)[γ + (a/A) cos(γ)]
   = kA − (2kA/π)[γ + (a/A)√(1 − a²/A²)].
Hence, the describing function is

N (A) = b1/A = k − (2k/π)[sin⁻¹(a/A) + (a/A)√(1 − a²/A²)].   (3.12)
Remark 3.4. The dead-zone function shown in (3.10) complements the saturation
function shown in (3.5): if we use fs and fd to denote the saturation
function and the dead-zone function, we have fs(x) + fd(x) = kx. For the describing functions
shown in (3.7) and (3.12), the same relationship holds, i.e., their sum equals k.
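The complementary relationship in Remark 3.4 is visible directly in the closed forms: adding (3.7) and (3.12) term by term cancels everything except k. A short numerical sketch with illustrative values k = 2, a = 1:

```python
import math

k, a = 2.0, 1.0  # illustrative slope and threshold

def N_sat(A):
    # Describing function of saturation, equation (3.7)
    r = a / A
    return (2.0 * k / math.pi) * (math.asin(r) + r * math.sqrt(1.0 - r * r))

def N_dz(A):
    # Describing function of dead zone, equation (3.12)
    r = a / A
    return k - (2.0 * k / math.pi) * (math.asin(r) + r * math.sqrt(1.0 - r * r))

# The two describing functions sum to the linear gain k for any A > a
for A in (1.1, 2.0, 5.0, 50.0):
    assert abs(N_sat(A) + N_dz(A) - k) < 1e-12
```

The cancellation is purely algebraic here; it mirrors the time-domain identity fs(x) + fd(x) = kx.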
Example 3.5. Relay with hysteresis. Consider a case when there is a delay in the
ideal relay as shown in Figure 3.5. The nonlinear function for relay with hysteresis
can be described by
f (x) = { M,    for x ≥ a,
          −M,   for |x| < a, ẋ > 0,
          M,    for |x| < a, ẋ < 0,     (3.13)
          −M,   for x ≤ −a.
When this nonlinear component takes A sin (ωt) as the input with A > a, the
output w(t) is given by
w(t) = { M,    for −π ≤ ωt < −(π − γ),
         −M,   for −(π − γ) ≤ ωt < γ,     (3.14)
         M,    for γ ≤ ωt < π,
where γ = sin⁻¹(a/A). In this case, we still have a0 = 0, but not a1 = 0. For a1 we have

[Figure 3.5: relay with hysteresis]

a1 = −(4M/π) sin(γ)
   = −(4M/π)(a/A).
Similarly, we have

b1 = (1/π) ∫_{−π}^{−(π−γ)} M sin(ωt) d(ωt) + (1/π) ∫_{−(π−γ)}^{γ} (−M) sin(ωt) d(ωt)
     + (1/π) ∫_{γ}^{π} M sin(ωt) d(ωt)
   = (4M/π) cos(γ)
   = (4M/π)√(1 − a²/A²).
From

N (A, ω) = (b1 + ja1)/A,

we have

N (A) = (4M/(πA))[√(1 − a²/A²) − j(a/A)].   (3.15)

Using the identity cos(γ) + j sin(γ) = e^{jγ}, we can rewrite the describing function as

N (A) = (4M/(πA)) e^{−j arcsin(a/A)}.
Remark 3.5. Comparing the describing function of the relay with hysteresis with
that of the ideal relay in (3.9), the describing functions indicate that the relay with
hysteresis introduces a delay of arcsin(a/A) in terms of phase angle. There is indeed a delay
of γ = arcsin(a/A) in the time response w(t) shown in (3.14) relative to that of the ideal
relay. In fact, we could use this fact to obtain the describing function for the relay
with hysteresis.
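The complex describing function (3.15) can be verified by computing a1 and b1 directly from the switched waveform (3.14). A minimal sketch with illustrative values M = 1, a = 0.5:

```python
import math

M, a = 1.0, 0.5  # illustrative relay level and hysteresis half-width

def w(theta, A):
    # Relay-with-hysteresis output (3.14) over one period of A*sin(theta)
    g = math.asin(a / A)
    if -math.pi <= theta < -(math.pi - g):
        return M
    if -(math.pi - g) <= theta < g:
        return -M
    return M

def df_numeric(A, n=200000):
    # N(A) = (b1 + j*a1)/A from the Fourier integrals
    a1 = b1 = 0.0
    h = 2.0 * math.pi / n
    for i in range(n):
        th = -math.pi + (i + 0.5) * h
        a1 += w(th, A) * math.cos(th) * h
        b1 += w(th, A) * math.sin(th) * h
    return complex(b1, a1) / (math.pi * A)

A = 2.0
expected = (4.0 * M / (math.pi * A)) * complex(
    math.sqrt(1.0 - (a / A) ** 2), -a / A)
assert abs(df_numeric(A) - expected) < 1e-3
```

The negative imaginary part is exactly the phase lag arcsin(a/A) noted in Remark 3.5.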
One of the most important applications of describing functions is to predict the exis-
tence of a limit cycle in a closed-loop system that contains a nonlinear component
with a linear transfer function, as shown in Figure 3.1. Consider a system with a lin-
ear transfer function G(s) and a nonlinear element with describing function N (A, ω)
in the forward path, under unity feedback. The input–output relations of the system
components, with r = 0, can be described by
w = N (A, ω)x
y = G(jω)w
x = −y,
with y as the output and x as the input to the nonlinear component. From the above
equations, it can be obtained that

N (A, ω)G(jω) + 1 = 0,

or

G(jω) = −1/N (A, ω).   (3.17)
Therefore, the amplitude A and frequency ω of the limit cycle must satisfy the above
equation. Equation (3.17) is difficult to solve in general. Graphic solutions can be
found by plotting G(jω) and −1/N (A, ω) on the same graph to see if they intersect
each other. The intersection points are the solutions, from which the amplitude and
frequency of the oscillation can be obtained.
Remark 3.6. The above discussion is based on the assumption that the oscillation,
or limit cycle, can be well approximated by a sinusoidal function, and the nonlinear
component is well approximated by its describing function. The describing function
analysis is an approximate method in nature.
Only a stable limit cycle may exist in real applications. When we say a stable limit
cycle, we mean that if the state deviates a little from the limit cycle, it should come
back. Taking the amplitude as an example, if A is perturbed from its steady condition,
say by a very small increase, then for a stable limit cycle the amplitude will decay
back to its steady value.
The Nyquist criterion determines the stability of a closed-loop system from the number
of encirclements of the Nyquist plot around the point −1, or (−1, 0), in the complex plane.
In the case that there is a control gain K in the forward transfer function, the
characteristic equation is given by

KG(s) + 1 = 0,  or  G(s) = −1/K.

In this case, the Nyquist criterion can be extended to determine the stability of the
closed loop by counting the encirclements of the Nyquist plot around (−1/K, 0) in the
complex plane in the same way as around (−1, 0). The Nyquist criterion for non-unity
forward path gain K is also referred to as the extended Nyquist criterion. The same
argument holds when K is a complex number.
We can apply the extended Nyquist criterion to determine the stability of a limit
cycle. When the condition specified in (3.17) is satisfied for some (A0 , ω0 ), A0 and
ω0 are the amplitude and frequency of the limit cycle respectively, and N (A0 , ω0 )
is a complex number. We can use the extended Nyquist criterion to determine the
stability of the limit cycle with the amplitude A0 and frequency ω0 by considering a
perturbation of A around A0 .
To simplify our discussion, let us assume that G(s) is stable and minimum phase.
It is known from the Nyquist criterion that the closed-loop system with constant
gain K is stable if the Nyquist plot does not encircle (−1/K, 0). Let us consider a
perturbation in A to A+ with A+ > A0 . In such a case, −1/N (A+ , ω0 ) is a complex
number in general. If the Nyquist plot does not encircle the point −1/N (A+ , ω0 ), we
conclude that the closed-loop system is stable with the complex gain −1/N (A+ , ω0 ).
Therefore, in a stable closed-loop system, the oscillation amplitude decays, which
makes A+ return to A0 . This implies that the limit cycle (A0 , ω0 ) is stable. Alternatively,
if the Nyquist plot encircles the point −1/N (A+ , ω0 ), we conclude that the closed-
loop system is unstable with the complex gain −1/N (A+ , ω0 ). In such a case, the
oscillation amplitude may grow even further, and does not return to A0 . Therefore,
the limit cycle is unstable.
Similar arguments can be made for the perturbation to a smaller amplitude. For
an A− < A0 , if the Nyquist plot does encircle the point −1/N (A− , ω0 ), the limit cycle
is stable. If the Nyquist plot does not encircle the point −1/N (A− , ω0 ), the limit cycle
is unstable.
When we plot −1/N (A, ω0) in the complex plane with A as a variable, we obtain
a line with the direction of increasing A. Based on the discussion above, the way
[Figures 3.6 and 3.7: Nyquist plots of G(jω) together with the locus of −1/N (A, ω0), for a stable limit cycle and an unstable limit cycle respectively]
in which the line for −1/N (A, ω0) intersects the Nyquist plot determines the stability
of the limit cycle. Typical Nyquist plots of stable minimum-phase systems are shown
in Figures 3.6 and 3.7 for stable and unstable limit cycles with nonlinear elements
respectively.
We can summarise the above discussion as a stability criterion for limit cycles
using describing functions.

Theorem 3.1. Consider a unity-feedback system whose forward path consists of a stable
minimum-phase transfer function G(s) and a nonlinear component with the describing
function N (A, ω), and suppose that the plots of −1/N and G(jω) intersect at the point
with A = A0 and ω = ω0. The limit cycle at (A0, ω0) is stable if the plot of −1/N (A, ω0)
crosses the Nyquist plot from the inside of the encirclement to the outside as A
increases. The limit cycle at (A0, ω0) is unstable if the plot of −1/N (A, ω0) crosses
the Nyquist plot from the outside of the encirclement to the inside as A increases.
Remark 3.7. Theorem 3.1 requires the transfer function to be stable and minimum
phase, for the simplicity of the presentation. This theorem can be easily extended to
the case when G(s) is unstable or has unstable zeros by using corresponding stability
conditions based on the Nyquist criterion. For example, if G(s) is stable and has one
unstable zero, then the stability criterion for the limit cycle will be opposite to the
condition stated in the theorem, i.e., the limit cycle is stable if the plot of −1/N (A, ω0 )
crosses the Nyquist plot from the outside of the encirclement to the inside of the
encirclement as A increases.
Example 3.6. Consider a linear transfer function G(s) = K/(s(s + 1)(s + 2)) with K a
positive constant and an ideal relay in a closed loop, as shown in Figure 3.8. We will
determine if there exists a limit cycle and analyse the stability of the limit cycle.

[Figure 3.8: unity-feedback loop of an ideal relay in series with G(s) = K/(s(s + 1)(s + 2))]
For the ideal relay, we have N = 4M/(πA). For the transfer function, we can
obtain that

G(jω) = K/(jω(jω + 1)(jω + 2))
      = K[−3ω² − jω(2 − ω²)]/[(3ω²)² + ω²(2 − ω²)²].
From

G(jω) = −1/N,

we obtain two equations for the real and imaginary parts respectively as

ℜ(G(jω)) = −πA/(4M),
ℑ(G(jω)) = 0.
From the equation of the imaginary part, we have

K(−ω(2 − ω²))/[(3ω²)² + ω²(2 − ω²)²] = 0,

which gives ω = √2. From the equation of the real part, we have

K(−3ω²)/[(3ω²)² + ω²(2 − ω²)²] = −πA/(4M),

which gives A = 2KM/(3π). Hence, we have shown that there exists a limit cycle
with amplitude and frequency at (A, ω) = (2KM/(3π), √2).
The plot of −1/N (A) = −πA/(4M) overlaps with the negative side of the real
axis. As A increases from 0, −1/N (A) moves from the origin towards the left. Therefore,
as A increases, −1/N (A) moves from inside of the encirclement of the Nyquist plot
to outside of the encirclement, and the limit cycle is stable, based on Theorem 3.1.
A simulation result for K = M = 1 is shown in Figure 3.9 with the amplitude
A = 0.22 and period T = 4.5 s, not far from the values A = 0.2122 and T = 4.4429
predicted from the describing function analysis.
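The prediction in Example 3.6 can be compared with a time-domain simulation. The sketch below realises G(s) = K/(s(s + 1)(s + 2)) as a chain of integrators, closes the loop through an ideal relay, and measures the steady oscillation; the step size and run time are illustrative numerical choices:

```python
import math

K, M = 1.0, 1.0

def f(x, u):
    # States of G(s) = K/(s^3 + 3s^2 + 2s): y = K*x1 with integrator chain
    return [x[1], x[2], -2.0 * x[1] - 3.0 * x[2] + u]

def rk4_step(x, u, h):
    # Relay output held constant over the step (small h keeps this accurate)
    k1 = f(x, u)
    k2 = f([x[i] + 0.5 * h * k1[i] for i in range(3)], u)
    k3 = f([x[i] + 0.5 * h * k2[i] for i in range(3)], u)
    k4 = f([x[i] + h * k3[i] for i in range(3)], u)
    return [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(3)]

h, x = 0.001, [0.01, 0.0, 0.0]
crossings, amp, y_prev = [], 0.0, K * 0.01
for n in range(100000):  # 100 s
    y = K * x[0]
    u = M if -y >= 0 else -M          # ideal relay on the error r - y, r = 0
    x = rk4_step(x, u, h)
    if n * h > 60.0:                  # measure after the transient
        amp = max(amp, abs(y))
        if y_prev < 0.0 <= y:
            crossings.append(n * h)
    y_prev = y

period = crossings[-1] - crossings[-2]  # time between upward zero crossings
```

With K = M = 1 the measured amplitude and period land close to the predicted 2/(3π) ≈ 0.212 and 2π/√2 ≈ 4.44 s.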
Example 3.7. In this example, we consider a van der Pol oscillator described by

ÿ − ε(1 − 3y²)ẏ + y = 0.   (3.18)

We will use describing function analysis to predict the existence of a limit cycle, and
compare the predicted amplitudes and periods for different values of ε with the simulated
ones.
To use the describing function analysis, we need to formulate the system in the format of
one linear transfer function and a nonlinear element. Rearranging (3.18), we have

ÿ − εẏ + y = −ε (d/dt) y³.
[Figure 3.9: simulated output y(t) for Example 3.6 with K = M = 1, oscillating with amplitude about 0.22 and period about 4.5 s]
Hence, the system (3.18) can be described by a closed-loop system with a nonlinear
component

f (x) = x³

and the linear transfer function G(s) = εs/(s² − εs + 1). The describing function of
f (x) = x³ is N (A) = 3A²/4. Setting the imaginary part of G(jω) to zero gives ω = 1,
and the real part of G(jω) = −1/N (A) gives

−ε²ω²/[(1 − ω²)² + ε²ω²] = −4/(3A²),

which gives A = 2√3/3.
The linear part of the transfer function has one unstable pole. We need to take
this into consideration for the stability of the limit cycle. As A increases, −1/N (A)
moves from the left to the right along the negative part of the real axis, basically
from the outside of the encirclement of the Nyquist plot to the inside. This suggests
that the limit cycle is stable, as there is an unstable pole in
the linear transfer function. The simulation results for ε = 1 and ε = 30 are shown in
Figures 3.10 and 3.11. In both cases, the amplitudes are very close to the one predicted
from the describing function analysis. For the period, the simulation result for
ε = 1 in Figure 3.10 is very close to 2π, but the period for ε = 30 is much bigger than
2π. This suggests that the describing function analysis gives a better approximation
for ε = 1 than for ε = 30. In fact, for a small value of ε the oscillation is
very similar to a sinusoidal function. With a big value of ε, the wave form is very
different from a sinusoidal function, and therefore the describing function method
cannot provide a good approximation.
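A quick numerical comparison supports this analysis. The sketch below simulates the oscillator in the form ÿ − ε(1 − 3y²)ẏ + y = 0 consistent with the rearranged equation of this example (ε = 1 is chosen for illustration), and compares the steady amplitude with the predicted A = 2√3/3 ≈ 1.155:

```python
import math

EPS = 1.0  # illustrative value of epsilon

def f(x):
    # y'' - EPS*(1 - 3y^2)*y' + y = 0 as a first-order system
    return [x[1], EPS * (1.0 - 3.0 * x[0] ** 2) * x[1] - x[0]]

def rk4_step(x, h):
    k1 = f(x)
    k2 = f([x[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

x, amp = [0.1, 0.0], 0.0
for n in range(80000):          # 80 s with h = 0.001
    x = rk4_step(x, 0.001)
    if n > 40000:               # measure after the transient
        amp = max(amp, abs(x[0]))

predicted = 2.0 * math.sqrt(3.0) / 3.0
```

Repeating the run with a large ε shows a much longer period than the predicted 2π, in line with the relaxation-oscillation behaviour discussed above.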
[Figures 3.10 and 3.11: simulated output y(t) of the van der Pol oscillator for ε = 1 and ε = 30 respectively]
For control systems design, one important objective is to ensure the stability of the
closed-loop system. For a linear system, stability can be evaluated in the time domain
or the frequency domain, by checking the eigenvalues of the system matrix or the poles
of the transfer function. For nonlinear systems, the dynamics of the system cannot in
general be described by linear state space equations or transfer functions. We need
more general definitions of the stability of nonlinear systems. In this chapter, we
will introduce basic concepts of stability and stability theorems based on Lyapunov functions.
ẋ = f (x),   (4.1)

where x ∈ Rⁿ, D ⊂ Rⁿ is a domain containing the state, and f : D ⊂ Rⁿ → Rⁿ is a continuous
function. A system with a control input, ẋ = f (x, u), is
not autonomous. However, for such a system, if we design a feedback control law
u = g(x) with g : Rⁿ → Rᵐ a continuous function, the closed-loop system

ẋ = f (x, g(x))

is autonomous.
In this chapter, we will present basic definitions and results for stability of
autonomous systems. As discussed above, control systems can be converted to
autonomous systems by state feedback control laws.
There are many different definitions of stability for dynamic systems. Often
different definitions are needed for different purposes, and many of them coincide
when the system is linear. Among the different definitions, the most fundamental
one is Lyapunov stability.
Definition 4.1 (Lyapunov stability). For the system (4.1), the equilibrium point x = 0
is said to be Lyapunov stable if for any given positive real number R, there exists a
positive real number r to ensure that ‖x(t)‖ < R for all t ≥ 0 if ‖x(0)‖ < r. Otherwise
the equilibrium point is unstable.
[Figure: illustration of Lyapunov stability — a trajectory x(t) starting within the ball of radius r remains within the ball of radius R]
with ω > 0. For this linear system, we can explicitly solve the differential equation
to obtain

x(t) = [ cos ωt    sin ωt
         −sin ωt   cos ωt ] x0.
Stability theory 43
It is easy to check that we have ‖x(t)‖ = ‖x0‖. Hence, to ensure that ‖x(t)‖ ≤ R, we
only need to set r = R, i.e., if ‖x0‖ ≤ R, we have ‖x(t)‖ ≤ R for all t > 0.
Note that for the system in Example 4.1, the system matrix has two eigenvalues
on the imaginary axis, and this kind of system is referred to as critically stable in
many undergraduate texts. As shown in the example, this system is Lyapunov stable.
It can also be shown that for a linear system, if all the eigenvalues of the system
matrix A are in the closed left half of the complex plane, and the eigenvalues on the
imaginary axis are simple, the system is Lyapunov stable. However, if the system
matrix has multiple poles on the imaginary axis, the system is not Lyapunov stable.
For example, let ẋ1 = x2 and ẋ2 = 0 with x1(0) = x1,0, x2(0) = x2,0. It is easy to obtain
that x1(t) = x1,0 + x2,0 t and x2(t) = x2,0. If we want ‖x(t)‖ ≤ R, there does not exist
a positive r for ‖x(0)‖ ≤ r to guarantee ‖x(t)‖ ≤ R. Therefore, this system is not
Lyapunov stable.
For linear systems, when a system is stable with all its poles in the open left half-plane,
the solution converges to the equilibrium point. This is not required by Lyapunov
stability. For more general dynamic systems, we have the following definition
concerning the convergence to the equilibrium.
Definition 4.2 (Asymptotic stability). For the system (4.1), the equilibrium point
x = 0 is asymptotically stable if it is stable (Lyapunov) and furthermore limt→∞
x(t) = 0.
[Figure: illustration of asymptotic stability — the trajectory x(t) remains within the ball of radius R and converges to the origin]
Linear systems with poles in the open left half of the complex plane are asymptotically
stable. Asymptotic stability only requires that a solution converges to the
equilibrium point; it does not specify the rate of convergence. In the following
definition, we specify a stability property with an exponential rate of convergence.
Definition 4.3 (Exponential stability). For the system (4.1), the equilibrium point
x = 0 is exponentially stable if there exist two positive real numbers a and λ such that
the following inequality holds:

‖x(t)‖ ≤ a‖x(0)‖e^{−λt},  ∀t ≥ 0.
For linear systems, the stability properties are relatively simple. If a linear system
is asymptotically stable, it can be shown that it is exponentially stable. Of course,
for nonlinear systems, we may have a system that is asymptotically stable, but not
exponentially stable.
Example 4.2. Consider the first-order nonlinear system

ẋ = −x³,

where x ∈ R. Let us solve this differential equation. From the system equation we
have

−dx/x³ = dt,
which gives

1/x²(t) − 1/x₀² = 2t,

and

x(t) = x₀/√(1 + 2x₀²t).
It is easy to see that |x(t)| decreases as t increases, and also limt→∞ x(t) = 0. Therefore,
this system is asymptotically stable. However, this system is not exponentially stable,
as there does not exist a pair of a and γ to satisfy

x₀/√(1 + 2x₀²t) ≤ ax₀e^{−γt},

which, for x₀ > 0, is equivalent to

√(1 + 2x₀²t) e^{−γt} ≥ 1/a,

and this cannot hold for all t with any choices of a and γ, because the left-hand side converges
to zero as t → ∞. Hence, the system considered in this example is asymptotically stable, but
not exponentially stable.
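The closed-form solution above can be checked numerically. The sketch below verifies that x(t) = x₀/√(1 + 2x₀²t) satisfies ẋ = −x³ to numerical precision, and that it escapes an exponential envelope ae^{−γt} (the values a = 100 and γ = 0.1 are arbitrary illustrative choices):

```python
import math

def x_sol(t, x0):
    # Closed-form solution of xdot = -x^3, x(0) = x0
    return x0 / math.sqrt(1.0 + 2.0 * x0 ** 2 * t)

x0, t, dt = 1.0, 3.0, 1e-6
# Central-difference derivative matches -x^3 at t = 3
deriv = (x_sol(t + dt, x0) - x_sol(t - dt, x0)) / (2.0 * dt)
assert abs(deriv + x_sol(t, x0) ** 3) < 1e-6

# The solution eventually exceeds any exponential envelope a*e^(-gamma*t)
a, gamma = 100.0, 0.1
t_big = 500.0
assert x_sol(t_big, x0) > a * math.exp(-gamma * t_big)
```

The algebraic decay 1/√t is slower than every exponential, which is exactly the argument in the example.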
In this section, we introduce a result for checking the stability of nonlinear systems
based on its linearised model.
Theorem 4.1 (Lyapunov's linearisation method). For a nonlinear system with a linearised
model around an equilibrium point, there are three cases:
● If the linearised system has all its poles in the open left half of the complex
plane, the equilibrium point is asymptotically stable for the actual nonlinear
system.
● If the linearised system has a pole in the open right half of the complex plane, then
the equilibrium point is unstable.
● If the linearised system has poles on the imaginary axis, then the stability of the
original system cannot be concluded using the linearised model.
We do not show a proof of this theorem here. It is clear that this theorem can
be applied to check the local stability of nonlinear systems around equilibrium points.
For the case that the linearised model has poles on the imaginary axis, this theorem
cannot give a conclusive result about stability. This is not a surprise, because stable
and unstable systems can have the same linearised model. For example, the systems
ẋ = −x³ and ẋ = x³ have the same linearised model at x = 0, that is ẋ = 0, which is
marginally stable. However, as we have seen in Example 4.2, the system ẋ = −x³ is
asymptotically stable, and it is not difficult to see that ẋ = x³ is unstable. For both the
stable and unstable cases of linearised models, the linearised model approximates the
original system better when the domain around the equilibrium point gets smaller.
Hence, the linearised model is expected to reflect the stability behaviour around
the equilibrium point.
ẋ1 = x2 + x1 − x13 ,
ẋ2 = −x1 .
It can be seen that x = (0, 0) is an equilibrium point of the system. The linearised
model around x = (0, 0) is given by

ẋ = Ax,

where

A = [ 1    1
      −1   0 ].

The linearised system is unstable as λ(A) = (1 ± √3 j)/2. Indeed, this nonlinear system is a
van der Pol system, and the origin is unstable. Any trajectory that starts from an initial
point close to the origin and within the limit cycle will spiral out and converge to the
limit cycle.
Theorem 4.2 (Lyapunov theorem for local stability). Consider the system (4.1). If
in D ⊂ Rⁿ containing the equilibrium point x = 0, there exists a function V (x) :
D ⊂ Rⁿ → R with continuous first-order derivatives such that
● V (x) is positive definite, and
● V̇ (x) ≤ 0,   (4.4)
then the equilibrium point x = 0 is Lyapunov stable. Furthermore, if V̇ (x) is negative
definite, then the equilibrium point is asymptotically stable.
Proof. We need to find a value for r such that when ‖x(0)‖ < r, we have ‖x(t)‖ < R.
Define

BR := {x | ‖x‖ ≤ R} ⊂ D,

and let

a = min_{‖x‖=R} V (x).

Since V (x) is positive definite, we have a > 0. We then define the level set within BR

Ωc := {x ∈ BR | V (x) ≤ c},

where c is a positive real constant and c < a. The existence of such a positive real
constant c is guaranteed by the continuity and positive definiteness of V. From the
definition of Ωc, x ∈ Ωc implies that ‖x‖ < R. Since V̇ ≤ 0, we have V (x(t)) ≤
V (x(0)). Hence, for any x(0) ∈ Ωc, we have V (x(t)) ≤ c, which implies

‖x(t)‖ < R.

From the continuity of V, there exists a positive real constant r such that

Br := {x | ‖x‖ < r} ⊂ Ωc.

Hence, we have

Br ⊂ Ωc ⊂ BR,
and therefore ‖x(0)‖ < r implies ‖x(t)‖ < R. We have established that if V̇ is non-positive,
the system is Lyapunov stable.
Next, we will establish the asymptotic stability from the negative definiteness of
V̇. For any initial point in D, V (x(t)) monotonically decreases with time t. Therefore,
there must be a lower limit such that

lim_{t→∞} V (x(t)) = β ≥ 0.

Suppose that β > 0, and let

α = min_{x ∈ D − Ωβ} (−V̇ (x)),

where Ωβ := {x ∈ D | V (x) < β}. Since V̇ is negative definite, we have α > 0. From
the definition of α, we have

V (x(t)) ≤ V (x(0)) − αt.

The right-hand side becomes negative when t is big enough, which is a contradiction.
Therefore, we can conclude that limt→∞ V (x(t)) = 0, which implies limt→∞ x(t) = 0. □
Consider a pendulum with friction, described by

θ̈ + θ̇ + sin θ = 0,

where θ is the angle. If we let x1 = θ and x2 = θ̇, we rewrite the dynamic system as

ẋ1 = x2
ẋ2 = −sin x1 − x2.
Consider the Lyapunov function candidate

V (x) = (1 − cos x1) + x2²/2.   (4.5)

The first term (1 − cos x1) in (4.5) can be viewed as the potential energy and the
second term x2²/2 as the kinetic energy. This function is positive definite in the domain
D = {|x1| ≤ π, x2 ∈ R}. A direct evaluation gives

V̇ = sin(x1)ẋ1 + x2ẋ2 = −x2² ≤ 0.
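The energy argument for the pendulum can be verified in simulation. The sketch below integrates the damped pendulum and checks that V in (4.5) never increases along the trajectory; the step size and horizon are illustrative numerical choices:

```python
import math

def f(x):
    # Damped pendulum: x1 = angle, x2 = angular velocity
    return [x[1], -math.sin(x[0]) - x[1]]

def rk4_step(x, h):
    k1 = f(x)
    k2 = f([x[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

def V(x):
    # Lyapunov function (4.5): potential plus kinetic energy
    return (1.0 - math.cos(x[0])) + 0.5 * x[1] ** 2

x, h = [2.0, 0.0], 0.001
v_prev, monotone = V(x), True
for _ in range(20000):           # 20 s
    x = rk4_step(x, h)
    v = V(x)
    monotone = monotone and (v <= v_prev + 1e-12)
    v_prev = v
```

Since V̇ = −x2² is only negative semi-definite, this shows Lyapunov stability directly; asymptotic convergence of the trajectory, visible in the simulation, needs the invariant-set type of argument.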
When establishing global stability using Lyapunov functions, we need the function
V (x) to be unbounded as ‖x‖ tends to infinity. This may sound strange. The reason
behind this requirement is that we need the property that if V (x) is bounded, then x is
bounded, in order to conclude the boundedness of x from the boundedness of V (x).
This property is defined in the following as the radial unboundedness of V.
Theorem 4.3 (Lyapunov theorem for global stability). For the system (4.1) with D =
Rⁿ, if there exists a function V (x) : Rⁿ → R with continuous first-order derivatives
such that
● V (x) is positive definite and radially unbounded, and
● V̇ (x) is negative definite,
then the equilibrium point x = 0 is globally asymptotically stable.
Proof. The proof is similar to the proof of Theorem 4.2, except that for any given
point in Rⁿ, we need to show that there is a level set defined by

Ωc = {x ∈ Rⁿ | V (x) < c}

to contain it.
Indeed, since the function V is radially unbounded, for any point in Br with any
positive real r, there exists a positive real constant c such that Br ⊂ Ωc. It is clear
that the level set Ωc is invariant for any c, that is, any trajectory that starts in Ωc
remains in Ωc. The rest of the proof follows the same argument as in the proof of
Theorem 4.2. □
For example, for the system ẋ = −x³ considered in Example 4.2, the Lyapunov function
candidate V (x) = x²/2 is positive definite and radially unbounded, and

V̇ = −x⁴

is negative definite. Hence, the origin is globally asymptotically stable.
Theorem 4.4 (Exponential stability). For the system (4.1), if there exists a function
V (x) : D ⊂ Rⁿ → R with continuous first-order derivatives such that

a1‖x‖ᵇ ≤ V (x) ≤ a2‖x‖ᵇ,
V̇ (x) ≤ −a3‖x‖ᵇ,

for some positive real constants a1, a2, a3 and b, then the equilibrium point x = 0 is
exponentially stable.
The proof of this theorem is relatively simple, and we are going to show it here.
We need a technical lemma, which is also needed later for stability analysis of robust
adaptive control systems.
implies that
V(t) ≤ e^{−at}V(0) + ∫₀ᵗ e^{−a(t−τ)} g(τ)dτ, ∀t ≥ 0. (4.9)
Multiplying both sides of (4.11) by e^{−aτ} gives (4.8). This completes the proof. □
ẋ = Ax (4.13)
where x ∈ Rn and A ∈ Rn×n , x is the state variable, and A is a constant matrix. From
linear system theory, we know that this system is stable if all the eigenvalues of A are
in the open left half of the complex plane. Such a matrix is referred to as a Hurwitz
matrix. Here, we would like to carry out the stability analysis using a Lyapunov
function. We can state the stability in the following theorem.
Theorem 4.6. For the linear system shown in (4.13), the equilibrium x = 0 is globally
and exponentially stable if and only if there exist positive definite matrices P and Q
such that
AT P + PA = −Q (4.14)
holds.
Let λmax(·) and λmin(·) denote the maximum and minimum eigenvalues of a
positive definite matrix. From (4.15), we have
V̇ ≤ −λmin(Q)‖x‖² ≤ −(λmin(Q)/λmax(P)) V. (4.18)
Now we can apply Theorem 4.4 with (4.17) and (4.18) to conclude that the equilibrium
point is globally and exponentially stable. Furthermore, we can identify a1 = λmin (P),
a2 = λmax (P), a3 = λmin (Q) and b = 2. Following the proof of Theorem 4.4, we have
‖x(t)‖ ≤ √(λmax(P)/λmin(P)) ‖x(0)‖ e^{−λmin(Q)t/(2λmax(P))}. (4.19)
This bound is in the form
‖x(t)‖ ≤ a‖x(0)‖e^{−λt}
for some positive real constants a and λ, which implies lim_{t→∞} x(t) = 0. Since
x(t) = e^{At}x(0) for any x(0), we can conclude lim_{t→∞} e^{At} = 0. In such a case, for a positive definite matrix Q, we
can write
∫₀^∞ d[exp(Aᵀt) Q exp(At)] = −Q. (4.20)
Let
P = ∫₀^∞ exp(Aᵀt) Q exp(At) dt
and if we can show that P is positive definite, then we obtain (4.14), and hence
complete the proof. Indeed, for any z ∈ Rⁿ with z ≠ 0, we have
zᵀPz = ∫₀^∞ zᵀ exp(Aᵀt) Q exp(At) z dt.
Since Q is positive definite, and e^{At} is non-singular for any t, we have zᵀPz > 0, and
therefore P is positive definite. □
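Theorem 4.6 can also be checked numerically: vectorising AᵀP + PA = −Q turns (4.14) into a linear system in the entries of P. A sketch with an illustrative Hurwitz matrix, using only numpy (scipy.linalg.solve_continuous_lyapunov would do the same job):

```python
import numpy as np

def lyap(A, Q):
    """Solve A^T P + P A = -Q by vectorisation (column-major vec)."""
    n = A.shape[0]
    I = np.eye(n)
    # vec(A^T P) = (I kron A^T) vec(P); vec(P A) = (A^T kron I) vec(P)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    vecP = np.linalg.solve(M, -Q.flatten(order="F"))
    return vecP.reshape((n, n), order="F")

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1, -2: Hurwitz
Q = np.eye(2)
P = lyap(A, Q)

# P solves the Lyapunov equation and is symmetric positive definite
assert np.allclose(A.T @ P + P @ A, -Q)
assert np.allclose(P, P.T) and np.all(np.linalg.eigvalsh(P) > 0)
```

As the proof suggests, for a Hurwitz A the solution P is exactly the integral ∫₀^∞ e^{Aᵀt} Q e^{At} dt, which is why it inherits positive definiteness from Q.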
Chapter 5
Advanced stability theory
The Lyapunov direct method provides a tool to check the stability of a nonlinear system
if a Lyapunov function can be found. For linear systems, a Lyapunov function can
always be constructed if the system is asymptotically stable. In many nonlinear sys-
tems, part of the system may be linear, such as linear systems with memoryless
nonlinear components and linear systems with adaptive control laws. For such a sys-
tem, a Lyapunov function for the linear part may be very useful in the construction
of a Lyapunov function for the entire nonlinear system. In this chapter, we
introduce one specific class of linear systems, strictly positive real systems, for which
an important result, the Kalman–Yakubovich lemma, is often used to provide a choice
of Lyapunov function for stability analysis of several types of nonlinear systems.
The application of the Kalman–Yakubovich lemma to the analysis of adaptive control sys-
tems will be shown in later chapters; in this chapter, the lemma is used for
stability analysis of systems containing memoryless nonlinear components and for the
related circle criterion. In Section 5.3 of this chapter, input-to-state stability (ISS) is
briefly introduced.
For the analysis of adaptive control systems, strictly positive real systems are more
widely used than positive real systems.
Definition 5.2. A proper rational transfer function G(s) is strictly positive real if
there exists a positive real constant ε such that G(s − ε) is positive real.
Example 5.1. Consider G(s) = 1/(s + a) with a > 0. For s = σ + jω with σ ≥ 0, we have
G(s) = 1/(a + σ + jω) = (a + σ − jω)/((a + σ)² + ω²)
and
ℜ(G(s)) = (a + σ)/((a + σ)² + ω²) > 0.
Hence, G(s) = 1/(s + a) is positive real. Furthermore, for any ε ∈ (0, a), we have
ℜ(G(s − ε)) = (a − ε + σ)/((a − ε + σ)² + ω²) > 0
and therefore G(s) is strictly positive real.
Definition 5.1 shows that a positive real transfer function maps the closed right
half of the complex plane to itself. Based on complex analysis, we can obtain the
following result.
Proposition 5.1. If a transfer function G(s) is positive real, then
● all the poles of G(s) are in the closed left half of the complex plane
● any poles on the imaginary axis are simple and their residues are non-negative
● for all ω ∈ R, ℜ(G(jω)) ≥ 0 when jω is not a pole of G(s)
Proposition 5.2. A transfer function G(s) cannot be positive real if one of the
following conditions is satisfied:
● G(s) is non-minimum phase
● G(s) is unstable
● the relative degree of G(s) is greater than 1
Based on this proposition, the transfer functions G₁ = (s − 1)/(s² + as + b),
G₂ = (s + 1)/(s² − s + 1) and G₃ = 1/(s² + as + b) are not positive real for any real
numbers a and b, because they are non-minimum phase, unstable, and of relative
degree 2, respectively. It can also be shown that G(s) = (s + 4)/(s² + 3s + 2) is not
positive real, as ℜ(G(jω)) < 0 for ω > 2√2.
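The last claim is easy to check numerically: for G(s) = (s + 4)/(s² + 3s + 2), the real part of G(jω) has the same sign as 8 − ω², which changes sign exactly at ω = 2√2. A quick sketch:

```python
import numpy as np

def G(s):
    # G(s) = (s + 4) / (s^2 + 3s + 2)
    return (s + 4) / (s ** 2 + 3 * s + 2)

w = np.linspace(0.0, 10.0, 2001)
re = np.real(G(1j * w))

wc = 2 * np.sqrt(2)  # predicted sign change at omega = 2*sqrt(2)
assert np.all(re[w < wc] > 0)
assert np.all(re[w > wc + 1e-6] < 0)
```

So, despite G(s) being stable, minimum phase and of relative degree 1, it still fails the frequency-domain condition — the three conditions in Proposition 5.2 are necessary obstructions, not a complete test.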
One difference between strictly positive real transfer functions and positive real
transfer functions arises due to the poles on imaginary axis.
Example 5.2. Consider G(s) = 1/s. For s = σ + jω, we have
ℜ(G(s)) = ℜ(1/(σ + jω)) = σ/(σ² + ω²) ≥ 0 for σ ≥ 0.
Therefore, G(s) = 1/s is positive real. However, G(s) = 1/s is not strictly positive real.
For the stability analysis later in the book, we only need the result on strictly
positive real transfer functions.
Lemma 5.3. A proper rational transfer function G(s) is strictly positive real if and
only if
● G(s) is Hurwitz, i.e., all the poles of G(s) are in the open left half of the complex
plane;
● the real part of G(s) is strictly positive along the jω axis, i.e., ℜ(G(jω)) > 0 for all ω ≥ 0;
● lim_{s→∞} G(s) > 0, or, in the case of lim_{s→∞} G(s) = 0, lim_{ω→∞} ω²ℜ(G(jω)) > 0.
Proof. We show the proof for sufficiency here, and omit the necessity, as it is more
involved. For sufficiency, we only need to show that there exists a positive real constant
ε such that G(s − ε) is positive real.
Since G(s) is Hurwitz, there must exist a positive real constant δ̄ such that for
δ ∈ (0, δ̄], G(s − δ) is Hurwitz. Suppose (A, b, cᵀ, d) is a minimal state space
realisation of G(s), i.e.,
G(s) = cᵀ(sI − A)⁻¹b + d.
We have
where
for some positive real r1 and the existence of limω→∞ ω2 (E(jω)), which implies
for some r3 > 0. Hence, combining (5.2) and (5.4), we obtain, from (5.1), that
implies that
for some positive reals r4 and ω2 . From (5.1), (5.3) and (5.6), we obtain that
where ω3 = max{ω1 , ω2 }. From the second condition of the lemma, we have, for some
positive real constant r5 ,
Combining the results in (5.7) and (5.9), we obtain that ℜ(G(jω − δ)) > 0 by setting
δ = min{r₄/r₂, r₅/r₁}. Therefore, we have shown that there exists a positive real δ such that
G(s − δ) is positive real. □
The main purpose of introducing strictly positive real systems is the following
result, which characterises these systems in the time domain using matrices.
Lemma 5.4 (Kalman–Yakubovich). Suppose that G(s) = cᵀ(sI − A)⁻¹b is strictly
positive real. Then there exist positive definite matrices P and Q such that
AᵀP + PA = −Q,
Pb = c. (5.11)
Remark 5.1. We do not provide a proof here, because the technical details in the
proof such as finding the positive definite P and the format of Q are beyond the scope
of this book. In the subsequent applications for stability analysis, we only need to
know the existence of P and Q, not their actual values for a given system. For example
in the stability analysis for adaptive control systems in Chapter 7, we only need to
make sure that the reference model is strictly positive real, which then implies the
existence of P and Q to satisfy (5.11).
ẋ = Ax + bu
y = cT x (5.12)
u = −F(y)y,
where x ∈ Rn is the state variable; y and u ∈ R are the output and input respectively;
and A, b and c are constant matrices with proper dimensions. The nonlinear component
is in the feedback law. Similar systems have been considered earlier using describing
functions for approximation to predict the existence of limit cycles. Nonlinear ele-
ments considered in this section are sector-bounded, i.e., the nonlinear feedback gain
can be expressed as
α < F(y) < β (5.13)
for some real constants α < β.
[Figure: block diagram of the closed-loop system (5.12), with the linear part ẋ = Ax + bu, y = cᵀx in the forward path and the nonlinear gain F(y) in the feedback path]
Absolute stability refers to the global asymptotic stability of the equilib-
rium point at the origin of the system (5.12) for a class of sector-bounded
nonlinearities as in (5.13). We will use the Kalman–Yakubovich lemma for the sta-
bility analysis. If the transfer function of the linear part is strictly positive real, we
can establish stability by imposing a restriction on the nonlinear element.
Lemma 5.5. For the system shown in (5.12), if the transfer function cT (sI − A)−1 b
is strictly positive real, the system is absolutely stable for F(y) > 0.
Proof. Consider the Lyapunov function candidate V = xᵀPx.
where, in obtaining the third line of the equation, we used Pb = c from the Kalman–
Yakubovich lemma. Therefore, if F(y) > 0, we have
V̇ ≤ −xᵀQx,
Note that the conditions specified in Lemma 5.4 are sufficient conditions.
With the result shown in Lemma 5.5, we are ready to consider the general case
for α < F(y) < β. Consider the function defined by
F̃ = (F − α)/(β − F) (5.14)
and obviously we have F̃ > 0. How can this transformation be used for the stability
analysis of the system?
With G(s) = cᵀ(sI − A)⁻¹b, the characteristic equation of (5.12) can be
written as
G(s)F + 1 = 0. (5.15)
This can be manipulated into
((1 + βG)/(1 + αG)) · ((F − α)/(β − F)) + 1 = 0. (5.18)
Let
G̃ := (1 + βG)/(1 + αG) (5.19)
and we can write (5.18) as
G̃F̃ + 1 = 0, (5.20)
which implies that the stability of the system (5.12) with the nonlinear gain shown in
(5.13) is equivalent to the stability of the system with the forward transfer function G̃
and the feedback gain F̃. Based on Lemma 5.5 and (5.20), we can see that the system
(5.12) is stable if G̃ is strictly positive real.
The expressions of F̃ in (5.14) and G̃ in (5.19) cannot deal with the case β = ∞.
In such a case, we re-define
F̃ = F − α (5.21)
which ensures that F̃ > 0. With this F̃, we can obtain the manipulated characteristic
equation as
(G/(1 + αG)) · (F − α) + 1 = 0
which enables us to re-define
G̃ := G/(1 + αG). (5.22)
Theorem 5.6 (Circle criterion). For the system (5.12) with the feedback gain satisfying
the condition in (5.13), if the transfer function G̃ defined by
G̃(s) = (1 + βG(s))/(1 + αG(s))
or, in the case of β = ∞, by
G̃(s) = G(s)/(1 + αG(s))
is strictly positive real, with G(s) = cᵀ(sI − A)⁻¹b, then the system is absolutely stable.
For G̃ to be strictly positive real, we need, in particular,
ℜ[(1 + βG(jω))/(1 + αG(jω))] > 0, ∀ω ∈ R,
which is equivalent to
ℜ[(1/β + G(jω))/(1/α + G(jω))] > 0, ∀ω ∈ R. (5.23)
If 1/β + G(jω) = r₁e^{jθ₁} and 1/α + G(jω) = r₂e^{jθ₂}, the condition in (5.23) is satisfied by
−π/2 < θ₁ − θ₂ < π/2,
which is equivalent to the point G(jω) lying outside the circle centred at
(−½(1/α + 1/β), 0) with radius ½(1/α − 1/β) in the complex plane. This cir-
cle intersects the real axis at (−1/α, 0) and (−1/β, 0). Indeed, 1/β + G(jω) is represented
as a vector from the point (−1/β, 0) to G(jω), and 1/α + G(jω) as a vector from the
point (−1/α, 0) to G(jω). The angle between the two vectors will be less than π/2 when
G(jω) is outside the circle, as shown in Figure 5.3. Since the condition must hold
for all ω ∈ R, the condition (G̃(jω)) > 0 is equivalent to the Nyquist plot of G(s)
that lies outside the circle. The condition that G̃ is strictly positive real requires that
the Nyquist plot of G(s) does not intersect with the circle and encircles the circle
counterclockwise the same number of times as the number of unstable poles of G(s),
as illustrated in Figure 5.4.
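For a concrete (illustrative, not from the text) check of this geometric condition, take G(s) = 1/((s + 1)(s + 2)) and the sector α = 1/3, β = 1. G is Hurwitz, so no encirclement is required, and one can verify numerically that the Nyquist plot stays outside the circle through (−1/α, 0) and (−1/β, 0):

```python
import numpy as np

alpha, beta = 1.0 / 3.0, 1.0
center = -0.5 * (1.0 / alpha + 1.0 / beta)   # -2.0
radius = 0.5 * (1.0 / alpha - 1.0 / beta)    # 1.0

# Sample the Nyquist plot of G(s) = 1/((s+1)(s+2)) on the imaginary axis
w = np.linspace(-100.0, 100.0, 400001)
G = 1.0 / ((1j * w + 1.0) * (1j * w + 2.0))

# Distance of every Nyquist point from the circle's centre
dist = np.abs(G - center)
assert np.min(dist) > radius  # plot stays strictly outside the circle
```

Here |G(jω)| ≤ 1/2 for all ω, so the whole Nyquist plot lies in a disc of radius 1/2 around the origin, comfortably outside the circle of radius 1 centred at −2; by the circle criterion the feedback system is then absolutely stable for any gain in the sector (1/3, 1).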
Alternatively, the circle can also be interpreted from complex mapping. From
(5.19), it can be obtained that
G = (G̃ − 1)/(β − αG̃). (5.24)
The mapping shown in (5.24) is a bilinear transformation, and it maps a line to a line
or circle. For the case of β > α > 0, we have
G = −1/α − (1/α − 1/β) · (β/α)/(G̃ − β/α). (5.25)
The function
(β/α)/(G̃ − β/α)
Figure 5.3 Diagram of |θ₁ − θ₂| < π/2
[Figure 5.4: Nyquist plot of G(jω) relative to the circle intersecting the real axis at (−1/α, 0) and (−1/β, 0)]
maps the imaginary axis to a circle centred at (−1/2, 0) with radius 1/2, i.e., the
line from (−1, 0) to (0, 0) on the complex plane is the diameter of the circle. Then
the function
−(1/α − 1/β) · (β/α)/(G̃ − β/α)
maps the imaginary axis to a circle with the diameter on the line from (0, 0) to
(1/α − 1/β, 0) on the complex plane. Finally, it can be seen from (5.25) that the map from G̃
to G maps the imaginary axis to the circle with the diameter on the line from (−1/α, 0)
to (−1/β, 0), or in other words, the circle centred at (−½(1/α + 1/β), 0) with radius
½(1/α − 1/β). It can also be shown that the function maps the open left-hand
complex plane to the domain inside the circle.
Indeed, we can evaluate the circle directly from (5.24). Let u and v denote the
real and imaginary parts of the mapping of the imaginary axis, and we have
u = ℜ[(jω − 1)/(β − αjω)] = −(α + βω²)/(α² + β²ω²),
v = ℑ[(jω − 1)/(β − αjω)] = (α − β)ω/(α² + β²ω²).
Denoting μ = (β/α)ω, we obtain
u = −(1/α + (1/β)μ²)/(1 + μ²) = −½(1/α + 1/β) + ½(1/β − 1/α)(1 − μ²)/(1 + μ²),
v = (1/β − 1/α)μ/(1 + μ²) = ½(1/β − 1/α) · 2μ/(1 + μ²).
Since (1 − μ²)/(1 + μ²) and 2μ/(1 + μ²) parametrise the unit circle, the point (u, v)
traces the circle centred at (−½(1/α + 1/β), 0) with radius ½(1/α − 1/β).
Consider the nonlinear system
ẋ = f(x, u). (5.26)
If the autonomous system
ẋ = f(x, 0)
is asymptotically stable, will the state remain bounded for a bounded input signal u?
Consider the system
ẋ = −x + (1 + 2x)u, x(0) = 0.
When u = 0, the system is
ẋ = −x,
which is asymptotically (exponentially) stable. However, the state of this system may
not remain bounded for a bounded input. For example, if we let u = 1, we have
ẋ = x + 1,
whose solution x(t) = eᵗ − 1 grows unbounded.
From the above example, it can be seen that even if the corresponding autonomous
system is asymptotically stable, the state may not remain bounded subject to a bounded
input. We introduce a definition for systems with the property of bounded state with
bounded input.
To state the definition of input-to-state stability (ISS), we need comparison
functions, which are defined below.
Definition 5.5. The system (5.26) is ISS if there exist a class KL function β and a
class K function γ such that
‖x(t)‖ ≤ β(‖x(0)‖, t) + γ(‖u‖∞), ∀t ≥ 0. (5.27)
Proposition 5.7. The system shown in (5.26) is ISS if and only if there exist a class
KL function β and a class K function γ such that
‖x(t)‖ ≤ max{β(‖x(0)‖, t), γ(‖u‖∞)}. (5.28)
Proof. If (5.28) holds, then
‖x(t)‖ ≤ β(‖x(0)‖, t) + γ(‖u‖∞)
and hence the system is ISS. Conversely, from (5.27), there exist a class KL function β₁ and a
class K function γ₁ such that
‖x(t)‖ ≤ β₁(‖x(0)‖, t) + γ₁(‖u‖∞) ≤ max{2β₁(‖x(0)‖, t), 2γ₁(‖u‖∞)}
and therefore the system satisfies (5.28) with β = 2β₁ and γ = 2γ₁. □
Consider the linear system
ẋ = Ax + Bu,
where x ∈ Rⁿ and u ∈ Rᵐ are the state and input respectively, A and B are matrices
with appropriate dimensions, and A is Hurwitz. When u = 0, the
system is asymptotically stable, as A is Hurwitz. From its solution
x(t) = e^{At}x(0) + ∫₀ᵗ e^{A(t−τ)}Bu(τ)dτ,
we have
‖x(t)‖ ≤ ‖e^{At}‖‖x(0)‖ + ∫₀ᵗ ‖e^{A(t−τ)}‖dτ ‖B‖‖u‖∞.
Since A is Hurwitz, there exist positive real constants a and λ such that ‖e^{At}‖ ≤
ae^{−λt}. Hence, we can obtain
‖x(t)‖ ≤ ae^{−λt}‖x(0)‖ + ∫₀ᵗ ae^{−λ(t−τ)}dτ ‖B‖‖u‖∞ ≤ ae^{−λt}‖x(0)‖ + (a/λ)‖B‖‖u‖∞.
It is easy to see that the first term in the above expression is a class KL function of t and
‖x(0)‖ and the second term is a class K function of ‖u‖∞. Therefore, a linear system with
Hurwitz system matrix A is ISS.
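A short simulation illustrates this bound: for a Hurwitz A and a bounded input, the state stays inside a ball determined by the initial condition and ‖u‖∞. The matrices and input below are illustrative choices:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # Hurwitz (eigenvalues -1, -2)
B = np.array([[0.0], [1.0]])

dt, T = 0.001, 30.0
x = np.array([2.0, -1.0])
max_norm = np.linalg.norm(x)
for k in range(int(T / dt)):
    u = np.sin(0.5 * k * dt)              # bounded input, |u| <= 1
    x = x + dt * (A @ x + (B * u).ravel())  # explicit Euler step
    max_norm = max(max_norm, np.linalg.norm(x))

# ISS: the state stays bounded; after the transient only the
# input-driven part (a K-function of ||u||_inf) remains
assert max_norm < 10.0
assert np.linalg.norm(x) < 2.0
```

The transient term ae^{−λt}‖x(0)‖ dies out, leaving a steady oscillation whose size scales with ‖u‖∞, exactly as the KL-plus-K decomposition predicts.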
Theorem 5.8. For the system (5.26), if there exists a function V(x) : Rⁿ → R with
continuous first-order derivatives such that
a₁‖x‖ᵇ ≤ V(x) ≤ a₂‖x‖ᵇ, (5.29)
(∂V/∂x) f(x, u) ≤ −a₃‖x‖ᵇ, ∀‖x‖ ≥ ρ(‖u‖), (5.30)
where a₁, a₂, a₃ and b are positive real constants and ρ is a class K function, then the
system (5.26) is ISS.
Proof. From Theorem 4.4, we can see that the system is exponentially stable when
u = 0, and V(x) is a Lyapunov function for the autonomous system ẋ = f(x, 0). To
consider the case of a non-zero input, let us find a level set based on V, defined by
Ω_c := {x | V(x) ≤ c}
with c = a₂(ρ(‖u‖∞))ᵇ, so that Ω_c contains all the points with ‖x‖ ≤ ρ(‖u‖∞).
For any x outside Ω_c, we can obtain from (5.29) and (5.30), in a similar
way to the proof of Theorem 4.4,
V̇ ≤ −(a₃/a₂)V
and then
‖x(t)‖ ≤ (a₂/a₁)^{1/b}‖x(0)‖e^{−a₃t/(a₂b)}.
Once the trajectory enters Ω_c, it remains there, and inside Ω_c we have
‖x‖ ≤ (a₂/a₁)^{1/b}ρ(‖u‖∞). Combining the two cases, we have
‖x‖ ≤ max{(a₂/a₁)^{1/b}‖x(0)‖e^{−a₃t/(a₂b)}, (a₂/a₁)^{1/b}ρ(‖u‖∞)}.
Hence, the system is ISS with the gain function γ(·) = (a₂/a₁)^{1/b}ρ(·).
There is a more general result than Theorem 5.8 that only requires ẋ = f(x, 0) to be
asymptotically stable, not necessarily exponentially stable. The proof of that theorem
is beyond the level of this text, and we state the theorem here without proof for completeness.
Theorem 5.9. For the system (5.26), if there exists a function V(x) : Rⁿ → R with
continuous first-order derivatives such that
α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖), (5.31)
(∂V/∂x) f(x, u) ≤ −α₃(‖x‖), ∀‖x‖ ≥ ρ(‖u‖), (5.32)
where α₁, α₂, α₃ are class K∞ functions and ρ is a class K function, then the system (5.26) is
ISS with the gain function γ(·) = α₁⁻¹(α₂(ρ(·))).
Note that class K∞ functions are class K functions that satisfy the property
lim_{r→∞} α(r) = ∞.
Corollary 5.10. For the system (5.26), if there exists a function V(x) : Rⁿ → R with
continuous first-order derivatives such that
α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖), (5.33)
(∂V/∂x) f(x, u) ≤ −α(‖x‖) + σ(‖u‖), (5.34)
where α₁, α₂ and α are class K∞ functions and σ is a class K function, then the system
(5.26) is ISS.
A function which satisfies (5.31) and (5.32), or (5.33) and (5.34), is referred to as
an ISS-Lyapunov function. In fact, it can be shown that the existence of an ISS-Lyapunov
function is also a necessary condition for the system to be ISS. Referring to (5.34),
the gain functions α and σ characterise the ISS property of the system, and they are
also referred to as an ISS pair. In other words, if we say a system is ISS with ISS pair
(α, σ), we mean that there exists an ISS-Lyapunov function that satisfies (5.34).
Example 5.5. Consider the system
ẋ = −x³ + u,
where x ∈ R is the state and u is the input. The autonomous part ẋ = −x³ was considered
in Example 4.2, and it is asymptotically stable, but not exponentially stable. Consider
an ISS-Lyapunov function candidate
V = ½x².
Its derivative satisfies
V̇ = −x⁴ + xu ≤ −x⁴ + |x||u| = −½x⁴ − ½|x|(|x|³ − 2|u|) ≤ −½x⁴, for |x| ≥ (2|u|)^{1/3}.
Hence the system is ISS with the gain function ρ(|u|) = (2|u|)^{1/3}, based on
Theorem 5.9. Alternatively, using Young's inequality, we have
|x||u| ≤ ¼|x|⁴ + ¾|u|^{4/3},
which gives
V̇ ≤ −¾x⁴ + ¾|u|^{4/3}.
Therefore, the system is ISS based on Corollary 5.10 with α(·) = ¾(·)⁴ and
σ(·) = ¾(·)^{4/3}.
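The ISS estimate for this example can be observed in simulation: for |u| ≤ 1 the gain function predicts an ultimate bound of about (2·1)^{1/3} ≈ 1.26 on |x|, whatever the initial condition. A sketch (input signal and initial state are illustrative):

```python
import numpy as np

dt, T = 0.001, 20.0
x = 3.0                      # large initial condition
traj = []
for k in range(int(T / dt)):
    t = k * dt
    u = np.sin(t)            # bounded input, |u| <= 1
    x = x + dt * (-x ** 3 + u)   # Euler step of x' = -x^3 + u
    traj.append((t, x))

# After the transient, |x| stays under rho(||u||_inf) = (2*1)^(1/3)
ultimate = (2.0 * 1.0) ** (1.0 / 3.0)
tail = [abs(xv) for t, xv in traj if t > 5.0]
assert max(tail) <= ultimate + 0.05
```

The strong cubic damping pulls the state from x(0) = 3 into the band |x| ≲ 1 within a fraction of a second, after which the trajectory simply follows the bounded input — the behaviour the ISS gain function quantifies.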
Theorem 5.11. If the subsystem (5.35) is ISS with x₂ as the input, and the subsystem (5.36) is ISS with
u as the input, then the overall system with state x = [x₁ᵀ, x₂ᵀ]ᵀ and input u is ISS.
Theorem 5.12 (ISS small gain theorem). If for the interconnected system
From Theorem 5.11, it can be seen that if the subsystem x₂ is globally asymptoti-
cally stable when u = 0, the overall system is globally asymptotically stable. Similarly,
Theorem 5.12 can be used to establish the stability of the following system:
ẋ₁ = f₁(x₁, x₂)
ẋ₂ = f₂(x₂, x₁),
and the global asymptotic stability of the entire system can be concluded if the
gain condition shown in Theorem 5.12 is satisfied.
Consider the interconnected system
ẋ₁ = −x₁³ + x₂,
ẋ₂ = x₁x₂^{2/3} − 3x₂.
From Example 5.5, we know that the x₁-subsystem is ISS with the gain function
γ₁(·) = (2·)^{1/3}. For the x₂-subsystem, we choose
V₂ = ½x₂²
and we have
V̇₂ = −3x₂² + x₁x₂^{5/3} ≤ −x₂² − |x₂|^{5/3}(2|x₂|^{1/3} − |x₁|) ≤ −x₂², for |x₂| > (|x₁|/2)³.
Hence, the x₂-subsystem is ISS with the gain function γ₂(·) = (·/2)³. Now we have,
for r > 0,
γ₁(γ₂(r)) = (2(r/2)³)^{1/3} = (1/4)^{1/3} r < r.
Hence, the small gain condition in Theorem 5.12 is satisfied, and the interconnected
system is globally asymptotically stable.
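A simulation of this interconnection should then show both states converging to the origin. A sketch (initial condition illustrative; x₂^{2/3} is implemented as |x₂|^{2/3} via a cube root, matching the absolute-value bound used in the analysis):

```python
import numpy as np

def f(x1, x2):
    # Interconnection: x1' = -x1^3 + x2, x2' = x1*x2^(2/3) - 3*x2
    x2_23 = np.cbrt(x2) ** 2          # |x2|^(2/3), real-valued for x2 < 0
    return -x1 ** 3 + x2, x1 * x2_23 - 3.0 * x2

dt, T = 0.005, 200.0
x1, x2 = 1.0, -2.0
for _ in range(int(T / dt)):
    d1, d2 = f(x1, x2)
    x1, x2 = x1 + dt * d1, x2 + dt * d2

# Both states approach the origin (x1 only at the slow cubic rate ~ t^(-1/2))
assert abs(x1) < 0.1 and abs(x2) < 0.1
```

Note the different time scales: x₂ decays exponentially (the −3x₂ term dominates near the origin), while x₁, driven only by its cubic damping once x₂ is small, decays like t^{−1/2} — which is why a long horizon is needed in the check.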
Definition 5.6. The system (5.40) is differentially stable if there exists a Lyapunov
function V : Rⁿ → R such that, for all x, x̂ ∈ Rⁿ and u ∈ Rˢ,
γ₁(‖x‖) ≤ V(x) ≤ γ₂(‖x‖),
(∂V(x − x̂)/∂x)(f(x, u) − f(x̂, u)) ≤ −γ₃(‖x − x̂‖), (5.41)
c₁‖∂V(x)/∂x‖^{c₂} ≤ γ₃(‖x‖),
where γ₁, γ₂ and γ₃ are class K∞ functions and c₁ and c₂ are positive real constants.
Remark 5.2. The conditions specified in (5.41) are useful for observer design, in
particular, for the stability analysis of the reduced-order observers in Chapter 8.
A similar notion to differential stability is incremental stability. However, the
conditions specified in (5.41) are not always satisfied by systems with incremental
stability. When x̂ = 0, i.e., in the case of a single system, the conditions specified
in (5.41) are similar to the properties of nonlinear systems with exponential
stability. The last condition in (5.41) is specified for interactions with other systems.
This condition is similar to the conditions for changing the supply
functions in the interconnection of ISS systems.
Consider the linear system
ẋ = Ax,
where A is Hurwitz, and let P and Q be positive definite matrices satisfying
AᵀP + PA = −Q.
Let V(x) = xᵀPx. In this case, the conditions (5.41) are satisfied with
γ₁(‖x‖) = λmin(P)‖x‖²,
γ₂(‖x‖) = λmax(P)‖x‖²,
γ₃(‖x‖) = λmin(Q)‖x‖²,
c₁ = λmin(Q)/(4(λmax(P))²), c₂ = 2,
where λmin(·) and λmax(·) denote the minimum and maximum eigenvalues of a
positive definite matrix.
ẋ = Ax + Bu. (5.42)
Consider the system
ẋ = −x − 2 sin x.
With V = ½x², we have
V̇ = −x² − 2x sin x.
For |x| ≤ π, we have x sin x ≥ 0, and hence
V̇ ≤ −x².
For |x| > π, we have
V̇ = −x² − 2x sin x ≤ −x² + 2|x| = −(1 − 2/π)x² − (2/π)|x|(|x| − π) ≤ −(1 − 2/π)x².
Hence, the system is exponentially stable. But this system is not differentially stable.
Indeed, let e = x − x̂. We have
By linearising the system at x = π and x̂ = π , and denoting the error at this point by
el , we have
Nonlinear systems can be linearised around operating points, and the behaviour in
the neighbourhood of an operating point is then approximated by the linearised
model. The domain of validity of a locally linearised model can be fairly small, and this may
mean that a number of linearised models are needed to cover the operating range
of a system. In this chapter, we introduce another method to obtain a linear
model for nonlinear systems, via feedback control design. The aim is to convert a
nonlinear system to a linear one by a state transformation and a redefinition of the control
input. The resultant linear model describes the system dynamics globally. Of course,
there are certain conditions that the nonlinear system must satisfy so that this feedback
linearisation method can be applied.
Example 6.1. Consider the system
ẋ1 = x2 + x1³
ẋ2 = x1² + u (6.1)
y = x1.
A direct evaluation gives
ẏ = x2 + x1³,
ÿ = 3x1²(x2 + x1³) + x1² + u.
If we design the control input as
u = −3x1²(x2 + x1³) − x1² + v, (6.2)
we obtain
ÿ = v.
Viewing v as the new control input, we see that the system is linearised. Indeed, let
us introduce a state transformation
ξ1 := y = x1 ,
ξ2 := ẏ = x2 + x13 .
We then obtain a linear system
ξ̇1 = ξ2
ξ̇2 = v.
We can design a state feedback law as
v = −a1 ξ1 − a2 ξ2
to stabilise the system with a1 > 0 and a2 > 0. The control input of the original system
is then given by
u = −3x1²(x2 + x1³) − x1² − a1ξ1 − a2ξ2.
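The complete design of this example can be checked in simulation: the cancelling control makes y = x₁ behave exactly like the stable linear system ξ̈₁ + a₂ξ̇₁ + a₁ξ₁ = 0. A sketch with illustrative gains a₁ = 1, a₂ = 2:

```python
import numpy as np

a1, a2 = 1.0, 2.0  # closed-loop poles at s = -1 (double)

def step(x, dt):
    x1, x2 = x
    xi1, xi2 = x1, x2 + x1 ** 3          # transformed state (xi1, xi2)
    v = -a1 * xi1 - a2 * xi2             # linear stabilising feedback
    # linearising control from the example: cancels the nonlinear terms
    u = -3 * x1 ** 2 * (x2 + x1 ** 3) - x1 ** 2 + v
    return np.array([x1 + dt * (x2 + x1 ** 3),
                     x2 + dt * (x1 ** 2 + u)])

x = np.array([0.5, -0.5])
dt = 0.001
for _ in range(int(15.0 / dt)):
    x = step(x, dt)

# Both states are driven to the origin by the linearising feedback
assert np.all(np.abs(x) < 1e-3)
```

Note that the feedback uses the full state, not just y; this is the price of exact (rather than approximate) linearisation.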
As shown in the previous example, we can keep taking the derivatives of the
output y until the input u appears in the derivative, and then a feedback linearisation
law can be introduced. The derivatives of the output also introduce a natural state
transformation.
Consider a nonlinear system
ẋ = f (x) + g(x)u
(6.3)
y = h(x),
where x ∈ D ⊂ Rn is the state of the system; y and u ∈ R are output and input respec-
tively; and f and g : D ⊂ Rn → Rn are smooth functions and h : D ⊂ Rn → R is a
smooth function.
Remark 6.1. The functions f (x) and g(x) are vectors for a given point x in the state
space, and they are often referred to as vector fields. All the functions in (6.3) are
required to be smooth in the sense that they have continuous derivatives up to certain
orders when required. We use the smoothness of functions in the remaining part of
the chapter in this way.
The input–output linearisation problem is to find a control law such that the
input–output dynamics of the closed-loop system are described by
y^{(ρ)} = v (6.6)
for some integer ρ with 1 ≤ ρ ≤ n.
For the system (6.3), the first-order derivative of the output y is given by
ẏ = (∂h(x)/∂x)(f(x) + g(x)u) := Lf h(x) + Lg h(x)u,
where Lf h(x) denotes the Lie derivative of h along f, i.e.,
Lf h(x) = (∂h(x)/∂x) f(x).
This notation can be used iteratively, that is,
Lf^{k+1} h(x) = Lf(Lf^k h)(x), with Lf⁰ h(x) = h(x),
where k ≥ 0 is an integer.
The solution to this problem depends on the appearance of the control input in
the derivatives of the output, which is described by the relative degree of the dynamic
system.
Definition 6.1. The dynamic system (6.3) has relative degree ρ at a point x if the
following conditions are satisfied in a neighbourhood of x:
● Lg Lf^k h(x) = 0, for k = 0, 1, . . . , ρ − 2
● Lg Lf^{ρ−1} h(x) ≠ 0
Example 6.2. Consider the system (6.1). Comparing it with the format shown in
(6.3), we have
f(x) = [x1³ + x2; x1²], g(x) = [0; 1], h(x) = x1.
A direct evaluation gives
Lg h(x) = 0,
Lg Lf h(x) = 1.
Hence, the system has relative degree 2.
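Lie derivative computations like these are mechanical and can be automated with a computer algebra system; a sketch using sympy (assuming it is available):

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
xs = sp.Matrix([x1, x2])
f = sp.Matrix([x1 ** 3 + x2, x1 ** 2])   # drift vector field of (6.1)
g = sp.Matrix([0, 1])                    # input vector field
h = x1                                   # output function

def lie(vec, scalar):
    # Lie derivative of a scalar field along a vector field
    return (sp.Matrix([scalar]).jacobian(xs) * vec)[0, 0]

Lgh = sp.simplify(lie(g, h))             # L_g h
LgLfh = sp.simplify(lie(g, lie(f, h)))   # L_g L_f h

assert Lgh == 0
assert LgLfh == 1   # nonzero, so the relative degree is 2
```

Chaining `lie` calls in this way reproduces the iterated derivatives Lf^k h and the relative-degree test of Definition 6.1 for any polynomial system.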
For SISO linear systems, the relative degree is the difference between the degrees
of the denominator and numerator polynomials of the transfer function. With
the definition of the relative degree, we can present the input–output feedback
linearisation using Lie derivatives.
Example 6.3. Consider the system (6.1) again, and continue from Example 6.2. With
Lg h(x) = 0, we have
ẏ = Lf h(x),
where
Lf h(x) = x1³ + x2.
Differentiating again, we have
ÿ = Lf² h(x) + Lg Lf h(x)u,
where
Lf² h(x) = 3x1²(x1³ + x2) + x1², Lg Lf h(x) = 1.
Therefore, with the control u = −Lf² h(x) + v, we have ÿ = v.
The procedure shown in Example 6.3 works for systems with any relative degree.
Suppose that the relative degree of (6.3) is ρ, which implies that Lg Lf^k h(x) = 0 for
k = 0, . . . , ρ − 2. Therefore, we have the derivatives of y expressed by
y^{(k)} = Lf^k h(x), for k = 1, . . . , ρ − 1,
y^{(ρ)} = Lf^ρ h(x) + Lg Lf^{ρ−1} h(x)u.
With the control input designed as
u = (1/(Lg Lf^{ρ−1} h(x)))(−Lf^ρ h(x) + v)
it results in
y^{(ρ)} = v.
⟨dLf^k h, adf^{l+1} g⟩ = Lf⟨dLf^k h, adf^l g⟩ − ⟨dLf^{k+1} h, adf^l g⟩ (6.9)
By now, we have enough tools and notations to show that ∂ξ/∂x has full rank. From
the definition of the relative degree, we have
⟨dLf^k h, adf^l g⟩ = 0, for k + l ≤ ρ − 2, (6.10)
and
⟨dLf^k h, adf^l g⟩ = (−1)^l ⟨dLf^{ρ−1} h, g⟩, for k + l = ρ − 1. (6.11)
Since ⟨dLf^{ρ−1} h, g⟩ = Lg Lf^{ρ−1} h ≠ 0, the product of
∂ξ/∂x = [dh(x); dLf h(x); . . . ; dLf^{ρ−1} h(x)]
with [g, adf g, . . . , adf^{ρ−1} g] is anti-triangular with non-zero entries on the
anti-diagonal, and hence ∂ξ/∂x has full rank. We summarise the result on
input–output feedback linearisation in the following theorem.
Theorem 6.1. If the system in (6.3) has a well-defined relative degree ρ in D, the
input–output dynamics of the system can be linearised by the feedback control law
u = (1/(Lg Lf^{ρ−1} h(x)))(−Lf^ρ h(x) + v) (6.13)
with the resultant input–output dynamics, in the coordinates ξᵢ = Lf^{i−1} h(x) for
i = 1, . . . , ρ, given by
ξ̇1 = ξ2
..
. (6.14)
ξ̇_{ρ−1} = ξ_ρ
ξ̇_ρ = v.
The results shown in (6.10) and (6.11) can also be used to conclude the following
result which is needed in the next section.
Remark 6.2. The input–output dynamics can be linearised based on Theorem 6.1. In
the case of ρ < n, the system can be transformed, under certain conditions,
to the normal form
ż = f0(z, ξ),
ξ̇1 = ξ2,
..
.
ξ̇_{ρ−1} = ξ_ρ,
ξ̇_ρ = Lf^ρ h + uLg Lf^{ρ−1} h,
y = ξ1,
where z ∈ R^{n−ρ} is the part of the state variables which are not in the input–output
dynamics of the system, and f0 : Rⁿ → R^{n−ρ} is a smooth function. It is clear that
when ρ < n, the input–output linearisation does not linearise the dynamics
ż = f0(z, ξ). Also note that the dynamics ż = f0(z, 0) are referred to as the zero
dynamics of the system.
With the coordinates z, ξ1 and ξ2 , we have the system in the normal form
ż = −z − ξ2 + ξ13
ξ̇1 = ξ2
ξ̇2 = z + ξ2 + ξ12 − ξ13 + 3ξ12 ξ2 + u.
It is clear that the input–output linearisation does not linearise the dynamics of z. Also
note that the zero dynamics for this system are described by
ż = −z.
The output function h(x) that satisfies the condition shown in (6.20) is a solution of
the partial differential equation
(∂h/∂x)[g, adf g, . . . , adf^{n−2} g] = 0. (6.22)
To discuss the solution of this partial differential equation, we need a few notions
and results. We refer to the span of a collection of vector fields as a distribution: if
f1(x), . . . , fk(x) are vector fields, with k a positive integer, Δ = span{f1, . . . , fk} is a
distribution. A distribution Δ is said to be involutive if, for any vector fields
f, g ∈ Δ, the Lie bracket [f, g] also belongs to Δ.
Consider the distribution Δ = span{f1, f2}, where
f1(x) = [2x2; 1; 0], f2(x) = [1; 0; x2].
A direct evaluation gives
[f1, f2] = (∂f2/∂x)f1 − (∂f1/∂x)f2
= [0 0 0; 0 0 0; 0 1 0][2x2; 1; 0] − [0 2 0; 0 0 0; 0 0 0][1; 0; x2]
= [0; 0; 1].
It can be shown that [f1, f2] ∉ Δ, and therefore Δ is not involutive. Indeed, the rank
of the matrix [f1, f2, [f1, f2]] is 3, which means that [f1, f2] is linearly independent of f1
and f2, and it cannot be a vector field in Δ. The rank of [f1, f2, [f1, f2]] can be verified
by its non-zero determinant:
|[f1, f2, [f1, f2]]| = det [2x2 1 0; 1 0 0; 0 x2 1] = −1.
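The Lie bracket and the rank test in this example can also be verified symbolically; a sketch with sympy (assuming it is available):

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
xs = sp.Matrix([x1, x2, x3])
f1 = sp.Matrix([2 * x2, 1, 0])
f2 = sp.Matrix([1, 0, x2])

# Lie bracket [f1, f2] = (df2/dx) f1 - (df1/dx) f2
bracket = f2.jacobian(xs) * f1 - f1.jacobian(xs) * f2

M = sp.Matrix.hstack(f1, f2, bracket)
assert list(bracket) == [0, 0, 1]
assert sp.simplify(M.det()) == -1   # rank 3: [f1, f2] is not in span{f1, f2}
```

A nonzero determinant of [f₁, f₂, [f₁, f₂]] is exactly the failure of involutivity, which by the Frobenius theorem rules out a solution h of the partial differential equation for this distribution.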
Now we are ready to state the main result for full-state feedback linearisation.
Theorem 6.4. The system (6.16) is full-state feedback linearisable if and only if
∀x ∈ D
● the matrix G = [g(x), adf g(x), . . . , adfn−1 g(x)] has full rank
● the distribution Gn−1 = span{g(x), adf g(x), . . . , adfn−2 g(x)} is involutive
Proof. For the sufficiency, we only need to show that there exists a function h(x) such
that the relative degree of the system by viewing h(x) as the output is n, and the rest
follows from Theorem 6.1. From the second condition that Gn−1 is involutive, and
Frobenius theorem, there exists a function h(x) such that
(∂h/∂x)[g, adf g, . . . , adf^{n−2} g] = 0,
which is equivalent to, from Lemma 6.2,
Lg Lf^k h(x) = 0, for k = 0, 1, . . . , n − 2,
i.e., the system with h(x) as the output has relative degree n.
which implies
(∂h/∂x)[g, adf g, . . . , adf^{n−2} g] = 0.
From the Frobenius theorem, we conclude that Gn−1 is involutive. Furthermore, from the
fact that the system with h(x) as the output has relative degree n, we can show, in
the same way as the discussion leading to Lemma 6.2, that
[dh(x); dLf h(x); . . . ; dLf^{n−1} h(x)] [g(x), adf g(x), . . . , adf^{n−1} g(x)]
is the anti-triangular matrix
[0 . . . 0 (−1)^{n−1} r(x);
0 . . . (−1)^{n−2} r(x) ∗;
.. .. .. ..;
r(x) . . . ∗ ∗],
where r(x) = Lg Lf^{n−1} h(x) ≠ 0. This implies that G has rank n. This concludes the
proof. □
Remark 6.3. Let us examine the conditions of Theorem 6.4 for a linear system
ẋ = Ax + bu, for which f(x) = Ax and g(x) = b. A direct evaluation gives
[f, g] = −Ab,
and in general
adf^k g = (−1)^k A^k b.
Hence G = [b, −Ab, . . . , (−1)^{n−1}A^{n−1}b], and the full rank condition of G is
equivalent to the full controllability of the linear system.
In the next example, we consider the dynamics of the system that was consid-
ered in Example 6.4 for input–output linearisation with the output h(x) = x1. We will
show that full-state linearisation can be achieved by finding a suitable output
function h(x).
A direct evaluation gives
Lg h = 0,
Lf h = x2 + x1³ − x3,
Lg Lf h = 0,
Lf² h = 3x1²(x1³ + x2) + x3,
Lg Lf² h = 3x1² + 1,
Lf³ h = (15x1⁴ + 6x1x2)(x1³ + x2) + 3x1²(x3 + x1²) + x1².
Since
Lg Lf² h = 3x1² + 1 ≠ 0,
the relative degree is 3, and full-state linearisation is achieved with the state
transformation
ξ1 = x1 − x2 + x3,
ξ2 = x2 + x1³ − x3,
ξ3 = 3x1²(x1³ + x2) + x3,
and the control law
u = (1/(3x1² + 1))(v − (15x1⁴ + 6x1x2)(x1³ + x2) − 3x1²(x3 + x1²) − x1²).
Chapter 7
Adaptive control of linear systems
The basic design idea can be clearly demonstrated by first-order systems. Consider a
first-order system
ẏ + ap y = bp u, (7.1)
where y and u ∈ R are the system output and input respectively, and ap and bp are
unknown constant parameters with sgn(bp ) known. The output y is to follow the output
of the reference model
ẏm + am ym = bm r. (7.2)
The reference model is stable, i.e., am > 0. The signal r is the reference input. The
design objective is to make the tracking error e = y − ym converge to 0.
Let us first design a Model Reference Control (MRC), that is, the control design
assuming all the parameters are known, to ensure that the output y follows ym .
Rearranging the system model as
ẏ + am y = bp(u − ((ap − am)/bp)y)
and subtracting the reference model (7.2), we obtain
ė + am e = bp(u − ((ap − am)/bp)y − (bm/bp)r) := bp(u − ay y − ar r),
where
ay = (ap − am)/bp,
ar = bm/bp.
If all the parameters are known, the control law is designed as
u = ar r + ay y (7.3)
and the resultant closed-loop system is given by
ė + am e = 0.
The tracking error converges to zero exponentially.
One important design principle in adaptive control is the so-called the certainty
equivalence principle, which suggests that the unknown parameters in the control
design are replaced by their estimates. Hence, when the parameters are unknown,
let âr and ây denote their estimates of ar and ay , and the control law, based on the
certainty equivalence principle, is given by
u = âr r + ây y. (7.4)
Note that the parameters ar and ay are the parameters of the controllers, and they
are related to the original system parameters ap and bp , but not the original system
parameters themselves.
The certainty equivalence principle only suggests a way to design the adaptive
control input, not how to update the parameter estimates. Stability issues must be
considered when deciding the adaptive laws, i.e., the way in which the estimated parameters
are updated. For first-order systems, the adaptive laws can be determined from Lyapunov
function analysis.
With the proposed adaptive control input (7.4), the closed-loop system dynamics
are described by
ė + am e = bp (− ãy y − ãr r), (7.5)
where ãr = ar − âr and ãy = ay − ây . Consider the Lyapunov function candidate
V = ½e² + (|bp|/(2γr))ãr² + (|bp|/(2γy))ãy², (7.6)
where γr and γy are constant positive real design parameters. Its derivative along the
trajectory (7.5) is given by
V̇ = −am e² + ãr((|bp|/γr)ã̇r − ebp r) + ãy((|bp|/γy)ã̇y − ebp y).
If we can set
(|bp|/γr)ã̇r − ebp r = 0, (7.7)
(|bp|/γy)ã̇y − ebp y = 0, (7.8)
we have
V̇ = −am e2 . (7.9)
Noting that â˙ r = −ã˙ r and â˙ y = −ã˙ y , the conditions in (7.7) and (7.8) can be satisfied
by setting the adaptive laws as
â˙r = −sgn(bp) γr e r, (7.10)
â˙y = −sgn(bp) γy e y. (7.11)
The positive real design parameters γr and γy are often referred to as adaptive gains,
as they can affect the speed of parameter adaptation.
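The complete first-order MRAC loop — the certainty-equivalence control (7.4) together with the adaptive laws (7.10) and (7.11) — can be sketched as follows; the plant values and adaptive gains below are illustrative assumptions:

```python
import numpy as np

ap, bp = -1.0, 2.0                 # unknown to the controller; only sgn(bp) is used
am, bm = 2.0, 2.0                  # reference model
gr, gy = 10.0, 10.0                # adaptive gains
ar_hat, ay_hat = 0.0, 0.0          # parameter estimates
y, ym = 0.0, 0.0
dt, T, r = 1e-4, 30.0, 1.0

for k in range(int(T / dt)):
    e = y - ym
    u = ar_hat * r + ay_hat * y                  # certainty-equivalence law (7.4)
    ar_hat += dt * (-np.sign(bp) * gr * e * r)   # adaptive law (7.10)
    ay_hat += dt * (-np.sign(bp) * gy * e * y)   # adaptive law (7.11)
    y += dt * (-ap * y + bp * u)
    ym += dt * (-am * ym + bm * r)

print(abs(y - ym), ar_hat, ay_hat)   # e converges to zero; estimates stay bounded
```

As Lemma 7.2 predicts, the tracking error converges while the estimates remain bounded; with a constant reference they need not converge to the true values ar = 1, ay = −1.5.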
From (7.9) and Theorem 4.2, we conclude that the system is Lyapunov stable
with all the variables e, ãr and ãy bounded, and hence the boundedness of âr and ây .
However, based on the stability theorems introduced in Chapter 4, we cannot
conclude anything about the tracking error e other than its boundedness. In order to do so, we need to introduce an important lemma for the stability analysis of adaptive
control systems.
92 Nonlinear and adaptive control systems
Lemma 7.2. For the first-order system (7.1) and the reference model (7.2), the adap-
tive control input (7.4) together with the adaptive laws (7.10) and (7.11) ensures the
boundedness of all the variables in the closed-loop system, and the convergence to
zero of the tracking error.
Remark 7.1. The stability analysis ensures the convergence to zero of the tracking error, but nothing can be said about the convergence of the estimated parameters. The estimated parameters are only assured to be bounded by the stability analysis. In general, the convergence of the tracking error to zero and the boundedness of the adaptive parameters are the stability results that we can establish for MRAC. The convergence of the estimated parameters may be achieved by imposing certain conditions on the reference signal to ensure that the system is excited enough. This is similar to the concept of persistent excitation in system identification.
ẏm + 2ym = r,
we obtain that
ė + 2e = u − ay y − r.
The stability analysis follows the same discussion that leads to Lemma 7.2. A simulation study has been carried out with a = −1, γ = 10 and r = 1. The simulation results are shown in Figure 7.1. The figure shows that the estimated parameter converges to the true value ay = −3. The convergence of the estimated parameters is not guaranteed by Lemma 7.2. Indeed, some strong conditions on the input or reference signals are in general needed to generate enough excitation for the estimated parameters to converge.
[Figure 7.1: Simulation results — top: y and ym against t (s); bottom: the estimated parameter against t (s), converging to −3.]
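The simulation described above can be reproduced approximately as follows; since part of the example setup is cut from this excerpt, the plant ẏ + a y = u (i.e., bp = 1) and a known ar = 1 are assumptions inferred from the surrounding equations:

```python
a, gamma, r = -1.0, 10.0, 1.0     # values used in the text's simulation
# true controller parameter: ay = (a - am)/bp = -3 with am = 2, bp = 1 (assumed)
ay_hat, y, ym = 0.0, 0.0, 0.0
dt, T = 1e-4, 10.0

for k in range(int(T / dt)):
    e = y - ym
    u = r + ay_hat * y            # only ay is estimated here
    ay_hat += dt * (-gamma * e * y)
    y += dt * (-a * y + u)        # plant: ydot + a*y = u
    ym += dt * (-2.0 * ym + r)    # reference model: ymdot + 2*ym = r

print(ay_hat, abs(y - ym))        # ay_hat approaches -3; the error vanishes
```

Here the steady output y → 0.5 ≠ 0 keeps the single-parameter regressor persistently exciting, which is why this particular estimate does converge.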
y(s) = kp (Zp(s)/Rp(s)) u(s), (7.13)
where y(s) and u(s) denote the system output and input in frequency domain; kp is the
high frequency gain; and Zp and Rp are monic polynomials with orders of n − ρ and
n respectively with ρ as the relative degree. The reference model is chosen to have
the same relative degree of the system, and is described by
ym(s) = km (Zm(s)/Rm(s)) r(s), (7.14)
where ym (s) is the reference output for y(s) to follow; r(s) is a reference input; and
km > 0 and Zm and Rm are monic Hurwitz polynomials.
The objective of MRC is to design a control input u such that the output of the
system asymptotically follows the output of the reference model, i.e., limt→∞ (y(t) −
ym (t)) = 0.
Note that in this chapter, we abuse notation for y, u and r by using the same symbols for the functions in the time domain and their Laplace transforms in the frequency domain. It should be clear from the context that y(s) is the Laplace transform of y(t), and similarly for u and r.
To design MRC for systems with ρ = 1, we follow a similar manipulation to the
first-order system by manipulating the transfer functions. We start with
Zp(s)/Zm(s) = 1 − θ1^T α(s)/Zm(s),
and then
((Rm(s) − Rp(s))/Zm(s)) y(s) = −(θ2^T α(s)/Zm(s)) y(s) − θ3 y(s),
where θ1 ∈ R^{n−1}, θ2 ∈ R^{n−1} and θ3 ∈ R are constants and α(s) = [s^{n−2}, …, s, 1]^T. Substituting these parameterisations into the system model, we obtain
y(s) = kp (Zm(s)/Rm(s)) [ u(s) − (θ1^T α(s)/Zm(s)) u(s) − (θ2^T α(s)/Zm(s)) y(s) − θ3 y(s) ]. (7.15)
The control input is then designed as
u = θ^T ω, (7.17)
where
θ^T = [θ1^T, θ2^T, θ3, θ4], with θ4 = km/kp,
ω = [ω1^T, ω2^T, y, r]^T,
and
ω1 = (α(s)/Zm(s)) u,
ω2 = (α(s)/Zm(s)) y.
Remark 7.3. The control design shown in (7.17) is a dynamic feedback controller. Each element in the transfer matrix α(s)/Zm(s) is strictly proper, i.e., with relative degree greater than or equal to 1. The total number of parameters in θ equals 2n.
Lemma 7.3. For the system (7.13) with relative degree 1, the control input (7.17) solves the MRC problem with the reference model (7.14), i.e., limt→∞ (y(t) − ym(t)) = 0.
Proof. With the control input (7.17), the closed-loop dynamics are given by
e1(s) = kp (Zm(s)/Rm(s)) ε(s),
where ε(s) denotes exponentially convergent signals due to non-zero initial values. The reference model is stable, and hence the tracking error e1(t) converges to zero exponentially. □
where
ω1(s) = (1/(s + 3)) u(s),
ω2(s) = (1/(s + 3)) y(s).
Note that the control input in the time domain is given by
u(t) = [2  10  −4  1][ω1(t)  ω2(t)  y(t)  r(t)]^T,
where
ω̇1 = −3ω1 + u,
ω̇2 = −3ω2 + y.
For a system with ρ > 1, the input in the same format as (7.17) can be obtained.
The only difference is that Zm is of order n − ρ < n − 1. In this case, we let P(s) be a
monic and Hurwitz polynomial with order ρ − 1 so that Zm (s)P(s) is of order n − 1.
We adopt a slightly different approach from the case of ρ = 1.
Consider the identity
y(s) = (Zm(s)/Rm(s)) (Rm(s)P(s)/(Zm(s)P(s))) y(s)
     = (Zm(s)/Rm(s)) ((Q(s)Rp(s) + Δ(s))/(Zm(s)P(s))) y(s). (7.18)
Note that the second equation in (7.18) follows from the identity
Rm(s)P(s) = Q(s)Rp(s) + Δ(s),
where Q(s) is a monic polynomial of order ρ − 1, and Δ(s) is a polynomial of order n − 1. In fact, Q(s) can be obtained by dividing Rm(s)P(s) by Rp(s) using long division, and Δ(s) is the remainder of the polynomial division. From the transfer function of the system, we have
Rp(s)y(s) = kp Zp(s)u(s).
Substituting it into (7.18), we have
y(s) = kp (Zm(s)/Rm(s)) [ (Q(s)Zp(s)/(Zm(s)P(s))) u(s) + (kp^{−1} Δ(s)/(Zm(s)P(s))) y(s) ].
Similar to the case for ρ = 1, if we parameterise the transfer functions as
Q(s)Zp(s)/(Zm(s)P(s)) = 1 − θ1^T α(s)/(Zm(s)P(s)),
kp^{−1} Δ(s)/(Zm(s)P(s)) = −θ2^T α(s)/(Zm(s)P(s)) − θ3,
where e1 = y − ym and θ4 = km/kp. The control input is designed as
Remark 7.4. The final control input is in the same format as shown for the case ρ = 1. The filters for ω1 and ω2 are of the same order as in the case ρ = 1, as the order of Zm(s)P(s) is still n − 1.
Lemma 7.4. For the system (7.13) with relative degree ρ > 1, the control input (7.19) solves the MRC problem with the reference model (7.14), i.e., limt→∞ (y(t) − ym(t)) = 0.
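The long division used above to construct Q(s) and Δ(s) can be checked numerically; a sketch with illustrative polynomials (taking n = 2, ρ = 2, so P(s) has order 1; the plant denominator Rp is an arbitrary assumed example):

```python
import numpy as np

# Rm(s) = s^2 + 3s + 2, P(s) = s + 1, Rp(s) = s^2 - s - 1 (illustrative, "unknown")
Rm = np.array([1.0, 3.0, 2.0])
P = np.array([1.0, 1.0])
Rp = np.array([1.0, -1.0, -1.0])

RmP = np.polymul(Rm, P)            # (s^2+3s+2)(s+1) = s^3 + 4s^2 + 5s + 2
Q, Delta = np.polydiv(RmP, Rp)     # long division: Rm*P = Q*Rp + Delta

print(Q, Delta)                    # Q(s) = s + 5 (order rho - 1), Delta(s) = 11s + 7
# Verify the identity Rm(s)P(s) = Q(s)Rp(s) + Delta(s):
assert np.allclose(np.polyadd(np.polymul(Q, Rp), Delta), RmP)
```

The quotient order ρ − 1 = 1 and remainder order ≤ n − 1 = 1 match the construction in the text.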
where y(s) and u(s) denote the system output and input in frequency domain; kp is
the high frequency gain; and Zp and Rp are monic polynomials with orders of n − 1
and n respectively. This system is assumed to be minimum phase, i.e., Zp (s) is a
Hurwitz polynomial, and the sign of the high-frequency gain, sgn(kp ), is known. The
coefficients of the polynomials and the value of kp are constants and unknown. The
reference model is chosen to have relative degree 1 and to be strictly positive real, and
is described by
ym(s) = km (Zm(s)/Rm(s)) r(s), (7.21)
where ym (s) is the reference output for y(s) to follow, r(s) is a reference input, and
Zm (s) and Rm (s) are monic polynomials and km > 0. Since the reference model is
strictly positive real, Zm and Rm are Hurwitz polynomials.
MRC shown in the previous section gives the control design in (7.17). Based on
the certainty equivalence principle, we design the adaptive control input as
u(s) = θ̂ T ω, (7.22)
where θ̂ is an estimate of the unknown vector θ ∈ R2n , and ω is given by
ω = [ω1T , ω2T , y, r]T
with
ω1 = (α(s)/Zm(s)) u,
ω2 = (α(s)/Zm(s)) y.
With the designed adaptive control input, it can be obtained, from the tracking
error dynamics shown in (7.16), that
e1(s) = kp (Zm(s)/Rm(s)) (θ̂^T ω − θ^T ω)
      = km (Zm(s)/Rm(s)) ( −(kp/km) θ̃^T ω ), (7.23)
where θ̃ = θ − θ̂.
To analyse the stability using a Lyapunov function, we put the error dynamics in
the state space form as
ė = Am e + bm ( −(kp/km) θ̃^T ω ),
e1 = cm^T e. (7.24)
Consider the Lyapunov function candidate
V = (1/2) e^T Pm e + (1/2)(|kp|/km) θ̃^T Γ^{−1} θ̃,
where Γ ∈ R^{2n×2n} is a positive definite matrix. Its derivative is given by
V̇ = (1/2) e^T (Am^T Pm + Pm Am) e + e^T Pm bm ( −(kp/km) θ̃^T ω ) + (|kp|/km) θ̃^T Γ^{−1} θ̃˙.
Using the results from (7.25) and (7.26), we have
V̇ = −(1/2) e^T Qm e + e1 ( −(kp/km) θ̃^T ω ) + (|kp|/km) θ̃^T Γ^{−1} θ̃˙
  = −(1/2) e^T Qm e + (|kp|/km) θ̃^T ( Γ^{−1} θ̃˙ − sgn(kp) e1 ω ).
Theorem 7.5. For the system (7.20) and the reference model (7.21), the
adaptive control input (7.22) together with the adaptive law (7.27) ensures the bound-
edness of all the variables in the closed-loop system, and the convergence to zero of
the tracking error.
Remark 7.5. The stability result shown in Theorem 7.5 only guarantees the conver-
gence of the tracking error to zero, not the convergence of the estimated parameters.
In the stability analysis, we use the Kalman–Yakubovich lemma for the definition of the Lyapunov function and the stability proof. That is why we choose the reference model
to be strictly positive real. From the control design point of view, we do not need
to know the actual values of Pm and Qm , as long as they exist, which is guaranteed
by the selection of a strictly positive real model. It is also clear from the stability analysis that the unknown parameters must be constant. Otherwise, we would not have θ̂˙ = −θ̃˙.
In this section, we will introduce adaptive control design for linear systems with their
relative degrees higher than 1. Similar to the case for relative degree 1, the certainty
equivalence principle can be applied to the control design, but the designs of the
adaptive laws and the stability analysis are much more involved, due to the higher
relative degrees. One difficulty is that there is not a clear choice of Lyapunov function
candidate as in the case of ρ = 1.
Consider an nth-order system with the transfer function
y(s) = kp (Zp(s)/Rp(s)) u(s), (7.28)
where y(s) and u(s) denote the system output and input in frequency domain, kp is
the high frequency gain, Zp and Rp are monic polynomials with orders of n − ρ and
n respectively, with ρ > 1 being the relative degree of the system. This system is
assumed to be minimum phase, i.e., Zp(s) is a Hurwitz polynomial, and the sign of the
high-frequency gain, sgn(kp ), is known. The coefficients of the polynomials and the
value of kp are constants and unknown. The reference model is chosen as
ym(s) = km (Zm(s)/Rm(s)) r(s), (7.29)
where ym (s) is the reference output for y(s) to follow; r(s) is a reference input; and
Zm (s) and Rm (s) are monic polynomials with orders n − ρ and n respectively and
km > 0. The reference model (7.29) is required to satisfy an additional condition that
there exists a monic and Hurwitz polynomial P(s) of order ρ − 1 such that
ym(s) = km (Zm(s)P(s)/Rm(s)) r(s) (7.30)
is strictly positive real. This condition also implies that Zm and Rm are Hurwitz
polynomials.
MRC shown in the previous section gives the control design in (7.19). We design
the adaptive control input, again using the certainty equivalence principle, as
u = θ̂^T ω, (7.31)
with
ω1 = (α(s)/(Zm(s)P(s))) u,
ω2 = (α(s)/(Zm(s)P(s))) y.
The design of the adaptive law is more involved, and we need to examine the dynamics of the tracking error, which are given by
e1 = kp (Zm/Rm) (u − θ^T ω)
   = km (Zm P(s)/Rm) k (uf − θ^T φ), (7.32)
where
k = kp/km,  uf = (1/P(s)) u  and  φ = (1/P(s)) ω.
An auxiliary error is constructed as
ε = e1 − km (Zm P(s)/Rm) [ k̂ (uf − θ̂^T φ) ] − km (Zm P(s)/Rm) [ ε n_s² ], (7.33)
where k̂ is an estimate of k, and n_s² = φ^T φ + uf². The adaptive laws are designed as
Theorem 7.6. For the system (7.28) and the reference model (7.29), the adaptive
control input (7.31) together with the adaptive laws (7.34) and (7.35) ensures the
boundedness of all the variables in the closed-loop system, and the convergence to
zero of the tracking error.
● measurement noise
● computation roundoff error and sampling delay
y = θ ω. (7.36)
where
ε = y − θ̂ω.
With the gradient adaptive law θ̂˙ = γεω, the derivative of
V = (1/(2γ)) θ̃²
is given by
V̇ = −θ̃ (y − θ̂ω) ω
  = −θ̃² ω². (7.38)
The boundedness of θ̂ can then be concluded, no matter what the signal ω is.
Now, if the signal is corrupted by some unknown bounded disturbance d(t),
i.e.,
y = θ ω + d(t).
Remark 7.6. If ω is a constant, then from (7.38) we can show that the estimate converges exponentially to the true value. In this case, there is only one unknown parameter. If θ is a vector, the requirement for convergence is much stronger. In the above example, ω is bounded, but not in a persistent way. It demonstrates that even a bounded disturbance can cause the estimated parameter to diverge.
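The exponential convergence claimed in Remark 7.6 for a constant scalar ω can be checked directly: with the gradient law θ̂˙ = γεω and y = θω, the error obeys θ̃˙ = −γω²θ̃. A minimal sketch with illustrative values:

```python
import math

theta, omega, gamma = 2.0, 1.5, 1.0   # illustrative constants
theta_hat = 0.0
dt, T = 1e-4, 3.0

for k in range(int(T / dt)):
    eps = theta * omega - theta_hat * omega   # eps = y - theta_hat*omega, y = theta*omega
    theta_hat += dt * gamma * eps * omega     # gradient adaptive law

err = abs(theta - theta_hat)
pred = abs(theta) * math.exp(-gamma * omega**2 * T)   # |theta_tilde(0)|*exp(-gamma*omega^2*T)
print(err, pred)   # the simulated error matches the exponential prediction closely
```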
where g is a constant satisfying g > |d(t)| for all t. For |ε| > g, we have
V̇ = −θ̃εω
  = −ε(θω − θ̂ω)
  = −ε(y − d(t) − θ̂ω)
  = −ε(ε − d(t))
  < 0.
Therefore, we have
V̇ < 0 for |ε| > g,  and  V̇ = 0 for |ε| ≤ g,
and we can conclude that V is bounded. Intuitively, when the error is small, the
bounded disturbance can be more dominant, and therefore, the correct adaptation
direction is corrupted by the disturbance. In such a case, a simple strategy would be
just to stop parameter adaptation. The parameter adaptation stops in the range || ≤ g,
and for this reason, this modification takes the name ‘dead-zone’ modification. The
size of the dead zone depends on the size of the bounded disturbances. One problem
with the dead-zone modification is that the adaptive law is discontinuous, and this
may not be desirable in some applications.
σ-Modification is another strategy to ensure the boundedness of estimated parameters. The adaptive law is modified by adding an additional term −γσθ̂ to the normal adaptive law as
θ̂˙ = γεω − γσθ̂. (7.42)
V̇ = −ε(ε − d(t)) + σθ̃θ̂
  = −ε² + εd(t) − σθ̃² + σθ̃θ
  ≤ −ε²/2 + d0²/2 − σθ̃²/2 + σθ²/2
  ≤ −σγV + d0²/2 + σθ²/2, (7.43)
where d0 ≥ |d(t)|, ∀t ≥ 0. Applying Lemma 4.5 (comparison lemma) to (7.43), we have
V(t) ≤ e^{−σγt} V(0) + ∫₀ᵗ e^{−σγ(t−τ)} ( d0²/2 + σθ²/2 ) dτ,
and therefore we can conclude that V ∈ L∞, which implies the boundedness of the estimated parameter. A bound can be obtained for the estimated parameter as
V(∞) ≤ (1/(σγ)) ( d0²/2 + σθ²/2 ). (7.44)
Note that this modification does not need a bound for the bounded disturbances, and
also it provides a continuous adaptive law. For these reasons, σ -modification is one
of the most widely used modifications for parameter adaptation.
θ̂˙ + γσθ̂ = γεω.
Since γσ is a positive constant, the adaptive law can be viewed as a stable first-order dynamic system with γεω as the input and θ̂ as the output. With a bounded input, θ̂ obviously remains bounded.
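The two robust modifications can be compared on the scalar estimation problem y = θω + d(t); all numerical values below are illustrative assumptions:

```python
import math

theta, omega, gamma = 2.0, 1.0, 5.0
d = lambda t: 0.3 * math.sin(3.0 * t)     # bounded disturbance, |d| <= 0.3
g, sigma = 0.35, 0.05                     # dead-zone size g > sup|d|; sigma-mod gain
th_dz, th_sm = 0.0, 0.0                   # estimates for the two laws
dt, T = 1e-4, 20.0

for k in range(int(T / dt)):
    t = k * dt
    y = theta * omega + d(t)
    # dead-zone modification: adapt only when |eps| > g
    eps = y - th_dz * omega
    th_dz += dt * (gamma * eps * omega if abs(eps) > g else 0.0)
    # sigma-modification (7.42): theta_hat_dot = gamma*eps*omega - gamma*sigma*theta_hat
    eps = y - th_sm * omega
    th_sm += dt * (gamma * eps * omega - gamma * sigma * th_sm)

print(th_dz, th_sm)   # both estimates stay bounded in a neighbourhood of theta = 2
```

The dead-zone estimate freezes once |ε| ≤ g (a discontinuous law), while the σ-modified estimate settles with a small bias of order σθ, matching the bound (7.44).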
The robust adaptive laws introduced here can be applied to various adaptive
control schemes. We demonstrate the application of a robust adaptive law to MRAC
with ρ = 1. We start directly from the error model (7.24) with an additional bounded
disturbance
ė = Am e + bm ( −k θ̃^T ω + d(t) ),
e1 = cm^T e, (7.45)
where k = kp/km and d(t) is a bounded disturbance with bound d0, which represents the non-parametric uncertainty in the system. As discussed earlier, we need a
robust adaptive law to deal with the bounded disturbances. If we take σ -modification,
then the robust adaptive law is
θ̂˙ = −sgn(k) Γ e1 ω − σ Γ θ̂.
We will show that this adaptive law ensures the boundedness of the variables. Let
V = (1/2) e^T P e + (1/2) |k| θ̃^T Γ^{−1} θ̃.
Similar to the analysis leading to Theorem 7.5, the derivative of V is obtained as
V̇ = −(1/2) e^T Q e + e1 ( −k θ̃^T ω + d ) + |k| θ̃^T Γ^{−1} θ̃˙
  ≤ −(1/2) λmin(Q) ‖e‖² + e1 d + |k| σ θ̃^T θ̂
  ≤ −(1/2) λmin(Q) ‖e‖² + |e1 d| − |k| σ ‖θ̃‖² + |k| σ θ̃^T θ.
Note that
|e1 d| ≤ (1/4) λmin(Q) ‖e‖² + d0²/λmin(Q),
|θ̃^T θ| ≤ (1/2) ‖θ̃‖² + (1/2) ‖θ‖².
Hence, we have
V̇ ≤ −(1/4) λmin(Q) ‖e‖² − (|k|σ/2) ‖θ̃‖² + d0²/λmin(Q) + (|k|σ/2) ‖θ‖²
  ≤ −αV + d0²/λmin(Q) + (|k|σ/2) ‖θ‖²,
where α is a positive real and
where x̂ ∈ Rn is the estimate of the state x, and L ∈ Rn×m is the observer gain such
that (A − LC) is Hurwitz.
For the observer (8.2), it is easy to see that the estimate x̂ converges to x asymptotically. Let x̃ = x − x̂, and we can obtain
x̃˙ = (A − LC)x̃.
Remark 8.1. In the observer design, we do not consider control input terms in the
system in (8.1), as they do not affect the observer design for linear systems. In fact, if a Bu term is added to the right-hand side of the system (8.1), then for the observer design we can simply add it to the right-hand side of the observer in (8.2), and the observer error will still converge to zero exponentially.
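A minimal sketch of this observer, assuming the standard form x̂˙ = Ax̂ + Bu + L(y − Cx̂) for (8.2) with the Bu term of Remark 8.1; the double-integrator plant and the gain L (placing the error poles at −1 and −2) are illustrative choices:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative)
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
L = np.array([3.0, 2.0])                 # A - L C has eigenvalues -1 and -2

x = np.array([1.0, -1.0])                # true state, unknown to the observer
x_hat = np.zeros(2)                      # observer starts from zero
dt, T = 1e-3, 8.0
for k in range(int(T / dt)):
    u = np.sin(k * dt)                   # any known input signal
    y = C @ x
    x_hat = x_hat + dt * (A @ x_hat + B * u + L * (y - C @ x_hat))
    x = x + dt * (A @ x + B * u)

print(np.linalg.norm(x - x_hat))         # error obeys (A - LC) dynamics and decays
```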
Remark 8.2. The observability condition for the observer design of (8.1) can be
relaxed to the detectability of the system, or the condition that (A, C) is detectable,
for the existence of an observer gain L such that (A − LC) is Hurwitz. Detectability is
weaker than observability, and it basically requires the unstable modes of the system to be observable. The pair (A, C) is detectable if the matrix
[ λI − A ]
[   C    ]
has rank n for any λ in the closed right half of the complex plane. Some other design
methods shown in this chapter also need only the condition of detectability, although,
for simplicity, we state the requirement for the observability.
There is another approach to full-state observer design for linear systems (8.1).
Consider a dynamic system
ż = Fz + Gy, (8.3)
where z ∈ Rn is the state, F ∈ Rn×n is Hurwitz, and G ∈ Rn×m. If there exists an invertible matrix T ∈ Rn×n such that z converges to Tx, then (8.3) is an observer with the state estimate given by x̂ = T −1 z. Let
e = Tx − z.
A direct evaluation gives
ė = TAx − (Fz + GCx)
= F(Tx − z) + (TA − FT − GC)x
= Fe + (TA − FT − GC)x.
If we have
TA − FT − GC = 0,
the system (8.3) is an observer with an exponentially convergent estimation error. We
have the following lemma to summarise this observer design.
Nonlinear observer design 111
Lemma 8.1. The dynamic system (8.3) is an observer for the system (8.1) if and only
if F is Hurwitz and there exists an invertible matrix T such that
TA − FT = GC. (8.4)
Proof. The sufficiency has been shown in the above analysis. For necessity, we only
need to observe that if any of the conditions is not satisfied, we cannot guaran-
tee the convergence of e to zero for a general linear system (8.1). Indeed, if (8.4)
is not satisfied, then e will be a state variable with a non-zero input, and we can set
up a case such that e does not converge to zero. The same applies to the condition that F is Hurwitz. □
How to find matrices F and G such that the condition (8.4) is satisfied? We list
the result in the following lemma without the proof.
Lemma 8.2. Suppose that F and A have no common eigenvalues. The necessary condition for the existence of a non-singular solution T to the matrix equation (8.4) is that the pair (A, C) is observable and the pair (F, G) is controllable. This condition is also sufficient when the system (8.1) is single-output, i.e., m = 1.
This lemma suggests that we can choose a controllable pair (F, G) and make sure that the eigenvalues of F are different from those of A. An observer can be designed if there is a solution T to (8.4). For a single-output system, the solution is guaranteed.
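A pair (F, G) satisfying Lemma 8.2 can be checked by solving (8.4) numerically via vectorisation; the matrices below are illustrative:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
C = np.array([[1.0, 0.0]])
F = np.diag([-4.0, -5.0])                  # Hurwitz, eigenvalues distinct from A's
G = np.array([[1.0], [1.0]])               # (F, G) controllable

# Solve TA - FT = GC: (A^T kron I - I kron F) vec(T) = vec(GC), column-major vec
n = A.shape[0]
I = np.eye(n)
M = np.kron(A.T, I) - np.kron(I, F)
vecT = np.linalg.solve(M, (G @ C).flatten(order="F"))
T = vecT.reshape((n, n), order="F")

print(np.linalg.norm(T @ A - F @ T - G @ C))   # condition (8.4) holds (residual ~ 0)
print(np.linalg.det(T))                        # non-zero: T is invertible
```

Since A and F share no eigenvalues, the Sylvester operator is non-singular and the solution is unique; invertibility of T follows from Lemma 8.2 for this single-output example.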
Note that even though the system (8.5) is nonlinear, the observer error dynamics are linear. A system in the format of (8.5) is referred to as a system with linear observer errors. More specifically, if we drop the control input u in the function φ, the system is referred to as being in the output injection form for observer design.
Let us summarise the result in the following proposition.
Proposition 8.3. For the nonlinear system (8.5), a full-state observer can be designed
as in (8.6) if (A, C) is observable. Furthermore, the observer error dynamics are linear
and exponentially stable.
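Since the displays (8.5) and (8.6) are not reproduced in this excerpt, the sketch below assumes the standard output injection form ẋ = Ax + φ(y), y = Cx, with observer x̂˙ = Ax̂ + φ(y) + L(y − Cx̂); all numerical values are illustrative:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([1.0, 0.0])
L = np.array([3.0, 2.0])                     # A - L C has eigenvalues -1 and -2
phi = lambda y: np.array([0.0, -y - y**3])   # nonlinearity depends on the output only

x = np.array([1.0, 0.5])                     # Duffing-type oscillator state
x_hat = np.zeros(2)
dt, T = 1e-3, 8.0
for k in range(int(T / dt)):
    y = C @ x
    x_hat = x_hat + dt * (A @ x_hat + phi(y) + L * (y - C @ x_hat))
    x = x + dt * (A @ x + phi(y))

print(np.linalg.norm(x - x_hat))   # phi(y) cancels exactly; error dynamics are linear
```

Because φ is driven by the measured output, it cancels in the error equation and x̃ obeys exactly x̃˙ = (A − LC)x̃, despite the nonlinear plant.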
For a nonlinear system in a more general form, there may exist a state transfor-
mation to put the system in the format of (8.5), and then the proposed observer can
be applied.
We will introduce the conditions for the existence of a nonlinear state transformation to put the system in the format shown in (8.6). Here, we only consider the single-output case, and, for simplicity, we do not consider the system with a control input.
The system under consideration is described by
ẋ = f(x),
y = h(x), (8.7)
where x ∈ Rn is the state vector, y ∈ R is the output, and f : Rn → Rn and h : Rn → R are continuous nonlinear functions with f(0) = 0 and h(0) = 0. We will show under what conditions there exists a state transformation
z = Φ(x), (8.8)
Remark 8.3. Assume that the system (8.9) is different from (8.10) with (A, C) as
a general observable pair, instead of having the special format implied by (8.10).
In this case, we use a different variable z̄ to denote the state for (8.10), and it can be
written as
z̄˙ = Ā z̄ + φ̄(y),
y = C̄ z̄. (8.11)
It is clear that there exists a linear transformation
z̄ = Tz,
which transforms the system (8.9) to the system (8.11), because (A, C) is observable.
The transformation from (8.7) to (8.11) is given by
z̄ = T Φ(x) := Φ̄(x).
Therefore, if there exists a nonlinear transformation from (8.7) to (8.9), there must exist a nonlinear transformation from (8.7) to (8.11), and vice versa. That is why we can consider the transformed system in the format of (8.10) without loss of generality.
If the state transformation transforms the system (8.7) to (8.9), we must have
[∂Φ(x)/∂x f(x)]|x=Φ⁻¹(z) = Az + φ(Cz),
h(Φ⁻¹(z)) = Cz, (8.12)
that is,
f̄(z) = Az + φ(Cz),
h̄(z) = Cz.
h̄(z) = z1,
L_f̄ h̄(z) = z2 + φ1(z1),
L²_f̄ h̄(z) = z3 + (∂φ1/∂z1)(z2 + φ1(z1)) := z3 + φ̄2(z1, z2),
...
L^{n−1}_f̄ h̄(z) = zn + Σ_{k=1}^{n−2} (∂φ̄_{n−2}/∂z_k)(z_{k+1} + φ_k(z1))
are linearly independent. This property is invariant under state transformations, and therefore we need the same condition in the coordinate x, that is,
⎡ ∂h(x)/∂x          ⎤   ⎡ ∂h̄(z)/∂z          ⎤
⎢ ∂Lf h(x)/∂x       ⎥ = ⎢ ∂Lf̄ h̄(z)/∂z       ⎥|z=Φ(x)  ∂Φ(x)/∂x.
⎢ ⋮                 ⎥   ⎢ ⋮                  ⎥
⎣ ∂Lf^{n−1} h(x)/∂x ⎦   ⎣ ∂Lf̄^{n−1} h̄(z)/∂z ⎦
Theorem 8.4. The nonlinear system (8.7) can be transformed to the output injection form in (8.10) if and only if
● the differentials dh, dLf h, . . . , dLf^{n−1} h are linearly independent
● there exists a map Ψ : Rn → Rn such that
∂Ψ(z)/∂z = [ ad_{−f}^{n−1} r, . . . , ad_{−f} r, r ]|x=Ψ(z), (8.13)
Proof. Sufficiency. From the first condition and (8.14), we can show, in a similar way as for (6.12), that
⎡ dh(x)          ⎤
⎢ dLf h(x)       ⎥ [ ad_{−f}^{n−1} r(x), . . . , ad_{−f} r(x), r(x) ]
⎢ ⋮              ⎥
⎣ dLf^{n−1} h(x) ⎦
  ⎡ 1 0 . . . 0 ⎤
= ⎢ ∗ 1 . . . 0 ⎥
  ⎢ ⋮ ⋮  ⋱   ⋮ ⎥
  ⎣ ∗ ∗ . . . 1 ⎦ . (8.15)
Therefore, [ad_{−f}^{n−1} r(x), . . . , ad_{−f} r(x), r(x)] has full rank. This implies that there exists an inverse mapping for Ψ. Let us denote it as Φ = Ψ^{−1}, and hence we have
(∂Φ(x)/∂x) [ ad_{−f}^{n−1} r(x), . . . , ad_{−f} r(x), r(x) ] = I. (8.16)
Let us define the state transformation as z = Φ(x) and denote the functions after this transformation as
f̄(z) = [∂Φ(x)/∂x f(x)]|x=Ψ(z),
h̄(z) = h(Ψ(z)).
We need to show that the functions f̄ and h̄ are in the format of the output injection form as in (8.10). From (8.16), we have
(∂Φ(x)/∂x) ad_{−f}^{n−k} r(x) = ek, for k = 1, . . . , n,
where ek denotes the kth column of the identity matrix. Hence, we have, for k = 1, . . . , n − 1,
[∂Φ(x)/∂x ad_{−f}^{n−k} r(x)]|x=Ψ(z) = [∂Φ(x)/∂x [−f(x), ad_{−f}^{n−(k+1)} r(x)]]|x=Ψ(z)
= [(∂Φ(x)/∂x)(−f(x)), (∂Φ(x)/∂x) ad_{−f}^{n−(k+1)} r(x)]|x=Ψ(z)
= [−f̄(z), e_{k+1}]
= ∂f̄(z)/∂z_{k+1}.
This implies that
∂f̄(z)/∂z_{k+1} = ek, for k = 1, . . . , n − 1,
i.e.,
            ⎡ ∗ 1 0 . . . 0 ⎤
            ⎢ ∗ 0 1 . . . 0 ⎥
∂f̄(z)/∂z = ⎢ ⋮ ⋮ ⋮  ⋱    ⋮ ⎥ .
            ⎢ ∗ 0 0 . . . 1 ⎥
            ⎣ ∗ 0 0 . . . 0 ⎦
Therefore, we have
∂h̄(z)/∂z = [1, 0, . . . , 0].
This concludes the proof for sufficiency.
Necessity. The discussion prior to this theorem shows that the first condition is necessary. Assume that there exists a state transformation z = Φ(x) to put the system in the output injection form, and once again, we denote
f̄(z) = [∂Φ(x)/∂x f(x)]|x=Ψ(z),
h̄(z) = h(Ψ(z)),
where Ψ = Φ^{−1}. We need to show that when the functions f̄ and h̄ are in the format of the output injection form, the second condition must hold. Let
g(x) = [∂Ψ(z)/∂zn]|z=Φ(x).
Then
∂Ψ(z)/∂z = [ ad_{−f}^{n−1} g, . . . , ad_{−f} g, g ]|x=Ψ(z). (8.17)
The remaining part of the proof is to show that g(x) coincides with r(x) in (8.14). From (8.17), we have
∂h̄(z)/∂z = (∂h(x)/∂x)(∂Ψ(z)/∂z)
= (∂h(x)/∂x) [ ad_{−f}^{n−1} g, . . . , ad_{−f} g, g ]|x=Ψ(z)
= [ L_{ad_{−f}^{n−1} g} h(x), . . . , L_{ad_{−f} g} h(x), L_g h(x) ]|x=Ψ(z).
Since h̄(z) is in the output injection form, the above expression implies that
L_{ad_{−f}^{n−1} g} h(x) = 1,
Remark 8.4. It would be interesting to revisit the transformation for linear single-
output systems to the observer canonical form, to reveal the similarities between the
linear case and the conditions stated in Theorem 8.4. For a single output system
ẋ = Ax
(8.18)
y = Cx,
where x ∈ Rn is the state; y ∈ R is the system output; and A ∈ Rn×n and C ∈ R1×n
are constant matrices. When the system is observable, the observability matrix Po has full rank.
Solving for r from
⎡ C          ⎤     ⎡ 0 ⎤
⎢ CA         ⎥     ⎢ ⋮ ⎥
⎢ ⋮          ⎥ r = ⎢ 0 ⎥ ,
⎣ CA^{n−1}   ⎦     ⎣ 1 ⎦
and the state transformation matrix T is then given by
T^{−1} = [A^{n−1} r, . . . , Ar, r]. (8.19)
We can show that the transformation z = Tx transforms the system to the observer canonical form
ż = TAT^{−1} z := Ā z,
y = CT^{−1} z := C̄ z,
where
     ⎡ −a1      1 0 . . . 0 ⎤
     ⎢ −a2      0 1 . . . 0 ⎥
Ā =  ⎢ ⋮        ⋮ ⋮  ⋱    ⋮ ⎥ ,
     ⎢ −a_{n−1} 0 0 . . . 1 ⎥
     ⎣ −an      0 0 . . . 0 ⎦
C̄ = [1 0 . . . 0 0],
with constants ai, i = 1, . . . , n, being the coefficients of the characteristic polynomial
|sI − A| = s^n + a1 s^{n−1} + · · · + a_{n−1} s + an.
Indeed, from (8.19), we have
AT^{−1} = [A^n r, . . . , A² r, Ar]. (8.20)
Then from the Cayley–Hamilton theorem, we have
A^n = −a1 A^{n−1} − · · · − a_{n−1} A − an I.
Substituting this into the previous equation, we obtain that
                                  ⎡ −a1      1 0 . . . 0 ⎤
                                  ⎢ −a2      0 1 . . . 0 ⎥
AT^{−1} = [A^{n−1} r, . . . , Ar, r] ⎢ ⋮     ⋮ ⋮  ⋱    ⋮ ⎥ = T^{−1} Ā,
                                  ⎢ −a_{n−1} 0 0 . . . 1 ⎥
                                  ⎣ −an      0 0 . . . 0 ⎦
which gives
TAT −1 = Ā.
Again from (8.19), we have
CT −1 = [CAn−1 r, . . . , CAr, Cr] = [1, 0, . . . , 0] = C̄.
If we identify f(x) = Ax and h(x) = Cx, the first condition of Theorem 8.4 is that Po has full rank, and the condition in (8.13) is identical to (8.19).
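The construction in Remark 8.4 can be verified numerically for a small example:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # |sI - A| = s^2 + 3s + 2, so a1 = 3, a2 = 2
C = np.array([[1.0, 0.0]])

Po = np.vstack([C, C @ A])                 # observability matrix
r = np.linalg.solve(Po, np.array([0.0, 1.0]))
T_inv = np.column_stack([A @ r, r])        # (8.19) for n = 2
T = np.linalg.inv(T_inv)

A_bar = T @ A @ T_inv                      # observer canonical form
C_bar = C @ T_inv
print(A_bar)   # [[-3, 1], [-2, 0]]: first column is -[a1, a2], companion structure
print(C_bar)   # [[1, 0]]
```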
When the conditions of Theorem 8.4 are satisfied, the transformation z = Φ(x) exists. In such a case, an observer can be designed for the nonlinear system (8.7) as
ẑ˙ = A ẑ + L(y − C ẑ) + φ(y),
x̂ = Ψ(ẑ), (8.21)
where Ψ = Φ^{−1}.
Corollary 8.5. For nonlinear system (8.7), if the conditions for Theorem 8.4 are
satisfied, the observer in (8.21) provides an asymptotic state estimate of the system.
Proof. From Theorem 8.4, we conclude that (A, C) is observable, and there exists an
L such that (A − LC) is Hurwitz. From (8.21) and (8.10), we can easily obtain that
z̃˙ = (A − LC)z̃
Observers presented in the last section achieve the linear observer error dynamics by
transforming the nonlinear system to the output injection form, for which an observer
with linear observer error dynamics can be designed to estimate the transformed state,
and the inverse transformation is used to transform the estimate for the original state. In
this section, we take a different approach by introducing a state transformation which
directly leads to a nonlinear observer design with linear observer error dynamics, and
show that this observer can be directly implemented in the original state space.
We consider the same system (8.7) as in the previous section, and describe it here under a different equation number for the convenience of presentation:
ẋ = f(x),
y = h(x). (8.22)
We look for a state transformation
z = Φ(x) (8.23)
whose dynamics satisfy
ż = Fz + Gy (8.24)
for a chosen controllable pair (F, G) with F ∈ Rn×n Hurwitz and G ∈ Rn. Comparing (8.24) with (8.9), the matrices F and G in (8.24) are chosen for the observer design, while in (8.9), A and C are any observable pair, depending on the original system. Therefore, the transformation in (8.24) is more specific. There is an extra benefit gained from this restriction in the observer design, as shown later.
From (8.22) and (8.24), the nonlinear transformation must satisfy the following partial differential equation:
(∂Φ(x)/∂x) f(x) = F Φ(x) + G h(x). (8.25)
Our discussion will be based on a neighbourhood around the origin.
Definition 8.1. Let λi(A), i = 1, . . . , n, be the eigenvalues of a matrix A ∈ R^{n×n}. For another matrix F ∈ R^{n×n}, an eigenvalue of F is resonant with the eigenvalues of A if there exist non-negative integers qi with q = Σ_{i=1}^n qi > 0 such that, for some j with 1 ≤ j ≤ n,
λj(F) = Σ_{i=1}^n qi λi(A).
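Definition 8.1 can be checked by a brute-force search over integer combinations (the finite truncation bound below is an assumption of this sketch, adequate for eigenvalues in the open left half plane):

```python
from itertools import product

def is_resonant(eig_A, eig_F, q_max=5, tol=1e-9):
    """True if some eigenvalue of F equals a combination sum(q_i * lambda_i(A))
    with non-negative integers q_i, sum(q_i) > 0, searching q_i <= q_max."""
    n = len(eig_A)
    for q in product(range(q_max + 1), repeat=n):
        if sum(q) == 0:
            continue
        combo = sum(qi * lam for qi, lam in zip(q, eig_A))
        if any(abs(mu - combo) < tol for mu in eig_F):
            return True
    return False

print(is_resonant([-1.0, -2.0], [-3.0]))   # True: -3 = 1*(-1) + 1*(-2)
print(is_resonant([-1.0, -2.0], [-3.5]))   # False: integer combinations stay integer
```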
The following theorem states a result concerning with the existence of a nonlinear
state transformation around the origin for (8.22).
Theorem 8.6. For the nonlinear system (8.22), there exists a state transformation, i.e., a locally invertible solution to the partial differential equation (8.25), if
● the linearised model of (8.22) around the origin is observable
● the eigenvalues of F are not resonant with the eigenvalues of ∂f/∂x(0)
● the convex hull of {λ1(∂f/∂x(0)), . . . , λn(∂f/∂x(0))} does not contain the origin
Proof. From the non-resonant condition and the exclusion of the origin from the convex hull of the eigenvalues of ∂f/∂x(0), we can establish the existence of a solution to the partial differential equation (8.25) by invoking the Lyapunov Auxiliary Theorem. That the function Φ is invertible around the origin is guaranteed by the observability of the linearised model and the controllability of (F, G). Indeed, with the observability of (∂f/∂x(0), ∂h/∂x(0)) and the controllability of (F, G), we can apply Lemma 8.2 to establish that ∂Φ/∂x(0) is invertible. □
With the existence of the nonlinear transformation to put the system in the form of (8.24), an observer can be designed as
ẑ˙ = F ẑ + Gy,
x̂ = Φ^{−1}(ẑ), (8.26)
where Φ^{−1} is the inverse transformation of Φ. It is easy to see that the observer error dynamics are linear:
z̃˙ = F z̃.
Note that once the transformation is obtained, the observer is directly given without
designing observer gain, unlike the observer design based on the output injection
form.
The observer can also be implemented directly in the original state space as
x̂˙ = f(x̂) + (∂Φ(x̂)/∂x̂)^{−1} G (y − h(x̂)), (8.27)
which is in the same structure as the standard Luenberger observer for linear systems, by viewing (∂Φ(x̂)/∂x̂)^{−1} G as the observer gain. In the following theorem, we show that this observer also provides an asymptotic estimate of the system state.
Theorem 8.7. For the nonlinear system (8.22), if the state transformation in (8.25) exists, the observer (8.27) provides an asymptotic state estimate. Furthermore, the dynamics of the transformed observer error (Φ(x) − Φ(x̂)) are linear.
Therefore, the dynamics of the transformed observer error are linear, and the trans-
formed observer error converges to zero exponentially, which implies the asymptotic
convergence of x̂ to x. 2
Remark 8.5. For linear systems, we have briefly introduced two ways to design
observers. For the observer shown in (8.2), we have introduced the observer shown
in (8.6) to deal with nonlinear systems. The nonlinear observer in (8.27) can be
viewed as a nonlinear version of (8.3). For both cases, the observer error dynamics
are linear.
with γ > 0.
where x̂ ∈ Rn is the estimate of the state x and L ∈ Rn×m is the observer gain. However,
the condition that (A − LC) is Hurwitz is not enough to guarantee the convergence
of the observer error to zero. Indeed, a stronger condition is needed, as shown in the
following theorem.
Theorem 8.8. The observer (8.30) provides an exponentially convergent state esti-
mate if for the observer gain L, there exists a positive definite matrix P ∈ Rn×n such
that
Let
V = x̃T P x̃.
Corollary 8.9. The observer (8.30) provides an asymptotically convergent state esti-
mate if for the observer gain L, there exists a positive definite matrix P ∈ Rn×n that
satisfies the inequality (8.32).
Remark 8.6. By using the inequality (8.32) instead of the equality (8.31), we only
establish the asymptotic convergence to zero of the observer error, not the exponential
convergence that is established in Theorem 8.8 using (8.31). Furthermore, establishing
the asymptotic convergence of the observer error from V̇ < 0 requires the stability
theorems based on invariant sets, which are not covered in this book.
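As a quick numerical illustration (the system data below are illustrative choices, not taken from the text), the condition (8.32) can be checked for given A, C, L, γ and a candidate P by testing negative definiteness of the assembled matrix:

```python
import numpy as np

# Illustrative check of the observer condition
# (A - LC)^T P + P(A - LC) + gamma^2 * P P + I < 0
# for a candidate P; the system data are made up for illustration.
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [1.0]])
gamma = 0.3          # Lipschitz constant of the nonlinearity
P = np.eye(2)        # candidate positive definite matrix

F = A - L @ C
Q = F.T @ P + P @ F + gamma**2 * (P @ P) + np.eye(2)

# The condition holds if Q is negative definite
eigs = np.linalg.eigvalsh(Q)
print(eigs.max() < 0)  # True
```

In practice one would search for such a P (for instance with an LMI solver) rather than guess it; the check above only verifies a given candidate.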
Note that the one-sided Lipschitz constant ν can be negative. It is easy to see from
the definition of the one-sided Lipschitz condition that the term (x - \hat{x})^T P(\phi(x, u) - \phi(\hat{x}, u)) is exactly the cross-term in the proof of Theorem 8.8 which causes the term
\gamma^2 PP + I in (8.32). Hence, with the one-sided Lipschitz constant ν with respect to P, the
condition shown in (8.32) can be replaced by
(A − LC)T P + P(A − LC) + 2νI < 0. (8.34)
This condition can be further manipulated to obtain the result shown in the following
theorem.
To end this section, we consider a class of systems with nonlinear Lipschitz output
function
ẋ = Ax
(8.37)
y = h(x),
where x ∈ Rn is the state vector, y ∈ Rm is the output, A ∈ Rn×n is a constant matrix
and h : Rn → Rm is a continuous function. We can write the nonlinear function h as
h = Hx + h1 (x) with Hx denoting a linear part of the output, and the nonlinear part
h1 with Lipschitz constant γ .
An observer can be designed as
x̂˙ = Ax̂ + L(y − h(x̂)), (8.38)
where the observer gain L ∈ Rn×m is a constant matrix.
Remark 8.7. The nonlinearity in the output function with linear dynamics may
occur in some special cases such as modelling a periodic signal as the output of a
second-order linear system. This kind of formulation is useful for internal model
design to asymptotically reject some general periodic disturbances, as shown in
Chapter 10.
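The modelling idea in this remark can be sketched numerically (with illustrative numbers, and taking the special case of a linear output h(x) = Cx): a sinusoid is generated as the output of a second-order linear system and reconstructed by an observer of the form (8.38):

```python
import numpy as np

# A periodic signal modelled as the output of a second-order linear
# system x' = A x, y = x1, with an observer of the form (8.38) in the
# special case h(x) = C x. All numbers are illustrative.
w = 2.0
A = np.array([[0.0, w], [-w, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [1.0]])      # makes A - LC Hurwitz

dt, T = 1e-3, 20.0
x = np.array([[0.0], [1.0]])      # generates y = sin(w t)
xh = np.array([[0.5], [-0.5]])    # observer starts from a wrong estimate
for _ in range(int(T / dt)):
    y = C @ x
    x = x + dt * (A @ x)
    xh = xh + dt * (A @ xh + L @ (y - C @ xh))
print(np.linalg.norm(x - xh))     # near zero after 20 s
```

The observer error obeys the linear dynamics (A − LC), so the estimate converges regardless of the phase and amplitude of the periodic signal.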
Nonlinear observer design 127
Consider a nonlinear system

\dot{x} = f(x, u)
y = h(x),  (8.41)

where h(x) is the output function. A state transformation z = g(x) is referred to as an output-complement transformation if the combined mapping x \mapsto (h(x), g(x)) is a diffeomorphism, and the transformed states are
referred to as output-complement states. The dynamic system

\dot{z} = p(z, y)  (8.42)

is referred to as the reduced-order observer form for the system (8.41) if z = g(x) is an
output-complement transformation and the system (8.42)
is differentially stable.
Theorem 8.12. If the system (8.41) can be transformed to the reduced-order observer
form (8.42), the state estimate x̂ provided by the reduced-order observer in (8.43)
asymptotically converges to the state variable of (8.41).
We then choose \chi(\cdot) = \gamma_3^{-1}(2c_5\bar{g}(\cdot)^{c_3}). For \|z\| \ge \chi(\|y\| + \|u\|), we have

\gamma_3(\|z\|) \ge 2c_5\bar{g}(\|y\| + \|u\|)^{c_3} \ge 2c_5\|q(y, u)\|^{c_3},

which further implies that

\dot{V} \le -\frac{1}{4}\gamma_3(\|z\|).  (8.48)

Hence, V(z) is an ISS-Lyapunov function, and therefore z is bounded when y is
bounded.
The dynamics of e are given by

\dot{e} = p(z(t), y) - p(z(t) - e, y).

Taking V(e) as the Lyapunov function candidate, we have

\dot{V} = \frac{\partial V}{\partial e}(p(z(t), y) - p(z(t) - e, y)) \le -\gamma_3(\|e\|).

Therefore, we can conclude that the estimation error asymptotically converges to zero.
With ẑ as a convergent estimate of z, we can further conclude that x̂ is an asymptotic
estimate of x. □
\frac{\partial V(z - \hat{z})}{\partial z}(p(z) - p(\hat{z})) = -(z - \hat{z})(z - \hat{z} + z^3 - \hat{z}^3)
= -(z - \hat{z})^2(1 + z^2 + z\hat{z} + \hat{z}^2)
= -(z - \hat{z})^2\left(1 + \frac{1}{2}(z^2 + \hat{z}^2 + (z + \hat{z})^2)\right)
\le -(z - \hat{z})^2.
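The bound above can be spot-checked numerically; this sketch assumes p(z) = −z − z³ and V(z − ẑ) = ½(z − ẑ)², consistent with the derivation:

```python
import numpy as np

# Spot-check of (z - zh)(p(z) - p(zh)) <= -(z - zh)^2 for p(z) = -z - z^3.
rng = np.random.default_rng(0)
p = lambda z: -z - z**3
for z, zh in rng.normal(scale=2.0, size=(1000, 2)):
    lhs = (z - zh) * (p(z) - p(zh))
    assert lhs <= -(z - zh)**2 + 1e-9
print("inequality holds on all samples")
```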
Therefore, the system satisfies the conditions specified in (5.41). We design the
reduced-order observer accordingly.
Simulation study has been carried out, and the simulation results are shown in
Figures 8.1 and 8.2.
Figure 8.1 The state variables x1 and x2 (0–10 s)

Figure 8.2 x2 and its estimate (0–10 s)
There is a lack of systematic design methods for nonlinear observers with global
convergence when there are general nonlinear terms of unmeasured state variables
in the systems. With the introduction of the reduced-order observer form, we would like to
further explore the class of nonlinear systems which can be transformed to the form
shown in (8.42), so that a nonlinear observer can then be designed accordingly.
We consider a multi-output (MO) nonlinear system
ẋ = Ax + φ(y, u) + Eϕ(x, u)
(8.49)
y = Cx,
Without loss of generality, we can assume that C has full row rank. There exists
a nonsingular state transformation M such that
CM −1 = [Im , 0m×(n−m) ].
If span{E} is a complement subspace of ker{C} in Rn , we have (CM −1 )(ME) = CE
invertible. If we partition the matrix ME as

ME := \begin{bmatrix} E_1 \\ E_2 \end{bmatrix}
with E_1 \in R^{m \times m}, then we have CE = E_1. Furthermore, if we partition Mx as

Mx := \begin{bmatrix} \chi_1 \\ \chi_2 \end{bmatrix}
with χ1 ∈ Rm , we have
z = g(x) = χ2 − E2 E1−1 χ1 . (8.50)
Note that we have \chi_1 = y. With the partitions

MAM^{-1} := \begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{bmatrix}, \quad M\phi := \begin{bmatrix} \phi_1 \\ \phi_2 \end{bmatrix},

we can write the dynamics of \chi_1 and \chi_2 as
χ̇1 = A1,1 χ1 + A1,2 χ2 + φ1 + E1 ϕ
χ̇2 = A2,1 χ1 + A2,2 χ2 + φ2 + E2 ϕ.
Then we can obtain the dynamics of z as

\dot{z} = A_{2,1}\chi_1 + A_{2,2}\chi_2 + \phi_2 - E_2E_1^{-1}(A_{1,1}\chi_1 + A_{1,2}\chi_2 + \phi_1)
= (A_{2,2} - E_2E_1^{-1}A_{1,2})z + \left(A_{2,1} - E_2E_1^{-1}A_{1,1} + (A_{2,2} - E_2E_1^{-1}A_{1,2})E_2E_1^{-1}\right)y + \phi_2 - E_2E_1^{-1}\phi_1,  (8.51)

where the terms in \varphi cancel, since E_2\varphi - E_2E_1^{-1}E_1\varphi = 0.
Remark 8.8. After the state transformation, (8.51) is in the same format as (8.42).
Therefore, we can design a reduced-order observer if ż = (A2,2 − E2 E1−1 A1,2 )z is dif-
ferentially stable. Notice that it is a linear system. Hence, it is differentially stable if it
is asymptotically stable, which means that the eigenvalues of (A_{2,2} - E_2E_1^{-1}A_{1,2}) have
negative real parts.
Proof. We only need to show that ż = p(z, y) is differentially stable, and then we can
apply Theorem 8.12 to conclude the asymptotic convergence of the reduced-order
observer error. The exponential convergence comes as a consequence of the linearity
in the reduced-order observer error dynamics. For (8.52), we have
which is linear in z. Therefore, the proof can be completed by proving that the
matrix (A2,2 − E2 E1−1 A1,2 ) is Hurwitz. It can be shown that the eigenvalues of
(A2,2 − E2 E1−1 A1,2 ) are the invariant zeros of (A, E, C). Indeed, we have
\begin{bmatrix} M & 0 \\ 0 & I_m \end{bmatrix}
\begin{bmatrix} sI - A & E \\ C & 0 \end{bmatrix}
\begin{bmatrix} M & 0 \\ 0 & I_m \end{bmatrix}^{-1}
=
\begin{bmatrix} sI - MAM^{-1} & ME \\ CM^{-1} & 0 \end{bmatrix}
=
\begin{bmatrix} sI_m - A_{1,1} & -A_{1,2} & E_1 \\ -A_{2,1} & sI_{n-m} - A_{2,2} & E_2 \\ I_m & 0 & 0 \end{bmatrix}.  (8.54)
Let us multiply the above matrix on the left by the following matrix to perform a row
operation:

\begin{bmatrix} I_m & 0 & 0 \\ -E_2E_1^{-1} & I_{n-m} & 0 \\ 0 & 0 & I_m \end{bmatrix},
and
z = x3 − y1 − 2y2 = x3 + x1 − 2x2 .
With

A_{1,1} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad A_{1,2} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad A_{2,1} = \begin{bmatrix} 0 & 0 \end{bmatrix}, \quad A_{2,2} = 0,

\phi_1 = \begin{bmatrix} -y_1 + u \\ -y_1 + y_1y_2 \end{bmatrix}, \quad \phi_2 = -y_1^2,
Figure 8.3 The state variables x1, x2 and x3 (0–20 s)

Figure 8.4 x3 and its estimate (0–20 s)
Simulation study has been carried out with x(0) = [1, 0, 0.5]^T and u = \sin t. The
plots of the state variables are shown in Figure 8.3, and the estimated state is shown
in Figure 8.4.
When there are unknown parameters in dynamic systems, observers can still be
designed under certain conditions to provide estimates of unknown states by using
adaptive parameters. These observers are referred to as adaptive observers. One would
expect more stringent conditions imposed on nonlinear systems for which adaptive
observers can be designed. In this section, we will consider two classes of nonlinear
systems. The first class is based on nonlinear systems in the output injection form
or output feedback form with unknown parameters, and the second class consists of
Lipschitz nonlinear systems with unknown parameters.
Consider nonlinear systems which can be transformed to a single-output
system
where θ̃ = θ − θ̂.
Consider a Lyapunov function candidate

V = \tilde{x}^T P\tilde{x} + \tilde{\theta}^T\Gamma^{-1}\tilde{\theta},

where \Gamma \in R^{n \times n} is a positive definite matrix. From (8.58) and (8.60), we have
we obtain
V̇ = −x̃T Qx̃.
Then similar to stability analysis of adaptive control systems, we can conclude that
limt→∞ x̃(t) = 0, and θ̂ is bounded. The above analysis leads to the following theorem.
Theorem 8.14. For the observable single-output system (8.57), if the linear system
characterised by (A, b, C) is minimum phase and has relative degree 1, there exists an
observer gain L ∈ Rn that satisfies the conditions in (8.58), and an adaptive observer
designed as

\dot{\hat{x}} = A\hat{x} + \phi_0(y, u) + b\phi^T(y, u)\hat{\theta} + L(y - C\hat{x})
\dot{\hat{\theta}} = \Gamma(y - C\hat{x})\phi(y, u),  (8.61)

where \Gamma \in R^{n \times n} is a positive definite matrix, provides an asymptotically
convergent state estimate with the adaptive parameter vector remaining bounded.
Remark 8.9. In this remark, we will show a particular choice of the observer
gain for the adaptive control observer for a system that satisfies the conditions in
Theorem 8.14. Without loss of generality, we assume
A = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \\ 0 & 0 & 0 & \dots & 0 \end{bmatrix}, \quad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_{n-1} \\ b_n \end{bmatrix}, \quad C = [1\ 0\ \dots\ 0\ 0],
where b_1 \ne 0, since the relative degree is 1. Indeed, for an observable system \{A, b, C\}
with relative degree 1, there exists a state transformation, as shown in Remark 8.4, to
transform the system to the observable canonical form. Furthermore, if we move the
first column of A in the canonical form, and combine it with φ0 (y, u), we have A as in
the format shown above. Since the system is minimum phase, b is Hurwitz, i.e.,
B(s) := b_1s^{n-1} + b_2s^{n-2} + \dots + b_{n-1}s + b_n = 0
has all the solutions in the left half of the complex plane. In this case, we can design
the observer gain to cancel all the zeros of the system. If we denote L = [l1 , l2 , . . . , ln ],
we can choose L to satisfy
s^n + l_1s^{n-1} + l_2s^{n-2} + \dots + l_{n-1}s + l_n = B(s)(s + \lambda),
where λ is a positive real constant. The above polynomial equation implies
L = (λI + A)b.
This observer gain ensures

C(sI - (A - LC))^{-1}b = \frac{1}{s + \lambda},

which is a strictly positive real transfer function.
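The gain choice in this remark can be verified numerically; the sketch below assumes b1 = 1 (so that B(s) is monic and the closed-loop characteristic polynomial can match B(s)(s + λ) exactly) and uses illustrative values:

```python
import numpy as np

# Observer gain L = (lambda*I + A) b from Remark 8.9, with A, C in the
# form above. Here B(s) = s + 2 (minimum phase) and lambda = 3, so
# C(sI - (A - LC))^{-1} b should equal 1/(s + 3).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
b = np.array([[1.0], [2.0]])      # B(s) = s + 2
C = np.array([[1.0, 0.0]])
lam = 3.0

L = (lam * np.eye(2) + A) @ b     # L = (lambda*I + A) b
F = A - L @ C
for s in [1j, 2.0 + 1j, -0.5 + 3j]:   # sample points in the complex plane
    H = (C @ np.linalg.inv(s * np.eye(2) - F) @ b)[0, 0]
    assert np.isclose(H, 1.0 / (s + lam))
print("C(sI - (A - LC))^{-1} b = 1/(s + lambda)")
```

The zero of B(s) at −2 is cancelled by a closed-loop pole, leaving the first-order strictly positive real transfer function 1/(s + λ).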
Now we will consider adaptive observer design for a class of Lipschitz nonlinear
systems. Consider
ẋ = Ax + φ0 (x, u) + bφ T (x, u)θ
(8.62)
y = Cx,
For a nonlinear system, the stability around an equilibrium point can be established
if one can find a Lyapunov function. Nonlinear control design can be carried out by
exploring the possibility of making a Lyapunov function candidate as a Lyapunov
function through control design. In Chapter 7, parameter adaptive laws are designed
in this way by setting the parameter adaptive laws to make the derivative of a Lyapunov
function candidate negative semi-definite. Backstepping is a nonlinear control design
method based on Lyapunov functions. It enables a designed control to be extended to
an augmented system, provided that the system is augmented in some specific way.
One scheme is the so-called adding an integrator: if a control input can be designed
for a nonlinear system, then a control input can also be designed for the augmented
system in which an integrator is added between the original system input and the
input to be designed. This design strategy can be applied iteratively. There are a few
systematic control design methods for nonlinear systems, and backstepping is one
of them. In this chapter, we start with the fundamental form of adding an integrator,
and then introduce the method for iterative backstepping with state feedback. We also
introduce backstepping using output feedback, and adaptive backstepping for certain
nonlinear systems with unknown parameters.
\frac{\partial V}{\partial x}(f(x) + g(x)\alpha(x)) \le -W(x),  (9.2)
where W (x) is positive definite.
Condition (9.2) implies that the system ẋ = f (x) + g(x)α(x) is asymptotically
stable. Consider
ẋ = f (x) + g(x)α(x) + g(x)(ξ − α(x)).
Intuitively, if we can design a control input u = u(x, ξ ) to force ξ to converge to α(x),
we have a good chance to ensure the stability of the entire system. Let us define
z = ξ − α(x).
It is easy to obtain the dynamics under the coordinates (x, z) as
\dot{x} = f(x) + g(x)\alpha(x) + g(x)z
\dot{z} = u - \dot{\alpha} = u - \frac{\partial \alpha}{\partial x}(f(x) + g(x)\xi).
Consider a Lyapunov function candidate
V_c(x, z) = V(x) + \frac{1}{2}z^2.  (9.3)
Its derivative is given by
\dot{V}_c = \frac{\partial V}{\partial x}(f(x) + g(x)\alpha(x)) + \frac{\partial V}{\partial x}g(x)z + z\left(u - \frac{\partial \alpha}{\partial x}(f(x) + g(x)\xi)\right)
= -W(x) + z\left(u + \frac{\partial V}{\partial x}g(x) - \frac{\partial \alpha}{\partial x}(f(x) + g(x)\xi)\right).
Let
u = -cz - \frac{\partial V}{\partial x}g(x) + \frac{\partial \alpha}{\partial x}(f(x) + g(x)\xi)  (9.4)

with c > 0, which results in
V̇c = −W (x) − cz 2 . (9.5)
It is clear that -W(x) - cz^2 is negative definite with respect to the variables (x, z). Hence,
we can conclude that V_c(x, z) is a Lyapunov function, and that (0, 0) in the coordinates
(x, z) is a globally asymptotically stable equilibrium. From \alpha(0) = 0, we can conclude that (0, 0)
in the coordinates (x, \xi) is also a globally asymptotically stable equilibrium, which means that
the system (9.1) is globally asymptotically stable under the control input (9.4). We
summarise the above result in the following lemma.
Lemma 9.1. For a system described in (9.1), if there exist a differentiable function α(x)
and a positive-definite function V(x) such that (9.2) holds, the control design given
in (9.4) ensures the global asymptotic stability of the closed-loop system.
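Lemma 9.1 can be illustrated on a scalar example of my own choosing: ẋ = x² + ξ, ξ̇ = u, with α(x) = −x − x² and V(x) = ½x², so that (9.2) holds with W(x) = x²:

```python
import numpy as np

# Backstepping control (9.4) for x' = x^2 + xi, xi' = u,
# with alpha(x) = -x - x^2, V = x^2/2, so W(x) = x^2 in (9.2).
c = 1.0
f = lambda x: x**2
alpha = lambda x: -x - x**2
dalpha = lambda x: -1.0 - 2.0 * x            # d(alpha)/dx

dt, T = 1e-3, 10.0
x, xi = 0.5, 0.0
for _ in range(int(T / dt)):
    z = xi - alpha(x)
    # (9.4) with g(x) = 1 and dV/dx = x
    u = -c * z - x + dalpha(x) * (f(x) + xi)
    dx = f(x) + xi
    dxi = u
    x, xi = x + dt * dx, xi + dt * dxi
print(abs(x), abs(xi))  # both near zero
```

The simulation shows (x, ξ) converging to the origin, with ξ tracking the virtual control α(x) along the way.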
Backstepping design 143
Remark 9.1. Considering the structure of (9.1), if ξ is viewed as the control input for
the x-subsystem, ξ = α(x) is the desired control, ignoring the ξ-dynamics. This is why
ξ can be referred to as a virtual control input for the x-subsystem. The control input
u for the overall system is then designed by stepping back from this virtual control
to the actual input, which may suggest the name of this particular design method,
backstepping.
In Example 9.1, backstepping design has been used to design a control input for a
nonlinear system with unmatched nonlinearities. When a nonlinear function appears
in the same equation as the control input, it is referred to as a matched nonlinear function,
and it can be cancelled by adding the same term in u with an opposite sign. In the
system considered in Example 9.1, the nonlinear function x_1^2 does not appear in the
same equation as the control input u, and therefore it is unmatched. However, it is in the
same equation as x_2, which is viewed as a virtual control. As a consequence, \alpha(x_1), which
is often referred to as a stabilising function, can be designed to cancel the nonlinearity
which matches the virtual control, and the backstepping method enables the control
in the next equation to be designed to stabilise the entire system. This process of
identifying a virtual control, designing a stabilising function and using backstepping
to design the control input can be repeated for more complicated nonlinear systems.
Note that the term −z1 in α2 is used to tackle a cross-term caused by z2 in the dynamics
of z1 in the stability analysis. Other terms in α2 are taken to cancel the nonlinear terms
and stabilise the dynamics of z2 .
Step i. For 2 < i < n, the dynamics of zi are given by
\dot{z}_i = \dot{x}_i - \dot{\alpha}_{i-1}(x_1, \dots, x_{i-1})
= x_{i+1} + \phi_i(x_1, \dots, x_i) - \sum_{j=1}^{i-1}\frac{\partial \alpha_{i-1}}{\partial x_j}(x_{j+1} + \phi_j(x_1, \dots, x_j))
= z_{i+1} + \alpha_i + \phi_i(x_1, \dots, x_i) - \sum_{j=1}^{i-1}\frac{\partial \alpha_{i-1}}{\partial x_j}(x_{j+1} + \phi_j(x_1, \dots, x_j)).

Design \alpha_i as

\alpha_i = -z_{i-1} - c_iz_i - \phi_i(x_1, \dots, x_i) + \sum_{j=1}^{i-1}\frac{\partial \alpha_{i-1}}{\partial x_j}(x_{j+1} + \phi_j(x_1, \dots, x_j)).  (9.11)
\dot{z} = \begin{bmatrix} -c_1 & 1 & 0 & \dots & 0 \\ -1 & -c_2 & 1 & \ddots & 0 \\ 0 & -1 & -c_3 & \ddots & 0 \\ \vdots & \vdots & \ddots & \ddots & 1 \\ 0 & 0 & 0 & \dots & -c_n \end{bmatrix} z := A_zz.
V = \frac{1}{2}z^Tz.  (9.15)

Its derivative along the dynamics of z is obtained as

\dot{V} = -\sum_{i=1}^{n}c_iz_i^2 \le -2\min_{1 \le i \le n}c_i\,V.  (9.16)
Theorem 9.2. For a system in the form of (9.6), the control input (9.13) renders the
closed-loop system asymptotically stable.
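The two-step case of this procedure can be sketched in simulation; the nonlinearities φ1 = x1² and φ2 = x1x2 below are illustrative choices, with α1 = −c1z1 − φ1(x1) and u = α2 from (9.11):

```python
import numpy as np

# Backstepping for the strict-feedback system (a two-step instance of
# the design above; the nonlinearities are illustrative):
#   x1' = x2 + x1^2,   x2' = u + x1*x2
c1, c2 = 1.0, 1.0
phi1 = lambda x1: x1**2
phi2 = lambda x1, x2: x1 * x2
alpha1 = lambda x1: -c1 * x1 - phi1(x1)      # step 1 stabilising function
dalpha1 = lambda x1: -c1 - 2.0 * x1          # its derivative

dt, T = 1e-3, 10.0
x1, x2 = 1.0, -1.0
for _ in range(int(T / dt)):
    z1 = x1
    z2 = x2 - alpha1(x1)
    # u = alpha2, following (9.11) with i = 2
    u = -z1 - c2 * z2 - phi2(x1, x2) + dalpha1(x1) * (x2 + phi1(x1))
    dx1 = x2 + phi1(x1)
    dx2 = u + phi2(x1, x2)
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
print(abs(x1), abs(x2))  # both near zero
```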
ẋ = Ac x + bu + φ(y)
(9.17)
y = Cx
with
A_c = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \\ 0 & 0 & 0 & \dots & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}^T, \quad b = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ b_\rho \\ \vdots \\ b_n \end{bmatrix},
where x ∈ Rn is the state vector; u ∈ R is the control; \phi : R \to R^n with \phi(0) = 0 is
a nonlinear function with element \phi_i being differentiable up to the (n - i)th order;
and b ∈ Rn is a known constant Hurwitz vector with b_\rho \ne 0, which implies that the
relative degree of the system is ρ. This form of the system is often referred to as the
output feedback form. Since b is Hurwitz, the linear system characterised by (Ac, b, C)
is minimum phase. Note that a vector is said to be Hurwitz if its corresponding polynomial
is Hurwitz.
Remark 9.2. For a system in the output feedback form, if the input is zero, the
system is in exactly the same form as (8.10). We have shown the geometric conditions
in Chapter 8 for systems to be transformed to the output injection form (8.10), and
similar geometric conditions can be specified for nonlinear systems to be transformed
to the output feedback form. Clearly we can see that for a system in the same structure
as (9.17) but with any observable pair (A, C), there exists a linear transformation to put
the system in the form of (9.17) with the specific (Ac, C).
Since the system (9.17) is in the output injection form, we design an observer as

\dot{\hat{x}} = A_c\hat{x} + bu + \phi(y) + L(y - C\hat{x}),  (9.18)

where x̂ ∈ Rn is the state estimate and L ∈ Rn is an observer gain designed such that
(A_c - LC) is Hurwitz. Let \tilde{x} = x - \hat{x}, and it is easy to see

\dot{\tilde{x}} = (A_c - LC)\tilde{x}.  (9.19)
The backstepping design can be carried out with the state estimate x̂ in ρ steps.
From the structure of the system (9.17) we have y = x1 . The backstepping design will
start with the dynamics of y. In the following design, we assume ρ > 1. In the case
of ρ = 1, control input can be designed directly without using backstepping.
To apply the observer backstepping through x̂ in (9.18), we define
z1 = y,
zi = x̂i − αi−1 , i = 2, . . . , ρ, (9.20)
zρ+1 = x̂ρ+1 + bρ u − αρ ,
where αi , i = 1, . . . , ρ, are stabilising functions decided in the control design.
Consider the dynamics of z1
ż1 = x2 + φ1 (y). (9.21)
We design α1 as

\alpha_1 = -c_1z_1 - k_1z_1 - \phi_1(y),  (9.22)

where c_i and k_i for i = 1, \dots, \rho are positive real design parameters. Comparing the
backstepping design using output feedback with the one using state feedback, we
have one additional term, -k_1z_1, which is used to tackle the observer error \tilde{x}_2 in the
closed-loop system dynamics. Then from (9.22) and (9.23), we have
where l2 is the second element of the observer gain L, and in the subsequent design,
li is the ith element of L. We design α2 as
\alpha_2 = -z_1 - c_2z_2 - k_2\left(\frac{\partial \alpha_1}{\partial y}\right)^2z_2 - \phi_2(y) - l_2(y - \hat{x}_1) + \frac{\partial \alpha_1}{\partial y}(\hat{x}_2 + \phi_1(y)).  (9.25)
Note that \alpha_2 = \alpha_2(y, \hat{x}_1, \hat{x}_2). The resultant dynamics of z_2 are given by

\dot{z}_2 = -z_1 - c_2z_2 - k_2\left(\frac{\partial \alpha_1}{\partial y}\right)^2z_2 + z_3 - \frac{\partial \alpha_1}{\partial y}\tilde{x}_2.
We design \alpha_i, 2 < i \le \rho, as

\alpha_i = -z_{i-1} - c_iz_i - k_i\left(\frac{\partial \alpha_{i-1}}{\partial y}\right)^2z_i - \phi_i(y) - l_i(y - \hat{x}_1) + \frac{\partial \alpha_{i-1}}{\partial y}(\hat{x}_2 + \phi_1(y)) + \sum_{j=1}^{i-1}\frac{\partial \alpha_{i-1}}{\partial \hat{x}_j}\dot{\hat{x}}_j.  (9.26)
Note that \alpha_i = \alpha_i(y, \hat{x}_1, \dots, \hat{x}_i). The resultant dynamics of z_i, 2 < i \le \rho, are given by

\dot{z}_i = -z_{i-1} - c_iz_i - k_i\left(\frac{\partial \alpha_{i-1}}{\partial y}\right)^2z_i + z_{i+1} - \frac{\partial \alpha_{i-1}}{\partial y}\tilde{x}_2.  (9.27)
When i = ρ, the control input appears in the dynamics of zi , and it is included in zρ+1 ,
as shown in the definition (9.20). We design the control input by setting zρ+1 = 0,
which gives

u = \frac{\alpha_\rho(y, \hat{x}_1, \dots, \hat{x}_\rho) - \hat{x}_{\rho+1}}{b_\rho}.  (9.28)
The stability result of the above control design is given in the following theorem.
Theorem 9.3. For a system in the form of (9.17), the dynamic output feedback control
with the input (9.28) and the observer (9.18) asymptotically stabilise the system.
Proof. From the observer error dynamics, we know that the error exponentially con-
verges to zero. Since (Ac − LC) is Hurwitz, there exists a positive definite matrix
P ∈ Rn×n such that
(Ac − LC)T P + P(Ac − LC) = −I .
This implies that for
Ve = x̃T P x̃,
we have

\dot{V}_e = -\|\tilde{x}\|^2.  (9.29)

Let

V_z = \frac{1}{2}\sum_{i=1}^{\rho}z_i^2,
where we define \alpha_0 = y for notational convenience. Note that if we ignore the two
terms concerning k_i and \tilde{x}_2 in the dynamics of z_i, the evaluation of the derivative
of Vz will be exactly the same as the stability analysis that leads to Theorem 9.2. For
the cross-term concerning \tilde{x}_2, we have, from Young's inequality,

\frac{\partial \alpha_{i-1}}{\partial y}z_i\tilde{x}_2 \le k_i\left(\frac{\partial \alpha_{i-1}}{\partial y}\right)^2z_i^2 + \frac{1}{4k_i}\tilde{x}_2^2.
Hence, we obtain that

\dot{V}_z \le \sum_{i=1}^{\rho}\left(-c_iz_i^2 + \frac{1}{4k_i}\tilde{x}_2^2\right).  (9.30)
Let

V = V_z + \left(1 + \frac{1}{4d}\right)V_e,

where d = \min_{1 \le i \le \rho}k_i. From (9.29) and (9.30), we have

\dot{V} \le \sum_{i=1}^{\rho}\left(-c_iz_i^2 + \frac{1}{4k_i}\tilde{x}_2^2\right) - \left(1 + \frac{1}{4d}\right)\|\tilde{x}\|^2
\le -\sum_{i=1}^{\rho}c_iz_i^2 - \|\tilde{x}\|^2.
We have presented backstepping design for a class of dynamic systems using output
feedback in the previous section. The control design starts from the system output
and the subsequent steps in the backstepping design are carried out with estimates of
state variables provided by an observer. The backstepping design, often referred to
as observer backstepping, completes in ρ steps, with ρ being the relative degree of
the system, and the state estimates of x_i for i > \rho are not used in the design. Those
estimates are redundant, and they make the dynamic order of the controller higher. There
is an alternative design method to observer backstepping for nonlinear systems in the
output feedback form, for which the resultant order of the controller is exactly \rho - 1.
In this section, we present backstepping design with the filtered transformation for the
system (9.17), whose main equations are shown here again for the convenience
of presentation,
ẋ = Ac x + bu + φ(y)
(9.32)
y = Cx.
\dot{\xi}_1 = -\lambda_1\xi_1 + \xi_2
\vdots  (9.33)
\dot{\xi}_{\rho-1} = -\lambda_{\rho-1}\xi_{\rho-1} + u,
where \lambda_i > 0 for i = 1, \dots, \rho - 1 are the design parameters. Define the filtered
transformation

\bar{\zeta} = x - \sum_{i=1}^{\rho-1}\bar{d}_i\xi_i,  (9.34)

with

\bar{d}_{\rho-1} = b,
\bar{d}_i = (A_c + \lambda_{i+1}I)\bar{d}_{i+1} \quad \text{for } i = \rho - 2, \dots, 1.
We also denote
d = (Ac + λ1 I )d̄1 .
The dynamics of the transformed state \bar{\zeta} are then given by

\dot{\bar{\zeta}} = A_c\bar{\zeta} + \phi(y) + d\xi_1
y = C\bar{\zeta}.  (9.35)
we have

\sum_{i=\rho-1}^{n}\bar{d}_{\rho-2,i}s^{n-i} = (s + \lambda_{\rho-1})\sum_{i=\rho}^{n}b_is^{n-i},
which implies that d_1 = b_\rho and that d is Hurwitz if b is Hurwitz. In the special form
of A_c and C used here, b and d decide the zeros of the linear systems characterised
by (A_c, b, C) and (A_c, d, C) respectively, as the solutions to the following polynomial
equations:
\sum_{i=\rho}^{n}b_is^{n-i} = 0, \quad \sum_{i=1}^{n}d_is^{n-i} = 0.
Hence, the invariant zeros of (A_c, d, C) are the invariant zeros of (A_c, b, C) plus -\lambda_i for
i = 1, \dots, \rho - 1. For the transformed system, \xi_1 can be viewed as the new input. In
this case, the relative degree with \xi_1 as the input is 1. The filtered transformation thus
reduces the relative degree from \rho to 1.
As the filtered transformation may have its use independent of backstepping
design shown here, we summarise the property of the filtered transformation in the
following lemma.
Lemma 9.4. For a system in the form of (9.32) with relative degree \rho, the filtered
transformation defined in (9.34) transforms the system to (9.35) of relative degree 1,
with the same high-frequency gain. Furthermore, the zeros of (9.35) consist of the
zeros of the original system (9.32) and -\lambda_i for i = 1, \dots, \rho - 1.
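The zero structure in Lemma 9.4 can be checked through the recursion for d̄i; in this sketch (with illustrative values) n = 3, ρ = 2, B(s) = s + 3 and λ1 = 2, so the polynomial of d should equal B(s)(s + λ1) = (s + 3)(s + 2):

```python
import numpy as np

# Recursion for the filtered transformation: n = 3, rho = 2,
# b = [0, 1, 3] so B(s) = s + 3, one filter with lambda1 = 2.
Ac = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0]])
b = np.array([0.0, 1.0, 3.0])
lam1 = 2.0

d_bar1 = b                                  # d_bar_{rho-1} = b
d = (Ac + lam1 * np.eye(3)) @ d_bar1        # d = (Ac + lambda1*I) d_bar_1

# d = [d1, d2, d3] represents d1*s^2 + d2*s + d3; it should equal
# B(s)(s + lambda1) = (s + 3)(s + 2)
expected = np.polymul([1.0, 3.0], [1.0, 2.0])
print(d, expected)                          # both [1, 5, 6]
```

The new zero at −λ1 = −2 is the pole of the input filter, and d1 = bρ = 1 confirms that the high-frequency gain is preserved.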
and

\psi(y) = D\frac{d_{2:n}}{d_1}y + \phi_{2:n}(y) - \frac{d_{2:n}}{d_1}\phi_1(y),
\psi_y(y) = \frac{d_2}{d_1}y + \phi_1(y).
If we view ξ1 as the input, the system (9.35) is of relative degree 1 with the stable
zero dynamics. For such a system, there exists an output feedback law to globally and
exponentially stabilise the system.
For this, we have the following lemma, stated in a more standalone manner.
Lemma 9.5. For a nonlinear system (9.32), if the relative degree is 1, there exist
a continuous function ϕ : R → R with ϕ(0) = 0 and a positive real constant c such
that the control input in the form of
u = −cy − ϕ(y) (9.39)
globally and asymptotically stabilises the system.
Therefore, the proposed control design with c > 1/4 and (9.41) exponentially stabilises
the system in the coordinates (\zeta, y), which implies the exponential stability of the
closed-loop system in the x coordinates, because the transformation from (\zeta, y) to x is
linear. □
From Lemma 9.5, we know the desired value of ξ1 . But we cannot directly assign
a function to ξ1 as it is not the actual control input. Here backstepping can be applied
to design control input based on the desired function of ξ1 . Together with the filtered
transformation, the overall system is given by
ζ̇ = Dζ + ψ(y)
ẏ = ζ1 + ψy (y) + bρ ξ1
ξ̇1 = −λ1 ξ1 + ξ2 (9.42)
...
ξ̇ρ−1 = −λρ−1 ξρ−1 + u,
to which the backstepping design is then applied. Indeed, in the backstepping design,
ξi for i = 1, . . . , ρ − 1 can be viewed as virtual controls.
Let
z1 = y,
z_i = \xi_{i-1} - \alpha_{i-1}, \quad \text{for } i = 2, \dots, \rho
z_{\rho+1} = u - \alpha_\rho,

where \alpha_i for i = 1, \dots, \rho are stabilising functions to be designed. We also use the
positive real design parameters c_i and k_i for i = 1, \dots, \rho, and \gamma > 0.
Based on the result shown in Lemma 9.5, we have

\alpha_1 = -c_1z_1 - k_1z_1 - \psi_y(y) - \frac{\gamma\|P\|^2\|\psi(y)\|^2}{y}  (9.43)

and

\dot{z}_1 = z_2 + \zeta_1 - c_1z_1 - k_1z_1 - \frac{\gamma\|P\|^2\|\psi(y)\|^2}{y}.  (9.44)
For the dynamics of z_2, we have

\dot{z}_2 = -\lambda_1\xi_1 + \xi_2 - \frac{\partial \alpha_1}{\partial y}\dot{y}
= z_3 + \alpha_2 - \lambda_1\xi_1 - \frac{\partial \alpha_1}{\partial y}(\zeta_1 + \psi(y)).

The design of \alpha_2 is then given by

\alpha_2 = -z_1 - c_2z_2 - k_2\left(\frac{\partial \alpha_1}{\partial y}\right)^2z_2 + \frac{\partial \alpha_1}{\partial y}\psi(y) + \lambda_1\xi_1.  (9.45)
\dot{z}_i = -\lambda_{i-1}\xi_{i-1} + \xi_i - \frac{\partial \alpha_{i-1}}{\partial y}\dot{y} - \sum_{j=1}^{i-2}\frac{\partial \alpha_{i-1}}{\partial \xi_j}\dot{\xi}_j
= z_{i+1} + \alpha_i - \lambda_{i-1}\xi_{i-1} - \frac{\partial \alpha_{i-1}}{\partial y}(\zeta_1 + \psi(y)) - \sum_{j=1}^{i-2}\frac{\partial \alpha_{i-1}}{\partial \xi_j}(-\lambda_j\xi_j + \xi_{j+1}).
For the control design parameters, c_i, i = 1, \dots, \rho, and \gamma can be any positive real
values, and the parameters k_i must satisfy the following condition:

\sum_{i=1}^{\rho}\frac{1}{4k_i} \le \gamma.
The stability result of the above control design is given in the following
theorem.
Theorem 9.6. For a system in the form of (9.32), the dynamic output feedback
control (9.49), obtained through backstepping with the filtered transformation,
asymptotically stabilises the system.
Proof. Let

V_z = \frac{1}{2}\sum_{i=1}^{\rho}z_i^2.

From the dynamics of z_i shown in (9.44), (9.46) and (9.48) we can obtain that

\dot{V}_z = \sum_{i=1}^{\rho}\left(-c_iz_i^2 - k_i\left(\frac{\partial \alpha_{i-1}}{\partial y}\right)^2z_i^2 - z_i\frac{\partial \alpha_{i-1}}{\partial y}\zeta_1\right) - \gamma\|P\|^2\|\psi(y)\|^2,
Let

V_\zeta = \zeta^TP\zeta,

and we can obtain, similar to the proof of Lemma 9.5,

\dot{V}_\zeta \le -2\|\zeta\|^2 + \|P\|^2\|\psi(y)\|^2.

Let

V = V_z + \gamma V_\zeta,

and we have

\dot{V} \le \sum_{i=1}^{\rho}\left(-c_iz_i^2 + \frac{1}{4k_i}\zeta_1^2\right) - 2\gamma\|\zeta\|^2  (9.51)
\le -\sum_{i=1}^{\rho}c_iz_i^2 - \gamma\|\zeta\|^2.  (9.52)
Therefore, we have shown that the system (9.42) is exponentially stable in the
coordinates (\zeta, z_1, \dots, z_\rho). With y = z_1, we can conclude that \lim_{t\to\infty}\bar{\zeta}(t) = 0. From
y = z_1 and \alpha_1(0) = 0, we can conclude that \lim_{t\to\infty}\xi_1(t) = 0. Following the same
process, we can show that \lim_{t\to\infty}\xi_i(t) = 0 for i = 1, \dots, \rho - 1. Finally, from the
filtered transformation (9.34), we can establish \lim_{t\to\infty}x(t) = 0. □
We have shown backstepping design for two classes of systems with state feedback
and output feedback respectively. In this section, we will show the nonlinear adaptive
control design for a class of systems with unknown parameters.
Consider a first-order nonlinear system described by

\dot{y} = u + \phi^T(y)\theta,  (9.53)

where \theta \in R^p is a vector of unknown constant parameters and \phi : R \to R^p is a known continuous function. The control input and parameter adaptive law can be designed as

u = -cy - \phi^T(y)\hat{\theta}, \quad \dot{\hat{\theta}} = \Gamma y\phi(y),

where c is a positive real constant, and \Gamma \in R^{p \times p} is a positive definite gain matrix.
The closed-loop dynamics is given by

\dot{y} = -cy + \phi^T(y)\tilde{\theta},

where \tilde{\theta} = \theta - \hat{\theta}. With the Lyapunov function candidate V = \frac{1}{2}y^2 + \frac{1}{2}\tilde{\theta}^T\Gamma^{-1}\tilde{\theta}, we have

\dot{V} = -cy^2,

which ensures the boundedness of y and \hat{\theta}. We can show \lim_{t\to\infty}y(t) = 0 in the same
way by invoking Barbalat's Lemma as in the stability analysis of adaptive control
systems shown in Chapter 7.
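A minimal simulation sketch of this first-order adaptive controller (the regressor φ and the true parameter values below are my own illustrative choices):

```python
import numpy as np

# Adaptive control of y' = u + phi(y)^T theta with
#   u = -c*y - phi(y)^T theta_hat,   theta_hat' = Gamma * y * phi(y)
# (regressor and true parameters are illustrative).
c = 2.0
Gamma = np.eye(2)
theta = np.array([1.0, 0.5])                  # true, unknown parameters
phi = lambda y: np.array([y**2, np.sin(y)])   # known regressor, phi(0) = 0

dt, T = 1e-3, 30.0
y, theta_hat = 1.0, np.zeros(2)
for _ in range(int(T / dt)):
    u = -c * y - phi(y) @ theta_hat
    dy = u + phi(y) @ theta                   # closed loop: -c*y + phi^T theta_tilde
    theta_hat = theta_hat + dt * (Gamma @ (y * phi(y)))
    y = y + dt * dy
print(abs(y))  # tends to zero; theta_hat stays bounded
```

Note that y converges to zero while θ̂ merely stays bounded; it need not converge to θ, which is the behaviour also observed in Example 9.3 later.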
Remark 9.3. The system considered above is nonlinear with unknown parameters.
However, the unknown parameters are linearly parameterised, i.e., the term relating
to the unknown parameters, \phi^T(y)\theta, is linear in the unknown parameters, instead of
some nonlinear function \phi(y, \theta), which would be referred to as nonlinearly parameterised.
Obviously, nonlinearly parameterised unknown parameters are much more difficult to
deal with in adaptive control. In this book, we only consider linearly parameterised
unknown parameters.
For the first-order system, the control input is matched with the uncertainty and
the nonlinear function. Backstepping can also be used with adaptive control to deal
with nonlinear and unknown parameters which are not in line with the input, or,
unmatched.
\dot{x}_1 = x_2 + \phi_1(x_1)^T\theta
\dot{x}_2 = x_3 + \phi_2(x_1, x_2)^T\theta
\vdots  (9.56)
\dot{x}_{n-1} = x_n + \phi_{n-1}(x_1, \dots, x_{n-1})^T\theta
\dot{x}_n = u + \phi_n(x_1, \dots, x_n)^T\theta,

z_1 = x_1,
z_i = x_i - \alpha_{i-1}(x_1, \dots, x_{i-1}, \hat{\theta}), \quad \text{for } i = 2, \dots, n,
z_{n+1} = u - \alpha_n(x_1, \dots, x_n, \hat{\theta}),
\varphi_1 = \phi_1,
\varphi_i = \phi_i - \sum_{j=1}^{i-1}\frac{\partial \alpha_{i-1}}{\partial x_j}\phi_j, \quad \text{for } i = 2, \dots, n,
\dot{z}_1 = z_2 + \alpha_1 + \varphi_1^T\theta.

Design \alpha_1 as

\alpha_1 = -c_1z_1 - \varphi_1^T\hat{\theta},

which results in

\dot{z}_1 = -c_1z_1 + z_2 + \varphi_1^T\tilde{\theta},

where \tilde{\theta} = \theta - \hat{\theta}.
\dot{z}_i = z_{i+1} + \alpha_i + \varphi_i^T\theta - \sum_{j=1}^{i-1}\frac{\partial \alpha_{i-1}}{\partial x_j}x_{j+1} - \frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}\dot{\hat{\theta}}.
The stabilising functions \alpha_i, for 2 < i \le n, are designed as

\alpha_i = -z_{i-1} - c_iz_i - \varphi_i^T\hat{\theta} + \sum_{j=1}^{i-1}\frac{\partial \alpha_{i-1}}{\partial x_j}x_{j+1} + \frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}\tau_i + \beta_i,  (9.59)
The control input u appears in the dynamics of z_n, in the term z_{n+1}. When i = n,
we have \tau_n = \dot{\hat{\theta}} by definition. We obtain the control input by setting z_{n+1} = 0, which
gives

u = \alpha_n = -z_{n-1} - c_nz_n - \varphi_n^T\hat{\theta} + \sum_{j=1}^{n-1}\frac{\partial \alpha_{n-1}}{\partial x_j}x_{j+1} + \frac{\partial \alpha_{n-1}}{\partial \hat{\theta}}\dot{\hat{\theta}} + \beta_n.  (9.60)
We need to design the adaptive law, tuning functions and βi to complete the
control design. We will do it based on Lyapunov analysis. For notational convenience,
we set β1 = β2 = 0.
Let

V = \frac{1}{2}\sum_{i=1}^{n}z_i^2 + \frac{1}{2}\tilde{\theta}^T\Gamma^{-1}\tilde{\theta},  (9.61)

where \Gamma \in R^{p \times p} is a positive definite matrix. From the closed-loop dynamics of z_i,
for i = 1, \dots, n, we obtain
\dot{V} = \sum_{i=1}^{n}(-c_iz_i^2 + z_i\varphi_i^T\tilde{\theta} + z_i\beta_i) - \sum_{i=2}^{n}z_i\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}(\dot{\hat{\theta}} - \tau_i) + \dot{\tilde{\theta}}^T\Gamma^{-1}\tilde{\theta}
= -\sum_{i=1}^{n}c_iz_i^2 + \sum_{i=2}^{n}\left(z_i\beta_i - z_i\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}(\dot{\hat{\theta}} - \tau_i)\right) + \left(\sum_{i=1}^{n}z_i\varphi_i - \Gamma^{-1}\dot{\hat{\theta}}\right)^T\tilde{\theta}.
The adaptive law is chosen as

\dot{\hat{\theta}} = \Gamma\sum_{i=1}^{n}z_i\varphi_i = \tau_n,  (9.62)

which makes the last term vanish. For the remaining terms, we have

\sum_{i=2}^{n}z_i\beta_i - \sum_{i=2}^{n}z_i\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}(\dot{\hat{\theta}} - \tau_i)
= \sum_{i=2}^{n}z_i\beta_i - \sum_{i=2}^{n}\sum_{j=2}^{n}z_iz_j\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}\Gamma\varphi_j + \sum_{i=2}^{n}z_i\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}(\tau_i - z_1\Gamma\varphi_1)
= \sum_{i=2}^{n}z_i\beta_i - \sum_{i=2}^{n}\sum_{j=i+1}^{n}z_iz_j\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}\Gamma\varphi_j - \sum_{i=2}^{n}\sum_{j=2}^{i}z_iz_j\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}\Gamma\varphi_j + \sum_{i=2}^{n}z_i\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}(\tau_i - z_1\Gamma\varphi_1)
= \sum_{i=2}^{n}z_i\beta_i - \sum_{j=3}^{n}\sum_{i=2}^{j-1}z_iz_j\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}\Gamma\varphi_j + \sum_{i=2}^{n}z_i\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}\left(\tau_i - z_1\Gamma\varphi_1 - \sum_{j=2}^{i}z_j\Gamma\varphi_j\right)
= \sum_{i=3}^{n}z_i\left(\beta_i - \sum_{j=2}^{i-1}z_j\frac{\partial \alpha_{j-1}}{\partial \hat{\theta}}\Gamma\varphi_i\right) + \sum_{i=2}^{n}z_i\frac{\partial \alpha_{i-1}}{\partial \hat{\theta}}\left(\tau_i - z_1\Gamma\varphi_1 - \sum_{j=2}^{i}z_j\Gamma\varphi_j\right).
Hence, we obtain

\beta_i = \sum_{j=2}^{i-1}z_j\frac{\partial \alpha_{j-1}}{\partial \hat{\theta}}\Gamma\varphi_i, \quad \text{for } i = 3, \dots, n,  (9.63)

\tau_i = \Gamma\sum_{j=1}^{i}z_j\varphi_j, \quad \text{for } i = 2, \dots, n.  (9.64)
\dot{V} = -\sum_{i=1}^{n}c_iz_i^2.
Theorem 9.7. For a system in the form of (9.57), the control input (9.60) and adaptive
law (9.62) designed by adaptive backstepping ensure the boundedness of all the
variables and limt→∞ xi (t) = 0 for i = 1, . . . , n.
Example 9.3. In Example 9.2, the nonlinear system is in the standard format as
shown in (9.56). In this example, we show adaptive control design for a system which
is slightly different from the standard form (9.56), but the same design procedure can
be applied with some modifications.
Consider a second-order nonlinear system
\dot{x}_1 = x_2 + x_1^3\theta + x_1^2
\dot{x}_2 = (1 + x_1^2)u + x_1^2\theta,
where θ ∈ R is the only unknown parameter.
Let z_1 = x_1 and z_2 = x_2 - \alpha_1. The stabilising function \alpha_1 is designed as

\alpha_1 = -c_1z_1 - x_1^3\hat{\theta} - x_1^2,

which results in the dynamics of z_1 as

\dot{z}_1 = -c_1z_1 + z_2 + x_1^3\tilde{\theta}.
The dynamics of z_2 are obtained as

\dot{z}_2 = (1 + x_1^2)u + x_1^2\theta - \frac{\partial \alpha_1}{\partial x_1}(x_2 + x_1^3\theta + x_1^2) - \frac{\partial \alpha_1}{\partial \hat{\theta}}\dot{\hat{\theta}},

where

\frac{\partial \alpha_1}{\partial x_1} = -c_1 - 3x_1^2\hat{\theta} - 2x_1, \quad \frac{\partial \alpha_1}{\partial \hat{\theta}} = -x_1^3.
Therefore, we design the control input u as

u = \frac{1}{1 + x_1^2}\left(-z_1 - c_2z_2 - x_1^2\hat{\theta} + \frac{\partial \alpha_1}{\partial x_1}(x_2 + x_1^3\hat{\theta} + x_1^2) + \frac{\partial \alpha_1}{\partial \hat{\theta}}\dot{\hat{\theta}}\right).
The resultant dynamics of z_2 are obtained as

\dot{z}_2 = -z_1 - c_2z_2 + x_1^2\tilde{\theta} - \frac{\partial \alpha_1}{\partial x_1}x_1^3\tilde{\theta}.
Let

V = \frac{1}{2}z_1^2 + \frac{1}{2}z_2^2 + \frac{\tilde{\theta}^2}{2\gamma}.

We obtain that

\dot{V} = -c_1z_1^2 - c_2z_2^2 + \left(z_1x_1^3 + z_2\left(x_1^2 - \frac{\partial \alpha_1}{\partial x_1}x_1^3\right)\right)\tilde{\theta} - \frac{1}{\gamma}\tilde{\theta}\dot{\hat{\theta}}.
We can set the adaptive law as

\dot{\hat{\theta}} = \gamma z_1x_1^3 + \gamma z_2\left(x_1^2 - \frac{\partial \alpha_1}{\partial x_1}x_1^3\right)
to obtain

\dot{V} = -c_1z_1^2 - c_2z_2^2.
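The design in Example 9.3 can be reproduced in a simple Euler simulation, using the values quoted for the figures (x(0) = [1, 1]ᵀ and c1 = c2 = γ = θ = 1):

```python
import numpy as np

# Euler simulation of Example 9.3 with theta = 1, c1 = c2 = gamma = 1
# and x(0) = [1, 1]^T, the values quoted for the figures.
c1, c2, gamma, theta = 1.0, 1.0, 1.0, 1.0
dt, T = 1e-4, 10.0
x1, x2, th = 1.0, 1.0, 0.0      # th is the estimate theta_hat

for _ in range(int(T / dt)):
    z1 = x1
    alpha1 = -c1 * z1 - x1**3 * th - x1**2
    z2 = x2 - alpha1
    da_dx1 = -c1 - 3.0 * x1**2 * th - 2.0 * x1
    da_dth = -x1**3
    th_dot = gamma * z1 * x1**3 + gamma * z2 * (x1**2 - da_dx1 * x1**3)
    u = (-z1 - c2 * z2 - x1**2 * th
         + da_dx1 * (x2 + x1**3 * th + x1**2)
         + da_dth * th_dot) / (1.0 + x1**2)
    dx1 = x2 + x1**3 * theta + x1**2
    dx2 = (1.0 + x1**2) * u + x1**2 * theta
    x1, x2, th = x1 + dt * dx1, x2 + dt * dx2, th + dt * th_dot
print(abs(x1), abs(x2))  # the states converge towards zero
```

As in the quoted figures, the states settle to zero while th converges to a constant that need not equal θ.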
Figure 9.1 The state variables x1 and x2 (0–10 s)

Figure 9.2 The estimated parameter (0–10 s)
The rest of the stability analysis follows Theorem 9.7. Simulation results are
shown in Figures 9.1 and 9.2 with x(0) = [1, 1]T , c1 = c2 = γ = θ = 1. The state
variables converge to zero as expected, and the estimated parameter converges to
a constant, but not to the correct value θ = 1. In general, estimated parameters in
adaptive control are not guaranteed to converge to their actual values.
are unknown parameters, we will not be able to design an observer in the same way
as in the observer backstepping. Instead, we can design filters similar to observers,
and obtain an expression of state estimation which contains unknown parameters. The
unknown parameters in the state estimation are then tackled by adaptive backstepping.
For the state estimation, we re-arrange the system as
ẋ = Acx + φ0(y) + Fᵀ(y, u)θ, (9.66)
where the vector θ ∈ Rq, with q = n − ρ + 1 + p, is defined by
θ = [b̄ᵀ, aᵀ]ᵀ
and
F(y, u) = ( [0(ρ−1)×(n−ρ+1); In−ρ+1] σ(y)u,  Φ(y) )ᵀ.
Similar to observer design, we design the following filters:
ξ̇ = A0ξ + Ly + φ0(y), (9.67)
Ω̇ᵀ = A0Ωᵀ + Fᵀ(y, u), (9.68)
where ξ ∈ Rn, Ωᵀ ∈ Rn×q and
L = [l1, . . . , ln]ᵀ,   A0 = Ac − LC,
with L being chosen so that A0 is Hurwitz. An estimate of the state is then given by
x̂ = ξ + Ωᵀθ. (9.69)
Let
ε = x − x̂
and from direct evaluation, we have
ε̇ = A0ε. (9.70)
Therefore, the state estimate shown in (9.69) is exponentially convergent. Notice that this estimate contains the unknown parameter vector θ, and therefore x̂ cannot be used directly in the control design. Instead, the relationship between the convergent estimate x̂ and the unknown parameter vector is exploited in the adaptive backstepping control design.
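The construction in (9.67)–(9.69) can be checked on a toy example. Everything below except the filter structure is a made-up illustration: the second-order plant, the nonlinearity sin(y), the value θ = 0.8 and the gain L are choices of this sketch, not from the text.

```python
from math import sin, hypot

# Check that x_hat = xi + Omega^T*theta from the filters (9.67)-(9.68)
# converges to the true state.  Illustrative second-order plant:
#   x1' = x2 - 2y,  x2' = -y + theta*sin(y),  y = x1,
# i.e. phi0(y) = [-2y, -y]^T and F^T(y, u) = [0, sin(y)]^T with scalar theta.
theta = 0.8                    # "unknown" parameter, used only to form x_hat
L1, L2 = 3.0, 2.0              # A0 = Ac - L*C has eigenvalues -1, -2 (Hurwitz)
dt, T = 1e-3, 8.0

x = [1.0, 0.5]                 # plant state
xi = [0.0, 0.0]                # xi-filter state, eq. (9.67)
om = [0.0, 0.0]                # Omega^T-filter state (single column), eq. (9.68)

for _ in range(int(T / dt)):
    y = x[0]
    xd = [x[1] - 2.0 * y, -y + theta * sin(y)]             # plant dynamics
    xid = [-L1 * xi[0] + xi[1] + L1 * y - 2.0 * y,         # A0*xi + L*y + phi0(y)
           -L2 * xi[0] + L2 * y - y]
    omd = [-L1 * om[0] + om[1],                            # A0*Omega^T + F^T(y)
           -L2 * om[0] + sin(y)]
    for s, d in ((x, xd), (xi, xid), (om, omd)):
        s[0] += dt * d[0]
        s[1] += dt * d[1]

x_hat = [xi[0] + om[0] * theta, xi[1] + om[1] * theta]     # eq. (9.69)
err = hypot(x[0] - x_hat[0], x[1] - x_hat[1])
print(err)  # epsilon = x - x_hat decays as exp(A0*t), eq. (9.70)
```

Note that the filters use only the measured output y; the unknown θ enters only when the estimate x̂ is assembled, which is exactly why x̂ cannot be used directly in the control design.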
We can reduce the order of the filters. Let us partition Ωᵀ as Ωᵀ = [v, Ξ] with v ∈ Rn×(n−ρ+1) and Ξ ∈ Rn×p. We then obtain, from (9.68), that
Ξ̇ = A0Ξ + Φ(y),
v̇j = A0vj + ej σ(y)u,  for j = ρ, . . . , n,
where ej denotes the jth column of the identity matrix I in Rn. For 1 < j < n, we have
A0ej = (Ac − LC)ej = Acej = ej+1.
Backstepping design 169
Note that bρ is unknown. To deal with the unknown control coefficient, we often estimate its reciprocal ϱ = 1/bρ, instead of bρ itself, to avoid using the reciprocal of an estimate. This is the reason why we define
z2 = vρ,2 − α1 − ϱ̂ẏr := vρ,2 − ϱ̂ᾱ1 − ϱ̂ẏr.
From bρϱ = 1, we have
bρϱ̂ = 1 − bρϱ̃,
where ϱ̃ = ϱ − ϱ̂. Then from (9.72), we have
ż1 = bρ(z2 + ϱ̂ᾱ1 + ϱ̂ẏr) + ϕ0 + ϕ̄ᵀθ + ε2 − ẏr
   = bρz2 − bρϱ̃(ᾱ1 + ẏr) + ᾱ1 + ϕ0 + ϕ̄ᵀθ + ε2. (9.73)
Hence, we design
ᾱ1 = −c1 z1 − k1 z1 − ϕ0 − ϕ̄ T θ̂, (9.74)
where ci and ki for i = 1, . . . , ρ are positive real design parameters. Note that with α1 = ϱ̂ᾱ1, the stabilising function α1 is a function of y, the filter states X1 and the estimates θ̂ and ϱ̂.
The resultant closed-loop dynamics are obtained as
ż1 = −c1z1 − k1z1 + bρz2 − bρϱ̃(ᾱ1 + ẏr) + ϕ̄ᵀθ̃ + ε2
   = −c1z1 − k1z1 + b̂ρz2 + b̃ρz2 − bρϱ̃(ᾱ1 + ẏr) + ϕ̄ᵀθ̃ + ε2
   = −c1z1 − k1z1 + b̂ρz2 − bρϱ̃(ᾱ1 + ẏr) + (ϕ − ϱ̂(ᾱ1 + ẏr)e1)ᵀθ̃ + ε2, (9.75)
where θ̃ = θ − θ̂. Note that bρ = θᵀe1.
As shown in the observer backstepping, the term −k1z1 is used to tackle the error term ε2. In the subsequent steps, the terms headed by ki are used to deal with the terms caused by ε2 in the stability analysis. Indeed, the method of tackling observer errors is exactly the same as in the observer backstepping when all the parameters are known. The adaptive law for ϱ̂ can be designed in this step, as ϱ̂ will not appear in the subsequent steps:
ϱ̂˙ = −γ sgn(bρ)(ᾱ1 + ẏr)z1, (9.76)
where γ is a positive real constant.
Similar to the adaptive backstepping shown in the previous section, the unknown parameter vector θ will appear in the subsequent steps, and tuning functions can be introduced in a similar way. We define the tuning functions τi as
τ1 = Γ(ϕ − ϱ̂(ẏr + ᾱ1)e1)z1, (9.77)
τi = τi−1 − Γ(∂αi−1/∂y)ϕzi,  i = 2, . . . , ρ,
where αi are the stabilising functions to be designed, and Γ ∈ Rq×q is a positive definite matrix, the adaptive gain.
− (∂αi−1/∂θ̂)(θ̂˙ − Γτi),  i = 3, . . . , ρ, (9.81)
with the adaptive law
θ̂˙ = Γτρ, (9.82)
and the control input
u = (1/σ(y)) (αρ − vρ,ρ+1 + ϱ̂ yr(ρ)). (9.83)
For the adaptive observer backstepping, we have the following stability result.
Theorem 9.8. For a system in the form of (9.65), the control input (9.83) and adap-
tive laws (9.76) and (9.82) designed by adaptive observer backstepping ensure the
boundedness of all the variables and limt→∞ (y(t) − yr (t)) = 0.
Proof. With the control design and adaptive laws presented earlier for the case ρ > 1, the dynamics of zi, for i = 1, . . . , ρ, can be written as
ż1 = −c1z1 − k1z1 + ε2 + (ϕ − ϱ̂(ẏr + ᾱ1)e1)ᵀθ̃ − bρ(ẏr + ᾱ1)ϱ̃ + b̂ρz2, (9.84)
ż2 = −b̂ρz1 − c2z2 − k2(∂α1/∂y)²z2 + z3 − (∂α1/∂y)ϕᵀθ̃ − (∂α1/∂y)ε2 + Σ_{j=3}^{ρ} σ2,j zj, (9.85)
żi = −zi−1 − cizi − ki(∂αi−1/∂y)²zi + zi+1 − (∂αi−1/∂y)ϕᵀθ̃ − (∂αi−1/∂y)ε2 + Σ_{j=i+1}^{ρ} σi,j zj − Σ_{j=2}^{i−1} σj,i zj,  i = 3, . . . , ρ. (9.86)
Let
Vρ = (1/2) Σ_{i=1}^{ρ} zi² + (1/2) θ̃ᵀΓ⁻¹θ̃ + (|bρ|/(2γ)) ϱ̃² + Σ_{i=1}^{ρ} (1/(4ki)) εᵀPε, (9.87)
where P is a positive definite matrix that satisfies
A0ᵀP + PA0 = −I.
From (9.84)–(9.86), (9.82) and (9.76), it can be shown that
V̇ρ = −Σ_{i=1}^{ρ} (ci + ki(∂αi−1/∂y)²) zi² − Σ_{i=1}^{ρ} (∂αi−1/∂y) zi ε2 − Σ_{i=1}^{ρ} (1/(4ki)) εᵀε, (9.88)
where we set ∂α0/∂y = −1. Noting that
|(∂αi−1/∂y) zi ε2| ≤ (1/(4ki)) ε2² + ki (∂αi−1/∂y)² zi², (9.89)
we have
V̇ρ ≤ −Σ_{i=1}^{ρ} ci zi². (9.90)
ζ̇ = Acζ + bu + φ(y) + Ew
y = Cζ, (10.1)
with

Ac = ⎡ 0 1 0 ⋯ 0 ⎤
     ⎢ 0 0 1 ⋯ 0 ⎥
     ⎢ ⋮ ⋮ ⋮ ⋱ ⋮ ⎥
     ⎢ 0 0 0 ⋯ 1 ⎥
     ⎣ 0 0 0 ⋯ 0 ⎦,   C = [1, 0, . . . , 0],   b = [0, . . . , 0, bρ, . . . , bn]ᵀ,
where ζ ∈ Rn is the state vector; u ∈ R is the control; φ : R → Rn with φ(0) = 0 is a nonlinear function with each element φi differentiable up to the (n − i)th order; b ∈ Rn is a known constant Hurwitz vector with bρ ≠ 0, which implies that the relative degree of the system is ρ; E ∈ Rn×m is a constant matrix; and w ∈ Rm is a disturbance generated from an unknown exosystem
ẇ = Sw,
where S is a constant matrix with distinct eigenvalues with zero real parts.
Remark 10.1. For the system (10.1), if the disturbance w = 0, it is exactly the same as (9.17), for which observer backstepping can be used to design a control input. Due to the unknown disturbance, the observer backstepping presented in Chapter 9 cannot be applied directly for control design, but a similar technique can be developed using an observer and an adaptive internal model. Also note that the linear system characterised by (Ac, b, C) is minimum phase.
Remark 10.3. With the assumption that S has distinct eigenvalues with zero real parts, w is restricted to sinusoidal signals (sinusoidal disturbances) with a possible constant bias. This is a common assumption for disturbance rejection. Roughly speaking, any periodic signal can be approximated by a finite number of sinusoidal functions.
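As a minimal numeric illustration of this assumption (the frequency ω = 2, the bias and the initial condition below are arbitrary choices), an exosystem matrix built from a zero eigenvalue and one oscillatory block has all eigenvalues on the imaginary axis and generates a constant bias plus a sinusoid:

```python
import numpy as np

omega = 2.0                            # arbitrary disturbance frequency
S = np.array([[0.0, 0.0,    0.0],
              [0.0, 0.0,  omega],
              [0.0, -omega, 0.0]])     # block-diagonal: bias + one frequency

eigs = np.linalg.eigvals(S)
max_re = max(abs(eigs.real))           # eigenvalues are {0, +j*omega, -j*omega}

# closed-form solution from w(0): w1 stays constant, (w2, w3) rotates,
# so w2(t) = w2(0)*cos(omega*t) + w3(0)*sin(omega*t) is a pure sinusoid
t = np.linspace(0.0, 10.0, 2001)
w0 = np.array([0.5, 1.0, 0.0])
w2 = w0[1] * np.cos(omega * t) + w0[2] * np.sin(omega * t)
print(max_re)
```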
Lemma 10.1. For the system (10.1) with an exosystem whose eigenvalues are distinct
and with zero real parts, there exist π (w) ∈ Rn with π1 (w) = 0 and α(w) ∈ R such
that
(∂π(w)/∂w) Sw = Acπ(w) + Ew + bα(w). (10.2)
Proof. Asymptotic disturbance rejection aims at y = 0, which implies that π1 = 0. The manifold π is invariant and it should satisfy the system equation (10.1) with y ≡ 0. From the first equation of (10.1), we have
π2 = −E1w, (10.3)
where E1 denotes the first row of E. Furthermore, we have, for 2 ≤ i ≤ ρ,
πi = (d/dt)πi−1 − Ei−1w. (10.4)
From equations ρ to n of (10.1), we obtain
Σ_{i=ρ}^{n} bi (d^(n−i)/dt^(n−i)) α(w) = (d^(n−ρ+1)/dt^(n−ρ+1)) πρ − Σ_{i=ρ}^{n} (d^(n−i)/dt^(n−i)) Ei w. (10.5)
A solution of α(w) can always be found from (10.5). With α(w), we can write, for ρ < i ≤ n,
πi = (d/dt)πi−1 − Ei−1w − bi−1α(w). (10.6)
□
With the invariant manifold π (w), we define a transformation of state as
x = ζ − π (w). (10.7)
It can be easily shown from (10.1) and (10.2) that
ẋ = Ac x + b(u − α) + φ(y)
(10.8)
y = Cx.
The stabilisation and disturbance suppression problem of (10.1) degenerates to the
stabilisation problem of (10.8).
For dynamic output feedback control, state variables need to be estimated for
the control design directly or indirectly. The difficulty for designing a state estimator
for (10.8) is that α(w), the feedforward control input for disturbance suppression, is
unknown. Let us consider the observers
ṗ = (Ac − LC)p + φ(y) + bu + Ly (10.9)
q̇ = (Ac − LC)q + bα(w), (10.10)
Remark 10.4. The importance of (10.14) compared with (10.13) is the re-formulation of the uncertainty caused by the unknown exosystem. The uncertainty in (10.13), parameterised by the unknown S and l, is represented by a single vector ψ in (10.14). The relation between the two parameterisations is discussed here. Suppose that M ∈ Rm×m is the unique solution of
MS − FM = Glᵀ. (10.15)
The existence of a non-singular M is ensured by the fact that S and F have no common eigenvalues, and that (S, l) is observable and (F, G) is controllable. From (10.15), we have
MSM⁻¹ = F + GlᵀM⁻¹, (10.16)
which implies η = Mw and ψᵀ = lᵀM⁻¹.
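The relation (10.15)–(10.16) can be verified numerically. The exosystem frequency and the pair (F, G) below are arbitrary illustrative choices; the Sylvester equation is solved by vectorisation with Kronecker products:

```python
import numpy as np

# exosystem matrix (one frequency, arbitrary) and its output vector l
omega = 1.5
S = np.array([[0.0, omega], [-omega, 0.0]])
l = np.array([[1.0], [0.0]])

# a Hurwitz F in companion form and G making (F, G) controllable
F = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.array([[0.0], [1.0]])

# solve M S - F M = G l^T by vectorisation (column-major vec):
# (kron(S^T, I) - kron(I, F)) vec(M) = vec(G l^T)
n = S.shape[0]
A = np.kron(S.T, np.eye(n)) - np.kron(np.eye(n), F)
M = np.linalg.solve(A, (G @ l.T).flatten('F')).reshape((n, n), order='F')

psi_T = l.T @ np.linalg.inv(M)          # psi^T = l^T M^{-1}
lhs = M @ S @ np.linalg.inv(M)
rhs = F + G @ psi_T                     # right-hand side of (10.16)
resid = np.abs(lhs - rhs).max()
print(resid)
```

Since S and F have disjoint spectra and the observability/controllability conditions hold, the linear system above has a unique, non-singular solution M.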
ż1 = z2 + α1 − ψᵀη + ε2 + φ1(y). (10.24)
We design α1 as
α1 = −c1z1 − k̂1z1 − φ1(y) + ψ̂ᵀξ. (10.25)
αi = −zi−1 − cizi − k̂i(∂αi−1/∂y)²zi − li(y − p1) − φi(y)
  + (∂αi−1/∂y)(p2 − ψ̂ᵀξ + φ1)
  + Σ_{j=1}^{i−1} (∂αi−1/∂pj) ṗj + Σ_{j=1}^{i−1} (∂αi−1/∂k̂j) k̂˙j + (∂αi−1/∂ξ) ξ̇
  + (∂αi−1/∂ψ̂) Γτi + Σ_{j=2}^{i−1} zj (∂αj−1/∂ψ̂) Γ ξ (∂αi−1/∂y),  i = 2, . . . , ρ, (10.27)
where li is the ith element of the observer gain L in (10.9), Γ is a positive definite matrix, and τi, i = 2, . . . , ρ, are the tuning functions defined by
τi = Σ_{j=1}^{i} (∂αj−1/∂y) ξ zj,  i = 2, . . . , ρ, (10.28)
where we set ∂α0/∂y = −1. The adaptive law for ψ̂ is set as
ψ̂˙ = Γτρ. (10.29)
The adaptive laws for k̂i, i = 1, . . . , ρ, are given by
k̂˙i = γi (∂αi−1/∂y)² zi², (10.30)
where γi is a positive real design parameter. The control input is obtained by setting zρ+1 = 0 as
u = (1/bρ)(αρ − pρ+1). (10.31)
Considering the stability, we set a restriction on c2 by
c2 > ‖PG‖², (10.32)
where P is a positive definite matrix satisfying
FᵀP + PF = −2I. (10.33)
To end the control design, we set the interlace function in (10.19) by
ι(y) = −(FG + c1G + k̂1G)y. (10.34)
Remark 10.6. The adaptive coefficients k̂i, i = 1, . . . , ρ, are introduced to allow the exosystem to be truly unknown. It implies that the proposed control design can reject disturbances with unknown frequencies.
Remark 10.7. The final control u in (10.31) does not explicitly contain α(w), the feedforward control term for disturbance rejection. Instead, the proposed control design accounts for q2, the contribution of α(w) to x2, from the first step in α1 through to the final step in αρ.
Theorem 10.3. For the system (10.1), the control input (10.31) ensures the bound-
edness of all the variables, and asymptotically rejects the unknown disturbances in
the sense that limt→∞ y(t) = 0. Furthermore, if w(0) ∈ Rm is such that w(t) contains
the components at m/2 distinct frequencies, then limt→∞ ψ̂(t) = ψ.
e = ξ − η − Gy, (10.35)
ė = Fe − G(z2 + ε2). (10.36)
+ Σ_{j=2}^{i−1} zj (∂αj−1/∂ψ̂) Γ ξ (∂αi−1/∂y),  i = 2, . . . , ρ, (10.37)
with ψ̃ = ψ − ψ̂.
In the following analysis, we denote κ0 and κi,j, for i = 1, 2, 3 and j = 1, . . . , ρ, as positive real constants which satisfy
κ0 + Σ_{j=1}^{ρ} κ3,j < 1/2. (10.39)
ki > κ1,i + 1/(4κ2,i) + ‖ψ‖²/(4κ3,i),  i = 2, . . . , ρ. (10.41)
Define a Lyapunov function candidate
V = eᵀPe + (1/2) Σ_{i=1}^{ρ} zi² + (1/2) Σ_{i=1}^{ρ} γi⁻¹ k̃i² + (1/2) ψ̃ᵀΓ⁻¹ψ̃ + β εᵀPε, (10.42)
|(∂αi−1/∂y) ψᵀG z1 zi| < κ2,i |ψᵀG|² z1² + (1/(4κ2,i)) (∂αi−1/∂y)² zi²,  i = 2, . . . , ρ.
Then, based on the conditions specified in (10.32), (10.39), (10.40), (10.41) and (10.43), we can conclude that there exist positive real constants δi, for i = 1, 2, 3, such that
V̇ ≤ −δ1 Σ_{i=1}^{ρ} zi² − δ2 eᵀe − δ3 εᵀε. (10.45)
Remark 10.8. The condition imposed on w(0) does not really affect the convergence of ψ̂. It is added to avoid the situation where w(t) degenerates to have fewer independent frequency components. In that situation, we can reform the exosystem and E in (10.1) with a smaller dimension m̄ such that the reformed w(t) is persistently exciting in the reduced space Rm̄, and accordingly we have η, ψ ∈ Rm̄. With the estimate ψ̂, together with (F, G), the disturbance frequencies can be estimated.
[Figure 10.1: Output y and control input u against time (s)]
Figure 10.2 Estimated parameters ψ̂1 (dashed), ψ̂2 (dotted), ψ̂3 (dashdot) and
ψ̂4 (solid)
ω2 = 1.5, respectively. Under both sets of frequencies, ψ̂ converged to the ideal values. It can be seen through this example that the disturbance with two unknown frequencies has been rejected completely.
ẋ = Ac x + φ(y, w, a) + bu
y = Cx (10.49)
e = y − q(w),
with

Ac = ⎡ 0 1 0 ⋯ 0 ⎤
     ⎢ 0 0 1 ⋯ 0 ⎥
     ⎢ ⋮ ⋮ ⋮ ⋱ ⋮ ⎥
     ⎢ 0 0 0 ⋯ 1 ⎥
     ⎣ 0 0 0 ⋯ 0 ⎦,   C = [1, 0, . . . , 0],   b = [0, . . . , 0, bρ, . . . , bn]ᵀ,

and the disturbance w is generated by the exosystem
ẇ = S(σ)w.
Remark 10.9. Different from (10.1), the system (10.49) has a measurement e that is
perturbed by an unknown polynomial of the unknown disturbance. This is the reason
why the problem to be solved is an output regulation problem, rather than a disturbance
rejection problem.
Remark 10.11. In the system, all the parameters are assumed unknown, including
the sign of the high-frequency gain, bρ , and the parameters of the exosystem. A
Nussbaum gain is used to deal with the unknown sign of the high-frequency gain.
The adaptive control techniques presented in this control design can be easily applied
to other adaptive control schemes introduced in this book. The nonlinear functions
φ(y, w, a) and q(w) are only assumed to have known upper orders. This class of
nonlinear systems perhaps remains the largest class of uncertain nonlinear systems for which the global output regulation problem can be solved with unknown disturbance frequencies.
where λi > 0, for i = 1, . . . , ρ − 1, are the design parameters, and the filtered
transformation
z̄ = x − [d̄1 , . . . , d̄ρ−1 ]ξ , (10.51)
where ξ = [ξ1 , . . . , ξρ−1 ]T and d̄i ∈ Rn for i = 1, . . . , ρ − 1, and they are generated
recursively by d̄ρ−1 = b and d̄i = (Ac + λi+1 I )d̄i+1 for i = ρ − 2, . . . , 1. The system
(10.49) is then transformed to
z̄˙ = Ac z̄ + φ(y, w, a) + dξ1
(10.52)
y = C z̄,
With ξ1 as the input, the system (10.52) has relative degree 1 and is minimum phase. We introduce another state transformation to extract the internal dynamics of (10.52), with z ∈ Rn−1 given by
z = z̄2:n − (d2:n/d1) y, (10.54)
where (·)2:n refers to the vector or matrix formed by the 2nd row to the nth row. With
the coordinates (z, y), (10.52) is rewritten as
ż = Dz + ψ(y, w, θ )
(10.55)
ẏ = z1 + ψy (y, w, θ ) + bρ ξ1 ,
where the unknown parameter vector θ = [aᵀ, bᵀ]ᵀ, and D is the left companion matrix of d given by

D = ⎡ −d2/d1   1 ⋯ 0 ⎤
    ⎢ −d3/d1   0 ⋯ 0 ⎥
    ⎢    ⋮     ⋮ ⋱ ⋮ ⎥
    ⎢ −dn−1/d1 0 ⋯ 1 ⎥
    ⎣ −dn/d1   0 ⋯ 0 ⎦, (10.56)
and
ψ(y, w, θ) = D (d2:n/d1) y + φ2:n(y, w, a) − (d2:n/d1) φ1(y, w, a),
ψy(y, w, θ) = (d2/d1) y + φ1(y, w, a).
Notice that D is Hurwitz, from (9.36), and that the dependence of d on b is reflected in
the parameter θ in ψ(y, w, θ ) and ψy (y, w, θ ), and it is easy to check that ψ(0, w, θ) = 0
and ψy (0, w, θ ) = 0.
Disturbance rejection and output regulation 189
The solution of the output regulation problem depends on the existence of certain
invariant manifold and feedforward input. For this problem, we have the following
result.
where
α(w, θ, σ) = bρ⁻¹ ((∂q(w)/∂w) S(σ)w − π1(w) − ψy(q(w), w, θ)).
Furthermore, this immersion can be re-parameterised as
η̇ = (F + Glᵀ)η
α = lᵀη, (10.58)
Proof. With ξ1 being viewed as the input, α is the feedforward term used for output regulation to tackle the disturbances, and from the second equation of (10.55), we have
α(w, θ, σ) = bρ⁻¹ ((∂q(w)/∂w) S(σ)w − π1(w) − ψy(q(w), w, θ)).
From the structure of the exosystem, the disturbances are sinusoidal functions. Polynomials of sinusoidal functions are still sums of sinusoidal functions, including some higher-frequency terms. Since all the nonlinear functions involved in the system (10.49) are polynomials of their variables, the immersion in (10.58) always exists. For a controllable pair (F, G), M is an invertible solution of (10.59) provided the corresponding observability condition holds, which is guaranteed by the immersion. □
We now introduce the last transformation based on the invariant manifold with
z̃ = z − π (10.60)
z̃˙ = Dz̃ + ψ̃
ė = z̃1 + ψ̃y + bρ (ξ − l T η)
ξ̇1 = −λ1 ξ1 + ξ2 (10.61)
...
ξ̇ρ−1 = −λρ−1 ξρ−1 + u,
where
ψ̃ = ψ(y, w, θ) − ψ(q(w), w, θ)
and ψ̃y = ψy(y, w, θ) − ψy(q(w), w, θ).
Since the state in the internal model η is unknown, we design the adaptive internal
model
η̃ = η − η̂ + bρ⁻¹Ge, (10.63)
η̃˙ = Fη̃ − FG bρ⁻¹ e + bρ⁻¹G z̃1 + bρ⁻¹G ψ̃y. (10.64)
ξ̂1 = N(κ)ξ̄1
κ̇ = e ξ̄1, (10.65)
where the Nussbaum gain N is a function (e.g. N(κ) = κ² cos κ) which satisfies the two-sided Nussbaum properties
lim sup_{κ→±∞} (1/κ) ∫₀^κ N(s) ds = +∞, (10.66)
lim inf_{κ→±∞} (1/κ) ∫₀^κ N(s) ds = −∞, (10.67)
where κ → ±∞ denotes κ → +∞ and κ → −∞ respectively. From (10.61)
and the definition of the Nussbaum gain, we have
ė = z̃1 + (bρ N − 1)ξ̄1 + ξ̄1 + b̃ρ ξ̃1 + b̂ρ ξ̃1 − lbT η + ψ̃y ,
where lb = bρl, b̂ρ is an estimate of bρ, b̃ρ = bρ − b̂ρ, and ξ̃1 = ξ1 − ξ̂1. Since the nonlinear functions involved in ψ̃ and ψ̃y are polynomials with ψ̃(0, w, θ, σ) = 0 and ψ̃y(0, w, θ, σ) = 0, w is bounded, and the unknown parameters are constants, it can be shown that
where p is a known positive integer, depending on the polynomials in ψ̃ and ψ̃y , and
r̄z and r̄y are unknown positive real constants. We now design the virtual control ξ̂1
as, with c0 > 0,
ė = −c0e − k̂0(e + e^(2p−1)) + z̃1 + (bρN − 1)ξ̄1 + b̃ρξ̃1 + b̂ρξ̃1,
k̂˙0 = e² + e^(2p),
τb,0 = ξ̃1 e, (10.69)
τl,0 = −η̂ e,
where τb,0 and τl,0 denote the first tuning functions in adaptive backstepping design
for the final adaptive laws for b̂ρ and l̂b . If the relative degree ρ = 1, we set u = ξ̂1 .
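The two-sided Nussbaum properties (10.66) and (10.67) of N(κ) = κ² cos κ are easy to check numerically. In the sketch below (grid resolution and evaluation points are arbitrary choices), the running average (1/κ)∫₀^κ N(s) ds is evaluated near the peaks and troughs of sin κ, where it grows without bound in each direction:

```python
import numpy as np

def avg_integral(kappa, num=200001):
    """(1/kappa) * integral_0^kappa N(s) ds for N(s) = s^2 cos(s), trapezoidal."""
    s = np.linspace(0.0, kappa, num)
    f = s**2 * np.cos(s)
    integral = np.sum(0.5 * (f[1:] + f[:-1])) * (s[1] - s[0])
    return integral / kappa

# near kappa = pi/2 + 2*pi*k the average is large and positive,
# near kappa = 3*pi/2 + 2*pi*k it is large and negative
pos = [avg_integral(np.pi / 2 + 2 * np.pi * k) for k in (5, 10, 20)]
neg = [avg_integral(3 * np.pi / 2 + 2 * np.pi * k) for k in (5, 10, 20)]
print(pos)   # growing positive values
print(neg)   # growing negative values
```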
For ρ > 1, adaptive backstepping can be used to obtain the following results:
ξ̂2 = −b̂ρe − c1ξ̃1 − k1(∂ξ̂1/∂e)²ξ̃1 + (∂ξ̂1/∂e)(b̂ρξ1 − l̂bᵀη̂) + (∂ξ̂1/∂η̂)η̂˙
  + (∂ξ̂1/∂k̂0)k̂˙0 + (∂ξ̂1/∂l̂b)τl,1, (10.70)
ξ̂i = −ξ̃i−2 − ci−1ξ̃i−1 − ki−1(∂ξ̂i−1/∂e)²ξ̃i−1
  + (∂ξ̂i−1/∂e)(b̂ρξ1 − l̂bᵀη̂) + (∂ξ̂i−1/∂η̂)η̂˙
  + (∂ξ̂i−1/∂k̂0)k̂˙0 + (∂ξ̂i−1/∂b̂ρ)τb,i−1 + (∂ξ̂i−1/∂l̂b)τl,1
  − Σ_{j=4}^{i} (∂ξ̂i−1/∂e) ξ1 (∂ξ̂j−2/∂b̂ρ) ξ̃j−2
  + Σ_{j=3}^{i} (∂ξ̂i−1/∂e) (∂ξ̂j−2/∂l̂b) η̂ ξ̃j−2,  for i = 2, . . . , ρ, (10.71)
Theorem 10.5. For a system (10.49) satisfying the invariant manifold condition (10.57), the adaptive output regulation problem is globally solved by the feedback control system consisting of the ξ-filters (10.50), the adaptive internal model (10.62), the Nussbaum gain parameter (10.65), the parameter adaptive laws (10.69), (10.72) and (10.73), and the feedback control (10.74), which ensures the convergence to zero of the regulated measurement and the boundedness of all the variables in the closed-loop system.
where β1 and β2 are two positive reals and Pz and Pη are positive definite matrices
satisfying
Pz D + DT Pz = −I ,
Pη F + F T Pη = −I .
With the design of ξ̂i , for i = 1, . . . , ρ, the dynamics of ξ̃i can be easily evaluated.
From the dynamics of z̃ in (10.61) and the dynamics of η̃ in (10.64), virtual controls
and adaptive laws designed earlier, we have the derivative of V as
The stability analysis proceeds by using the inequalities 2xy < rx² + y²/r or xy < rx² + y²/(4r), for x > 0, y > 0 and r being any positive real, to tackle the cross-terms between the variables z̃, η̃, e and ξ̃i, for i = 1, . . . , ρ − 1. It can be shown that there exists a sufficiently large positive real β1, then a sufficiently large positive real β2, and finally a sufficiently large k0 such that the following result holds:
V̇ ≤ (bρN(κ) − 1)κ̇ − (1/3)β1η̃ᵀη̃ − (1/4)β2z̃ᵀz̃ − c0e² − Σ_{i=1}^{ρ−1} ci ξ̃i². (10.75)
V(t) ≤ ∫₀^κ(t) bρ N(s) ds − κ(t) + V(0). (10.76)
If κ(t), ∀t ∈ R+ , is not bounded from above or below, then from (10.66) and (10.67)
it can be shown that the right-hand side of (10.76) will be negative at some instances
of time, which is a contradiction, since the left-hand side of (10.76) is non-negative.
ẋ = Acx + φ(y)a + E(w) + bu
y = Cx (10.77)
e = y − q(w),
with

Ac = ⎡ 0 1 0 ⋯ 0 ⎤
     ⎢ 0 0 1 ⋯ 0 ⎥
     ⎢ ⋮ ⋮ ⋮ ⋱ ⋮ ⎥
     ⎢ 0 0 0 ⋯ 1 ⎥
     ⎣ 0 0 0 ⋯ 0 ⎦,   C = [1, 0, . . . , 0],   b = [0, . . . , 0, bρ, . . . , bn]ᵀ,
where x ∈ Rn is the state vector; u ∈ R is the control; y ∈ R is the output; and e is the regulated measurement.
Remark 10.12. The assumption about the function φ is satisfied for many kinds of
functions, for example polynomial functions.
Remark 10.13. The nonlinear exosystem (10.78) includes nonlinear systems that
have limit cycles.
Remark 10.14. The system (10.77) is very similar to the system (10.49) considered in the previous section. The main difference is that the exosystem is nonlinear.
The system (10.77) has the same structure of Ac, bu and C as in (10.49), and therefore the same filtered transformation as in the previous section, together with the transformation for extracting the zero dynamics, can be used here. Using the transformations (10.51) and (10.54), the system (10.77) is put in the coordinates (z, y) as
żn−1 = −(dn/d1)z1 − (dnd2/d1²)y + (φn(y) − (dn/d1)φ1(y))a + En(w) − (dn/d1)E1(w),
ẏ = z1 + (d2/d1)y + φ1(y)a + E1(w) + bρξ1,
where di are defined in (10.53).
Proposition 10.6. Suppose that there exist Π(w) ∈ Rn and ι(w) with Π1(w) = q(w) for each a, b such that
(∂Π(w)/∂w) s(w) = AcΠ(w) + φ(q(w))a + E(w) + bι(w). (10.80)
Then there exists π(w) ∈ Rn−1 satisfying the dynamics of z along the trajectories of the exosystem.
Proof. Since the last equation of the input filter (10.50) used for the filtered transformation is an asymptotically stable linear system, there is a static response for the external input ι(w), i.e. there exists a function χρ−1(w) such that
(∂χρ−1(w)/∂w) s(w) = −λρ−1χρ−1(w) + ι(w).
Recursively, if there exists χi(w) such that
(∂χi(w)/∂w) s(w) = −λiχi(w) + χi+1(w),
then there exists χi−1(w) such that
(∂χi−1(w)/∂w) s(w) = −λi−1χi−1(w) + χi(w).
Define
⎡ π(w) ⎤
⎣ q(w) ⎦ = Da(Π(w) − [d̄1, . . . , d̄ρ−1]χ),
where χ = [χ1, . . . , χρ−1]ᵀ and

Da = ⎡ −d2/d1 1 ⋯ 0 ⎤
     ⎢   ⋮    ⋮ ⋱ ⋮ ⎥
     ⎢ −dn/d1 0 ⋯ 1 ⎥
     ⎣   1    0 ⋯ 0 ⎦.
It can be seen that π(w) satisfies the dynamics of z along the trajectories of (10.78), as shown in (10.80), and hence the proposition is proved. □
α = bρ⁻¹ ((∂q(w)/∂w) s(w) − π1(w) − (d2/d1)q(w) − φ1(q(w))a − E1(w)).
We now introduce the last transformation based on the invariant manifold with
We now introduce the last transformation based on the invariant manifold with
z̃ = z − π(w(t)).
Finally we have the model for the control design
Lemma 10.7. There exist a known non-decreasing function ζ(·) and an unknown constant ℓ, dependent on the initial state w0 of the exosystem, such that
‖Φ(y, w, d)‖ ≤ ℓ|e|ζ(|e|),
|φ1(y) − φ1(q(w))| ≤ ℓ|e|ζ(|e|).
Let
Vz = z̃ᵀPdz̃,
where
PdD + DᵀPd = −I.
Then, using 2ab ≤ ca² + c⁻¹b² and ζ²(|e|) ≤ ζ²(1 + e²), there exist unknown positive real constants ℓ1 and ℓ2 such that
V̇z = −z̃ᵀz̃ + 2z̃ᵀPd(de + Φ(y, w, d)a)
  ≤ −(3/4)z̃ᵀz̃ + ℓ1e² + ℓ2e²ζ²(1 + e²), (10.82)
noting that
2z̃ᵀPd d e ≤ (1/8)z̃ᵀz̃ + 8e²dᵀPd²d ≤ (1/8)z̃ᵀz̃ + ℓ1e²,
and
2z̃ᵀPdΦa ≤ (1/8)z̃ᵀz̃ + 8aᵀΦᵀPd²Φa
  ≤ (1/8)z̃ᵀz̃ + ℓ12‖Φ‖²
  ≤ (1/8)z̃ᵀz̃ + ℓ12ℓ²|e|²ζ²(|e|)
  ≤ (1/8)z̃ᵀz̃ + ℓ2e²ζ²(1 + e²),
where ℓ12 is an unknown positive real constant.
Now let us consider the internal model design. We need an internal model to
produce a feedforward input that converges to the ideal feedforward control term
α(w), which can be viewed as the output of the exosystem as
ẇ = s(w)
α = α(w).
Suppose that there exists an immersion of the exosystem
η̇ = Fη + Gγ (J η)
(10.83)
α = H η,
where η ∈ Rr, H = [1, 0, . . . , 0], (H, F) is observable, γ satisfies the monotone condition
(v1 − v2)ᵀ(γ(v1) − γ(v2)) ≥ 0,
and G and J are some appropriate dimensional matrices. We then design an internal
model as
η̂˙ = (F − KH)(η̂ − bρ⁻¹Ke) + Gγ(J(η̂ − bρ⁻¹Ke)) + Kξ1, (10.84)
where K ∈ Rr is chosen such that F0 = F − KH is Hurwitz and there exist a positive definite matrix PF and a semi-positive definite matrix Q satisfying
PF F0 + F0ᵀPF = −Q
PF G + Jᵀ = 0
ηᵀQη ≥ γ0|η1|²,  γ0 > 0,  ∀η ∈ Rr (10.85)
span(PF K) ⊆ span(Q).
Remark 10.16. Note that the condition specified in (10.85) is weaker than the condition that there exist PF > 0 and Q > 0 satisfying
PF F0 + F0ᵀPF = −Q
PF G + Jᵀ = 0, (10.86)
which can be checked by LMI. This will be seen in the example later in this section. In particular, if G and Jᵀ are two column vectors, (F0, G) is controllable, (J, F0) is observable and Re[−J(jωI − F0)⁻¹G] > 0, ∀ω ∈ R, then there exists a solution of (10.86) from the Kalman–Yakubovich lemma.
+ bρ⁻¹K (z̃1 + (d2/d1)e + (φ1(y) − φ1(q(w)))a).
Let
Vη = η̃ᵀPF η̃.
Then, following the spirit of (10.82), there exist unknown positive real constants ℓ1 and ℓ2 such that
V̇η = −η̃ᵀQη̃ + 2η̃ᵀPF bρ⁻¹K(z̃1 + (d2/d1)e) + 2η̃ᵀPF bρ⁻¹K(φ1(y) − φ1(q(w)))a
  ≤ −(3/4)γ0|η̃1|² + (12/γ0)bρ⁻²z̃1² + ℓ1e² + ℓ2e²ζ²(1 + e²). (10.87)
Let us proceed with the control design. From (10.81) and
α = η1 = η̂1 + η̃1 − bρ⁻¹K1e,
we have
ė = z̃1 + (d2/d1)e + (φ1(y) − φ1(q(w)))a + ξ̄1 + bρ(ξ̃1 − η̃1 − η̂1 + bρ⁻¹K1e),
where ξ̃1 = ξ1 − ξ̂1 and
ξ̂1 = bρ⁻¹ ξ̄1. (10.88)
For the virtual control ξ̂1 , we design ξ̄1 as, with c0 > 0,
there exist unknown positive real constants ℓ1 and ℓ2, and a sufficiently large unknown positive constant β, such that
V̇e = −c0e² + ez̃1 + (d2/d1)e² + e bρ(ξ̃1 − η̃1) + e(φ1(y) − φ1(q(w)))a − l̂e²(1 + ζ²(1 + e²))
  ≤ −c0e² + (1/8)βz̃1² + (1/4)γ0η̃1² + ℓ1e² + ℓ2e²ζ²(1 + e²)
    − l̂e²(1 + ζ²(1 + e²)) + bρeξ̃1. (10.90)
Let
V0 = βVz + Vη + Ve + (1/2)γ⁻¹(l̂ − l)²,
where β ≥ (96/γ0)bρ⁻² is chosen and l is an unknown constant collecting the coefficients of the e² and e²ζ²(1 + e²) terms in (10.82), (10.87) and (10.90). Let
l̂˙ = γe²(1 + ζ²(1 + e²)).
Theorem 10.8. For the system (10.77) with the nonlinear exosystem (10.78), if there exist an invariant manifold (10.80), an immersion (10.83), and K ∈ Rr such that F0 = F − KH is Hurwitz with a positive definite matrix PF and a semi-positive definite matrix Q satisfying (10.85), then there exists a controller that solves the output regulation problem, in the sense that the regulated measurement converges to zero asymptotically while the other variables remain bounded.
ẇ1 = w1 + w2 − w1³
ẇ2 = −w1 − w2³.
It is easy to see that V(w) = (1/2)w1² + (1/2)w2² satisfies
dV/dt = w1² − w1⁴ − w2⁴ ≤ 0,  when |w1| ≥ 1,
and that
q(w) = w1,   π = w1,   α(w) = −w1.
From the exosystem and the desired feedforward input α, it can be seen that the condition specified in (10.85) is satisfied with η = −w and

F = ⎡  1 1 ⎤,   G = ⎡ −1  0 ⎤,   γ1(s) = γ2(s) = s³,   J = ⎡ 1 0 ⎤.
    ⎣ −1 0 ⎦        ⎣  0 −1 ⎦                              ⎣ 0 1 ⎦

Choosing K = [2, 0]ᵀ, we have
F0 = ⎡ −1 1 ⎤,   PF = I,   Q = diag(2, 0),
     ⎣ −1 0 ⎦
and the internal model is designed as the following:
η̂˙1 = −(η̂1 − 2e) + η̂2 − (η̂1 − 2e)³ + 2u
η̂˙2 = −(η̂1 − 2e) − η̂2³.
The control input and the adaptive law are given by
[Figure: Tracking error e against time t (s)]
[Figure: Control input u against time t (s)]
Figure 10.4 The system's feedforward control η1 and its estimate η̂1
[Figure: Exosystem trajectory, w2 against w1]
Remark 10.17. For the convenience of presentation, we only consider the system
with relative degree 1 as in (10.91). The systems with higher relative degrees
can be dealt with similarly by invoking backstepping. The second equation in
(10.91) describes the internal dynamics of the system states, and if we set v = 0
and y = 0, ż = f (z, 0, 0) denotes the zero dynamics of this system.
Remark 10.18. The system in (10.91) specifies a kind of standard form for asymptotic rejection of general periodic disturbances. For example, consider
ẋ = Ax + φ(y, v) + bu
y = cᵀx,
with b, c ∈ Rn and

A = ⎡ −a1 1 ⋯ 0 ⎤
    ⎢ −a2 0 ⋱ 0 ⎥
    ⎢  ⋮  ⋮ ⋱ ⋮ ⎥
    ⎣ −an 0 ⋯ 0 ⎦,   b = [b1, b2, . . . , bn]ᵀ,   c = [1, 0, . . . , 0]ᵀ,
where x ∈ Rn is the state vector; y and u ∈ R are the output and input respectively
of the system; v ∈ Rm denotes general periodic disturbances; and φ : R × Rm → Rn
is a nonlinear smooth vector field in Rn with φ(0, 0) = 0. This system is similar to
the system (10.49) with q(w) = 0. For this class of nonlinear systems, the asymptotic
disturbance rejection depends on the existence of state transform to put the systems
in the form shown in (10.91), and it has been shown in Section 10.2 that such a
transformation exists under some mild assumptions.
where ω = 2π/T. Here, the desired feedforward input μ(v) is modelled as the nonlinear output h(w) of the second-order system. With
e^(At) = ⎡  cos ωt  sin ωt ⎤
         ⎣ −sin ωt  cos ωt ⎦,
the linear part of the output h(w), Hw, is always of the form a sin(ωt + φ), where a and φ denote the amplitude and phase respectively. Hence, we can set H = [1 0] without loss of generality, as the amplitude and the phase can be decided by the initial value with
Based on the above discussion, the dynamic model for general periodic disturbance
is described by
ẇ1 = ωw2
ẇ2 = −ωw1 (10.92)
μ = w1 + h1 (w1 , w2 ),
where h1 (w1 , w2 ) is a Lipschitz nonlinear function with Lipschitz constant γ .
For the model shown in (10.92), the dynamics are linear, but the output function is nonlinear. Many results in the literature on observer design for nonlinear Lipschitz systems are for systems with nonlinearities in the system dynamics while the output functions are linear. Here we need results for observer design with nonlinear output functions. Similar techniques to the observer design for nonlinearities in the dynamics can be applied to the case when the output functions are nonlinear.
We have shown the observer design for a linear dynamic system with a nonlinear Lipschitz output function in Section 8.4, with the observer format in (8.38) and the gain in Theorem 8.11. Now we can apply this result to observer design for the model of general periodic disturbances. For the model shown in (10.92), the observer shown in (8.38) can be applied with
A = ⎡  0 ω ⎤
    ⎣ −ω 0 ⎦
and H = [1 0]. We have the following lemma for the stability of this observer.
Lemma 10.9. An observer in the form of (8.38) can be designed to provide an exponentially convergent state estimate for the general periodic disturbance model (10.92) if the Lipschitz constant γ for h1 satisfies γ < 1/√2.
Therefore, the second condition in (8.40) is satisfied. Following the first condition specified in (8.39), we set
L = ⎡ 4ω(4pγ²ω)/((4pγ²ω)² − 1) ⎤
    ⎣ 4ω/((4pγ²ω)² − 1)        ⎦. (10.93)
The rest of the proof can be completed by invoking Theorem 8.11. □
Hence, from the above lemma, we design the observer for the general disturbance model as
x̂˙ = Ax̂ + L(y − h(x̂)), (10.94)
where A = ⎡0 ω; −ω 0⎤ and H = [1 0], with the observer gain L as shown in (10.93).
Before introducing the control design, we need to examine the stability issues of the z-subsystem, and hence introduce a number of functions that are needed later for the control design and stability analysis of the entire system.
Proof. From Corollary 5.10, there exists a Lyapunov function Vz(z) that satisfies
α1(‖z‖) ≤ Vz(z) ≤ α2(‖z‖)
V̇z(z) ≤ −α(‖z‖) + σ(|y|), (10.96)
where α, α1 and α2 are class K∞ functions, and σ is a class K function. Let β be a K∞ function such that β(‖z‖) ≥ a²(‖z‖) and β(s) = O(a²(s)) as s → 0. Since β(s) = O(a²(s)) = O(α(s)) as s → 0, there exists a smooth nondecreasing (SN) function q̃ such that, ∀r ∈ R+,
(1/2) q̃(r)α(r) ≥ β(r).
Let us define two functions
q(r) := q̃(α1⁻¹(r)),
ρ(r) := ∫₀^r q(t) dt.
Define
Ṽ(z) := ρ(V(z)),
and it can be obtained that
Ṽ˙(z) ≤ −q(V(z))α(‖z‖) + q(V(z))σ(|y|)
  ≤ −(1/2)q(V(z))α(‖z‖) + q(θ(|y|))σ(|y|)
  ≤ −(1/2)q(α1(‖z‖))α(‖z‖) + q(θ(|y|))σ(|y|)
  = −(1/2)q̃(‖z‖)α(‖z‖) + q(θ(|y|))σ(|y|),
where θ is defined as
θ(r) := α2(α⁻¹(2σ(r)))
for r ∈ R+. Let us define a smooth function σ̄ such that
σ̄(r) ≥ q(θ(|r|))σ(|r|)
for r ∈ R and σ̄(0) = 0, and then we have established (10.95). □
\[ u = -b^{-1}\left(\psi_0(y) + k_0 y + k_1 y + k_2 \frac{\bar{\sigma}(y)}{y} + k_3 \bar{\psi}(y)\right) + h(\eta - b^{-1}Ly), \tag{10.98} \]
where \(k_0\) is a positive real constant, and
\begin{align*}
k_1 &= \kappa^{-1} b^2 (\gamma + \|H\|)^2 + \tfrac{3}{4}, \\
k_2 &= 4\kappa^{-1} \|b^{-1}PL\|^2 + 2, \\
k_3 &= 4\kappa^{-1} \|b^{-1}PL\|^2 + \tfrac{1}{2}.
\end{align*}
For the stability of the closed-loop system, we have the following theorem.
Theorem 10.11. Suppose that
● the subsystem \(\dot{z} = f(z, v, y)\) is ISS with state z and input y, characterized by the ISS pair \((\alpha, \sigma)\), and furthermore \(\alpha(s) = O(a^2(s))\) as \(s \to 0\).
Then the output feedback control design with the internal model (10.97) and the control input (10.98) ensures the boundedness of all the variables of the closed-loop system and the asymptotic convergence to zero of the state variables z and y and the estimation error \((w - \eta + b^{-1}Ly)\).
Proof. Let
\[ \xi = w - \eta + b^{-1}Ly. \]
It can be obtained from (10.97) that
\[ \dot{\xi} = (A - LH)\xi + b^{-1}L(h_1(w) - h_1(w - \xi)) + b^{-1}La(z) + b^{-1}L\psi(y, v). \]
Let \(V_w = \xi^T P \xi\). It can be obtained that
\begin{align*}
\dot{V}_w(\xi) &\le -\kappa\|\xi\|^2 + 2|\xi^T b^{-1}PLa(z)| + 2|\xi^T b^{-1}PL\psi(y, v)| \\
&\le -\tfrac{1}{2}\kappa\|\xi\|^2 + 4\kappa^{-1}\|b^{-1}PL\|^2 \left(a^2(\|z\|) + |\psi(y, v)|^2\right) \\
&\le -\tfrac{1}{2}\kappa\|\xi\|^2 + (k_2 - 2)\beta(\|z\|) + \left(k_3 - \tfrac{1}{2}\right) y\bar{\psi}(y), \tag{10.99}
\end{align*}
where \(\kappa = \frac{1}{2\gamma^2} - 1\).
Based on the control input (10.98), we have
\[ \dot{y} = -k_0 y - k_1 y - k_2 \frac{\bar{\sigma}(y)}{y} - k_3\bar{\psi}(y) + a(z) + \psi(y, v) + b(h(w - \xi) - h(w)). \]
Let \(V_y = \frac{1}{2}y^2\). It follows from the previous equation that
\begin{align*}
\dot{V}_y &= -(k_0 + k_1)y^2 - k_2\bar{\sigma}(y) - k_3 y\bar{\psi}(y) + ya(z) + y\psi(y, v) + yb(h(w - \xi) - h(w)) \\
&\le -k_0 y^2 - k_2\bar{\sigma}(y) - \left(k_3 - \tfrac{1}{2}\right) y\bar{\psi}(y) + \beta(\|z\|) + \tfrac{1}{4}\kappa\|\xi\|^2. \tag{10.100}
\end{align*}
Let us define a Lyapunov function candidate for the entire closed-loop system as
\[ V = V_y + V_w + k_2\tilde{V}_z. \]
Following the results shown in (10.96), (10.99) and (10.100), we have
\[ \dot{V} \le -k_0 y^2 - \tfrac{1}{4}\kappa\|\xi\|^2 - \beta(\|z\|). \]
Therefore, we can conclude that the closed-loop system is asymptotically stable with respect to the state variables y, z and the estimation error \(\xi\).
Several types of disturbance rejection and output regulation problems can be
converted to the form (10.91). In this section, we show two examples. The first
example deals with rejection of general periodic disturbances, and the second example
demonstrates how the proposed method can be used for output regulation.
\begin{align*}
\dot{x}_1 &= x_2 + \phi_1(x_1) + b_1 u \\
\dot{x}_2 &= \phi_2(x_1) + \nu(w) + b_2 u \tag{10.101} \\
\dot{w} &= Aw \\
y &= x_1.
\end{align*}
Let
\[ \bar{z} = x_2 - \frac{b_2}{b_1}x_1. \]
Then
\begin{align*}
\dot{y} &= \bar{z} + \frac{b_2}{b_1}y + \phi_1(y) + b_1 u \\
\dot{\bar{z}} &= -\frac{b_2}{b_1}\bar{z} + \phi_2(y) - \frac{b_2}{b_1}\phi_1(y) - \left(\frac{b_2}{b_1}\right)^2 y + \nu(w).
\end{align*}
Consider
\[ \dot{\pi}_z = -\frac{b_2}{b_1}\pi_z + \nu(w). \]
It can be shown that there exists a steady-state solution, and furthermore, we can express the solution as a nonlinear function of w, denoted by \(\pi_z(w)\). Let us introduce another state transformation with \(z = \bar{z} - \pi_z(w)\). We then have
\begin{align*}
\dot{y} &= z + \frac{b_2}{b_1}y + \phi_1(y) + b_1\left(u + b_1^{-1}\pi_z(w)\right) \\
\dot{z} &= -\frac{b_2}{b_1}z + \phi_2(y) - \frac{b_2}{b_1}\phi_1(y) - \left(\frac{b_2}{b_1}\right)^2 y. \tag{10.102}
\end{align*}
212 Nonlinear and adaptive control systems
Comparing with (10.91), we identify
\begin{align*}
a(z) &= z, \\
\psi(y) &= \frac{b_2}{b_1}y + \phi_1(y), \\
b &= b_1, \\
h(w) &= -b_1^{-1}\pi_z(w), \\
f(z, v, y) &= -\frac{b_2}{b_1}z + \phi_2(y) - \frac{b_2}{b_1}\phi_1(y) - \left(\frac{b_2}{b_1}\right)^2 y.
\end{align*}
From a(z) = z, we can set \(\beta(\|z\|) = a^2(\|z\|) = \|z\|^2\).
It can be shown that the second condition of Theorem 10.11 is satisfied by (10.102). Indeed, let \(V_z = \frac{1}{2}z^2\), and we have
\begin{align*}
\dot{V}_z &= -\frac{b_2}{b_1}z^2 + z\left(\phi_2(y) - \frac{b_2}{b_1}\phi_1(y) - \left(\frac{b_2}{b_1}\right)^2 y\right) \\
&\le -\frac{1}{2}\frac{b_2}{b_1}z^2 + \frac{1}{2}\frac{b_1}{b_2}\left(\phi_2(y) - \frac{b_2}{b_1}\phi_1(y) - \left(\frac{b_2}{b_1}\right)^2 y\right)^2.
\end{align*}
Let
\[ \tilde{V}_z = \frac{2b_1}{b_2} V_z, \]
and finally we have
\[ \dot{\tilde{V}}_z \le -\beta(|z|) + \left(\frac{b_1}{b_2}\right)^2 \left(\phi_2(y) - \frac{b_2}{b_1}\phi_1(y) - \left(\frac{b_2}{b_1}\right)^2 y\right)^2. \tag{10.103} \]
It can be seen that there exists a class \(\mathcal{K}\) function \(\sigma(|y|)\) to dominate the second term on the right-hand side of (10.103), and hence the z-subsystem is ISS. For the control design, we can take
\[ \bar{\sigma}(y) = \left(\frac{b_1}{b_2}\right)^2 \left(\phi_2(y) - \frac{b_2}{b_1}\phi_1(y) - \left(\frac{b_2}{b_1}\right)^2 y\right)^2. \]
The rest of the control design follows the steps shown earlier.
For the simulation study, we set the periodic disturbance as a square wave. For convenience, we abuse the notation and write \(\nu(w(t))\) and \(h(w(t))\) as \(\nu(t)\) and \(h(t)\). For \(\nu\) with t in one period, we have
\[ \nu = \begin{cases} d, & 0 \le t < \dfrac{T}{2}, \\[2mm] -d, & \dfrac{T}{2} \le t < T, \end{cases} \tag{10.104} \]
\[ \bar{h}(t) = \begin{cases} -\dfrac{1}{b_2}\left(1 - e^{-\frac{b_2}{b_1}t}\right) + \dfrac{1}{b_2}\, e^{-\frac{b_2}{b_1}t} \tanh\left(\dfrac{b_2 T}{4 b_1}\right), & 0 \le t < \dfrac{T}{2}, \\[3mm] \dfrac{1}{b_2}\left(1 + e^{-\frac{b_2}{b_1}t} - 2e^{-\frac{b_2}{b_1}\left(t - \frac{T}{2}\right)}\right) + \dfrac{1}{b_2}\, e^{-\frac{b_2}{b_1}t} \tanh\left(\dfrac{b_2 T}{4 b_1}\right), & \dfrac{T}{2} \le t < T. \end{cases} \tag{10.105} \]
Eventually we have the matched periodic disturbance h(w) given by
\[ h(w) = \sqrt{w_1^2 + w_2^2}\; \bar{h}\left(\arctan\frac{w_2}{w_1}\right). \]
Note that \(\sqrt{w_1^2 + w_2^2}\) determines the amplitude, which can be set by the initial state of w.
In the simulation study, we set \(T = 1\), \(d = 10\), \(\phi_1(y) = y^3\), \(\phi_2(y) = y^2\) and \(b_1 = b_2 = 1\). The simulation results are shown in Figures 10.6–10.9. It can be seen from Figure 10.6 that the measurement output converges to zero and the control input converges to a periodic function. In fact, the control input converges to h(w), as shown in Figure 10.7.
[Figure 10.6: time responses of the output y and the control input u]
[Figure 10.7: the control input u and the matched disturbance h]
[Figure 10.8: h and its estimate]
Figure 10.9 The exosystem states and the internal model states
As for the internal model and state estimation, it is clear from Figure 10.8 that the
estimated equivalent input disturbance converges to h(w), and η converges to w.
Example 10.4. In this example, we briefly show that an output regulation problem can also be converted to the form in (10.91). Consider
\begin{align*}
\dot{x}_1 &= x_2 + (e^y - 1) + u \\
\dot{x}_2 &= (e^y - 1) + 2w_1 + u \tag{10.106} \\
\dot{w} &= Aw \\
y &= x_1 - w_1,
\end{align*}
where \(y \in \mathbb{R}\) is the measurement output and \(w_1 = [1\;\;0]w\). In this example, the measured output contains the unknown disturbance, unlike Example 10.3. The control objective remains the same: to design an output feedback control law to ensure the overall stability of the system and the convergence to zero of the measured output. The key step in the control design is to show that the system shown in (10.106) can be converted to the form shown in (10.91).
Let
\[ \pi_z = \frac{1}{1 + \omega^2}[1\;\; -\omega]\,w, \]
where
\[ h(w) = \frac{2 + \omega^2}{1 + \omega^2}[-1\;\;\omega]\,w - (e^{w_1} - 1). \]
It can be seen that we have transformed the system to the form shown in (10.91) with \(\psi(y, v) = e^{w_1}(e^y - 1)\).
To make \(H = [1\;\;0]\), we introduce a state transform for the disturbance model as
\[ \zeta = \frac{2 + \omega^2}{1 + \omega^2}\begin{bmatrix} -1 & \omega \\ -\omega & -1 \end{bmatrix} w. \]
Then
\begin{align*}
\dot{y} &= z + y + e^{\frac{1}{2 + \omega^2}[-1\; -\omega]\zeta}(e^y - 1) + (u - h(\zeta)) \\
\dot{z} &= -z - y + \frac{1}{2 + \omega^2}[-1\; -\omega]\zeta \tag{10.107} \\
\dot{\zeta} &= A\zeta,
\end{align*}
where
\[ h(\zeta) = \zeta_1 - \left(e^{\frac{1}{2 + \omega^2}[-1\; -\omega]\zeta} - 1\right). \]
Note that \((e^y - 1)/y\) is a continuous function, and we can take \(\bar{\psi}(y) = d_0\, y \left(\dfrac{e^y - 1}{y}\right)^2\), where \(d_0\) is a positive real constant.
[Figures: time responses of the output y, the control input u, u and h, and h and its estimate]
Figure 10.13 The exosystem states and the internal model states
Chapter 11
Control applications
In this chapter, we will address a few issues about control applications. Several
methods of disturbance rejection are presented in Chapter 10, including rejection
of general periodic disturbances. A potential application is the estimation and
rejection of undesirable harmonics in power systems. Harmonics, often referring to
high-order harmonics in power systems, are caused by nonlinearities in power systems,
and successful rejection depends on accurate estimation of the amplitudes and phases
of the harmonics. We will show an iterative estimation method based on a new observer
design method.
There are tremendous nonlinearities in biological systems, and there have been
some significant applications of nonlinear system analysis and control methods in
systems biology. We will show a case in which nonlinear observer and control designs
are applied to circadian rhythms. A Lipschitz observer is used to estimate unknown
states, and backstepping control design is then applied to restore circadian rhythms.
Most control systems are implemented in computers or other digital devices,
which are discrete-time in nature. Control implementation using digital devices
inevitably ends with sampled-data control. For linear systems, the sampled systems are
still linear, and the stability of the sampled-data system can be resolved by standard
stability analysis tools for linear systems in discrete time. However, when a
nonlinear system is sampled, the system description may not have a closed form, and
the structure cannot be preserved. Stability cannot be assumed for a sampled-data
implementation of a nonlinear control strategy. In the last section, we will show that
for certain nonlinear control schemes stability can be preserved by fast sampling.
Remark 11.1. The disturbance-free system of (11.1) is in the output feedback form
discussed in the previous chapters. With the disturbance, it is similar to the system
(10.1), with the only difference that w is a general periodic disturbance. It is also
different from the system (10.91), in which the disturbance is in the matched form.
Remark 11.2. The continuity requirement specified for the general periodic distur-
bance w in (11.1) is for the existence of a continuous input equivalent disturbance
and a continuous invariant manifold in the state space. For the case of ρ < ι, we may
allow the disturbance to have finitely many discontinuity points within each period,
provided that at each discontinuity point the left and right derivatives exist.
Remark 11.3. The minimum phase assumption is needed for the convenience of
presentation of the equivalent input disturbance and the control design based on
backstepping. It is not essential for control design, disturbance rejection or disturbance
estimation. We could allow the system to be non-minimum phase, provided that there
exists a control design for the disturbance-free system which renders the closed-loop
system exponentially stable.
The zero dynamics of (11.1) are linear. To obtain the equivalent input disturbance,
we need a result on the steady-state response of stable linear systems. Consider
\[ \dot{x} = Ax + bw. \tag{11.2} \]
Therefore, we have
\[ x(NT + t) = e^{A(NT + t)}x(0) + e^{At}\sum_{i=0}^{N-1} e^{A(N - i)T} \int_0^T e^{-A\tau} b\, w(\tau)\,d\tau + \int_0^t e^{A(t - \tau)} b\, w(\tau)\,d\tau. \tag{11.4} \]
The steady-state response in (11.3) is obtained by taking the limit of (11.4) as
\(t \to \infty\). □
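The steady-state construction can be checked numerically. The sketch below (an assumed scalar example, not the book's code) computes the periodic initial condition from the scalar version of the formula and verifies that propagating it over one period returns to the same point:

```python
import math

# Scalar stable system x' = -a x + w(t) driven by a T-periodic square
# wave.  The periodic (steady-state) solution satisfies
#   x_s(0) = (1 - e^{-aT})^{-1} * integral_0^T e^{-a(T - tau)} w(tau) dtau,
# the scalar instance of the matrix formula above.  a, T, d are assumed.

a, T, d = 2.0, 1.0, 1.0

def w(t):
    return d if (t % T) < T / 2 else -d

def x_steady_0(n=200000):
    dt = T / n
    integral = sum(math.exp(-a * (T - (i + 0.5) * dt)) * w((i + 0.5) * dt) * dt
                   for i in range(n))
    return integral / (1.0 - math.exp(-a * T))

def propagate(x0, n=200000):
    dt, x = T / n, x0
    for i in range(n):
        x += dt * (-a * x + w((i + 0.5) * dt))
    return x

x0 = x_steady_0()
print(abs(propagate(x0) - x0))  # ~0: the solution is T-periodic
```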
where
\[ B = \begin{bmatrix} -b_{\rho+1}/b_\rho & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ -b_{n-1}/b_\rho & 0 & \cdots & 1 \\ -b_n/b_\rho & 0 & \cdots & 0 \end{bmatrix}, \qquad \bar{b} = \begin{bmatrix} b_{\rho+1}/b_\rho \\ \vdots \\ b_n/b_\rho \end{bmatrix}. \]
\[ \dot{z} = Bz + \phi_z(y) + d_z w, \]
where \(\phi_z(y)\) is obtained from the state transformation, and
\[ d_z = \begin{bmatrix} d_{\rho+1} \\ \vdots \\ d_n \end{bmatrix} - \sum_{i=1}^{\rho} B^{\rho - i}\bar{b}\, d_i. \]
The periodic trajectory and the equivalent input disturbance can be found using
the system in the coordinates \((\zeta_1, \ldots, \zeta_\rho, z)\). Since the system output y does not
contain the periodic disturbance, we have the invariant manifold with \(\pi_1 = 0\). From
Lemma 11.1, which gives the steady-state response of linear systems to periodic
inputs, we have, for \(0 \le t < T\),
\[ \pi_z(t) = \int_0^t e^{B(t - \tau)} d_z w(\tau)\,d\tau + e^{Bt}(I - e^{BT})^{-1} e^{BT} W_T, \]
with \(W_T = \int_0^T e^{-B\tau} d_z w(\tau)\,d\tau\). From the first equation of (11.6), we have, for
\(i = 1, \ldots, \rho - 1\),
\[ \pi_{i+1}(t) = \frac{d\pi_i(t)}{dt} - d_i w. \]
Based on the state transformation introduced earlier, we can use its inverse
transformation to obtain
\[ \begin{bmatrix} \pi_{\rho+1} \\ \vdots \\ \pi_n \end{bmatrix} = \pi_z + \sum_{i=1}^{\rho} B^{\rho - i}\bar{b}\,\pi_i, \]
and we write
\[ \pi = [\pi_1, \ldots, \pi_n]^T. \tag{11.6} \]
Let x = ζ − π denote the difference between the state variable ζ and the periodic
trajectory.
The periodic trajectory, π, plays a similar role as the invariant manifold in the
set-up for the rejection of disturbances generated from linear exosystems. For this,
we have the following result.
Theorem 11.2. For the general periodic disturbance w in (11.1), the periodic trajectory given in (11.6) and the equivalent input disturbance given in (11.7) are well defined and continuous, and the difference between the state variable of (11.1) and the periodic trajectory, denoted by \(x = \zeta - \pi\), satisfies the following equation:
\[ \dot{x} = A_c x + \phi(y) + b(u - \mu), \qquad y = Cx. \tag{11.8} \]
The control design and disturbance rejection will be based on (11.8) instead
of (11.1).
Remark 11.4. The control design and disturbance rejection only use the output y,
with no reference to any other state of the system. Therefore, there is no difference
whether we refer to (11.1) or (11.8) for the system, because they have the same output.
The format in (11.8) shows that there exist an invariant manifold and an equivalent
input disturbance. However, the proposed control design does not depend on any
information of μ, other than its period, which is the same as the period of w. In other
words, control design only relies on the form shown in (11.1). The form shown in
(11.8) is useful for the analysis of the performance of the proposed control design,
including the stability. In this section, we start our presentation from (11.1) rather
than (11.8) in order to clearly indicate the class of the systems to which the proposed
control design can be applied, without the restriction to the rejection of matched
disturbances.
where \(\omega_k = \frac{2\pi k}{T}\), and \(a_k\) and \(\phi_k\) are the amplitude and phase angle of the mode for
frequency \(\omega_k\). The dynamics for a single frequency mode can be described as
\[ \dot{w}_k = S_k w_k, \tag{11.14} \]
where
\[ S_k = \begin{bmatrix} 0 & \omega_k \\ -\omega_k & 0 \end{bmatrix}. \]
For a single frequency mode with the frequency ωk as the input denoted by
μk = ak sin (ωk t + φk ), its output in y, denoted by yk , is a sinusoidal function with
the same frequency, if we only consider the steady-state response. In fact, based on
the frequency response, we have the following result.
Lemma 11.3. Consider a stable linear system (A, b, C) with no zero at \(j\omega_k\) for any
integer k. For the output \(y_k\) of a single frequency mode with the frequency \(\omega_k\), there
exists an initial state \(w_k(0)\) such that
\[ y_k = g^T w_k, \tag{11.15} \]
where \(g = [1\;\;0]^T\) and \(w_k\) is the state of (11.14). Furthermore, the input \(\mu_k\) for this
frequency mode can be expressed by
\[ \mu_k = g_k^T w_k, \tag{11.16} \]
where
\[ g_k = \frac{1}{m_k}\begin{bmatrix} \cos\theta_k & \sin\theta_k \\ -\sin\theta_k & \cos\theta_k \end{bmatrix} g := Q_k g \tag{11.17} \]
with \(\theta_k = \angle Q(j\omega_k)\) and \(m_k = |Q(j\omega_k)|\).
For a stable single-input linear system, if the input is a T-periodic signal that is
orthogonal to a frequency mode \(\omega_k\), the steady state, as shown earlier, is also T-periodic.
Furthermore, we have the following results.
Lemma 11.4. If the input to a stable single-input linear system (A, b) is a T-periodic
signal that is orthogonal to a frequency mode \(\omega_k\), for any positive integer k, then the steady
state is orthogonal to the frequency mode and the state variable is asymptotically
orthogonal to the frequency mode. Furthermore, if the linear system (A, b, C) has
no zero at \(j\omega_k\), the steady-state output is orthogonal to the frequency mode \(\omega_k\) if and
only if the input to the system is orthogonal to the frequency mode.
Proof. Consider
\[ \dot{x} = Ax + b\mu. \]
Since μ is T-periodic, the steady-state solution of the above state equation, denoted
by \(x_s\), is also T-periodic. Let
\[ J_k = \int_0^T x_s(\tau)\sin\omega_k\tau\,d\tau. \]
Integrating by parts twice, and using the orthogonality of the input, gives
\[ J_k = -\omega_k^{-2} A^2 J_k. \]
Hence, we have
\[ (\omega_k^2 I + A^2)J_k = 0. \]
Since A is a Hurwitz matrix, which cannot have \(\pm j\omega_k\) as eigenvalues, we conclude
\(J_k = 0\). Similarly we can establish
\[ \int_0^T x_s(\tau)\cos\omega_k\tau\,d\tau = 0. \]
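This orthogonality property can be illustrated numerically. In the sketch below (an assumed scalar example), the steady-state response to the single mode \(2\omega_1\) is integrated against the mode \(\omega_1\) over one period:

```python
import math

# Scalar stable system x' = -a x + mu(t) with the single frequency mode
# mu = sin(w2 t), w2 = 4*pi/T.  Its steady state is
#   x_s(t) = (a sin(w2 t) - w2 cos(w2 t)) / (a^2 + w2^2),
# which should be orthogonal over one period to the mode w1 = 2*pi/T.
# The values of a and T are assumed for illustration.

a, T = 1.5, 1.0
w1, w2 = 2 * math.pi / T, 4 * math.pi / T

def xs(t):
    return (a * math.sin(w2 * t) - w2 * math.cos(w2 * t)) / (a * a + w2 * w2)

n = 100000
dt = T / n
J_sin = sum(xs((i + 0.5) * dt) * math.sin(w1 * (i + 0.5) * dt) * dt for i in range(n))
J_cos = sum(xs((i + 0.5) * dt) * math.cos(w1 * (i + 0.5) * dt) * dt for i in range(n))
print(J_sin, J_cos)  # both ~0
```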
The output can be decomposed as
\[ y = y_\perp + y_w, \]
where \(y_w\) denotes the component of the frequency mode and \(y_\perp\) the remainder. An observer for the frequency mode can be designed as
\[ \dot{\hat{w}}_k = S_k\hat{w}_k + l_k(y - g^T\hat{w}_k), \tag{11.18} \]
where \(l_k\) is chosen such that \(S_k - l_k g^T\) is Hurwitz. For this observer, we have a useful
result stated in the following lemma.
Lemma 11.5. For any positive integer k, with the observer as designed in (11.18),
\((\mu(\tau) - g_k^T\hat{w}_k(\tau))\) is asymptotically orthogonal to the frequency mode \(\omega_k\), i.e.,
\[ \lim_{t\to\infty}\int_t^{t+T}\left(\mu(\tau) - g_k^T\hat{w}_k(\tau)\right)\sin\omega_k\tau\,d\tau = 0, \tag{11.19} \]
\[ \lim_{t\to\infty}\int_t^{t+T}\left(\mu(\tau) - g_k^T\hat{w}_k(\tau)\right)\cos\omega_k\tau\,d\tau = 0. \tag{11.20} \]
Proof. Consider the steady-state output y of the input μ. From Lemma 11.3, there exists
an initial state \(w_k(0)\) such that
\[ \int_0^T\left(y(\tau) - g^T w_k(\tau)\right)\sin\omega_k\tau\,d\tau = 0, \qquad \int_0^T\left(y(\tau) - g^T w_k(\tau)\right)\cos\omega_k\tau\,d\tau = 0, \]
which implies that \(\mu - g_k^T w_k\) is orthogonal to the frequency mode \(\omega_k\), again based on
Lemma 11.3.
Let \(\tilde{w}_k = w_k - \hat{w}_k\). The dynamics of \(\tilde{w}_k\) can be obtained from (11.14) and
(11.18) as
\[ \dot{\tilde{w}}_k = \bar{S}_k\tilde{w}_k - l_k(y - g^T w_k), \tag{11.21} \]
where \(\bar{S}_k = S_k - l_k g^T\). Note that \(\bar{S}_k\) is a Hurwitz matrix and \((y - g^T w_k)\) is a
T-periodic signal. There exists a periodic steady-state solution \(\pi_k\) of (11.21) such that
\[ \dot{\pi}_k = \bar{S}_k\pi_k - l_k(y - g^T w_k). \]
From Lemma 11.4, \(\pi_k\) is orthogonal to the frequency mode \(\omega_k\) because \((y - g^T w_k)\)
is. Let \(e_k = \tilde{w}_k - \pi_k\). We have
\[ \dot{e}_k = \bar{S}_k e_k, \]
which implies that \(e_k\) exponentially converges to zero. The observer state \(\hat{w}_k\) can be
expressed as
\[ \hat{w}_k = w_k - \pi_k - e_k. \]
Therefore, (11.19) and (11.20) can be established, and this completes the proof. □
The result in Lemma 11.5 shows how an individual frequency mode can be
removed with the observer designed as if the output did not contain
other frequency modes. From the proof of Lemma 11.5, it can be seen that there
is an asymptotic error, πk , between the observer state and the actual state variables
associated with the frequency mode ωk . Although πk is orthogonal to the frequency
mode ωk , it does in general contain components generated from all the other frequency
modes. Because of this, a set of observers of the same form as shown in (11.18) would
not be able to extract multiple frequency modes simultaneously. To remove multiple
frequency modes, it is essential to find an estimate which is asymptotically orthogonal
to the multiple frequency modes. For this, the interactions between the observers must
be dealt with.
Suppose that we need to remove a number of frequency modes \(\omega_{k,i}\) for all the \(k_i\)
in a finite set of positive integers \(K = \{k_i\}\), for \(i = 1, \ldots, m\). To estimate the frequency
modes for \(\omega_{k,i}\), \(i = 1, \ldots, m\), we propose a sequence of observers,
\[ \dot{\hat{w}}_{k,1} = S_{k,1}\hat{w}_{k,1} + l_{k,1}\left(y - g^T\hat{w}_{k,1}\right), \tag{11.22} \]
and, for \(i = 2, \ldots, m\),
\[ \dot{\eta}_{k,i-1} = A\eta_{k,i-1} + b\, g_{k,i-1}^T\hat{w}_{k,i-1}, \tag{11.23} \]
\[ \dot{\hat{w}}_{k,i} = S_{k,i}\hat{w}_{k,i} + l_{k,i}\left(y - \sum_{j=1}^{i-1} C\eta_{k,j} - g^T\hat{w}_{k,i}\right), \tag{11.24} \]
where \(l_{k,i}\), for \(i = 1, \ldots, m\), are designed such that \(\bar{S}_{k,i} := S_{k,i} - l_{k,i}g^T\) are Hurwitz,
and
\[ g_{k,i} = \frac{1}{m_{k,i}}\begin{bmatrix}\cos\phi_{k,i} & \sin\phi_{k,i} \\ -\sin\phi_{k,i} & \cos\phi_{k,i}\end{bmatrix} g := Q_{k,i}g \]
with \(m_{k,i} = |C(j\omega_{k,i}I - A)^{-1}b|\) and \(\phi_{k,i} = \angle C(j\omega_{k,i}I - A)^{-1}b\).
The estimate of the input disturbance that contains the required frequency
modes for asymptotic rejection is given by
\[ \hat{\mu}_m = \sum_{i=1}^{m} g_{k,i}^T\hat{w}_{k,i}. \tag{11.25} \]
The estimate μ̂m contains all the frequency modes ωk,i , for i = 1, . . . , m. The useful
property of the estimate is given in the following theorem.
Theorem 11.6. For the estimate μ̂m given in (11.25), μ − μ̂m is asymptotically
orthogonal to the frequency modes ωk,i for i = 1, . . . , m.
Proof. In the proof, we will show how to establish the asymptotic orthogonality in
detail by induction. We introduce the notation \(\tilde{w}_{k,i} = w_{k,i} - \hat{w}_{k,i}\), use \(\pi_{k,i}\) to denote
the steady-state solutions of \(\tilde{w}_{k,i}\), and set \(e_{k,i} = \tilde{w}_{k,i} - \pi_{k,i}\), for \(i = 1, \ldots, m\).
Lemma 11.5 shows that the results hold for m = 1. Let
\[ \mu_1 = g_{k,1}^T(w_{k,1} - \pi_{k,1}), \]
and \(\mu - \mu_1\) is orthogonal to the frequency mode \(\omega_{k,1}\).
We now establish the result for m = 2. From Lemma 11.3, there exists an initial
state \(w_{k,2}(0)\) for the dynamic system (11.14) for the input
\(\mu - g_{k,1}^T(w_{k,1} - \pi_{k,1}) - g_{k,2}^T w_{k,2}\) to the system (A, b, C). Therefore, from Lemma 11.4,
\(\mu - g_{k,1}^T(w_{k,1} - \pi_{k,1}) - g_{k,2}^T w_{k,2}\) is orthogonal to \(\omega_{k,2}\).
Since \(q_{k,1} - \eta_{k,1}\) exponentially converges to zero, so does \(e_{k,2}\). From (11.28) and
Lemma 11.4, \(\pi_{k,2}\) is orthogonal to the frequency modes \(\omega_{k,j}\) for j = 1, 2. Therefore,
by letting
\[ \mu_2 = g_{k,1}^T(w_{k,1} - \pi_{k,1}) + g_{k,2}^T(w_{k,2} - \pi_{k,2}), \]
we have
\[ \mu_2 - \hat{\mu}_2 = \sum_{j=1}^{2} g_{k,j}^T e_{k,j}. \]
For the inductive step, let
\[ \mu_i = \sum_{j=1}^{i} g_{k,j}^T(w_{k,j} - \pi_{k,j}), \qquad \mu_{i+1} = \mu_i + g_{k,i+1}^T(w_{k,i+1} - \pi_{k,i+1}). \]
Then
\[ \dot{e}_{k,i+1} = \bar{S}_{k,i+1}e_{k,i+1} - l_{k,i+1}C\sum_{j=1}^{i}(q_{k,j} - \eta_{k,j}). \]
Since \(q_{k,j} - \eta_{k,j}\), for \(j = 1, \ldots, i\), exponentially converge to zero, so does \(e_{k,i+1}\). With
\[ \mu_{i+1} - \hat{\mu}_{i+1} = \sum_{j=1}^{i+1} g_{k,j}^T\left(w_{k,j} - \pi_{k,j} - \hat{w}_{k,j}\right) = \sum_{j=1}^{i+1} g_{k,j}^T e_{k,j},
\]
If the disturbance does not exist in (11.31), the system (11.31) is in the form of the
linear observer error dynamics with output injection shown in Chapter 8. In that case,
we can design a state observer whose state is denoted by p, and we let q denote
the steady-state solution. Such a solution exists, and an explicit
solution is given earlier. Each element of q is a periodic function, as μ is periodic.
We have an important property for p and q stated in the following lemma.
\[ x = p - q + \epsilon, \tag{11.34} \]
where \(\epsilon\) exponentially converges to zero.
and, for \(i = 2, \ldots, m\),
\[ \dot{\eta}_{k,i-1} = A\eta_{k,i-1} + b\, g_{k,i-1}^T\hat{w}_{k,i-1}, \tag{11.37} \]
\[ \dot{\hat{w}}_{k,i} = S_{k,i}\hat{w}_{k,i} + l_{k,i}\left(Cp - y - \sum_{j=1}^{i-1} C\eta_{k,j} - g^T\hat{w}_{k,i}\right), \tag{11.38} \]
where \(l_{k,i}\), for \(i = 1, \ldots, m\), are designed such that \(\bar{S}_{k,i} := S_{k,i} - l_{k,i}g^T\) are Hurwitz,
and
\[ g_{k,i} = \frac{1}{m_{k,i}}\begin{bmatrix}\cos\phi_{k,i} & \sin\phi_{k,i} \\ -\sin\phi_{k,i} & \cos\phi_{k,i}\end{bmatrix} g := Q_{k,i}g. \tag{11.39} \]
The estimate of the input disturbance that contains the required frequency
modes for asymptotic rejection is given by
\[ \hat{\mu}_m = \sum_{i=1}^{m} g_{k,i}^T\hat{w}_{k,i}. \tag{11.40} \]
The estimate μ̂m contains all the frequency modes ωk,i , for i = 1, . . . , m. The useful
property of the estimate is given in the following theorem.
Theorem 11.8. The estimate \(\hat{\mu}_m\) given in (11.40) is bounded and satisfies the
following:
\[ \lim_{t\to\infty}\int_t^{t+T}[\mu(\tau) - \hat{\mu}_m(\tau)]\sin\omega_{k,i}\tau\,d\tau = 0, \tag{11.41} \]
\[ \lim_{t\to\infty}\int_t^{t+T}[\mu(\tau) - \hat{\mu}_m(\tau)]\cos\omega_{k,i}\tau\,d\tau = 0, \tag{11.42} \]
for \(i = 1, \ldots, m\).
Proof. From Theorem 11.6, the results in (11.41) and (11.42) hold if the input
Cp − y in (11.36), (11.37) and (11.38) equals Cq. Since Cp − y − Cq converges
exponentially to 0, and (11.36)–(11.38) are stable linear systems, the errors in
the estimation of μ̂ caused by replacing Cq by Cp − y in (11.36)–(11.38) are also
exponentially convergent to zero. Therefore, (11.41) and (11.42) can be established
from Theorem 11.6. 2
The control input takes the form
\[ u = v + \hat{\mu}_m, \]
where v is designed based on the backstepping design shown in Chapter 9. Due
to the involvement of the disturbance in the system, we need certain details of the
stability analysis with explicit expressions of Lyapunov functions. For the convenience
of stability analysis, we adopt an approach using backstepping with filtered
transformation, shown in Section 9.4. The control input v can be designed as a function
\[ v = v(y, \xi), \]
where ξ is the state variable of the input filter for the filtered transformation. The final
control input is designed as
\[ u = v(y, \xi) + \hat{\mu}_m. \tag{11.43} \]
Theorem 11.9. The closed-loop system of (11.1) under the control input (11.43)
ensures the boundedness of all the state variables, and the disturbance modes of the
specified frequencies \(\omega_{k,i}\), for \(i = 1, \ldots, m\), are asymptotically rejected from the
system, in the sense that all the variables of the closed-loop system are bounded and
the system is asymptotically driven by the frequency modes other than the specified
ones.
Proof. For the standard backstepping design for the nonlinear systems, we establish
the stability by considering the Lyapunov function
\[ V = \beta z^T P z + \frac{1}{2}y^2 + \frac{1}{2}\sum_{i=1}^{\rho-1}\tilde{\xi}_i^2, \tag{11.44} \]
where β is a constant, \(\tilde{\xi}_i = \xi_i - \hat{\xi}_i\), and P is a positive definite matrix satisfying
\(D^T P + PD = -I\). The standard backstepping design ensures that there exists a
positive real constant \(\gamma_1\) such that the inequality in (11.45) holds.
Remark 11.5. In the control design, we have used the control input obtained with the
backstepping design for the proof of Theorem 11.9. In the case where there exists a known
control design for the disturbance-free system which ensures exponential stability of
the disturbance-free case, we can pursue the proof of Theorem 11.9 in a similar way
to obtain the result shown in (11.45). This is particularly useful for
non-minimum phase systems, to which the standard backstepping presented earlier
does not apply.
11.1.5 Example
In this section, we use a simple example to demonstrate estimation of harmonics using
the method proposed earlier in this section.
We consider a voltage signal distorted by diodes, as shown in Figure 11.1. We are
going to use iterative observers to estimate the harmonics in this signal. To simplify
the presentation, we assume that the signal is directly measurable; in this case,
there are no dynamics between the measurement and the point of estimation.
Hence, we can simplify the algorithms presented in Subsection 11.1.2 ((11.22),
(11.23) and (11.24)): there is no need to use the observer model in (11.23). The
simplified algorithm for direct estimation of harmonics is presented below.

[Figure 11.1: the measured voltage signal]
To estimate the frequency modes for \(\omega_{k,i}\), \(i = 1, \ldots, m\), the iterative observers
are given by
\[ \dot{\hat{w}}_{k,1} = S_{k,1}\hat{w}_{k,1} + l_{k,1}\left(y - g^T\hat{w}_{k,1}\right), \tag{11.46} \]
and, for \(i = 2, \ldots, m\),
\[ \dot{\hat{w}}_{k,i} = S_{k,i}\hat{w}_{k,i} + l_{k,i}\left(y - \sum_{j=1}^{i} g^T\hat{w}_{k,j}\right), \tag{11.47} \]
where \(l_{k,i}\), for \(i = 1, \ldots, m\), are designed such that \(\bar{S}_{k,i} := S_{k,i} - l_{k,i}g^T\) are Hurwitz.
For mains electricity, the base frequency is 50 Hz. We will show the estimates
of harmonics from the base frequency; that is, we set \(\omega_{k,1} = 100\pi\), \(\omega_{k,2} = 200\pi\),
\(\omega_{k,3} = 300\pi\), etc. For these frequency modes, we have
\[ S_{k,i} = \begin{bmatrix} 0 & i100\pi \\ -i100\pi & 0 \end{bmatrix}. \]
The observer gain \(l_{k,i}\) can be easily designed to place the closed-loop poles at any
specified positions. For the simulation study, we place the poles at \(\{-200, -200\}\). For
such pole positions, we have
\[ l_{k,i} = \begin{bmatrix} 400 \\[1mm] \dfrac{40000}{i100\pi} - i100\pi \end{bmatrix}. \]
The harmonics of the second and third orders are plotted with the original signal and
the estimated base frequency component in Figure 11.2. The approximations of the
cumulative harmonics to the original signal are shown in Figure 11.3.
In this example, we have shown the estimation of harmonics from direct
measurements. The advantage of this method is that the estimation is carried out
on-line and can be implemented in real time. The estimation also provides the phase
information, which is very important for harmonic rejection. We did not show the
rejection of harmonics in this example; however, it is not difficult to see that rejection
can be simulated easily with the proposed method.
[Figure 11.2: the original signal and the estimated harmonics of the first three orders]
[Figure 11.3: the approximations of the cumulative harmonics to the original signal]
[Figure: the state variables x1, x2 and x3 of the Neurospora model]
The model (11.48) is not in the right format for control and observer design. We
introduce a state transformation
\[ z = Tx \tag{11.49} \]
with
\[ T = \begin{bmatrix} 0 & 0 & 1 \\ 0 & k_1 & 0 \\ k_1 k_s & 0 & 0 \end{bmatrix}, \]
and the transformed system is described by
\begin{align*}
\dot{z}_1 &= z_2 - k_2 z_1 \\
\dot{z}_2 &= z_3 - k_1 z_2 + k_1 k_2 z_1 - v_d\frac{k_1 z_2}{k_1 K_d + z_2} \tag{11.50} \\
\dot{z}_3 &= v_s\frac{k_1 k_s K_i^n}{K_i^n + z_1^n} - v_m\frac{k_1 k_s z_3}{K_M k_1 k_s + z_3}.
\end{align*}
Now the transformed model (11.50) is in the lower-triangular format.
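A minimal simulation sketch of (11.50) is given below. The parameter values are the commonly used Goldbeter set for the Neurospora model and are assumptions here, as are the initial state and the Euler scheme:

```python
import math

# Euler-integration sketch of the transformed model (11.50).
# Parameter values: assumed (standard Goldbeter Neurospora set).
vs, vm, Km = 1.6, 0.505, 0.5
ks, vd, Kd = 0.5, 1.4, 0.13
k1, k2, Ki, n = 0.5, 0.6, 1.0, 4

def f(z):
    z1, z2, z3 = z
    dz1 = z2 - k2 * z1
    dz2 = z3 - k1 * z2 + k1 * k2 * z1 - vd * k1 * z2 / (k1 * Kd + z2)
    dz3 = (vs * k1 * ks * Ki ** n / (Ki ** n + z1 ** n)
           - vm * k1 * ks * z3 / (Km * k1 * ks + z3))
    return dz1, dz2, dz3

def simulate(z0=(0.5, 0.5, 0.2), t_end=200.0, dt=0.002):
    z, traj = list(z0), []
    for _ in range(int(t_end / dt)):
        d = f(z)
        z = [z[i] + dt * d[i] for i in range(3)]
        traj.append(tuple(z))
    return traj

traj = simulate()
z1_tail = [p[0] for p in traj[-int(100 / 0.002):]]   # last 100 h
print(min(z1_tail), max(z1_tail))  # sustained circadian oscillation of z1
```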
where \(\lambda_i\) are positive real constants, and \(P \in \bigcap_{i=1}^{n}\mathcal{P}_n(i, \lambda_i)\). The definition of
\(\mathcal{P}_n(i, \lambda_i)\) is given by
\[ \mathcal{P}_n(i, \lambda_i) = \{P : |p_{ji}| < \lambda_i, \text{ for } j = 1, 2, \ldots, n\}. \]
For the system (11.50), we set
\[ C_1 = [1\;\;0\;\;0], \]
and therefore the state variables \(z_2\) and \(z_3\) are unknown. Comparing with the structure in
(8.29), we have
\[ \phi(z, u) = \begin{bmatrix} 0 \\ \phi_2 \\ \phi_{3a} + \phi_{3b} \end{bmatrix}, \]
where clearly \(\phi_1 = 0\), and
\begin{align*}
\phi_2 &= -v_d\frac{k_1 z_2}{k_1 K_d + z_2}, \\
\phi_{3a} &= v_s\frac{k_1 k_s K_i^n}{K_i^n + z_1^n}, \\
\phi_{3b} &= -v_m\frac{k_1 k_s z_3}{K_M k_1 k_s + z_3}.
\end{align*}
From
\[ \left| -v_m\frac{k_1 k_s z_3}{K_M k_1 k_s + z_3} + v_m\frac{k_1 k_s \hat{z}_3}{K_M k_1 k_s + \hat{z}_3} \right| \le \frac{v_m}{K_M}\left| z_3 - \hat{z}_3 \right|, \]
and a similar bound for \(\phi_2\), we obtain the Lipschitz constants as \(\gamma_2 = \frac{v_d}{K_d} = 10.7962\)
for \(\phi_2(z_2)\) and \(\gamma_{3b} = \frac{v_m}{K_M} = 1.01\) for \(\phi_{3b}(z_3)\). For the nonlinear function \(\phi_{3a}(z_1)\), its
Lipschitz constant can be computed by using the mean value theorem,
\[ f'(\zeta) = \frac{f(x) - f(\hat{x})}{x - \hat{x}}, \]
where \(\zeta \in [\min(z_1, \hat{z}_1), \max(z_1, \hat{z}_1)]\). We find the maximum value of \(|f'(\zeta)|\) by setting
its derivative to zero, and obtain the result as 0.325. Since the Lipschitz constant equals
the maximum value of \(|f'(\zeta)|\), it is given by \(\gamma_{3a} = 0.325\). The Lipschitz constant \(\gamma_3\)
is then given by \(\gamma_3 = \gamma_{3a} + \gamma_{3b} = 1.326\).
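The mean-value-theorem computation can be reproduced by a simple grid search over the derivative of \(\phi_{3a}\). The parameter values below are assumed for illustration, so the resulting constant need not match the 0.325 quoted in the text:

```python
# Grid-search sketch for the Lipschitz constant of phi_3a via
# gamma_3a = max |phi_3a'(z1)|.  Parameter values: assumed.
vs, k1, ks, Ki, n = 1.6, 0.5, 0.5, 1.0, 4

def dphi3a(z1):
    # derivative of vs*k1*ks*Ki^n / (Ki^n + z1^n) with respect to z1
    return -vs * k1 * ks * n * z1 ** (n - 1) * Ki ** n / (Ki ** n + z1 ** n) ** 2

gamma_3a = max(abs(dphi3a(0.001 * j)) for j in range(1, 5001))
print(gamma_3a)
```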
[Figure: the transformed state z2 and its estimate]
[Figure: the transformed state z3 and its estimate]
where u is the control input. The system model after the state transformation (11.49)
is then obtained as
\begin{align*}
\dot{z}_1 &= z_2 - k_2 z_1 \\
\dot{z}_2 &= z_3 - k_1 z_2 + k_1 k_2 z_1 - v_d\frac{k_1 z_2}{k_1 K_d + z_2} \tag{11.53} \\
\dot{z}_3 &= v_s\frac{k_1 k_s K_i^n}{K_i^n + z_1^n} - v_m\frac{k_1 k_s z_3}{K_M k_1 k_s + z_3} + \frac{k_1 k_s K_i^n}{K_i^n + z_1^n}\,u.
\end{align*}
This model (11.53) is then used for the phase control design. We use q to denote the state
variable for the target circadian model
\begin{align*}
\dot{q}_1 &= q_2 - k_2 q_1 \\
\dot{q}_2 &= q_3 - k_1 q_2 + k_1 k_2 q_1 - v_d\frac{k_1 q_2}{k_1 K_d + q_2} \tag{11.54} \\
\dot{q}_3 &= v_s\frac{k_1 k_s K_i^n}{K_i^n + q_1^n} - v_m\frac{k_1 k_s q_3}{K_M k_1 k_s + q_3},
\end{align*}
which the variable z is controlled to follow.
The transformed dynamic models (11.53) and (11.54) are in the triangular form,
and therefore the iterative backstepping method shown in Section 9.2 can be applied.
There is a slight difference in the control objective here from the convergence to zero of the state variables.
The stability analysis of the proposed control design can be established in the
same way as the proof shown in Section 9.2 for the iterative backstepping control design.
Indeed, consider a Lyapunov function candidate
\[ V = \frac{1}{2}\left(w_1^2 + w_2^2 + w_3^2\right). \tag{11.63} \]
Using the dynamics of \(w_i\), \(i = 1, 2, 3\), in (11.59), (11.60) and (11.62), we can
conclude that the closed-loop system under the control of u in (11.61) is exponentially
stable with respect to the variables \(w_i\), \(i = 1, 2, 3\), which suggests that the controlled
circadian rhythm asymptotically tracks the targeted one.
A simulation study of the proposed control design has been carried out with the
control parameters \(c_1 = c_2 = c_3 = 0.1\). The states \(z_2\) and \(z_3\) are shown in Figures 11.7
and 11.8, where the control input is applied at t = 50 h, and the control input is shown
in Figure 11.9. The plot of \(z_1\) is very similar to the plot of \(z_2\), and it is omitted. It can
be seen from the figures that there is a phase difference before the control input is
applied, and the control input resets the phase of the controlled circadian rhythm to the
targeted one.
[Figure 11.7: q2 and z2, with the control input applied at t = 50 h]
[Figure 11.8: q3 and z3, with the control input applied at t = 50 h]
[Figure 11.9: the control input u]
control system. It is well known that for linear systems we can use the emulation method,
that is, design a controller in continuous time and then implement its discrete-time
version. The alternative to emulation is direct design in discrete time. Stability
analysis for sampled-data control of linear dynamic systems can then be carried out in
the framework of linear systems in discrete time. Sampled-data control of nonlinear
systems is a much more challenging task.
For a nonlinear system, it is difficult or even impossible to obtain a nonlinear
discrete-time model after sampling. Even with a sampled-data model in discrete
time, the nonlinear model structure cannot be preserved in general. It is well known
that nonlinear control design methods do require certain structures. For example,
backstepping can be applied to lower-triangular systems, but a lower-triangular system
in continuous time will not, in general, have a discrete-time model in the triangular
structure after sampling. Hence, it is difficult to carry out direct design in discrete time
for continuous-time nonlinear systems.
A control input designed based on the continuous-time model can be sampled
and implemented in discrete time. However, stability cannot be automatically
guaranteed for the sampled-data system resulting from the emulation method. The stability
analysis is challenging because, unlike for linear systems, there is no corresponding
discrete-time method for nonlinear systems after sampling. One would expect that if the
sampling is fast enough, the stability of the sampled-data system might be guaranteed
by the stability of the continuous-time system. In fact, there is a counterexample to
this claim. Therefore, it is important to analyse the stability of the sampled-data system
in addition to the stability of the continuous-time control, for which the stability is
guaranteed in the control design.
In this section, we will show the stability analysis of sampled-data control for a
class of nonlinear systems in the output feedback form. A link between the sampling
period and initial condition is established, which suggests that for a given domain
of initial conditions, there always exists a sampling time such that if the sampling is
faster than that time, the stability of the sampled-data system is guaranteed.
Consider a nonlinear system in the output feedback form

ẋ = Ac x + bu + φ(y),
y = Cx,   (11.64)

where Ac is the n × n matrix with ones on the superdiagonal and zeros elsewhere, C = [1, 0, …, 0], and b = [0, …, 0, bρ, …, bn]ᵀ.
An input filter is introduced as

ξ̇ = Λξ + bf u,   (11.65)

and the control input is given by

u = uc(y, ξ),   (11.66)
where Λ is the (ρ − 1) × (ρ − 1) matrix with −λ1, …, −λρ−1 on the diagonal, ones on the superdiagonal and zeros elsewhere, and bf = [0, …, 0, 1]ᵀ, with λi > 0 for i = 1, …, ρ − 1.
For a given uc, the sampled-data controller based on the emulation method is given as

ud(t) = uc(y(mT), ξ(mT)),   ∀t ∈ [mT, mT + T),   (11.67)

ξ(mT) = e^{ΛT} ξ((m − 1)T) + (∫₀ᵀ e^{Λτ} dτ) bf uc(y((m − 1)T), ξ((m − 1)T)),   (11.68)

where y(mT) is obtained by sampling y(t) at each sampling instant, ξ(mT) is the
discrete-time implementation of the filter shown in (11.65), T is the fixed sampling
period, and m is the discrete-time index, starting from 0.
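The filter update (11.68) is just the zero-order-hold discretisation of (11.65), so the two matrices involved can be computed offline from one augmented matrix exponential. A minimal sketch (NumPy/SciPy; the values of Λ, bf and T below are illustrative placeholders, not from the design):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative filter matrices for rho = 3 (lambda_1 = 1, lambda_2 = 2);
# the actual values come from the control design.
Lam = np.array([[-1.0, 1.0],
                [0.0, -2.0]])
bf = np.array([[0.0],
               [1.0]])
T = 0.1  # sampling period

# ZOH discretisation: Ad = e^{Lam T}, Bd = (int_0^T e^{Lam tau} dtau) bf.
# Both blocks are read off one augmented matrix exponential.
n = Lam.shape[0]
M = np.zeros((n + 1, n + 1))
M[:n, :n] = Lam
M[:n, n:] = bf
Md = expm(M * T)
Ad, Bd = Md[:n, :n], Md[:n, n:]

def filter_step(xi, u):
    """One step of (11.68): xi(mT) from xi((m-1)T) and the held input u."""
    return Ad @ xi + Bd * u
```

Because the input is held constant over each interval, this update reproduces the continuous-time filter exactly at the sampling instants.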
Before the analysis of the sampled-data control, we briefly review the control
design in continuous-time.
As shown in Section 9.4, through the state transformations, we obtain
ζ̇ = Dζ + ψ(y),
ẏ = ζ1 + ψy(y) + bρ ξ1.   (11.69)
For the system with relative degree ρ = 1, as shown in Lemma 9.5 and its proof,
the continuous-time control uc1 can be designed as

uc1 = −(c0 + 1/4 + ‖P‖² ‖ψ(y)/y‖²) y − ψy(y),   (11.70)

where c0 is a positive real constant and P is a positive definite matrix that satisfies

DᵀP + PD = −3I.
For the control (11.70), the stability of the continuous-time system can be established
with the Lyapunov function

V = ζᵀPζ + (1/2)y²,

whose derivative satisfies

V̇ ≤ −c0 y² − ‖ζ‖².
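The decrease condition behind (11.70) can be checked numerically. The sketch below instantiates the design on a hypothetical first-order ζ-subsystem with D = −2, ψ(y) = sin y and ψy(y) = y² (all illustrative choices, not from the book), and verifies V̇ ≤ −c0y² − ‖ζ‖² at sampled states:

```python
import numpy as np

# Hypothetical example data: scalar zeta-dynamics.
D = -2.0
P = 3.0 / 4.0  # solves D*P + P*D = -3
c0 = 1.0
psi = np.sin   # psi(y), the nonlinearity in the zeta-equation

def psi_y(y):
    """Nonlinearity in the y-equation (illustrative choice)."""
    return y ** 2

def uc1(y):
    """Continuous-time control (11.70) for the relative-degree-1 case."""
    ratio = psi(y) / y if abs(y) > 1e-9 else 1.0  # sin(y)/y -> 1 as y -> 0
    return -(c0 + 0.25 + P ** 2 * ratio ** 2) * y - psi_y(y)

def Vdot(zeta, y):
    """Derivative of V = P*zeta^2 + y^2/2 along the closed loop."""
    zeta_dot = D * zeta + psi(y)
    y_dot = zeta + psi_y(y) + uc1(y)
    return 2 * P * zeta * zeta_dot + y * y_dot
```

The inequality follows from completing the squares on the cross terms 2ζPψ and yζ, which is exactly what the 1/4 and ‖P‖²‖ψ(y)/y‖² terms in (11.70) compensate.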
For the case of ρ > 1, backstepping is used to obtain the final control input uc2.
We introduce the same notations zi, for i = 1, …, ρ, as in Section 9.4 for backstepping:

z1 = y,   (11.71)
zi = ξi−1 − αi−1, for i = 2, …, ρ,   (11.72)
zρ+1 = u − αρ,   (11.73)

where αi, for i = 1, …, ρ, are stabilising functions to be designed. We also use the
positive real design parameters ci and ki for i = 1, …, ρ, and γ > 0.
The control design has been shown in Section 9.4. We list a few key steps here
for the convenience of the stability analysis. The stabilising functions are designed as

α1 = −c1z1 − k1z1 − ψy(y) − γ‖P‖²‖ψ(y)/y‖² z1,

α2 = −z1 − c2z2 − k2(∂α1/∂y)² z2 + (∂α1/∂y)ψy(y) + λ1ξ1,

αi = −zi−1 − cizi − ki(∂αi−1/∂y)² zi + (∂αi−1/∂y)ψy(y) + λi−1ξi−1
   + Σ_{j=1}^{i−2} (∂αi−1/∂ξj)(−λjξj + ξj+1),   for i = 3, …, ρ,   (11.74)

and the final control is

uc2 = −zρ−1 − cρzρ − kρ(∂αρ−1/∂y)² zρ + (∂αρ−1/∂y)ψy(y)
    + Σ_{j=1}^{ρ−2} (∂αρ−1/∂ξj)(−λjξj + ξj+1) + λρ−1ξρ−1.   (11.75)
For the control input (11.75) in continuous time, the stability result has been
shown in Theorem 9.6. In the stability analysis, the Lyapunov function candidate is
chosen as

V = Σ_{i=1}^{ρ} zi² + γζᵀPζ
hold for all χ (mT ) ∈ D, where μ, β are any given positive reals with μ > β, T > 0 the
fixed sampling period and Vm := V (χ (mT )). If χ (0) ∈ D, then the following holds:
lim_{t→∞} χ(t) = 0.
Proof. Since χ(0) ∈ D, (11.77) holds for t ∈ (0, T] in the following form:

V̇ ≤ −μV + βV(χ(0)).

Using the comparison lemma (Lemma 4.5), it is easy to obtain from the above that,
for t ∈ (0, T],

V(χ(t)) ≤ e^{−μt}V0 + (β/μ)(1 − e^{−μt})V0 = q(t)V0,   (11.78)

where q(t) := e^{−μt} + (β/μ)(1 − e^{−μt}). Since μ > β > 0, q(t) ∈ (0, 1) for all t ∈ (0, T]. Then we have
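Since q(t) < 1 on (0, T], applying (11.78) interval by interval gives the geometric decay Vm ≤ q(T)^m V0 at the sampling instants. A small numerical sketch with illustrative values of μ and β (placeholders, chosen only to satisfy μ > β > 0):

```python
import math

# Illustrative constants with mu > beta > 0, as required by Lemma 11.10.
mu, beta, T = 2.0, 0.5, 0.1

def q(t):
    """Contraction factor from (11.78)."""
    return math.exp(-mu * t) + (beta / mu) * (1.0 - math.exp(-mu * t))

# q(t) stays in (0, 1) on (0, T], so V at the sampling instants decays
# geometrically: V_m <= q(T)**m * V_0.
V0 = 10.0
V_bound = [V0 * q(T) ** m for m in range(5)]
```

Note q(t) is decreasing in t (its derivative is (β − μ)e^{−μt} < 0), so faster sampling gives a contraction factor closer to 1 per interval but more intervals per unit time.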
Remark 11.6. Lemma 11.10 plays an important role in the stability analysis of the
sampled-data system considered in this section. When sampled-data systems are
analysed in discrete time in the literature, only the performance at the sampling instants is
considered. In this section, Lemma 11.10 provides an integrated analysis framework
in which the behaviour of the sampled-data system both at and between sampling instants
can be characterised. In particular, the system's behaviour at the sampling instants is
portrayed in (11.82), and (11.81) shows that the inter-sample behaviour of the system
is bounded by a compact set defined by Vm.
For the case of relative degree 1, the sampled-data system takes the following
form:
ζ̇ = Dζ + ψ(y),
ẏ = ζ1 + ψy(y) + ud1,   (11.84)
where ud1 is the sampled-data controller, which can be simply implemented via a zero-order hold device as

ud1(t) = uc1(y(mT)),   ∀t ∈ [mT, mT + T).   (11.85)
Define χ := [ζᵀ, y]ᵀ, and we have the following result.
Theorem 11.11. For system (11.84) with the sampled-data controller ud1 shown in
(11.85), and a given neighbourhood of the origin Br := {χ ∈ Rⁿ : ‖χ‖ ≤ r} with r
any given positive real, there exists a constant T1 > 0 such that, for all 0 < T < T1
and for all χ(0) ∈ Br, the system is asymptotically stable.
Proof. We choose

V(χ) = ζᵀPζ + (1/2)y²

as the Lyapunov function candidate for the sampled-data system. We start with some
sets used throughout the proof. Define

c := max_{χ∈Br} V(χ)

and Ωc := {χ ∈ Rⁿ : V(χ) ≤ c}.
The existence of T1∗ is ensured by the continuous dependence of the solution χ(t) on the
initial conditions.
Next, we calculate an estimate of |y(t) − y(0)| forced by the sampled-data control
ud1 during the interval [0, T], provided that χ(0) ∈ Br ⊂ Ωc and T ∈ (0, T1∗). From
the second equation of (11.84), the dynamics of y are given by

ẏ = ζ1 + ψy(y) + ud1.

It follows that

y(t) = y(0) + ∫₀ᵗ ζ1(τ) dτ + ∫₀ᵗ ud1(τ) dτ + ∫₀ᵗ (ψy(y(τ)) − ψy(y(0))) dτ + ∫₀ᵗ ψy(y(0)) dτ.
Then we have

|y(t) − y(0)| ≤ ∫₀ᵗ ‖ζ(τ)‖ dτ + ∫₀ᵗ Lu1|y(0)| dτ + ∫₀ᵗ L1|y(τ) − y(0)| dτ + ∫₀ᵗ L1|y(0)| dτ,   (11.87)

where the first integral on the right-hand side is denoted by I1.
We first calculate the integral I1. From the first equation of system (11.69), we obtain

ζ(t) = e^{Dt}ζ(0) + ∫₀ᵗ e^{D(t−τ)} ψ(y(τ)) dτ.   (11.88)
Since D is a Hurwitz matrix, there exist positive reals κ1 and σ1 such that ‖e^{Dt}‖ ≤ κ1e^{−σ1t}. Thus, from (11.88),

‖ζ(t)‖ ≤ κ1e^{−σ1t}‖ζ(0)‖ + ∫₀ᵗ κ1e^{−σ1(t−τ)} ‖ψ(y(τ)) − ψ(y(0))‖ dτ + ∫₀ᵗ κ1e^{−σ1(t−τ)} ‖ψ(y(0))‖ dτ
      ≤ κ1e^{−σ1t}‖ζ(0)‖ + ∫₀ᵗ L2κ1e^{−σ1(t−τ)} |y(τ) − y(0)| dτ + ∫₀ᵗ L2κ1e^{−σ1(t−τ)} |y(0)| dτ.   (11.89)
Now we are ready to compute |y(t) − y(0)|. In fact, we have from (11.87) and (11.89)

|y(t) − y(0)| ≤ A1(1 − e^{−σ1t}) + B1t + H ∫₀ᵗ |y(τ) − y(0)| dτ,   (11.90)

where

A1 = σ1⁻¹κ1‖ζ(0)‖,
B1 = Lu1|y(0)| + L1|y(0)| + σ1⁻¹κ1L2|y(0)|,
H = σ1⁻¹κ1L2 + L1.

Applying the Gronwall–Bellman inequality to (11.90) yields

|y(t) − y(0)| ≤ A1(1 − e^{−σ1t}) + (B1/H)(e^{Ht} − 1) + A1(σ1e^{Ht} + He^{−σ1t} − (H + σ1))(H + σ1)⁻¹,   (11.91)
where, as in (11.92), the right-hand side of (11.91) can be further bounded by δ1(T)|y(0)| + δ2(T)‖ζ(0)‖.
Note that δ1(T) and δ2(T) depend only on the sampling period T once the Lipschitz
constants and the control parameters are chosen.
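The step from (11.90) to (11.91) is the Gronwall–Bellman inequality: the right-hand side of (11.91) is the solution of the equality version of (11.90). This can be verified numerically; the constants below are arbitrary placeholders:

```python
import math

# Arbitrary illustrative constants.
A1, B1, H, sigma1 = 0.7, 0.4, 1.3, 2.0

def bound(t):
    """Right-hand side of the Gronwall bound (11.91)."""
    return (A1 * (1 - math.exp(-sigma1 * t))
            + (B1 / H) * (math.exp(H * t) - 1)
            + A1 * (sigma1 * math.exp(H * t) + H * math.exp(-sigma1 * t)
                    - (H + sigma1)) / (H + sigma1))

def v_numeric(t, steps=100000):
    """Euler integration of the equality case of (11.90), differentiated:
    v'(t) = A1*sigma1*exp(-sigma1*t) + B1 + H*v(t), with v(0) = 0."""
    v, dt, s = 0.0, t / steps, 0.0
    for _ in range(steps):
        v += dt * (A1 * sigma1 * math.exp(-sigma1 * s) + B1 + H * v)
        s += dt
    return v
```

Any solution of the inequality (11.90) is dominated by this equality solution, which is the content of the Gronwall–Bellman argument.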
Next, we shall study the behaviour of the sampled-data system during each interval
using the Lyapunov function candidate V(y, ζ) = ζᵀPζ + (1/2)y². Consider χ(0) =
[ζ(0)ᵀ, y(0)]ᵀ ∈ Br. When t ∈ (0, T], the time derivative satisfies

V̇ = −3‖ζ‖² + 2ζᵀPψ + y(ζ1 + ψy + ud1)
  ≤ −c0y² − ‖ζ‖² + |y||ud1(y(0)) − uc1(y)|
  ≤ −c0y² − ‖ζ‖² + Lu1|y − y(0)||y|
  ≤ −(c0 − (Lu1/2)(δ1(T) + δ2(T)))y² − ‖ζ‖² + (Lu1/2)δ1(T)|y(0)|² + (Lu1/2)δ2(T)‖ζ(0)‖²
  ≤ −μ1(T)V + β1(T)V(ζ(0), y(0)),   (11.93)
where

μ1 = min{2c0 − Lu1δ1(T) − Lu1δ2(T), 1/λmax(P)},
β1 = max{Lu1δ1(T), Lu1δ2(T)/(2λmin(P))},   (11.94)

with λmax(·) and λmin(·) denoting the maximum and minimum eigenvalues of a matrix,
respectively.
Next, we shall show that there exists a constant T2∗ > 0 such that the condition
μ1(T) > β1(T) > 0 is satisfied for all 0 < T < T2∗. Note from (11.92) that both
δ1(T) and δ2(T) are continuous functions of T with δ1(0) = δ2(0) = 0. Define
e1(T) := μ1(T) − β1(T), and we have e1(0) > 0. It can also be established from
(11.94) that e1(T) is a decreasing and continuous function of T, which asserts, by
the continuity of e1(T), the existence of T2∗ such that e1(T) > 0 for 0 < T < T2∗, that is,
0 < β1(T) < μ1(T).
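Once the constants in (11.94) are fixed, T2∗ can be located numerically from the sign change of e1(T). A sketch with hypothetical constants (placeholders, not from the book), taking δ1(T) and δ2(T) of the form c(1 − e^{−σT}) observed in this section:

```python
import math

# Hypothetical constants (placeholders, not from the book).
c0, Lu1 = 1.0, 2.0
lam_max_P, lam_min_P = 1.5, 0.5
a1, a2, sigma = 0.8, 0.6, 2.0

def delta1(T):
    return a1 * (1 - math.exp(-sigma * T))

def delta2(T):
    return a2 * (1 - math.exp(-sigma * T))

def mu1(T):
    """First entry of (11.94)."""
    return min(2 * c0 - Lu1 * delta1(T) - Lu1 * delta2(T), 1 / lam_max_P)

def beta1(T):
    """Second entry of (11.94)."""
    return max(Lu1 * delta1(T), Lu1 * delta2(T) / (2 * lam_min_P))

def e1(T):
    return mu1(T) - beta1(T)

def find_T2_star(T_hi=5.0, tol=1e-8):
    """Bisection for the largest T with e1(T) > 0, assuming e1 is decreasing."""
    lo, hi = 0.0, T_hi
    if e1(hi) > 0:
        return hi  # e1 positive on the whole interval searched
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if e1(mid) > 0 else (lo, mid)
    return lo
```

Since e1(0) > 0 and e1 is continuous and decreasing, the bisection isolates the unique sign change, and any sampling period below it satisfies μ1(T) > β1(T) > 0.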
Finally, set T1 = min(T1∗, T2∗). From Lemma 11.10, it is known that V1 ≤ c,
which means χ(T) ∈ Ωc, and subsequently all the above analysis can be repeated for
every interval [mT, mT + T]. Applying Lemma 11.10 completes the proof. □
For systems with relative degree ρ > 1, the implementation of the sampled-data
controller ud2 is given in (11.67) and (11.68). It is easy to see from (11.68) that ξ(mT)
is the exact discrete-time model of the filter

ξ̇ = Λξ + bf ud2,   (11.95)

due to the fact that ud2 remains constant during each interval and the dynamics of ξ
shown in (11.95) are linear. Then (11.68) and (11.95) are virtually equivalent at each
sampling instant. This indicates that we can use (11.95) instead of (11.68) for the stability
analysis of the sampled-data system.
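Since Λ is Hurwitz, bounds of the form ‖e^{Λt}‖ ≤ κ2e^{−σ2t}, used in the analysis below, can be estimated numerically: pick σ2 slightly below min|Re λi(Λ)| and take κ2 as the supremum of ‖e^{Λt}‖e^{σ2t}. A sketch for one illustrative Λ (not from the design):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative Hurwitz filter matrix (lambda_1 = 1, lambda_2 = 2).
Lam = np.array([[-1.0, 1.0],
                [0.0, -2.0]])

# sigma2 just below the slowest decay rate min |Re(eig)| = 1.
sigma2 = 0.9

# kappa2 = sup_t ||e^{Lam t}|| e^{sigma2 t}, estimated on a fine grid.
# The supremum is attained at finite t because the product decays for large t.
ts = np.linspace(0.0, 20.0, 4001)
kappa2 = max(np.linalg.norm(expm(Lam * t), 2) * np.exp(sigma2 * t) for t in ts)
```

Taking σ2 strictly below the slowest eigenvalue's decay rate guarantees the supremum is finite even when Λ is not diagonalisable.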
Let χ := [ζ T , z T ]T and we have the following result.
Theorem 11.12. For the extended system consisting of (11.65) and (11.69) with the
sampled-data controller ud2 shown in (11.67) and (11.68), and a given neighbourhood
of the origin Br := {χ ∈ Rⁿ : ‖χ‖ ≤ r} with r any given positive real, there exists
a constant T2 > 0 such that, for all 0 < T < T2 and for all χ(0) ∈ Br, the system is
asymptotically stable.
Proof. The proof can be carried out in a similar way to that for the case ρ = 1,
except that the effect of the dynamic filter of ξ has to be dealt with.
For the overall sampled-data system, a Lyapunov function candidate is chosen
to be the same as in the continuous-time case, that is,

V = γζᵀPζ + (1/2) Σ_{i=1}^{ρ} zi².
Similar to the case of ρ = 1, the sets Br, Ωc and Bl can be defined such that
Br ⊂ Ωc ⊂ Bl, and there exists a T3∗ > 0 such that for all T ∈ (0, T3∗) the following
holds:

χ(t) ∈ Bl, ∀t ∈ (0, T], ∀χ(0) ∈ Ωc.

As in the proof of Theorem 11.11, we also aim to formulate the time derivative
of V(χ) into the form (11.77). Next, we shall derive the bounds for ‖ξ(t) − ξ(0)‖ and
|y(t) − y(0)| during t ∈ [0, T] with 0 < T < T3∗. Consider the case where χ(0) ∈ Br.
We have from (11.95)

ξ(t) = e^{Λt}ξ(0) + ∫₀ᵗ e^{Λ(t−τ)} bf ud2 dτ.   (11.96)

Since Λ is a Hurwitz matrix, there exist positive reals κ2, κ3 and σ2 such that

‖e^{Λt}‖ ≤ κ2e^{−σ2t},   ‖e^{Λt} − I‖ ≤ κ3(1 − e^{−σ2t}),
where I is the identity matrix. Then, using the Lipschitz property of uc2 with respect
to the set Bl and the fact that ud2 (0, 0) = 0, it can be obtained from (11.96) that
I2 := ∫₀ᵗ ‖ξ(τ)‖ dτ ≤ (κ2/σ2)‖ξ(0)‖(1 − e^{−σ2t}) + (κ2Lu2/σ2)(|y(0)| + ‖ξ(0)‖)t   (11.97)
and

‖ξ(t) − ξ(0)‖ ≤ κ3‖ξ(0)‖(1 − e^{−σ2t}) + ‖ud2(y(0), ξ(0))‖ ∫₀ᵗ κ2e^{−σ2(t−τ)} dτ
            ≤ δ3(T)|y(0)| + δ4(T)‖ξ(0)‖,   (11.98)

where

δ3(T) = σ2⁻¹κ2Lu2(1 − e^{−σ2T}),
δ4(T) = (κ3 + σ2⁻¹κ2Lu2)(1 − e^{−σ2T}),
and Lu2 is a Lipschitz constant of uc2. As for |y(t) − y(0)|, we have from (11.69)

y(t) = y(0) + ∫₀ᵗ ζ1(τ) dτ + ∫₀ᵗ ξ1(τ) dτ + ∫₀ᵗ (ψy(y(τ)) − ψy(y(0))) dτ + ∫₀ᵗ ψy(y(0)) dτ,   (11.99)
where the integral I1 of ‖ζ(τ)‖ is already bounded in (11.89) and I2 in (11.97). With (11.89), (11.97) and
(11.99), it follows that
|y(t) − y(0)| ≤ A1(1 − e^{−σ1t}) + A2(1 − e^{−σ2t}) + B2t + H ∫₀ᵗ |y(τ) − y(0)| dτ.   (11.100)
|ξ2(0)| ≤ |z3(0)| + |α2(0)| ≤ |z3(0)| + L2|y(0)| + L2|ξ1(0)|,
⋮
|ξρ−1(0)| ≤ |zρ(0)| + |αρ−1(0)| ≤ |zρ(0)| + Lρ−1|y(0)| + Lρ−1 Σ_{i=1}^{ρ−2} |ξi(0)|,

where, with a slight abuse of notation, Li is the Lipschitz constant of αi with respect to
the set Bl. Thus, a constant L0 can be found such that the following holds:
V̇ ≤ −c1y² − γ‖ζ‖² − λ0 Σ_{i=2}^{ρ} zi² + ‖z̄‖|ud2 − uc2|,   (11.103)
+ ε4(T)‖z̄‖²,   (11.104)
where

α2(T) = min{2c1, γ/λmax(P), 2(λ0 − ε4(T))},
β2(T) = max{2(ε1(T) + 2L0²ε3(T)), ε2(T)/λmin(P), 4L0²ε3(T)}.
Remark 11.7. In the results shown in Theorems 11.11 and 11.12, the radius r of Br
can be any positive value. This means that we can set the stability region as large as we
like, and stability can still be guaranteed by choosing a fast enough sampling
rate. Therefore, Theorems 11.11 and 11.12 establish the semi-global stability of
sampled-data control of nonlinear systems in the output feedback form.
11.3.3 Simulation

The following example is a simplified model of a jet engine without stall:

φ̇m = ψ + (3/2)φm² − (1/2)φm³,
ψ̇ = u,

where φm is the mass flow and ψ the pressure rise. Take φm as the output; the above
system is then in the output feedback form. With x1 = φm and x2 = ψ, the filter ξ̇ = −λξ + u is introduced so
that the filtered transformation y = x1, ζ̄ = x2 − ξ, followed by the state transformation
ζ = ζ̄ − λy, renders the system in the following form:

ζ̇ = −λζ − λ²y − (3/2)λy² + (1/2)λy³,
ẏ = ζ + λy + (3/2)y² − (1/2)y³ + ξ.
Finally, the stabilising function is designed as

α1 = −y − λy − (3/2)y² + (1/2)y³ − λ²((3/2)y − (1/2)y² + λ)² y,

with P = 1, and the control uc can be obtained using α2 as shown in (11.74). For the
simulation, we choose λ = 0.5.
Simulations are carried out with the initial values set as x1(0) = 1 and
x2(0) = 1, which are for simulation purposes only and have no physical meaning.
The results shown in Figures 11.10 and 11.11 indicate that the sampled-data system
is asymptotically stable when T = 0.01 s, which is confirmed by a closer look
at the convergence of V shown in Figure 11.12. Further simulations show that
the overall system is unstable if T = 0.5 s. In summary, the example illustrates
that for a range of sampling periods T, the sampled-data control design presented
earlier in this section can asymptotically stabilise the sampled-data system.
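The qualitative effect of the sampling period can be reproduced on an even simpler toy example. The sketch below is entirely illustrative (a scalar plant ẏ = y² + u with the continuous-time law uc(y) = −y² − y, neither taken from the book): the law is emulated with a zero-order hold, and fast sampling recovers convergence to the origin:

```python
def simulate(T, t_end=10.0, y0=1.0, dt=1e-4):
    """ZOH emulation of u_c(y) = -y**2 - y on the toy plant ydot = y**2 + u.

    T is the sampling period; dt is the (much finer) integration step.
    Returns the state at t_end.
    """
    y, t, u = y0, 0.0, 0.0
    next_sample = 0.0
    while t < t_end:
        if t >= next_sample:      # sampling instant: update the held input
            u = -y ** 2 - y
            next_sample += T
        y += dt * (y ** 2 + u)    # Euler step of the plant between samples
        t += dt
    return y

y_fast = simulate(T=0.01)  # fast sampling: the state converges to the origin
```

With the input held, the inter-sample dynamics are ẏ = y² − y(mT)² − y(mT); for a small T the deviation from the continuous-time closed loop ẏ = −y stays small, which is the mechanism formalised in Theorems 11.11 and 11.12.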
[Figures 11.10–11.12: time response of x1, phase plot of x2 against x1, and convergence of the Lyapunov function V for the sampled-data system with T = 0.01 s.]
Chapter 1. The basic concepts of systems and states of nonlinear systems discussed
in this chapter can be found in many books on nonlinear systems and nonlinear
control systems, for example Cook [1], Slotine and Li [2], Vidyasagar [3], Khalil [4]
and Verhulst [5]. For background knowledge of linear systems, readers can consult
Ogata [6], Kailath [7], Antsaklis and Michel [8], Zheng [9], and Chen, Lin and
Shamash [10]. The existence of unique solutions of differential equations is discussed
in references such as Arnold [11] and Borrelli and Coleman [12]. Many concepts
such as stability and backstepping control design are covered in detail in later chapters
of the book. For systems with saturation, readers may consult Hu and Lin [13]. Sliding-mode
control is covered in detail in Edwards and Spurgeon [14] and feedforward control
in Isidori [15]. Limit cycles and chaos are further discussed in Chapter 2. Semi-global
stability is often related to systems with saturation, as discussed in Hu and Lin [13]
and Isidori [15].
Chapter 5. The definition of positive real systems is common in many books
such as Slotine and Li [2], Khalil [4] and Vidyasagar [3]. For strictly positive real
systems, there may be some variations in the definitions. A discussion of the variations
can be found in Narendra and Annaswamy [23]. The definition in this book
follows Slotine and Li [2] and Khalil [4]. Lemma 5.3 was adapted from Khalil [4],
where a proof of necessity can also be found. The Kalman–Yakubovich lemma appears
in different variations. A proof of it can be found in Khalil [4] and Marino and
Tomei [24]. A book on absolute stability was published by Narendra and Taylor [25].
The circle criterion is covered in a number of books such as Cook [1], Slotine and Li
[2], Vidyasagar [3] and Khalil [4]. The treatment of loop transformation to obtain
the condition of the circle criterion is similar to Cook [1]. The interpretation based on
Figure 5.3 is adapted from Khalil [4]. Complex bilinear mapping is also used to
justify the circle interpretation. Basic concepts of complex bilinear mappings can
be found in textbooks on complex analysis, such as Brown and Churchill [26].
Input-to-state stability was first introduced by Sontag [27]. More on the use of comparison
functions of classes K and K∞ can be found in Khalil [4]. The definitions for ISS
are adapted from Isidori [15]. Theorem 5.8 is a simplified version, using powers of
the state norm rather than K∞ functions to characterise stability, for convenience of
presentation. The original result of Theorem 5.8 was shown by Sontag and Wang [28].
Using the ISS pair (α, σ) for interconnected systems is based on the technique of
changing supply functions, which is shown in Sontag and Teel [29]. The small-gain
theorem for ISS was shown in Jiang, Teel and Praly [30]. A systematic presentation
of ISS-related stability issues can be found in Isidori [15]. Differential stability
describes the stability of nonlinear systems for observer design. It was introduced
in Ding [31], and the presentation in Section 5.4 is adapted from that paper.
Chapter 7. For self-tuning control, readers can refer to textbooks such as
Astrom and Wittenmark [35] and Wellstead and Zarrop [36]. The approach to obtaining
the model reference control is based on Ding [37]. Early work on using Lyapunov
functions for stability analysis can be found in Parks [38]. A complete treatment of
MRAC of linear systems with high relative degrees can be found in Narendra and
Annaswamy [23] and Ioannou and Sun [39]. The example showing the divergence
of parameter estimation under a bounded disturbance is adapted from Ioannou and
Sun [39], where more robust adaptive laws for linear systems are to be found. Results
on robust adaptive control and on relaxing assumptions on the sign of the high-frequency
gain, the minimum-phase condition, etc., for nonlinear systems can be found in Ding [37, 40–45].
Backstepping designs without adaptive laws shown in this chapter are basically simplified
versions of their adaptive counterparts. Filtered transformations were used for
backstepping in Marino and Tomei [63], and the presentation in this chapter differs
from the forms used in [63]. Initially, multiple parameter estimators were used
for adaptive control with backstepping, for example in the adaptive laws in [63]. The tuning
function method was first introduced in Krstic, Kanellakopoulos and Kokotovic
[64] to remove multiple adaptive parameters for one unknown parameter vector with
backstepping. Adaptive backstepping with filtered transformation is based on Ding
[65], without the disturbance rejection part. Adaptive observer backstepping is largely
based on the presentation in Krstic, Kanellakopoulos and Kokotovic [66]. Nonlinear
adaptive control with backstepping is also discussed in detail in Marino and Tomei
[24]. Some further developments of nonlinear adaptive control using backstepping can
be found in Ding [40] for robust adaptive control with dead-zone modification, robust
adaptive control with σ-modification in Ding [67], and with unknown control directions
and unknown high-frequency gains in Ding [41, 42]. Adaptive backstepping was also
applied to nonlinear systems with nonlinear parameterisation in Ding [68].
The Clock gene was reported in 1994 by Takahashi et al. [92]. Reports on key genes
in circadian oscillators are in Albrecht et al. [93], van der Horst et al. [94] and Bunger
et al. [95]. The effects of jet lag and sleep disorders on circadian rhythms are reported
in Sack et al. [96] and Sack et al. [97]. The effects of light and light treatments
on disorders of circadian rhythms are shown by Boulos et al. [98], Kurosawa and
Goldbeter [99] and Geier et al. [100]. The circadian model (11.48) is adapted from
Gonze, Leloup and Goldbeter [101]. The condition (11.52) is shown by Zhao, Tao
and Shi [54]. The observer design is adapted from That and Ding [102], and the control
design is different from the version shown in that paper. More results on observer
design for a seventh-order circadian model are published by That and Ding [103].
There are many results in the literature on sampled-data control of linear systems;
for example, see Chen and Francis [104] and the references therein. The difficulty
of obtaining an exact discrete-time model for nonlinear systems is shown by Nesic and
Teel [105]. That structures in nonlinear systems cannot be preserved is shown in Grizzle
and Kokotovic [106]. Approximation of nonlinear discrete-time models is shown
in several results, for example Nesic, Teel and Kokotovic [107] and Nesic and Laila
[108]. The effect of fast sampling of nonlinear static controllers is reported in Owens,
Zheng and Billings [109]. Several other results on the emulation method for nonlinear
systems are shown in Laila, Nesic and Teel [110], Shim and Teel [111] and Bian
and French [112]. The results presented in Section 11.3 are mainly based on Wu and
Ding [113]. Sampled-data control for disturbance rejection of nonlinear systems is
shown by Wu and Ding [114]. A result on sampled-data adaptive control of a class of
nonlinear systems is to be found in Wu and Ding [115].
References
[59] Bastin, G. and Gevers, M. ‘Stable adaptive observers for nonlinear time
varying systems,’ IEEE Trans. Automat. Control, vol. 33, no. 7, pp. 650–658,
1988.
[60] Cho, Y. M. and Rajamani, R. ‘A systematic approach to adaptive observer
synthesis for nonlinear systems,’ IEEE Trans. Automat. Control, vol. 42,
no. 4, pp. 534–537, 1997.
[61] Tsinias, J., ‘Sufficient Lyapunov-like conditions for stabilization,’ Math.
Control Signals Systems, vol. 2, pp. 343–357, 1989.
[62] Kanellakopoulos, I., Kokotovic, P. V. and Morse, A. S. ‘Systematic design
of adaptive controllers for feedback linearizable systems,’ IEEE Trans.
Automat. Control, vol. 36, pp. 1241–1253, 1991.
[63] Marino, R. and Tomei, P. ‘Global adaptive output feedback control of nonlin-
ear systems, part i: Linear parameterization,’ IEEE Trans. Automat. Control,
vol. 38, pp. 17–32, 1993.
[64] Krstic, M., Kanellakopoulos, I. and Kokotovic, P. V. ‘Adaptive nonlinear
control without overparametrization,’ Syst. Control Lett., vol. 19, pp. 177–
185, 1992.
[65] Ding, Z. ‘Adaptive output regulation of a class of nonlinear systems with
completely unknown parameters,’ in Proceedings of the 2003 American Control
Conference, Denver, CO, 2003, pp. 1566–1571.
[66] Krstic, M., Kanellakopoulos, I. and Kokotovic, P. V. Nonlinear and Adaptive
Control Design. New York: John Wiley & Sons, 1995.
[67] Ding, Z. ‘Almost disturbance decoupling of uncertain output feedback
systems,’ IEE Proc. Control Theory Appl., vol. 146, pp. 220–226, 1999.
[68] Ding, Z. ‘Adaptive control of triangular systems with nonlinear parameter-
ization,’ IEEE Trans. Automat. Control, vol. 46, no. 12, pp. 1963–1968,
2001.
[69] Bodson, M., Sacks, A. and Khosla, P. ‘Harmonic generation in adaptive
feedforward cancellation schemes,’ IEEE Trans. Automat. Control, vol. 39,
no. 9, pp. 1939–1944, 1994.
[70] Bodson, M. and Douglas, S. C. ‘Adaptive algorithms for the rejection of
sinusoidal disturbances with unknown frequencies,’ Automatica, vol. 33,
no. 10, pp. 2213–2221, 1997.
[71] Ding, Z. ‘Global output regulation of uncertain nonlinear systems with
exogenous signals,’ Automatica, vol. 37, pp. 113–119, 2001.
[72] Ding, Z. ‘Global stabilization and disturbance suppression of a class of
nonlinear systems with uncertain internal model,’ Automatica, vol. 39, no. 3,
pp. 471–479, 2003.
[73] Davison, E. J. ‘The robust control of a servomechanism problem for lin-
ear time-invariant multivariable systems,’ IEEE Trans. Automat. Control,
vol. 21, no. 1, pp. 25–34, 1976.
[74] Francis, B. A. ‘The linear multivariable regulator problem,’ SIAM J. Control
Optimiz., vol. 15, pp. 486–505, 1977.
[75] Huang, J. and Rugh, W. J. ‘On a nonlinear multivariable servomechanism
problem,’ Automatica, vol. 26, no. 6, pp. 963–972, 1990.
[93] Albrecht, U., Sun, Z., Eichele, G. and Lee, C. ‘A differential response of two
putative mammalian circadian regulators, mper1 and mper2 to light,’ Cell,
vol. 91, pp. 1055–1064, 1997.
[94] van der Horst, G., Muijtjens, M., Kobayashi, K., Takano, R., Kanno,
S., Takao, M., de Wit, J., Verkerk, A., Eker, A., van Leenen, D., Buijs,
R., Bootsma, D., Hoeijmakers, J., and Yasui, A. ‘Mammalian cry1 and
cry2 are essential for maintenance of circadian rhythms,’ Nature, vol. 398,
pp. 627–630, 1999.
[95] Bunger, M., Wilsbacher, L., Moran, S., Clendenin, C., Radcliffe, L.,
Hogenesch, J., Simon, M., Takahashi, J., and Bradfield, C., ‘Mop3 is an
essential component of the master circadian pacemaker in mammals,’ Cell,
vol. 103, pp. 1009–1017, 2000.
[96] Sack, R., Auckley, D., Auger, R., Carskadon, M., Wright, K., Vitiello, M.,
and Zhdanova, I. ‘Circadian rhythm sleep disorders: part i, basic principles,
shift work and jet lag disorders,’ Sleep, vol. 30, pp. 1460–1483, 2007.
[97] Sack, R., Auckley, D., Auger, R., Carskadon, M., Wright, K., Vitiello, M.,
and Zhdanova, I. ‘Circadian rhythm sleep disorders: Part ii, advanced sleep
phase disorder, delayed sleep phase disorder, free-running disorder, and
irregular sleep-wake rhythm,’ Sleep, vol. 30, pp. 1484–1501, 2007.
[98] Boulos, Z., Macchi, M., Sturchler, M., Stewart, K., Brainard, G., Suhner, A.,
Wallace, G., and Steffen, R., ‘Light visor treatment for jet lag after westward
travel across six time zones,’ Aviat. Space Environ. Med., vol. 73,
pp. 953–963, 2002.
[99] Kurosawa, G. and Goldbeter, A. ‘Amplitude of circadian oscillations
entrained by 24-h light-dark cycles,’ J. Theor. Biol., vol. 242, pp. 478–488,
2006.
[100] Geier, F., Becker-Weimann, S., Kramer, K., and Herzel, H. ‘Entrainment in
a model of the mammalian circadian oscillator,’ J. Biol. Rhythms, vol. 20,
pp. 83–93, 2005.
[101] Gonze, D., Leloup, J., and Goldbeter, A., ‘Theoretical models for circadian
rhythms in neurospora and drosophila,’ Comptes Rendus de l’Academie des
Sciences -Series III -Sciences de la Vie, vol. 323, pp. 57–67, 2000.
[102] That, L. T. and Ding, Z. ‘Circadian phase re-setting using nonlinear output-
feedback control,’ J. Biol. Syst., vol. 20, no. 1, pp. 1–19, 2012.
[103] That, L. T. and Ding, Z. ‘Reduced-order observer design of multi-output
nonlinear systems with application to a circadian model,’ Trans. Inst. Meas.
Control, to appear, 2013.
[104] Chen, T. and Francis, B. Optimal Sampled-Data Control Systems. London:
Springer-Verlag, 1995.
[105] Nesic, D. and Teel, A. ‘A framework for stabilization of nonlinear sampled-
data systems based on their approximate discrete-time models,’ IEEE Trans.
Automat. Control, vol. 49, pp. 1103–1122, 2004.
[106] Grizzle, J. W. and Kokotovic, P. V. ‘Feedback linearization of sampled-data
systems,’ IEEE Trans. Automat. Control, vol. 33, pp. 857–859, 1988.
[107] Nesic, D., Teel, A., and Kokotovic, P.V. ‘Sufficient conditions for stabiliza-
tion of sampled-data nonlinear systems via discrete-time approximations,’
Syst. Control Lett., vol. 38, pp. 259–270, 1999.
[108] Nesic, D. and Laila, D. ‘A note on input-to-state stabilization for nonlinear
sampled-data systems,’ IEEE Trans. Automat. Control, vol. 47, pp. 1153–
1158, 2002.
[109] Owens, D. H., Zheng, Y. and Billings, S. A. ‘Fast sampling and stability of
nonlinear sampled-data systems: Part 1. Existing theorems,’ IMA J. Math.
Control Inf., vol. 7, pp. 1–11, 1990.
[110] Laila, D. S., Nesic, D. and Teel, A. R. ‘Open- and closed-loop dissipation
inequalities under sampling and controller emulation,’ Eur. J. Control,
vol. 18, pp. 109–125, 2002.
[111] Shim, H. and Teel, A. R. ‘Asymptotic controllability and observability
imply semiglobal practical asymptotic stabilizability by sampled-data output
feedback,’ Automatica, vol. 39, pp. 441–454, 2003.
[112] Bian, W. and French, M. ‘General fast sampling theorems for nonlinear
systems,’ Syst. Control Lett., vol. 54, pp. 1037–1050, 2005.
[113] Wu, B. and Ding, Z. ‘Asymptotic stabilisation of a class of nonlinear systems
via sampled-data output feedback control,’ Int. J. Control, vol. 82, no. 9,
pp. 1738–1746, 2009.
[114] Wu, B. and Ding, Z. ‘Practical disturbance rejection of a class of nonlin-
ear systems via sampled output,’ J. Control Theory Appl., vol. 8, no. 3,
pp. 382–389, 2010.
[115] Wu, B. and Ding, Z. ‘Sampled-data adaptive control of a class of nonlinear
systems,’ Int. J. Adapt. Control Signal Process., vol. 25, pp. 1050–1060,
2011.
Zhengtao Ding
The Institution of Engineering and Technology
www.theiet.org
ISBN 978-1-84919-574-4