AAE666 Notes
Martin Corless
School of Aeronautics & Astronautics
Purdue University
West Lafayette, Indiana
[email protected]
January 3, 2017
Contents

1 Introduction
2 State space representation of dynamical systems
3 First order systems
4 Second order systems
5 Some general considerations
6 Stability and boundedness
11 Quadratic stability
  11.1 Introduction
  11.2 Polytopic nonlinear systems
    11.2.1 LMI Control Toolbox
    11.2.2 Generalization
    11.2.3 Robust stability
  11.3 Another special class of nonlinear systems
    11.3.1 The circle criterion: a frequency domain condition
    11.3.2 Sector bounded nonlinearities
    11.3.3 Generalization
  11.4 Yet another special class
    11.4.1 Generalization
    11.4.2 Strictly positive real transfer functions and a frequency domain condition
12 Invariance results
  12.1 Invariant sets
  12.2 Limit sets
  12.3 LaSalle's Theorem
  12.4 Asymptotic stability and LaSalle
    12.4.1 Global asymptotic stability
    12.4.2 Asymptotic stability
  12.5 LTI systems
  12.6 Applications
    12.6.1 A simple stabilizing adaptive controller
    12.6.2 PI controllers for nonlinear systems
16 Aerospace/Mechanical Systems
  16.1 Aerospace/Mechanical systems
  16.2 Equations of motion
  16.3 Some fundamental properties
    16.3.1 An energy result
    16.3.2 Potential energy and total energy
    16.3.3 A skew symmetry property
  16.4 Linearization of aerospace/mechanical systems
  16.5 A class of GUAS aerospace/mechanical systems
  16.6 Control of aerospace/mechanical systems
    16.6.1 Computed torque method
20 Nonlinear H∞
  20.1 Analysis
    20.1.1 The HJ Inequality
  20.2 Control
  20.3 Series solution to control problem
    20.3.1 Linearized problem
    20.3.2 Nonlinear problem
21 Performance
  21.1 Analysis
  21.2 Polytopic systems
    21.2.1 Polytopic models
    21.2.2 Performance analysis of polytopic systems
  21.3 Control for performance
    21.3.1 Linear time-invariant systems
    21.3.2 Control of polytopic systems
    21.3.3 Multiple performance outputs
22 Appendix
  22.1 The Euclidean norm
Chapter 1
Introduction
HI!
(How is it going?)
Chapter 2
State space representation of
dynamical systems
2.1 Examples
2.1.1 Continuous-time

A first example
sgm(v) := −1 if v < 0,   0 if v = 0,   1 if v > 0
For the simplest situation in orbit mechanics (a satellite orbiting YFHB),

g(r) = μ/r² ,   μ = GM .
Ballistics in drag

ẋ = v cos γ
ẏ = v sin γ
m v̇ = −mg sin γ − D(v)
m v γ̇ = −mg cos γ
[Figure: point masses m0, m1, m2 at positions P1, P2]
[Figure: two-link manipulator with payload; joint angles q1, q2, link centers of mass lc1, lc2, control inputs u1, u2, gravity direction g]
Traffic flow
Two roads connecting two points. Let x1 and x2 be the traffic flow rate (number of cars per
hour) along the two routes.
ẋ1 = −φ(c1(x) − c2(x)) x1 + φ(c2(x) − c1(x)) x2
ẋ2 =  φ(c1(x) − c2(x)) x1 − φ(c2(x) − c1(x)) x2

where

φ(y) = y if y ≥ 0,   0 if y < 0

and, for example, c2(x) = 2x2.

2.1.2 Discrete-time
2.2 General representation
2.2.1 Continuous-time
All of the preceding systems can be described by a bunch of first order ordinary differential equations of the form

ẋ1 = f1(x1, x2, . . . , xn)
ẋ2 = f2(x1, x2, . . . , xn)
...
ẋn = fn(x1, x2, . . . , xn)
where the scalars xi (t), i = 1, 2, . . . , n, are called the state variables and the real scalar t is
called the time variable.
Higher order ODE descriptions
Single equation. (Recall the planar pendulum.) Consider a system described by a single nth order differential equation of the form

F(q, q̇, . . . , q^(n)) = 0

where q(t) is a scalar and q^(n) := dⁿq/dtⁿ. To obtain a state space description of this system, we need to assume that we can solve for q^(n) as a function of q, q̇, . . . , q^(n−1). So suppose that the above equation is equivalent to

q^(n) = a(q, q̇, . . . , q^(n−1)) .

Now let

x1 := q,   x2 := q̇,   . . . ,   xn := q^(n−1)

to obtain the following state space description:

ẋ1 = x2
ẋ2 = x3
...
ẋ_{n−1} = xn
ẋn = a(x1, x2, . . . , xn)
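As a concrete illustration (ours, not an example from the notes), the pendulum equation q̈ + sin q = 0 can be put into this first order form and integrated numerically; the sketch below uses a plain explicit Euler step in Python (the course itself uses MATLAB).

```python
import math

def f(x):
    # State space form of the pendulum equation q'' + sin(q) = 0:
    # with x1 := q and x2 := q', we get x1' = x2 and x2' = -sin(x1).
    x1, x2 = x
    return (x2, -math.sin(x1))

def euler_step(f, x, dt):
    # One explicit Euler step: x <- x + dt * f(x).
    return tuple(xi + dt * fi for xi, fi in zip(x, f(x)))

# Integrate from a small initial angle for one time unit.
x = (0.1, 0.0)
for _ in range(1000):
    x = euler_step(f, x, 0.001)
```

For a small initial angle the motion is close to q(t) ≈ 0.1 cos t, which the computed state reproduces.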
Multiple equations. (Recall the two link manipulator.) Consider a system described by N scalar differential equations in N variables:

F1(q1, q̇1, . . . , q1^(n1), q2, q̇2, . . . , q2^(n2), . . . , qN, q̇N, . . . , qN^(nN)) = 0
F2(q1, q̇1, . . . , q1^(n1), q2, q̇2, . . . , q2^(n2), . . . , qN, q̇N, . . . , qN^(nN)) = 0
...
FN(q1, q̇1, . . . , q1^(n1), q2, q̇2, . . . , q2^(n2), . . . , qN, q̇N, . . . , qN^(nN)) = 0

where t, q1(t), q2(t), . . . , qN(t) are real scalars. Note that qi^(ni) is the highest order derivative of qi which appears in the above equations.
First solve for the highest order derivatives, q1^(n1), q2^(n2), . . . , qN^(nN), to obtain:

q1^(n1) = a1(q1, q̇1, . . . , q1^(n1−1), q2, q̇2, . . . , q2^(n2−1), . . . , qN, q̇N, . . . , qN^(nN−1))
q2^(n2) = a2(q1, q̇1, . . . , q1^(n1−1), q2, q̇2, . . . , q2^(n2−1), . . . , qN, q̇N, . . . , qN^(nN−1))
...
qN^(nN) = aN(q1, q̇1, . . . , q1^(n1−1), q2, q̇2, . . . , q2^(n2−1), . . . , qN, q̇N, . . . , qN^(nN−1))

Now let

x1 := q1,   x2 := q̇1,   . . . ,   x_{n1} := q1^(n1−1)
x_{n1+1} := q2,   x_{n1+2} := q̇2,   . . . ,   x_{n1+n2} := q2^(n2−1)
...
x_{n1+...+n(N−1)+1} := qN,   . . . ,   xn := qN^(nN−1)

where

n := n1 + n2 + . . . + nN

to obtain

ẋ1 = x2
ẋ2 = x3
...
ẋ_{n1−1} = x_{n1}
ẋ_{n1} = a1(x1, x2, . . . , xn)
ẋ_{n1+1} = x_{n1+2}
...
ẋ_{n1+n2} = a2(x1, x2, . . . , xn)
...
ẋn = aN(x1, x2, . . . , xn)
Example 5

q̈1 + q̈2 + 2q1 = 0
q̈1 + q̇1 + q̇2 + 4q2 = 0
2.2.2 Discrete-time
A general state space description of a discrete-time system consists of a bunch of first order
difference equations of the form
x1 (k+1) = f1 (x1 (k), x2 (k), . . . , xn (k))
x2 (k+1) = f2 (x1 (k), x2 (k), . . . , xn (k))
..
.
xn (k+1) = fn (x1 (k), x2 (k), . . . , xn (k))
where the scalars xi (k), i = 1, 2, . . . , n, are called the state variables and the integer k is
called the time variable.
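For instance (an illustration of ours, not an example from the notes), the logistic map is a scalar discrete-time system of exactly this form:

```python
def f(x):
    # Logistic map x(k+1) = r*x(k)*(1 - x(k)); the parameter value
    # r = 2 is chosen so that the map has an attracting fixed point.
    r = 2.0
    return r * x * (1.0 - x)

# Iterate the difference equation from x(0) = 0.1.
x = 0.1
for k in range(100):
    x = f(x)
# The iterates converge to the fixed point x = 1 - 1/r = 0.5.
```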
Higher order DE descriptions
Story is similar to continuous-time.
2.3 Vectors
2.3.1 Vector spaces and IRn
A scalar is a real or a complex number. The symbols IR and C represent the set of real and
complex numbers, respectively.
In this section all the definitions and results are given for real scalars. However, they also
hold for complex scalars; to get the results for complex scalars, simply replace real with
complex and IR with C.
Consider any positive integer n. A real n-vector x is an ordered n-tuple of real numbers, x1, x2, . . . , xn. This is usually written as follows:

x = (x1, x2, . . . , xn)

where x is understood to be a column vector.
The real numbers x1 , x2 , . . . , xn are called the scalar components of x; xi is called the i-th
component. The symbol IRn represents the set of ordered n-tuples of real numbers.
Addition. The addition of any two elements x, y of IRn yields an element of IRn and is defined componentwise:

x + y := (x1 + y1, x2 + y2, . . . , xn + yn)
Zero element of IRn:

0 := (0, 0, . . . , 0)
Note that we are using the same symbol, 0, for a zero scalar and a zero vector.
The negative of x:

−x := (−x1, −x2, . . . , −xn)
Properties of addition
(a) (Commutative). For each pair x, y in IRn ,
x+y = y+x
(b) (Associative). For each x, y, z in IRn ,
(x + y) + z = x + (y + z)
(c) There is an element 0 in IRn such that for every x in IRn ,
x+0=x
(d) For each x in IRn , there is an element x in IRn such that
x + (x) = 0
Scalar multiplication. The multiplication of an element x of IRn by a real scalar α yields an element of IRn and is defined by:

αx := (αx1, αx2, . . . , αxn)
Properties of scalar multiplication
(a) For each scalar α and pair x, y in IRn,
α(x + y) = αx + αy
(b) For each pair of scalars α, β and x in IRn,
(α + β)x = αx + βx
(c) For each pair of scalars α, β, and x in IRn,
α(βx) = (αβ)x
(d) For each x in IRn,
1x = x
Vector space. Consider any set V equipped with an addition operation and a scalar multiplication operation. Suppose the addition operation assigns to each pair of elements x, y in V a unique element x + y in V and it satisfies the above four properties of addition (with IRn replaced by V). Suppose the scalar multiplication operation assigns to each scalar α and element x in V a unique element αx in V and it satisfies the above four properties of scalar multiplication (with IRn replaced by V). Then this set (along with its addition and scalar multiplication) is called a vector space. Thus IRn equipped with its definitions of addition and scalar multiplication is a specific example of a vector space. We shall meet other examples of vector spaces later.
An element x of a vector space is called a vector.
A vector space with real (complex) scalars is called a real (complex) vector space.
Subtraction in a vector space is defined by:

x − y := x + (−y)

Hence, in IRn,

x − y = (x1 − y1, x2 − y2, . . . , xn − yn)

2.3.2
2.3.3 Derivatives

ẋ := dx/dt := (dx1/dt, dx2/dt, . . . , dxn/dt) = (ẋ1, ẋ2, . . . , ẋn)
2.4
Recall the general description of a dynamical system given in Section 2.2.1. For a system with n state variables, x1, . . . , xn, we define the state (vector) x to be the n-vector

x := (x1, x2, . . . , xn) .

Introducing

f(x) := (f1(x1, x2, . . . , xn), f2(x1, x2, . . . , xn), . . . , fn(x1, x2, . . . , xn)) ,

the system can be compactly described by the first order vector differential equation

ẋ = f(x)   (2.1)
where x(t) is an n-vector and the scalar t is the time variable. A system described by
the above equation is called autonomous or time-invariant because the right-hand side of the
equation does not depend explicitly on time t. For the first part of the course, we will only
concern ourselves with these systems.
However, one can have a system containing time-varying parameters or disturbance
inputs which are time-varying. In this case the system might be described by
ẋ = f(t, x)
that is, the right-hand side of the differential equation depends explicitly on time. Such a system is
called non-autonomous or time-varying. We will look at them later.
Discrete time. The general representation of a discrete-time dynamical system can now
be compactly described by the following first order vector difference equation:
x(k+1) = f (x(k))
(2.2)
where x(k) is an n-vector and the scalar k is the time variable. A system described by
the above equation is called autonomous or time-invariant because the right-hand side of the
equation does not depend explicitly on time k. For the first part of the course, we will only
concern ourselves with these systems.
However, one can have a system containing time-varying parameters or disturbance
inputs which are time-varying. In this case the system might be described by
x(k+1) = f (k, x(k))
that is, the right-hand side of the difference equation depends explicitly on time. Such a system is
called non-autonomous or time-varying. We will look at them later.
2.5
2.5.1
A solution of (2.1) is any continuous function x(·) which is defined on some time interval and which satisfies

ẋ(t) = f(x(t))

for all t in the time interval.
An equilibrium solution is the simplest type of solution; it is constant for all time, that is,
it satisfies
x(t) ≡ xe
for some fixed state vector xe . The state xe is called an equilibrium state. Since an equilibrium
solution must satisfy the above differential equation, all equilibrium states are given by:
f (xe ) = 0
or, in scalar terms,
f1 (xe1 , xe2 , . . . , xen ) = 0
f2 (xe1 , xe2 , . . . , xen ) = 0
..
.
fn (xe1 , xe2 , . . . , xen ) = 0
Sometimes an equilibrium state is referred to as an equilibrium point, a stationary point, or,
a singular point.
Example 6 (Planar pendulum) Here, all equilibrium states are given by

xe = (mπ, 0)

where m is an arbitrary integer. Physically, there are only two distinct equilibrium states:

xe = (0, 0)   and   xe = (π, 0) .
An isolated equilibrium state is an equilibrium state with the property that there is a
neighborhood of that state which contains no other equilibrium states.
The origin is an equilibrium state, but it is not isolated. All other equilibrium states are
isolated.
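When f is complicated, the equilibrium condition f(xe) = 0 can also be solved numerically. A minimal sketch (ours, not from the notes), applying Newton's method to the pendulum equilibrium condition sin y = 0 with a starting guess near π:

```python
import math

def g(y):
    # Pendulum equilibrium condition: equilibrium angles satisfy sin(y) = 0.
    return math.sin(y)

def dg(y):
    return math.cos(y)

# Newton iteration: y <- y - g(y)/g'(y).
y = 3.0  # initial guess near pi
for _ in range(20):
    y = y - g(y) / dg(y)
# y converges to the equilibrium angle pi.
```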
Higher order ODEs
Consider a system described by an ordinary differential equation of the form

F(y, ẏ, . . . , y^(n)) = 0

where y(t) is a scalar. An equilibrium solution is a solution y(·) which is constant for all time, that is,

y(t) ≡ ye

where ye satisfies

F(ye, 0, . . . , 0) = 0 .   (2.3)

For the state space description of this system introduced earlier, all equilibrium states are given by

xe = (ye, 0, . . . , 0) .
For the multiple-equation description, with qi(t) ≡ yie for i = 1, 2, . . . , N, the equilibrium values satisfy

F1(y1e, 0, . . . , 0, y2e, 0, . . . , 0, . . . , yNe, 0, . . . , 0) = 0
F2(y1e, 0, . . . , 0, y2e, 0, . . . , 0, . . . , yNe, 0, . . . , 0) = 0
...
FN(y1e, 0, . . . , 0, y2e, 0, . . . , 0, . . . , yNe, 0, . . . , 0) = 0   (2.4)

For the state space description of this system, all equilibrium states are given by

xe = (y1e, 0, . . . , 0, y2e, 0, . . . , 0, . . . , yNe, 0, . . . , 0)

where y1e, y2e, . . . , yNe solve (2.4).
Equilibrium solutions
For the orbit mechanics example, an equilibrium solution has

r(t) ≡ re ,   θ̇(t) ≡ ωe ,

with r̈ ≡ 0. This yields

−re(ωe)² + μ/(re)² = 0 .

Thus there are an infinite number of equilibrium solutions, given by

ωe = ±√( μ/(re)³ )

where re > 0 is arbitrary. Note that, for this state space description, an equilibrium state corresponds to a circular orbit.
2.5.2
Discrete-time
2.6 Numerical simulation
2.6.1 MATLAB
2.6.2 SIMULINK
Exercises
Exercise 1 By appropriate definition of state variables, obtain a first order state space
description of the following systems where y1 and y2 are real scalars.
(i)
2ÿ1 + ÿ2 + sin y1 = 0
ÿ1 + 2ÿ2 + sin y2 = 0
(ii)
ÿ1 + ẏ2 + y1³ = 0
ẏ1 + ÿ2 + y2³ = 0
Exercise 2 By appropriate definition of state variables, obtain a first order state space
description of the following system where q1 and q2 are real scalars.
q1 (k+2) + q1 (k) + 2q2 (k+1) = 0
q1 (k+2) + q1 (k+1) + q2 (k) = 0
Exercise 3 Consider the Lorenz system described by

ẋ1 = σ(x2 − x1)
ẋ2 = rx1 − x2 − x1x3
ẋ3 = −bx3 + x1x2

with σ = 10, b = 8/3, and r = 28. Simulate this system with initial states

x(0) = (0, 0, 1)   and   x(0) = (0, 0, 1 + eps)

where eps is the floating point relative accuracy in MATLAB. Comment on your results for the integration interval [0, 60].
Chapter 3
First order systems
The simplest type of nonlinear systems is one whose state can be described by a single scalar.
We refer to such a system as a first order system.
3.1 Continuous time

ẋ = f(x)   (3.1)
The simplest case is the linear system

ẋ = ax   (3.2)

where a is a constant real scalar. All solutions of this system are of the form

x(t) = c e^{at}

where c is a constant real scalar. Thus the qualitative behavior of (3.2) is completely determined by the sign of a.
Linearization. Suppose xe is an equilibrium state for system (3.1). Then

f(xe) = 0 .

Suppose that the function f is differentiable at xe with derivative f′(xe). When x is close to xe, it follows from the definition of the derivative that

f(x) ≈ f(xe) + f′(xe)(x − xe) = f′(xe)(x − xe) .

If we introduce the perturbed state,

δx = x − xe ,

then

δẋ = f(x) ≈ f′(xe) δx .

So, the linearization of system (3.1) about an equilibrium state xe is defined to be the following system:

δẋ = a δx   (3.3)

where

a = f′(xe) .

One can demonstrate the following result.
If f′(xe) ≠ 0, then the local behavior of the nonlinear system (3.1) about xe is qualitatively the same as that of the linearized system about the origin.
Example 12
ẋ = x − x³
Example 13
ẋ = ax²
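The linearization above is easy to carry out numerically: a finite difference estimate of f′(xe) classifies each equilibrium. A sketch (ours, in Python rather than the course's MATLAB), using f(x) = x − x³ from Example 12, whose equilibria are 0, 1 and −1:

```python
def f(x):
    # Example 12: xdot = x - x**3, with equilibria at 0, 1 and -1.
    return x - x**3

def fprime(f, xe, h=1e-6):
    # Central difference estimate of the derivative f'(xe).
    return (f(xe + h) - f(xe - h)) / (2.0 * h)

# a = f'(xe) determines the local behavior: a < 0 means locally stable.
labels = {xe: ("stable" if fprime(f, xe) < 0.0 else "unstable")
          for xe in (0.0, 1.0, -1.0)}
```

Since f′(x) = 1 − 3x², this reports an unstable equilibrium at 0 and stable equilibria at ±1.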
3.2
Consider

ẋ = f(x)

with initial condition x(0) = x0. Suppose that f(x0) ≠ 0; then f(x(t)) ≠ 0 for some interval [0, t1) and over this interval we have

(1/f(x(t))) dx/dt = 1 .

Let

g(x) := ∫_{x0}^{x} (1/f(ξ)) dξ .

Then

d/dt [g(x(t))] = (dg/dx)(dx/dt) = (1/f(x)) dx/dt = 1 ,

that is,

d/dt [g(x(t))] = 1 .

Integrating over the interval [0, t] and using the initial condition x(0) = x0 yields g(x(t)) = t, that is,

∫_{x0}^{x(t)} (1/f(ξ)) dξ = t .   (3.4)

One now solves the above equation for x(t).
Example 14 (Finite escape time) This simple example illustrates the concept of a finite escape time. Consider

ẋ = x²

Here,

∫_{x0}^{x} (1/ξ²) dξ = −(1/x) + (1/x0) .

Hence (3.4) gives

−(1/x(t)) + (1/x0) = t .

Then, provided x0 t ≠ 1, the above equation can be uniquely solved for x(t) to obtain

x(t) = x0 / (1 − x0 t) .   (3.5)

Suppose x0 > 0. Then x(t) goes to infinity as t approaches 1/x0; the solution blows up in a finite time. This cannot happen in a linear system.
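The blow-up predicted by (3.5) shows up numerically as well; a sketch (ours) comparing a crude Euler integration of ẋ = x² with the closed form solution for x0 = 1:

```python
def closed_form(t, x0=1.0):
    # Solution (3.5): x(t) = x0/(1 - x0*t), valid for t < 1/x0.
    return x0 / (1.0 - x0 * t)

# Euler integration of xdot = x**2 up to t = 0.9 (the escape time is t = 1).
x, t, dt = 1.0, 0.0, 1e-5
while t < 0.9:
    x += dt * x * x
    t += dt
# By t = 0.9 the solution has already grown to about 10 and is accelerating.
```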
3.3 Discrete time
Exercises
Exercise 4 Draw the state portrait of the first nonlinear system.
Exercise 5 Draw the state portrait for
ẋ = x⁴ − x² .
Exercise 6 Obtain an explicit expression for all solutions of
ẋ = −x³ .
Chapter 4
Second order systems
In this section, we consider systems whose state can be described by two real scalar variables
x1 , x2 . We will refer to such systems as second order systems. They are described by
ẋ1 = f1(x1, x2)
ẋ2 = f2(x1, x2)   (4.1)
4.1 Linear systems

ẋ = Ax ,   A = [ a11  a12 ; a21  a22 ]
eigenvalues λ1, λ2              nature of the origin
real:    λ1, λ2 < 0              stable node
real:    0 < λ1, λ2              unstable node
real:    λ1 < 0 < λ2             saddle
complex: ℜ(λ1) < 0               stable focus
complex: ℜ(λ1) > 0               unstable focus
complex: ℜ(λ1) = 0               center
4.2 Linearization
Suppose xe = (x1e, x2e) is an equilibrium state of system (4.1). Then the linearization of (4.1) about xe is given by:

δẋ1 = a11 δx1 + a12 δx2   (4.2a)
δẋ2 = a21 δx1 + a22 δx2   (4.2b)

where

aij = (∂fi/∂xj)(x1e, x2e)

for i, j = 1, 2.
The behavior of a second order nonlinear system near an equilibrium state is qualitatively the
same as the behavior of the corresponding linearized system about 0, provided the eigenvalues
of the linearized system have non-zero real part.
Example 16 Van der Pol oscillator. Consider

ẅ + φ(ẇ) + w = 0

where φ is a nonlinear function. If one lets y = ẇ and differentiates the above equation, one obtains

ÿ + φ′(y)ẏ + y = 0 .

Considering φ(y) = ε(y³/3 − y) yields

ÿ − ε(1 − y²)ẏ + y = 0

Introducing state variables x1 = y and x2 = ẏ, this system can be described by

ẋ1 = x2
ẋ2 = −x1 + ε(1 − x1²)x2

This system has a single equilibrium state at the origin. Linearization about the origin results in

ẋ1 = x2
ẋ2 = −x1 + εx2 .

The eigenvalues λ1, λ2 of this linearized system are given by

λ1 = ε/2 − √(ε²/4 − 1)   and   λ2 = ε/2 + √(ε²/4 − 1) .

If we consider 0 < ε < 2, then λ1 and λ2 are complex numbers with positive real parts. Hence, the origin is an unstable focus of the linearized system. Thus, the origin is an unstable focus of the original nonlinear system. Figure 4.1 contains some state trajectories of this system for ε = 0.3.
Note that, although the system has a single equilibrium state and this is unstable, all solutions of the system are bounded. This behavior does not occur in linear systems. If the origin of a linear system is unstable, then the system will have some unbounded solutions.
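This behavior — an unstable equilibrium with all solutions bounded — can be reproduced numerically; a rough Euler sketch (ours) with ε = 0.3, as in Figure 4.1:

```python
def vdp(x, eps=0.3):
    # Van der Pol system in state space form.
    x1, x2 = x
    return (x2, -x1 + eps * (1.0 - x1 * x1) * x2)

# Start very close to the unstable equilibrium at the origin.
x = (0.01, 0.0)
dt = 0.001
for _ in range(60000):  # integrate to t = 60
    dx = vdp(x)
    x = (x[0] + dt * dx[0], x[1] + dt * dx[1])
# The trajectory spirals away from the origin but settles onto a
# bounded limit cycle of radius roughly 2.
```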
Consider now the undamped Duffing system

ÿ − y + y³ = 0

which has three equilibrium solutions: ye = 0 and ye = ±1. Linearization about the origin results in

ÿ − y = 0 .

A state space description of this linearized system is given by

ẋ1 = x2
ẋ2 = x1

The eigenvalues λ1, λ2 and corresponding eigenvectors v1, v2 associated with this linear system are given by

λ1 = −1 ,  v1 = (1, −1)   and   λ2 = 1 ,  v2 = (1, 1) .

So the origin of the linearized system is a saddle; see Figure 4.2. Hence the origin is a saddle for the original nonlinear system.
Linearization about either of the two non-zero equilibrium solutions results in

ÿ + 2y = 0 .

The eigenvalues associated with any state space realization of this system are

λ1 = j√2   and   λ2 = −j√2 .
[Figures 4.2, 4.3: phase portraits in the (y, ẏ) plane]
4.3
Isocline method
See Khalil.
4.4
Introduce polar coordinates r, θ:

x1 = r cos θ   (4.3a)
x2 = r sin θ   (4.3b)

Note that

r = (x1² + x2²)^{1/2}

and, when (x1, x2) ≠ (0, 0), …
Exercises
Exercise 7 Determine the nature of each equilibrium state of the damped Duffing system
ÿ + 0.1ẏ − y + y³ = 0 .
Numerically obtain the phase portrait.
Exercise 8 Determine the nature (if possible) of each equilibrium state of the simple pendulum
ÿ + sin y = 0 .
Numerically obtain the phase portrait.
Exercise 9 Numerically obtain a state portrait of the following system:
ẋ1 = x2²
ẋ2 = x1x2
Based on the state portrait, predict the stability properties of each equilibrium state.
Chapter 5
Some general considerations
Consider the differential equation

ẋ = f(x)   (5.1)

where x is an n-vector. By a solution of (5.1) we mean any continuous function x(·) : [0, t1) → IRn, with t1 > 0, which satisfies (5.1). Consider an initial condition

x(0) = x0 ,   (5.2)

where x0 is some specified initial state. The corresponding initial value problem associated with (5.1) is that of finding a solution to (5.1) which satisfies the initial condition (5.2). If the system is linear then, for every initial state x0, there is a solution to the corresponding initial value problem; this solution is unique and is defined for all t ≥ 0, that is, t1 = ∞. One cannot make the same statement for nonlinear systems in general.
5.1 Existence of solutions
Consider

ẋ = f(x) ,   f(x) = −1 if x ≥ 0,  1 if x < 0 ,

with initial condition x(0) = 0. This initial value problem has no solution.
5.2 Uniqueness of solutions
The above initial value problem has an infinite number of solutions. They are given by

x(t) = 0   for 0 ≤ t ≤ T
x(t) = (g/2)(t − T)²   for t ≥ T

where T ≥ 0 is arbitrary.
Fact. Differentiability of f guarantees uniqueness. The f above is not differentiable at 0.
5.3
Fact. If a solution cannot be extended indefinitely, that is, over [0, ∞), then it must have a finite escape time Te; that is, Te is finite and

lim_{t→Te} ||x(t)|| = ∞ .

Hence, if a solution is bounded on bounded time intervals then it cannot have a finite escape time and, hence, can be extended over [0, ∞).
The following condition guarantees that solutions can be extended indefinitely. There are constants α and β such that

||f(x)|| ≤ α||x|| + β

for all x. Here one can show that, when α ≠ 0,

||x(t)|| ≤ e^{α(t−t0)} ||x(t0)|| + (β/α)(e^{α(t−t0)} − 1) .

When α = 0,

||x(t)|| ≤ ||x(t0)|| + β(t − t0) .
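The bound can be checked on a system where everything is explicit. For ẋ = x + 1 we have ||f(x)|| ≤ ||x|| + 1, so α = β = 1, and for x0 ≥ 0 the bound above is attained with equality; a sketch (ours):

```python
import math

def x_exact(t, x0):
    # Exact solution of xdot = x + 1 from x(0) = x0.
    return (x0 + 1.0) * math.exp(t) - 1.0

def bound(t, x0, alpha=1.0, beta=1.0):
    # ||x(t)|| <= e^{alpha t} ||x(0)|| + (beta/alpha)(e^{alpha t} - 1)
    return (math.exp(alpha * t) * abs(x0)
            + (beta / alpha) * (math.exp(alpha * t) - 1.0))

# For x0 >= 0 the solution sits exactly on the bound.
vals = [(x_exact(t, 2.0), bound(t, 2.0)) for t in (0.0, 1.0, 2.0)]
```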
Chapter 6
Stability and boundedness
Consider a general nonlinear system described by
ẋ = f(x)   (6.1)

where x(t) is a real n-vector and t is a real scalar. By a solution of (6.1) we mean any continuous function x(·) : [0, t1) → IRn, with t1 > 0, which satisfies ẋ(t) = f(x(t)) for 0 ≤ t < t1.
6.1 Boundedness of solutions
A solution x(·) is said to be bounded if there is some β ≥ 0 such that ||x(t)|| ≤ β for all t ≥ 0.
Recall the system ẋ = x² with x(0) = x0. If x0 > 0, the corresponding solution has a finite escape time and is unbounded. If x0 < 0, the corresponding solution is bounded.
Example 27 Undamped oscillator.
ẋ1 = x2
ẋ2 = −x1
All solutions are bounded.
Boundedness and linear time-invariant systems. Consider a general LTI (linear time-invariant) system

ẋ = Ax   (6.2)

Recall that every solution of this system has the form

x(t) = Σ_{i=1}^{l} Σ_{j=0}^{ni−1} t^j e^{λi t} v^{ij}

where λ1, . . . , λl are the eigenvalues of A, ni is the index of λi, and the constant vectors v^{ij} depend on the initial state.
We say that an eigenvalue λ of A is non-defective if its index is one; this means that the algebraic multiplicity and the geometric multiplicity of λ are the same. Otherwise we say λ is defective.
Hence we conclude that all solutions of (6.2) are bounded if and only if, for each eigenvalue λi of A:
(b1) ℜ(λi) ≤ 0, and
(b2) if ℜ(λi) = 0 then λi is non-defective.
If there is an eigenvalue λi of A such that either
(u1) ℜ(λi) > 0, or
(u2) ℜ(λi) = 0 and λi is defective,
then the system has some unbounded solutions.
Example 28 The system ẋ = Ax with

A = [ 0 1 ; 0 0 ]

has a single eigenvalue 0. This eigenvalue has algebraic multiplicity 2 but geometric multiplicity 1; hence some of the solutions of the system are unbounded. One example is

x(t) = (t, 1) .
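The unbounded growth caused by the defective eigenvalue is explicit here, since e^{At} = [1 t; 0 1] for this A; a sketch (ours):

```python
def solution(t, x0):
    # Exact solution of xdot = A x with A = [[0, 1], [0, 0]]:
    # exp(A t) = [[1, t], [0, 1]], so x1 picks up a term linear in t.
    return (x0[0] + t * x0[1], x0[1])

x0 = (0.0, 1.0)
x10 = solution(10.0, x0)    # x1 has grown to 10
x100 = solution(100.0, x0)  # x1 has grown to 100
```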
Example 29
ẋ1 = 0
ẋ2 = 0
Here
A = [ 0 0 ; 0 0 ]
has a single eigenvalue 0. This eigenvalue has algebraic multiplicity 2 and geometric multiplicity 2. Hence all the solutions of the system are bounded. Actually every state is an
equilibrium state and every solution is constant.
Example 30 (Resonance) Consider a simple linear oscillator subject to a sinusoidal input of amplitude W:

q̈ + q = W sin(ωt + φ)

Resonance occurs when ω = 1. To see this, let

x1 := q ,   x2 := q̇ ,   x3 := W sin(ωt + φ) ,   x4 := ωW cos(ωt + φ)

to yield

ẋ = Ax

where

A = [ 0 1 0 0 ; −1 0 1 0 ; 0 0 0 1 ; 0 0 −ω² 0 ]

If ω = 1 then A has eigenvalues j and −j. These eigenvalues have algebraic multiplicity two but geometric multiplicity one; hence the system has unbounded solutions.
6.2
6.3 Asymptotic stability
6.3.1 Global asymptotic stability
If xe is a globally asymptotically stable equilibrium state, then there are no other equilibrium states and all solutions are bounded. In this case we say that the system ẋ = f(x) is globally asymptotically stable (GAS).
Example 37 The system
ẋ = −x
is GAS.
Example 38 The system
ẋ = −x³
is GAS.
6.3.2 Asymptotic stability
In asymptotic stability, we do not require that all solutions converge to the equilibrium state;
we only require that all solutions which originate in some neighborhood of the equilibrium
state converge to the equilibrium state.
DEFN. (Asymptotic stability) An equilibrium state xe is asymptotically stable (AS) if
(a) It is stable.
(b) There exists R > 0 such that whenever ||x(0) − xe || < R one has
lim_{t→∞} x(t) = xe .    (6.3)
The region of attraction of an equilibrium state xe which is AS is the set of initial states
which result in (6.3), that is, it is the set of initial states which are attracted to xe . Thus, the
region of attraction of a globally asymptotically stable equilibrium state is the whole state space.
Example 40
ẋ = x − x³
The equilibrium states −1 and 1 are AS with regions of attraction (−∞, 0) and (0, ∞),
respectively.
Example 41 Damped simple pendulum
ẋ1 = x2
ẋ2 = −sin x1 − x2
The zero state is AS but not GAS. Also, the system has unbounded solutions.
LTI systems. For LTI systems, it should be clear from the general form of the solution
that the zero state is AS if and only if all the eigenvalues λi of A have negative real parts,
that is,
ℜ(λi ) < 0 .
Also AS is equivalent to GAS.
Example 43 The system
ẋ1 = −x1 + x2
ẋ2 = −x2
is GAS. The system
ẋ1 = −x1 + x2
ẋ2 = −x1 − x2
is also GAS.
6.4
Exponential stability
||x(t) − xe || ≤ β ||x(0) − xe || exp(−αt)    for all t ≥ 0 .
Example 45
ẋ = −x
is GES with rate α = 1.
Note that global exponential stability implies global asymptotic stability, but, in general,
the converse is not true. This is illustrated in the next example. For linear time-invariant
systems, GAS and GES are equivalent.
Example 46
ẋ = −x³
Solutions satisfy
x(t) = x0 /√(1 + 2x0² t)
where
x0 = x(0) .
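This closed form can be checked numerically, and the absence of an exponential decay rate made visible (a sketch; x0, the time grid, and the comparison rate 0.1 are arbitrary choices):

```python
import numpy as np

x0 = 2.0
t = np.linspace(0.0, 100.0, 1_000_001)
x = x0 / np.sqrt(1.0 + 2.0 * x0**2 * t)

# the closed form satisfies xdot = -x^3 (up to finite-difference error)
resid = np.max(np.abs(np.gradient(x, t) + x**3))

# decay is only ~ 1/sqrt(2t), so any exponential envelope is eventually crossed
crosses = x[-1] > x0 * np.exp(-0.1 * t[-1])
```

The residual confirms the solution formula, while the comparison shows x(t) eventually exceeds x0 e^{−0.1 t}: the decay is not exponential.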
ẋ1 = −x1 + cos(x1 )x2
ẋ2 = −2x2 − cos(x1 )x1
DEFN. (Exponential stability) An equilibrium state xe is exponentially stable (ES) with rate of
convergence α > 0 if there exist R > 0 and β > 0 such that whenever ||x(0) − xe || < R one
has
||x(t) − xe || ≤ β ||x(0) − xe || exp(−αt)    for all t ≥ 0 .
Note that exponential stability implies asymptotic stability, but, in general, the converse is
not true.
Example 48
ẋ = −x³
is GAS but not even ES.
Example 49
ẋ = −x/(1 + x²)
is GAS and ES, but not GES.
6.5
LTI systems
Every solution of the LTI system ẋ = Ax can be expressed as
x(t) = Σ_{i=1}^{l} Σ_{j=0}^{ni −1} t^j e^{λi t} v^{ij}    (6.4)
where λ1 , . . . , λl are the eigenvalues of A, the integer ni is the index of λi , and the constant
vectors v^{ij} depend on the initial state. From this it follows that the stability properties of this
system are completely determined by the location of its eigenvalues; this is summarized in
the table below.
The following table summarizes the relationship between the stability properties of a LTI
system and the eigenproperties of its A-matrix. In the table, unless otherwise stated, a
property involving λ must hold for all eigenvalues λ of A.
Stability properties                    Eigenproperties

Asymptotic stability and boundedness    ℜ(λ) < 0

Stability and boundedness               ℜ(λ) ≤ 0 ;
                                        if ℜ(λ) = 0 then λ is non-defective.
is GES.
Example 51 The system
ẋ1 = −x1 + x2
ẋ2 = −x1 − x2
is GES.
Example 52 The system
ẋ1 = 0
ẋ2 = 0
is stable about every equilibrium point.
Example 53 The system
ẋ1 = x2
ẋ2 = 0
is unstable about every equilibrium point.
Figure 6.6: The big picture. The concepts in each dashed box are equivalent for linear
systems.
6.6
where A = (∂f /∂x)(xe ) .
The following results can be demonstrated using nonlinear Lyapunov stability theory.
Stability. If all the eigenvalues of the A matrix of the linearized system have negative real
parts, then the nonlinear system is exponentially stable about xe .
Instability. If at least one eigenvalue of the A matrix of the linearized system has a positive
real part, then the nonlinear system is unstable about xe .
Undetermined. Suppose all the eigenvalues of the A matrix of the linearized system have
non-positive real parts and at least one eigenvalue of A has zero real part. Then, based on
the linearized system alone, one cannot predict the stability properties of the nonlinear system
about xe .
Note that the first statement above is equivalent to the following statement. If the
linearized system is exponentially stable, then the nonlinear system is exponentially stable
about xe .
Example 54 (Damped simple pendulum.) Physically, the system
ẋ1 = x2
ẋ2 = −sin x1 − x2
has two distinct equilibrium states: (0, 0) and (π, 0). The A matrix for the linearization of
this system about (0, 0) is
A = [ 0  1 ; −1  −1 ]
Since all the eigenvalues of this matrix have negative real parts, the nonlinear system is
exponentially stable about (0, 0). The A matrix corresponding to linearization about (π, 0)
is
A = [ 0  1 ; 1  −1 ]
Since this matrix has an eigenvalue with a positive real part, the nonlinear system is unstable
about (π, 0).
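A quick numerical check of the eigenvalue claims for the two pendulum linearizations (a sketch, using the sign convention ẋ2 = −sin x1 − x2 and NumPy's eigenvalue routine in place of the hand computation):

```python
import numpy as np

A_down = np.array([[0., 1.], [-1., -1.]])  # linearization about (0, 0)
A_up   = np.array([[0., 1.], [ 1., -1.]])  # linearization about (pi, 0)

max_re_down = np.linalg.eigvals(A_down).real.max()  # should be negative
max_re_up   = np.linalg.eigvals(A_up).real.max()    # should be positive
```

The downward equilibrium has eigenvalues with real part −1/2; the upward one has the eigenvalue (−1 + √5)/2 > 0.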
The following example illustrates the fact that if the eigenvalues of the A matrix have non-positive real parts and there is at least one eigenvalue with zero real part, then one cannot
draw any conclusions about the stability of the nonlinear system based on the linearization.
Exercises
Exercise 10 For each of the following systems, determine (from the state portrait) the
stability properties of each equilibrium state. For AS equilibrium states, give the region of
attraction.
(a)
ẋ = x − x³
(b)
ẋ = −x + x³
(c)
ẋ = x − 2x² + x³
Exercise 11 If possible, use linearization to determine the stability properties of each of
the following systems about the zero equilibrium state.
(i)
ẋ1 = (1 + x1²)x2
ẋ2 = −x1³
(ii)
ẋ1 = sin x2
ẋ2 = (cos x1 )x3
ẋ3 = e^{x1} x2
Exercise 12 If possible, use linearization to determine the stability properties of each equilibrium state of the Lorenz system.
Chapter 7
Stability and boundedness: discrete
time
7.1
Boundedness of solutions
Consider a system described by
x(k+1) = f (x(k))    (7.1)
for all k ≥ 0.
Linear time invariant (LTI) systems. All solutions of the LTI system
x(k+1) = Ax(k)
(7.2)
7.2
Stability
Example 62
x(k+1) = 2x(k) .
The origin is unstable.
Example 63
x(k+1) = x(k)³ .
The single equilibrium at the origin is stable, but the system has unbounded solutions.
LTI systems. Every equilibrium state of a LTI system (7.2) is stable if and only if all
eigenvalues of A satisfy conditions (b1) and (b2) above. Hence every equilibrium state of
a LTI system is stable if and only if all solutions are bounded.
Every equilibrium state is unstable if and only if there is an eigenvalue of A which
satisfies condition (u1) or (u2) above. Hence every equilibrium state of a LTI system is
unstable if and only if there are unbounded solutions.
7.3
Asymptotic stability
7.3.1
Global asymptotic stability
If xe is a globally asymptotically stable equilibrium state, then there are no other equilibrium states. In this case we say the system (7.1) is globally asymptotically stable.
Example 64
x(k+1) = (1/2) x(k)
is GAS.
Example 65
x(k+1) = x(k)/(2 + x(k)²)
is also GAS.
7.3.2
Asymptotic stability
(7.3)
The region of attraction of an equilibrium state xe which is AS is the set of initial states
which result in (7.3), that is it is the set of initial states which are attracted to xe . Thus, the
region of attraction of a globally asymptotically stable equilibrium state is the whole state space.
Example 66
x(k+1) = x(k)³
The origin is AS with region of attraction (−1, 1).
LTI systems. For LTI systems, it should be clear from the general form of the solution
that the zero state is AS if and only if all the eigenvalues λi of A have magnitude less than
one, that is,
|λi | < 1 .
Also AS is equivalent to GAS.
7.4
Exponential stability
||x(k) − xe || ≤ β αᵏ ||x(0) − xe ||    for all k ≥ 0
Example 67
x(k+1) = (1/2) x(k)
is GES with rate α = 1/2.
Note that global exponential stability implies global asymptotic stability, but, in general,
the converse is not true. This is illustrated in the next example. For linear time-invariant
systems, GAS and GES are equivalent.
Lemma 1 Consider a system described by x(k + 1) = f (x(k)) and suppose that for some
scalar β ≥ 0,
||f (x)|| ≤ β ||x||    for all x.
Then, every solution x(·) of the system satisfies
||x(k)|| ≤ βᵏ ||x(0)||    for all k ≥ 0.
In particular, if β < 1 then the system is globally exponentially stable.
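Lemma 1 can be illustrated with f (x) = (1/2) sin x, which satisfies |f (x)| ≤ (1/2)|x| for all x (a sketch; the initial state 3.0 is an arbitrary choice):

```python
import math

beta, x0 = 0.5, 3.0
x = x0
bound_holds = True
for k in range(1, 31):
    x = 0.5 * math.sin(x)                    # |f(x)| <= 0.5 |x| for all x
    if abs(x) > beta**k * abs(x0) + 1e-15:   # check the lemma's bound
        bound_holds = False
```

Every iterate stays under the geometric envelope (1/2)^k |x(0)|, so the origin is GES with rate 1/2.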
Example 68
x(k+1) = x(k)/√(1 + 2x(k)²)
Solutions satisfy
x(k) = x0 /√(1 + 2k x0²)
where
x0 = x(0) .
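The closed form can be checked against the recursion directly (a sketch; x0 is arbitrary):

```python
x0 = 3.0
x = x0
ok = True
for k in range(1, 21):
    x = x / (1.0 + 2.0 * x * x) ** 0.5              # the recursion
    closed = x0 / (1.0 + 2.0 * k * x0 * x0) ** 0.5  # the claimed solution
    ok = ok and abs(x - closed) < 1e-12
```

The agreement is exact up to rounding because 1/x(k)² increases by exactly 2 at every step.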
||x(k) − xe || ≤ β αᵏ ||x(0) − xe ||    for all k ≥ 1
Note that exponential stability implies asymptotic stability, but, in general, the converse
is not true.
Example 69
x(k+1) = x(k)/√(1 + 2x(k)²)
Solutions satisfy
x(k) = x0 /√(1 + 2k x0²)
where
x0 = x(0) .
This system is GAS but not ES.
7.5
LTI systems
The following table summarizes the relationship between the stability properties of a LTI
system and the eigenproperties of its A-matrix. In the table, unless otherwise stated, a
property involving λ must hold for all eigenvalues λ of A.
Stability properties                    Eigenproperties

Asymptotic stability and boundedness    |λ| < 1

Stability and boundedness               |λ| ≤ 1 ;
                                        if |λ| = 1 then λ is non-defective
7.6
where A = (∂f /∂x)(xe ) .
The following results can be demonstrated using nonlinear Lyapunov stability theory.
Stability. If all the eigenvalues of the A matrix of the linearized system have magnitude less
than one, then the nonlinear system is exponentially stable about xe .
Instability. If at least one eigenvalue of the A matrix of the linearized system has magnitude
greater than one, then the nonlinear system is unstable about xe .
Undetermined. Suppose all the eigenvalues of the A matrix of the linearized system have
magnitude less than or equal to one and at least one eigenvalue of A has magnitude one.
Then, based on the linearized system alone, one cannot predict the stability properties of the
nonlinear system about xe .
Note that the first statement above is equivalent to the following statement. If the
linearized system is exponentially stable, then the nonlinear system is exponentially stable
about xe .
Example 70 (Newton's method) Recall that Newton's method for a scalar function g can
be described by
x(k+1) = x(k) − g(x(k))/g′(x(k)) .
Here
f (x) = x − g(x)/g′(x) .
So,
f ′(x) = 1 − ( g′(x)² − g(x)g″(x) )/g′(x)² = g(x)g″(x)/g′(x)² .
At an equilibrium state xe , we have g(xe ) = 0; hence f ′(xe ) = 0 and the linearization about
any equilibrium state is given by
δx(k+1) = 0 .
Thus, every equilibrium state is exponentially stable.
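The zero linearization is what makes Newton's method so fast near a root. As an illustration (a sketch; g(x) = x² − 2 and the starting point 1.5 are arbitrary choices, not from the notes):

```python
import math

def newton_step(x):
    g, dg = x * x - 2.0, 2.0 * x   # g(x) = x^2 - 2 and g'(x)
    return x - g / dg

x = 1.5
errors = []
for _ in range(5):
    x = newton_step(x)
    errors.append(abs(x - math.sqrt(2.0)))  # distance to the root sqrt(2)
```

The error drops roughly quadratically each step: faster than any fixed geometric rate, consistent with a linearization that is identically zero.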
Chapter 8
Basic Lyapunov theory
Suppose we are interested in the stability properties of the system
ẋ = f (x)    (8.1)
where x(t) is a real n-vector at time t. If the system is linear, we can determine its stability
properties from the properties of the eigenvalues of the system matrix. What do we do for
a nonlinear system? We could linearize about each equilibrium state and determine the
stability properties of the resulting linearizations. Under certain conditions this will tell us
something about the local stability properties of the nonlinear system about its equilibrium
states. However there are situations where linearization cannot be used to deduce even the
local stability properties of the nonlinear system. Also, linearization tells us nothing about
the global stability properties of the nonlinear system.
In general, we cannot explicitly obtain solutions for nonlinear systems. However, Lyapunov theory allows us to say something about the stability properties of a system without
knowing the form or structure of its solutions. Lyapunov theory is based on Lyapunov functions which are scalar-valued functions of the system state.
Suppose V is a scalar-valued function of the state, that is, V : IRⁿ → IR. If V is
continuously differentiable then, at any time t, the derivative of V along a solution x(·) of
system (8.1) is given by
(dV /dt)(x(t)) = DV (x(t)) ẋ(t)
              = DV (x(t)) f (x(t))
where DV (x) is the derivative of V at x and is given by
DV (x) = [ ∂V/∂x1 (x)    ∂V/∂x2 (x)    . . .    ∂V/∂xn (x) ]
Note that
DV (x)f (x) = (∂V/∂x1)(x) f1 (x) + (∂V/∂x2)(x) f2 (x) + . . . + (∂V/∂xn)(x) fn (x)
In what follows, if a condition involves DV , then it is implicitly assumed that V is continuously differentiable. Sometimes DV is denoted by
∂V/∂x    or    ∇V (x)ᵀ .
8.1
Stability
8.1.1
Locally positive definite functions
DEFN. (Locally positive definite function) A function V is locally positive definite (lpd) about
a point xe if
V (xe ) = 0
and there is a scalar R > 0 such that
V (x) > 0    whenever x ≠ xe and ||x − xe || < R .
Basically, a function is lpd about a point xe if it is zero at xe and has a strict local minimum
at xe .
V (x) = 1 − e^{−x²}
V (x) = 1 − cos x
V (x) = x² − x⁴
Example 72
V (x) = ||x||² = x1² + x2² + . . . + xn²
Lpd about the origin.
Example 73
V (x) = ||x||₁ = |x1 | + |x2 | + . . . + |xn |
Lpd about the origin.
V (x) = xᵀ P x = Σ_{i=1}^{n} Σ_{j=1}^{n} P_ij x_i x_j .
Clearly V (0) = 0. Recalling the definition of a positive definite matrix, it follows that
V (x) = xT P x > 0 for all nonzero x. Hence V is locally positive definite about the origin.
The second derivative of V at x is the square symmetric matrix given by
D²V (x) := [ ∂²V/∂x1²    ∂²V/∂x1∂x2    . . .    ∂²V/∂x1∂xn ;
             ∂²V/∂x2∂x1  ∂²V/∂x2²      . . .    ∂²V/∂x2∂xn ;
             ⋮ ;
             ∂²V/∂xn∂x1  ∂²V/∂xn∂x2    . . .    ∂²V/∂xn² ]
(each entry evaluated at x); that is,
D²V (x)_ij = (∂²V/∂xi ∂xj )(x) .
Consider
V (x) = 1 − cos x1 + (1/2) x2² .
Clearly
V (0) = 0 .
Since
DV (x) = [ sin x1    x2 ]
we have
DV (0) = 0 .
Also,
D²V (x) = [ cos x1  0 ; 0  1 ] ;
hence,
D²V (0) = [ 1  0 ; 0  1 ] > 0 .
Since V satisfies the hypotheses of the previous lemma with xe = 0, it is lpd about zero.
8.1.2
A stability result
If the equilibrium state of a nonlinear system is stable but not asymptotically stable, then
one cannot deduce the stability properties of the equilibrium state of the nonlinear system
from the linearization of the nonlinear system about that equilibrium state. The following
Lyapunov result is useful in demonstrating stability of an equilibrium state for a nonlinear
system.
Theorem 1 (Stability) Suppose there exists a function V and a scalar R > 0 such that V is
locally positive definite about xe and
DV (x)f (x) ≤ 0    for ||x − xe || < R .
Then xe is a stable equilibrium state.
k > 0 .
Consider
V (x) = (1/2) k x1² + (1/2) x2²
as a candidate Lyapunov function. Then V is lpd about the origin and
DV (x)f (x) = 0
Hence the origin is stable.
Example 78 (Simple pendulum.)
ẋ1 = x2
ẋ2 = −sin x1
Consider the total energy,
V (x) = 1 − cos x1 + (1/2) x2²
as a candidate Lyapunov function. Then, as we have already shown, V is lpd about the origin;
also
DV (x)f (x) = 0
Hence the origin is stable.
Example 79 (Stability of origin for attitude dynamics system.) Recall
ẋ1 = ((I2 − I3 )/I1 ) x2 x3
ẋ2 = ((I3 − I1 )/I2 ) x3 x1
ẋ3 = ((I1 − I2 )/I3 ) x1 x2
where
I1 , I2 , I3 > 0 .
Consider the kinetic energy
V (x) = (1/2)( I1 x1² + I2 x2² + I3 x3² )
As candidate Lyapunov function for the equilibrium state xe = [ 1  0 ]ᵀ consider the total
energy
V (x) = (1/4) x1⁴ − (1/2) x1² + (1/2) x2² + 1/4 .
Since
DV (x) = [ x1³ − x1    x2 ]    and    D²V (x) = [ 3x1² − 1  0 ; 0  1 ]
we have V (xe ) = 0, DV (xe ) = 0 and D²V (xe ) > 0, and it follows that V is lpd about xe .
One can readily verify that DV (x)f (x) = 0. Hence, xe is stable.
Exercises
Exercise 13 Determine whether or not the following functions are lpd.
(a)
V (x) = x1² − x1⁴ + x2²
(b)
V (x) = x1 + x2²
(c)
V (x) = 2x1² − x1³ + x1 x2 + x2²
Exercise 14 (Simple pendulum with Coulomb damping.) By appropriate choice of Lyapunov function, show that the origin is a stable equilibrium state for
ẋ1 = x2
ẋ2 = −sin x1 − c sgm (x2 )
where c > 0.
Exercise 15 By appropriate choice of Lyapunov function, show that the origin is a stable
equilibrium state for
ẋ1 = x2
ẋ2 = −x1³
Note that the linearization of this system about the origin is unstable.
Exercise 16 By appropriate choice of Lyapunov function, show that the origin is a stable
equilibrium state for
ẋ1 = x2
ẋ2 = −x1 + x1³
8.2
Asymptotic stability
The following result presents conditions which guarantee that an equilibrium state is asymptotically stable.
Theorem 2 (Asymptotic stability) Suppose there exists a function V and a scalar R > 0
such that V is locally positive definite about xe and
DV (x)f (x) < 0    for x ≠ xe and ||x − xe || < R .
Then xe is an asymptotically stable equilibrium state.
Hence the origin is AS. Although the origin is AS, there are solutions which go unbounded
in a finite time.
Example 83
ẋ = −sin x
Consider
V (x) = x² .
Then V is lpd about zero and
DV (x)f (x) = −2x sin(x) < 0    for 0 < |x| < π .
Hence the origin is AS.
Example 84 (Simple pendulum with viscous damping.) Intuitively, we expect the origin to
be an asymptotically stable equilibrium state for the damped simple pendulum:
ẋ1 = x2
ẋ2 = −sin x1 − cx2
where c > 0 is the damping coefficient. If we consider the total mechanical energy
V (x) = 1 − cos x1 + (1/2) x2²
as a candidate Lyapunov function, we obtain
DV (x)f (x) = −c x2² .
Since DV (x)f (x) ≤ 0 for all x, we have stability of the origin. Since DV (x)f (x) = 0
whenever x2 = 0, it follows that DV (x)f (x) = 0 for points arbitrarily close to the origin;
hence V does not satisfy the requirements of the above theorem for asymptotic stability.
Suppose we modify V to
V (x) = (1/2) ε c² x1² + ε c x1 x2 + (1/2) x2² + 1 − cos x1
where ε is any scalar with 0 < ε < 1. Letting
P = (1/2) [ εc²  εc ; εc  1 ]
note that P > 0 and
V (x) = xᵀ P x + 1 − cos x1 ≥ xᵀ P x .
Hence V is lpd about zero and we obtain
DV (x)f (x) = −εc x1 sin x1 − (1 − ε)c x2² < 0    for all nonzero x near the origin.
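The expression for DV (x)f (x) can be verified by comparing it against the chain-rule computation at random points (a sketch; the values of c and ε are arbitrary test choices):

```python
import math
import random

c, eps = 0.7, 0.3     # arbitrary damping c > 0 and 0 < eps < 1
random.seed(1)
max_dev = 0.0
for _ in range(200):
    x1, x2 = random.uniform(-3, 3), random.uniform(-3, 3)
    dV1 = eps * c * c * x1 + eps * c * x2 + math.sin(x1)  # dV/dx1
    dV2 = eps * c * x1 + x2                               # dV/dx2
    lhs = dV1 * x2 + dV2 * (-math.sin(x1) - c * x2)       # DV(x) f(x)
    rhs = -eps * c * x1 * math.sin(x1) - (1 - eps) * c * x2 * x2
    max_dev = max(max_dev, abs(lhs - rhs))
```

The two expressions agree to rounding error, confirming the cross terms cancel as claimed.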
Exercises
Exercise 17 By appropriate choice of Lyapunov function, show that the origin is an asymptotically stable equilibrium state for
ẋ1 = x2
ẋ2 = −x1⁵ − x2
Exercise 18 By appropriate choice of Lyapunov function, show that the origin is an asymptotically stable equilibrium state for
ẋ1 = x2
ẋ2 = −x1 + x1³ − x2
8.3
Boundedness
8.3.1
Radially unbounded functions
Example 85
V (x) = x²                  radially unbounded?   yes
V (x) = 1 − e^{−x²}                               no
V (x) = x² − x                                    yes
V (x) = x⁴ − x²                                   yes
V (x) = x sin x                                   no
V (x) = x²(1 − cos x)                             no
8.3.2
A boundedness result
Theorem 3 Suppose there exists a radially unbounded function V and a scalar R ≥ 0 such
that
DV (x)f (x) ≤ 0    for ||x|| ≥ R .
Then all solutions of ẋ = f (x) are bounded.
Note that, in the above theorem, V does not have to be positive away from the origin; it
only has to be radially unbounded.
Example 87 Recall
ẋ = x − x³ .
Consider
V (x) = x² .
Since V is radially unbounded and
DV (x)f (x) = −2x² (x² − 1) ,
the hypotheses of the above theorem are satisfied with R = 1; hence all solutions are bounded.
Note that the origin is unstable.
Example 88 (Duffing's equation)
ẋ1 = x2
ẋ2 = x1 − x1³
Consider
V (x) = (1/2) x2² − (1/2) x1² + (1/4) x1⁴ .
It should be clear that V is radially unbounded; also
DV (x)f (x) = 0 ≤ 0    for all x .
So, the hypotheses of the above theorem are satisfied with any R; hence all solutions are
bounded.
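Since DV (x)f (x) = 0, V is constant along solutions of Duffing's equation, and a simulation shows this (a sketch; the RK4 integrator, step size, horizon, and initial state are arbitrary choices):

```python
import numpy as np

def f(x):
    # Duffing's equation: x1' = x2, x2' = x1 - x1^3
    return np.array([x[1], x[0] - x[0]**3])

def rk4_step(x, h):
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def V(x):
    return 0.5 * x[1]**2 - 0.5 * x[0]**2 + 0.25 * x[0]**4

x = np.array([2.0, 0.0])
v0 = V(x)
drift = 0.0
for _ in range(10_000):
    x = rk4_step(x, 1e-3)
    drift = max(drift, abs(V(x) - v0))   # energy drift along the trajectory
```

Over the whole run the energy drifts only by integrator error, consistent with DV (x)f (x) = 0 and with solutions staying on bounded level sets of V.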
Example 89
ẋ = x − x³ + w
where |w| ≤ β for some constant β ≥ 0.
Example 90
ẋ = Ax + Bw
where A is Hurwitz and |w| ≤ β.
Exercises
Exercise 19 Determine whether or not the following function is radially unbounded.
V (x) = x1 − x1³ + x1⁴ − x2² + x2⁴
Exercise 20 (Forced Duffing's equation with damping.) Show that all solutions of the
system
ẋ1 = x2
ẋ2 = x1 − x1³ − cx2 + 1 ,    c > 0
are bounded.
Hint: Consider
V (x) = (1/2) ε c² x1² + ε c x1 x2 + (1/2) x2² − (1/2) x1² + (1/4) x1⁴
where 0 < ε < 1. Letting
P = (1/2) [ εc²  εc ; εc  1 ]
note that P > 0 and
V (x) = xᵀ P x − (1/2) x1² + (1/4) x1⁴ ≥ xᵀ P x − 1/4 .
Exercise 21 Recall the Lorenz system
ẋ1 = σ(x2 − x1 )
ẋ2 = rx1 − x2 − x1 x3
ẋ3 = −bx3 + x1 x2
with σ, b > 0. Prove that all solutions of this system are bounded. (Hint: Consider V (x) =
r x1² + σ x2² + σ(x3 − 2r)² .)
8.4
Global asymptotic stability
8.4.1
Positive definite functions
for all x ≠ 0
pd
lpd but not pd
lpd but not pd
Example 92
V (x) = x1⁴ + x2²
Example 93
V (x) = ||x||² = x1² + x2² + . . . + xn²
Example 94 Suppose P is a real n × n positive definite symmetric matrix, and
consider the quadratic form defined by
V (x) = xT P x
V is a positive definite function.
The following lemma can be useful in demonstrating that a function has the first two
properties needed for a function to be pd.
Lemma 4 Suppose V is twice continuously differentiable and
V (0) = 0 ,
DV (0) = 0 ,
D²V (x) > 0    for all x .
Then V (x) > 0 for all x ≠ 0.
If V satisfies the hypotheses of the above lemma, it is not guaranteed to be radially unbounded, hence it is not guaranteed to be positive definite. Lemma 3 can be useful for
guaranteeing radial unboundedness. We also have the following lemma.
Lemma 5 Suppose V is twice continuously differentiable,
V (0) = 0
DV (0) = 0
and there is a positive definite symmetric matrix P such that
D²V (x) ≥ P    for all x .
Then
V (x) ≥ (1/2) xᵀ P x    for all x.
8.4.2
Theorem 4 (Global asymptotic stability) Suppose there exists a positive definite function V
such that
DV (x)f (x) < 0    for all x ≠ 0 .
Then the origin is a globally asymptotically stable equilibrium state for ẋ = f (x).
Example 95
ẋ = −x³
Considering
V (x) = x²
we obtain
DV (x)f (x) = −2x⁴ < 0    for all x ≠ 0 .
We have GAS. Note that linearization of this system about the origin cannot be used to
deduce the asymptotic stability of this system.
Example 96 The first nonlinear system
ẋ = −sgm (x) .
This system is not linearizable about its unique equilibrium state at the origin. Considering
V (x) = x²
we obtain
DV (x)f (x) = −2|x| < 0    for all x ≠ 0 .
Hence, we have GAS.
Example 97 Consider
ẋ1 = −x1 + ε sin(x1 x2 )
ẋ2 = −2x2 − ε sin(x1 x2 )
We will show that this system is GAS provided ε is small enough. Considering the positive
definite function
V (x) = x1² + x2²
as a candidate Lyapunov function, we have
V̇ = 2x1 ẋ1 + 2x2 ẋ2
V (x) = (1/2) ε k2² x1² + ε k2 x1 x2 + (1/2) x2² + (1/2) k1 x1² + 1 − cos x1
where 0 < ε < 1. Then V is pd (apply lemma 5) and
DV (x)f (x) = −ε k2 (k1 x1² + x1 sin x1 ) − (1 − ε) k2 x2² .
Since
| sin x1 | ≤ |x1 |    for all x1
it follows that
x1 sin x1 ≥ −x1²    for all x1 ;
hence
DV (x)f (x) ≤ −ε k2 (k1 − 1) x1² − (1 − ε) k2 x2² < 0    for all x ≠ 0
Consider the attitude dynamics system with control torques u1 , u2 , u3 :
ẋ1 = ((I2 − I3 )/I1 ) x2 x3 + u1 /I1
ẋ2 = ((I3 − I1 )/I2 ) x3 x1 + u2 /I2
ẋ3 = ((I1 − I2 )/I3 ) x1 x2 + u3 /I3
where
I1 , I2 , I3 > 0 .
Consider any linear controller of the form
ui = −k xi ,    i = 1, 2, 3 ,    k > 0 .
The resulting closed-loop system is
ẋ1 = ((I2 − I3 )/I1 ) x2 x3 − (k/I1 ) x1
ẋ2 = ((I3 − I1 )/I2 ) x3 x1 − (k/I2 ) x2
ẋ3 = ((I1 − I2 )/I3 ) x1 x2 − (k/I3 ) x3
Considering
V (x) = (1/2)( I1 x1² + I2 x2² + I3 x3² )
Exercises
Exercise 22 Determine whether or not the following function is positive definite.
V (x) = x1⁴ − x1² x2 + x2²
Exercise 23 Consider any scalar system described by
ẋ = −g(x)
where g has the following properties:
g(x) > 0 for x > 0
g(x) < 0 for x < 0
Show that this system is GAS.
Exercise 24 Recall the Lorenz system
ẋ1 = σ(x2 − x1 )
ẋ2 = rx1 − x2 − x1 x3
ẋ3 = −bx3 + x1 x2
Prove that if
b > 0    and    0 ≤ r < 1 ,
then this system is GAS about the origin. (Hint: Consider V (x) = (1/σ) x1² + x2² + x3² .)
Exercise 25 (Stabilization of the Duffing system.) Consider the Duffing system with a
scalar control input u(t):
ẋ1 = x2
ẋ2 = x1 − x1³ + u
Obtain a linear controller of the form
u = −k1 x1 − k2 x2
which results in a closed loop system which is GAS about the origin. Numerically simulate
the open loop system (u = 0) and the closed loop system for several initial conditions.
8.5
Exponential stability
8.5.1
Global exponential stability
Theorem 5 (Global exponential stability) Suppose there exists a function V and positive
scalars α, β1 , β2 such that for all x,
β1 ||x − xe ||² ≤ V (x) ≤ β2 ||x − xe ||²
and
DV (x)f (x) ≤ −2αV (x) .
Then, for the system ẋ = f (x), the state xe is a globally exponentially stable equilibrium
state with rate of convergence α. In particular, all solutions x(·) of the system satisfy
||x(t) − xe || ≤ √(β2 /β1 ) ||x(0) − xe || e^{−αt}    for all t ≥ 0.
8.5.2
Proof of theorem 5
8.5.3
Exponential stability
Theorem 6 (Exponential stability) Suppose there exists a function V and positive scalars R,
α, β1 , β2 such that, whenever ||x − xe || ≤ R, one has
β1 ||x − xe ||² ≤ V (x) ≤ β2 ||x − xe ||²
and
DV (x)f (x) ≤ −2αV (x) .
Then, for system ẋ = f (x), the state xe is an exponentially stable equilibrium state with rate
of convergence α.
Exercise 26 Consider the scalar system
ẋ = −x + x³ .
As a candidate Lyapunov function for exponential stability, consider V (x) = x². Clearly,
the condition on V is satisfied with β1 = β2 = 1. Noting that
V̇ = −2x² + 2x⁴ = −2(1 − x²)x² ,
and considering R = 1/2, we obtain that whenever |x| ≤ R, we have V̇ ≤ −2αV where
α = 3/4. Hence, we have ES.
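The estimate V̇ ≤ −2αV on |x| ≤ R is easy to confirm on a grid (a sketch; the grid resolution is arbitrary, and a small slack absorbs rounding at the boundary |x| = 1/2 where the bound is tight):

```python
import numpy as np

alpha = 0.75
xs = np.linspace(-0.5, 0.5, 1001)
V = xs**2
Vdot = 2.0 * xs * (-xs + xs**3)          # = -2(1 - x^2) x^2
ok = bool(np.all(Vdot <= -2.0 * alpha * V + 1e-12))
```

Equality holds exactly at |x| = 1/2, which is why α = 3/4 is the best rate obtainable with this V on this region.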
8.5.4
Consider a system described by
ẋ = f (x)    (8.2)
and suppose that there exist two positive definite symmetric matrices P and Q such that
xᵀ P f (x) ≤ −xᵀ Q x .
We will show that the origin is GES with rate
α := λmin (P⁻¹ Q)
where λmin (P⁻¹ Q) > 0 is the smallest eigenvalue of P⁻¹ Q.
As a candidate Lyapunov function, consider
V (x) = xᵀ P x .
Then
λmin (P ) ||x||² ≤ V (x) ≤ λmax (P ) ||x||² ,
that is, the bounds of Theorem 5 hold with
β1 = λmin (P ) > 0    and    β2 = λmax (P ) > 0 .
For the closed-loop attitude dynamics system
ẋ1 = ((I2 − I3 )/I1 ) x2 x3 − (k/I1 ) x1
ẋ2 = ((I3 − I1 )/I2 ) x3 x1 − (k/I2 ) x2
ẋ3 = ((I1 − I2 )/I3 ) x1 x2 − (k/I3 ) x3
considering
P = [ I1  0  0 ; 0  I2  0 ; 0  0  I3 ]
we have
xᵀ P f (x) = −k xᵀ x .
8.5.5
Summary
The following table summarizes the results of this chapter for stability about the origin.

Stability properties    V                                            DV (x)f (x)
Stability               lpd                                          ≤ 0 for ||x|| < R
AS                      lpd                                          < 0 for 0 < ||x|| < R
Boundedness             ru                                           ≤ 0 for ||x|| ≥ R
GAS                     pd                                           < 0 for x ≠ 0
ES                      β1 ||x||² ≤ V (x) ≤ β2 ||x||² for ||x|| ≤ R    ≤ −2αV (x) for ||x|| ≤ R
GES                     β1 ||x||² ≤ V (x) ≤ β2 ||x||²                  ≤ −2αV (x)
Chapter 9
Basic Lyapunov theory: discrete time
Suppose we are interested in the stability properties of the system,
x(k + 1) = f (x(k))    (9.1)
where x(k) ∈ IRⁿ and k ∈ {0, 1, 2, . . .}. If the system is linear, we can determine its stability properties
from the properties of the eigenvalues of the system matrix. What do we do for a nonlinear
system? We could linearize about each equilibrium state and determine the stability properties of the resulting linearizations. Under certain conditions (see later) this will tell us
something about the local stability properties of the nonlinear system about its equilibrium
states. However there are situations where linearization cannot be used to deduce even the
local stability properties of the nonlinear system. Also, linearization tells us nothing about
the global stability properties of the nonlinear system.
In general, we cannot explicitly obtain solutions for nonlinear systems. Lyapunov theory
allows us to say something about the stability properties of a system without knowing the form
or structure of the solutions.
In this chapter, V is a scalar-valued function of the state, i.e., V : IRⁿ → IR. At any time
k, the one step change in V along a solution x(·) of system (9.1) is given by
V (x(k + 1)) − V (x(k)) = ∆V (x(k))
where
∆V (x) := V (f (x)) − V (x) .
9.1
Stability
Theorem 7 (Stability) Suppose there exists a locally positive definite function V and a
scalar R > 0 such that
∆V (x) ≤ 0    for ||x|| < R .
Then the origin is a stable equilibrium state.
If V satisfies the hypotheses of the above theorem, then V is said to be a Lyapunov function
which guarantees the stability of the origin.
Example 103
x(k + 1) = −x(k)
Consider
V (x) = x²
as a candidate Lyapunov function. Then V is a lpdf and
∆V (x) = 0 .
Hence (it follows from theorem 7 that) the origin is stable.
9.2
Asymptotic stability
Theorem 8 (Asymptotic stability) Suppose there exists a locally positive definite function
V and a scalar R > 0 such that
∆V (x) < 0    for x ≠ 0 and ||x|| < R .
Then the origin is an asymptotically stable equilibrium state.
Example 104
x(k + 1) = (1/2) x(k)
Consider
V (x) = x²
Then V is a lpdf and
∆V (x) = ((1/2) x)² − x²
       = −(3/4) x²
       < 0    for x ≠ 0 .
Hence the origin is AS.
Example 105
x(k + 1) = (1/2) x(k) + x(k)²
Consider
V (x) = x²
Then V is a lpdf and
∆V (x) = ((1/2) x + x²)² − x²
       = x² (x + 3/2)(x − 1/2)
       < 0    for |x| < 1/2 , x ≠ 0 .
Hence the origin is AS.
9.3
Boundedness
Theorem 9 Suppose there exists a radially unbounded function V and a scalar R ≥ 0 such
that
∆V (x) ≤ 0    for ||x|| ≥ R .
Then all solutions of (9.1) are bounded.
Note that, in the above theorem, V does not have to be positive away from the origin; it only
has to be radially unbounded.
Example 106
x(k + 1) = 2x(k)/(1 + x(k)²)
Consider
V (x) = x²
Since V is radially unbounded and
∆V (x) = ( 2x/(1 + x²) )² − x² = −x²(x² + 3)(x² − 1)/(x² + 1)² ≤ 0    for |x| ≥ 1,
the hypotheses of the above theorem are satisfied with R = 1; hence all solutions are bounded.
Note that the origin is unstable.
9.4
Global asymptotic stability
Theorem 10 (Global asymptotic stability) Suppose there exists a positive definite function
V such that
∆V (x) < 0    for all x ≠ 0 .
Then the origin is a globally asymptotically stable equilibrium state.
Example 107
x(k + 1) = (1/2) x(k)
is GAS.
Example 108
x(k + 1) = x(k)/(1 + x(k)²)
Consider
V (x) = x²
Then
∆V (x) = ( x/(1 + x²) )² − x²
       = −(2x⁴ + x⁶)/(1 + x²)²
       < 0    for all x ≠ 0 .
Hence the origin is GAS.
9.5
Exponential stability
Theorem 11 (Global exponential stability.) Suppose there exists a function V and scalars
α, β1 , β2 such that for all x,
β1 ||x||² ≤ V (x) ≤ β2 ||x||² ,    β1 , β2 > 0
and
V (f (x)) ≤ α² V (x) ,    0 < α < 1 .
Then every solution satisfies
||x(k)|| ≤ √(β2 /β1 ) αᵏ ||x(0)||    for k ≥ 0 .
Hence, the origin is a globally exponentially stable equilibrium state with rate of convergence
α.
Proof.
Example 109
x(k + 1) = (1/2) x(k)
Considering
V (x) = x²
we have
V (f (x)) = (1/4) x² = (1/2)² V (x) .
Hence, we have GES with rate of convergence α = 1/2.
Example 110
x(k + 1) = (1/2) sin(x(k))
Considering
V (x) = x²
we have
V (f (x)) = ( (1/2) sin x )²
          = (1/2)² | sin x|²
          ≤ (1/2)² |x|²
          = (1/2)² V (x) .
Hence, we have GES with rate of convergence α = 1/2.
Chapter 10
Lyapunov theory for linear
time-invariant systems
The main result of this section is contained in Theorem 12.
10.1
10.1.1
Definite matrices
x*P x = Σ_{i=1}^{n} Σ_{j=1}^{n} p_ij x̄_i x_j
Consider
P = [ 1  −1 ; −1  2 ] .
Then
x*P x = x̄1 x1 − x̄1 x2 − x̄2 x1 + 2x̄2 x2
      = (x1 − x2 )*(x1 − x2 ) + x̄2 x2
      = |x1 − x2 |² + |x2 |²
Clearly, x*P x ≥ 0 for all x. If x*P x = 0, then x1 − x2 = 0 and x2 = 0; hence x = 0. So,
P > 0.
10.1.2
Semi-definite matrices
Fact 2 The following statements are equivalent for any hermitian matrix P .
(a) P is positive semi-definite.
(b) All the eigenvalues of P are non-negative.
(c) All the principal minors of P are non-negative.
Example 113 This example illustrates that non-negativity of the leading principal minors
of P is not sufficient for P 0.
P = [ 0  0 ; 0  −1 ]
We have p11 = 0 and det(P ) = 0. However,
x*P x = −|x2 |² ;
hence, P is not psd. Actually, P is nsd.
Fact 3 Consider any m × n complex matrix M and let P = M*M. Then
(a) P is hermitian and P ≥ 0 ;
(b) P > 0 iff rank M = n.
For example, let M = [ 1  1 ]. Since
P = M*M = [ 1  1 ; 1  1 ]
and
rank [ 1  1 ; 1  1 ] = 1 ,
we have P ≥ 0 but P is not pd.
Exercise 28 (Optional)
Suppose P is hermitian and T is invertible. Show that P > 0 iff T*P T > 0.
Exercise 29 (Optional)
Suppose P and Q are two hermitian matrices with P > 0. Show that P + λQ > 0 for
all real λ sufficiently small; i.e., there exists ε > 0 such that whenever |λ| < ε, one has
P + λQ > 0.
10.2
Lyapunov theory
10.2.1
A = [ −1  3 ; 0  −1 ] .
Then
A + A* = [ −2  3 ; 3  −2 ]
is not negative definite.
Since stability is invariant under a similarity transformation T , i.e., the stability of A and
T⁻¹AT are equivalent for any nonsingular T , we should consider the more general condition
T⁻¹AT + T*A*T⁻* < 0 .
Introducing the hermitian matrix P := T⁻*T⁻¹, and pre- and post-multiplying the above
inequality by T⁻* and T⁻¹, respectively, yields
P > 0    (10.2a)
P A + A*P < 0    (10.2b)
We now show that the existence of a hermitian matrix P satisfying these conditions guarantees asymptotic stability.
Lemma 6 Suppose there is a hermitian matrix P satisfying (10.2). Then system (10.1) is
asymptotically stable
Proof. Suppose there exists a hermitian matrix P which satisfies inequalities (10.2).
Consider any eigenvalue λ of A. Let v ≠ 0 be an eigenvector corresponding to λ, i.e.,
Av = λv .
Then
v*P Av = λ v*P v ,    (10.3a)
v*A*P v = λ̄ v*P v .    (10.3b)
Hence the above lemma can be stated with S replacing P and the preceding inequalities
replacing (10.2).
So far we have shown that if a LTI system has a Lyapunov matrix, then it is AS. Is the
converse true? That is, does every AS LTI system have a Lyapunov matrix? And if this is
true how does one find a Lyapunov matrix? To answer this question note that satisfaction
of inequality (10.2b) is equivalent to
P A + A*P + Q = 0
(10.4)
where Q is a hermitian positive definite matrix. This linear matrix equation is known as the
Lyapunov equation. So one approach to looking for Lyapunov matrices could be to choose a
pd hermitian Q and determine whether the Lyapunov equation has a pd hermitian solution
for P .
We first show that if the system x = Ax is asymptotically stable and the Lyapunov
equation (10.4) has a solution then, the solution is unique. Suppose P1 and P2 are two
solutions to (10.4). Then,
(P2 − P1 )A + A*(P2 − P1 ) = 0 .
Hence,
d[ e^{A*t} (P2 − P1 ) e^{At} ]/dt = 0
for all t. Since A is asymptotically stable, e^{A*t} (P2 − P1 ) e^{At} → 0 as t → ∞; evaluating
at t = 0 then yields P2 − P1 = 0. To obtain a solution, consider
P = ∫₀^∞ e^{A*t} Q e^{At} dt .
Using
d e^{A*t}/dt = A* e^{A*t} ,    d e^{At}/dt = e^{At} A ,
we obtain
P A + A*P = ∫₀^∞ ( e^{A*t} Q e^{At} A + A* e^{A*t} Q e^{At} ) dt
          = ∫₀^∞ d( e^{A*t} Q e^{At} )/dt dt
          = lim_{t→∞} e^{A*t} Q e^{At} − Q
          = −Q .
We have already demonstrated uniqueness of solutions to (10.4).
Suppose Q is pd hermitian. Then it should be clear that P is pd hermitian.
Using the above two lemmas, we can now state the main result of this section.
Theorem 12 The following statements are equivalent.
(a) The system x = Ax is asymptotically stable.
(b) There exist positive definite hermitian matrices P and Q satisfying the Lyapunov equation (10.4).
(c) For every positive definite hermitian matrix Q, the Lyapunov equation (10.4) has a
unique solution for P and this solution is hermitian positive definite.
Proof. The first lemma yields (b) ⟹ (a). The second lemma says that (a) ⟹ (c).
Hence, (b) ⟹ (c).
To see that (c) ⟹ (b), pick any positive definite hermitian Q. So, (b) is equivalent to
(c). Also, (c) ⟹ (a); hence (a) and (c) are equivalent.
Example 116
ẋ1 = x2
ẋ2 = −x1 + cx2
Here
A = [ 0  1 ; −1  c ] .
Choosing
Q = [ 1  0 ; 0  1 ]
and letting
P = [ p11  p12 ; p12  p22 ]
(note we have taken p21 = p12 because we are looking for symmetric solutions) the Lyapunov
equation results in
−2p12 + 1 = 0
p11 + cp12 − p22 = 0
2p12 + 2cp22 + 1 = 0    (10.5)
10.2.2
MATLAB.
Lyapunov equation.
X = LYAP(A,C) solves the special form of the Lyapunov matrix
equation:
A*X + X*A' = -C
X = LYAP(A,B,C) solves the general form of the Lyapunov matrix
equation:
A*X + X*B = -C
See also DLYAP.
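For readers without MATLAB, the same computation can be sketched in Python with SciPy (an assumption of this note, not part of the original toolset). SciPy's `solve_continuous_lyapunov(M, K)` solves M X + X Mᴴ = K, so passing Aᵀ and −Q yields the P of equation (10.4). With c = −1 the hand equations (10.5) give p12 = 1/2, p22 = 1, p11 = 3/2, which the solver reproduces:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example 116 with c = -1 (an asymptotically stable choice)
c = -1.0
A = np.array([[0.0, 1.0],
              [-1.0, c]])
Q = np.eye(2)

# solve_continuous_lyapunov solves M X + X M^H = K.
# With M = A^T and K = -Q this is  A^T P + P A = -Q,
# i.e. the Lyapunov equation  P A + A^T P + Q = 0.
P = solve_continuous_lyapunov(A.T, -Q)

print(P)  # expected [[1.5, 0.5], [0.5, 1.0]]
assert np.allclose(P @ A + A.T @ P, -Q)
assert np.all(np.linalg.eigvalsh(P) > 0)   # P is positive definite
```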
10.2.3
Stability results
Lemma 9 Suppose system (10.1) is stable. Then there exist a positive definite hermitian
matrix P and a positive semi-definite hermitian matrix Q satisfying the Lyapunov equation
(10.4).
The above lemma does not state that for every positive semi-definite hermitian Q the
Lyapunov equation has a solution for P . Also, when the Lyapunov equation has a solution,
it is not unique. This is illustrated in the following example.
Example 117 Consider
A = [  0  1
      −1  0 ] .
With
P = [ 1 0
      0 1 ]
we obtain
P A + A*P = 0 .
In this example the Lyapunov equation has a pd solution with Q = 0; this solution is not unique; any matrix of the form
P = [ p 0
      0 p ]
(where p is arbitrary) is also a solution.
If we consider the psd matrix
Q = [ 1 0
      0 0 ]
the Lyapunov equation has no solution.
10.3
Mechanical systems

(Figure: a two-mass spring–damper system with displacements q1 and q2.)

This system can be described by the following second order vector differential equation:
M q̈ + C q̇ + K q = 0
where the symmetric matrices M, C, K are given by
M = [ m1  0            C = [ c1+c2  −c2          K = [ k1+k2  −k2
      0   m2 ] ,             −c2     c2 ] ,            −k2     k2 ] .
The kinetic and potential energies of the system are
kinetic energy  = ½ m1 q̇1² + ½ m2 q̇2² = ½ q̇ᵀ M q̇
potential energy = ½ k1 q1² + ½ k2 (q2 − q1)² = ½ qᵀ K q .
Suppose M = Mᵀ > 0, K = Kᵀ > 0 and C = Cᵀ ≥ 0. With state x = (q, q̇) and
V(x) = ½ q̇ᵀ M q̇ + ½ qᵀ K q = xᵀ P x ,   P = ½ [ K 0
                                                  0 M ] ,
we have
P A + AᵀP + Q = 0
where
Q = [ 0 0
      0 C ] .
It should be clear that
P A + AᵀP ≤ 0   iff   C ≥ 0 .
Hence,
V̇(x) = −q̇ᵀ C q̇ .
Now consider instead
P = ½ [ K 0       + (ε/2) [ C M
        0 M ]               M 0 ] .
The corresponding Q is
Q = [ εK  0
      0   C − εM ] .
For sufficiently small ε > 0, the matrix C − εM is pd and, hence, Q is pd. So,
K = Kᵀ > 0 and C = Cᵀ > 0 imply asymptotic stability.
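The block identity P A + AᵀP = −[0 0; 0 C] is easy to verify numerically. The following Python sketch (with hypothetical parameter values m1 = m2 = 1, c1 = c2 = 0.5, k1 = k2 = 2; NumPy is an assumption, not part of the original notes) builds the first-order A and the energy-based P and checks the identity:

```python
import numpy as np

m1 = m2 = 1.0
c1 = c2 = 0.5
k1 = k2 = 2.0

M = np.diag([m1, m2])
C = np.array([[c1 + c2, -c2], [-c2, c2]])
K = np.array([[k1 + k2, -k2], [-k2, k2]])

# First order form with x = (q, qdot):  xdot = A x
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K,        -Minv @ C]])

# Energy-based Lyapunov matrix P = (1/2) blockdiag(K, M)
P = 0.5 * np.block([[K, np.zeros((2, 2))],
                    [np.zeros((2, 2)), M]])

Qmat = np.block([[np.zeros((2, 2)), np.zeros((2, 2))],
                 [np.zeros((2, 2)), C]])

# P A + A^T P + Q = 0, i.e.  Vdot = -qdot^T C qdot
assert np.allclose(P @ A + A.T @ P, -Qmat)
print("energy Lyapunov identity verified")
```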
10.4

where
α := max{ℜ(λ) : λ is an eigenvalue of A} .
Then ẋ = Ax is GES with rate β.
Proof: Consider any β satisfying 0 < β < −α. As a consequence of the definition of α, all the eigenvalues of the matrix A + βI have negative real parts. Hence, the Lyapunov equation
P(A + βI) + (A + βI)*P + I = 0
has a unique solution for P and P = P* > 0. As a candidate Lyapunov function for the system ẋ = Ax, consider V(x) = x*Px. Then,
V̇ = ẋ*Px + x*Pẋ
  = x*PAx + (Ax)*Px
  = x*(PA + A*P)x
  = −2β x*Px − x*x
  ≤ −2β V(x) .
10.5

Consider a nonlinear system
ẋ = f(x)                                                 (10.7)
and suppose that xe is an equilibrium state for this system. We assume that f is differentiable at xe and let

Df(xe) = [ ∂f1/∂x1(xe)  ∂f1/∂x2(xe)  ···  ∂f1/∂xn(xe)
           ∂f2/∂x1(xe)  ∂f2/∂x2(xe)  ···  ∂f2/∂xn(xe)
               ⋮             ⋮        ⋱        ⋮
           ∂fn/∂x1(xe)  ∂fn/∂x2(xe)  ···  ∂fn/∂xn(xe) ]

where x = (x1, x2, ..., xn). Then the linearization of ẋ = f(x) about xe is the linear system defined by
ẋ = Ax   where A = Df(xe) .                              (10.8)
Let
α := max{ℜ(λ) : λ is an eigenvalue of A} .
Since all the eigenvalues have negative real parts, we have −α > 0. Consider now any β satisfying 0 < β < −α. As a consequence of the definition of α, all the eigenvalues of the matrix A + βI have negative real parts. Hence, the Lyapunov equation
P(A + βI) + (A + βI)*P + I = 0
has a unique solution for P and P = P* > 0. Without loss of generality consider xe = 0. As a candidate Lyapunov function for the nonlinear system, consider
V(x) = x*Px .
Recall that
f(x) = f(0) + Df(0)x + o(x)
where the remainder term has the following property:
lim_{x→0, x≠0} o(x)/‖x‖ = 0 .
Hence
f(x) = Ax + o(x)
and
V̇ = 2x*Pẋ
  = 2x*Pf(x)
  = 2x*PAx + 2x*P o(x)
  ≤ x*(PA + A*P)x + 2‖x‖‖P‖‖o(x)‖
  = −2β x*Px − x*x + 2‖x‖‖P‖‖o(x)‖
  = −2β V(x) − ‖x‖² + 2‖x‖‖P‖‖o(x)‖ .
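The linearization step is easy to check numerically. A minimal Python sketch (using the damped pendulum f(x) = (x2, −sin x1 − x2) as an assumed example; the finite-difference helper is hypothetical, not from the notes) approximates Df(0) by central differences and confirms that the eigenvalues of A have negative real parts:

```python
import numpy as np

def f(x):
    # damped pendulum: xdot1 = x2, xdot2 = -sin(x1) - x2
    return np.array([x[1], -np.sin(x[0]) - x[1]])

def jacobian(f, xe, h=1e-6):
    # central-difference approximation of Df(xe)
    n = len(xe)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(xe + e) - f(xe - e)) / (2 * h)
    return J

A = jacobian(f, np.zeros(2))
assert np.allclose(A, [[0.0, 1.0], [-1.0, -1.0]], atol=1e-6)
assert np.all(np.linalg.eigvals(A).real < 0)  # linearization is asymptotically stable
print(A)
```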
Chapter 11
Quadratic stability
11.1
Introduction
[3] In this chapter, we introduce numerical techniques which are useful in finding Lyapunov
functions for certain classes of globally exponentially stable nonlinear systems. We restrict
consideration to quadratic Lyapunov functions. For specific classes of nonlinear systems, we
reduce the search for a Lyapunov function to that of solving LMIs (Linear matrix inequalities).
The results in this chapter are also useful in proving stability of switching linear systems.
To establish the results in this chapter, recall that a system
ẋ = f(x)                                                 (11.1)
is globally exponentially stable about the origin if there exist positive definite symmetric matrices P and Q which satisfy
xᵀP f(x) ≤ −xᵀQx
for all x. When this is the case we call P a Lyapunov matrix for system (11.1).
11.2

Consider a system described by
ẋ = A(x)x                                                (11.2)
where the n-vector x is the state. The state dependent matrix A(x) has the following structure
A(x) = A0 + δ(x)ΔA                                       (11.3)
where A0 and ΔA are constant n × n matrices and δ is a scalar valued function of the state x which is bounded above and below, that is,
a ≤ δ(x) ≤ b                                             (11.4)
for some constants a and b. Examples of functions satisfying the above conditions are given by δ(x) = g(c(x)) where c(x) is a scalar and g(y) is given by sin y, cos y, e^{−y²} or sgm(y). The signum function is useful for modeling switching systems.
117
118
Example 119 Inverted pendulum under linear feedback. Consider an inverted pendulum under linear control described by
ẋ1 = x2                                                  (11.5a)
ẋ2 = −2x1 − x2 + γ sin x1 ,   γ > 0 .                    (11.5b)
This system can be written as ẋ = A(x)x with A(x) = A0 + δ(x)ΔA, where
A0 = [ 0  1        ΔA = [ 0 0
      −2 −1 ] ,           1 0 ]
and
δ(x) = γ sin x1 / x1   if x1 ≠ 0 ,
       γ               if x1 = 0 .
Since |sin x1 / x1| ≤ 1, we have −γ ≤ δ(x) ≤ γ; hence a = −γ and b = γ.
The following theorem provides a sufficient condition for the global exponential stability of
system (11.2)-(11.4). These conditions are stated in terms of linear matrix inequalities and
the two matrices corresponding to the extreme values of δ(x), namely
A1 := A0 + aΔA   and   A2 := A0 + bΔA .
Theorem 16 Suppose there exists a positive-definite symmetric matrix P which satisfies the
following linear matrix inequalities:
P A1 + AT1 P < 0
P A2 + AT2 P < 0
(11.6)
Then system (11.2)-(11.4) is globally exponentially stable (GES) about the origin with Lyapunov matrix P .
Proof. As a candidate Lyapunov function for GES, consider V(x) = xᵀPx. Then
V̇ = 2xᵀPẋ = 2xᵀPA0x + 2δ(x)xᵀPΔAx .
For each fixed x, the above expression for V̇ is a linear affine function of the scalar δ(x). Hence an upper bound for V̇ occurs when δ(x) = a or δ(x) = b, which results in
V̇ ≤ 2xᵀPA1x = xᵀ(PA1 + A1ᵀP)x
or
V̇ ≤ 2xᵀPA2x = xᵀ(PA2 + A2ᵀP)x
respectively. As a consequence of the matrix inequalities (11.6), there exist positive scalars α1 and α2 such that
PA1 + A1ᵀP ≤ −2α1P
PA2 + A2ᵀP ≤ −2α2P .
Letting α = min{α1, α2}, we now obtain that
V̇ ≤ −2α xᵀPx = −2αV .
This guarantees the system is GES with rate α.
Exercise 31 Prove the following result: Suppose there exist a positive-definite symmetric matrix P and a positive scalar α which satisfy
PA1 + A1ᵀP + 2αP ≤ 0                                     (11.7a)
PA2 + A2ᵀP + 2αP ≤ 0                                     (11.7b)
where A1 := A0 + aΔA and A2 := A0 + bΔA. Then system (11.2)-(11.4) is globally exponentially stable about the origin with rate of convergence α.
11.2.1
Recall the pendulum of Example 119. We will use the Matlab LMI Control Toolbox to see if
we can show exponential stability of this system. First note that the existence of a positive
definite symmetric P satisfying inequalities (11.6) is equivalent to the existence of another
symmetric matrix P satisfying
P A1 + AT1 P < 0
P A2 + AT2 P < 0
P > I
For γ = 1, we determine the feasibility of these LMIs using the following Matlab program.
% Quadratic stability of inverted pendulum
%
gamma=1;
A0 = [0 1; -2 -1];
DelA = [0 0; 1 0];
A1 = A0 - gamma*DelA;
A2 = A0 + gamma*DelA;
%
%
setlmis([])
%
p=lmivar(1, [2,1]);
%
lmi1=newlmi;
lmiterm([lmi1,1,1,p],1,A1,'s')
%
lmi2=newlmi;
lmiterm([lmi2,1,1,p],1,A2,'s')
%
Plmi= newlmi;
lmiterm([-Plmi,1,1,p],1,1)
lmiterm([Plmi,1,1,0],1)
%
lmis = getlmis;
%
[tfeas, xfeas] = feasp(lmis)
%
P = dec2mat(lmis,xfeas,p)
Running this program yields the following output.

Solver for LMI feasibility problems L(x) < R(x)
    This solver minimizes t subject to L(x) < R(x) + t*I
    The best value of t should be negative for feasibility

Iteration  :  Best value of t so far
    1            0.202923
    2           -0.567788

Result: best value of t: -0.567788
        f-radius saturation: 0.000% of R = 1.00e+09

tfeas = -0.5678

xfeas = 5.3942
        1.0907
        2.8129

P = 5.3942    1.0907
    1.0907    2.8129
Hence the above LMIs are feasible and the pendulum is exponentially stable with Lyapunov matrix
P = [ 5.3942  1.0907
      1.0907  2.8129 ] .
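The feasp output is easy to double-check with plain NumPy (a verification sketch, not part of the original MATLAB session): the reported P should make both PA1 + A1ᵀP and PA2 + A2ᵀP negative definite.

```python
import numpy as np

gamma = 1.0
A0 = np.array([[0.0, 1.0], [-2.0, -1.0]])
DelA = np.array([[0.0, 0.0], [1.0, 0.0]])
A1 = A0 - gamma * DelA
A2 = A0 + gamma * DelA

P = np.array([[5.3942, 1.0907],
              [1.0907, 2.8129]])     # matrix returned by feasp above

for Ai in (A1, A2):
    Mi = P @ Ai + Ai.T @ P
    # all eigenvalues strictly negative => LMIs (11.6) hold
    assert np.all(np.linalg.eigvalsh(Mi) < 0)
print("P satisfies both LMIs")
```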
By iterating on γ, the largest value of γ for which LMIs (11.6) were feasible was found to be γ ≈ 1.3229. However, from previous considerations, we have shown that this system is exponentially stable for γ < 2. Why the difference?
Exercise 32 What is the supremal value of γ > 0 for which Theorem 16 guarantees that the following system is globally exponentially stable about the origin?
ẋ1 = −2x1 + x2 + γ e^{−x1²} x2
ẋ2 = x1 − 3x2 − γ e^{−x1²} x1
Exercise 33 Consider the pendulum system of Example 119 with γ = 1. Obtain the largest
rate of exponential convergence that can be obtained using the results of Exercise 31 and
the LMI toolbox.
11.2.2
Generalization
One can readily generalize the results of this section to systems described by
ẋ = A(x)x                                                (11.8)
where the state dependent matrix A(x) has the following structure
A(x) = A0 + δ1(x)ΔA1 + ··· + δl(x)ΔAl ,                  (11.9)
where each δi is a scalar valued function of the state satisfying
ai ≤ δi(x) ≤ bi                                          (11.10)
for some constants ai and bi. Let A be the set of 2^l vertex matrices
A := { A0 + ν1ΔA1 + ··· + νlΔAl : νi ∈ {ai, bi} } .      (11.11)
Theorem 17 Suppose there exists a positive-definite symmetric matrix P which satisfies
PA + AᵀP < 0   for all A in A .                          (11.12)
Then system (11.8)-(11.10) is globally exponentially stable about the origin with Lyapunov matrix P.
Exercise 34 Consider the double inverted pendulum described by
θ̈1 + ½θ̈2 + 2kθ1 − kθ2 − sin θ1 = 0
½θ̈1 + θ̈2 − kθ1 + kθ2 − sin θ2 = 0
Using the results of Theorem 17, obtain a value of the spring constant k which guarantees
that this system is globally exponentially stable about the zero solution.
11.2.3
Robust stability
Consider an uncertain system described by
ẋ = f(x, θ)                                              (11.13)
where θ is some uncertain parameter vector. Suppose that the only information we have on θ is its set of possible values, that is,
θ ∈ Θ
where the set Θ is known. So we do not know what θ is; we only know a set of possible values of θ. We say that the above system is robustly stable if it is stable for all allowable values of θ, that is, for all θ ∈ Θ.
We say that the uncertain system is quadratically stable if there are positive definite symmetric matrices P and Q such that
xᵀP f(x, θ) ≤ −xᵀQx                                      (11.14)
for all x and all θ ∈ Θ. Clearly, quadratic stability guarantees robust exponential stability about zero.
One of the useful features of the above concept is that one can sometimes guarantee that (11.14) holds for all θ by just checking that it holds for a finite number of θ. For example, consider the linear uncertain system described by
ẋ = A(θ)x .
Suppose that the uncertain matrix A(θ) has the following structure
A(θ) = A0 + θ1ΔA1 + ··· + θlΔAl ,                        (11.15)
where each θi satisfies ai ≤ θi ≤ bi,                    (11.16)
and let A be the corresponding set of vertex matrices. Suppose there exists a positive-definite symmetric matrix P which satisfies
PA + AᵀP < 0   for all A in A .                          (11.17)
Then, for all allowable θ, the uncertain system is globally exponentially stable about the origin with Lyapunov matrix P.
11.3

Consider a system described by
ẋ = Ax + Bφ(Cx)                                          (11.18)
where the n-vector x(t) is the state; B and C are constant matrices with dimensions n × m and p × n, respectively; and φ is a function which satisfies
‖φ(z)‖ ≤ γ‖z‖                                            (11.19)
for some γ ≥ 0.
Note that if we introduce a fictitious output z = Cx and a fictitious input w = φ(Cx), the nonlinear system (11.18) can be described as a feedback combination of a linear time invariant system
ẋ = Ax + Bw
z = Cx
and a memoryless nonlinearity
w = φ(z) .
This is illustrated in Figure 11.1.
Theorem 18 Suppose there exists a positive-definite symmetric matrix P which satisfies the quadratic matrix inequality
PA + AᵀP + γ²PBBᵀP + CᵀC < 0 .                           (11.20)
Then system (11.18)-(11.19) is globally exponentially stable about the origin with Lyapunov matrix P.
Proof sketch. With V(x) = xᵀPx, the nonlinear term in V̇ can be bounded as
2xᵀPBφ(Cx) ≤ 2‖BᵀPx‖‖φ(Cx)‖
           ≤ 2γ‖BᵀPx‖‖Cx‖
           ≤ γ²‖BᵀPx‖² + ‖Cx‖²
           = γ²xᵀPBBᵀPx + xᵀCᵀCx .                       (11.22)
An LMI. Using a Schur complement result, the quadratic matrix inequality (11.20) is equivalent to the linear matrix inequality
[ PA + AᵀP + CᵀC    PB
  BᵀP              −γ⁻²I ]  <  0 .                       (11.23)
Note that this inequality is linear in P and μ := γ⁻². Suppose one wishes to compute the supremal value γ̄ of γ which guarantees satisfaction of the hypotheses of Theorem 18 and hence stability of system (11.18)-(11.19). This can be achieved by solving the following optimization problem:
minimize μ subject to
[ PA + AᵀP + CᵀC    PB
  BᵀP              −μI ]  <  0
0 < P
0 < μ
and then letting γ̄ = μ̄^{−1/2} where μ̄ is the infimal value of μ. The following program uses the LMI toolbox to compute γ̄ for the inverted pendulum example.
Running the program yields output of the following form (abbreviated; at each iteration the solver reports the best objective value so far and a lower bound):

Iteration   Best objective value   Lower bound
    ...
    7           0.598055            0.394840
    ...
   14           0.571429            0.571426

Result:  copt = 0.5714

xopt = 1.1429
       0.2857
       0.5714
       0.5714

gamma = 1.3229
Thus γ̄ = 1.3229, which is the same as that achieved previously.
A Riccati equation. It should be clear that P satisfies the above QMI if and only if it satisfies the following Riccati equation for some positive definite symmetric matrix Q:
PA + AᵀP + γ²PBBᵀP + CᵀC + Q = 0 .                       (11.24)
Using properties of QMI (11.20) (see Ran and Vreugdenhil, 1988), one can demonstrate the
following.
Lemma 10 There exists a positive-definite symmetric matrix P which satisfies the quadratic matrix inequality (11.20) if and only if for any positive-definite symmetric matrix Q there is an ε̄ > 0 such that for all ε ∈ (0, ε̄] the following Riccati equation has a positive-definite symmetric solution for P:
PA + AᵀP + γ²PBBᵀP + CᵀC + εQ = 0 .                      (11.25)
Using this result, the search for a Lyapunov matrix is reduced to a one parameter search.
Exercise 35 Prove the following result: Suppose there exist a positive-definite symmetric matrix P and a positive scalar α which satisfy
[ PA + AᵀP + CᵀC + 2αP    PB
  BᵀP                    −γ⁻²I ]  ≤  0 .                 (11.26)
Then system (11.18)-(11.19) is globally exponentially stable about the origin with rate of convergence α.
11.3.1
In many situations, one is only interested in whether a given nonlinear system is stable or
not; one may not actually care what the Lyapunov matrix is. To this end, the following
frequency domain characterization of quadratic stability is useful. Consider system (11.18) and define the transfer function Ĝ by
Ĝ(s) = C(sI − A)⁻¹B                                      (11.27)
and let
‖Ĝ‖∞ := sup{ ‖Ĝ(jω)‖ : ω ∈ IR } .                        (11.28)
The last lemma and Theorem 18 tell us that γ̄, the supremal value of γ for stability of system (11.18)-(11.19) via the method of this section, is given by
γ̄ = 1/‖Ĝ‖∞ .
Example 121 Consider Example 120 again. Here the matrix A is asymptotically stable and
Ĝ(s) = 1/(s² + s + 2) .
One may readily compute that
‖Ĝ‖∞ = 2/√7 < 1 .
Hence, this nonlinear system is exponentially stable. Also, γ̄ = 1/‖Ĝ‖∞ = √7/2 ≈ 1.3229. This is the same as before.
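The value ‖Ĝ‖∞ = 2/√7 for this example is quickly confirmed numerically by sampling the frequency response (a NumPy sketch, not the LMI-based computation of the notes):

```python
import numpy as np

w = np.linspace(0.0, 10.0, 200001)            # frequency grid
G = 1.0 / ((1j * w)**2 + 1j * w + 2.0)        # G(jw) for G(s) = 1/(s^2 + s + 2)
hinf = np.max(np.abs(G))

# analytic value: |G(jw)|^2 = 1/((2 - w^2)^2 + w^2), maximized at w^2 = 3/2
assert abs(hinf - 2.0 / np.sqrt(7.0)) < 1e-4
assert abs(1.0 / hinf - np.sqrt(7.0) / 2.0) < 1e-3   # gamma_bar = 1.3229
print(hinf, 1.0 / hinf)
```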
11.3.2
Suppose φ is a scalar valued function of a scalar variable (m = p = 1). Then the bound (11.19) is equivalent to
−γz² ≤ zφ(z) ≤ γz² .
That is, φ is a function whose graph lies in the sector bordered by the lines passing through the origin and having slopes −γ and γ; see Figure 11.3. Consider now a function whose graph lies in the sector bordered by lines of slopes a and b, that is,
a z² ≤ zφ(z) ≤ b z² .                                    (11.30)
Example 122 The saturation function which is defined below is illustrated in Figure 11.5.
φ(z) =  z   if |z| ≤ 1
       −1   if z < −1
        1   if z > 1
The following result is sometimes useful in showing that a particular nonlinear function
is sector bounded.
Fact 4 Suppose φ is a differentiable scalar valued function of a scalar variable, φ(0) = 0 and for all z
a ≤ φ′(z) ≤ b
for some scalars a and b. Then φ satisfies (11.30) for all z.
We can treat sector bounded nonlinearities using the results of the previous section by introducing some transformations. Specifically, if we introduce
φ̃(z) := φ(z) − kz   where   k := (a + b)/2 ,
then
−γ̃z² ≤ zφ̃(z) ≤ γ̃z²   with   γ̃ := (b − a)/2 ,
and system (11.18) can be rewritten as
ẋ = Ãx + Bφ̃(Cx)                                         (11.31)
where Ã := A + kBC.
11.3.3
Generalization
Consider now a system with several nonlinearities described by
ẋ = Ax + B1φ1(C1x) + ··· + Blφl(Clx)                     (11.32)
where each nonlinearity φi satisfies
‖φi(zi)‖ ≤ γi‖zi‖ ,   i = 1, 2, ..., l .                 (11.33)
This system is a special case of (11.18) with
φ(z) = [ φ1(z1)            where   z = [ z1       with zi ∈ IR^{pi} ,   (11.34)
         φ2(z2)                          z2
           ⋮                              ⋮
         φl(zl) ]                        zl ]
B := [ B1 B2 ... Bl ] ,   C := [ C1 ; C2 ; ... ; Cl ] .
Also, with γ := max{γ1, ..., γl},
‖φ(z)‖² = Σᵢ₌₁ˡ ‖φi(zi)‖² ≤ Σᵢ₌₁ˡ γi²‖zi‖² ≤ γ²‖z‖² ,
that is, ‖φ(z)‖ ≤ γ‖z‖. We can now apply the results of the previous section to obtain sufficient conditions for global exponential stability. However, these results will be overly conservative since they do not take into account the special structure of φ; the function φ is not just any function satisfying the bound ‖φ(z)‖ ≤ γ‖z‖; it also has the structure indicated in (11.34). To take this structure into account, consider any l positive scalars λ1, λ2, ..., λl and introduce the nonlinear function
and introduce the nonlinear function
1 1 (1
z1
1 z1 )
2 2 (1 z2 )
z2
2
(z) =
where
z = .. with zi IRpi
..
.
1
l l (l zl )
zl
Then system (11.32) can also be described by (11.18), specifically, it can be described by
Cx)
(
x = Ax + B
(11.35)
with
:=
B
Noting that
‖λiφi(λi⁻¹z̃i)‖ ≤ λiγi‖λi⁻¹z̃i‖ = γi‖z̃i‖ ,
we obtain
‖φ̃(z̃)‖² = Σᵢ₌₁ˡ ‖λiφi(λi⁻¹z̃i)‖² ≤ Σᵢ₌₁ˡ γi²‖z̃i‖² .
Repeating the bounding argument used for Theorem 18 on each channel of the representation (11.35) yields the following condition. Suppose there exist a positive-definite symmetric matrix P and positive scalars λ1, ..., λl which satisfy
PA + AᵀP + Σᵢ₌₁ˡ ( λᵢ⁻² PBᵢBᵢᵀP + λᵢ²γᵢ² CᵢᵀCᵢ ) < 0 .   (11.36)
Then system (11.32) is globally exponentially stable about the origin with Lyapunov matrix P.
Indeed, since ‖φ̃(z̃)‖ is bounded as above, this follows from representation (11.35) and Theorem 18.
An LMI. Note that, using a Schur complement result and letting τi := λi², inequality (11.36) is equivalent to the following inequality, which is linear in P and the scaling parameters τ1, ..., τl:

[ PA + AᵀP + Σᵢ₌₁ˡ τᵢγᵢ²CᵢᵀCᵢ   PB1    PB2   ···   PBl
  B1ᵀP                         −τ1I    0    ···    0
  B2ᵀP                          0    −τ2I   ···    0
    ⋮                           ⋮      ⋮     ⋱     ⋮
  BlᵀP                          0      0    ···  −τlI ]  <  0 .
It should be clear that one may also obtain a sufficient condition involving a Riccati equation with scaling parameters λi using Lemma 10 and an H∞ sufficient condition using Lemma 11.
Exercise 36 Recall the double inverted pendulum of Exercise 34. Using the results of this
section, obtain a value of the spring constant k which guarantees that this system is globally
exponentially stable about the zero solution.
11.4

Consider now a system described by
ẋ = Ax − Bφ(Cx)                                          (11.37)
where the nonlinearity φ satisfies
zᵀφ(z) ≥ 0                                               (11.38)
for all z. Examples of such φ include φ(z) = z, z³, z⁵, sat(z), sgm(z).
Suppose there exists a positive-definite symmetric matrix P which satisfies
PA + AᵀP < 0   and   BᵀP = C .                           (11.39)
Then system (11.37)-(11.38) is globally exponentially stable about the origin with Lyapunov matrix P.
Proof.
2xᵀPẋ = xᵀ(PA + AᵀP)x − 2xᵀPBφ(Cx)
      = xᵀ(PA + AᵀP)x − 2(Cx)ᵀφ(Cx)
      ≤ xᵀ(PA + AᵀP)x .
Hence
2xᵀPẋ ≤ −2xᵀQx
where Q := −½(PA + AᵀP) > 0.
One can search for such a P with the LMI toolbox by minimizing a scalar ν subject to PA + AᵀP < 0 and
[ νI          PB − Cᵀ
  BᵀP − C     νI      ]  ≥  0 ;
the equality constraint BᵀP = C in (11.39) is achievable when the minimum of ν is zero.
Here
A = [ −3  1        B = [ 0        C = [ 0 1 ]
       1 −3 ] ,          1 ] ,
and φ(z) = z³. Clearly zφ(z) ≥ 0 for all z. Also, conditions (11.39) are assured with P = I. Hence, the system of this example is globally exponentially stable. Performing the above optimization with the LMI toolbox yields a minimum of ν equal to zero and
P = [ 22.8241  0
      0        1 ] .
This P also satisfies conditions (11.39).
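Conditions (11.39) for this example are easy to confirm numerically for both P = I and the P returned by the optimization (a NumPy check, assumed here in place of the LMI toolbox):

```python
import numpy as np

A = np.array([[-3.0, 1.0], [1.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])

for P in (np.eye(2), np.array([[22.8241, 0.0], [0.0, 1.0]])):
    assert np.allclose(B.T @ P, C)                          # B'P = C
    assert np.all(np.linalg.eigvalsh(P @ A + A.T @ P) < 0)  # PA + A'P < 0
print("conditions (11.39) hold for both matrices")
```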
Exercise 37 Prove the following result: Suppose there exist a positive-definite symmetric matrix P and a positive scalar α which satisfy
PA + AᵀP + 2αP ≤ 0                                       (11.40a)
BᵀP = C .                                                (11.40b)
Then system (11.37)-(11.38) is globally exponentially stable about the origin with rate α and with Lyapunov matrix P.
11.4.1
Generalization

Consider now a system described by
ẋ = Ax − Bφ(z) ,   z + Dφ(z) = Cx                        (11.41)
where
zᵀφ(z) ≥ 0                                               (11.42)
for all z. In order for the above system to be well-defined, one must have a unique solution z to the equation
z + Dφ(z) = Cx
for each x.
Theorem 21 Consider a system described by (11.41) and satisfying (11.42). Suppose there exist a positive-definite symmetric matrix P and a positive scalar α which satisfy
[ PA + AᵀP + 2αP    PB − Cᵀ
  BᵀP − C          −(D + Dᵀ) ]  ≤  0 .                   (11.43)
Then system (11.41) is globally exponentially stable about the origin with rate α and Lyapunov matrix P.
Proof. Let w := −φ(z), so that z = Cx + Dw and ẋ = Ax + Bw. Then
2xᵀPẋ = xᵀ(PA + AᵀP)x + 2xᵀPBw
      = [x; w]ᵀ [ PA + AᵀP   PB
                  BᵀP        0  ] [x; w] .
We also have
[x; w]ᵀ [ 0   Cᵀ
          C   D + Dᵀ ] [x; w] = 2(Cx + Dw)ᵀw = −2zᵀφ(z) ≤ 0 .
Hence
2xᵀPẋ ≤ [x; w]ᵀ [ PA + AᵀP   PB − Cᵀ
                  BᵀP − C   −(D + Dᵀ) ] [x; w] ≤ −2α xᵀPx ,
where the last inequality follows from (11.43). Thus V̇ ≤ −2αV for V(x) = xᵀPx.
11.4.2
We consider here square transfer functions Ĝ given by
Ĝ(s) = C(sI − A)⁻¹B + D .                                (11.44)
Such a transfer function is said to be strictly positive real (SPR) if there exists a real scalar ε > 0 such that Ĝ has no poles in a region containing the set {s ∈ C : ℜ(s) ≥ −ε} and
Ĝ(jω)* + Ĝ(jω) ≥ 0   for all ω ∈ IR .                    (11.45)
We say that Ĝ is regular if det[Ĝ(jω) + Ĝ(jω)*] is not identically zero for ω ∈ IR.
Lemma 12 A stable, regular transfer function Ĝ is SPR if and only if
Ĝ(jω) + Ĝ(jω)* > 0   for all ω ∈ IR                      (11.46)
and
lim_{|ω|→∞} ω² det[Ĝ(jω) + Ĝ(jω)*] ≠ 0 ,                 (11.47)
or, in the scalar case,
lim_{|ω|→∞} ω² [Ĝ(jω) + Ĝ(jω)*] ≠ 0 .
When D = 0 (and CB = (CB)*), this limit can be computed from a state space realization:
lim_{|ω|→∞} ω² [Ĝ(jω) + Ĝ(jω)*] = −(CAB + (CAB)*) .
Consider the transfer function
ĝ(s) = (s + 1)/(s² + bs + 1) .
We claim that this transfer function is SPR if and only if b > 1. This is illustrated in Figures 11.8 and 11.9 for b = 2 and b = 0.5 respectively.
(Figure 11.8: Nyquist diagram of (s+1)/(s²+2s+1).)

(Figure 11.9: Nyquist diagram of (s+1)/(s²+0.5s+1).)
To prove the above claim, we first note that ĝ is stable if and only if b > 0. Now note that
ĝ(jω) = (1 + jω) / (1 − ω² + jbω) .
Hence
ĝ(jω) + ĝ(jω)* = 2[1 + (b − 1)ω²] / [(1 − ω²)² + (bω)²] .
It should be clear from the above expression that ĝ(jω) + ĝ(jω)* > 0 for all finite ω if and only if b ≥ 1. Also,
lim_{|ω|→∞} ω² [ĝ(jω) + ĝ(jω)*] = 2(b − 1) .
Thus, in order for the above limit to be positive, it is necessary and sufficient that b > 1.
The next lemma tells us why SPR transfer functions are important in stability analysis
of the systems under consideration in the last section.
Lemma 13 (KYPSPR) Suppose (A, B) is controllable and (C, A) is observable. Then there exist a matrix P = Pᵀ > 0 and a scalar α > 0 such that LMI (11.43) holds if and only if there exist a matrix P = Pᵀ > 0, a positive scalar ε and matrices W and L such that
PA + AᵀP = −LᵀL − εP                                     (11.48)
PB = Cᵀ − LᵀW                                            (11.49)
WᵀW = D + Dᵀ .                                           (11.50)
To see that this result is equivalent to Lemma 13 above, let α = ε/2 and rewrite the above equations as
PA + AᵀP + 2αP = −LᵀL
PB − Cᵀ = −LᵀW
(D + Dᵀ) = WᵀW ,
that is,
M := [ PA + AᵀP + 2αP    PB − Cᵀ          = − [ LᵀL   LᵀW          = − [ Lᵀ  [ L  W ]  ≤  0 .
       BᵀP − C          −(D + Dᵀ) ]             WᵀL   WᵀW ]              Wᵀ ]
Exercise 38 Consider
ĝ(s) = (βs + 1)/(s² + s + 2) .
Using Lemma 12, determine the range of β for which this transfer function is SPR. Verify your results with the KYPSPR lemma.
Consider a hermitian matrix partitioned as
Q = [ Q11  Q12
      Q21  Q22 ]
with Q21 = Q12*. Then Q > 0 if and only if
Q22 > 0   and   Q11 − Q12Q22⁻¹Q21 > 0 ,                  (11.51)
and Q < 0 if and only if
Q22 < 0   and   Q11 − Q12Q22⁻¹Q21 < 0 .
The following result also follows from (11.51). Suppose Q22 > 0. Then
Q ≥ 0   if and only if   Q11 − Q12Q22⁻¹Q21 ≥ 0 .
Bibliography
[1] Boyd, S. and El Ghaoui, L. and Feron, E. and Balakrishnan, V., Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, 1994.
[2] Gahinet, P., Nemirovski, A., Laub, A.J., and Chilali, M., LMI Control Toolbox User's Guide, The MathWorks Inc., Natick, Massachusetts, 1995.
[3] Corless, M., Robust Stability Analysis and Controller Design With Quadratic Lyapunov Functions, in Variable Structure and Lyapunov Control, A. Zinober, ed.,
Springer-Verlag, 1993.
[4] Ackmese, A.B., and Corless, M., Stability Analysis with Quadratic Lyapunov Functions: Some Necessary and Sufficient Multiplier Conditions, Systems and Control Letters, Vol. 57, No. 1, pp. 78-94, January 2008.
[5] Corless, M. and Shorten, R., A correct characterization of strict positive realness,
submitted for publication.
[6] A. Rantzer, On the Kalman-Yakubovich-Popov lemma, Systems & Control Letters,
vol. 28, pp. 7-10, 1996.
[7] Shorten, R., Corless, M., Wulff, K., Klinge, S. and R. Middleton, R., Quadratic Stability and Singular SISO Switching Systems, submitted for publication.
[8] Ran, A.C.M. and Vreugdenhil, R., Existence and comparison theorems for algebraic Riccati equations for continuous- and discrete-time systems, Linear Algebra and its Applications, Vol. 99, pp. 63-83, 1988.
Chapter 12
Invariance results
Recall the simple damped oscillator described by
ẋ1 = x2
ẋ2 = −(k/m)x1 − (c/m)x2
with m, c, k positive. This system is globally asymptotically stable about the origin. If we consider the total mechanical energy, V(x) = ½kx1² + ½mx2², of this system as a candidate Lyapunov function, then along any solution we obtain
V̇(x) = −cx2² ≤ 0 .
Using our Lyapunov results so far, this will only guarantee stability; it will not guarantee asymptotic stability, because, regardless of the value of x1, we have V̇(x) = 0 whenever
x2 = 0. However, one can readily show that there are no non-zero solutions for which x2 (t)
is identically zero. Is this sufficient to guarantee asymptotic stability? We shall shortly see
that it is sufficient; hence we can use the energy of this system to prove asymptotic stability.
Consider a system described by
ẋ = f(x)
(12.1)
By a solution of system (12.1) we mean a continuous function x(·) : [0, ∞) → IRⁿ which identically satisfies ẋ(t) = f(x(t)).
12.1
Invariant sets
A subset M of the state space IRn is an invariant set for system (12.1) if it has the following
property.
Every solution which originates in M remains in M for all future time, that is, for every
solution x(),
x(0) M
implies
x(t) M for all t 0
The simplest example of an invariant set is a set consisting of a single equilibrium point.
Lemma 14 Suppose
DV(x)f(x) ≤ 0   when   V(x) < c .
Then
M = {x : V(x) < c}
is an invariant set.
Example 125 Consider the scalar system
ẋ1 = −x1 + x1³ .
Considering V(x) = x² and c = 1, one can readily show that the interval (−1, 1) is an invariant set.
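A quick simulation illustrates the invariance (a forward-Euler sketch, not part of the notes): a solution starting inside (−1, 1) never leaves the interval and decays to zero.

```python
def simulate(x0, dt=1e-3, T=20.0):
    """Forward Euler for xdot = -x + x**3; returns final state and peak |x|."""
    x = x0
    peak = abs(x0)
    for _ in range(int(T / dt)):
        x += dt * (-x + x**3)
        peak = max(peak, abs(x))
    return x, peak

x_final, peak = simulate(0.9)
assert peak < 1.0           # never leaves the invariant set (-1, 1)
assert abs(x_final) < 1e-3  # converges to the origin
print(x_final)
```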
Largest invariant set. It should be clear that the union and intersection of two invariant
sets is also invariant. Hence, given any subset S of IRn , we can talk about the largest invariant
set M contained in S, that is M is an invariant set and contains every other invariant set
which is contained in S. As an example, consider the Duffing system
ẋ1 = x2
ẋ2 = x1 − x1³
and let S correspond to the set of all points of the form (x1, 0) where x1 is arbitrary. If a solution starts at one of the equilibrium states
(−1, 0) ,  (0, 0) ,  (1, 0) ,
it remains there; hence each of these states is contained in the largest invariant set M contained in S.
12.2
Limit sets
A state x* is called a positive limit point of a solution x(·) if there exists a sequence {tk}_{k=0}^∞ of times such that:
lim_{k→∞} tk = ∞   and   lim_{k→∞} x(tk) = x* .
The set of all positive limit points of a solution is called the positive limit set of the solution.
The solution x(t) = e^{−t} has only zero as a positive limit point, whereas x(t) = cos t has the interval [−1, 1] as its positive limit set. What about x(t) = e^{−t} cos t? The positive limit set of x(t) = e^{t} is empty. Note that the positive limit set of a periodic solution consists of all the points of the solution.
Suppose a solution converges to a single state x*; then the positive limit set of this solution is the set consisting of that single state, that is, {x*}. For another example, consider the
Van der Pol oscillator. Except for the zero solution, the positive limit set of every solution
is the set consisting of all the states in the limit cycle.
Exercise 39 What are the positive limit sets of the following solutions?
(a) x(t) = sin(t²)
(b) x(t) = et sin(t)
Distance of a point x from a set M:
d(x, M) := inf{ ‖x − y‖ : y ∈ M } .
Lemma 15 (Positive limit set of a bounded solution.) The positive limit set L+ of a
bounded solution to (12.1) is nonempty and has the following properties.
1. It is closed and bounded.
2. It is an invariant set.
3. The solution converges to L+ .
Proof. See Khalil.
Example 126 (Van der Pol oscillator)
12.3
LaSalle's Theorem
In the following results, V is a scalar valued function of the state, that is, V : IRn IR, and
is continuously differentiable.
Lemma 16 (A convergence result for bounded solutions.) Suppose x(·) is a bounded solution of (12.1) and there exists a function V such that
DV(x(t))f(x(t)) ≤ 0
for all t ≥ 0. Then x(·) converges to the largest invariant set M contained in the set
S := {x ∈ IRⁿ : DV(x)f(x) = 0} .
Proof. Since x() is a bounded solution of (12.1), it follows from the previous lemma that
it has a nonempty positive limit set L+ . Also, L+ is an invariant set for (12.1) and x()
converges to this set.
We claim that V is constant on L+. To see this, we first note that
d/dt V(x(t)) = DV(x(t))f(x(t)) ≤ 0 ;
hence, V(x(t)) does not increase with t. Since x(·) is bounded and V is continuous, it follows that V(x(·)) is bounded below. It now follows that there is a constant c such that
lim_{t→∞} V(x(t)) = c .
Consider now any member x* of L+. Since L+ is the positive limit set of x(·), there is a sequence {tk}_{k=0}^∞ such that
lim_{k→∞} tk = ∞   and   lim_{k→∞} x(tk) = x* .
Since V is continuous,
V(x*) = lim_{k→∞} V(x(tk)) = c .
for all t.
Thus, x1(t) must belong to the set E of equilibrium states, that is,
E = {(nπ, 0) : n is an integer} .
Thus M is contained in E. Since E is invariant, E = M. Thus all bounded solutions converge to E.
Theorem 22 (LaSalle's Theorem.) Suppose there is a radially unbounded function V such that
DV(x)f(x) ≤ 0
for all x. Then all solutions of (12.1) are bounded and converge to the largest invariant set M contained in the set
S := {x ∈ IRⁿ : DV(x)f(x) = 0} .
Proof. We have already seen that the hypotheses of the theorem guarantee that all solutions are bounded. The result now follows from Lemma 16.
V(x) = x1² + x1x2 + ½x2²
Exercise 40 Using LaSalle's Theorem, show that all solutions of the system
ẋ1 = x2²
ẋ2 = −x1x2
must approach the x1 axis.
for all q ≠ 0. Suppose the term k(q) is due to conservative forces and define the potential energy by
P(q) = ∫₀^q k(μ) dμ .
Show that if lim_{|q|→∞} P(q) = ∞, then all motions of this system must approach one of its equilibrium positions.
12.4
12.4.1
Using LaSalle's Theorem, we can readily obtain the following result on global asymptotic
stability. This result does not require V to be negative for all nonzero states.
Theorem 23 (Global asymptotic stability) Suppose there is a positive definite function V with
DV(x)f(x) ≤ 0
for all x and the only solution for which
DV(x(t))f(x(t)) ≡ 0
is the zero solution. Then the origin is a globally asymptotically stable equilibrium state.
Proof.
Example 130 A damped nonlinear oscillator
ẋ1 = x2
ẋ2 = −x1³ − cx2
with c > 0.
Example 131 A nonlinearly damped oscillator
ẋ1 = x2
ẋ2 = −x1 − cx2³
with c > 0.
for all q ≠ 0 and
∫ k(μ) dμ = ∞ .
12.4.2
Asymptotic stability

Suppose there is a positive definite function V with DV(x)f(x) ≤ 0 for ‖x‖ ≤ R and the only solution for which DV(x(t))f(x(t)) ≡ 0 and ‖x(t)‖ ≤ R is the zero solution. Then the origin is an asymptotically stable equilibrium state.
Example 132 Damped simple pendulum.
ẋ1 = x2
ẋ2 = −(mgl/I) sin x1 − (c/I) x2
with c > 0.
12.5
LTI systems

Consider again the LTI system ẋ = Ax and the Lyapunov equation
PA + A*P + C*C = 0 .                                     (12.2)
The earlier Lyapunov results can be refined using observability; in particular:
(c) For every matrix C with (C, A) observable, the Lyapunov equation (12.2) has a unique solution for P and this solution is hermitian positive-definite.
12.6
Applications
12.6.1
Consider the scalar system
ẋ = ax + bu                                              (12.3)
where the state x and the control input u are scalars. The scalars a and b are unknown but constant. The only information we have on these parameters is that
b > 0 .
We wish to obtain a controller which guarantees that, regardless of the values of a and b, all solutions of the resulting closed loop system satisfy
lim_{t→∞} x(t) = 0 .
Consider the adaptive controller
u = −kx                                                  (12.4a)
k̇ = γx²                                                  (12.4b)
where γ > 0. The resulting closed loop system is
ẋ = ax − bkx                                             (12.5a)
k̇ = γx² .                                                (12.5b)
As a candidate Lyapunov function consider
V(x, k) = x² + (b/γ)(k − k̄)²
where k̄ is any constant satisfying bk̄ > a. Then
V̇ = 2xẋ + 2(b/γ)(k − k̄)k̇
  = 2ax² − 2bkx² + 2b(k − k̄)x²
  = 2(a − bk̄)x² ≤ 0 .
Since b and γ are positive, V is radially unbounded; also V̇ ≤ 0. It now follows that x and k are bounded. Also, all solutions approach the largest invariant set in the set corresponding to x = 0. This implies that x(t) approaches zero as t goes to infinity.
(12.6)
where t ∈ IR is time, x(t) ∈ IR is the state, and u(t) ∈ IR is the control input. The uncertainty in the system is due to the unknown constant real scalar parameter θ. We wish to design a controller generating u(t) which assures that, for all initial conditions and for all θ ∈ IR, the closed-loop system satisfies
lim_{t→∞} x(t) = 0 .                                     (12.7)
= x sin x
(12.8a)
(12.8b)
(12.9)
where t, x, u, θ are as before and f and g are continuous functions with f satisfying
f (x) > 0 if x < 0
f (x) < 0 if x > 0
Design a controller generating u which assures that, for all initial conditions and for all ,
the resulting closed-loop system has property (12.7). Prove that your design works.
12.6.2
ẋ = f(x) + g(x)(u + w)                                   (12.10)
where the n-vector x(t) is the state, the m-vector u(t) is the control input while the constant
m-vector w is an unknown constant disturbance input.
for all x 6= 0 .
(12.11)
and the gain matrices KP and KI are arbitrary positive definite symmetric matrices. We
claim that these controllers achieve the desired behavior.
To prove our claim, note that the closed loop system can be described by
ẋ = f(x) − g(x)KP y + g(x)(w − xc)
ẋc = KI y
y = g(x)ᵀ DV(x)ᵀ
and consider the candidate Lyapunov function
W(x, xc) = V(x) + ½ (xc − w)ᵀ KI⁻¹ (xc − w) .
Prove your design works and illustrate your results with numerical simulations.
Chapter 13
Stability of nonautonomous systems
Here, we are interested in the stability properties of systems described by
ẋ = f(t, x)
(13.1)
where the state x(t) is an n-vector at each time t. By a solution of the above system we mean any continuous function x(·) : [t0, t1) → IRⁿ (with t0 < t1) which satisfies ẋ(t) = f(t, x(t)) for t0 ≤ t < t1. We refer to t0 as the initial time associated with the solution.
We will mainly focus on stability about the origin. However, suppose one is interested in stability about some nonzero solution x̄(·). Introducing the new state e(t) = x(t) − x̄(t), its evolution is described by
ė = f̃(t, e)                                             (13.2)
with f̃(t, e) = f(t, x̄(t) + e) − f(t, x̄(t)). Since e(t) = 0 corresponds to x(t) = x̄(t), one can study the stability of the original system (13.1) about x̄(·) by studying the stability of the error system (13.2) about the origin.
Scalar linear systems. All solutions of
ẋ = a(t)x
satisfy
x(t) = e^{∫_{t0}^{t} a(τ) dτ} x(t0) .
Consider
ẋ = −e^{−t} x .
Here a(t) = −e^{−t}. Since a(t) < 0 for all t, one might expect that all solutions converge to zero. This does not happen. Since
x(t) = e^{(e^{−t} − e^{−t0})} x(t0) ,
we have
lim_{t→∞} x(t) = e^{−e^{−t0}} x(t0) ≠ 0 .
13.1
13.1.1
Boundedness of solutions
DEFN. Uniform boundedness. The solutions of (13.1) are uniformly bounded if for each x0 there exists β(x0) ≥ 0 such that for all t0,
x(t0) = x0   implies   ‖x(t)‖ ≤ β(x0) for all t ≥ t0 .
Note that, in the above definition, the bound β(x0) is independent of the initial time t0.
13.1.2
Stability
DEFN. Stability. The origin is stable if for each ε > 0 and each t0 there exists δ > 0 such that
‖x(t0)‖ < δ   implies   ‖x(t)‖ < ε for all t ≥ t0 .
Note that, in the above definition, the scalar may depend on the initial time t0 . When
can be chosen independent of t0 , we say that the stability is uniform.
DEFN. Uniform stability (US). The origin is uniformly stable if for each > 0 there exists
> 0 such that for all t0 ,
||x(t0 )|| <
||x(t)|| <
for all t t0
Example 133 Stable but not uniformly stable. Consider the scalar linear time-varying system:
    ẋ = (6t sin t − 2t)x
We will analytically show that this system is stable about the origin. We will also show that
not only is it not uniformly stable about zero, the solutions are not uniformly bounded. If
one simulates this system with initial condition x(t0 ) = x0 for different initial times t0 while
keeping x0 the same one will obtain a sequence of solutions whose peaks go to infinity as t0
goes to infinity. This is illustrated in Figure 13.2.
The solutions of this system are given by x(t) = e^{γ(t)−γ(t₀)} x(t₀) where
    γ(t) = 6 sin t − 6t cos t − t².
Clearly, for each initial time t₀, there is a bound c(t₀) such that
    e^{γ(t)−γ(t₀)} ≤ c(t₀)    for all t ≥ t₀.
Hence every solution x(·) satisfies |x(t)| ≤ c(t₀)|x(t₀)| for t ≥ t₀ and we can demonstrate stability of the origin by choosing δ = ε/c(t₀) for any ε > 0.
To see that this system is not uniformly stable, consider t₀ = 2nπ for any nonnegative integer n. Then γ(t₀) = −12nπ − 4n²π² and for t = (2n+1)π we have γ(t) = 6(2n+1)π − (2n+1)²π². Hence
    γ(t) − γ(t₀) = (6 − π)(4n + 1)π.
So for any solution x(·) and any nonnegative integer n, we have
    x((2n+1)π) = x(2nπ) e^{(6−π)(4n+1)π}.
So, regardless of how small a nonzero bound one places on x(t₀), one cannot place a bound on x(t) which is independent of t₀. Hence the solutions of this system are not uniformly bounded. Also, this system is not uniformly stable about the origin.
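The growth of the peaks is easy to verify numerically; a small sketch (plain Python) evaluating the growth factor e^{γ((2n+1)π)−γ(2nπ)} for a few values of n:

```python
import math

def gamma(t):
    # gamma(t) = 6 sin t - 6 t cos t - t^2, an antiderivative of 6 t sin t - 2 t
    return 6*math.sin(t) - 6*t*math.cos(t) - t*t

# Peak growth factor over [2 n pi, (2n+1) pi] for a solution with x(2 n pi) = 1
growth = [math.exp(gamma((2*n + 1)*math.pi) - gamma(2*n*math.pi)) for n in range(3)]
print(growth)  # the n = 0 factor is e^{(6-pi)pi}, already about 8e3
```

The factors grow without bound as n increases, matching the behavior shown in Figure 13.2.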
13.1.3
Asymptotic stability
DEFN. Asymptotic stability (AS). The origin is asymptotically stable if it is stable and, for each initial time t₀, there is a scalar ρ > 0 such that every solution with ||x(t₀)|| < ρ satisfies
    lim_{t→∞} x(t) = 0.    (13.3)
DEFN. Uniform asymptotic stability (UAS). The origin is uniformly asymptotically stable if
i) It is uniformly stable and there exists β(x₀) ≥ 0 such that
    x(t₀) = x₀  ⟹  ||x(t)|| ≤ β(x₀)    for all t ≥ t₀.
ii) For each ε > 0 and each x₀, there exists T(ε, x₀) ≥ 0 such that for all t₀,
    x(t₀) = x₀  ⟹  ||x(t)|| < ε    for all t > t₀ + T(ε, x₀).
13.1.4
DEFN. Global asymptotic stability (GAS). The origin is globally asymptotically stable if
(a) It is stable.
(b) Every solution satisfies
    lim_{t→∞} x(t) = 0.
DEFN. Global uniform asymptotic stability (GUAS). The origin is globally uniformly asymptotically stable if
(a) It is uniformly stable.
(b) The solutions are globally uniformly bounded.
(c) For each initial state x₀ and each ε > 0, there exists T(ε, x₀) ≥ 0 such that for all t₀,
    x(t₀) = x₀  ⟹  ||x(t)|| < ε    for all t > t₀ + T(ε, x₀).
13.1.5
Exponential stability
DEFN. Uniform exponential stability (UES). The origin is uniformly exponentially stable with rate of convergence α > 0 if there exist R > 0 and β > 0 such that whenever ||x(t₀)|| < R one has
    ||x(t)|| ≤ β||x(t₀)||e^{−α(t−t₀)}    for all t ≥ t₀.
Note that exponential stability implies asymptotic stability, but, in general, the converse is not true.
DEFN. Global uniform exponential stability (GUES). The origin is globally uniformly exponentially stable with rate of convergence α > 0 if there exists β > 0 such that every solution satisfies
    ||x(t)|| ≤ β||x(t₀)||e^{−α(t−t₀)}    for all t ≥ t₀.
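A standard illustration of the gap between the two notions is ẋ = −x³; a small sketch (plain Python, using the exact solution; the envelope parameters are arbitrary illustrative choices):

```python
import math

# The origin of xdot = -x^3 is asymptotically stable but not exponentially
# stable: the exact solution is x(t) = x0/sqrt(1 + 2 x0^2 t), which decays
# only like 1/sqrt(t).
def sol(t, x0=1.0):
    return x0/math.sqrt(1 + 2*x0*x0*t)

print(sol(10.0), sol(1000.0))
# any exponential envelope beta*e^{-alpha t} eventually falls below x(t)
alpha, beta = 0.01, 10.0   # illustrative envelope parameters
print(sol(1e4) > beta*math.exp(-alpha*1e4))  # → True
```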
Chapter 14
Lyapunov theory for nonautonomous
systems
14.1
Introduction
Here again we consider systems described by
    ẋ = f(t, x)    (14.1)
where x(t) ∈ IRⁿ and t ∈ IR. We will focus on stability about the origin. However, suppose one is interested in stability about some nonzero solution x̄(·). Introducing the new state e(t) = x(t) − x̄(t), its evolution is described by
    ė = f̃(t, e)    (14.2)
with f̃(t, e) = f(t, x̄(t) + e) − ẋ̄(t). Since e(t) = 0 corresponds to x(t) = x̄(t), one can study the stability of the original system (14.1) about x̄(·) by studying the stability of the error system (14.2) about the origin.
In this chapter, V is a scalar-valued function of time and state, that is, V : IR × IRⁿ → IR. Suppose V is continuously differentiable. Then, at any time t, the derivative of V along a solution x(·) of system (14.1) is given by
    dV(t, x(t))/dt = (∂V/∂t)(t, x(t)) + (∂V/∂x)(t, x(t)) ẋ(t)
                   = (∂V/∂t)(t, x(t)) + (∂V/∂x)(t, x(t)) f(t, x(t)).
For brevity, we write
    V̇ = ∂V/∂t + (∂V/∂x) f
      = ∂V/∂t + (∂V/∂x₁) f₁ + (∂V/∂x₂) f₂ + … + (∂V/∂xₙ) fₙ
where ∂V/∂t and ∂V/∂x denote the partial derivatives of V with respect to t and x, respectively.
14.2
Stability
14.2.1
A function V is said to be locally positive definite (lpd) if there is a locally positive definite function V₁ : IRⁿ → IR and a scalar R > 0 such that
    V₁(x) ≤ V(t, x)    for all t and all ||x|| ≤ R.
A function V is said to be locally decrescent (ld) if there is a locally positive definite function V₂ : IRⁿ → IR and a scalar R > 0 such that
    V(t, x) ≤ V₂(x)    for all t and all ||x|| ≤ R.
Example 134
(a) V(t, x) = (2 + cos t)x²   — lpd and ld
(b) V(t, x) = e^{−t} x²      — ld but not lpd
(c) V(t, x) = (1 + t²)x²     — lpd but not ld
14.2.2
A stability theorem
Theorem 26 (Uniform stability) Suppose there exists a function V with the following
properties.
(a) V is locally positive definite and locally decrescent.
(b) There is a scalar R > 0 such that for all t and all x with ||x|| ≤ R,
    (∂V/∂t)(t, x) + (∂V/∂x)(t, x) f(t, x) ≤ 0.
Then the origin is a uniformly stable equilibrium state for x = f (t, x).
If V satisfies the hypotheses of the above theorem, then V is said to be a Lyapunov
function which guarantees the stability of origin for system (14.1).
Example 135
    ẋ = a(t)x
where a(t) ≤ 0 for all t.
14.3
14.4
14.4.1
14.4.2
Consider the system
    ẋ₁ = −x₁ − (1/4)(sin t)x₂
    ẋ₂ = x₁ − x₂
and the function
    V(t, x) = x₁² + (1 + (1/4) sin t)x₂².
Since
    x₁² + (3/4)x₂² ≤ V(t, x) ≤ x₁² + (5/4)x₂²
it follows that V is positive definite and decrescent. Along any solution of the above system, we have
    V̇ = (1/4)(cos t)x₂² + 2x₁(−x₁ − (1/4)(sin t)x₂) + (2 + (1/2) sin t)x₂(x₁ − x₂)
      = −2x₁² − (2 + (1/2) sin t − (1/4) cos t)x₂² + 2x₁x₂
      ≤ −W(x)
where
    W(x) = 2x₁² − 2x₁x₂ + (5/4)x₂².
Since W(x) is positive for all nonzero x, it follows from the previous theorem that the above system is globally uniformly asymptotically stable about the origin.
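A quick numerical check of this example (forward Euler in plain Python; the step size, horizon, and initial state are arbitrary illustrative choices):

```python
import math

# W(x) = 2 x1^2 - 2 x1 x2 + (5/4) x2^2 = x^T Q x with Q = [[2, -1], [-1, 1.25]];
# Q has positive leading minors (2 and det Q = 1.5), so W is positive definite.
x1, x2, t, dt = 1.0, -1.0, 0.0, 1e-3
for _ in range(40000):   # integrate 40 time units with forward Euler
    dx1 = -x1 - 0.25*math.sin(t)*x2
    dx2 = x1 - x2
    x1, x2, t = x1 + dt*dx1, x2 + dt*dx2, t + dt
print(abs(x1) + abs(x2))  # the state has decayed essentially to zero
```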
14.4.3
Proof of theorem 28
In what follows x() represents any solution and t0 is its initial time.
Uniform stability. Consider any > 0. We need to show that there exists > 0 such
that whenever ||x(t0 )|| < , one has ||x(t)|| < for all t t0 .
Since V is positive definite and decrescent, there exist positive definite functions V₁ and V₂ such that
    V₁(x) ≤ V(t, x) ≤ V₂(x)
for all t and x. Since V₁ is positive definite, there exists c > 0 such that
    V₁(x) < c  ⟹  ||x|| < ε.
Since V₂ is continuous and V₂(0) = 0, there exists δ > 0 such that
    ||x(t₀)|| < δ  ⟹  V₂(x(t₀)) < c.
Since V̇ ≤ 0 along solutions, V(t, x(t)) ≤ V(t₀, x(t₀)) for t ≥ t₀; that is,
    V₁(x(t)) ≤ V₂(x(t₀)) < c    for all t ≥ t₀.
Hence ||x(t)|| < ε for all t ≥ t₀.
Global uniform convergence. Consider any initial state x₀ and any ε > 0. We need to show that there exists T ≥ 0 such that whenever x(t₀) = x₀, one has ||x(t)|| < ε for all t ≥ t₀ + T.
Consider any solution with x(t₀) = x₀. From uniform stability, we know that there exists μ > 0 such that for any t₁ ≥ t₀,
    ||x(t₁)|| < μ  ⟹  ||x(t)|| < ε    for all t ≥ t₁.
Along any solution,
    dV(t, x(t))/dt ≤ −W(x(t)).
Let r be a uniform bound on ||x(t)|| (such a bound exists by the first part of the proof) and let
    α := min{W(x) : μ ≤ ||x|| ≤ r}.
Since W is continuous and W(x) > 0 for x ≠ 0, the above minimum exists and α > 0. Let
    T := V₂(x₀)/α.
We now show by contradiction that there exists t₁ with t₀ ≤ t₁ ≤ t₂, t₂ := t₀ + T, such that ||x(t₁)|| < μ; from this it will follow that ||x(t)|| < ε for all t ≥ t₁ and hence for t ≥ t₀ + T.
So, suppose, on the contrary, that
    ||x(t)|| ≥ μ    for t₀ ≤ t ≤ t₂.
Then
    dV(t, x(t))/dt ≤ −α    for t₀ ≤ t ≤ t₂.
So,
    V₁(x(t₂)) ≤ V(t₂, x(t₂)) = V(t₀, x(t₀)) + ∫_{t₀}^{t₂} (dV(t, x(t))/dt) dt
              ≤ V₂(x₀) − αT
              ≤ 0,
that is,
    V₁(x(t₂)) ≤ 0.
This contradicts V₁(x(t₂)) > 0. Hence, there must exist t₁ with t₀ ≤ t₁ ≤ t₀ + T such that ||x(t₁)|| < μ.
14.5
Exponential stability
14.5.1
Exponential stability
14.5.2
14.6
Quadratic stability
From the above it should be clear that all the results in the section on quadratic stability
also hold for time-varying systems.
14.7
Boundedness
A scalar valued function V of time and state is said to be uniformly radially unbounded if
there are radially unbounded functions V1 and V2 of state only such that
    V₁(x) ≤ V(t, x) ≤ V₂(x)
for all t and x.
Example 141
    V(t, x) = (2 + sin t)x²      — yes
    V(t, x) = (1 + e^{−t})x²     — yes
    V(t, x) = e^{−t}x²           — no
    V(t, x) = (1 + t²)x²         — no
Theorem 31 Suppose there exists a uniformly radially unbounded function V and a scalar R ≥ 0 such that for all t and x with ||x|| ≥ R,
    (∂V/∂t)(t, x) + (∂V/∂x)(t, x) f(t, x) ≤ 0.
Then the solutions of ẋ = f(t, x) are globally uniformly bounded.
Note that, in the above theorem, V does not have to be positive away from the origin.
Example 142 Consider the disturbed nonlinear system,
x = x3 + w(t)
where w is a bounded disturbance input. We will show that the solutions of this system are
GUB.
Let β be a bound on the magnitude of w, that is, |w(t)| ≤ β for all t. Consider V(x) = x². Since V is (uniformly) radially unbounded and
    V̇ = −2x⁴ + 2xw(t) ≤ −2|x|⁴ + 2β|x|,
the hypotheses of the above theorem are satisfied with R = β^{1/3}; hence GUB.
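A simulation sketch of this example (plain Python, forward Euler; the disturbance w(t) = sin 3t, the bound β = 1, and the step sizes are illustrative choices):

```python
import math

beta = 1.0   # bound on |w(t)|; illustrative disturbance w(t) = sin(3t)
x, t, dt = 5.0, 0.0, 1e-4
peak_tail = 0.0
for _ in range(int(20/dt)):
    x += dt*(-x**3 + math.sin(3*t))   # xdot = -x^3 + w(t)
    t += dt
    if t > 10.0:                      # record the peak after transients
        peak_tail = max(peak_tail, abs(x))
print(peak_tail)  # remains bounded, near beta**(1/3) = 1
```

Despite the large initial condition x(0) = 5, the solution quickly enters and stays in a bounded region, as the theorem guarantees.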
Exercise 46 (Forced Duffing's equation with damping.) Show that all solutions of the system
    ẋ₁ = x₂
    ẋ₂ = x₁ − x₁³ − cx₂ + w(t),    |w(t)| ≤ β,    c > 0
are bounded.
Hint: Consider
    V(x) = (1/2)εc²x₁² + εc x₁x₂ + (1/2)x₂² − (1/2)x₁² + (1/4)x₁⁴
where 0 < ε < 1. Letting
    P = (1/2) [ εc²  εc ]
              [ εc   1  ]
note that P > 0 and
    V(x) = xᵀPx − (1/2)x₁² + (1/4)x₁⁴,
which is uniformly radially unbounded since −(1/2)x₁² + (1/4)x₁⁴ ≥ −1/4 for all x₁.
14.8
Suppose E is a nonempty subset of the state space IRn . Recall that E is invariant for the
system
    ẋ = f(t, x)
(14.3)
if E has the property that, whenever a solution starts in E, it remains therein thereafter,
that is, if x(t0 ) is in E, then x(t) is in E for all t t0 .
We say that the set E is attractive for the above system if every solution x() of (14.3)
converges to E, that is,
    lim_{t→∞} d(x(t), E) = 0.
Theorem 32 Suppose there is a continuously differentiable function V , a continuous function W and a scalar c such that the following hold.
1) V is radially unbounded.
2) Whenever V (x) > c, we have
    DV(x)f(t, x) ≤ −W(x) < 0    (14.4)
for all t.
Then, the solutions of the system ẋ = f(t, x) are globally uniformly bounded and the set
{x IRn : V (x) c}
is an invariant and attractive set.
14.9
Here we consider disturbed systems described by
    ẋ = F(t, x, w)    (14.5)
where the m-vector w(t) is the disturbance input at time t. As a measure of the size of a
disturbance w() we shall consider its peak norm:
    ||w(·)||∞ = sup_{t≥t₀} ||w(t)||.
Then, for every bounded disturbance w(·), the state x(·) is bounded and the set
    {x ∈ IRⁿ : V(x) ≤ γ₁||w(·)||∞²}
is invariant and attractive for system (14.5).
Example 144
x = x x3 + w
(14.7)
where U(x, w) = |x|(|x| + |x|³ − |w|). Clearly U(x, w) > 0 when |x| > |w|, that is, when V(x) > |w|². Hence, taking γ₁ = 1, it follows from the above theorem that the following interval is invariant and attractive for this system:
    [−r̄, r̄]    where r̄ = ||w(·)||∞.
Proof of Theorem 33 Consider the disturbed system (14.5) subject to a specific bounded
disturbance w(). This disturbed system can be described by x = f (t, x) with f (t, x) =
F (t, x, w(t)). Let
    β = ||w(·)||∞ = sup_{t≥t₀} ||w(t)||.
Corollary 2 Consider the disturbed system (14.5) equipped with a performance output specified in (14.6). Suppose the hypotheses of Theorem 33 are satisfied and there exists a scalar γ₂ such that for all t, x and w,
    ||H(t, x, w)||² ≤ V(x) + γ₂||w||².    (14.8)
Then for every bounded disturbance w(·), the output z(·) is bounded and satisfies
    lim sup_{t→∞} ||z(t)|| ≤ γ||w(·)||∞    where γ = √(γ₁ + γ₂).    (14.9)
14.9.1
For linear systems ẋ = Ax + Bw with output z = Cx + Dw, the above conditions reduce to linear matrix inequalities. Suppose there exist a positive definite symmetric P, a scalar α > 0 and scalars γ₁, γ₂ such that
    [ PA + AᵀP + αP   PB    ]  < 0        (14.10a)
    [ BᵀP             −αγ₁I ]

    [ CᵀC − P   CᵀD         ]  ≤ 0        (14.10b)
    [ DᵀC       DᵀD − γ₂I   ]
Then
    lim sup_{t→∞} ||z(t)|| ≤ γ||w(·)||∞    where γ = √(γ₁ + γ₂).    (14.11)
For each fixed α we have LMIs in P, γ₁ and γ₂. One can minimize γ₁ + γ₂ for each α and do a line search over α.
The above results can be generalized to certain classes of nonlinear systems such as polytopic systems. They can also be used for control design.
14.10
Regions of attraction
Suppose that the origin is an asymptotically stable equilibrium state of the system
    ẋ = f(t, x)
(14.12)
where the state x(t) is an n-vector. We say that a non-empty subset A of the state space
IRn is a region of attraction for the origin if every solution which originates in A converges to
the origin, that is,
    x(t₀) ∈ A    implies    lim_{t→∞} x(t) = 0.
Figure 14.4:
Example 145
    ẋ = −x(1 − x²)
Example 146
    ẋ₁ = −x₁(1 − x₁² − x₂²) + x₂
    ẋ₂ = −x₁ − x₂(1 − x₁² − x₂²)
With V(x) = x₁² + x₂², one obtains
    V̇ = −2V(1 − V).
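The relation V̇ = −2V(1 − V) suggests that solutions starting inside the open unit disk converge to the origin while solutions starting outside diverge; a quick forward-Euler check (plain Python; step size and horizon are arbitrary choices):

```python
import math

def simulate(r0, steps=200000, dt=1e-4):
    # Forward-Euler integration of Example 146 starting on the x1-axis
    x1, x2 = r0, 0.0
    for _ in range(steps):
        s = 1 - x1*x1 - x2*x2
        dx1 = -x1*s + x2
        dx2 = -x1 - x2*s
        x1, x2 = x1 + dt*dx1, x2 + dt*dx2
        if x1*x1 + x2*x2 > 1e6:       # escape detected
            return float('inf')
    return math.hypot(x1, x2)

print(simulate(0.9))   # inside the unit circle: decays toward 0
print(simulate(1.1))   # outside the unit circle: escapes
```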
14.10.1
Here we consider systems of the form
    ẋ = A(t, x)x    (14.13)
where
    A(t, x) = A₀ + δ(t, x)ΔA    (14.14)
and δ is a scalar valued function bounded above and below on the region of interest.    (14.15)
Here
    C = [1  0],    a = 0,    b = 2.
Theorem 35 Suppose there is a positive definite symmetric P and a positive scalar β such that
    PA₀ + A₀ᵀP + a(PΔA + ΔAᵀP) < 0    (14.16a)
    PA₀ + A₀ᵀP + b(PΔA + ΔAᵀP) < 0    (14.16b)
    CᵀC − P ≤ 0.    (14.16c)
(14.17)
for all x. So whenever ||x|| < β^{1/2}, we must have xᵀPx < β, that is, x is in A. Hence the set of states of norm less than β^{1/2} is a region of attraction for the origin.
14.11
Here we consider linear time-varying (LTV) systems described by
    ẋ = A(t)x.    (14.18)
The state transition matrix Φ(·, ·) associated with (14.18) satisfies Φ(·, 0) = X(·) where
    Ẋ(t) = A(t)X(t),    X(0) = I,    (14.19)
and every solution satisfies
    x(t) = Φ(t, t₀)x(t₀).    (14.20)
Example 148 (Markus and Yamabe, 1960.) This is an example of an unstable second order system whose time-varying A matrix has constant eigenvalues with negative real parts. Consider
    A(t) = [ −1 + a cos²t        1 − a sin t cos t ]
           [ −1 − a sin t cos t  −1 + a sin²t      ]
with 1 < a < 2. Here,
    Φ(t, 0) = [ e^{(a−1)t} cos t    e^{−t} sin t ]
              [ −e^{(a−1)t} sin t   e^{−t} cos t ].
Since a > 1, the system corresponding to this A(t) matrix has unbounded solutions. However, for all t, the characteristic polynomial of A(t) is given by
    p(s) = s² + (2 − a)s + (2 − a).
Since 2 − a > 0, the eigenvalues of A(t) have negative real parts.
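Both claims in this example can be checked numerically; the sketch below (plain Python, with a = 1.5 as an illustrative value) verifies that the eigenvalues of A(t) have negative real parts and that x(t) = e^{(a−1)t}(cos t, −sin t) satisfies the differential equation while growing without bound:

```python
import cmath, math

a = 1.5
# Eigenvalues of A(t) are the (constant) roots of s^2 + (2-a)s + (2-a)
disc = cmath.sqrt((2 - a)**2 - 4*(2 - a))
s1 = (-(2 - a) + disc)/2
print(s1.real)   # negative for 1 < a < 2

def residual(t):
    # residual of xdot = A(t)x for x(t) = e^{(a-1)t} (cos t, -sin t)
    e = math.exp((a - 1)*t)
    x = (e*math.cos(t), -e*math.sin(t))
    xdot = ((a - 1)*x[0] - e*math.sin(t), -(a - 1)*e*math.sin(t) - e*math.cos(t))
    A = ((-1 + a*math.cos(t)**2, 1 - a*math.sin(t)*math.cos(t)),
         (-1 - a*math.sin(t)*math.cos(t), -1 + a*math.sin(t)**2))
    r1 = xdot[0] - (A[0][0]*x[0] + A[0][1]*x[1])
    r2 = xdot[1] - (A[1][0]*x[0] + A[1][1]*x[1])
    return abs(r1) + abs(r2)

print(max(residual(0.1*k) for k in range(100)))  # ≈ 0: x(t) solves the ODE
```
Since e^{(a−1)t} → ∞, this exact solution is unbounded even though every frozen-time A(t) is Hurwitz.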
14.11.1
Lyapunov functions
Consider V(t, x) = xᵀP(t)x where P(t) is a continuously differentiable symmetric matrix function satisfying
    c₁ ≤ λ_min(P(t))    and    λ_max(P(t)) ≤ c₂    (14.21)
for some positive scalars c₁, c₂, and
    Ṗ + PA + AᵀP + Q = 0    (14.22)
for some continuous symmetric Q(·) with
    λ_min(Q(t)) ≥ c₃ > 0    for all t.    (14.23)
Then, along any solution of (14.18),
    V̇(t, x) = −xᵀQ(t)x ≤ −c₃||x||² ≤ −2αV(t, x)    where α = c₃/2c₂.
So, if there are matrix functions P(·) and Q(·) which satisfy (14.21)–(14.23), we can conclude that the LTV system (14.18) is GUES. The following theorem provides the converse result.
Theorem 36 Suppose A(·) is continuous and bounded and the corresponding linear time-varying system (14.18) is globally uniformly asymptotically stable. Then, for all t, the matrix
    P(t) = ∫_t^∞ Φ(τ, t)ᵀ Φ(τ, t) dτ    (14.24)
is well defined, where Φ is the state transition matrix associated with (14.18), and
    Ṗ + PA + AᵀP + I = 0.    (14.25)
Also, there exist positive real scalars c₁ and c₂ such that for all t,
    c₁ ≤ λ_min(P(t))    and    λ_max(P(t)) ≤ c₂.
For a scalar system ẋ = a(t)x we have Φ(τ, t) = e^{∫_t^τ a(s) ds}. Hence, with a(t) = −(1 + 2 sin t) for example,
    P(t) = ∫_t^∞ e^{−2∫_t^τ (1 + 2 sin s) ds} dτ.
14.12
Linearization
Consider a nonlinear system
    ẋ = f(t, x)
and a solution x̄(·). The corresponding linearization is the LTV system δẋ = A(t)δx where
    A(t) = (∂f/∂x)(t, x̄(t)).    (14.26)
Chapter 15
The describing function method
Here we consider the problem of determining whether or not a nonlinear system has a limit
cycle (periodic solution) and approximately finding this solution.
We consider systems which can be described by the negative feedback combination of
a SISO LTI (linear time invariant) system and a SISO memoryless nonlinear system; see Figure 15.2. If the LTI system is represented by its transfer function Ĝ and the nonlinear system by the function φ, the overall system is described by
    ŷ = Ĝ û    (15.1a)
    u = −φ(y)    (15.1b)
where u and y are the input and output, respectively, for the LTI system and their respective Laplace transforms are û and ŷ.
As a general example, consider any system described by
    ẋ = Ax − Bφ(Cx)
where x(t) is an n-vector. In this example
    Ĝ(s) = C(sI − A)⁻¹B.
A solution y(·) is periodic with period T > 0 if
    y(t + T) = y(t)    for all t.    (15.2)
We are looking for periodic solutions to (15.1).
15.1
Consider any piecewise continuous signal s : IR → IR which is periodic with period T. Then s has a Fourier series expansion, that is, s can be expressed as
    s(t) = a₀ + Σ_{k=1}^∞ aₖ cos(kωt) + Σ_{k=1}^∞ bₖ sin(kωt)
where ω = 2π/T is called the natural frequency of s. The Fourier coefficients a₀, a₁, … and b₁, b₂, … are uniquely given by
    a₀ = (1/T) ∫_{−T/2}^{T/2} s(t) dt
    aₖ = (2/T) ∫_{−T/2}^{T/2} s(t) cos(kωt) dt    and    bₖ = (2/T) ∫_{−T/2}^{T/2} s(t) sin(kωt) dt    for k = 1, 2, …
If s is odd, that is, s(−t) = −s(t), then
    aₖ = 0    for k = 0, 1, 2, …
and
    bₖ = (4/T) ∫_0^{T/2} s(t) sin(kωt) dt    for k = 1, 2, …
Example 150 Sometimes it is easier to compute Fourier coefficients without using the above
integrals. Consider
    s(t) = sin³t.
Clearly, this is an odd periodic signal with period T = 2π. Since
    sin t = (e^{jt} − e^{−jt})/2j,
we have
    sin³t = ((e^{jt} − e^{−jt})/2j)³ = −(1/8j)(e^{j3t} − 3e^{jt} + 3e^{−jt} − e^{−j3t})
          = (3/4) sin t − (1/4) sin 3t.
So,
    b₁ = 3/4,    b₃ = −1/4
and all other Fourier coefficients are zero.
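The coefficients above can be double-checked by evaluating the defining integrals numerically; a small sketch (plain Python, midpoint rule; the number of quadrature points is an arbitrary choice):

```python
import math

def bk(s, k, T, n=2000):
    # b_k = (2/T) * integral over [-T/2, T/2] of s(t) sin(k w t) dt (midpoint rule)
    w, h = 2*math.pi/T, T/n
    return (2/T)*h*sum(s(-T/2 + (i + 0.5)*h)*math.sin(k*w*(-T/2 + (i + 0.5)*h))
                       for i in range(n))

s = lambda t: math.sin(t)**3
print(bk(s, 1, 2*math.pi), bk(s, 3, 2*math.pi))  # ≈ 0.75 and -0.25
```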
15.2
Describing functions
The describing function for a static nonlinearity is an approximate description of its frequency
response. Suppose φ : IR → IR is a piecewise continuous function and consider the signal
    s(t) = φ(a sin ωt)
for a, ω > 0. This signal is the output of the nonlinear system defined by φ and subject to input a sin ωt.
The signal s is piecewise continuous and periodic with period T = 2π/ω. Hence it has a Fourier series expansion and its Fourier coefficients are given by
    a₀(a) = (1/T) ∫_{−T/2}^{T/2} φ(a sin ωt) dt = (1/2π) ∫_{−π}^{π} φ(a sin θ) dθ    (θ = ωt = 2πt/T)
and for k = 1, 2, …,
    aₖ(a) = (1/π) ∫_{−π}^{π} φ(a sin θ) cos kθ dθ    and    bₖ(a) = (1/π) ∫_{−π}^{π} φ(a sin θ) sin kθ dθ.
Note that these coefficients depend on the amplitude a but not on the frequency ω. When φ is odd, aₖ(a) = 0 for k = 0, 1, 2, … and the describing function of φ is defined by
    N(a) = b₁(a)/a = (1/πa) ∫_{−π}^{π} φ(a sin θ) sin θ dθ.
Basically, N(a) is an approximate system gain for the nonlinear system when subject to a
sinusoidal input of amplitude a. Notice that, unlike a dynamic system, this gain is independent of the frequency of the input signal. For a linear system, the gain is independent of
amplitude.
For φ(y) = y³ (using Example 150),
    N(a) = b₁(a)/a = 3a²/4.
For the signum nonlinearity
    φ(y) = −1 if y < 0,    0 if y = 0,    1 if y > 0,
a direct computation gives N(a) = 4/(πa).
Example 153
    φ(y) = −1 if y < −e,    0 if −e ≤ y ≤ e,    1 if y > e    (15.3)
where e is some positive real number. This is an odd function. Clearly, N(a) = 0 if a ≤ e. If a > e, let θₑ be the unique number between 0 and π/2 satisfying θₑ = arcsin(e/a), that is, a sin θₑ = e. Then,
    ∫_{−π}^{π} φ(a sin θ) sin θ dθ = 4 cos θₑ.
Since sin θₑ = e/a and 0 ≤ θₑ < π/2, we have cos θₑ = √(1 − (e/a)²). Thus
    b₁(a) = (4/π) √(a² − e²)/a.
Hence,
    N(a) = 0 if a ≤ e;    N(a) = (4/(π a²)) √(a² − e²) if a > e.    (15.4)
One can readily show that the maximum value of N(a) is 2/(πe) and this occurs at a = √2 e.
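A numerical check of (15.4) at the maximizing amplitude (plain Python; the value e = 0.5 and the quadrature resolution are arbitrary illustrative choices):

```python
import math

e = 0.5  # deadzone half-width (illustrative)

def phi(y):
    # relay with deadzone, as in (15.3)
    return math.copysign(1.0, y) if abs(y) > e else 0.0

def N_numeric(a, n=100000):
    # N(a) = (1/(pi a)) * integral over [-pi, pi] of phi(a sin t) sin t dt
    h = 2*math.pi/n
    total = sum(phi(a*math.sin(-math.pi + (i + 0.5)*h))*math.sin(-math.pi + (i + 0.5)*h)
                for i in range(n))*h
    return total/(math.pi*a)

a = math.sqrt(2)*e   # amplitude at which N is maximized
print(N_numeric(a), 2/(math.pi*e))  # both ≈ 4/pi ≈ 1.2732 for e = 0.5
```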
Complex describing functions. If the function φ is not odd, then its describing function may be complex valued; see [1] for an example. Assuming a₀(a) = 0, in this case, we let
    N(a) = b₁(a)/a + j a₁(a)/a    (15.5)
where
    a₁(a) = (1/π) ∫_{−π}^{π} φ(a sin θ) cos θ dθ    and    b₁(a) = (1/π) ∫_{−π}^{π} φ(a sin θ) sin θ dθ.
Note that
    N(a) = |N(a)| e^{jψ(a)}
where
    |N(a)| = √(a₁(a)² + b₁(a)²)/a
and ψ(a) is the unique angle in [0, 2π) given by
    cos(ψ(a)) = b₁(a)/√(a₁(a)² + b₁(a)²),    sin(ψ(a)) = a₁(a)/√(a₁(a)² + b₁(a)²).
Here, the approximate output of the nonlinearity due to input a sin(ωt) is given by
    s(t) ≈ a₁(a) cos(ωt) + b₁(a) sin(ωt)
         = |N(a)| a [sin(ψ(a)) cos(ωt) + cos(ψ(a)) sin(ωt)]
         = |N(a)| a sin(ωt + ψ(a)),
that is,
    s(t) ≈ |N(a)| a sin(ωt + ψ(a))
where ψ(a) is the argument of N(a). Notice the phase shift in s due to ψ(a).
For the saturation nonlinearity
    sat(y) = −1 if y < −1,    y if −1 ≤ y ≤ 1,    1 if y > 1,
one obtains
    N(a) = 1 if 0 ≤ a ≤ 1;    N(a) = (2/π)[sin⁻¹(1/a) + √(a² − 1)/a²] if a > 1.    (15.6)
15.3
Recall the nonlinear system described in (15.1). Suppose we are looking for a periodic solution for y which can be approximately described by
    y(t) ≈ a sin(ωt)
where a, ω > 0.
Real describing functions. Suppose φ is odd. Then
    u(t) = −φ(a sin(ωt)) ≈ −Σ_{k=1}^∞ bₖ(a) sin(kωt)
where
    bₖ(a) = (1/π) ∫_{−π}^{π} φ(a sin θ) sin(kθ) dθ.
So,
    u(t) ≈ −b₁(a) sin(ωt) = −N(a)a sin(ωt)
where N is the describing function for φ. Since u(t) ≈ −N(a)a sin(ωt), y(t) ≈ a sin(ωt) and ŷ = Ĝ û, it follows that
    Ĝ(jω) ≈ −1/N(a).
So, if for some pair a, ω > 0, the condition
    1 + Ĝ(jω)N(a) = 0    (15.7)
is satisfied, it is likely that the nonlinear system will have a periodic solution with
    y(t) ≈ a sin ωt.
When N(a) is real the above condition can be expressed as
    Im(Ĝ(jω)) = 0    (15.8a)
    1 + Re(Ĝ(jω))N(a) = 0.    (15.8b)
The first condition in (15.8) simply states that the imaginary part of Ĝ(jω) is zero, that is, Ĝ(jω) is real. This condition is independent of a and can be solved for values of ω. The values of a are then determined by the second condition in (15.8).
Example 155
    ÿ + y³ = 0
Here
    Ĝ(s) = 1/s²    and    N(a) = 3a²/4.
Since Ĝ(jω) = −1/ω², condition (15.7) yields 1 − 3a²/(4ω²) = 0, that is,
    ω = √3 a/2.
So the method predicts a periodic solution of amplitude a and frequency √3 a/2 for every a > 0.
Example 156 Consider the nonlinear system,
    ẋ₁ = x₂
    ẋ₂ = −4x₁ − 5x₂ − sgm(x₁ − x₂)
where sgm denotes the signum function. This can be represented as the LTI system
    ẋ₁ = x₂
    ẋ₂ = −4x₁ − 5x₂ + u
    y = x₁ − x₂
subject to negative feedback from the memoryless nonlinearity u = −sgm(y). Hence
    Ĝ(s) = (−s + 1)/(s² + 5s + 4)
and condition (15.7) yields
    −ω² + 5jω + 4 + (−jω + 1)N(a) = 0,
that is,
    4 − ω² + N(a) = 0    and    ω(5 − N(a)) = 0,
or
    4 − ω² + N(a) = 0,    5 − N(a) = 0.
Hence ω = 3 and N(a) = 5. Since N(a) = 4/(πa) for the signum function, the predicted limit cycle has
    a = 4/(5π) = 0.2546,    T = 2π/ω = 2.0944.
The y-response of this system subject to initial conditions x₁(0) = 0.5 and x₂(0) = 0 is illustrated in Figure 15.3.
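The prediction can be compared with a crude simulation of the relay system (plain Python, forward Euler; the step size and transient cutoff are arbitrary choices, and the measured values are only expected to be near the describing-function estimates, which are themselves approximate):

```python
import math

# Forward-Euler simulation of Example 156:
#   xdot1 = x2,  xdot2 = -4 x1 - 5 x2 - sgm(x1 - x2)
dt = 1e-4
x1, x2, t = 0.5, 0.0, 0.0
peaks = []                       # (time, value) of local maxima of y = x1 - x2
prev_y, rising = x1 - x2, False
while t < 40.0:
    y = x1 - x2
    u = math.copysign(1.0, y) if y != 0 else 0.0
    x1, x2 = x1 + dt*x2, x2 + dt*(-4*x1 - 5*x2 - u)
    t += dt
    yn = x1 - x2
    if t > 20.0:                 # skip transients
        if yn > prev_y:
            rising = True
        elif rising and yn < prev_y:
            peaks.append((t, prev_y))
            rising = False
    prev_y = yn
period = peaks[-1][0] - peaks[-2][0]
amp = peaks[-1][1]
print(amp, period)  # compare with the predictions a ≈ 0.2546, T ≈ 2.0944
```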
Example 157 Consider the system of the previous example with the signum function replaced with an approximation as described in Example 153, that is,
    ẋ₁ = x₂
    ẋ₂ = −4x₁ − 5x₂ − φ(x₁ − x₂)
where φ is given by (15.3) for some e > 0. Proceeding as in the previous example, the describing function method still gives the following conditions for a limit cycle:
    ω = 3    and    N(a) = 5.
Recalling the expression for N(a) in (15.4), we must have a > e and
    (4/(π a²)) √(a² − e²) = 5.
Solving this for a² yields two solutions:
    a² = (8 ± 4√(4 − (5πe)²)) / (5π)².
These solutions exist only if e ≤ 2/(5π) = 0.1273; for larger e, the maximum value 2/(πe) of N(a) is less than 5 and the method predicts no limit cycle.
Complex describing functions. We still have the criterion (15.7) for a limit cycle, that
is,
1 + G()N(a)
=0
However, we cannot usually simplify things as in (15.8).
Example 158 (Van der Pol oscillator) Here we demonstrate a generalization of the method. Consider
    ÿ + ε(y² − 1)ẏ + y = 0.
We can describe this system by
    ÿ − εẏ + y = u,    u = −εy²ẏ.
Here
    Ĝ(s) = 1/(s² − εs + 1).
If y(t) = a sin ωt then
    εy²ẏ = εa³ω sin²(ωt) cos(ωt) = (εa³ω/4)[cos(ωt) − cos(3ωt)]
         ≈ (εa³ω/4) cos(ωt) = (εa³ω/4) sin(ωt + π/2).
So here
    N(a, ω) = (εa²ω/4) e^{jπ/2} = jεa²ω/4.
Solving 1 + Ĝ(jω)N(a, ω) = 0, that is, 1 − ω² − jεω + jεωa²/4 = 0, yields
    ω = 1,    a = 2.
15.3.1
When the describing function of the nonlinearity is real, the approximate frequencies at which a limit cycle occurs are a subset of those frequencies for which Ĝ(jω) is real. Here is a method of computing such frequencies for rational transfer functions Ĝ.
Suppose Ĝ(s) = C(sI − A)⁻¹B is a scalar transfer function with real A, B, C and A has no imaginary eigenvalues. Noting that
    (jωI − A)⁻¹ = (−jωI − A)(−jωI − A)⁻¹(jωI − A)⁻¹
                = (−jωI − A)(ω²I + A²)⁻¹
                = −A(ω²I + A²)⁻¹ − jω(ω²I + A²)⁻¹,
we obtain that
    Ĝ(jω) = −CA(ω²I + A²)⁻¹B − jωC(ω²I + A²)⁻¹B.    (15.9)
Hence the imaginary part of Ĝ(jω) is
    Im Ĝ(jω) = −ωC(ω²I + A²)⁻¹B.    (15.10)
To determine the nonzero values (if any) of ω for which the imaginary part of Ĝ(jω) is zero, we note that
    det [ ω²I + A²   B ]  =  det(ω²I + A²) det(−C(ω²I + A²)⁻¹B)
        [ C          0 ]
                          =  −det(ω²I + A²) C(ω²I + A²)⁻¹B.
Introducing the matrix pencil given by
    P(λ) = [ A² + λI   B ]  =  [ A²   B ] + λ [ I   0 ]    (15.11)
           [ C         0 ]     [ C    0 ]     [ 0   0 ]
we see that the determinant above equals det P(ω²). Since it is assumed that A does not have any purely imaginary eigenvalues, A² does not have any negative real eigenvalues. This implies that det(ω²I + A²) is non-zero for all ω. Hence, for nonzero ω,
    Im Ĝ(jω) = 0    if and only if    det P(ω²) = 0.    (15.12)
The values of λ for which det P(λ) = 0 are the finite generalized eigenvalues of the matrix pencil P. You can use the Matlab command eig to compute generalized eigenvalues. Thus we have the following conclusion:
    Ĝ(jω) is real if and only if ω = √λ where λ is a positive, real, finite generalized eigenvalue of the matrix pencil P.
Example 159 For Example 156 we have
    A = [ 0   1 ],    B = [ 0 ],    C = [ 1  −1 ].
        [ −4  −5 ]        [ 1 ]
Hence
    P(λ) = [ A² + λI   B ]  =  [ −4+λ   −5     0 ]
           [ C         0 ]     [ 20     21+λ   1 ]
                               [ 1      −1     0 ]
and det P(λ) = λ − 9, which has the single root λ = 9, positive and real. Thus, there is a single nonzero value of ω for which Ĝ(jω) is real and this is given by ω = 3.
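Since det P(λ) is affine in λ for this example, the generalized eigenvalue can be found without any eigenvalue solver; a plain-Python check:

```python
# Checking the pencil computation of Example 159 without external libraries:
# for P(lam) = [[A^2 + lam*I, B], [C, 0]], det P is affine in lam here.

def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

A2 = [[-4.0, -5.0], [20.0, 21.0]]   # A^2 for A = [[0, 1], [-4, -5]]
B, C = [0.0, 1.0], [1.0, -1.0]

def detP(lam):
    return det3([[A2[0][0] + lam, A2[0][1], B[0]],
                 [A2[1][0], A2[1][1] + lam, B[1]],
                 [C[0], C[1], 0.0]])

# root of the affine function det P gives lam = omega^2
lam = -detP(0.0)/(detP(1.0) - detP(0.0))
print(lam, lam**0.5)  # → 9.0 and omega = 3.0
```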
15.4
Exercises
where
    u = −k_P q − k_I ∫₀ᵗ q(τ) dτ − k_D q̇.
(a) For k_P = 1 and k_D = 2 determine the largest value of k_I ≥ 0 for which the closed loop system is stable about q(t) ≡ 0.
(b) For k_P = 1 and k_D = 2, use the describing function method to determine the smallest value of k_I ≥ 0 for which the closed loop system has a periodic solution.
Bibliography
[1] Boniolo, I., Bolzern, P., Colaneri, P., Corless, M., Shorten, R., Limit Cycles in Switching Capacitor Systems: A Lur'e Approach.
Chapter 16
Aerospace/Mechanical Systems
16.1
Aerospace/Mechanical systems
Let the real scalar t represent time. At each instant of time, the configuration of the
aerospace/mechanical systems under consideration can be described by a real N-vector q(t).
We call this the vector of generalized coordinates; each component is usually an angle, a
length, or a displacement. It is assumed that there are no constraints, either holonomic or
non-holonomic, on q. So, N is the number of degrees-of-freedom of the system. We let the
real scalar T represent the kinetic energy of the system. It is usually given by
    T = (1/2) q̇ᵀ M(q) q̇ = (1/2) Σ_{j=1}^{N} Σ_{k=1}^{N} Mⱼₖ(q) q̇ⱼ q̇ₖ
where the symmetric N × N matrix M(q) is called the system mass matrix. Usually, it
satisfies
M(q) > 0
for all q
The real N-vector Q is the sum of the generalized forces acting on the system. It includes
conservative and non-conservative forces. It can depend on t, q, and q̇.
Here
    T = (1/2)m₁q̇₁² + (1/2)m₂q̇₂²
so
    M = [ m₁  0  ]
        [ 0   m₂ ].
For the second example,
    M = [ m  0   ]    and    Q = [ F ]
        [ 0  mr² ]               [ 0 ].
16.2
Equations of motion
Lagrange's equations:
    d/dt (∂T/∂q̇ᵢ) − ∂T/∂qᵢ = Qᵢ    for i = 1, 2, …, N.
In vector form:
    d/dt (∂T/∂q̇) − ∂T/∂q = Q.
The equations of motion can be written as
    M(q)q̈ + c(q, q̇) = Q    (16.2)
where the i-th component of c(q, q̇) is given by
    cᵢ = Σ_{j=1}^{N} Σ_{k=1}^{N} ( ∂Mᵢⱼ/∂qₖ − (1/2) ∂Mⱼₖ/∂qᵢ ) q̇ⱼ q̇ₖ.
Proof. Since
    T = (1/2) q̇ᵀ M(q) q̇,
it follows that
    ∂T/∂q̇ = M(q)q̇.
Hence
    d/dt (∂T/∂q̇) = M(q)q̈ + Ṁ q̇.
Since the equations of motion are d/dt(∂T/∂q̇) − ∂T/∂q = Q, it follows that
    c = Ṁ q̇ − ∂T/∂q.    (16.3)
Since
    T = (1/2) Σ_{j=1}^{N} Σ_{k=1}^{N} Mⱼₖ(q) q̇ⱼ q̇ₖ,
we have
    ∂T/∂qᵢ = (1/2) Σ_{j=1}^{N} Σ_{k=1}^{N} (∂Mⱼₖ/∂qᵢ) q̇ⱼ q̇ₖ.    (16.4)
Noting that
    (Ṁ q̇)ᵢ = Σ_{j=1}^{N} Ṁᵢⱼ q̇ⱼ = Σ_{j=1}^{N} Σ_{k=1}^{N} (∂Mᵢⱼ/∂qₖ) q̇ₖ q̇ⱼ,
we have
    cᵢ = (Ṁ q̇)ᵢ − ∂T/∂qᵢ = Σ_{j=1}^{N} Σ_{k=1}^{N} ( ∂Mᵢⱼ/∂qₖ − (1/2) ∂Mⱼₖ/∂qᵢ ) q̇ⱼ q̇ₖ.    (16.5)
16.3
16.3.1
An energy result
Here we show that
    q̇ᵀ (c − (1/2) Ṁ q̇) = 0.    (16.6)
First note that, using (16.3),
    q̇ᵀ (c − (1/2) Ṁ q̇) = q̇ᵀ Ṁ q̇ − q̇ᵀ (∂T/∂q)ᵀ − (1/2) q̇ᵀ Ṁ q̇
                        = (1/2) q̇ᵀ Ṁ q̇ − q̇ᵀ (∂T/∂q)ᵀ.    (16.7)
Using (16.4),
    q̇ᵀ (∂T/∂q)ᵀ = Σ_{i=1}^{N} (∂T/∂qᵢ) q̇ᵢ = (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k=1}^{N} (∂Mⱼₖ/∂qᵢ) q̇ᵢ q̇ⱼ q̇ₖ.    (16.8)
Using (16.5),
    (1/2) q̇ᵀ Ṁ q̇ = (1/2) Σ_{i=1}^{N} q̇ᵢ (Ṁ q̇)ᵢ = (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k=1}^{N} (∂Mᵢⱼ/∂qₖ) q̇ᵢ q̇ⱼ q̇ₖ.
By interchanging the indices i and k and using the symmetry of M, we obtain that
    (1/2) q̇ᵀ Ṁ q̇ = (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k=1}^{N} (∂Mⱼₖ/∂qᵢ) q̇ᵢ q̇ⱼ q̇ₖ.    (16.9)
Substituting (16.8) and (16.9) into (16.7) yields
    q̇ᵀ (c − (1/2) Ṁ q̇) = 0.
16.3.2
Since T = (1/2) q̇ᵀ M(q) q̇, the rate of change of kinetic energy along any motion is
    dT/dt = q̇ᵀ M q̈ + (1/2) q̇ᵀ Ṁ q̇
          = q̇ᵀ (−c + Q) + (1/2) q̇ᵀ Ṁ q̇ = q̇ᵀ Q − q̇ᵀ (c − (1/2) Ṁ q̇)
          = q̇ᵀ Q.
Example. Here Q = −(∂U/∂q)ᵀ where U = W l(cos θ − 1).
Example. Here Q = −(∂U/∂q)ᵀ + [F  0]ᵀ where U = −μm/r; the corresponding conservative (gravitational) force is −∂U/∂r = −μm/r².
209
Suppose one splits the generalized force vector Q into a conservative piece K and a non-conservative piece −D, thus
    Q = K(q) − D(q, q̇)    (16.11)
where K is the sum of the conservative forces and −D is the sum of the non-conservative forces. Let U be the total potential energy associated with the conservative forces; then
    K = −(∂U/∂q)ᵀ.    (16.12)
Introducing the Lagrangian
    L = T − U    (16.13)
the equations of motion can be written as
    d/dt (∂L/∂q̇) − ∂L/∂q + D = 0.
In particular, if all the forces are conservative, we have
    d/dt (∂L/∂q̇) − ∂L/∂q = 0.
Introducing the total mechanical energy
    E = T + U    (16.14)
we obtain that
    dE/dt = −q̇ᵀ D,    (16.15)
that is, the time rate of change of the total energy equals the power of the non-conservative forces. To see this:
    dE/dt = dT/dt + dU/dt
          = q̇ᵀ Q + (∂U/∂q) q̇
          = q̇ᵀ (−(∂U/∂q)ᵀ − D(q, q̇)) + (∂U/∂q) q̇
          = −q̇ᵀ D(q, q̇).
In particular, if all the forces are conservative, we have
    dE/dt = 0;    (16.16)
thus E is constant, that is, energy is conserved.
16.3.3
Here we show that c can be expressed as c = C(q, q̇)q̇ where Ṁ − 2C is skew-symmetric. Using (16.3),
    c = Ṁ q̇ − (∂T/∂q)ᵀ = (1/2) Ṁ q̇ + ((1/2) Ṁ q̇ − (∂T/∂q)ᵀ).    (16.17)
From (16.5),
    (Ṁ q̇)ᵢ = Σ_{j=1}^{N} Σ_{k=1}^{N} (∂Mᵢⱼ/∂qₖ) q̇ⱼ q̇ₖ = Σ_{j=1}^{N} Σ_{k=1}^{N} (∂Mᵢₖ/∂qⱼ) q̇ⱼ q̇ₖ
and, using (16.4),
    (1/2)(Ṁ q̇)ᵢ − ∂T/∂qᵢ = (1/2) Σ_{j=1}^{N} Σ_{k=1}^{N} ( ∂Mᵢₖ/∂qⱼ − ∂Mⱼₖ/∂qᵢ ) q̇ⱼ q̇ₖ = −(1/2) Σ_{j=1}^{N} Sᵢⱼ q̇ⱼ
where
    Sᵢⱼ := Σ_{k=1}^{N} ( ∂Mⱼₖ/∂qᵢ − ∂Mᵢₖ/∂qⱼ ) q̇ₖ.    (16.18)
Hence
    (1/2) Ṁ q̇ − (∂T/∂q)ᵀ = −(1/2) S q̇
and
    c = (1/2)(Ṁ − S) q̇    (16.19)
with S skew-symmetric.
It follows that c = C(q, q̇)q̇ where Ṁ − 2C is skew-symmetric.
Proof. Let
    C = (1/2)(Ṁ − S)
where S is skew-symmetric as given by the previous result. Then c = C q̇ and
    Ṁ − 2C = S.    (16.20)
In component form,
    Cᵢⱼ = (1/2) Σ_{k=1}^{N} ( ∂Mᵢⱼ/∂qₖ + ∂Mᵢₖ/∂qⱼ − ∂Mⱼₖ/∂qᵢ ) q̇ₖ.    (16.22)
16.4
Consider a mechanical system
    M(q)q̈ + c(q, q̇) = Q(q, q̇)    (16.23)
with an equilibrium solution q(t) ≡ qᵉ, that is, Q(qᵉ, 0) = 0. Linearizing about this solution yields
    M̄ q̈ + D̄ q̇ + K̄ q = 0    (16.24)
where
    M̄ = M(qᵉ),    D̄ = −(∂Q/∂q̇)(qᵉ, 0),    K̄ = −(∂Q/∂q)(qᵉ, 0).
Note that the c term does not contribute anything to the linearization; the contribution of M(q)q̈ is simply M(qᵉ)q̈.
We now have the following stability result.
Theorem 38 Consider the linearization (16.24) of system (16.23) about q(t) ≡ qᵉ and suppose that M̄ and K̄ are symmetric and positive definite. Suppose also that for all q̇,
    q̇ᵀ D̄ q̇ ≥ 0
and
    q̇(t)ᵀ D̄ q̇(t) ≡ 0  ⟹  q̇(t) ≡ 0.
Then the nonlinear system (16.23) is asymptotically stable about the solution q(t) ≡ qᵉ.
Proof. As a candidate Lyapunov function for the linearized system (16.24), consider
    V(q, q̇) = (1/2) q̇ᵀ M̄ q̇ + (1/2) qᵀ K̄ q.
Example 167
    m₁q̈₁ + dq̇₁ + (k₁ + k₂)q₁ − k₂q₂ = 0
    m₂q̈₂ − k₂q₁ + k₂q₂ = 0
Now consider
    m q̈₁ + dq̇₁ − dq̇₂ + (k₁ + k₂)q₁ − k₂q₂ = 0
    m q̈₂ − dq̇₁ + dq̇₂ − k₂q₁ + (k₁ + k₂)q₂ = 0.
This system is not asymptotically stable.
16.5
Consider now a nonlinear mechanical system described by
    M(q)q̈ + c(q, q̇) + D(q, q̇) + K(q) = 0.    (16.25)
Assumption 1 There is a vector qᵉ such that
    (i)  K(qᵉ) = 0    and    K(q) ≠ 0 for q ≠ qᵉ
    (ii) D(q, 0) = 0    for all q.
This first assumption guarantees that q(t) ≡ qᵉ is a unique equilibrium solution of the above system. The next assumption requires that the generalized force term K(q) be due to potential forces and that the total potential energy associated with these forces is a positive definite function.
Assumption 2 There is a radially unbounded function U, positive definite about qᵉ, such that
    K(q) = (∂U/∂q)(q)ᵀ    for all q.
The next assumption requires that the generalized force term D(q, q̇) be due to damping forces and that these forces dissipate energy along every solution except constant solutions.
Assumption 3 For all q and q̇,
    q̇ᵀ D(q, q̇) ≥ 0,
and, along any solution,
    q̇(t)ᵀ D(q(t), q̇(t)) ≡ 0    implies    q̇(t) ≡ 0.
Theorem 39 Suppose the mechanical system (16.25) satisfies assumptions 1-3. Then the
system is GUAS about the equilibrium solution q(t) q e .
Proof Consider the total mechanical energy (kinetic + potential)
    V = (1/2) q̇ᵀ M(q) q̇ + U(q)
as a candidate Lyapunov function and use a LaSalle type result.
Example 169
    I₁q̈₁ + dq̇₁ + (k₁ + k₂)q₁ − k₂q₂ = 0
    I₂q̈₂ − k₂q₁ + k₂q₂ − W l sin(q₂) = 0
where all parameters are positive and
    W l < k₁k₂/(k₁ + k₂) = 1/(1/k₁ + 1/k₂).
16.6
Here we consider control of mechanical systems described by
    M(q)q̈ + c(q, q̇) = F(q, q̇) + u    (16.26)
where q(t) and the control input u(t) are real N-vectors. So, Q = F + u. The salient feature of the systems considered here is that the number of scalar control inputs is the same as the number of degrees of freedom of the system.
16.6.1
This is a special case of the more general technique of feedback linearization. Suppose we let
    u = c(q, q̇) − F(q, q̇) + M(q)v    (16.27)
where v is to be regarded as a new control input. Then, M(q)q̈ = M(q)v and since M(q) is nonsingular, we obtain
    q̈ = v.    (16.28)
This can also be expressed as a bunch of N decoupled double integrators:
    q̈ᵢ = vᵢ    for i = 1, 2, …, N.
Now use YFLTI (your favorite linear time invariant) control design method to design controllers for (16.28).
Disadvantages: Since this method requires exact cancellation of F and exact knowledge of M, it may not be robust with respect to uncertainty in M and F.
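A minimal numerical sketch of control law (16.27) for a single-link manipulator (plain Python, forward Euler; the inertia, gravity term, and PD gains are illustrative choices):

```python
import math

# Sketch of (16.27) for the single-link manipulator
#   inertia*qddot - Wl*sin(q) = u     (M = inertia, c = 0, F = Wl*sin(q))
# u = -F + M*v reduces the plant to qddot = v; v is then a PD law for the
# resulting double integrator.
inertia, Wl = 1.0, 2.0        # illustrative parameters
kp, kd = 4.0, 4.0             # closed loop: s^2 + 4 s + 4 (critically damped)

q, qd, dt = 2.0, 0.0, 1e-3
for _ in range(20000):        # 20 seconds of forward-Euler integration
    v = -kp*q - kd*qd
    u = -Wl*math.sin(q) + inertia*v
    qdd = (Wl*math.sin(q) + u)/inertia    # plant dynamics
    q, qd = q + dt*qd, qd + dt*qdd
print(abs(q) + abs(qd))       # ≈ 0: the origin is reached
```

Note the cancellation: if the controller used a wrong value of Wl, the closed loop would no longer be a pure double integrator, which is exactly the robustness concern mentioned above.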
16.6.2
Linearization
Let uᵉ be the constant control input that sustains the equilibrium qᵉ, that is, F(qᵉ, 0) + uᵉ = 0. Linearizing (16.26) about (qᵉ, uᵉ) yields
    M̄ q̈ + D̄ q̇ + K̄ q = ũ    (16.29)
where
    M̄ = M(qᵉ),    D̄ = −(∂F/∂q̇)(qᵉ, 0),    K̄ = −(∂F/∂q)(qᵉ, 0).
Now use YFLTI control design method to design controllers for ũ and let
    u = uᵉ + ũ.    (16.30)
16.6.3
Suppose the uncontrolled generalized forces split as
    F(q, q̇) = −D(q, q̇) − K(q).    (16.31)
Assumption 4 There is an N-vector qᵉ such that
    K(qᵉ) = 0    and    D(q, 0) = 0    for all q.
This first assumption guarantees that q(t) ≡ qᵉ is an equilibrium solution of the uncontrolled (u = 0) system. It does not rule out the possibility of other equilibrium solutions.
Assumption 5 There is a function U and a scalar γ such that for all q,
    K(q) = (∂U/∂q)(q)ᵀ    and    (∂K/∂q)(q) ≥ −γI.
Assumption 6 For all q and q̇,
    q̇ᵀ D(q, q̇) ≥ 0.
PD controllers. Consider
    u = −K_p(q − qᵉ) − K_d q̇    (16.32a)
with
    K_p = K_pᵀ > γI,    K_d + K_dᵀ > 0.    (16.32b)
Theorem 40 Consider a system described by (16.31) which satisfies assumptions 4-6 and is subject to the PD controller (16.32). Then the resulting closed loop system is GUAS about q(t) ≡ qᵉ.
Proof. We show that the closed loop system
    M(q)q̈ + c(q, q̇) + D(q, q̇) + K_d q̇ + K(q) + K_p(q − qᵉ) = 0
satisfies the hypotheses of Theorem 39.
Example 170 Single link manipulator with drag.
    I θ̈ + d|θ̇|θ̇ − W l sin θ = u    (16.33)
with d ≥ 0. Here q = θ,
    K(θ) = −W l sin θ    and    D(θ, θ̇) = d|θ̇|θ̇.
Since
    (∂K/∂q)(q) = −W l cos q ≥ −W l,
Assumption 5 holds with γ = W l, so any PD gains with k_p > W l and k_d > 0 work.
[Figure: closed loop responses q₁(t) over 0–20 sec for initial conditions q₁(0) = 1, q̇₁(0) = 0 and q₁(0) = 0, q̇₁(0) = 1.]
Exercise 57 Consider the two link manipulator with two control input torques, u₁ and u₂:
    [m₁l_{c1}² + m₂l₁² + I₁] q̈₁ + [m₂l₁l_{c2} cos(q₁ − q₂)] q̈₂ + ⋯ = u₁
    [m₂l₁l_{c2} cos(q₁ − q₂)] q̈₁ + [m₂l_{c2}² + I₂] q̈₂ + ⋯ = u₂
Using each of the 3 control design methods of this section, obtain controllers which asymptotically stabilize this system about the upward vertical configuration. Design your controllers using the following data:
    m₁ = 10 kg,   l₁ = 1 m,   l_{c1} = 0.5 m,   I₁ = 10/12 kg m²
    m₂ = 10 kg,   l₂ = 1 m,   l_{c2} = 0.5 m,   I₂ = 5/12 kg m²
Chapter 17
Quadratic stabilizability
17.1
Preliminaries
Here we consider the problem of designing stabilizing state feedback controllers for specific
classes of nonlinear systems. Controller design is based on quadratic Lyapunov functions.
First recall that a system
x = f (t, x)
(17.1)
is quadratically stable if there exist positive definite symmetric matrices P and Q which satisfy
    2xᵀP f(t, x) ≤ −2xᵀQx    (17.2)
for all t and x. When this is the case, the system is globally uniformly exponentially stable (GUES) about the origin and the scalar α = λ_min(P⁻¹Q) is a rate of exponential convergence. Also, we call P a Lyapunov matrix for (17.1).
Consider now a system with control input u where u(t) is an m-vector:
    ẋ = F(t, x, u).
(17.3)
(17.4)
(17.5)
(17.6)
17.2
Consider a system described by
    ẋ = A(t, x)x + B(t, x)u    (17.9)
where the real n-vector x(t) is the state and the real m-vector u(t) is the control input. The following observation is sometimes useful in reducing control design problems to the solution of linear matrix inequalities (LMIs).
Suppose there exist a matrix L and positive definite symmetric matrices S and Q̄ which satisfy
    A(t, x)S + B(t, x)L + SA(t, x)ᵀ + LᵀB(t, x)ᵀ ≤ −Q̄    (17.10)
for all t and x. Then the linear state feedback controller
    u = Kx,    K = LS⁻¹    (17.11)
renders system (17.9) GUES about the origin with Lyapunov matrix P = S⁻¹.
To see this, suppose (17.10) holds and let P = S⁻¹ and K = LS⁻¹. Pre- and post-multiply inequality (17.10) by S⁻¹ to obtain
    P[A(t, x) + B(t, x)K] + [A(t, x) + B(t, x)K]ᵀP ≤ −P Q̄ P.    (17.12)
Now notice that the closed loop system resulting from controller (17.11) is described by
    ẋ = [A(t, x) + B(t, x)K]x,    (17.13)
that is, it is described by (17.1) with f(t, x) = [A(t, x) + B(t, x)K]x. Thus, using inequality (17.12), the closed loop system satisfies
    2xᵀP f(t, x) = 2xᵀP[A(t, x) + B(t, x)K]x
                 = xᵀ( P[A(t, x) + B(t, x)K] + [A(t, x) + B(t, x)K]ᵀP )x
                 ≤ −xᵀ P Q̄ P x.
Thus, the closed loop system satisfies inequality (17.6) with P Q̄ P replacing Q. Therefore, the closed loop system (17.13) is GUES with Lyapunov matrix P = S⁻¹.
17.2.1
We first consider systems described by (17.9) where the time/state dependent matrices
A(t, x) and B(t, x) have the following structure:
A(t, x) = A0 + δ(t, x)ΔA ,   B(t, x) = B0 + δ(t, x)ΔB         (17.14)

The matrices A0 , B0 and ΔA, ΔB are constant and δ is a scalar valued function of time t and the state x which is bounded above and below, that is,

a ≤ δ(t, x) ≤ b                                               (17.15)

for some constants a and b. Examples of such functions are sin x, cos x, e^(−x²) and sin t.
Example 171 (Single-link manipulator) A single-link manipulator (or inverted pendulum) subject to a control torque u is described by
J θ̈ − W l sin θ = u

with J, W l > 0. Letting x1 = θ and x2 = θ̇, this system has the following state space description:

ẋ1 = x2                                                       (17.16a)
J ẋ2 = W l sin x1 + u                                         (17.16b)

This description is given by (17.9) and (17.14) with

A0 = [ 0 1 ; 0 0 ] ,   ΔA = [ 0 0 ; 1 0 ] ,   B0 = [ 0 ; 1/J ] ,   ΔB = 0

and

δ(t, x) = (W l/J) sin(x1 )/x1 .

Since | sin x1 | ≤ |x1 |, we have

−W l/J ≤ δ(t, x) ≤ W l/J .

Suppose there exist a matrix L and a positive definite symmetric matrix S which satisfy

A1 S + B1 L + SA1' + L'B1' < 0                                (17.17a)
A2 S + B2 L + SA2' + L'B2' < 0                                (17.17b)
with

A1 := A0 + aΔA ,   B1 := B0 + aΔB                             (17.18a)
A2 := A0 + bΔA ,   B2 := B0 + bΔB .                           (17.18b)
Then the controller given by (17.11) renders system (17.9),(17.14) globally uniformly exponentially stable about the origin with Lyapunov matrix S 1 .
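As a numerical sanity check of the vertex idea (not the LMI synthesis itself), one can verify that a candidate gain stabilizes both vertex systems with a single Lyapunov matrix. The gain K below is a hypothetical choice for the manipulator data with J = W l = 1, so a = −1 and b = 1:

```python
import numpy as np
from scipy.linalg import solve_lyapunov

# Manipulator data with J = Wl = 1.
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
DelA = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
A1, A2 = A0 - DelA, A0 + DelA          # vertex A-matrices, as in (17.18)

K = np.array([[-10.0, -5.0]])          # hypothetical stabilizing gain
Acl1, Acl2 = A1 + B @ K, A2 + B @ K    # closed loop vertex matrices

# Get a candidate Lyapunov matrix P from the "average" vertex...
Acl0 = 0.5 * (Acl1 + Acl2)
P = solve_lyapunov(Acl0.T, -np.eye(2))

# ...and check that the SAME P works at both vertices.
M1 = Acl1.T @ P + P @ Acl1
M2 = Acl2.T @ P + P @ Acl2
```

Since the closed loop matrix depends affinely on δ, negativity of M1 and M2 implies quadratic stability for every δ in between.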
Proof. Suppose S and L satisfy inequalities (17.17) and let Q̃ be any positive definite matrix which satisfies

A1 S + B1 L + SA1' + L'B1' ≤ −Q̃                               (17.19a)
A2 S + B2 L + SA2' + L'B2' ≤ −Q̃ .                             (17.19b)

Let

N (t, x) = A(t, x)S + B(t, x)L + SA(t, x)' + L'B(t, x)'

and

N0 = A0 S + B0 L + SA0' + L'B0' ,   ΔN = ΔA S + ΔB L + SΔA' + L'ΔB' ,

so that N (t, x) = N0 + δ(t, x)ΔN . Since N (t, x) depends affinely on δ(t, x) and a ≤ δ(t, x) ≤ b, we have N (t, x) ≤ −Q̃ for all t and x if

N0 + aΔN ≤ −Q̃                                                (17.20a)
N0 + bΔN ≤ −Q̃                                                (17.20b)

that is, if (17.19) holds, where A1 , B1 , A2 , B2 are given by (17.18). Then controller (17.11) renders system (17.9), (17.14) GUES about the origin with Lyapunov matrix P = S⁻¹; in particular, choosing Q̃ = 2αS yields a rate of convergence α.
Note that the existence of a positive definite symmetric S and a matrix L satisfying inequalities (17.17) or (17.20) is equivalent to the existence of another symmetric matrix S and another matrix L satisfying S ≥ I and (17.17) or (17.20), respectively.

Suppose B1 = B2 and (S, L) is a solution to (17.17) or (17.20); then (S, L − γB1') is also a solution for all γ ≥ 0, since replacing L by L − γB1' simply adds the term −2γB1 B1' ≤ 0 to the left hand side. Since the feedback gain matrix corresponding to the latter pair is (L − γB1')S⁻¹, a bound on the size of the gain matrix is useful. Such a bound can be obtained by solving the following optimization problem:

Minimize β subject to (17.17) and

S ≥ I                                                         (17.21)
[ βI  L ; L'  S ] ≥ 0                                         (17.22)

and letting u = LS⁻¹x.
Example 172 Recall the single-link manipulator of Example 171. For J = W l = 1, the
following Matlab program uses the results of this section and the LMI toolbox to obtain
stabilizing controllers.
% Quadratic stabilization of single-link manipulator
%
clear all
% Data and specs
alpha = 0.1;
J = 1;
Wl = 1;
%
% Description of manipulator
A0 = [0 1; 0 0];
DelA = [0 0; 1 0];
B0 = [0; 1/J];
DelB = 0;
a = -Wl/J;
b = Wl/J;
A1 = A0 + a*DelA;
A2 = A0 + b*DelA;
B1 = B0 + a*DelB;
B2 = B0 + b*DelB;
%
% Form the system of LMIs
setlmis([])
%
S = lmivar(1,[2,1]);      % 2x2 symmetric
L = lmivar(2,[1,2]);      % 1x2 full
beta = lmivar(1,[1,1]);   % scalar
%
% A1*S + S*A1' + B1*L + L'*B1' + 2*alpha*S <= 0
lmi1 = newlmi;
lmiterm([lmi1,1,1,S],A1,1,'s')
lmiterm([lmi1,1,1,L],B1,1,'s')
lmiterm([lmi1,1,1,S],2*alpha,1)
%
% A2*S + S*A2' + B2*L + L'*B2' + 2*alpha*S <= 0
lmi2 = newlmi;
lmiterm([lmi2,1,1,S],A2,1,'s')
lmiterm([lmi2,1,1,L],B2,1,'s')
lmiterm([lmi2,1,1,S],2*alpha,1)
%
% S >= I
Slmi = newlmi;
lmiterm([-Slmi,1,1,S],1,1)
lmiterm([Slmi,1,1,0],1)
%
% [beta L; L' S] >= 0
lmi4 = newlmi;
lmiterm([-lmi4,1,1,beta],1,1)
lmiterm([-lmi4,1,2,L],1,1)
lmiterm([-lmi4,2,2,S],1,1)
lmis = getlmis;
%
% Specify weighting: minimize beta
c = mat2dec(lmis,0,0,1);
options = [1e-5 0 0 0 0];
[copt,xopt] = mincx(lmis,c,options)
%
S = dec2mat(lmis,xopt,S)
L = dec2mat(lmis,xopt,L)
%
K = L*inv(S)
The resulting gain matrices are

K = [ −1.3036  −1.2479 ]     for α = 0.1
K = [ −3.6578  −2.8144 ]     for α = 1
K = [ −152.9481  −20.1673 ]  for α = 10

For θ̇(0) = 0 and a nonzero initial angle, Figure 17.1 illustrates the closed loop behavior of θ corresponding to α = 1 and α = 10.
[Figure 17.1: θ versus t for the closed loop system.]
So, when the constant disturbance w is nonzero, one needs kI ≠ 0, and the corresponding equilibrium value of the integrator state is given by w/kI . Introduce states

x1 = θ − θd ,   x2 = θ̇ ,   x3 = (integrator state) − w/kI

to obtain the following state space description:

ẋ1 = x2                                                       (17.24a)
J ẋ2 = −kp x1 − kd x2 − kI x3 + W l [sin(θd + x1 ) − sin(θd )]    (17.24b)
ẋ3 = x1                                                       (17.24c)

If one can choose

K = [ kp  kd  kI ]

so that system (17.24) is GUES about the origin, then the PID controller (17.23) will guarantee that the angle θ of the manipulator will exponentially approach the desired value θd and all other variables will be bounded. Note that a stabilizing K will have kI ≠ 0.

Using Theorem 41 and the LMI toolbox, obtain a stabilizing K for J = W l = 1. Simulate the resulting closed loop system for several initial conditions and values of θd and w.
17.2.2
Generalization
One can readily generalize the results of this section to systems described by (17.9) where
the time/state dependent matrices A(t, x) and B(t, x) have the following structure
A(t, x) = A0 + δ1 (t, x)ΔA1 + · · · + δl (t, x)ΔAl            (17.25a)
B(t, x) = B0 + δ1 (t, x)ΔB1 + · · · + δl (t, x)ΔBl            (17.25b)
As an example, recall the single-link manipulator described by

J θ̈ − W l sin θ = u                                           (17.26)

with J, W l > 0. Suppose that we do not know the parameters J and W l exactly and only have knowledge on their upper and lower bounds, that is,

0 < Jmin ≤ J ≤ Jmax                                           (17.27a)
(W l)min ≤ W l ≤ (W l)max .                                   (17.27b)
Letting

δ1 (t, x) = (W l/J) sin(x1 )/x1 ,   δ2 (t, x) = 1/J ,

the above state space description is given by (17.9) and (17.25) with l = 2 and

A0 = [ 0 1 ; 0 0 ] ,   B0 = [ 0 ; 0 ] ,   ΔA1 = [ 0 0 ; 1 0 ] ,   ΔB1 = 0 ,   ΔA2 = 0 ,   ΔB2 = [ 0 ; 1 ] .
AB = { (A0 + δ1 ΔA1 + · · · + δl ΔAl , B0 + δ1 ΔB1 + · · · + δl ΔBl ) : δi = ai or bi for i = 1, . . . , l }    (17.28)

Suppose there exist a matrix L and a positive definite symmetric matrix S which satisfy

AS + BL + SA' + L'B' < 0

for every pair (A, B) in AB. Then the controller given by (17.11) renders system (17.9),(17.25) globally uniformly exponentially stable about the origin with Lyapunov matrix S⁻¹.
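The vertex set AB in (17.28) has 2^l elements and is easy to enumerate. A sketch with assumed two-parameter data (the bounds below are arbitrary illustrative values):

```python
import numpy as np
from itertools import product

# Assumed data for l = 2, shaped like the uncertain-manipulator example.
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
B0 = np.zeros((2, 1))
DelA = [np.array([[0.0, 0.0], [1.0, 0.0]]), np.zeros((2, 2))]
DelB = [np.zeros((2, 1)), np.array([[0.0], [1.0]])]
bounds = [(-1.0, 1.0), (0.5, 2.0)]   # (a_i, b_i) for each delta_i (assumed)

# Enumerate every pair (A, B) in AB: each delta_i equals a_i or b_i.
AB = []
for deltas in product(*bounds):
    A = A0 + sum(d * dA for d, dA in zip(deltas, DelA))
    Bm = B0 + sum(d * dB for d, dB in zip(deltas, DelB))
    AB.append((A, Bm))
```

The LMIs of the theorem would then be imposed on each of these 2^l vertex pairs.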
Recalling the discussion in the previous section, we can globally stabilize system (17.9),(17.25) with rate of convergence α by solving the following optimization problem:

Minimize β subject to

AS + BL + SA' + L'B' + 2αS ≤ 0   for every pair (A, B) in AB
S ≥ I
[ βI  L ; L'  S ] ≥ 0                                         (17.29)

and letting u = LS⁻¹x.
17.3
ẋ = Ax + B1 φ(t, Cx + Du) + B2 u                              (17.30)

where the real n-vector x(t) is the state and the real m2 -vector u(t) is the control input. The matrices B1 , B2 and C are constant with dimensions n × m1 , n × m2 , and p × n, respectively. The function φ : IR × IRp → IRm1 satisfies

||φ(t, z)|| ≤ γ||z||                                          (17.31)

for all t and z.
The following theorem yields controllers for the global exponential stabilization of system (17.30).

Theorem 43 Consider a system described by (17.30) and satisfying (17.31). Suppose there exist a matrix L, a positive-definite symmetric matrix S and a positive scalar μ which satisfy the following matrix inequality:

AS + B2 L + SA' + L'B2' + γ²μ B1 B1' + μ⁻¹(CS + DL)'(CS + DL) < 0    (17.32)

Then the controller given by (17.11) renders system (17.30) globally uniformly exponentially stable about the origin with Lyapunov matrix S⁻¹.

Proof. Consider any K and u = Kx. Then

ẋ = (A + B2 K)x + B1 φ(t, (C + DK)x)

Recalling quadratic stability results, this system is GUES with Lyapunov matrix P if there exists μ > 0 such that

P (A + B2 K) + (A + B2 K)'P + γ²μ P B1 B1'P + μ⁻¹(C + DK)'(C + DK) < 0

Letting S = P⁻¹ and L = KS, this is equivalent to

AS + B2 L + SA' + L'B2' + γ²μ B1 B1' + μ⁻¹(CS + DL)'(CS + DL) < 0
Exercise 61 Consider a system described by (17.30) and satisfying (17.31). Suppose there exist a matrix L, a positive-definite symmetric matrix S and a positive scalar μ which satisfy

AS + B2 L + SA' + L'B2' + γ²μ B1 B1' + 2αS + μ⁻¹(CS + DL)'(CS + DL) ≤ 0    (17.33)

Then the controller given by (17.11) renders system (17.30) globally uniformly exponentially stable about the origin with rate of convergence α and Lyapunov matrix S⁻¹.
Using a Schur complement result, one can show that satisfaction of quadratic matrix inequality (17.32) is equivalent to satisfaction of the following matrix inequality:

[ AS + B2 L + SA' + L'B2' + γ²μ B1 B1'   (CS + DL)' ;  CS + DL   −μI ] < 0    (17.34)
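The Schur complement step can be checked numerically on a small instance: the block matrix is negative definite exactly when the "completed" inequality is. The matrices below are arbitrary illustrative choices, with W playing the role of the (1,1) block and X the off-diagonal block:

```python
import numpy as np

# Illustrative data for the Schur complement lemma.
W = np.array([[-2.0, 0.0], [0.0, -2.0]])
X = np.array([[1.0, 0.0]])
mu = 1.0

# Block matrix  [ W  X' ; X  -mu*I ]
M = np.block([[W, X.T], [X, -mu * np.eye(1)]])

# "Completed" form: W + (1/mu) X'X
Ms = W + (X.T @ X) / mu

neg_block = np.max(np.linalg.eigvalsh(M)) < 0
neg_schur = np.max(np.linalg.eigvalsh(Ms)) < 0
```

Both tests agree, which is the content of the Schur complement equivalence used above.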
Note that this inequality is linear in S, L, and μ. Recalling the discussion in Section 17.2, we can globally stabilize system (17.30) with rate of convergence α by solving the following optimization problem:

Minimize β subject to

[ AS + B2 L + SA' + L'B2' + γ²μ B1 B1' + 2αS   (CS + DL)' ;  CS + DL   −μI ] ≤ 0    (17.35)
S ≥ I
[ βI  L ; L'  S ] ≥ 0

and letting u = LS⁻¹x.
17.3.1
Generalization
Here we consider systems described by

ẋ = Ax + Σ_{i=1}^{l} B1i φi (t, Ci x + Di u) + B2 u           (17.36)

where each nonlinearity φi satisfies ||φi (t, z)|| ≤ γi ||z|| for i = 1, 2, . . . , l. Suppose there exist a matrix L, a positive-definite symmetric matrix S and positive scalars μ1 , . . . , μl which satisfy

AS + B2 L + SA' + L'B2' + Σ_{i=1}^{l} [ γi² μi B1i B1i' + μi⁻¹ (Ci S + Di L)'(Ci S + Di L) ] < 0    (17.38)

Then the controller given by (17.11) renders system (17.36) globally uniformly exponentially stable about the origin with Lyapunov matrix S⁻¹.
Using Schur complements, inequality (17.38) is equivalent to a block matrix inequality whose first diagonal block is AS + B2 L + SA' + L'B2' + Σ_{i=1}^{l} γi² μi B1i B1i' , whose remaining diagonal blocks are −μ1 I, . . . , −μl I, and whose blocks in the first column below the diagonal are C1 S + D1 L, . . . , Cl S + Dl L. Recalling previous discussions, we can globally stabilize system (17.36) with rate of convergence α by solving the following optimization problem:

Minimize β subject to

[ AS + B2 L + SA' + L'B2' + Σ_{i=1}^{l} γi² μi B1i B1i' + 2αS   (C1 S + D1 L)'  · · ·  (Cl S + Dl L)' ;
  C1 S + D1 L   −μ1 I   · · ·   0 ;
  ⋮             ⋮        ⋱      ⋮ ;
  Cl S + Dl L   0        · · ·  −μl I ]  ≤ 0                  (17.39)

S ≥ I ,   [ βI  L ; L'  S ] ≥ 0

and letting u = LS⁻¹x.
17.4
Here we consider systems described by

ẋ = Ax + B1 φ(t, Cx + Du) + B2 u                              (17.40)

where the nonlinearity φ satisfies

z'φ(t, z) ≤ 0                                                 (17.41)

for all z. Suppose there exist a matrix L, a positive-definite symmetric matrix S and a positive scalar τ which satisfy

AS + B2 L + SA' + L'B2' < 0                                   (17.42a)
τ B1 = (CS + DL)'                                             (17.42b)

Then controller (17.11) renders system (17.40) globally exponentially stable about the origin with Lyapunov matrix S⁻¹.
Proof. Apply the corresponding analysis result to the closed loop system.

Note that (17.42) has a solution with S, τ > 0 if and only if the following optimization problem has a minimum of zero:

Minimize ν subject to

AS + B2 L + SA' + L'B2' < 0 ,   τ > 0 ,
[ νI   τ B1 − SC' − L'D' ;  τ B1' − CS − DL   νI ] ≥ 0 .
Exercise 62 Prove the following result: Consider a system described by (17.40) and satisfying (17.41). Suppose there exist a matrix L, a positive-definite symmetric matrix S, and positive scalars α, τ which satisfy

AS + B2 L + SA' + L'B2' + 2αS ≤ 0                             (17.43a)
τ B1 = (CS + DL)'                                             (17.43b)

Then controller (17.11) renders system (17.40) globally uniformly exponentially stable about the origin with rate α and with Lyapunov matrix S⁻¹.
17.4.1
Generalization
Here we consider systems described by

ẋ = Ax + B1 φ(t, z) + B2 u ,   z = Cx + D1 φ(t, z) + D2 u     (17.44)

where the nonlinearity φ satisfies

z'φ(t, z) ≤ 0                                                 (17.45)

for all z.
Theorem 46 Consider a system described by (17.44) and satisfying (17.45). Suppose there exist a matrix L, a positive-definite symmetric matrix S and positive scalars α and τ which satisfy

[ AS + SA' + B2 L + L'B2' + 2αS   (CS + D2 L)' − τ B1 ;  CS + D2 L − τ B1'   −τ (D1 + D1') ] ≤ 0    (17.46)

Then controller (17.11) renders system (17.44) globally uniformly exponentially stable about the origin with rate α and with Lyapunov matrix S⁻¹.

Proof.

Note that (17.46) is linear in L, S, τ . If D1 + D1' > 0, then (17.46) is equivalent to

AS + SA' + B2 L + L'B2' + 2αS + τ⁻¹(τ B1 − SC' − L'D2')(D1 + D1')⁻¹(τ B1' − CS − D2 L) ≤ 0
Chapter 18
Singularly perturbed systems
18.1
18.1.1
Consider a system with transfer function

G(s) = (s + 2)/s² .

Suppose we subject this system to high-gain static output feedback, that is, we let

u = −ky

where k is large. Letting ε = k⁻¹, the closed loop system is described by

ẋ1 = x2                                                       (18.1)
εẋ2 = −2x1 − x2                                               (18.2)

We are interested in the behavior of this system for ε > 0 small. The eigenvalues of the closed loop system matrix

[ 0  1 ;  −2ε⁻¹  −ε⁻¹ ]

are

λs (ε) = ( −1 + √(1 − 8ε) ) / (2ε) ,   λf (ε) = ( −1 − √(1 − 8ε) ) / (2ε)

and

lim_{ε→0} λs (ε) = −2 ,   lim_{ε→0} ελf (ε) = −1 .

Thus, for small ε > 0, system responses are characterized by a slow mode e^{λs(ε)t} and a fast mode e^{λf(ε)t} = e^{(ελf(ε)) t/ε} .
For ε = 0, the differential equation (18.2) reduces to the algebraic equation

−2x1 − x2 = 0 .

Substitution for x2 into (18.1) yields

ẋ1 = −2x1 .

Note that the dynamics of this system are described by the limiting slow mode.
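The two-time-scale behavior above can be observed numerically: as ε shrinks, one eigenvalue of the closed loop matrix approaches the slow mode −2 while ε times the other approaches −1. A sketch (the value of ε is an arbitrary small choice):

```python
import numpy as np

def closed_loop_eigs(eps):
    # Closed loop matrix of (18.1)-(18.2): [0 1; -2/eps -1/eps]
    A = np.array([[0.0, 1.0], [-2.0 / eps, -1.0 / eps]])
    lam = np.sort(np.linalg.eigvals(A).real)   # both real for eps < 1/8
    return lam[1], lam[0]                      # (slow, fast)

slow, fast = closed_loop_eigs(1e-4)
```

For ε = 10⁻⁴ the slow eigenvalue is within about 10⁻³ of −2 and ε times the fast eigenvalue is within about 10⁻³ of −1.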
18.1.2
Consider the single-link manipulator illustrated in Figure 18.1. There is a flexible joint or
shaft between the link and the rotor of the actuating motor; this motor exerts a control
torque u. The equations of motion for this system are:
[Figure 18.1: single-link manipulator with joint flexibility; link (m, I), angles q1 and q2 , joint stiffness k, control torque u.]
where kp , kd > 0. Will this controller also stabilize the flexible model, provided the flexible elements are sufficiently stiff? To answer this question we let

k = ε⁻²k0 ,   c = ε⁻¹c0

and introduce the states

x1 = q1 ,   x2 = q̇1 ,   y1 = ε⁻²(q2 − q1 ) ,   y2 = ε⁻¹(q̇2 − q̇1 )

to obtain

ẋ1 = x2
ẋ2 = I⁻¹ W l sin x1 + I⁻¹ (k0 y1 + c0 y2 )
εẏ1 = y2
εẏ2 = −I⁻¹ W l sin x1 + J⁻¹ p(x) − I1 (k0 y1 + c0 y2 )

where I1 := I⁻¹ + J⁻¹.

Suppose we let ε = 0 in this description. Then the last two equations are no longer differential equations; they are equivalent to

y2 = 0
y1 = (k0 I1 )⁻¹ ( −I⁻¹ W l sin x1 + J⁻¹ p(x) )

Substituting these two equations into the first two equations of the above state space description yields

ẋ1 = x2
ẋ2 = (I + J)⁻¹ ( W l sin x1 + p(x) )

Note that this is a state space description of the closed loop rigid model.
18.2
The state space description of the above two examples are specific examples of singularly
perturbed systems. In general a singularly perturbed system is described by
ẋ = f (x, y, ε)                                               (18.3a)
εẏ = g(x, y, ε)                                               (18.3b)

where x(t) ∈ IRn and y(t) ∈ IRm describe the state of the system at time t ∈ IR and ε > 0 is the singular perturbation parameter. We suppose 0 < ε ≤ ε̄.
Reduced order system. Letting = 0 results in
ẋ = f (x, y, 0)
0 = g(x, y, 0)
If we assume that there is a continuously differentiable function h0 such that
g(x, h0 (x), 0) ≡ 0 ,
then y = h0 (x). This yields the reduced order system:
ẋ = f (x, h0 (x), 0)
A standard question is whether the behavior of the original full order system (18.3) can be
predicted by looking at the behavior of the reduced order system. To answer this question
one must also look at another system called the boundary layer system.
Boundary layer system. First introduce the fast time variable

τ = t/ε

and let

ξ(τ ) := x(ετ ) = x(t)
η(τ ) := y(ετ ) − h0 (x(ετ )) = y(t) − h0 (x(t))

to obtain a regularly perturbed system:

ξ′ = εf (ξ, h0 (ξ) + η, ε)
η′ = g(ξ, h0 (ξ) + η, ε) − ε (∂h0/∂x)(ξ) f (ξ, h0 (ξ) + η, ε)
[Figure panels (responses versus time (sec)): rigid; flexible k = 900, c = 30; flexible k = 9, c = 3; flexible k = 1, c = 1.]
Figure 18.2: Comparison of rigid case and flexible case with large and small stiffnesses
18.3   Linear systems
Here we consider singularly perturbed linear systems:

ẋ = A(ε)x + B(ε)y
εẏ = C(ε)x + D(ε)y                                            (18.4)

where A(ε), B(ε), C(ε), and D(ε) are differentiable matrix valued functions defined on some interval [0, ε̄). For ε > 0, this is an LTI system with A-matrix

A(ε) = [ A(ε)  B(ε) ;  ε⁻¹C(ε)  ε⁻¹D(ε) ] .                   (18.5)
We assume that D(0) is invertible. Letting ε = 0 results in the reduced order system:

ẋ = Āx

where

Ā := A(0) − B(0)D(0)⁻¹C(0) .

The boundary layer system is described by:

η′ = D(0)η .
Let

λ1^r , λ2^r , . . . , λn^r

and

λ1^bl , λ2^bl , . . . , λm^bl

be the eigenvalues of Ā and D(0), respectively, where an eigenvalue is included p times if its algebraic multiplicity is p. Then we have the following result.
Theorem 47 If D(0) is invertible, then for each ε > 0 sufficiently small, A(ε) has n eigenvalues

λ1s (ε), λ2s (ε), . . . , λns (ε)

with

lim_{ε→0} λis (ε) = λi^r ,   i = 1, 2, . . . , n

and m eigenvalues

λ1f (ε)/ε, λ2f (ε)/ε, . . . , λmf (ε)/ε

with

lim_{ε→0} λif (ε) = λi^bl ,   i = 1, 2, . . . , m
In the earlier example, Ā = −2 and D(0) = −1, so λ1^bl = −1. Hence the closed loop system has two eigenvalues λ1s (ε) and λ1f (ε)/ε with the following properties:

lim_{ε→0} λ1s (ε) = −2 ,   lim_{ε→0} λ1f (ε) = −1 .

The above theorem has the following corollary: If all the eigenvalues of Ā and D(0) have negative real parts then, for ε sufficiently small, all the eigenvalues of A(ε) have negative real parts. If either Ā or D(0) has an eigenvalue with a positive real part, then for ε sufficiently small, A(ε) has an eigenvalue with positive real part.
18.4
Suppose a singularly perturbed system has a boundary layer system which is just marginally
stable, that is, just stable but not exponentially stable. Then we cannot apply our previous
results. Further analysis is necessary.
The slow system. Consider a singularly perturbed system described by (18.3). A subset M of IRn × IRm is an invariant set for (18.3) if every solution originating in M remains there. A parameterized set Mε is a slow manifold for (18.3) if, for each ε ∈ [0, ε̄), Mε is an invariant set for (18.3) and there is a continuous function h : IRn × [0, ε̄) → IRm such that

Mε = {(x, y) ∈ IRn × IRm : y = h(x, ε)} .                     (18.6)
When system (18.3) has a slow manifold described by (18.6), the behavior of the system on this manifold is governed by

ẋ = f (x, h(x, ε), ε)                                         (18.7a)
y = h(x, ε) .                                                 (18.7b)
We refer to system (18.7a) as the slow system associated with singularly perturbed system
(18.3). Thus the slow system describes the motion of (18.3) restricted to its slow invariant
manifold.
The slow manifold condition. Now note that if Mε , given by (18.6), is an invariant manifold for (18.3), then h must satisfy the following slow manifold condition:

g(x, h(x, ε), ε) − ε (∂h/∂x)(x, ε) f (x, h(x, ε), ε) = 0      (18.8)

For each ε > 0, this condition is a partial differential equation for the function h(·, ε); for ε = 0, it reduces to a condition we have seen before:

g(x, h(x, 0), 0) = 0 .                                        (18.9)
Introducing the error variable z = y − h(x, ε), system (18.3) is described by

ẋ = f̃ (x, z, ε)                                              (18.10a)
εż = g̃(x, z, ε)                                              (18.10b)

with

f̃ (x, z, ε) := f (x, h(x, ε) + z, ε)                          (18.11a)
g̃(x, z, ε) := g(x, h(x, ε) + z, ε) − ε (∂h/∂x)(x, ε) f (x, h(x, ε) + z, ε)    (18.11b)

Note that Mε is an invariant manifold for system (18.3) if and only if the manifold

{(x, z) ∈ IRn × IRm : z = 0}                                  (18.12)

is invariant for (18.10); by (18.11b), the slow manifold condition (18.8) is equivalent to

g̃(x, 0, ε) = 0 .                                              (18.13)
From (18.10) and (18.13), it should be clear that if the slow manifold condition holds then, for any x(0) = x0 and z(0) = 0, the resulting system behavior satisfies (assuming uniqueness of solutions) z(t) ≡ 0. Hence the manifold described by (18.12) is an invariant manifold. So, the slow manifold condition is necessary and sufficient for the existence of a slow invariant manifold.
The fast system. Introducing the fast time variable

τ := t/ε

and letting

ξ(τ ) := x(t) = x(ετ )
η(τ ) := z(t) = z(ετ )

system (18.10) is described by the following regularly perturbed system:

ξ′ = εf̃ (ξ, η, ε)
η′ = g̃(ξ, η, ε)

We define

η′ = g̃(ξ, η, ε)

to be the fast system.
18.4.1
Suppose

f (x, h(x, ε), ε) = f0 (x) + εf1 (x) + o(ε; x)

where

lim_{ε→0} o(ε; x)/ε = 0 .

Then

ẋ = f0 (x)   and   ẋ = f0 (x) + εf1 (x)

are called the zero and first order slow systems, respectively. Note that the zero order slow system is the reduced order system. To compute these approximations, we suppose that

h(x, ε) = h0 (x) + εh1 (x) + o(ε; x) .
Then,

f0 (x) = f (x, h0 (x), 0)
f1 (x) = (∂f/∂y)(x, h0 (x), 0) h1 (x) + (∂f/∂ε)(x, h0 (x), 0)

The following expressions for h0 and h1 can be obtained from the slow manifold condition:

g(x, h0 (x), 0) = 0                                           (18.14a)
(∂g/∂y)(x, h0 (x), 0) h1 (x) − (∂h0/∂x)(x) f0 (x) + (∂g/∂ε)(x, h0 (x), 0) = 0    (18.14b)

The systems

η′ = g0 (ξ, η)   and   η′ = g0 (ξ, η) + εg1 (ξ, η)            (18.15)
are called the zero and first order fast systems, respectively. Note that η = 0 is an equilibrium state for both. Using (18.11b) one can obtain

g0 (ξ, η) = g(ξ, h0 (ξ) + η, 0)
g1 (ξ, η) = (∂g/∂y)(ξ, h0 (ξ) + η, 0) h1 (ξ) − (∂h0/∂x)(ξ) f (ξ, h0 (ξ) + η, 0) + (∂g/∂ε)(ξ, h0 (ξ) + η, 0)
Note that the zero order fast system is the boundary layer system.
A reasonable conjecture is the following: if the reduced order system is exponentially
stable and the first order fast system is exponentially stable for > 0 sufficiently small, then
the full order system is exponentially stable for small > 0.
18.4.2
(18.16)
and
h0
g
D
(x)f0 (x) +
(x, 0) +
(0)h0 (x) = 0
x
g
D
h0
()[f0 () + B(0)] +
(x, 0) +
(0)[h0 () + ]
x
h0
g
D
h0
D
= D(0)h1 ()
()f0 () +
(x, 0) +
(0)h0 () + [
()B(0) +
(0)]
x
= D1 ()
g1 (, ) = D(0)h1 ()
where
D1 () = D(0)1
g
D
(, 0)B(0) +
(0)
x
c = c0
This is described by
x 1
x 2
y 1
y 2
=
=
=
=
x2
I 1 W l sin x1
+ I 1 (k0 y1 + c0 y2 )
y2
I 1 W l sin x1 + J 1 p(x) I1 (k0 y1 + c0 y2 )
0
1
I W l sin x1 + J 1 p(x)
D1 =
(I + J)1 kd
0
1
0
I c0
Hence
and
D(0) + D1 =
(I + J)1 kd
I1 k0
1
I1 c0
A necessary and sufficient condition for this system to be exponentially stable for small ε > 0 is

kd < (I + J) I1 c0 .
Figures 18.3 and 18.4 illustrate numerical simulations with the above condition unsatisfied
and satisfied, respectively.
[Figure 18.3 panels (responses versus time (sec)): rigid; flexible k = 900, c = 0.1; flexible k = 9, c = 0.1.]
[Figure 18.4 panels (responses versus time (sec)): rigid; flexible k = 900, c = 0.5; flexible k = 9, c = 0.5; flexible k = 1, c = 0.5.]
Now
x 1
x 2
y 1
y 2
=
=
=
=
p(q1 , q2 )
kp q1 kd q2
kp x1 kd x2 kd y2
p(x) kd y2
x2
I 1 W l sin x1
+ I 1 (k0 y1 + c0 y2 )
y2
I 1 W l sin x1 + J 1 p(x) I1 k0 y1 (I1 c0 + J 1 kd )y2
D() =
0
1
1
1
I k0 (I c0 + J 1 kd )
D(0) + D1 =
(I + J)1 kd
I1 k0
This yields
1
(I1 c0 + J 1 kd )
[Figure panels (responses versus time (sec)): rigid; flexible k = 900, c = 0; flexible k = 10, c = 0; flexible k = 1, c = 0.]
Figure 18.5: Rigid and flexible case with colocated rate feedback
Chapter 19
Input output systems
19.1
Input-output systems
To describe an input-output system, we need several ingredients. First we have the time set
T . Usually
T = [0, ∞)
for continuous-time systems or
T = {0, 1, 2, . . .}
for discrete-time systems. We have the set U of input signals u : T → U where U is the set of input values. Common examples are U = IRm and U is the set of piecewise continuous functions u : [0, ∞) → IRm . We also have the set Y of output signals y : T → Y where Y is the set of output values. Common examples are Y = IRo and Y is the set of piecewise continuous functions y : [0, ∞) → IRo .
By saying that a function s : [0, ∞) → IRn is piecewise continuous we will mean that on every finite interval [0, T ], the function is continuous except (possibly) at isolated points.
Example 176 Two examples of piecewise continuous functions.
(i) Square wave function.

s(t) = 1 if N ≤ t < N + 1/2 ,   s(t) = −1 if N + 1/2 ≤ t < N + 1 ,   N = 0, 1, 2, . . .

(ii)

s(t) = (1 − t)⁻¹ if 0 ≤ t < 1 ,   s(t) = 0 if t ≥ 1 .
y(k) = C A^k x0 + Σ_{j=0}^{k−1} C A^{k−1−j} B u(j)
19.1.1   Causality
Basically, an input-output system is causal if the current value of the output depends only
on the previous values of the input. Mathematically, we express this as follows. A system G
is causal if, for each T ∈ T ,

u1 (t) = u2 (t)   for all t ≤ T

implies

G(u1 )(t) = G(u2 )(t)   for all t ≤ T .

The systems in the above examples are causal. As examples of noncausal systems, consider
y(t) = u(t + 1)   and   y(t) = ∫_t^∞ u(τ ) dτ .
19.2
Signal norms
To talk about the size of signals, we introduce signal norms. Consider any real scalar p ≥ 1.
We say that a piecewise continuous signal s : [0, ∞) → IRn is an Lp signal if

∫_0^∞ ||s(t)||^p dt < ∞

where ||s(t)|| is the Euclidean norm of the n-vector s(t). If S is any linear space of Lp signals, then the scalar valued function || · ||p defined by

||s||p := ( ∫_0^∞ ||s(t)||^p dt )^{1/p}

is a norm on this space. We call this the p-norm of s. Common choices of p are one and two.
For p = 2 we have

||s||2 = ( ∫_0^∞ ||s(t)||² dt )^{1/2} .

This is sometimes called the rms (root mean square) value of the signal. For p = 1,

||s||1 = ∫_0^∞ ||s(t)|| dt .
We say that s is an L∞ signal if

ess sup_{t≥0} ||s(t)|| < ∞ .

By ess sup we mean that the supremum is taken over the set of times t where s is continuous. If S is any linear space of L∞ signals, then the scalar valued function || · ||∞ defined by

||s||∞ := ess sup_{t≥0} ||s(t)||

is a norm on this space. We call this the ∞-norm of s. Also ||s(t)|| ≤ ||s||∞ a.e. (almost everywhere), that is, everywhere except (possibly) where s is not continuous.
Example 180 (Lp or not Lp )

(i) s(t) = e^{−λt} with λ > 0. Since |e^{−λt}| ≤ 1 for all t ≥ 0, this signal is an L∞ signal; also ||s||∞ = 1. For 1 ≤ p < ∞, we have ∫_0^∞ |e^{−λt}|^p dt = 1/(λp). Hence, this is an Lp signal with ||s||p = (1/(λp))^{1/p} .

(ii) s(t) = (1 − t)^{−1/2} for 0 ≤ t < 1 and s(t) = 0 for t ≥ 1. Here ∫_0^∞ |s(t)|^p dt = ∫_0^1 (1 − t)^{−p/2} dt is finite if and only if p < 2; so this signal is Lp for 1 ≤ p < 2 but is not L2 (and, being unbounded, not L∞ ).

(iii) s(t) = (1 + t)⁻¹ . This signal is L∞ and is Lp for every p > 1, but it is not L1 .

(iv) s(t) = e^t . This signal is not Lp for any p, but it is Lpe for all p.

Note that if a signal is L∞e then it is Lpe for all p.
19.3   Input-output stability
Suppose U is a set of Lpe signals u : [0, ∞) → IRm and Y is a set of Lpe signals y : [0, ∞) → IRo . Consider an input-output system

G : U → Y .

DEFN. The system G is Lp stable if it has the following properties.

(i) Whenever u is an Lp signal, the output y = G(u) is an Lp signal.
(ii) There are scalars β and γ such that, for every Lp signal u ∈ U, we have

||G(u)||p ≤ β + γ||u||p                                       (19.1)

The Lp gain of a system is the infimum of all γ such that there is a β which guarantees (19.1) for all Lp signals u ∈ U.
Example 182 Consider the memoryless nonlinear SISO system defined by

y(t) = sin(u(t)) + 1 .

Since

|y(t)| ≤ 1 + | sin(u(t))| ≤ 1 + |u(t)| ,

it follows that for any L∞ signal u we have

||y||∞ ≤ 1 + ||u||∞ .

Hence this system is L∞ stable with gain at most one.
Example 183 Consider the simple delay system with time delay h ≥ 0:

y(t) = 0 for 0 ≤ t ≤ h ,   y(t) = u(t − h) for t > h .

One can readily show that for any 1 ≤ p ≤ ∞ and any Lp signal u we have

||y||p = ||u||p .

Hence this system is Lp stable for 1 ≤ p ≤ ∞.
Example 184 (Simple integrator is not io stable.)

ẋ = u ,   y = x ,   x(0) = 0 .

First consider the input defined by

u(t) = 1 for 0 ≤ t ≤ 1 ,   u(t) = 0 for t > 1 ,

which is an Lp signal for every p. The resulting output,

y(t) = t for 0 ≤ t ≤ 1 ,   y(t) = 1 for t > 1 ,

is not an Lp signal for any finite p; hence the system is not Lp stable for 1 ≤ p < ∞. Now consider u(t) = 1 for all t ≥ 0. Then

y(t) = t .

Since u is an L∞ signal and y is not, this system is not L∞ stable.
19.4
Suppose U is a set of Lpe signals u : [0, ∞) → IRm and Y is a set of Lpe signals y : [0, ∞) → IRo . Consider the input-output system defined by the convolution integral
y(t) = ∫_0^t H(t − τ )u(τ ) dτ                                (19.2)
0
19.4.1

We say that the impulse response H is L1 if

∫_0^∞ ||H(t)|| dt < ∞

and we define

||H||1 := ∫_0^∞ ||H(t)|| dt .

Theorem 48 Suppose the impulse response H of convolution system (19.2) is L1 . Then, for each p with 1 ≤ p ≤ ∞, the system is Lp stable and, whenever u is an Lp signal,

||y||p ≤ ||H||1 ||u||p .
Consider

y(t) = ∫_0^t H(t − τ )u(τ ) dτ   with   H(t) = e^{−λt} ,  λ > 0 .

For any T > 0,

∫_0^T |H(t)| dt = (1/λ)(1 − e^{−λT }) .

Hence H is L1 and

||H||1 = 1/λ .                                                (19.3)
Remark 10 Consider

ẋ = Ax + Bu ,   y = Cx

where A is Hurwitz. Then H(t) = CeAt B is L1 . To see this, first choose any α > 0 satisfying

α < −max {Re(λ) : λ is an eigenvalue of A} .                  (19.4)

Then, for some constant c, ||H(t)|| ≤ ce^{−αt} for all t ≥ 0; hence H is L1 .

We will also use the Hölder inequality: if 1/p + 1/q = 1, then

∫_0^T |f (t)g(t)| dt ≤ ( ∫_0^T |f (t)|^p dt )^{1/p} ( ∫_0^T |g(t)|^q dt )^{1/q} .
Proof of Theorem 48. First note that, for any input signal u and any time t, we have

||y(t)|| = || ∫_0^t H(t − τ )u(τ ) dτ ||
         ≤ ( ∫_0^t ||H(τ )|| dτ ) ||u||∞
         ≤ ( ∫_0^∞ ||H(τ )|| dτ ) ||u||∞
         = ||H||1 ||u||∞ .

Hence y is L∞ and

||y||∞ ≤ ||H||1 ||u||∞ .

Now consider any p with 1 ≤ p < ∞ and let q satisfy 1/p + 1/q = 1. Using the Hölder inequality,

||y(t)|| ≤ ∫_0^t ||H(t − τ )||^{1/q} ( ||H(t − τ )||^{1/p} ||u(τ )|| ) dτ
         ≤ ( ∫_0^t ||H(t − τ )|| dτ )^{1/q} ( ∫_0^t ||H(t − τ )|| ||u(τ )||^p dτ )^{1/p}
         ≤ ||H||1^{1/q} ( ∫_0^t ||H(t − τ )|| ||u(τ )||^p dτ )^{1/p} .

Hence, for any T > 0,

∫_0^T ||y(t)||^p dt ≤ ||H||1^{p/q} ∫_0^T ∫_0^t ||H(t − τ )|| ||u(τ )||^p dτ dt
                    = ||H||1^{p/q} ∫_0^T ( ∫_τ^T ||H(t − τ )|| dt ) ||u(τ )||^p dτ
                    ≤ ||H||1^{p/q + 1} ∫_0^T ||u(τ )||^p dτ .

If u is Lp then ∫_0^T ||u(τ )||^p dτ ≤ ||u||p^p for all T ; hence y is Lp and, since p/q + 1 = p,

||y||p ≤ ||H||1 ||u||p .
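The bound ||y||p ≤ ||H||1 ||u||p can be observed on a discretized convolution; a sketch with an arbitrary impulse response and a random input:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
h = np.exp(-2.0 * t)                      # L1 impulse response, ||H||_1 = 1/2
u = rng.standard_normal(t.size)           # an arbitrary input signal
y = np.convolve(h, u)[: t.size] * dt      # Riemann approximation of (19.2)

H1 = np.sum(np.abs(h)) * dt
# (||y||_p, ||u||_p) for p = 1, 2 on the discretized grid
norms = {p: ((np.sum(np.abs(y) ** p) * dt) ** (1.0 / p),
             (np.sum(np.abs(u) ** p) * dt) ** (1.0 / p)) for p in (1, 2)}
```

On the grid this is exactly the discrete Young inequality, so the bound holds without any discretization slack.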
Remark 13 Here we provide a way of obtaining an upper bound on ||H||1 for single output systems using a Lyapunov equation. Consider a scalar output system described by

ẋ = Ax + Bu ,   y = Cx

where A is Hurwitz, that is, all eigenvalues of A have negative real parts. Consider any α satisfying (19.4) and let S be the solution to

AS + SA' + 2αS + BB' = 0 .                                    (19.5)

Then

||H||1 ≤ ( CSC' / (2α) )^{1/2} .                              (19.6)

To see this, first note that the above Lyapunov equation can be written as

(A + αI)S + S(A + αI)' + BB' = 0 .

Since A + αI is Hurwitz, S is uniquely given by

S = ∫_0^∞ e^{(A+αI)t} BB' e^{(A+αI)'t} dt .

Hence

CSC' = ∫_0^∞ e^{2αt} CeAt BB' eA't C' dt = ∫_0^∞ e^{2αt} H(t)H(t)' dt = ∫_0^∞ e^{2αt} ||H(t)||² dt .

Using the Cauchy-Schwarz inequality,

||H||1 = ∫_0^∞ (e^{−αt})(e^{αt} ||H(t)||) dt
       ≤ ( ∫_0^∞ e^{−2αt} dt )^{1/2} ( ∫_0^∞ e^{2αt} ||H(t)||² dt )^{1/2}
       = ( 1/(2α) )^{1/2} ( CSC' )^{1/2}

which is (19.6).
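Remark 13 can be checked numerically; the system below is an arbitrary Hurwitz example with eigenvalues −1 and −2:

```python
import numpy as np
from scipy.linalg import solve_lyapunov, expm
from scipy.integrate import quad

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
alpha = 0.5                                 # satisfies (19.4): alpha < 1

# (A + alpha I) S + S (A + alpha I)' + B B' = 0, i.e. equation (19.5)
S = solve_lyapunov(A + alpha * np.eye(2), -B @ B.T)
bound = float(np.sqrt((C @ S @ C.T).item() / (2.0 * alpha)))

# Direct numerical value of ||H||_1 with H(t) = C e^{At} B
H1, _ = quad(lambda t: abs((C @ expm(A * t) @ B).item()), 0.0, 50.0, limit=200)
```

Here H(t) = e^{−t} − e^{−2t}, so ||H||1 = 1/2, while the Lyapunov bound evaluates to √(1/3) ≈ 0.577; the bound holds with some conservatism.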
19.4.2
(19.7)
(19.8)
19.4.3
Consider a convolution system described by (19.2) and suppose the impulse response H is piecewise continuous. The Laplace transform of H is a matrix valued function of a complex variable s and is given by

Ĥ(s) = ∫_0^∞ H(t)e^{−st} dt

for all complex s where the integral is defined. If û and ŷ are the Laplace transforms of u and y, respectively, then the convolution system can be described by

ŷ(s) = Ĥ(s)û(s) .                                             (19.9)

Note that if H is L1 then

||Ĥ(s)|| ≤ ||H||1

for all s with Re(s) ≥ 0.
H∞ norm of a transfer function. Consider a matrix valued transfer function Ĥ and suppose Ĥ is analytic in the open right half complex plane. Then the H∞ -norm of Ĥ is defined by

||Ĥ||∞ = sup_{ω∈IR} ||Ĥ(jω)||

where

||Ĥ(jω)|| = σmax [Ĥ(jω)] .

Note that if H is L1 then

||Ĥ||∞ ≤ ||H||1 .
Theorem 49 Consider a convolution system (19.2) where the impulse response H is piecewise continuous and let Ĥ be the Laplace transform of H. Then, this system is L2 stable if and only if Ĥ is defined for all s with Re(s) > 0 and ||Ĥ||∞ is finite. In this case, whenever u is L2 ,

||y||2 ≤ ||Ĥ||∞ ||u||2 .                                      (19.10)

Actually, one can show that ||Ĥ||∞ is the L2 gain of the system in the sense that

||Ĥ||∞ = sup_{u∈L2 , u≠0} ||y||2 / ||u||2 .
Consider

Ĥ(s) = 1/(s + λ) ,  λ > 0 .

Since Ĥ(jω) = 1/(jω + λ), it follows that |Ĥ(jω)| = 1/(ω² + λ²)^{1/2} . Hence

||Ĥ||∞ = 1/λ .
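For this example the H∞ norm is attained at ω = 0 and can be confirmed on a frequency grid (λ and the grid are arbitrary choices):

```python
import numpy as np

lam = 3.0
w = np.linspace(-100.0, 100.0, 20001)          # frequency grid including 0
Hmag = 1.0 / np.sqrt(w ** 2 + lam ** 2)        # |H(jw)| for H(s) = 1/(s+lam)
Hinf = Hmag.max()
```

The grid maximum agrees with the closed form 1/λ.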
19.4.4
Here we consider systems described by

ẋ = Ax + Bu                                                   (19.11a)
y = Cx + Du                                                   (19.11b)
x(0) = x0 .                                                   (19.11c)
One can obtain bounds of the form

∫_0^T ||y(t)||² dt ≤ x0'P x0 + γ² ∫_0^T ||u(t)||² dt

for all T ≥ 0, where P is a suitable positive definite matrix; equivalently,

∫_0^T ||y(t)||² dt − γ² ∫_0^T ||u(t)||² dt ≤ x0'P x0 .
Suppose there exist a positive definite symmetric matrix P and scalars γ1 , γ2 ≥ 0 which satisfy

[ P A + A'P + 2αP   P B ;  B'P   −2αγ1² I ] ≤ 0               (19.13)

and

[ −P + C'C   C'D ;  D'C   −γ2² I + D'D ] ≤ 0                  (19.14)

and let

γ = (γ1² + γ2²)^{1/2} .

Then, for all t ≥ 0,

||y(t)|| ≤ ce^{−αt} + γ||u||∞ ,   c = (x0'P x0 )^{1/2} .       (19.15)

Remark 15 For a fixed α, inequalities (19.13) and (19.14) are LMIs in the variables P , γ1² and γ2² . The scalar γ is an upper bound on the L∞ gain of the system.
Proof. Consider any solution x(·) of (19.11) and let v(t) = x(t)'P x(t). Then v0 := v(0) = x0'P x0 and

v̇ = 2x'P ẋ = 2x'P Ax + 2x'P Bu .

It now follows from the matrix inequality (19.13) that

x'(P A + A'P + 2αP )x + x'P Bu + u'B'P x − 2αγ1² u'u ≤ 0 ,

that is,

2x'P Ax + 2x'P Bu ≤ −2αx'P x + 2αγ1² ||u||² .

Hence

v̇(t) ≤ −2αv(t) + 2αγ1² ||u||∞²

which implies

v(t) ≤ v(0)e^{−2αt} + ∫_0^t e^{−2α(t−τ)} 2αγ1² ||u||∞² dτ ,

that is,

v(t) ≤ v0 e^{−2αt} + γ1² ||u||∞² .

Recalling the matrix inequality (19.14) we obtain that

x'(−P + C'C)x + x'C'Du + u'D'Cx + u'(−γ2² I + D'D)u ≤ 0 ,

that is,

||y||² = x'C'Cx + 2x'C'Du + u'D'Du ≤ x'P x + γ2² ||u||² .

Hence, for all t ≥ 0,

||y(t)||² ≤ v(t) + γ2² ||u||∞² ≤ v0 e^{−2αt} + γ1² ||u||∞² + γ2² ||u||∞² ,

that is,

||y(t)||² ≤ v0 e^{−2αt} + γ² ||u||∞² .

Taking the square root of both sides of this inequality yields

||y(t)|| ≤ ce^{−αt} + γ||u||∞ ,   c = (x0'P x0 )^{1/2} .
19.4.5
Here we consider linear delay systems described by

ẋ(t) = A0 x(t) + A1 x(t − h) + Bu(t)                          (19.18a)
y(t) = Cx(t)                                                  (19.18b)
x(t) = 0 for −h ≤ t ≤ 0 .                                     (19.18c)

Let Φ be the matrix valued function defined by

Φ̇(t) = A0 Φ(t) + A1 Φ(t − h)
Φ(0) = I
Φ(t) = 0 for −h ≤ t < 0 .

Then,

y(t) = ∫_0^t H(t − τ )u(τ ) dτ

where

H(t) = CΦ(t)B .                                               (19.19)
Proof: Consider

x(t) = ∫_0^t Φ(t − τ )Bu(τ ) dτ .

Then

ẋ(t) = Φ(0)Bu(t) + ∫_0^t Φ̇(t − τ )Bu(τ ) dτ
     = Bu(t) + ∫_0^t ( A0 Φ(t − τ ) + A1 Φ(t − τ − h) ) Bu(τ ) dτ
     = Bu(t) + A0 ∫_0^t Φ(t − τ )Bu(τ ) dτ + A1 ∫_0^t Φ(t − h − τ )Bu(τ ) dτ
     = A0 x(t) + A1 x(t − h) + Bu(t) ;

hence x satisfies (19.18a) and (19.18c), and

y(t) = Cx(t) = ∫_0^t CΦ(t − τ )Bu(τ ) dτ = ∫_0^t H(t − τ )u(τ ) dτ .
Consider the scalar delay system

ẋ(t) = −ax(t) − bx(t − h) + u(t)
y(t) = x(t)
x(t) = 0 for −h ≤ t ≤ 0

with |b| < a and h ≥ 0. Here

Ĥ(s) = 1/d(s)   where   d(s) = s + a + be^{−hs} .
19.5
19.6
19.6.1
19.6.2   Truncations
Suppose s : T → IRn is a signal and consider any time T . The corresponding truncation of s is defined by

sT (t) := s(t) for 0 ≤ t < T ,   sT (t) := 0 for t ≥ T .

Note that a system G is causal if and only if for every input u and every time T , we have

G(u)T = G(uT )T .
For any Lp signal s, we have ||sT ||p ≤ ||s||p . A signal s is Lpe if and only if sT is Lp for all T . Also, an Lpe signal s is an Lp signal if there exists a constant c such that ||sT ||p ≤ c for all T .
19.6.3
Suppose U1 and U2 are vector spaces of Lpe signals with the property that if they contain a signal, they also contain every truncation of the signal. Suppose G1 : U1 → U2 and G2 : U2 → U1 are causal Lp -stable systems which satisfy

||G1 (u1 )||p ≤ β1 + γ1 ||u1 ||p
||G2 (u2 )||p ≤ β2 + γ2 ||u2 ||p .                             (19.22)

Consider the feedback system described by

y1 = G1 (u1 )
y2 = G2 (u2 )
u1 = r1 + y2
u2 = r2 + y1                                                  (19.23)

We assume that for each pair (r1 , r2 ) of signals with r1 ∈ U1 and r2 ∈ U2 , the above relations uniquely define a pair (y1 , y2 ) of outputs with y1 ∈ U2 and y2 ∈ U1 .

Theorem 51 (Small gain theorem) Consider a feedback system satisfying the above conditions and suppose

γ1 γ2 < 1 .

Then, whenever r1 and r2 are Lp signals, the signals u1 , u2 , y1 , y2 are Lp and

||u1 ||p ≤ (1 − γ1 γ2 )⁻¹ (β2 + γ2 β1 + ||r1 ||p + γ2 ||r2 ||p )
||u2 ||p ≤ (1 − γ1 γ2 )⁻¹ (β1 + γ1 β2 + ||r2 ||p + γ1 ||r1 ||p )

Hence,

||y1 ||p ≤ β1 + γ1 (1 − γ1 γ2 )⁻¹ (β2 + γ2 β1 + ||r1 ||p + γ2 ||r2 ||p )
||y2 ||p ≤ β2 + γ2 (1 − γ1 γ2 )⁻¹ (β1 + γ1 β2 + ||r2 ||p + γ1 ||r1 ||p )

and the feedback system is Lp stable.
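The bound on ||u1 ||p can be recovered by iterating the two gain inequalities; the numbers below are arbitrary choices with γ1 γ2 < 1:

```python
# Iterate ||u1|| <= ||r1|| + beta2 + gamma2*(||r2|| + beta1 + gamma1*||u1||),
# starting from 0; with gamma1*gamma2 < 1 this is a contraction and converges
# to the closed form bound of the small gain theorem.
b1, g1 = 1.0, 0.5
b2, g2 = 2.0, 0.8
r1, r2 = 3.0, 4.0
assert g1 * g2 < 1

u1 = 0.0
for _ in range(200):
    u1 = r1 + b2 + g2 * (r2 + b1 + g1 * u1)

closed_form = (b2 + g2 * b1 + r1 + g2 * r2) / (1 - g1 * g2)
```

With these numbers the fixed point and the closed form bound both equal 15.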
Proof. Suppose r1 and r2 are Lp signals. Since r1 , r2 and y1 , y2 are Lpe signals, it follows that u1 and u2 are Lpe signals. Consider any time T > 0. Since all of the signals r1 , r2 , u1 , u2 , y1 , y2 are Lpe , it follows that their respective truncations r1T , r2T , u1T , u2T , y1T , y2T are Lp signals. Since G1 is causal,

y1T = G1 (u1 )T = G1 (u1T )T .

Hence,

||y1T ||p = ||G1 (u1T )T ||p ≤ ||G1 (u1T )||p ≤ β1 + γ1 ||u1T ||p .
19.7
Suppose that the impulse response H of the convolution system is L1 and, for some γ ≥ 0, the nonlinearity satisfies

||φ(t, y)|| ≤ γ||y||                                          (19.24)

for all t and y.

Since the impulse response H of the convolution system is L1 , the L2 gain of the convolution system is ||Ĥ||∞ where Ĥ is the system transfer function. We now claim that the nonlinear system is L2 stable with gain γ. To see this, consider u(t) = φ(t, y(t)) and note that, for any T > 0,
∫_0^T ||u(t)||² dt = ∫_0^T ||φ(t, y(t))||² dt ≤ ∫_0^T γ²||y(t)||² dt = γ² ∫_0^T ||y(t)||² dt .

If y is L2 then the right hand side is bounded by γ²||y||2² for all T . This implies that u is L2 and ||u||2 ≤ γ||y||2 . Hence the nonlinear system is L2 stable with gain γ. From the small gain theorem we obtain that the feedback system is L2 stable if

γ||Ĥ||∞ < 1 .                                                 (19.25)

For a SISO system, the above condition is equivalent to the requirement that the Nyquist plot of the system transfer function lie within the circle of radius 1/γ centered at the origin. As a consequence, the above condition is sometimes referred to as the circle criterion.
Example 190 Consider

ẋ(t) = −ax(t) + b sin(x(t − h))

where a > |b|. Letting y(t) = x(t − h), this system can be described by

ẋ(t) = −ax(t) + u(t)
y(t) = x(t − h)
u(t) = b sin(y(t)) .

Here

Ĥ(s) = e^{−sh}/(s + a)   and   φ(t, y) = b sin(y) .

Thus,

|Ĥ(jω)|² = |e^{−jωh}|²/(ω² + a²) = 1/(ω² + a²) ;

so, ||Ĥ||∞ = 1/a. Also,

|b sin(y)| ≤ |b||y| ;

hence the condition (19.24) is satisfied with γ = |b|. Since a > |b|, we obtain that

γ||Ĥ||∞ = |b|/a < 1 .
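Numerically, with a = 2 and b = 1 (arbitrary values satisfying a > |b|), the circle criterion condition checks out:

```python
import numpy as np

a, b = 2.0, 1.0                              # assumed values with a > |b|
w = np.linspace(-200.0, 200.0, 40001)
Hmag = 1.0 / np.sqrt(w ** 2 + a ** 2)        # |H(jw)|; the delay e^{-jwh}
Hinf = Hmag.max()                            # has unit modulus, so it drops out
gain_product = abs(b) * Hinf                 # gamma * ||H||_inf = |b|/a
```

The product |b|/a = 0.5 < 1, so (19.25) holds and the delayed feedback system is L2 stable.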
19.7.1
Here we consider the case in which the nonlinearity φ satisfies the sector condition

ay² ≤ yφ(t, y) ≤ by²                                          (19.26)

for all t and y. Let

k := (a + b)/2   and   φ̃(t, y) := φ(t, y) − ky .              (19.27)

Then

|φ̃(t, y)| ≤ γ̃|y|   where   γ̃ = (b − a)/2 .                    (19.28)

With w̃ denoting the external input, the feedback system can be described by

y = Ĥ(w̃ − ky − φ̃(t, y)) ;

hence

y = Ĝ(w̃ − φ̃(t, y))   where   Ĝ = (1 + kĤ)⁻¹Ĥ .                (19.29)
(19.29)
Thus the systems under consideration here can be described by (19.29) and (19.27) where φ̃ satisfies (19.28). Using the circle criterion of the last section, this system is L2 stable provided all the poles of Ĝ have negative real parts and

γ̃||Ĝ||∞ < 1 .                                                 (19.30)

This last condition holds if and only if

γ̃² Ĥ(jω)* Ĥ(jω) < [1 + kĤ(jω)]* [1 + kĤ(jω)]

for all ω ∈ IR ∪ {∞}. Recalling the definitions of γ̃ and k, the above inequality can be rewritten as

−abĤ(jω)* Ĥ(jω) − k[Ĥ(jω) + Ĥ(jω)* ] < 1 .                    (19.31)
We have three cases to consider:

Case 1: ab > 0. In this case, inequality (19.31) can be rewritten as

[Ĥ(jω) + c]∗ [Ĥ(jω) + c] > R²

where

c = (1/2)(1/a + 1/b)  and  R = (1/2)|1/a − 1/b| .

This is equivalent to

|Ĥ(jω) + c| > R .

Case 2: ab < 0. In this case, inequality (19.31) can be rewritten as

[Ĥ(jω) + c]∗ [Ĥ(jω) + c] < R² .

This is equivalent to

|Ĥ(jω) + c| < R .
Case 3: ab = 0. Suppose a = 0 and b > 0. In this case, inequality (19.31) reduces to

−(b/2)(Ĥ(jω) + Ĥ(jω)∗) < 1 ,

that is,

Re Ĥ(jω) > −1/b .   (19.32)

This means that the Nyquist plot of Ĥ must lie to the right of the vertical line which intersects the real axis at −1/b.
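The algebra behind Case 1 can be sanity-checked numerically. In the Python sketch below, the transfer function Ĥ(s) = 1/(s + 3) and the sector bounds a = 1, b = 2 are assumed purely for illustration; the check confirms that inequality (19.31) and the disc condition |Ĥ(jω) + c| > R agree pointwise:

```python
import numpy as np

# Illustrative data (assumed, not from the notes): H(s) = 1/(s+3),
# sector bounds a = 1, b = 2, so ab > 0 (Case 1).
a, b = 1.0, 2.0
k = (a + b) / 2
c = 0.5 * (1 / a + 1 / b)
R = 0.5 * abs(1 / a - 1 / b)

w = np.linspace(0.0, 50.0, 5001)
H = 1.0 / (1j * w + 3.0)

lhs = -a * b * np.abs(H) ** 2 - k * (H + np.conj(H)).real  # left side of (19.31)
disc = np.abs(H + c)

# the algebraic form (19.31) and |H(jw) + c| > R agree at every grid point
agree = (lhs < 1) == (disc > R)
print(agree.all())    # True
```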
Chapter 20
Nonlinear H∞
20.1
Analysis
Consider a system

ẋ = F(x) + G(x)w
z = H(x)

with initial condition

x(0) = x0

where w : [0, ∞) → IRm is a disturbance input and z is a performance output with z(t) ∈ IRp.

Desired performance. We wish that the system has the following property for some γ > 0: for every initial state x0 there is a real number β0 ≥ 0 such that for every disturbance input w one has

∫₀∞ ||z(t)||² dt ≤ γ² ∫₀∞ ||w(t)||² dt + β0 .
20.1.1 The HJ Inequality

Suppose there is a C¹ function V satisfying

DV(x)F(x) + (1/(4γ²)) DV(x)G(x)G(x)T DVT(x) + H(x)T H(x) ≤ 0 ,  V(x) ≥ 0   (20.1)

for some γ > 0. Then for every initial state x0 and for every disturbance input w one has

∫₀∞ ||z(t)||² dt ≤ γ² ∫₀∞ ||w(t)||² dt + V(x0) .   (20.2)
Proof. Consider any initial state x0 and any disturbance input w. Along any resulting solution x(·), we have

dV(x(t))/dt = L(x(t), w(t))

where

L(x, w) = DV(x)F(x) + DV(x)G(x)w
        = DV(x)F(x) + (1/(4γ²))||G(x)T DV(x)T||² + γ²||w||² − ||γw − (1/(2γ))G(x)T DV(x)T||²
        ≤ DV(x)F(x) + (1/(4γ²))||G(x)T DV(x)T||² + γ²||w||² .
Using inequality (20.1) we have

DV(x)F(x) + (1/(4γ²))||G(x)T DV(x)T||² ≤ −||H(x)||² .

Hence

L(x, w) ≤ −||H(x)||² + γ²||w||²

i.e.,

dV(x(t))/dt ≤ −||z(t)||² + γ²||w(t)||² .
Integrating from 0 to T yields

V(x(T)) − V(x0) ≤ −∫₀T ||z(t)||² dt + γ² ∫₀T ||w(t)||² dt .   (20.3)

Since V(x(T)) ≥ 0, this implies that

∫₀T ||z(t)||² dt ≤ γ² ∫₀T ||w(t)||² dt + V(x0) .

If w is not L2, then ∫₀∞ ||w(t)||² dt = ∞ and (20.2) trivially holds. If w is L2, the integral γ² ∫₀∞ ||w(t)||² dt is finite, hence ∫₀∞ ||z(t)||² dt is finite and satisfies (20.2).
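Inequality (20.2) can be illustrated by simulation. The scalar system below (ẋ = −x + w, z = x, with γ = 2 and w(t) = e^{−t}) is my own example; P = 4 − 2√3 makes V(x) = Px² satisfy (20.1) with equality:

```python
import numpy as np

# Scalar illustration (an assumed example, not from the notes):
#   xdot = -x + w, z = x, so F(x) = -x, G(x) = 1, H(x) = x.
# With gamma = 2, V(x) = P x^2 satisfies (20.1) when
#   P^2/gamma^2 - 2P + 1 <= 0;  P = 4 - 2*sqrt(3) gives equality.
gamma = 2.0
P = 4.0 - 2.0 * np.sqrt(3.0)

dt, T = 1e-3, 30.0
t = np.arange(0.0, T, dt)
w = np.exp(-t)                      # a particular L2 disturbance
x = np.empty_like(t)
x[0] = 1.0                          # initial state x0 = 1
for i in range(len(t) - 1):         # forward Euler integration
    x[i + 1] = x[i] + dt * (-x[i] + w[i])

z2 = float(np.sum(x**2) * dt)       # approximates integral of ||z||^2
w2 = float(np.sum(w**2) * dt)       # approximates integral of ||w||^2

print(z2 <= gamma**2 * w2 + P * x[0]**2)   # True, as (20.2) guarantees
```

The left side comes out near 1.25 while the right side is about 2.54, consistent with the theorem.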
When the HJ inequality holds with equality, that is,

DV(x)F(x) + (1/(4γ²)) DV(x)G(x)G(x)T DVT(x) + H(x)T H(x) = 0 ,

the worst case disturbance is given by

w = (1/(2γ²)) G(x)T DV(x)T .   (20.4)
Quadratic V. Suppose

V(x) = xT P x

where P is a symmetric real matrix. Then DV(x) = 2xT P and the HJ inequality is

2xT P F(x) + (1/γ²) xT P G(x)G(x)T P x + H(x)T H(x) ≤ 0 .

For a linear system (F(x) = Ax, G(x) = B, H(x) = Cx), the HJ inequality reads

DV(x)Ax + (1/(4γ²)) DV(x)BBT DV(x)T + xT CT Cx ≤ 0 .
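For a scalar linear example (assumed here: ẋ = −x + w, z = x, so Ĥ(s) = 1/(s+1) and ||Ĥ||∞ = 1), the quadratic HJ inequality reduces to P²/γ² − 2P + 1 ≤ 0, which has a real nonnegative solution exactly when γ ≥ ||Ĥ||∞. A quick check:

```python
import numpy as np

# Scalar sketch (assumed example): xdot = -x + w, z = x, H(s) = 1/(s+1).
# With V(x) = P x^2, (20.1) becomes P^2/gamma^2 - 2P + 1 <= 0.

def hj_solvable(gamma):
    # real roots of P^2/gamma^2 - 2P + 1 = 0 exist iff discriminant >= 0
    disc = 4.0 - 4.0 / gamma**2
    if disc < 0:
        return False
    P = (2.0 - np.sqrt(disc)) * gamma**2 / 2.0   # smaller root
    return P >= 0

print(hj_solvable(1.05))   # True  (gamma above the L2 gain of 1)
print(hj_solvable(0.95))   # False (gamma below the L2 gain)
```

The threshold γ = 1 is exactly the L2 gain of the system, as the theory predicts.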
20.2 Control

Consider now

ẋ = F(x) + G1(x)w + G2(x)u
z = ( H(x) ; u )

With a control law u = k(x), suppose there is a C¹ function V ≥ 0 satisfying

DV(F + G2 k) + (1/(4γ²)) DV G1 G1T DVT + CT C + kT k ≤ 0 .   (20.5)
20.3
Let

G := ( G1  G2 )   (20.6)

and

R := [ −γ²I  0 ; 0  I ] .   (20.7)

Also letting

v := ( w ; u )

we have the worst case disturbance

w = (1/(2γ²)) G1T DVT

and, jointly,

v = −(1/2) R⁻¹ GT DVT .   (20.8)

20.3.1 Linearized problem
ẋ = Ax + Bv   (20.9a)
z = ( Cx ; u )   (20.9b)

with

A = DF(0) ,  B = G(0) ,  C = DH(0) .
20.3.2 Nonlinear problem

(20.12a)
(20.12b)

Suppose

F = F[1] + F[2] + F[3] + · · ·
G = G[0] + G[1] + G[2] + · · ·
H = H[1] + H[2] + H[3] + · · ·

where F[k], G[k] and H[k] are homogeneous functions of order k. Note that

F[1](x) = Ax ,  G[0](x) = B ,  H[1](x) = Cx .   (20.13)

The input terms are given by

v[k] = −(1/2) R⁻¹ Σ_{j=0}^{k−1} G[j]T DV[k+1−j]T   (20.15)

and the HJI equation is solved term by term from

Q[2] = 0 ,  Q[m] = 0 for m ≥ 3 ,   (20.16)

where Q[m] collects the order-m terms:

Q[2] = DV[2]F[1] − v[1]T R v[1] + xT CT Cx
Q[m] = Σ_{k=0}^{m−2} DV[m−k]F[k+1] − Σ_{k=1}^{m−1} v[m−k]T R v[k]  for m ≥ 3 .
For m = 2 we obtain

DV[2](x)Ax − (1/4) DV[2](x) B R⁻¹ BT DV[2]T(x) + xT CT Cx = 0

which is the HJI equation for the linearized problem. Hence

V[2](x) = xT P x

where PT = P > 0 solves the ARE with Ā := A − BR⁻¹BT P Hurwitz. Hence

v[1](x) = −R⁻¹BT P x .
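In the scalar case the m = 2 step can be carried out directly. The numbers below (a = 1, b1 = b2 = c = 1, γ = 2) are assumed for illustration; the script solves the scalar ARE and checks that the larger root is the stabilizing one:

```python
import numpy as np

# Scalar sketch with assumed values: linearization
#   xdot = a x + b1 w + b2 u,  z = (c x ; u),  a = 1, b1 = b2 = c = 1, gamma = 2.
# With R = diag(-gamma^2, 1) the m = 2 equation reduces to the scalar ARE
#   2 a P - (b2^2 - b1^2/gamma^2) P^2 + c^2 = 0.
a, b1, b2, c, gamma = 1.0, 1.0, 1.0, 1.0, 2.0
s = b2**2 - b1**2 / gamma**2          # effective "B R^{-1} B'" coefficient

P = max(np.roots([-s, 2 * a, c**2]))  # roots of -s P^2 + 2 a P + c^2 = 0

Abar = a - s * P                      # closed-loop coefficient A - B R^{-1} B' P
print(Abar < 0)                                    # True: stabilizing root
print(abs(-s * P**2 + 2 * a * P + c**2) < 1e-9)    # ARE residual ~ 0
```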
Consider now any m ≥ 3. Using

v[m−1] = −(1/2) R⁻¹ Σ_{k=0}^{m−2} G[k]T DV[m−k]T

we have

−2 v[m−1]T R v[1] = Σ_{k=0}^{m−2} DV[m−k]G[k]v[1] = DV[m]Bv[1] + Σ_{k=1}^{m−2} DV[m−k]G[k]v[1] .

Let

f = F + Gv[1]   (20.17)

so that

f[1] = Āx   (20.18)

and

f[k+1] = F[k+1] + G[k]v[1] .

The sum of the terms on the left-hand side of (20.16) can now be written as

Σ_{k=0}^{m−2} DV[m−k]F[k+1] − Σ_{k=1}^{m−1} v[m−k]T R v[k]
  = DV[m]f[1] + Σ_{k=1}^{m−2} DV[m−k]f[k+1] − Σ_{k=2}^{m−2} v[m−k]T R v[k] .

Hence (20.16) can be written as

DV[m]f[1] = −Σ_{k=1}^{m−2} DV[m−k]f[k+1] + Σ_{k=2}^{m−2} v[m−k]T R v[k] .

Since the right-hand side depends only on V[2], . . . , V[m−1] and the left-hand side is linear in V[m], this determines V[m] recursively.
For m = 3:

DV[3]f[1] = −DV[2]f[2]

and

v[2] = −(1/2) R⁻¹ (BT DV[3]T + G[1]T DV[2]T) .

For m = 4:

DV[4]f[1] = −DV[3]f[2] − DV[2]f[3] + v[2]T R v[2]

and

v[3] = −(1/2) R⁻¹ (BT DV[4]T + G[1]T DV[3]T + G[2]T DV[2]T) .

If

F[2] = 0 ,  G[1] = 0

then

V[3] = 0 ,  v[2] = 0

and

v[3] = −(1/2) R⁻¹ (BT DV[4]T + G[2]T DV[2]T) .
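The m = 3 equation can be checked in a scalar sketch. With V[2](x) = Px², f[1](x) = āx (ā < 0) and f[2](x) = c2x² (all values assumed), the equation DV[3]f[1] = −DV[2]f[2] is solved by V[3](x) = αx³ with α = −2Pc2/(3ā):

```python
import numpy as np

# Assumed scalar data: V2(x) = P x^2, f1(x) = abar*x with abar < 0,
# f2(x) = c2 x^2.  Then DV3*f1 = -DV2*f2 gives V3(x) = alpha x^3 with
#   alpha = -2*P*c2 / (3*abar).
P, abar, c2 = 0.5, -2.0, 1.0
alpha = -2.0 * P * c2 / (3.0 * abar)

x = np.linspace(-2.0, 2.0, 101)
lhs = 3 * alpha * x**2 * (abar * x)   # DV3(x) * f1(x)
rhs = -2 * P * x * (c2 * x**2)        # -DV2(x) * f2(x)
print(np.allclose(lhs, rhs))          # True
```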
Example 191 Consider

ẋ = x − x³ + w + u ,  z = u .

The corresponding HJI equation can be solved term by term as above.
Chapter 21
Performance
21.1
Analysis
Consider a system described by

ẋ = f(t, x)   (21.1a)
z = h(t, x)   (21.1b)

where x0 = x(t0).
(21.2)
(21.3)
(21.4)
(21.5)
where all the eigenvalues of A have negative real part. The Lyapunov equation

P A + A∗ P + C∗ C = 0   (21.6)

has a unique solution for P; moreover P is symmetric and positive semi-definite. Post- and pre-multiplying this equation by x and its transpose yields

2x∗ P Ax + (Cx)∗ Cx = 0 ;

thus, (21.2) holds with V(x) = x∗ P x. Actually, in this case one can show that

∫_{t0}∞ ||z(t)||² dt = V(x0) .
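The identity ∫ ||z(t)||² dt = x0∗P x0 is easy to confirm numerically; the following Python sketch (A and C are assumed example data) solves the Lyapunov equation with SciPy and compares against a simulated output energy:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Assumed example data: a Hurwitz A (eigenvalues -1, -2) and an output matrix C.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
x0 = np.array([1.0, 0.0])

# Solve A'P + PA + C'C = 0 (solve_continuous_lyapunov solves aX + Xa' = q)
P = solve_continuous_lyapunov(A.T, -C.T @ C)

# Output energy of xdot = Ax, z = Cx, x(0) = x0, stepping with expm(A dt)
dt, T = 1e-3, 20.0
Phi = expm(A * dt)                  # one-step state transition matrix
x, energy = x0.copy(), 0.0
for _ in range(int(T / dt)):
    z = C @ x
    energy += float(z @ z) * dt
    x = Phi @ x

print(abs(energy - x0 @ P @ x0) < 1e-2)   # True: energy equals x0'P x0
```

For these particular matrices the energy works out to 11/12.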
21.2
Polytopic systems
21.2.1
Polytopic models
Here V is a vector space; so V could be IRn or the set of real n × m matrices or some more general space. Suppose v1, . . . , vN is a finite number of elements of V. A vector v is a convex combination of v1, . . . , vN if it can be expressed as

v = λ1v1 + · · · + λN vN

where λj ≥ 0 for j = 1, . . . , N and λ1 + · · · + λN = 1. The polytope generated by v1, . . . , vN is the set of all convex combinations of v1, . . . , vN. Suppose L is an affine function with

L(vj) ≤ 0   for j = 1, . . . , N .   (21.7)

Then L(v) ≤ 0 for every v in the polytope.
To see this, suppose that (21.7) holds. Since L(v) is affine in v, we can express it as

L(v) = L0 + L1(v)

where L1(v) is linear in v. Suppose v is in the polytope; then v = Σ_{j=1}^N λj vj where λj ≥ 0 and Σ_{j=1}^N λj = 1. Hence

L(v) = L0 + L1( Σ_{j=1}^N λj vj ) = Σ_{j=1}^N λj L0 + Σ_{j=1}^N λj L1(vj) = Σ_{j=1}^N λj L(vj) ≤ 0 .
Suppose now that

G(ω) = Ḡ(θ(ω))

where θ is some parameter vector (which can depend on ω). Suppose that Ḡ depends in a multi-affine fashion on the components of the l-vector θ and each component of θ is bounded, that is,

θ̲k ≤ θk(ω) ≤ θ̄k   for k = 1, . . . , l

where θ̲1, . . . , θ̲l and θ̄1, . . . , θ̄l are known constants. Then, for all ω, the matrix G(ω) can be expressed as a convex combination of the N = 2^l matrices G1, . . . , GN corresponding to the extreme values of the components of θ; these vertices Gj are given by

Gj = Ḡ(θ)   where θk = θ̲k or θ̄k for k = 1, . . . , l .   (21.8)

In applying this to show satisfaction of Condition 1 below, let ω = (t, x) and G(ω) = (A(t, x) C(t, x)). Then (Aj Cj) = Gj for j = 1, . . . , N.
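The 2^l-vertex construction is easy to verify for a concrete multi-affine map; here Ḡ(θ) = θ1θ2 with assumed bounds θ1 ∈ [1, 2], θ2 ∈ [−1, 3], and product-form convex weights reproduce Ḡ(θ) exactly:

```python
import numpy as np
from itertools import product

# Assumed multi-affine map: Gbar(theta) = theta1*theta2,
# with theta1 in [1, 2] and theta2 in [-1, 3] (l = 2, N = 2^l = 4).
lo = np.array([1.0, -1.0])
hi = np.array([2.0, 3.0])

def Gbar(theta):
    return theta[0] * theta[1]

theta = np.array([1.3, 0.7])       # an interior parameter value
mu = (theta - lo) / (hi - lo)      # per-component barycentric weight in [0, 1]

val = 0.0
for bits in product((0, 1), repeat=2):
    # weight of this vertex = product of per-component weights
    w = np.prod([mu[k] if b else 1 - mu[k] for k, b in enumerate(bits)])
    vertex = np.where(bits, hi, lo)      # one of the 4 extreme parameter values
    val += w * Gbar(vertex)

print(abs(val - Gbar(theta)) < 1e-12)    # True: convex combination recovers Gbar
```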
For all ω, the matrix G(ω) can be expressed as a convex combination of the following four matrices:

( 0 1 ) ,  ( 2 1 ) ,  ( 0 1 ) ,  ( 2 3 ) .
Remark 17 The following remark is also useful for obtaining polytopic models. Consider an n-vector-valued function g of two variables t and η which is differentiable with respect to its second variable and satisfies the following two conditions:

(a) g(t, 0) ≡ 0 .

(b) There are matrices G1, . . . , GN such that, for all t and η, the derivative matrix

∂g/∂η (t, η)

is a convex combination of G1, . . . , GN.
Then g(t, η) = G(t, η)η where G(t, η) is a convex combination of G1, . . . , GN. As an example, consider

g(t, η) = ( sin η1 ; η2/(1+|η2|) )

whose derivative matrix is

∂g/∂η (t, η) = [ cos η1  0 ; 0  1/(1+|η2|)² ] .

Hence, g(t, η) = Gη where G can be expressed as a convex combination of the following four matrices:

[ 1 0 ; 0 1 ] ,  [ 1 0 ; 0 0 ] ,  [ −1 0 ; 0 1 ] ,  [ −1 0 ; 0 0 ] .
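Remark 17 can be checked numerically for this example: by the mean value form, g(η) = G(η)η with G(η) = ∫₀¹ ∂g/∂η(sη) ds, and the sketch below approximates that integral:

```python
import numpy as np

# g(eta) = (sin eta1, eta2/(1+|eta2|)) from the example (no t dependence).
def Dg(eta):
    # derivative matrix of g (diagonal here)
    return np.diag([np.cos(eta[0]), 1.0 / (1.0 + abs(eta[1]))**2])

eta = np.array([0.8, -1.5])
s = np.linspace(0.0, 1.0, 10001)
G = np.mean([Dg(si * eta) for si in s], axis=0)   # approximates int_0^1 Dg(s*eta) ds

g = np.array([np.sin(eta[0]), eta[1] / (1.0 + abs(eta[1]))])
print(np.allclose(G @ eta, g, atol=1e-3))         # True: g(eta) = G(eta) eta
```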
21.2.2
(21.9)
(21.10)
where the matrix (A C) (which can depend on t, x) is a convex combination of (A1 C1 ) , . . . , (AN CN ).
So, when Condition 1 is satisfied, system (21.1) can be described by
x = A(t, x)x
z = C(t, x)x
(21.11a)
(21.11b)
where (A(t, x) C(t, x)) is contained in the polytope whose vertices are (A1 C1 ), . . . , (AN CN ).
Hence, we sometimes refer to a system satisfying Condition 1 as a polytopic uncertain/nonlinear
system.
Suppose that

P Aj + A∗j P + C∗j Cj ≤ 0   for j = 1, . . . , N .   (21.12)

Using a Schur complement result, the above inequalities can be expressed as

[ P Aj + A∗j P  C∗j ; Cj  −I ] ≤ 0   for j = 1, . . . , N .   (21.13)
Since the above inequalities are affine in (Aj Cj), and (A(t, x) C(t, x)) is a convex combination of (A1 C1), . . . , (AN CN), it now follows that

[ P A(t, x) + A(t, x)∗ P  C(t, x)∗ ; C(t, x)  −I ] ≤ 0

for all t and x. Reusing the Schur complement now results in

P A(t, x) + A(t, x)∗ P + C(t, x)∗ C(t, x) ≤ 0

for all t and x. Post- and pre-multiplying this inequality by x and its transpose yields

2x∗ P A(t, x)x + (C(t, x)x)∗ C(t, x)x ≤ 0

that is,

2x∗ P f(t, x) + h(t, x)∗ h(t, x) ≤ 0

for all t and x; thus, (21.2) holds with V(x) = x∗ P x. Hence every solution of the system satisfies

∫_{t0}∞ ||z(t)||² dt ≤ x∗0 P x0   (21.14)
where x0 = x(t0 ).
To obtain a performance estimate for a fixed initial state one could minimize β subject to LMIs (21.13) and

x∗0 P x0 − β ≤ 0 .   (21.15)

Note that, if we let S = P⁻¹ then (21.13) can be expressed as

[ Aj S + S A∗j  S C∗j ; Cj S  −I ] ≤ 0   for j = 1, . . . , N .   (21.16)

(21.17)
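The Schur complement step used to pass between (21.12) and (21.13) can be spot-checked numerically (the random X = −cI and Y below are assumed test data):

```python
import numpy as np

rng = np.random.default_rng(1)

def neg_semidef(M, tol=1e-9):
    # True if the symmetric matrix M is negative semidefinite (within tol)
    return np.linalg.eigvalsh(M).max() <= tol

# For symmetric X, M = [[X, Y'], [Y, -I]] <= 0  iff  X + Y'Y <= 0.
agree = True
for _ in range(200):
    Y = rng.normal(size=(2, 3))
    X = -rng.uniform(0.1, 10.0) * np.eye(3)       # assumed test data: X = -c I
    M = np.block([[X, Y.T], [Y, -np.eye(2)]])
    agree = agree and (neg_semidef(M) == neg_semidef(X + Y.T @ Y))
print(agree)    # True
```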
21.3

21.3.1

Consider

ẋ = Ax + Bu   (21.19a)
z = Cx + Du   (21.19b)

In this case

∫₀∞ ||z(t)||² dt = ∫₀∞ (Cx + Du)∗(Cx + Du) dt .

When C∗D = 0, this reduces to the usual performance encountered in LQR control, that is,

∫₀∞ ||z(t)||² dt = ∫₀∞ x(t)∗ Q x(t) + u(t)∗ R u(t) dt ;

here

Q = C∗C   and   R = D∗D .
Suppose

u = Kx   (21.20)

(21.21a)
(21.21b)
(21.22)

where P = P∗ and

(21.23)

K = −(D∗D)⁻¹(B∗P + D∗C) .   (21.24)

This is the LQR control gain matrix. Note that P is a stabilizing solution if A − B(D∗D)⁻¹(B∗P + D∗C) is asymptotically stable. In this case, it can be shown that P ≤ P̄ where P̄ is any matrix satisfying inequality (21.22) with a stabilizing K.
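With C∗D = 0 the gain reduces to the familiar LQR gain. A quick check with SciPy (double-integrator data assumed for illustration; sign convention u = −Kx):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed double-integrator example with C'D = 0: Q = C'C = I, R = D'D = 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # LQR gain with convention u = -Kx

# Riccati residual and closed-loop stability
res = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
print(np.allclose(res, 0, atol=1e-8))               # residual ~ 0
print(np.linalg.eigvals(A - B @ K).real.max() < 0)  # closed loop Hurwitz
```

For this example the stabilizing solution gives K = (1, √3), which can be verified by hand from the ARE.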
21.3.2
(21.25a)
(21.25b)
(21.26)
(21.27)
(21.28a)
(21.28b)
where (A(t, x) B(t, x) C(t, x) D(t, x)) is contained in the polytope whose vertices are (A1 B1 C1 D1), . . . , (AN BN CN DN). Hence, we sometimes refer to a system satisfying Condition 2 as a polytopic uncertain/nonlinear system.
(21.29)
(21.30a)
(21.30b)
Note that this system satisfies Condition 1 with vertex matrices (A1 + B1K, C1 + D1K), . . . , (AN + BN K, CN + DN K). Hence

∫_{t0}∞ ||z(t)||² dt ≤ x∗0 P x0

where S = P⁻¹,

[ Aj S + Bj L + S A∗j + L∗B∗j   S C∗j + L∗D∗j ; Cj S + Dj L   −I ] ≤ 0   for j = 1, . . . , N   (21.31)

and

K = LS⁻¹ .
21.3.3
(21.32)
(21.33a)
(21.33b)
Condition 3 There are matrices Aj, Bj, C1j, D1j, . . . , Cpj, Dpj, j = 1, . . . , N so that, for each t, x and u,

F(t, x, u) = Ax + Bu   (21.34)
Hi(t, x, u) = Ci x + Di u   for i = 1, . . . , p   (21.35)

where the matrix (A B C1 D1 · · · Cp Dp) (which can depend on t, x, u) is a convex combination of (A1 B1 C11 D11 · · · Cp1 Dp1), . . . , (AN BN C1N D1N · · · CpN DpN). Then

∫_{t0}∞ ||zi(t)||² dt ≤ x∗0 P x0   for i = 1, . . . , p

where S = P⁻¹,

(21.36)   for i = 1, . . . , p and j = 1, . . . , N

and

K = LS⁻¹ .   (21.37)
Chapter 22
Appendix
22.1

||x|| = ( x̄1x1 + · · · + x̄nxn )^{1/2} = (x∗x)^{1/2} ;

for real x this equals (xT x)^{1/2}. Note that ||x|| ≥ 0, and ||x|| = 0 iff x = 0. In MATLAB:

>> norm([3; 4])
ans = 5

>> norm([1; j])
ans = 1.4142