A Chapter
Thierry Miquel
Classical control theory is intrinsically linked to the frequency domain and the s-plane. The main drawback of classical control theory is the difficulty of applying it to Multi-Input Multi-Output (MIMO) systems. Rudolf Emil Kalman (Hungarian-born American, May 19, 1930 – July 2, 2016) is one of the main protagonists of modern control theory¹. He introduced the concept of state, as well as linear algebra and matrices, into control theory. With this formalism, systems with multiple inputs and outputs can easily be treated.
The purpose of this lecture is to present an overview of modern control theory. More specifically, the objectives are the following:
− to learn how to model dynamic systems in the state-space and to obtain the state-space representation of transfer functions;
1 State-space representation
1.1 Introduction
1.2 State and output equations
1.3 From ordinary differential equations to state-space representation
1.3.1 Brunovsky's canonical form
1.3.2 Linearization of non-linear time-invariant state-space representation
1.4 From state-space representation to transfer function
1.5 Zeros of a transfer function - Rosenbrock's system matrix
1.6 Faddeev-Leverrier's method to compute (sI − A)^{-1}
1.7 Matrix inversion lemma
1.8 Interconnection of systems
1.8.1 Parallel interconnection
1.8.2 Series interconnection
1.8.3 Feedback interconnection

Appendices
A Refresher on linear algebra
A.1 Section overview
A.2 Vectors
A.2.1 Definitions
A.2.2 Vectors operations
A.3 Matrices
A.3.1 Definitions
Chapter 1
State-space representation
1.1 Introduction
This chapter focuses on the state-space representation as well as the conversion from a state-space representation to a transfer function. The state-space representation of interconnected systems is also presented.
The notion of state-space representation was developed in the former Soviet Union, where control engineers preferred to manipulate differential equations rather than transfer functions, which originated in the United States of America. The diffusion of the state-space representation to the Western world started after the first congress of the International Federation of Automatic Control (IFAC), which took place in Moscow in 1960.
One of the interests of the state-space representation is that it enables the analysis and control of Multi-Input Multi-Output (MIMO) linear systems to be carried out with the same formalism as for Single-Input Single-Output (SISO) linear systems.
Let's start with an example. We consider a system described by the following second-order linear differential equation with a damping ratio denoted m, an undamped natural frequency ω0 and a static gain K:

\frac{1}{\omega_0^2} \frac{d^2 y(t)}{dt^2} + \frac{2m}{\omega_0} \frac{dy(t)}{dt} + y(t) = K u(t)   (1.1)
Here y(t) denotes the output of the system whereas u(t) is its input. The preceding relationship is the input-output description of the system.
The transfer function is obtained thanks to the Laplace transform, assuming zero initial conditions (that is y(0) = \dot{y}(0) = 0). We get:

\frac{1}{\omega_0^2} s^2 Y(s) + \frac{2m}{\omega_0} s Y(s) + Y(s) = K U(s)
\Leftrightarrow F(s) = \frac{Y(s)}{U(s)} = \frac{K \omega_0^2}{s^2 + 2 m \omega_0 s + \omega_0^2}   (1.2)
Now, rather than computing the transfer function, let's assume that we wish to transform the preceding second-order differential equation into a single first-order vector differential equation. To do that we introduce two new variables x1(t) and x2(t) such that:

y(t) = K \omega_0^2 \, x_1(t), \quad x_2(t) = \frac{dx_1(t)}{dt}   (1.3)
Thanks to the new variables x1 and x2, the second-order differential equation (1.1) can now be written as follows:

\frac{dy(t)}{dt} = K\omega_0^2 \frac{dx_1(t)}{dt} = K\omega_0^2 \, x_2(t)
\frac{d^2 y(t)}{dt^2} = K\omega_0^2 \frac{dx_2(t)}{dt}
\Rightarrow \frac{dx_2(t)}{dt} + 2m\omega_0 \, x_2(t) + \omega_0^2 \, x_1(t) = u(t)   (1.4)
The second equation of (1.3) and equation (1.4) form a system of two coupled first-order linear differential equations:

\frac{dx_1(t)}{dt} = x_2(t)
\frac{dx_2(t)}{dt} = -2m\omega_0 \, x_2(t) - \omega_0^2 \, x_1(t) + u(t)   (1.5)
It is worth noticing that the variables x1(t) and x2(t) constitute a vector, denoted \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}: this is the state vector. Equation (1.5) can be rewritten in vector form as follows:

\frac{d}{dt}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega_0^2 & -2m\omega_0 \end{bmatrix}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)   (1.6)
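As an illustration, the vector model (1.6) can be simulated numerically. The sketch below (not part of the original notes; the values of m, ω0, K and the unit step input are arbitrary choices for the example) checks that the output settles at the static gain, as expected from (1.2).

```python
import numpy as np

# Illustrative sketch: simulate the state-space model (1.6) with a unit step
# input and check that the steady-state output equals K*u.
m, w0, K = 0.7, 2.0, 3.0       # damping ratio, natural frequency, static gain
A = np.array([[0.0, 1.0], [-w0**2, -2.0*m*w0]])
B = np.array([0.0, 1.0])
u = 1.0                         # unit step input

x = np.zeros(2)                 # state vector [x1, x2]
dt = 1e-3
for _ in range(int(20.0/dt)):   # forward-Euler integration over 20 s
    x = x + dt*(A @ x + B*u)

y = K*w0**2*x[0]                # output y(t) = K*w0^2*x1(t), from (1.3)
print(y)                        # settles near K*u = 3
```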
− B is the control matrix; it determines how the system input u(t) affects the state change. This is a constant n × m matrix, where m is the number of system inputs;
1
https://en.wikibooks.org/wiki/Control_Systems/State-Space_Equations
− C is the output matrix; it determines the relationship between the system state x(t) and the system outputs y(t). This is a constant p × n matrix, where p is the number of system outputs;
− D is the feedforward matrix; it allows the system input u(t) to affect the system output y(t) directly. This is a constant p × m matrix.
x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_{n-1}(t) \\ x_n(t) \end{bmatrix} := \begin{bmatrix} y(t) \\ \frac{dy(t)}{dt} \\ \frac{d^2 y(t)}{dt^2} \\ \vdots \\ \frac{d^{n-2} y(t)}{dt^{n-2}} \\ \frac{d^{n-1} y(t)}{dt^{n-1}} \end{bmatrix}   (1.11)
\dot{x}(t) = \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \vdots \\ \dot{x}_{n-1}(t) \\ \dot{x}_n(t) \end{bmatrix} = \begin{bmatrix} x_2(t) \\ x_3(t) \\ \vdots \\ x_n(t) \\ g(x_1, \cdots, x_n, u(t)) \end{bmatrix} := f(x(t), u(t))   (1.12)
Furthermore:

y(t) := x_1(t) = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} x(t)   (1.13)
Brunovsky's canonical form may be used to obtain the first-order ordinary differential equations.
In the preceding state equation, f is called a vector field. This is a time-invariant state-space representation because time t does not appear explicitly in the vector field f.
When the vector field f is non-linear, there exist only a few mathematical tools which enable one to capture the intrinsic behavior of the system. Nevertheless, this situation radically changes when the vector field f is linear both in the state x(t) and in the control u(t). The good news is that it is quite simple to approximate a non-linear model by a linear model around an equilibrium point.
We will first define what we mean by an equilibrium point, and then we will see how to get a linear model from a non-linear model.
An equilibrium point is a constant value of the pair (x(t), u(t)), which will
be denoted (xe , ue ), such that:
0 = f (xe , ue ) (1.15)
Where:

\delta x(t) = x(t) - x_e
\delta u(t) = u(t) - u_e   (1.17)

And where matrices A and B are constant matrices:

A = \left. \frac{\partial f(x,u)}{\partial x} \right|_{u=u_e,\, x=x_e}
B = \left. \frac{\partial f(x,u)}{\partial u} \right|_{u=u_e,\, x=x_e}   (1.18)

\dot{x}(t) = \dot{x}(t) - 0 = \dot{x}(t) - \dot{x}_e = \frac{d(x(t) - x_e)}{dt} = \delta\dot{x}(t)   (1.19)
Where:

y_e = h(x_e, u_e)   (1.23)

And where matrices C and D are constant matrices:

C = \left. \frac{\partial h(x,u)}{\partial x} \right|_{u=u_e,\, x=x_e}
D = \left. \frac{\partial h(x,u)}{\partial u} \right|_{u=u_e,\, x=x_e}   (1.24)
The Scilab code to get the state matrix A around the equilibrium point (xe = 0, ue = −2) is the following:
xe = zeros(3,1);
xe(3) = 0;
ue = -2;
disp(f(xe,ue), 'f(xe,ue)=');
disp(numderivative(list(f,ue),xe),'df/dx=');
Where:

x = \begin{bmatrix} V & \gamma & \psi & \phi \end{bmatrix}^T
u = \begin{bmatrix} n_x & n_z & p \end{bmatrix}^T   (1.45)
Let (xe, ue) be an equilibrium point defined by:

f(x_e, u_e) = 0   (1.46)

The equilibrium point (or trim) for the aircraft model is obtained by arbitrarily setting the values of the state vector x_e = \begin{bmatrix} V_e & \gamma_e & \psi_e & \phi_e \end{bmatrix}^T, which are the airspeed, flight path angle, heading and bank angle, respectively. From that
We get:

p_e = 0
\phi_e = 0
n_{ze} = \frac{\cos(\gamma_e)}{\cos(\phi_e)}, \text{ here } \phi_e = 0 \Rightarrow n_{ze} = \cos(\gamma_e)
n_{xe} = \sin(\gamma_e)   (1.48)
The linearization of the vector field f around the equilibrium point (xe, ue) reads:

\delta\dot{x}(t) \approx \left. \frac{\partial f(x,u)}{\partial x} \right|_{u=u_e,\, x=x_e} \delta x(t) + \left. \frac{\partial f(x,u)}{\partial u} \right|_{u=u_e,\, x=x_e} \delta u(t)   (1.50)
Assuming level flight (γe = 0), we get the following expression of the state vector at the equilibrium:

x_e = \begin{bmatrix} V_e \\ \gamma_e = 0 \\ \psi_e \\ \phi_e = 0 \end{bmatrix}   (1.51)

Thus the control vector at the equilibrium reads:

u_e = \begin{bmatrix} n_{xe} = \sin(\gamma_e) = 0 \\ n_{ze} = \cos(\gamma_e) = 1 \\ p_e = 0 \end{bmatrix}   (1.52)
Consequently:

\left. \frac{\partial f(x,u)}{\partial x} \right|_{u=u_e,\, x=x_e} = \left. \begin{bmatrix} 0 & -g\cos(\gamma) & 0 & 0 \\ -\frac{g}{V^2}\left(n_z\cos(\phi) - \cos(\gamma)\right) & \frac{g}{V}\sin(\gamma) & 0 & -\frac{g}{V} n_z \sin(\phi) \\ -\frac{g}{V^2} \frac{\sin(\phi)}{\cos(\gamma)} n_z & \frac{g}{V} \frac{\sin(\phi)\sin(\gamma)}{\cos^2(\gamma)} n_z & 0 & \frac{g}{V} \frac{\cos(\phi)}{\cos(\gamma)} n_z \\ 0 & 0 & 0 & 0 \end{bmatrix} \right|_{x=x_e,\, u=u_e}
= \begin{bmatrix} 0 & -g & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{g}{V_e} \\ 0 & 0 & 0 & 0 \end{bmatrix}   (1.53)
1.4. From state-space representation to transfer function 21
And:

\left. \frac{\partial f(x,u)}{\partial u} \right|_{u=u_e,\, x=x_e} = \left. \begin{bmatrix} g & 0 & 0 \\ 0 & \frac{g}{V}\cos(\phi) & 0 \\ 0 & \frac{g}{V} \frac{\sin(\phi)}{\cos(\gamma)} & 0 \\ 0 & 0 & 1 \end{bmatrix} \right|_{V=V_e,\; \gamma=\gamma_e=0,\; n_z=n_{ze}=\cos(\gamma_e)=1,\; \phi=\phi_e=0}
= \begin{bmatrix} g & 0 & 0 \\ 0 & \frac{g}{V_e} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}   (1.54)
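The two Jacobians above can be cross-checked numerically. The sketch below assumes the point-mass guidance model implied by equations (1.48)-(1.54) (this reconstruction of f is an assumption, as is the trim airspeed Ve = 100 m/s); central-difference Jacobians then reproduce the matrices of (1.53) and (1.54).

```python
import numpy as np

g = 9.81

# Assumed point-mass model, consistent with (1.48)-(1.54):
# state x = [V, gamma, psi, phi], input u = [nx, nz, p]
def f(x, u):
    V, gam, psi, phi = x
    nx, nz, p = u
    return np.array([
        g*(nx - np.sin(gam)),                   # dV/dt
        (g/V)*(nz*np.cos(phi) - np.cos(gam)),   # dgamma/dt
        (g/(V*np.cos(gam)))*nz*np.sin(phi),     # dpsi/dt
        p,                                      # dphi/dt
    ])

def num_jac(fun, z, h=1e-6):
    """Central-difference Jacobian of fun at z (a numderivative analogue)."""
    cols = []
    for i in range(z.size):
        dz = np.zeros_like(z); dz[i] = h
        cols.append((fun(z + dz) - fun(z - dz))/(2*h))
    return np.column_stack(cols)

Ve = 100.0                            # assumed trim airspeed
xe = np.array([Ve, 0.0, 0.3, 0.0])    # level flight: gamma_e = phi_e = 0
ue = np.array([0.0, 1.0, 0.0])        # nx_e = 0, nz_e = 1, p_e = 0

A = num_jac(lambda x: f(x, ue), xe)   # matches (1.53)
B = num_jac(lambda u: f(xe, u), ue)   # matches (1.54)
print(np.round(A, 6))
print(np.round(B, 6))
```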
The transfer function of the system relates the Laplace transform of the output vector, Y(s) = L[y(t)], to the Laplace transform of the input vector, U(s) = L[u(t)]. Using this result in the second equation of (1.58) leads to the expression of the transfer function F(s) of the system:

Y(s) = C X(s) + D U(s) = \left( C(sI - A)^{-1}B + D \right) U(s) := F(s) U(s)   (1.60)

The transfer function F(s) of the system therefore has the following expression: F(s) = C(sI - A)^{-1}B + D.
From the fact that the transfer function reads F(s) = C(sI - A)^{-1}B + D, the following relationship holds:

\begin{bmatrix} I & 0 \\ -C(sI-A)^{-1} & I \end{bmatrix} R(s) = \begin{bmatrix} I & 0 \\ -C(sI-A)^{-1} & I \end{bmatrix} \begin{bmatrix} sI-A & -B \\ C & D \end{bmatrix} = \begin{bmatrix} sI-A & -B \\ 0 & F(s) \end{bmatrix}   (1.65)

Matrix \begin{bmatrix} I & 0 \\ -C(sI-A)^{-1} & I \end{bmatrix} is a square matrix for which the following relationship holds:

\det \begin{bmatrix} I & 0 \\ -C(sI-A)^{-1} & I \end{bmatrix} = 1   (1.66)
Now assume that R(s) is a square matrix. Using the property det(XY) = det(X) det(Y), we get the following property for Rosenbrock's system matrix R(s):

\det\left( \begin{bmatrix} I & 0 \\ -C(sI-A)^{-1} & I \end{bmatrix} R(s) \right) = \det \begin{bmatrix} sI-A & -B \\ 0 & F(s) \end{bmatrix}
\Rightarrow \det \begin{bmatrix} I & 0 \\ -C(sI-A)^{-1} & I \end{bmatrix} \det(R(s)) = \det(sI-A)\, \det(F(s))
\Rightarrow \det(R(s)) = \det(sI-A)\, \det(F(s))   (1.67)

For SISO systems we have det(F(s)) = F(s), and consequently the preceding property reduces as follows:

\det(F(s)) = F(s) \Rightarrow F(s) = \frac{\det(R(s))}{\det(sI-A)}   (1.68)
For non-square matrices, Sylvester's rank inequality states that if X is an m × n matrix and Y is an n × k matrix, then the following relationship holds:

\text{rank}(X) + \text{rank}(Y) - n \leq \text{rank}(XY) \leq \min\left(\text{rank}(X), \text{rank}(Y)\right)   (1.69)
For MIMO systems, the transfer function between input i and output j is given by:

F_{ij}(s) = \frac{\det \begin{bmatrix} sI-A & -b_i \\ c_j^T & d_{ij} \end{bmatrix}}{\det(sI-A)}   (1.70)
4
https://en.wikipedia.org/wiki/Rosenbrock_system_matrix
x(t) = x_0 e^{zt}
u(t) = u_0 e^{zt}   (1.71)
Imposing a null output vector y(t), we get from the state-space representation (1.9):
This relationship holds for a non-zero input vector u(t) = u_0 e^{zt} and a non-zero state vector x(t) = x_0 e^{zt} when the value z is chosen such that R(z) is not invertible (R(s) is assumed to be square); in such a situation the (transmission) zeros are the values of s such that det(R(s)) = 0. We thus retrieve Rosenbrock's result.
Example 1.3. Let's consider the following state-space representation:

\dot{x}(t) = \begin{bmatrix} -7 & -12 \\ 1 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t)
y(t) = \begin{bmatrix} 1 & 2 \end{bmatrix} x(t)   (1.74)
It can be checked that the denominator of the transfer function F(s) is also the determinant of matrix sI − A:

\det(sI - A) = \det \begin{bmatrix} s+7 & 12 \\ -1 & s \end{bmatrix} = s^2 + 7s + 12   (1.76)
Furthermore, as far as F(s) is the transfer function of a SISO system, it can also be checked that its numerator can be obtained thanks to the following relationship:

\det \begin{bmatrix} sI-A & -B \\ C & D \end{bmatrix} = \det \begin{bmatrix} s+7 & 12 & -1 \\ -1 & s & 0 \\ 1 & 2 & 0 \end{bmatrix} = s + 2   (1.77)
Thus the only (transmission) zero for this system is s = −2.
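Example 1.3 can be checked numerically. The sketch below (using only numpy, not part of the original text) recovers the denominator from the characteristic polynomial of A, verifies that det R(s) vanishes at the transmission zero s = −2, and compares F(s) = C(sI − A)⁻¹B + D with (s+2)/(s²+7s+12) at a sample point.

```python
import numpy as np

A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 2.0]])
D = np.array([[0.0]])

# Denominator: characteristic polynomial of A -> s^2 + 7s + 12
den = np.poly(A)

def R(s):
    """Rosenbrock's system matrix [[sI - A, -B], [C, D]]."""
    return np.block([[s*np.eye(2) - A, -B], [C, D]])

# det(R(s)) is the numerator; it vanishes at the transmission zero s = -2
detR_at_zero = np.linalg.det(R(-2.0))

# Cross-check F(s) = C (sI - A)^{-1} B + D against (s+2)/(s^2+7s+12)
s = 1.0 + 0.5j
F = (C @ np.linalg.inv(s*np.eye(2) - A) @ B + D)[0, 0]
print(den, detR_at_zero, F)
```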
The rest of the proof can be found in the paper of Shui-Hung Hou 6 .
Then:

(sI - A)^{-1} = \frac{F_0 s + F_1}{\det(sI-A)} = \frac{1}{(s-1)(s+5)} \begin{bmatrix} s+5 & 2 \\ 0 & s-1 \end{bmatrix} = \begin{bmatrix} \frac{1}{s-1} & \frac{2}{(s-1)(s+5)} \\ 0 & \frac{1}{s+5} \end{bmatrix}   (1.87)
Then:

(sI - A)^{-1} = \frac{F_0 s^2 + F_1 s + F_2}{\det(sI-A)}
= \frac{1}{s^3 - 4s^2 + 5s - 2} \begin{bmatrix} s^2-2s+1 & -s+1 & 0 \\ 0 & s^2-3s+2 & 0 \\ s-1 & -s+1 & s^2-3s+2 \end{bmatrix}
= \frac{1}{(s-2)(s-1)^2} \begin{bmatrix} (s-1)^2 & -(s-1) & 0 \\ 0 & (s-2)(s-1) & 0 \\ s-1 & -(s-1) & (s-2)(s-1) \end{bmatrix}
= \begin{bmatrix} \frac{1}{s-2} & \frac{-1}{(s-1)(s-2)} & 0 \\ 0 & \frac{1}{s-1} & 0 \\ \frac{1}{(s-1)(s-2)} & \frac{-1}{(s-1)(s-2)} & \frac{1}{s-1} \end{bmatrix}   (1.93)
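The Faddeev-Leverrier recursion that produces these polynomial matrices is short to code. In the sketch below, the test matrix A is an assumption chosen to be consistent with example (1.87), not a matrix taken from the text.

```python
import numpy as np

def faddeev_leverrier(A):
    """Return the coefficients of det(sI - A) and the matrices F_k of
    (sI - A)^{-1} = (F0 s^{n-1} + F1 s^{n-2} + ...) / det(sI - A)."""
    n = A.shape[0]
    Fk = np.eye(n)                  # F0 = I
    coeffs = [1.0]                  # leading coefficient of det(sI - A)
    mats = [Fk]
    for k in range(1, n + 1):
        AF = A @ Fk
        c = -np.trace(AF)/k         # next characteristic-polynomial coefficient
        coeffs.append(c)
        Fk = AF + c*np.eye(n)       # F_k = A F_{k-1} + c_k I
        if k < n:
            mats.append(Fk)
    return np.array(coeffs), mats

A = np.array([[1.0, 2.0], [0.0, -5.0]])   # assumed matrix behind (1.87)
coeffs, mats = faddeev_leverrier(A)
print(coeffs)        # [1, 4, -5], i.e. det(sI - A) = (s-1)(s+5)
print(mats[1])       # F1 = [[5, 2], [0, -1]], matching (1.87)
```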
This result can also be easily retrieved by summing the realization of each transfer function.
We finally get:
F(s) = (I − F1 (s)F2 (s))−1 F1 (s) (1.111)
\dot{x}(t) = A_f x(t) + \begin{bmatrix} B_1 - B_1 K_2 (I + D_1 K_2)^{-1} D_1 \\ 0 \end{bmatrix} u(t)
A_f = \begin{bmatrix} A_1 - B_1 K_2 (I + D_1 K_2)^{-1} C_1 & 0 \\ 0 & 0 \end{bmatrix}   (1.115)
y(t) = \begin{bmatrix} (I + D_1 K_2)^{-1} C_1 & 0 \end{bmatrix} x(t) + (I + D_1 K_2)^{-1} D_1 u(t)

It is clear from the preceding equation that the state vector of the system reduces to its first component x1(t). Thus the preceding state-space realization reads:

\dot{x}_1(t) = \left( A_1 - B_1 K_2 (I + D_1 K_2)^{-1} C_1 \right) x_1(t) + \left( B_1 - B_1 K_2 (I + D_1 K_2)^{-1} D_1 \right) u(t)
y(t) = (I + D_1 K_2)^{-1} C_1 x_1(t) + (I + D_1 K_2)^{-1} D_1 u(t)   (1.116)
Chapter 2
Realization of transfer functions

2.1 Introduction
A realization of a transfer function F(s) consists in finding a state-space model given the input-output description of the system through its transfer function. More specifically, we call a realization of a transfer function F(s) any quadruplet (A, B, C, D) such that:

F(s) = C(sI - A)^{-1}B + D   (2.1)

We say that a transfer function F(s) is realizable if F(s) is rational and proper. The state-space representation of a transfer function F(s) is then:

\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)   (2.2)

This chapter focuses on the canonical realizations of transfer functions, namely the controllable canonical form, the observable canonical form and the diagonal (or modal) form. Realizations of SISO (Single-Input Single-Output), SIMO (Single-Input Multiple-Outputs) and MIMO (Multiple-Inputs Multiple-Outputs) linear time-invariant systems will be presented.
We can match the preceding equations with the general form of a state-space representation (2.2) by rewriting them as follows:

\dot{x}_n(t) = A_n x_n(t) + B_n u(t)
y(t) = C_n x_n(t) + D u(t)   (2.7)

Where:

A_n = P_n^{-1} A P_n
B_n = P_n^{-1} B
C_n = C P_n   (2.8)
Now use the fact that I = P_n^{-1} P_n and that (XYZ)^{-1} = Z^{-1} Y^{-1} X^{-1} (as soon as matrices X, Y and Z are invertible) to get:

F(s) = C P_n \left( s P_n^{-1} P_n - P_n^{-1} A P_n \right)^{-1} P_n^{-1} B + D
= C P_n \left( P_n^{-1} (sI - A) P_n \right)^{-1} P_n^{-1} B + D
= C P_n P_n^{-1} (sI - A)^{-1} P_n P_n^{-1} B + D
= C (sI - A)^{-1} B + D   (2.11)

P_n = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}   (2.12)
Thus the state vector x(t) can be decomposed along the components of the change of basis matrix Pn.
The inverse of the change of basis matrix Pn can be written in terms of rows as follows:

P_n^{-1} = \begin{bmatrix} w_1^T \\ w_2^T \\ \vdots \\ w_n^T \end{bmatrix}   (2.14)
Since P_n^{-1} P_n = I, it follows that:

w_i^T v_j = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}   (2.16)
Where N(s) and D(s) are polynomials in s such that the degree of N(s) is strictly lower than the degree of D(s):

\frac{N(s)}{D(s)} = C(sI - A)^{-1}B   (2.22)
That is:
The use of the components of the state vector which have been previously defined leads to the following expression of the output y(t):
We are looking for the controllable canonical form of this transfer function. First we have to set to 1 the leading coefficient of the polynomial which appears in the denominator of the transfer function F(s). We get:

F(s) = \frac{0.5 s^2 + 1.5 s + 1}{1 \times s^2 + 7s + 12}   (2.34)
We finally get:

F(s) = \frac{N(s)}{D(s)} + d = \frac{-2s - 5}{s^2 + 7s + 12} + 0.5   (2.37)
2.3. Realization of SISO transfer function 39
Then we apply Equation (2.23) to get the controllable canonical form of F(s):

A_c = \begin{bmatrix} 0 & 1 \\ -a_0 & -a_1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -12 & -7 \end{bmatrix}
B_c = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
C_c = \begin{bmatrix} n_0 & n_1 \end{bmatrix} = \begin{bmatrix} -5 & -2 \end{bmatrix}
D = 0.5   (2.38)
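As a quick numerical sanity check (a sketch, not part of the original text), the quadruplet (2.38) indeed realizes F(s) = (0.5s² + 1.5s + 1)/(s² + 7s + 12):

```python
import numpy as np

Ac = np.array([[0.0, 1.0], [-12.0, -7.0]])
Bc = np.array([[0.0], [1.0]])
Cc = np.array([[-5.0, -2.0]])
d = 0.5

def F_state_space(s):
    # F(s) = Cc (sI - Ac)^{-1} Bc + d
    return (Cc @ np.linalg.inv(s*np.eye(2) - Ac) @ Bc)[0, 0] + d

def F_rational(s):
    return (0.5*s**2 + 1.5*s + 1)/(s**2 + 7*s + 12)

samples = [0.0, 1.0, 2.0 + 1.0j]
errs = [abs(F_state_space(s) - F_rational(s)) for s in samples]
print(errs)   # all close to zero
```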
(sI - A_c)^{-1} B_c = \frac{1}{\det(sI - A_c)} \begin{bmatrix} \ast & \cdots & \ast & 1 \\ \ast & \cdots & \ast & s \\ \vdots & & & \vdots \\ \ast & \cdots & \ast & s^{n-1} \end{bmatrix} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} = \frac{1}{\det(sI - A_c)} \begin{bmatrix} 1 \\ s \\ \vdots \\ s^{n-1} \end{bmatrix}
\Rightarrow C_c (sI - A_c)^{-1} B_c = \frac{1}{\det(sI - A_c)} \, C_c \begin{bmatrix} 1 \\ s \\ \vdots \\ s^{n-1} \end{bmatrix}   (2.39)
More generally, the characteristic polynomial of the state matrix A sets the denominator of the transfer function, whereas matrices B and C set the coefficients of the numerator of a strictly proper transfer function (that is, a transfer function where D = 0). Consequently the state matrix A sets the poles of a transfer function whereas matrices B and C set its zeros.
Q_c = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}   (2.41)
At that point, matrices Ac and Bc are known. The only matrix which needs to be computed is the output matrix Cc. Let Pc be the change of basis matrix which defines the new state vector in the controllable canonical basis. From (2.8) we get:

C_c = C P_c   (2.43)
And:

A_c = P_c^{-1} A P_c
B_c = P_c^{-1} B   (2.44)
Using these two last equations within (2.42), and the fact that \left( P_c^{-1} A P_c \right)^k = \underbrace{P_c^{-1} A P_c \cdots P_c^{-1} A P_c}_{k \text{ times}} = P_c^{-1} A^k P_c, we get the following expression of matrix Qcc:

Q_{cc} = \begin{bmatrix} B_c & A_c B_c & \cdots & A_c^{n-1} B_c \end{bmatrix}
= \begin{bmatrix} P_c^{-1} B & P_c^{-1} A P_c P_c^{-1} B & \cdots & \left( P_c^{-1} A P_c \right)^{n-1} P_c^{-1} B \end{bmatrix}
= \begin{bmatrix} P_c^{-1} B & P_c^{-1} A B & \cdots & P_c^{-1} A^{n-1} B \end{bmatrix}
= P_c^{-1} \begin{bmatrix} B & AB & \cdots & A^{n-1} B \end{bmatrix}
= P_c^{-1} Q_c   (2.45)
We finally get:

P_c^{-1} = Q_{cc} Q_c^{-1} \Leftrightarrow P_c = Q_c Q_{cc}^{-1}   (2.46)
Furthermore, the controllable canonical form (2.23) is obtained by the following similarity transformation, where q_c^T denotes the last row of the inverse of the controllability matrix Qc:

Q_c^{-1} = \begin{bmatrix} \ast \\ \vdots \\ \ast \\ q_c^T \end{bmatrix} \Rightarrow P_c^{-1} = \begin{bmatrix} q_c^T \\ q_c^T A \\ \vdots \\ q_c^T A^{n-1} \end{bmatrix}   (2.48)
To get this result, we write from (2.8) the following similarity transformation:

A_c = P_c^{-1} A P_c \Leftrightarrow A_c P_c^{-1} = P_c^{-1} A   (2.49)

That is:

\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix} \begin{bmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_n^T \end{bmatrix} = \begin{bmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_n^T \end{bmatrix} A   (2.53)
Working with the first n − 1 rows gives the following equations:

r_2^T = r_1^T A
r_3^T = r_2^T A = r_1^T A^2
\vdots
r_n^T = r_{n-1}^T A = r_1^T A^{n-1}   (2.54)
Q_c = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}   (2.58)
From the preceding equation it is clear that r_1^T is the last row of the inverse of the controllability matrix Qc. We will denote it q_c^T:

r_1^T := q_c^T   (2.60)

Having the expression of r_1^T, we can then go back to (2.54) and construct all the rows of P_c^{-1}.
where:

A = \begin{bmatrix} 28.5 & -17.5 \\ 58.5 & -35.5 \end{bmatrix}
B = \begin{bmatrix} 2 \\ 4 \end{bmatrix}
C = \begin{bmatrix} 7 & -4 \end{bmatrix}
D = 0.5   (2.62)
We are looking for the controllable canonical form of this state-space representation. First we build the controllability matrix Qc from (2.41):

Q_c = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 2 & -13 \\ 4 & -25 \end{bmatrix}   (2.63)
And:

P_c^{-1} = \begin{bmatrix} q_c^T \\ q_c^T A \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 1.5 & -0.5 \end{bmatrix}   (2.70)
Using the similarity relationships (2.8), we finally get the following controllable canonical form of the state-space representation:

A_c = P_c^{-1} A P_c = \begin{bmatrix} -2 & 1 \\ 1.5 & -0.5 \end{bmatrix} \begin{bmatrix} 28.5 & -17.5 \\ 58.5 & -35.5 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -12 & -7 \end{bmatrix}
B_c = P_c^{-1} B = \begin{bmatrix} -2 & 1 \\ 1.5 & -0.5 \end{bmatrix} \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
C_c = C P_c = \begin{bmatrix} 7 & -4 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} -5 & -2 \end{bmatrix}   (2.71)
Iterative method
Equivalently, the change of basis matrix Pc of the similarity transformation can be obtained as follows:

P_c = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}   (2.72)

where det(sI − A) = s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0 and:

c_n = B
c_k = A c_{k+1} + a_k B \quad \forall \; n-1 \geq k \geq 1   (2.73)
To get this result, we write from (2.8) the following similarity transformation:

A_c = P_c^{-1} A P_c \Leftrightarrow P_c A_c = A P_c   (2.74)
P_c = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}   (2.77)
Thus the columns of the unknown matrix Pc can be obtained thanks to the similarity transformation:

P_c A_c = A P_c \Leftrightarrow \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix} = A \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}   (2.78)
That is:

\begin{cases} 0 = a_0 c_n + A c_1 \\ c_1 = a_1 c_n + A c_2 \\ \vdots \\ c_{n-1} = a_{n-1} c_n + A c_n \end{cases} \Leftrightarrow \begin{cases} 0 = a_0 c_n + A c_1 \\ c_k = A c_{k+1} + a_k c_n \quad \forall \; n-1 \geq k \geq 1 \end{cases}   (2.79)
Combining the last equation of (2.79) with (2.80) gives the proposed result:

c_n = B
c_k = A c_{k+1} + a_k B \quad \forall \; n-1 \geq k \geq 1   (2.81)
where:

A = \begin{bmatrix} 28.5 & -17.5 \\ 58.5 & -35.5 \end{bmatrix}
B = \begin{bmatrix} 2 \\ 4 \end{bmatrix}
C = \begin{bmatrix} 7 & -4 \end{bmatrix}
D = 0.5   (2.83)
This is the same state-space representation as the one which has been used in the previous example. We have seen that the similarity transformation which leads to the controllable canonical form is the following:

P_c^{-1} = \begin{bmatrix} -2 & 1 \\ 1.5 & -0.5 \end{bmatrix}   (2.84)
\det(sI - A) = s^2 + 7s + 12

c_2 = B = \begin{bmatrix} 2 \\ 4 \end{bmatrix}
c_1 = A c_2 + a_1 B = \begin{bmatrix} 28.5 & -17.5 \\ 58.5 & -35.5 \end{bmatrix} \begin{bmatrix} 2 \\ 4 \end{bmatrix} + 7 \begin{bmatrix} 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \end{bmatrix}   (2.86)
P_c = \begin{bmatrix} c_1 & c_2 \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}   (2.87)
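The iterative construction (2.73) is easy to code. The sketch below rebuilds the change of basis matrix Pc of (2.87) from A and B alone, with np.poly supplying the characteristic-polynomial coefficients:

```python
import numpy as np

# Sketch of the recursion (2.73): c_n = B, c_k = A c_{k+1} + a_k B.
A = np.array([[28.5, -17.5], [58.5, -35.5]])
B = np.array([2.0, 4.0])

a = np.poly(A)            # [1, a_{n-1}, ..., a_0] = [1, 7, 12]
n = A.shape[0]
cols = [B]                # c_n = B
for k in range(n - 1, 0, -1):
    ak = a[n - k]         # a_k, since a[i] holds a_{n-i}
    cols.insert(0, A @ cols[0] + ak*B)
Pc = np.column_stack(cols)
print(Pc)                 # [[1, 2], [3, 4]] as in (2.87)
```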
A_o = \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{bmatrix}
B_o = \begin{bmatrix} n_0 \\ n_1 \\ \vdots \\ n_{n-2} \\ n_{n-1} \end{bmatrix}
C_o = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}
D = d   (2.88)
An alternative observable canonical form is obtained by reversing the order of the state components (a reflection of (2.88) about the counter diagonal):

A_{oa} = \begin{bmatrix} -a_{n-1} & 1 & 0 & \cdots & 0 \\ -a_{n-2} & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ -a_1 & 0 & 0 & \cdots & 1 \\ -a_0 & 0 & 0 & \cdots & 0 \end{bmatrix}
B_{oa} = \begin{bmatrix} n_{n-1} \\ n_{n-2} \\ \vdots \\ n_1 \\ n_0 \end{bmatrix}
C_{oa} = \begin{bmatrix} 1 & 0 & \cdots & 0 & 0 \end{bmatrix}
D = d   (2.89)
To get the realization (2.88), we start by expressing the output Y(s) of the SISO system (2.19) as follows:

\frac{Y(s)}{U(s)} = \frac{N(s)}{D(s)} + d \Leftrightarrow (Y(s) - d\,U(s))\, D(s) = N(s)\, U(s)   (2.90)
That is:
Dividing by s^n we get:

\left( \frac{a_0}{s^n} + \frac{a_1}{s^{n-1}} + \frac{a_2}{s^{n-2}} + \cdots + \frac{a_{n-1}}{s} + 1 \right) (Y(s) - d\,U(s)) = \left( \frac{n_0}{s^n} + \frac{n_1}{s^{n-1}} + \frac{n_2}{s^{n-2}} + \cdots + \frac{n_{n-1}}{s} \right) U(s)   (2.92)
When regrouping the terms according to increasing powers of 1/s, we obtain:

Y(s) = d\,U(s) + \frac{1}{s}\left( \alpha_{n-1}U(s) - a_{n-1}Y(s) \right) + \frac{1}{s^2}\left( \alpha_{n-2}U(s) - a_{n-2}Y(s) \right) + \cdots + \frac{1}{s^n}\left( \alpha_0 U(s) - a_0 Y(s) \right)   (2.93)
Where:
αi = ni + d ai (2.94)
That is:

Y(s) = d\,U(s) + \frac{1}{s}\left( \alpha_{n-1}U(s) - a_{n-1}Y(s) + \frac{1}{s}\left( \alpha_{n-2}U(s) - a_{n-2}Y(s) + \cdots + \frac{1}{s}\left( \alpha_0 U(s) - a_0 Y(s) \right) \cdots \right) \right)   (2.95)
Then we define the Laplace transforms of the components of the state vector x(t) as follows:
sX1 (s) = α0 U (s) − a0 Y (s)
sX2 (s) = α1 U (s) − a1 Y (s) + X1 (s)
sX3 (s) = α2 U (s) − a2 Y (s) + X2 (s) (2.96)
..
.
sXn (s) = αn−1 U (s) − an−1 Y (s) + Xn−1 (s)
So we get:

Y(s) = d\,U(s) + \frac{1}{s}\, s X_n(s) = d\,U(s) + X_n(s)   (2.97)
Replacing Y(s) by d\,U(s) + X_n(s), and using the fact that \alpha_i = n_i + d\,a_i, Equation (2.96) is rewritten as follows:
sX_1(s) = \alpha_0 U(s) - a_0 (d\,U(s) + X_n(s)) = -a_0 X_n(s) + n_0 U(s)
sX_2(s) = \alpha_1 U(s) - a_1 (d\,U(s) + X_n(s)) + X_1(s) = X_1(s) - a_1 X_n(s) + n_1 U(s)
sX_3(s) = \alpha_2 U(s) - a_2 (d\,U(s) + X_n(s)) + X_2(s) = X_2(s) - a_2 X_n(s) + n_2 U(s)
\vdots
sX_n(s) = \alpha_{n-1} U(s) - a_{n-1} (d\,U(s) + X_n(s)) + X_{n-1}(s) = X_{n-1}(s) - a_{n-1} X_n(s) + n_{n-1} U(s)   (2.98)
And:

y(t) = x_n(t) + d\,u(t)   (2.100)

The preceding equations, written in vector form, lead to the observable canonical form of Equation (2.88).
Thus, by ordering the numerator and the denominator of the transfer function F(s) according to increasing powers of s, and taking care that the leading coefficient of the polynomial in the denominator is 1, the observable canonical form (2.88) of a SISO transfer function F(s) is immediate.
We are looking for the observable canonical form of this transfer function. As in the preceding example, we first set to 1 the leading coefficient of the polynomial which appears in the denominator of the transfer function F(s). We get:

F(s) = \frac{0.5 s^2 + 1.5 s + 1}{1 \times s^2 + 7s + 12}   (2.102)
Then we decompose F(s) as the sum of a strictly proper rational fraction and a constant coefficient d. The constant coefficient d is obtained thanks to the following relationship:

d = \lim_{s \to \infty} F(s) = \lim_{s \to \infty} \frac{0.5 s^2 + 1.5 s + 1}{1 \times s^2 + 7s + 12} = 0.5   (2.103)
We finally get:

F(s) = \frac{N(s)}{D(s)} + d = \frac{-2s - 5}{s^2 + 7s + 12} + 0.5   (2.105)
Then we apply Equation (2.88) to get the observable canonical form of F(s):

A_o = \begin{bmatrix} 0 & -a_0 \\ 1 & -a_1 \end{bmatrix} = \begin{bmatrix} 0 & -12 \\ 1 & -7 \end{bmatrix}
B_o = \begin{bmatrix} n_0 \\ n_1 \end{bmatrix} = \begin{bmatrix} -5 \\ -2 \end{bmatrix}
C_o = \begin{bmatrix} 0 & 1 \end{bmatrix}
D = 0.5   (2.106)
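As a sketch of a numerical check, the observable canonical form (2.106) can be verified to realize the same transfer function F(s) = (0.5s² + 1.5s + 1)/(s² + 7s + 12):

```python
import numpy as np

Ao = np.array([[0.0, -12.0], [1.0, -7.0]])
Bo = np.array([[-5.0], [-2.0]])
Co = np.array([[0.0, 1.0]])
d = 0.5

def F_obs(s):
    # F(s) = Co (sI - Ao)^{-1} Bo + d
    return (Co @ np.linalg.inv(s*np.eye(2) - Ao) @ Bo)[0, 0] + d

s = 1.0
err = abs(F_obs(s) - (0.5*s**2 + 1.5*s + 1)/(s**2 + 7*s + 12))
print(err)   # close to zero
```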
At that point, matrices Ao and Co are known. The only matrix which needs to be computed is the control matrix Bo. Let Po be the change of basis matrix which defines the new state vector in the observable canonical basis. From (2.8) we get:

B_o = P_o^{-1} B   (2.110)

And:

A_o = P_o^{-1} A P_o
C_o = C P_o   (2.111)
Using these last two equations within (2.109) leads to the following expression of matrix Qoo:

Q_{oo} = \begin{bmatrix} C_o \\ C_o A_o \\ \vdots \\ C_o A_o^{n-1} \end{bmatrix} = \begin{bmatrix} C P_o \\ C P_o P_o^{-1} A P_o \\ \vdots \\ C P_o \left( P_o^{-1} A P_o \right)^{n-1} \end{bmatrix} = \begin{bmatrix} C P_o \\ C A P_o \\ \vdots \\ C A^{n-1} P_o \end{bmatrix} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} P_o = Q_o P_o   (2.112)
We finally get:

P_o = Q_o^{-1} Q_{oo} \Leftrightarrow P_o^{-1} = Q_{oo}^{-1} Q_o   (2.113)
Furthermore the observable canonical form (2.88) is obtained by the
following similarity transformation:
To get this result, we write from (2.8) the following similarity transformation:

A_o = P_o^{-1} A P_o \Leftrightarrow P_o A_o = A P_o   (2.116)
P_o = \begin{bmatrix} c_1 & \cdots & c_n \end{bmatrix}   (2.119)
Thus the columns of the unknown change of basis matrix Po can be obtained thanks to the following similarity transformation:

P_o A_o = A P_o \Leftrightarrow \begin{bmatrix} c_1 & \cdots & c_n \end{bmatrix} \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{bmatrix} = A \begin{bmatrix} c_1 & \cdots & c_n \end{bmatrix}   (2.120)
Working with the first n − 1 columns gives the following equations:

c_2 = A c_1
c_3 = A c_2 = A^2 c_1
\vdots
c_n = A c_{n-1} = A^{n-1} c_1   (2.121)
From the preceding equation it is clear that c_1 is the last column of the inverse of the observability matrix Qo. We will denote it q_o:

c_1 := q_o   (2.127)
where:

A = \begin{bmatrix} 28.5 & -17.5 \\ 58.5 & -35.5 \end{bmatrix}
B = \begin{bmatrix} 2 \\ 4 \end{bmatrix}
C = \begin{bmatrix} 7 & -4 \end{bmatrix}
D = 0.5   (2.129)
We are looking for the observable canonical form of this state-space representation. First we build the observability matrix Qo from (2.108):

Q_o = \begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} 7 & -4 \\ -34.5 & 19.5 \end{bmatrix}   (2.130)
Iterative method
Equivalently, the inverse of the change of basis matrix Po of the similarity transformation can be obtained as follows:

P_o^{-1} = \begin{bmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_n^T \end{bmatrix}   (2.139)

where det(sI − A) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0 and:

r_n^T = C
r_k^T = r_{k+1}^T A + a_k C \quad \forall \; n-1 \geq k \geq 1   (2.140)
To get this result, we write from (2.8) the following similarity transformation:

A_o = P_o^{-1} A P_o \Leftrightarrow A_o P_o^{-1} = P_o^{-1} A   (2.141)
Let's denote det(sI − A) as follows:

\det(sI - A) = s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0   (2.142)

Thus the coefficients a_i of the state matrix Ao corresponding to the observable canonical form are known, and matrix Ao is written as follows:

A_o = \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{bmatrix}   (2.143)
Furthermore, let's write the inverse of the unknown change of basis matrix Po as follows:

P_o^{-1} = \begin{bmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_n^T \end{bmatrix}   (2.144)
Thus the rows of the unknown matrix P_o^{-1} can be obtained thanks to the similarity transformation:

A_o P_o^{-1} = P_o^{-1} A \Leftrightarrow \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{bmatrix} \begin{bmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_n^T \end{bmatrix} = \begin{bmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_n^T \end{bmatrix} A   (2.145)
That is:

\begin{cases} -a_0 r_n^T = r_1^T A \\ r_1^T - a_1 r_n^T = r_2^T A \\ \vdots \\ r_{n-1}^T - a_{n-1} r_n^T = r_n^T A \end{cases} \Leftrightarrow \begin{cases} 0 = r_1^T A + a_0 r_n^T \\ r_k^T = r_{k+1}^T A + a_k r_n^T \quad \forall \; n-1 \geq k \geq 1 \end{cases}   (2.146)
Furthermore, from (2.8) we get the relationship C_o = C P_o, which is rewritten as follows:

C_o P_o^{-1} = C \Leftrightarrow \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_n^T \end{bmatrix} = C \Rightarrow r_n^T = C   (2.147)
Combining the last equation of (2.146) with (2.147) gives the proposed result:

r_n^T = C
r_k^T = r_{k+1}^T A + a_k C \quad \forall \; n-1 \geq k \geq 1   (2.148)
where:

A = \begin{bmatrix} 28.5 & -17.5 \\ 58.5 & -35.5 \end{bmatrix}
B = \begin{bmatrix} 2 \\ 4 \end{bmatrix}
C = \begin{bmatrix} 7 & -4 \end{bmatrix}
D = 0.5   (2.150)
This is the same state-space representation as the one which has been used in the previous example. We have seen that the similarity transformation which leads to the observable canonical form is the following:

P_o = \frac{2}{3} \begin{bmatrix} -4 & 8.5 \\ -7 & 14.5 \end{bmatrix}   (2.151)
\det(sI - A) = s^2 + 7s + 12

r_2^T = C = \begin{bmatrix} 7 & -4 \end{bmatrix}
r_1^T = r_2^T A + a_1 C = \begin{bmatrix} 7 & -4 \end{bmatrix} \begin{bmatrix} 28.5 & -17.5 \\ 58.5 & -35.5 \end{bmatrix} + 7 \begin{bmatrix} 7 & -4 \end{bmatrix} = \begin{bmatrix} 14.5 & -8.5 \end{bmatrix}   (2.153)
Thus we retrieve the expression of matrix Po:

P_o^{-1} = \begin{bmatrix} r_1^T \\ r_2^T \end{bmatrix} = \begin{bmatrix} 14.5 & -8.5 \\ 7 & -4 \end{bmatrix}   (2.154)
ri = (s − λi )F (s)|s=λi (2.156)
ri = ci bi (2.157)
\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)   (2.163)

where:

A = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix}
B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}
C = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}
D = d   (2.164)
The two poles of F(s) are −3 and −4. Thus the partial fraction expansion of F(s) reads:

F(s) = \frac{r_1}{s+3} + \frac{r_2}{s+4} + d = \frac{r_1}{s+3} + \frac{r_2}{s+4} + 0.5   (2.169)
We finally get:

F(s) = \frac{N(s)}{D(s)} + d = \frac{1}{s+3} + \frac{-3}{s+4} + 0.5   (2.171)
Residues r1 and r2 are expressed for example as follows:

r_1 = 1 = 1 \times 1 = c_1 \times b_1
r_2 = -3 = -3 \times 1 = c_2 \times b_2   (2.172)
Then we apply Equation (2.164) to get the diagonal canonical form of F(s):

A = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} = \begin{bmatrix} -3 & 0 \\ 0 & -4 \end{bmatrix}
B = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -3 \end{bmatrix}
C = \begin{bmatrix} c_1 & c_2 \end{bmatrix} = \begin{bmatrix} 1 & 1 \end{bmatrix}
D = d = 0.5   (2.173)
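The diagonal realization (2.173) can be checked numerically against the original rational expression of F(s); the sketch below evaluates both at a sample point:

```python
import numpy as np

# Diagonal (modal) realization (2.173) built from the residues r1 = 1 at
# s = -3 and r2 = -3 at s = -4 of F(s) = (-2s-5)/(s^2+7s+12) + 0.5.
A = np.diag([-3.0, -4.0])
B = np.array([[1.0], [-3.0]])   # b1 = 1, b2 = -3
C = np.array([[1.0, 1.0]])      # c1 = c2 = 1, so r_i = c_i * b_i
d = 0.5

def F_modal(s):
    return (C @ np.linalg.inv(s*np.eye(2) - A) @ B)[0, 0] + d

s = 1.0j
err = abs(F_modal(s) - (0.5*s**2 + 1.5*s + 1)/(s**2 + 7*s + 12))
print(err)   # close to zero
```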
P_m = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}   (2.175)
It can be seen that the vectors v_i are the eigenvectors of matrix A. Indeed, let \lambda_i be the eigenvalues of A. Then:

A v_1 = \lambda_1 v_1
\vdots
A v_n = \lambda_n v_n   (2.176)
That is:

P_m \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix} = A P_m   (2.178)
Or equivalently:

\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix} = P_m^{-1} A P_m   (2.179)
The inverse of the change of basis matrix Pm can be written in terms of rows as follows:

P_m^{-1} = \begin{bmatrix} w_1^T \\ w_2^T \\ \vdots \\ w_n^T \end{bmatrix}   (2.180)
w_i^T v_j = 0 \quad \text{if } i \neq j   (2.183)

w_i^T v_j = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}   (2.184)
F(s) = \frac{r_1}{s - \lambda} + \frac{\bar{r}_1}{s - \bar{\lambda}}   (2.185)

Let α be the real part of the pole λ and β its imaginary part:

\lambda = \alpha + j\beta \Leftrightarrow \bar{\lambda} = \alpha - j\beta   (2.186)
Where:

r_1 = b_1 c_1 \Rightarrow \bar{r}_1 = \bar{b}_1 \bar{c}_1   (2.188)
It is clear that this diagonal form of the transfer function F(s) is complex. From the preceding realization we get the following equations:

\dot{x}_1(t) = (\alpha + j\beta)\, x_1(t) + b_1 u(t)
\dot{x}_2(t) = (\alpha - j\beta)\, x_2(t) + \bar{b}_1 u(t)   (2.189)
We deduce from the preceding equation that the state components x1 (t) and
x2 (t) are complex conjugate. Let xR (t) be the real part of x1 (t) and xI (t) its
imaginary part:
x1 (t) = xR (t) + jxI (t) ⇒ x2 (t) = x1 (t) = xR (t) − jxI (t) (2.190)
We deduce two new equations from the two preceding equations as follows: the first new equation is obtained by adding the two preceding equations and dividing by 2, the second one by subtracting them and dividing by 2j:

\dot{x}_R(t) = \alpha x_R(t) - \beta x_I(t) + \frac{b_1 + \bar{b}_1}{2} u(t)
\dot{x}_I(t) = \beta x_R(t) + \alpha x_I(t) + \frac{b_1 - \bar{b}_1}{2j} u(t)   (2.192)
We finally get:

F(s) = \frac{r_1}{s - \lambda_1} + \frac{\bar{r}_1}{s - \bar{\lambda}_1} = \frac{\frac{2-3j}{4}}{s - (1 + 2j)} + \frac{\frac{2+3j}{4}}{s - (1 - 2j)}   (2.199)
Then we apply Equation (2.164) to get the diagonal canonical form of F(s):

A_m = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \bar{\lambda}_1 \end{bmatrix} = \begin{bmatrix} \alpha + j\beta & 0 \\ 0 & \alpha - j\beta \end{bmatrix} = \begin{bmatrix} 1 + 2j & 0 \\ 0 & 1 - 2j \end{bmatrix}
B_m = \begin{bmatrix} b_1 \\ \bar{b}_1 \end{bmatrix} = \frac{1}{4} \begin{bmatrix} 1 \\ 1 \end{bmatrix}
C_m = \begin{bmatrix} c_1 & \bar{c}_1 \end{bmatrix} = \begin{bmatrix} 2 - 3j & 2 + 3j \end{bmatrix}
D = 0   (2.201)

For both realizations we can check that F(s) = C_m (sI - A_m)^{-1} B_m + D, but in the last realization the matrices (A_m, B_m, C_m, D) are real.
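The equivalence between the complex diagonal realization (2.201) and a real counterpart built on the variables xR, xI of (2.192) can be checked numerically. In the sketch below, the real input and output matrices are derived by us from (2.192) and from y(t) = c1 x1(t) + c̄1 x2(t) = 2 Re(c1) xR(t) − 2 Im(c1) xI(t); this derivation is an assumption, not taken verbatim from the text.

```python
import numpy as np

alpha, beta = 1.0, 2.0
b1, c1 = 0.25, 2.0 - 3.0j       # residue r1 = c1*b1 = (2-3j)/4

# Complex diagonal realization (2.201)
Am = np.diag([alpha + 1j*beta, alpha - 1j*beta])
Bm = np.array([[b1], [np.conj(b1)]])
Cm = np.array([[c1, np.conj(c1)]])

# Real realization in the variables xR = Re(x1), xI = Im(x1), from (2.192)
Ar = np.array([[alpha, -beta], [beta, alpha]])
Br = np.array([[np.real(b1)], [np.imag(b1)]])
Cr = np.array([[2*np.real(c1), -2*np.imag(c1)]])

def tf(Amat, Bmat, Cmat, s):
    return (Cmat @ np.linalg.inv(s*np.eye(2) - Amat) @ Bmat)[0, 0]

s = 0.5 + 0.3j
err = abs(tf(Am, Bm, Cm, s) - tf(Ar, Br, Cr, s))
print(err)   # close to zero
```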
(A - \lambda_i I)^{n_i} v_{\lambda_i, n_i} = 0
(A - \lambda_i I)^{n_i - 1} v_{\lambda_i, n_i} \neq 0   (2.210)

Then the following chain of vectors can be formed:

v_{\lambda_i, n_i - 1} = (A - \lambda_i I)\, v_{\lambda_i, n_i}
v_{\lambda_i, n_i - 2} = (A - \lambda_i I)\, v_{\lambda_i, n_i - 1} = (A - \lambda_i I)^2\, v_{\lambda_i, n_i}
\vdots
v_{\lambda_i, 1} = (A - \lambda_i I)^{n_i - 1}\, v_{\lambda_i, n_i}   (2.211)
\frac{X_n(s)}{U(s)} = \frac{1}{s-\lambda}
\frac{X_{n-1}(s)}{U(s)} = \frac{1}{(s-\lambda)^2} \Rightarrow X_{n-1}(s) = \frac{X_n(s)}{s-\lambda}
\vdots
\frac{X_2(s)}{U(s)} = \frac{1}{(s-\lambda)^{n-1}} \Rightarrow X_2(s) = \frac{X_3(s)}{s-\lambda}
\frac{X_1(s)}{U(s)} = \frac{1}{(s-\lambda)^n} \Rightarrow X_1(s) = \frac{X_2(s)}{s-\lambda}   (2.218)
Coming back in the time domain and reversing the order of the equations
we get:
ẋ1 (t) = λx1 (t) + x2 (t)
ẋ2 (t) = λx2 (t) + x3 (t)
.. (2.219)
.
ẋn−1 (t) = λxn−1 (t) + xn (t)
ẋn (t) = λxn (t) + u(t)
Then it is shown in¹ that a diagonal form realization of the transfer function F(s) is the following:

A = \begin{bmatrix} A_1 & & \\ & A_2 & \\ & & \ddots \end{bmatrix}
B = \begin{bmatrix} B_1 \\ B_2 \\ \vdots \end{bmatrix}
C = \begin{bmatrix} C_1 & C_2 & \cdots \end{bmatrix}
D = d   (2.225)
Matrix A_i is an n_i × n_i square matrix, B_i is a vector with n_i rows and C_i is a row vector with n_i columns:
1
Bernard Pradin, Automatique Linéaire - Systémes multivariables, Notes de cours INSA
2000
A_i = \underbrace{\begin{bmatrix} \lambda_i & 1 & 0 & \cdots \\ 0 & \ddots & \ddots & \\ \vdots & & \ddots & 1 \\ 0 & \cdots & 0 & \lambda_i \end{bmatrix}}_{n_i \text{ terms}}
B_i = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
C_i = \begin{bmatrix} r_{i n_i} & \cdots & r_{i2} & r_{i1} \end{bmatrix}
D = d   (2.226)
or equivalently:

A_i = \underbrace{\begin{bmatrix} \lambda_i & 1 & 0 & \cdots \\ 0 & \ddots & \ddots & \\ \vdots & & \ddots & 1 \\ 0 & \cdots & 0 & \lambda_i \end{bmatrix}}_{n_i \text{ terms}}
B_i = \begin{bmatrix} \vdots \\ \frac{1}{2!} \left. \frac{d^2 N_{i2}(s)}{ds^2} \right|_{s=\lambda_i} \\ \frac{1}{1!} \left. \frac{d N_{i2}(s)}{ds} \right|_{s=\lambda_i} \\ \left. N_{i2}(s) \right|_{s=\lambda_i} \end{bmatrix}
C_i = \begin{bmatrix} \left. N_{i1}(s) \right|_{s=\lambda_i} & \frac{1}{1!} \left. \frac{d N_{i1}(s)}{ds} \right|_{s=\lambda_i} & \frac{1}{2!} \left. \frac{d^2 N_{i1}(s)}{ds^2} \right|_{s=\lambda_i} & \cdots \end{bmatrix}
D = d   (2.227)
If some of the poles are complex, so are the residues, and so is the Jordan form. This may be inconvenient. Assume that λ and \bar{λ} is a complex conjugate pair of poles of multiplicity three:
F(s) = \frac{r_1}{s - \lambda} + \frac{r_2}{(s - \lambda)^2} + \frac{r_3}{(s - \lambda)^3} + \frac{\bar{r}_1}{s - \bar{\lambda}} + \frac{\bar{r}_2}{(s - \bar{\lambda})^2} + \frac{\bar{r}_3}{(s - \bar{\lambda})^3}   (2.228)
Let α be the real part of the pole λ and β its imaginary part:
λ = α + jβ ⇔ λ = α − jβ (2.229)
Using the result of the preceding section, the Jordan form of the transfer function F(s) is the following:

A = \begin{bmatrix} \lambda & 1 & 0 & 0 & 0 & 0 \\ 0 & \lambda & 1 & 0 & 0 & 0 \\ 0 & 0 & \lambda & 0 & 0 & 0 \\ 0 & 0 & 0 & \bar{\lambda} & 1 & 0 \\ 0 & 0 & 0 & 0 & \bar{\lambda} & 1 \\ 0 & 0 & 0 & 0 & 0 & \bar{\lambda} \end{bmatrix}
B = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}
C = \begin{bmatrix} r_3 & r_2 & r_1 & \bar{r}_3 & \bar{r}_2 & \bar{r}_1 \end{bmatrix}
D = 0   (2.230)
It is clear that this Jordan form of the transfer function F(s) is complex. The complex Jordan form is rendered real by using the real part and the imaginary part of the complex state components which appear in the state vector, rather than the complex state components and their conjugates. This is the same kind of trick which has been used in the section dealing with a complex conjugate pair of poles. The real state matrix An is the following:

A_n = \begin{bmatrix} J_{ab} & I & 0 \\ 0 & J_{ab} & I \\ 0 & 0 & J_{ab} \end{bmatrix} \quad \text{where} \quad J_{ab} = \begin{bmatrix} \alpha & -\beta \\ \beta & \alpha \end{bmatrix}   (2.231)

It can be seen that the complex matrix A has the same determinant as the real matrix An.
As in the SISO case, the realization of a SIMO transfer function F(s) consists in finding any quadruplet (A, B, C, D) such that:
We will consider in the following a SIMO system with p outputs. Thus Y(s) is a vector with p rows and U(s) a scalar. Several kinds of realizations are possible; they are presented hereafter.
F(s) = \begin{bmatrix} F_1(s) \\ \vdots \\ F_p(s) \end{bmatrix}   (2.235)
If we realize F_i(s) by \left[ \begin{array}{c|c} A_i & B_i \\ \hline C_i & d_i \end{array} \right], then one realization of F(s) is the following:

F_i(s) = \left[ \begin{array}{c|c} A_i & B_i \\ \hline C_i & d_i \end{array} \right] \Rightarrow F(s) = \left[ \begin{array}{ccc|c} A_1 & & 0 & B_1 \\ & \ddots & & \vdots \\ 0 & & A_p & B_p \\ \hline C_1 & & 0 & d_1 \\ & \ddots & & \vdots \\ 0 & & C_p & d_p \end{array} \right]   (2.236)
2.4. Realization of SIMO transfer function 71
where d is a constant vector and Fsp(s) a strictly proper transfer function:

\lim_{s \to \infty} F(s) = d
\lim_{s \to \infty} F_{sp}(s) = 0   (2.245)
To get the controllable canonical form we write the transfer function Fsp (s)
as the ratio between a polynomial vector N(s) with p rows and a polynomial
Ψ(s):
$$\mathbf{F}_{sp}(s) = \frac{\mathbf{N}(s)}{\Psi(s)} = \frac{1}{\Psi(s)} \begin{bmatrix} N_1(s) \\ \vdots \\ N_p(s) \end{bmatrix} \tag{2.248}$$
Then we build for each SISO transfer function $N_i(s)/\Psi(s)$ a controllable realization $(A_c, B_c, C_i, 0)$.
Then the controllable canonical form of the SIMO transfer function F(s) is
the following:
$$\mathbf{F}(s) = \left[\begin{array}{c|c} A_c & B_c \\ \hline C_1 & d_1 \\ \vdots & \vdots \\ C_p & d_p \end{array}\right] \tag{2.252}$$
Then we write the transfer function Fsp (s) := F(s) as the ratio between a
polynomial vector N(s) with p = 2 rows and a polynomial Ψ(s):
$$\mathbf{F}(s) := \mathbf{F}_{sp}(s) = \frac{\mathbf{N}(s)}{\Psi(s)} = \frac{\begin{bmatrix} s+2 \\ 2(s+1) \end{bmatrix}}{(s+1)(s+2)} = \frac{\begin{bmatrix} s+2 \\ 2(s+1) \end{bmatrix}}{s^2 + 3s + 2} \tag{2.255}$$
Then we write the transfer function Fsp (s) := F(s) as the ratio between a
polynomial vector N(s) with p = 2 rows and a polynomial Ψ(s):
$$\mathbf{F}(s) := \mathbf{F}_{sp}(s) = \frac{\mathbf{N}(s)}{\Psi(s)} = \frac{\begin{bmatrix} s+1 \\ 5 \end{bmatrix}}{s^2 + 6s + 9} \tag{2.261}$$
We finally get:

$$\mathbf{F}(s) = \left[\begin{array}{cc|c} 0 & 1 & 0 \\ -9 & -6 & 1 \\ \hline 1 & 1 & 0 \\ 5 & 0 & 0 \end{array}\right] \tag{2.264}$$
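This realization can be checked numerically; the following sketch (assuming numpy is available) evaluates $C(sI-A)^{-1}B$ at a test point and compares it with $\mathbf{N}(s)/\Psi(s)$:

```python
import numpy as np

# Controllable canonical realization of F(s) = [s+1; 5] / (s^2 + 6s + 9),
# reproducing Equation (2.264)
A = np.array([[0.0, 1.0], [-9.0, -6.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0], [5.0, 0.0]])
D = np.zeros((2, 1))

def F(s):
    # F(s) = C (sI - A)^{-1} B + D
    return C @ np.linalg.solve(s * np.eye(2) - A, B) + D

s = 1.0
val = F(s)
expected = np.array([[s + 1.0], [5.0]]) / (s**2 + 6*s + 9)
```

Both evaluations agree at any $s$ that is not a root of $\Psi(s)$, which confirms the coefficient placement in $A_c$ and in the $C$ rows.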
Here U(s) is a vector with m rows and the transfer function F(s) is a matrix with m columns and p rows. As in the SIMO case, the realization of a MIMO transfer function F(s) consists in finding any quadruplet (A, B, C, D) such that $\mathbf{F}(s) = C(sI-A)^{-1}B + D$.
The transfer function F(s) can be written as the sum of SIMO systems:
F11 (s)
..
F(s) = . 1 0 ··· 0 +
Fp1 (s)
F1m (s)
..
(2.268)
··· + . 0 ··· 0 1
Fpm (s)
That is:

$$\mathbf{F}(s) = \mathbf{F}_1(s) \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} + \cdots + \mathbf{F}_m(s) \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix} = \sum_{i=1}^{m} \mathbf{F}_i(s) \underbrace{\begin{bmatrix} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix}}_{1 \text{ in the } i\text{-th column}} \tag{2.269}$$
If we realize the SIMO system $\mathbf{F}_i(s) = \begin{bmatrix} F_{1i}(s) \\ \vdots \\ F_{pi}(s) \end{bmatrix}$ in the $i$-th column of F(s) by $\left[\begin{array}{c|c} A_i & B_i \\ \hline C_i & D_i \end{array}\right]$ then one realization of the transfer function F(s) is the following:

$$\mathbf{F}_i(s) = \begin{bmatrix} F_{1i}(s) \\ \vdots \\ F_{pi}(s) \end{bmatrix} = \left[\begin{array}{c|c} A_i & B_i \\ \hline C_i & D_i \end{array}\right] \Rightarrow \mathbf{F}(s) = \left[\begin{array}{ccc|ccc} A_1 & & 0 & B_1 & & 0 \\ & \ddots & & & \ddots & \\ 0 & & A_m & 0 & & B_m \\ \hline C_1 & \cdots & C_m & D_1 & \cdots & D_m \end{array}\right] \tag{2.270}$$
Indeed:

$$\begin{aligned} \mathbf{F}(s) &= \mathbf{F}_1(s) \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} + \cdots + \mathbf{F}_m(s) \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix} \\ &= \begin{bmatrix} \mathbf{F}_1(s) & \cdots & \mathbf{F}_m(s) \end{bmatrix} \\ &= \begin{bmatrix} C_1 (sI-A_1)^{-1} B_1 + D_1 & \cdots & C_m (sI-A_m)^{-1} B_m + D_m \end{bmatrix} \\ &= \begin{bmatrix} C_1 & \cdots & C_m \end{bmatrix} \begin{bmatrix} (sI-A_1)^{-1} B_1 & & 0 \\ & \ddots & \\ 0 & & (sI-A_m)^{-1} B_m \end{bmatrix} + \begin{bmatrix} D_1 & \cdots & D_m \end{bmatrix} \\ &= \begin{bmatrix} C_1 & \cdots & C_m \end{bmatrix} \begin{bmatrix} (sI-A_1)^{-1} & & 0 \\ & \ddots & \\ 0 & & (sI-A_m)^{-1} \end{bmatrix} \begin{bmatrix} B_1 & & 0 \\ & \ddots & \\ 0 & & B_m \end{bmatrix} + \begin{bmatrix} D_1 & \cdots & D_m \end{bmatrix} \end{aligned} \tag{2.271}$$
where:
$$A_c = \begin{bmatrix} 0 & I_m & 0 & \cdots & 0 \\ 0 & 0 & I_m & & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & & I_m \\ -a_0 I_m & -a_1 I_m & -a_2 I_m & \cdots & -a_{n-1} I_m \end{bmatrix}, \quad B_c = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ I_m \end{bmatrix} \tag{2.274}$$

$$C_c = \begin{bmatrix} C_0 & C_1 & \cdots & C_{n-2} & C_{n-1} \end{bmatrix}, \quad D = \lim_{s\to\infty} \mathbf{F}(s)$$
where:

$$A_c = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -6 & 0 & -5 & 0 \\ 0 & -6 & 0 & -5 \end{bmatrix}, \quad B_c = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \tag{2.278}$$

$$C_c = \begin{bmatrix} 6 & -4 & 2 & -2 \\ 3 & 15 & 1 & 5 \end{bmatrix}, \quad D = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$
This result can be checked by using Equation (2.1).
where D is a constant matrix and $\mathbf{F}_{sp}(s)$ a strictly proper transfer function:

$$\begin{cases} \lim_{s\to\infty} \mathbf{F}(s) = D \\ \lim_{s\to\infty} \mathbf{F}_{sp}(s) = 0 \end{cases} \tag{2.280}$$
To get the diagonal (or modal) form we write the transfer function Fsp (s)
as the sum between rational fractions. Let λ1 , · · · , λr be the r distinct roots
of Ψ(s) and ni the multiplicity of root λi . Then we get the following partial
fraction expansion of Fsp (s) where matrices Rij are constant:
$$\mathbf{F}_{sp}(s) = \sum_{i=1}^{r} \sum_{j=1}^{n_i} \frac{R_{ij}}{(s - \lambda_i)^j} \tag{2.282}$$
The diagonal (or modal) form of the MIMO transfer function F(s) is the
following:
$$\mathbf{F}(s) = \left[\begin{array}{ccc|c} J_1 & & 0 & B_1 \\ & \ddots & & \vdots \\ 0 & & J_r & B_r \\ \hline C_1 & \cdots & C_r & D \end{array}\right] \tag{2.283}$$
Denoting by $n_i$ the multiplicity of the root $\lambda_i$, $m$ the number of inputs of the system and $I_m$ the identity matrix of size $m \times m$, matrices $J_i$, $B_i$ and $C_i$ are defined as follows:

− $J_i$ is a $(m \times n_i) \times (m \times n_i)$ block-Jordan matrix with $n_i$ diagonal blocks $\lambda_i I_m$:

$$J_i = \begin{bmatrix} \lambda_i I_m & I_m & & 0 \\ & \ddots & \ddots & \\ & & \lambda_i I_m & I_m \\ 0 & & & \lambda_i I_m \end{bmatrix}$$

− $B_i$ is a $(m \times n_i) \times m$ matrix:

$$B_i = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ I_m \end{bmatrix} \tag{2.286}$$

− $C_i$ is a $p \times (m \times n_i)$ matrix:

$$C_i = \begin{bmatrix} R_{i n_i} & \cdots & R_{i2} & R_{i1} \end{bmatrix} \tag{2.287}$$
An alternative diagonal (or modal) form also exists. To get it, first let's focus on the realization of the following $p \times m$ transfer function $\mathbf{F}_i(s)$ with a pole $\lambda$ of multiplicity $i$:

$$\mathbf{F}_i(s) = \frac{1}{(s-\lambda)^i} \begin{bmatrix} I_{\rho_i} & 0_{\rho_i \times (m-\rho_i)} \\ 0_{(p-\rho_i) \times \rho_i} & 0_{(p-\rho_i) \times (m-\rho_i)} \end{bmatrix} \tag{2.288}$$

where $I_{\rho_i}$ is the identity matrix of dimension $\rho_i$ and $0_{p \times m}$ the null matrix with p rows and m columns.
Then we recall the inverse of the following $n \times n$ bidiagonal matrix:

$$L = \begin{bmatrix} \lambda & -1 & & 0 \\ & \lambda & -1 & \\ & & \ddots & -1 \\ 0 & & & \lambda \end{bmatrix} \Rightarrow L^{-1} = \begin{bmatrix} \lambda^{-1} & \lambda^{-2} & \cdots & \lambda^{-n} \\ & \lambda^{-1} & \ddots & \vdots \\ & & \ddots & \lambda^{-2} \\ 0 & & & \lambda^{-1} \end{bmatrix} \tag{2.289}$$
The alternative diagonal (or modal) form of $\mathbf{F}_i(s)$ is then the following:

$$\mathbf{F}_i(s) = \left[\begin{array}{c|c} A_i & \begin{bmatrix} B_i & 0_{n\rho_i \times (m-\rho_i)} \end{bmatrix} \\ \hline \begin{bmatrix} C_i & 0_{(p-\rho_i) \times n\rho_i} \end{bmatrix}^T\text{-shaped output block} & 0_{p \times m} \end{array}\right] \tag{2.290}$$

where the output matrix stacks $C_i$ above $0_{(p-\rho_i) \times n\rho_i}$.
Then the alternative diagonal (or modal) form of the MIMO transfer function F(s) is the following:

$$\mathbf{F}(s) = \sum_{i=1}^{n} \mathbf{F}_i(s) + D = \left[\begin{array}{ccc|c} A_1 & & 0 & B_1 N_{12} \\ & \ddots & & \vdots \\ 0 & & A_n & B_n N_{n2} \\ \hline N_{11} C_1 & \cdots & N_{n1} C_n & D \end{array}\right] \tag{2.298}$$
This diagonal (or modal) form of F(s) is in general not minimal (see section
2.6).
From the preceding sections it can be seen that the controllable canonical forms of the transfer functions $F_1(s)$ and $F_2(s)$ are the following:

$$F_1(s) = \left[\begin{array}{c|c} A_1 & B_1 \\ \hline C_1 & D_1 \end{array}\right] = \left[\begin{array}{c|c} -1 & 1 \\ \hline 1 & 0 \end{array}\right], \quad F_2(s) = \left[\begin{array}{c|c} A_2 & B_2 \\ \hline C_2 & D_2 \end{array}\right] = \left[\begin{array}{cc|c} 0 & 1 & 0 \\ -2 & -3 & 1 \\ \hline 2 & 1 & 0 \end{array}\right] \tag{2.300}$$
Because F(s) has distinct roots we can also use for this example Gilbert's realization as explained in section 2.6.2:

$$\mathbf{F}(s) = \frac{R_1}{s+1} + \frac{R_2}{s+2} = \frac{1}{s+1} \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} + \frac{1}{s+2} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \tag{2.310}$$
− The rank of matrix $R_1 = \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix}$ is $\rho_1 = 2$. Thus we write $R_1 = C_1 B_1$ where $C_1$ is a $p \times \rho_1 = 2 \times 2$ matrix and $B_1$ is a $\rho_1 \times m = 2 \times 2$ matrix. We choose for example:

$$C_1 = R_1 = \begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix}, \quad B_1 = I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \tag{2.311}$$
− The rank of matrix $R_2 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ is $\rho_2 = 1$. Thus we write $R_2 = C_2 B_2$ where $C_2$ is a $p \times \rho_2 = 2 \times 1$ matrix and $B_2$ is a $\rho_2 \times m = 1 \times 2$ matrix. We choose for example:

$$C_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0 & 1 \end{bmatrix} \tag{2.312}$$
Then we get:

$$\mathbf{F}(s) = \left[\begin{array}{cc|c} \Lambda_1 & 0 & B_1 \\ 0 & \lambda_2 & B_2 \\ \hline C_1 & C_2 & D \end{array}\right] = \left[\begin{array}{ccc|cc} -1 & 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 & 1 \\ 0 & 0 & -2 & 0 & 1 \\ \hline 1 & 0 & 1 & 0 & 0 \\ 2 & 3 & 0 & 0 & 0 \end{array}\right] \tag{2.313}$$
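A quick numerical check of this Gilbert realization (numpy sketch): the state-space evaluation must match the partial-fraction form at any $s$ that is not a pole:

```python
import numpy as np

# Gilbert realization of Equation (2.313):
# F(s) = R1/(s+1) + R2/(s+2), R1 = [[1,0],[2,3]], R2 = [[0,1],[0,0]]
A = np.diag([-1.0, -1.0, -2.0])
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0, 1.0], [2.0, 3.0, 0.0]])
R1 = np.array([[1.0, 0.0], [2.0, 3.0]])
R2 = np.array([[0.0, 1.0], [0.0, 0.0]])

def F_state(s):
    # C (sI - A)^{-1} B (D = 0 here)
    return C @ np.linalg.solve(s * np.eye(3) - A, B)

def F_pf(s):
    # partial fraction form of Equation (2.310)
    return R1 / (s + 1) + R2 / (s + 2)

s = 0.5
err = np.max(np.abs(F_state(s) - F_pf(s)))
```

The pole $-1$ appears twice in the state matrix because $\rho_1 = \text{rank}(R_1) = 2$, while $-2$ appears once since $\rho_2 = 1$.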
where D is a constant matrix and $\mathbf{F}_{sp}(s)$ a strictly proper transfer function:

$$\begin{cases} \lim_{s\to\infty} \mathbf{F}(s) = D \\ \lim_{s\to\infty} \mathbf{F}_{sp}(s) = 0 \end{cases} \tag{2.315}$$
− Let $r$ be the dimension of the state matrix A, which may not be minimal. First compute the observability matrix $Q_o$ and the controllability matrix $Q_c$ of the realization $\left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]$:

$$Q_o = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{r-1} \end{bmatrix}, \quad Q_c = \begin{bmatrix} B & AB & \cdots & A^{r-1} B \end{bmatrix} \tag{2.322}$$

⁴ Thomas Kailath, Linear Systems, Prentice-Hall, 1st Edition
$$Q_o Q_c = U \Sigma V^T \tag{2.325}$$
Matrix Σ is a rectangular diagonal matrix with non-negative real coefficients on its diagonal. The strictly positive coefficients of Σ are called the singular values of $Q_o Q_c$. The number of singular values of $Q_o Q_c$ (the strictly positive coefficients within the diagonal of matrix Σ) is the dimension $n$ of the system. Again note that $n \neq r$ if the realization $\left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]$ is not minimal.
− Let $\Sigma_n$ be the square diagonal matrix built from the $n$ singular values of $Q_o Q_c$ (the non-zero coefficients within the diagonal matrix Σ), $U_n$ the matrix built from the $n$ columns of U corresponding to those singular values and $V_n$ the matrix built from the $n$ columns of V corresponding to those singular values:

$$Q_o Q_c = U \Sigma V^T = \begin{bmatrix} U_n & U_s \end{bmatrix} \begin{bmatrix} \Sigma_n & 0 \\ 0 & \Sigma_s \end{bmatrix} \begin{bmatrix} V_n^T \\ V_s^T \end{bmatrix} \tag{2.326}$$
− Matrices $O_n$ and $C_n$ are defined as follows:

$$O_n C_n = U_n \Sigma_n V_n^T \quad \text{where} \quad \begin{cases} O_n = U_n \Sigma_n^{1/2} \\ C_n = \Sigma_n^{1/2} V_n^T \end{cases} \tag{2.327}$$
− Let $m$ be the number of inputs of the system, $p$ its number of outputs and $I_m$ the identity matrix of size $m$. Matrices $B_m$ and $C_m$ of the minimal realization are obtained as follows:

$$\begin{cases} B_m = C_n \begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix} = \Sigma_n^{1/2} V_n^T \begin{bmatrix} I_m \\ 0 \\ \vdots \\ 0 \end{bmatrix} \\ C_m = \begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} O_n = \begin{bmatrix} I_p & 0 & \cdots & 0 \end{bmatrix} U_n \Sigma_n^{1/2} \end{cases} \tag{2.329}$$
− Matrix D is independent of the realization.
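The order-determination step of this procedure can be sketched as follows. The helper `minimal_order` and the example matrices are hypothetical illustrations, not taken from the lecture; the key point is that the number of singular values of $Q_o Q_c$ above a tolerance gives the minimal dimension $n$:

```python
import numpy as np

def minimal_order(A, B, C, tol=1e-9):
    """Number of singular values of Qo*Qc above tol: this equals the
    dimension n of a minimal realization (Equation (2.325) and below)."""
    r = A.shape[0]
    Qo = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(r)])
    Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(r)])
    sv = np.linalg.svd(Qo @ Qc, compute_uv=False)
    return int(np.sum(sv > tol))

# Hypothetical non-minimal realization: r = 2 states, but the mode -2 is
# never excited by the input, so the minimal order is n = 1
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])   # mode -2 is uncontrollable
C = np.array([[1.0, 1.0]])
n = minimal_order(A, B, C)
```

Here `n` comes out as 1, smaller than `r = 2`, exactly the $n \neq r$ situation flagged above for non-minimal realizations.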
Chapter 3

Analysis of Linear Time Invariant systems
3.1 Introduction
This chapter is dedicated to the analysis of linear dynamical systems. More specifically we will concentrate on the solution of the state equation and present the notions of controllability, observability and stability. Those notions will enable the modal analysis of Linear Time Invariant (LTI) dynamical systems.
The purpose of this section is to obtain the general solution of this linear differential equation, which is actually a vector equation.
The solution of the non-homogeneous state equation ẋ(t) = Ax(t) + Bu(t)
can be obtained by the Laplace transform. Indeed the Laplace transform of this
equation yields:
sX(s) − x(0) = AX(s) + BU (s) (3.2)
That is:
(sI − A) X(s) = x(0) + BU (s) (3.3)
To inverse the preceding equation in the s domain and come back in the
time domain we will use the following properties of the Laplace transform:
− Convolution theorem: let x(t) and y(t) be two causal scalar signals and
denote by X(s) and Y (s) their Laplace transforms, respectively. Then the
product X(s)Y (s) is the Laplace transform of the convolution between
x(t) and y(t) which is denoted by x(t) ∗ y(t):
X(s)Y (s) = L [x(t) ∗ y(t)] ⇔ L−1 [X(s)Y (s)] = x(t) ∗ y(t) (3.6)
where:

$$x(t) * y(t) = \int_0^t x(t-\tau)\, y(\tau)\, d\tau \tag{3.7}$$
Thus the inverse Laplace transform of Equation (3.5) leads to the expression
of the state vector x(t) which solves the state equation (3.1):
$$x(t) = e^{At} x(0) + \int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau \tag{3.11}$$
The solution x(t) of Equation (3.5) is often referred to as the state trajectory
or the system trajectory.
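Equation (3.11) can be checked numerically on a scalar example (assumed values, not from the lecture), using `scipy.linalg.expm` for the transition matrix and a trapezoidal quadrature for the convolution integral:

```python
import numpy as np
from scipy.linalg import expm

# xdot = -x + u with u(t) = 1 and x(0) = 2; the exact solution is
# x(t) = 2 e^{-t} + (1 - e^{-t})
A = np.array([[-1.0]])
B = np.array([[1.0]])
x0 = np.array([[2.0]])
t = 1.5

# x(t) = e^{At} x(0) + int_0^t e^{A(t-tau)} B u(tau) dtau,  u = 1
taus = np.linspace(0.0, t, 2001)
integrand = np.array([(expm(A * (t - tau)) @ B)[0, 0] for tau in taus])
dt = taus[1] - taus[0]
integral = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
x_t = (expm(A * t) @ x0)[0, 0] + integral

x_exact = 2.0 * np.exp(-t) + (1.0 - np.exp(-t))
```

The quadrature result agrees with the exact solution to within the discretization error, which illustrates that the zero-input and zero-state terms of (3.11) combine into the full trajectory.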
The exponential $e^{At}$ is defined as the transition matrix $\Phi(t)$. For time-varying systems the transition matrix $\Phi(t, t_0)$ is also called the fundamental matrix (or Green's matrix). In this case the transition matrix is a solution of the homogeneous equation $\frac{\partial \Phi(t, t_0)}{\partial t} = A(t)\Phi(t, t_0)$. In addition $\Phi(t, t) = I \;\forall t$ and $\Phi(t, t_0) = \phi(t)\phi^{-1}(t_0)$ where $\phi(t)$ is a solution of $\dot{x}(t) = A(t)x(t)$. For a linear time-invariant system the transition matrix is $\Phi(t, t_0) = e^{A(t - t_0)}$; since in the time-invariant case the initial time $t_0$ is meaningless we can choose $t_0 = 0$ and retrieve $\Phi(t) = e^{At}$.
$$\begin{aligned} y(t) &= C x(t) + D u(t) \\ &= C \left( e^{At} x(0) + \int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau \right) + D u(t) \\ &= C e^{At} x(0) + \int_0^t C e^{A(t-\tau)} B u(\tau)\, d\tau + D u(t) \end{aligned} \tag{3.13}$$
− The term CeAt x(0) is called the zero-input response (or output) of the
system; this is the response of the system when there is no input signal
u(t) applied on the system;
− The term $\int_0^t C e^{A(t-\tau)} B u(\tau)\, d\tau + D u(t)$ is called the zero-state output (or response) of the system; this is the response of the system when there is no initial condition x(0) applied on the system.
Using the fact that the Dirac delta function δ(t) is the neutral element for
convolution we can write CeAt B ∗ δ(t) = CeAt B. Consequently the output
vector y(t), that is the impulse response of a linear time invariant system which
will be denoted h(t), can be expressed as follows:
The unit step response is the response of the system to the unit step input.
Setting in (3.13) the input signal u(t) to u(t) = 1 ∀t > 0 and putting the initial
conditions x(0) to zero leads to the following expression of the unit step response
of the system:
$$\begin{aligned} y(t) &= \int_0^t C e^{A(t-\tau)} B\, d\tau + D \mathbf{1} \\ &= C e^{At} \int_0^t e^{-A\tau}\, d\tau\, B + D \mathbf{1} \\ &= C e^{At} \left[ -A^{-1} e^{-A\tau} \right]_{\tau=0}^{t} B + D \mathbf{1} \\ &= C e^{At} \left( A^{-1} - A^{-1} e^{-At} \right) B + D \mathbf{1} \end{aligned} \tag{3.18}$$

Using the fact that $e^{At} A^{-1} = A^{-1} e^{At}$ (which is easy to show using the series expansion of $e^{At}$) and assuming that matrix $A^{-1}$ exists, we finally get the following expression for the unit step response of the system:
$$e^{At} = I + At + A^2 \frac{t^2}{2!} + \cdots + A^{k-1} \frac{t^{k-1}}{(k-1)!} \tag{3.21}$$

$$e^{(A - \lambda I)t} = I + (A - \lambda I) t + (A - \lambda I)^2 \frac{t^2}{2!} + \cdots + (A - \lambda I)^{k-1} \frac{t^{k-1}}{(k-1)!} \tag{3.24}$$

$$e^{At} = e^{\lambda t} \left( I + (A - \lambda I) t + (A - \lambda I)^2 \frac{t^2}{2!} + \cdots + (A - \lambda I)^{k-1} \frac{t^{k-1}}{(k-1)!} \right) \tag{3.26}$$
Example 3.1. Let $A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$. The characteristic polynomial of A is:

$$\det(sI - A) = \det \begin{bmatrix} s & -1 \\ 0 & s \end{bmatrix} = s^2 \tag{3.27}$$
3.5.2 Properties
The following properties hold:

− Value at $t = 0$:
$$e^{At}\big|_{t=0} = e^{0} = I \tag{3.29}$$

− Derivation:
$$\frac{d}{dt} e^{At} = A e^{At} = e^{At} A \tag{3.30}$$

− Integration:
$$e^{At} = I + A \int_0^t e^{A\tau}\, d\tau \tag{3.31}$$

− In general:
$$e^{(A+B)t} \neq e^{At} e^{Bt} \neq e^{Bt} e^{At} \tag{3.32}$$

− Let $\det(A)$ be the determinant of matrix A and $\operatorname{tr}(A)$ the trace of matrix A. Then:
$$\det\left( e^{At} \right) = e^{\operatorname{tr}(A)\, t} \tag{3.37}$$
Consequently we expect that $e^{At} e^{Bt} \neq e^{Bt} e^{At}$. We will check it by using the preceding definitions and properties:

$$e^{At} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \tag{3.40}$$

And:

$$(Bt)^k = \begin{bmatrix} t^k & 0 \\ 0 & 0 \end{bmatrix} \Rightarrow e^{Bt} = I + \sum_{k=1}^{\infty} \frac{(Bt)^k}{k!} = \begin{bmatrix} 1 + \sum_{k=1}^{\infty} \frac{t^k}{k!} & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} e^t & 0 \\ 0 & 1 \end{bmatrix} \tag{3.41}$$

It is clear that:

$$e^{At} e^{Bt} = \begin{bmatrix} e^t & t \\ 0 & 1 \end{bmatrix} \neq e^{Bt} e^{At} = \begin{bmatrix} e^t & t e^t \\ 0 & 1 \end{bmatrix} \tag{3.42}$$
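This non-commutation is easy to confirm with `scipy.linalg.expm`; the matrices A and B below are those of the example above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])
t = 1.0

eAt = expm(A * t)        # [[1, t], [0, 1]], Equation (3.40)
eBt = expm(B * t)        # [[e^t, 0], [0, 1]], Equation (3.41)
left = eAt @ eBt         # [[e^t, t], [0, 1]]
right = eBt @ eAt        # [[e^t, t e^t], [0, 1]]
commute = np.allclose(left, right)   # False, as in Equation (3.42)
```

Since A and B here do not commute (AB ≠ BA), the two products differ in the off-diagonal entry, matching (3.42).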
The change of basis matrix P, as well as its inverse $P^{-1}$, can be obtained as follows:

$$P = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \tag{3.47}$$

Indeed, using $w_j^T v_i = \begin{cases} 0 & \text{if } j \neq i \\ 1 & \text{if } j = i \end{cases}$ we get:

$$P^{-1} P = \begin{bmatrix} w_1^T \\ w_2^T \\ \vdots \\ w_n^T \end{bmatrix} \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} = \begin{bmatrix} w_1^T v_1 & w_1^T v_2 & \cdots & w_1^T v_n \\ w_2^T v_1 & w_2^T v_2 & \cdots & w_2^T v_n \\ \vdots & \vdots & \ddots & \vdots \\ w_n^T v_1 & w_n^T v_2 & \cdots & w_n^T v_n \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} = I \tag{3.51}$$
$$e^{At} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \begin{bmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{bmatrix} \begin{bmatrix} w_1^T \\ w_2^T \\ \vdots \\ w_n^T \end{bmatrix} \tag{3.55}$$

We finally get:

$$e^{At} = \sum_{i=1}^{n} v_i e^{\lambda_i t} w_i^T = \sum_{i=1}^{n} v_i w_i^T e^{\lambda_i t} \tag{3.56}$$
Example 3.3. Compute $e^{At}$ where $A = \begin{bmatrix} 1 & 2 \\ 0 & -5 \end{bmatrix}$.

The characteristic polynomial of A reads:

$$\det(sI - A) = \det \begin{bmatrix} s - 1 & -2 \\ 0 & s + 5 \end{bmatrix} = (s-1)(s+5) \tag{3.57}$$

The two eigenvalues $\lambda_1 = 1$ and $\lambda_2 = -5$ of A are distinct. Since the size of A is equal to the number of distinct eigenvalues, we conclude that matrix A is diagonalizable.
− Let $v_1 = \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix}$ be the eigenvector of A corresponding to $\lambda_1 = 1$. We have:

$$\begin{bmatrix} 1 & 2 \\ 0 & -5 \end{bmatrix} \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix} = 1 \times \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix} \Leftrightarrow \begin{cases} v_{11} + 2 v_{12} = v_{11} \\ -5 v_{12} = v_{12} \end{cases} \Rightarrow v_{12} = 0 \tag{3.58}$$

Thus the expression of eigenvector $v_1$ is:

$$v_1 = \begin{bmatrix} v_{11} \\ 0 \end{bmatrix} \tag{3.59}$$
− Let $v_2 = \begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix}$ be the eigenvector of A corresponding to $\lambda_2 = -5$. We have:

$$\begin{bmatrix} 1 & 2 \\ 0 & -5 \end{bmatrix} \begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix} = -5 \times \begin{bmatrix} v_{21} \\ v_{22} \end{bmatrix} \Leftrightarrow \begin{cases} v_{21} + 2 v_{22} = -5 v_{21} \\ -5 v_{22} = -5 v_{22} \end{cases} \Rightarrow 6 v_{21} + 2 v_{22} = 0 \Leftrightarrow v_{22} = -3 v_{21} \tag{3.60}$$

Thus the expression of eigenvector $v_2$ is:

$$v_2 = \begin{bmatrix} v_{21} \\ -3 v_{21} \end{bmatrix} \tag{3.61}$$
− Let $w_1 = \begin{bmatrix} w_{11} \\ w_{12} \end{bmatrix}$ be the eigenvector of $A^T$ corresponding to $\lambda_1 = 1$. We have:

$$\begin{bmatrix} 1 & 0 \\ 2 & -5 \end{bmatrix} \begin{bmatrix} w_{11} \\ w_{12} \end{bmatrix} = 1 \times \begin{bmatrix} w_{11} \\ w_{12} \end{bmatrix} \Leftrightarrow \begin{cases} w_{11} = w_{11} \\ 2 w_{11} - 5 w_{12} = w_{12} \end{cases} \Rightarrow 2 w_{11} - 6 w_{12} = 0 \Leftrightarrow w_{11} = 3 w_{12} \tag{3.62}$$

Thus the expression of eigenvector $w_1$ is:

$$w_1 = \begin{bmatrix} 3 w_{12} \\ w_{12} \end{bmatrix} \tag{3.63}$$
And:

$$v_2 = \begin{bmatrix} v_{21} \\ -3 v_{21} \end{bmatrix} = \begin{bmatrix} -\frac{1}{3} \\ 1 \end{bmatrix} \quad \text{and} \quad w_2 = \begin{bmatrix} 0 \\ w_{22} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \tag{3.69}$$

Then applying Equation (3.52) we get:

$$\begin{aligned} e^{At} &= \sum_{i=1}^{n} v_i w_i^T e^{\lambda_i t} = v_1 w_1^T e^{\lambda_1 t} + v_2 w_2^T e^{\lambda_2 t} \\ &= e^t \begin{bmatrix} 1 \\ 0 \end{bmatrix} \begin{bmatrix} 1 & \frac{1}{3} \end{bmatrix} + e^{-5t} \begin{bmatrix} -\frac{1}{3} \\ 1 \end{bmatrix} \begin{bmatrix} 0 & 1 \end{bmatrix} \\ &= e^t \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 0 \end{bmatrix} + e^{-5t} \begin{bmatrix} 0 & -\frac{1}{3} \\ 0 & 1 \end{bmatrix} \\ &= \begin{bmatrix} e^t & \frac{1}{3} e^t - \frac{1}{3} e^{-5t} \\ 0 & e^{-5t} \end{bmatrix} \end{aligned} \tag{3.70}$$
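The modal formula (3.56) can be cross-checked against the matrix exponential for the A of Example 3.3 (numpy/scipy sketch; the eigenvector normalization is handled by taking $W = V^{-1}$, so that $w_j^T v_i = \delta_{ij}$ automatically):

```python
import numpy as np
from scipy.linalg import expm

# Example 3.3: A = [[1, 2], [0, -5]]
A = np.array([[1.0, 2.0], [0.0, -5.0]])
t = 0.7

lam, V = np.linalg.eig(A)   # columns of V are right eigenvectors v_i
W = np.linalg.inv(V)        # rows of W are the dual left eigenvectors w_i^T
# e^{At} = sum_i v_i w_i^T e^{lambda_i t}, Equation (3.56)
eAt_modal = sum(np.outer(V[:, i], W[i, :]) * np.exp(lam[i] * t)
                for i in range(len(lam)))
err = np.max(np.abs(eAt_modal - expm(A * t)))
```

In particular the (1,2) entry reproduces $\frac{1}{3}e^t - \frac{1}{3}e^{-5t}$, matching the closed form (3.70).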
The residue $\operatorname{Res}\left[ F(s) e^{st} \right]$ shall be computed around each pole of F(s). Thus:

$$(sI - A)^{-1} = \frac{1}{\det(sI - A)} \left( F_0 s + F_1 \right) = \frac{1}{s^2} \begin{bmatrix} s & 1 \\ 0 & s \end{bmatrix} = \begin{bmatrix} \frac{1}{s} & \frac{1}{s^2} \\ 0 & \frac{1}{s} \end{bmatrix} \tag{3.77}$$

We finally get:

$$\exp\left( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} t \right) = \mathcal{L}^{-1} \begin{bmatrix} \frac{1}{s} & \frac{1}{s^2} \\ 0 & \frac{1}{s} \end{bmatrix} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \tag{3.79}$$
Thus $(sI - A)^{-1} = \frac{\mathbf{N}(s)}{\Psi(s)}$ where $\Psi(s) = s(s-1)$ has two roots, $\lambda_1 = 0$ and $\lambda_2 = 1$, each of multiplicity 1: $n_1 = n_2 = 1$.
The use of the Mellin-Fourier integral leads to the following expression of $e^{At}$:

$$(sI - A)^{-1} = \frac{\mathbf{N}(s)}{\Psi(s)} \Rightarrow e^{At} = \sum_k \frac{1}{(n_k - 1)!} \left. \frac{d^{n_k - 1}}{ds^{n_k - 1}} \left[ (s - \lambda_k)^{n_k} \frac{\mathbf{N}(s)}{\Psi(s)}\, e^{st} \right] \right|_{s = \lambda_k}$$

$$\begin{aligned} e^{At} &= \left. \frac{1}{0!} \, s \, \frac{1}{s(s-1)} \begin{bmatrix} s & 1 \\ 0 & s-1 \end{bmatrix} e^{st} \right|_{s=0} + \left. \frac{1}{0!} \, (s-1) \, \frac{1}{s(s-1)} \begin{bmatrix} s & 1 \\ 0 & s-1 \end{bmatrix} e^{st} \right|_{s=1} \\ &= \left. \frac{1}{s-1} \begin{bmatrix} s & 1 \\ 0 & s-1 \end{bmatrix} e^{st} \right|_{s=0} + \left. \frac{1}{s} \begin{bmatrix} s & 1 \\ 0 & s-1 \end{bmatrix} e^{st} \right|_{s=1} \\ &= \begin{bmatrix} 0 & -1 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} e^t \\ &= \begin{bmatrix} e^t & e^t - 1 \\ 0 & 1 \end{bmatrix} \end{aligned} \tag{3.84}$$
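As a sanity check on the residue computation, the matrix whose resolvent has $\mathbf{N}(s) = \begin{bmatrix} s & 1 \\ 0 & s-1 \end{bmatrix}$ and $\Psi(s) = s(s-1)$ is $A = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$, and the result of (3.84) can be compared with `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

# A such that adj(sI - A) = [[s, 1], [0, s-1]] and det(sI - A) = s(s-1)
A = np.array([[1.0, 1.0], [0.0, 0.0]])
t = 0.3

# Residue result of Equation (3.84): e^{At} = [[e^t, e^t - 1], [0, 1]]
expected = np.array([[np.exp(t), np.exp(t) - 1.0], [0.0, 1.0]])
err = np.max(np.abs(expm(A * t) - expected))
```

The agreement confirms that summing the residues of $(sI-A)^{-1} e^{st}$ over the poles of $\Psi(s)$ reproduces the matrix exponential.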
We finally get:

$$e^{At} = \begin{bmatrix} e^{2t} & e^t - e^{2t} & 0 \\ 0 & e^t & 0 \\ e^{2t} - e^t & e^t - e^{2t} & e^t \end{bmatrix} \tag{3.95}$$
3.6 Stability
There are two different definitions of stability: internal stability and input-output stability:
− A linear time-invariant system is internally stable if its zero-input state $e^{At} x_0$ moves towards zero for any initial state $x_0$;
− A linear time-invariant system is input-output stable if its zero-state output is bounded for all bounded inputs; this type of stability is also called Bounded-Input Bounded-Output (BIBO) stability.
We have seen in (3.13) that the output response y(t) of a linear time-invariant system is the following:

$$y(t) = C e^{At} x_0 + \int_0^t C e^{A(t-\tau)} B u(\tau)\, d\tau + D u(t) \tag{3.96}$$

Thus:
− The zero-input state, which is obtained when $u(t) = 0$, has the following expression:

$$x(t) = e^{At} x_0 = \sum_{i=1}^{n} v_i w_i^T e^{\lambda_i t} x_0 \tag{3.98}$$
Consequently the zero-input state moves towards zero for any initial state
x0 as soon as all the eigenvalues λi of matrix A are situated in the open
left-half plane (they have strictly negative real part). This means that a
linear time-invariant system is internally stable when all the eigenvalues
λi of matrix A are situated in the open left-half plane (i.e. they have
strictly negative real part).
The result, which has been shown assuming that matrix A is diagonalizable, can be extended to the general case where matrix A is not diagonalizable; in that situation it is the Jordan form of A which leads to the same result concerning internal stability.
It can be shown that the zero-state output is bounded if and only if all the poles of each term of the transfer function F(s) are situated in the open left-half plane (i.e. they have strictly negative real part):
− Nevertheless the converse is not true since matrix A could have unstable
hidden modes which do not appear in the poles of F(s). Indeed there may
be pole-zero cancellation while computing F(s). Thus a system may be
BIBO stable even when some eigenvalues of A do not have negative real
part.
Matrix A has a stable mode, which is −1, and an unstable mode, which is
1. Thus the system is not internally stable.
When computing the transfer function of the system we can observe a pole
/ zero cancellation of the unstable mode:
The pole of the transfer function F (s) is −1. Thus the system is BIBO stable
but not internally stable.
3.7 Controllability
3.7.1 Denition
Let's consider the state trajectory x(t) of a linear time-invariant system:

$$x(t) = e^{At} x_0 + \int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau \tag{3.103}$$

$$u(t) = -B^T e^{A^T (t_f - t)} W_c^{-1}(t_f)\, e^{A t_f} x_0 \tag{3.104}$$

$$W_c(t_f) = \int_0^{t_f} e^{A\tau} B B^T e^{A^T \tau}\, d\tau \tag{3.105}$$
² https://en.wikibooks.org/wiki/Control_Systems/Controllability_and_Observability
³ S. Skogestad and I. Postlethwaite, Multivariable Feedback Control: Analysis and Design, Wiley, 1996; 2005
More generally one can verify that a particular input which achieves $x(t_f) = x_f$ is given by³:

$$u(t) = -B^T e^{A^T (t_f - t)} W_c^{-1}(t_f) \left( e^{A t_f} x_0 - x_f \right) \tag{3.107}$$

$$A W_c(t) + W_c(t) A^T + B B^T = \frac{d}{dt} W_c(t) \tag{3.108}$$
independent eigenvectors:

$$A = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix}, \quad B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \tag{3.111}$$
Thus in the time domain the diagonal form of the state space representation
ẋ(t) = Ax(t) + Bu(t) reads:
$$\begin{cases} \dot{x}_1(t) = \lambda_1 x_1(t) + b_1 u(t) \\ \dot{x}_2(t) = \lambda_2 x_2(t) + b_2 u(t) \\ \quad \vdots \\ \dot{x}_n(t) = \lambda_n x_n(t) + b_n u(t) \end{cases} \tag{3.112}$$
Consequently if at least one of the bi 's coecients is zero then the state
component xi (t) is independent of the input signal u(t) and the state is
uncontrollable.
For a multi-input system with m inputs, matrix B has m columns and the preceding analysis readily extends to each column of matrix B, assuming that the state-space representation is in diagonal form.
Gilbert's controllability criterion (1963) states that a multi-input system with distinct eigenvalues is controllable if and only if each row of the control matrix B of the diagonal realization (all eigenvalues are distinct) has at least one non-zero element.
$$Q_c = \begin{bmatrix} B & AB & \cdots & A^{n-1} B \end{bmatrix} \tag{3.116}$$
It can be shown that a linear system is controllable if and only if the rank of
the controllability matrix Qc is equal to n. This is the Kalman's controllability
rank condition.
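Kalman's rank condition is straightforward to apply numerically. The example matrices below are hypothetical illustrations:

```python
import numpy as np

def ctrb(A, B):
    """Kalman controllability matrix Qc = [B, AB, ..., A^{n-1}B], Eq. (3.116)."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Controllable example: companion form driven through the last state
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
rank_ok = np.linalg.matrix_rank(ctrb(A, B))     # n = 2: controllable

# Uncontrollable example: diagonal form with a zero row in B
# (Gilbert's criterion fails for the first mode)
A2 = np.diag([-1.0, -2.0])
B2 = np.array([[1.0], [0.0]])
rank_bad = np.linalg.matrix_rank(ctrb(A2, B2))  # 1 < n = 2
```

The second example also illustrates Gilbert's criterion: in diagonal form a zero row of B immediately flags an uncontrollable mode, and the rank test agrees.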
The sketch of the demonstration is the following:
− First we recall that the expression of the state vector x(t) at time t = tf
which solves the state equation (3.1) is:
$$x(t_f) = e^{A t_f} x_0 + \int_0^{t_f} e^{A(t_f - \tau)} B u(\tau)\, d\tau \tag{3.117}$$
− Using (3.123) and the fact that the functions $\gamma_k(t)$ are scalar functions, (3.118) is rewritten as follows:

$$\begin{aligned} e^{-A t_f} x(t_f) - x_0 &= \int_0^{t_f} e^{-A\tau} B u(\tau)\, d\tau \\ &= \int_0^{t_f} \sum_{k=0}^{n-1} \gamma_k(-\tau) A^k B u(\tau)\, d\tau \\ &= \sum_{k=0}^{n-1} \int_0^{t_f} \gamma_k(-\tau) A^k B u(\tau)\, d\tau \\ &= \sum_{k=0}^{n-1} A^k B \int_0^{t_f} \gamma_k(-\tau) u(\tau)\, d\tau \end{aligned} \tag{3.125}$$
$$0 = w_i^T \begin{bmatrix} B & AB & \cdots & A^{n-1} B \end{bmatrix} = w_i^T \begin{bmatrix} B & \lambda_i B & \cdots & \lambda_i^{n-1} B \end{bmatrix} = w_i^T B \begin{bmatrix} 1 & \lambda_i & \cdots & \lambda_i^{n-1} \end{bmatrix} \tag{3.131}$$
Matrix A has two modes, $\lambda_1 = -1$ and $\lambda_2 = 1$. We have seen that the mode $\lambda_2 = 1$ is not controllable. We will check that there is no input signal u(t) able to move towards zero an initial state $x_0$ set to the value of an eigenvector of $A^T$ corresponding to the uncontrollable mode $\lambda_2 = 1$.
Let $w_2$ be an eigenvector of $A^T$ corresponding to the uncontrollable mode $\lambda_2 = 1$:

$$A^T w_2 = \lambda_2 w_2 \Leftrightarrow \begin{bmatrix} -1 & 0 \\ 10 & 1 \end{bmatrix} w_2 = w_2 \tag{3.134}$$

We expand $w_2$ as $\begin{bmatrix} w_{21} \\ w_{22} \end{bmatrix}$ to get:

$$\begin{bmatrix} -1 & 0 \\ 10 & 1 \end{bmatrix} \begin{bmatrix} w_{21} \\ w_{22} \end{bmatrix} = \begin{bmatrix} w_{21} \\ w_{22} \end{bmatrix} \Rightarrow \begin{cases} -w_{21} = w_{21} \\ 10 w_{21} + w_{22} = w_{22} \end{cases} \tag{3.135}$$

We finally get:

$$w_{21} = 0 \Rightarrow w_2 = \begin{bmatrix} 0 \\ w_{22} \end{bmatrix} \tag{3.136}$$
Now let's express the state vector x(t) assuming that the initial state $x_0$ is set to $w_2$. We have:

$$x(t) = e^{At} x_0 + \int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau = e^{At} \begin{bmatrix} 0 \\ w_{22} \end{bmatrix} + \int_0^t e^{A(t-\tau)} \begin{bmatrix} -2 \\ 0 \end{bmatrix} u(\tau)\, d\tau \tag{3.137}$$
where:

$$\begin{aligned} e^{At} &= \mathcal{L}^{-1}\left[ (sI - A)^{-1} \right] = \mathcal{L}^{-1}\left( \begin{bmatrix} s+1 & -10 \\ 0 & s-1 \end{bmatrix}^{-1} \right) \\ &= \mathcal{L}^{-1}\left( \frac{1}{(s+1)(s-1)} \begin{bmatrix} s-1 & 10 \\ 0 & s+1 \end{bmatrix} \right) = \mathcal{L}^{-1}\left( \begin{bmatrix} \frac{1}{s+1} & \frac{10}{(s+1)(s-1)} \\ 0 & \frac{1}{s-1} \end{bmatrix} \right) \\ &= \begin{bmatrix} e^{-t} & 5 e^t - 5 e^{-t} \\ 0 & e^t \end{bmatrix} \end{aligned} \tag{3.138}$$
Consequently the state vector x(t) reads:

$$x(t) = \begin{bmatrix} e^{-t} & 5 e^t - 5 e^{-t} \\ 0 & e^t \end{bmatrix} \begin{bmatrix} 0 \\ w_{22} \end{bmatrix} + \int_0^t \begin{bmatrix} e^{-(t-\tau)} & 5 e^{(t-\tau)} - 5 e^{-(t-\tau)} \\ 0 & e^{(t-\tau)} \end{bmatrix} \begin{bmatrix} -2 \\ 0 \end{bmatrix} u(\tau)\, d\tau \tag{3.139}$$
That is:

$$x(t) = \begin{bmatrix} 5 e^t - 5 e^{-t} \\ e^t \end{bmatrix} w_{22} + \int_0^t \begin{bmatrix} -2 e^{-(t-\tau)} \\ 0 \end{bmatrix} u(\tau)\, d\tau \tag{3.140}$$
It is clear that for this specific initial state the input u(t), whatever its expression, does not act on the second component of x(t). Consequently it is not possible to find a control u(t) which moves the initial state vector $x_0 = w_2$ towards zero: this state is uncontrollable and the system is said to be uncontrollable.
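The conclusion can be confirmed numerically for this example (numpy sketch): the controllability matrix has rank 1, and the left eigenvector $w_2 = [0; 1]$ of the mode $\lambda_2 = 1$ satisfies $w_2^T B = 0$:

```python
import numpy as np

# Example above: A has modes -1 and 1, B = [-2; 0]
A = np.array([[-1.0, 10.0], [0.0, 1.0]])
B = np.array([[-2.0], [0.0]])
Qc = np.hstack([B, A @ B])
rank_Qc = np.linalg.matrix_rank(Qc)      # 1 < n = 2: not controllable

# w2 = [0; 1] is a left eigenvector of A for the uncontrollable mode 1
w2 = np.array([[0.0], [1.0]])
residual = np.max(np.abs(A.T @ w2 - 1.0 * w2))   # A^T w2 = w2
wB = (w2.T @ B).item()                   # w2^T B = 0, Equation (3.131)
```

The rank test and the modal test $w_2^T B = 0$ agree, as expected from (3.131).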
3.7.6 Stabilizability
A linear system is stabilizable if all unstable modes are controllable or
equivalently if all uncontrollable modes are stable.
3.8 Observability
3.8.1 Denition
Let's consider the output response y(t) of a linear time-invariant system:
$$y(t) = C e^{At} x_0 + \int_0^t C e^{A(t-\tau)} B u(\tau)\, d\tau + D u(t) \tag{3.141}$$
Thus we get:
CeAt x0 = ỹ(t) (3.143)
Observability answers the question whether it is possible to determine the
initial state x0 through the observation of ỹ(t), that is from the output signal
y(t) and the knowledge of the input signal u(t).
More precisely an initial state x0 is observable if and only if the initial
state can be determined from ỹ(t) which is observed through the time interval
0 ≤ t ≤ tf , that is from the knowledge of the output signal y(t) and the input
signal u(t) that are observed through the time interval 0 ≤ t ≤ tf . A system is
said to be observable when any arbitrary initial state x0 ∈ Rn is observable.
If the system is observable then the value x0 of the initial state can be
determined from signal ỹ(t) that has been observed through the time interval
0 ≤ t ≤ tf as follows:
$$x_0 = W_o^{-1}(t_f) \int_0^{t_f} e^{A^T \tau} C^T \tilde{y}(\tau)\, d\tau \tag{3.144}$$
$$(s - \lambda_1) \cdots (s - \lambda_q) = s^q + a_{q-1} s^{q-1} + \cdots + a_1 s + a_0 \Rightarrow C A^q + a_{q-1} C A^{q-1} + \cdots + a_1 C A + a_0 C = 0 \tag{3.152}$$
It can be shown that a linear system is observable if and only if the rank of
the observability matrix Qo is equal to n. This is the Kalman's observability
rank condition.
The sketch of the demonstration is the following:
− First we recall that the expression of the output vector y(t) at time t with
respect to the state vector x(t) is:
Thus:

$$y(t) = C e^{At} x_0 + C \int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau + D u(t) \tag{3.159}$$

As far as y(t), t and u(t) are assumed to be known, we rewrite the preceding equation as follows:

$$y(t) - C \int_0^t e^{A(t-\tau)} B u(\tau)\, d\tau - D u(t) = C e^{At} x_0 \tag{3.160}$$
− Using (3.123) and the fact that the functions $\gamma_k(t)$ are scalar functions, $C e^{At} x_0$ reads:

$$C e^{At} x_0 = C \sum_{k=0}^{n-1} \gamma_k(t) A^k x_0 = \sum_{k=0}^{n-1} C \gamma_k(t) A^k x_0 = \sum_{k=0}^{n-1} \gamma_k(t) C A^k x_0 \tag{3.162}$$
That is:

$$\begin{bmatrix} y(t_1) - C \int_0^{t_1} e^{A(t_1 - \tau)} B u(\tau)\, d\tau - D u(t_1) \\ y(t_2) - C \int_0^{t_2} e^{A(t_2 - \tau)} B u(\tau)\, d\tau - D u(t_2) \\ \vdots \\ y(t_n) - C \int_0^{t_n} e^{A(t_n - \tau)} B u(\tau)\, d\tau - D u(t_n) \end{bmatrix} = \mathbf{V} Q_o x_0 \tag{3.164}$$

where:

$$\mathbf{V} = \begin{bmatrix} \gamma_0(t_1) & \gamma_1(t_1) & \cdots & \gamma_{n-1}(t_1) \\ \gamma_0(t_2) & \gamma_1(t_2) & \cdots & \gamma_{n-1}(t_2) \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_0(t_n) & \gamma_1(t_n) & \cdots & \gamma_{n-1}(t_n) \end{bmatrix} \tag{3.165}$$
3.8.6 Detectability
A linear system is detectable if all unstable modes are observable or equivalently
if all unobservable modes are stable.
On the other hand we know from (3.13) that the output response of the
system can be expressed as follows:
$$y(t) = C e^{At} x(0) + \int_0^t C e^{A(t-\tau)} B u(\tau)\, d\tau + D u(t) \tag{3.172}$$
Gathering the two previous results leads to the following expression of the
output vector y(t) where it is worth noticing that wTi x(0) is a scalar:
$$y(t) = \sum_{i=1}^{n} C v_i e^{\lambda_i t} w_i^T x(0) + \sum_{i=1}^{n} C v_i \int_0^t e^{\lambda_i (t-\tau)} w_i^T B u(\tau)\, d\tau + D u(t) \tag{3.173}$$
The product $C v_i$ is called the direction in the output space associated with eigenvalue $\lambda_i$. From the preceding equation it is clear that if $C v_i = 0$ then any motion in the direction $v_i$ cannot be observed in the output y(t) and we say that eigenvalue $\lambda_i$ is unobservable.
The product $w_i^T B$ is called the direction in the input space associated with eigenvalue $\lambda_i$. From the preceding equation we can notice that if $w_i^T B = 0$ the control input u(t) cannot participate in the motion in the direction $v_i$ and we say that eigenvalue $\lambda_i$ is uncontrollable.
As a consequence the coupling between inputs, states and outputs is set by the eigenvectors $v_i$ and $w_i^T$. It can be seen that those vectors also influence the numerator of the transfer function F(s), which reads:

$$\mathbf{F}(s) = C (sI - A)^{-1} B + D = \sum_{i=1}^{n} \frac{C v_i w_i^T B}{s - \lambda_i} + D \tag{3.174}$$
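Expansion (3.174) can be checked numerically; the sketch below reuses the A of Example 3.3 with hypothetical B and C matrices:

```python
import numpy as np

# Check F(s) = sum_i C v_i w_i^T B / (s - lambda_i) + D, Equation (3.174)
A = np.array([[1.0, 2.0], [0.0, -5.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
s = 2.0   # any s that is not an eigenvalue of A

lam, V = np.linalg.eig(A)
W = np.linalg.inv(V)                     # rows of W are the w_i^T
F_modal = sum((C @ V[:, [i]]) @ (W[[i], :] @ B) / (s - lam[i])
              for i in range(len(lam))) + D
F_direct = C @ np.linalg.solve(s * np.eye(2) - A, B) + D
err = np.max(np.abs(F_modal - F_direct))
```

Each term $C v_i w_i^T B$ is the residue of $\mathbf{F}(s)$ at the pole $\lambda_i$, which is why a zero $C v_i$ or $w_i^T B$ removes the mode from the transfer function entirely.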
We have seen that the change of basis matrix P as well as its inverse $P^{-1}$ have the following expression:

$$\Lambda = P^{-1} A P \quad \text{where} \quad \begin{cases} P = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} \\ P^{-1} = \begin{bmatrix} w_1^T \\ w_2^T \\ \vdots \\ w_n^T \end{bmatrix} \end{cases} \tag{3.176}$$

This uses the fact that $(XY)^{-1} = Y^{-1} X^{-1}$ for any two invertible square matrices.
Figure 3.1 presents the diagonal (or modal) decomposition of the transfer function F(s), where $x_m(t)$ is the state vector expressed in the diagonal (or modal) basis and matrices Λ, P and $P^{-1}$ are defined as follows:

$$\Lambda = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}, \quad P = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix}, \quad P^{-1} = \begin{bmatrix} w_1^T \\ \vdots \\ w_n^T \end{bmatrix} \tag{3.179}$$
$$\Sigma = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right] \Rightarrow \Sigma_D = \operatorname{dual}(\Sigma) = \left[\begin{array}{c|c} A^T & C^T \\ \hline B^T & D^T \end{array}\right] \tag{3.180}$$

$$Q_c = \begin{bmatrix} B & AB & \cdots & A^{n-1} B \end{bmatrix} \tag{3.181}$$
Let Pcc̄ be the following change of basis matrix which denes a new state
vector xcc̄ (t) as follows:
where:

$$\begin{cases} A_{c\bar{c}} = P_{c\bar{c}}^{-1} A P_{c\bar{c}} := \begin{bmatrix} A_c & A_{12} \\ 0 & A_{\bar{c}} \end{bmatrix} \\ B_{c\bar{c}} = P_{c\bar{c}}^{-1} B := \begin{bmatrix} B_c \\ 0 \end{bmatrix} \\ C_{c\bar{c}} = C P_{c\bar{c}} := \begin{bmatrix} C_c & C_{\bar{c}} \end{bmatrix} \end{cases} \tag{3.186}$$
Let Poō be the following change of basis matrix which denes a new state
vector xoō (t) as follows:
where:

$$\begin{cases} A_{o\bar{o}} = P_{o\bar{o}}^{-1} A P_{o\bar{o}} := \begin{bmatrix} A_o & 0 \\ A_{21} & A_{\bar{o}} \end{bmatrix} \\ B_{o\bar{o}} = P_{o\bar{o}}^{-1} B := \begin{bmatrix} B_o \\ B_{\bar{o}} \end{bmatrix} \\ C_{o\bar{o}} = C P_{o\bar{o}} := \begin{bmatrix} C_o & 0 \end{bmatrix} \end{cases} \tag{3.194}$$
Figure 3.4: Kalman decomposition in the special case where matrix A has
distinct eigenvalues
$$P_K = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} = \begin{bmatrix} P_{c\bar{o}} & P_{co} & P_{\bar{c}\bar{o}} & P_{\bar{c}o} \end{bmatrix} \tag{3.201}$$
Where⁷:
− $P_{c\bar{o}}$ is a matrix whose columns span the subspace of states which are both controllable and unobservable: $w_i^T B \neq 0$ and $C v_i = 0$;
− $P_{co}$ is chosen so that the columns of $\begin{bmatrix} P_{c\bar{o}} & P_{co} \end{bmatrix}$ form a basis for the controllable subspace;
− $P_{\bar{c}\bar{o}}$ is chosen so that the columns of $\begin{bmatrix} P_{c\bar{o}} & P_{\bar{c}\bar{o}} \end{bmatrix}$ form a basis for the unobservable subspace.
It is worth noticing that some of those matrices may not exist. For example
if the system is both controllable and observable then PK = Pco ; thus other
matrices do not exist. In addition Kalman decomposition is more than getting
a diagonal form for the state matrix A. When state matrix A is diagonal
observability and controllability have still to be checked thanks to the rank
condition test. Finally all realizations obtained from a transfer function are
both controllable and observable.
Matrix A has a stable mode, which is −1, and an unstable mode, which is
1. When computing the transfer function of the system we can observe a pole /
zero cancellation of the unstable mode:
$$\begin{aligned} F(s) &= C (sI - A)^{-1} B + D \\ &= \begin{bmatrix} -2 & 3 \end{bmatrix} \begin{bmatrix} \frac{1}{s+1} & \frac{10}{s^2 - 1} \\ 0 & \frac{1}{s-1} \end{bmatrix} \begin{bmatrix} -2 \\ 0 \end{bmatrix} - 2 \\ &= \frac{4}{s+1} - 2 = \frac{-2s + 2}{s+1} \end{aligned} \tag{3.203}$$
From the PBH test it can be checked that mode −1 is both controllable and observable whereas mode 1 is observable but not controllable. Thus the system is not stabilizable.
Internal stability (which implies input-output stability, or BIBO stability) is
required in practice. This cannot be achieved unless the plant is both detectable
and stabilizable.
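The PBH conclusions for this example can be verified numerically (numpy sketch): $\operatorname{rank}\begin{bmatrix} \lambda I - A & B \end{bmatrix}$ and $\operatorname{rank}\begin{bmatrix} \lambda I - A \\ C \end{bmatrix}$ are tested at both modes:

```python
import numpy as np

# System of Equation (3.203)
A = np.array([[-1.0, 10.0], [0.0, 1.0]])
B = np.array([[-2.0], [0.0]])
C = np.array([[-2.0, 3.0]])

def pbh_controllable(A, B, lam):
    # PBH: mode lam is controllable iff [lam*I - A, B] has full row rank
    return np.linalg.matrix_rank(np.hstack([lam * np.eye(2) - A, B])) == 2

def pbh_observable(A, C, lam):
    # PBH: mode lam is observable iff [[lam*I - A], [C]] has full column rank
    return np.linalg.matrix_rank(np.vstack([lam * np.eye(2) - A, C])) == 2

ctrl = {lam: pbh_controllable(A, B, lam) for lam in (-1.0, 1.0)}
obsv = {lam: pbh_observable(A, C, lam) for lam in (-1.0, 1.0)}
# The unstable mode 1 is not controllable, so the system is not stabilizable
stabilizable = all(c for lam, c in ctrl.items() if lam >= 0)
```

The test confirms that mode −1 passes both PBH checks while mode 1 fails the controllability one, which is exactly why the pole/zero cancellation hides an unstable mode.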
⁷ https://en.wikipedia.org/wiki/Kalman_decomposition
Indeed:

$$\begin{aligned} (sI - A)^{-1} B &= \begin{bmatrix} sI - A_{c\bar{o}} & -A_{12} & -A_{13} & -A_{14} \\ 0 & sI - A_{co} & 0 & -A_{24} \\ 0 & 0 & sI - A_{\bar{c}\bar{o}} & -A_{34} \\ 0 & 0 & 0 & sI - A_{\bar{c}o} \end{bmatrix}^{-1} \begin{bmatrix} B_{c\bar{o}} \\ B_{co} \\ 0 \\ 0 \end{bmatrix} \\ &= \begin{bmatrix} (sI - A_{c\bar{o}})^{-1} & * & * & * \\ 0 & (sI - A_{co})^{-1} & * & * \\ 0 & 0 & (sI - A_{\bar{c}\bar{o}})^{-1} & * \\ 0 & 0 & 0 & (sI - A_{\bar{c}o})^{-1} \end{bmatrix} \begin{bmatrix} B_{c\bar{o}} \\ B_{co} \\ 0 \\ 0 \end{bmatrix} \\ &= \begin{bmatrix} * \\ (sI - A_{co})^{-1} B_{co} \\ 0 \\ 0 \end{bmatrix} \end{aligned} \tag{3.205}$$
And:
⁸ Albertos P., Sala A., Multivariable Control Systems, Springer, p. 78
Chapter 4
Observer design
4.1 Introduction
The components of the state vector x may not be fully available as
measurements. Observers are used in order to estimate state variables of a
dynamical system, which will be denoted x̂ in the following, from the output
signal y(t) and the input signal u(t) as depicted on Figure 4.1.
Several methods may be envisioned to reconstruct the state vector x(t) of a
system from the observation of its output signal y(t) and the knowledge of the
input signal u(t):
− From the output equation $y(t) = Cx(t) + Du(t)$ we can imagine building x(t) from the relationship $x(t) = C^{-1}\left( y(t) - Du(t) \right)$. Unfortunately this is seldom possible because matrix C is in general not invertible;
− Assuming that the size of the state vector is n, we may also imagine taking the derivative of the output signal n − 1 times and using the state equation $\dot{x}(t) = Ax(t) + Bu(t)$ to get n equations where the state vector x(t) is the unknown. Unfortunately this is not possible in practice because each derivative of an unsmoothed signal increases its noise;
We assume that the state vector x(t) cannot be measured. The goal of the observer is to estimate x(t) based on the observation y(t). The Luenberger observer¹ provides an estimate $\hat{x}(t)$ of the state vector x(t) through the following differential equation, where the output signal y(t) and the input signal u(t) are known and matrices F, J and L have to be determined:
$$\dot{\hat{x}}(t) = F \hat{x}(t) + J u(t) + L y(t) \tag{4.2}$$
Thanks to the output equation y(t) = Cx(t) + Du(t) and the relationship
x(t) = e(t) + x̂(t) we get:
Since the purpose of the observer is to move the estimation error e(t) towards zero independently of the control u(t) and the true state vector x(t), we choose matrices J and F as follows:
$$\begin{cases} J = B - LD \\ F = A - LC \end{cases} \tag{4.6}$$
¹ https://en.wikipedia.org/wiki/David_Luenberger
In order that the estimation error e(t) moves towards zero, meaning that the
estimated state vector x̂ becomes equal to the actual state vector x(t), matrix
L shall be chosen such that all the eigenvalues of A − LC are situated in the
left half plane.
With the expression of matrices J and F, the dynamics of the Luenberger observer can be written as follows:

$$\dot{\hat{x}}(t) = F \hat{x}(t) + J u(t) + L y(t) = (A - LC)\, \hat{x}(t) + (B - LD)\, u(t) + L y(t) \tag{4.8}$$

That is:

$$\dot{\hat{x}}(t) = A \hat{x}(t) + B u(t) + L \left( y(t) - \hat{y}(t) \right) \tag{4.9}$$

where:

$$\hat{y}(t) = C \hat{x}(t) + D u(t) \tag{4.10}$$
Thus the dynamics of the Luenberger observer is the same as the dynamics of the original system with the additional term $L\left( y(t) - \hat{y}(t) \right)$, where L is a gain to be set. This additional term is proportional to the error $y(t) - \hat{y}(t)$. It drives the estimated state $\hat{x}(t)$ towards its actual value x(t) when the measured output y(t) deviates from the estimated output $\hat{y}(t)$.
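A minimal numerical sketch of this mechanism (hypothetical plant and gain; L is simply assumed to make A − LC stable, rather than computed by a design method): the estimation error obeys $\dot{e} = (A - LC)e$, so it decays whenever A − LC is stable:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical plant and observer gain
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[5.0], [4.0]])   # assumed: makes A - LC stable

eigs = np.linalg.eigvals(A - L @ C)
stable = bool(np.all(eigs.real < 0))

# Error dynamics e(t) = e^{(A - LC) t} e(0): closed-form propagation
e0 = np.array([[1.0], [-1.0]])
e_final = expm((A - L @ C) * 5.0) @ e0
decayed = bool(np.max(np.abs(e_final)) < 1e-2)
```

Note that the error dynamics do not depend on u(t) or x(t) at all, which is exactly the property the choice of J and F in (4.6) was designed to achieve.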
In order to compute a state space representation and the transfer function
of the observer we rst identify its input and output.
− The output y o (t) of the observer is the estimated state vector x̂(t) of the
plant:
y o (t) = x̂(t) (4.12)
Then matrix $A_o - L_o C_o$ reads:

$$\begin{aligned} A_o - L_o C_o &= \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & & 0 & -a_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{bmatrix} - \begin{bmatrix} L_1 \\ L_2 \\ L_3 \\ \vdots \\ L_n \end{bmatrix} \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix} \\ &= \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_0 - L_1 \\ 1 & 0 & \cdots & 0 & -a_1 - L_2 \\ 0 & 1 & & 0 & -a_2 - L_3 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} - L_n \end{bmatrix} \end{aligned} \tag{4.19}$$

Since this matrix still remains in the observable canonical form, its characteristic polynomial is readily written as follows:

$$\chi_{A-LC}(s) = \det\left( sI - (A - LC) \right) = \det\left( sI - (A_o - L_o C_o) \right) = s^n + (a_{n-1} + L_n) s^{n-1} + \cdots + (a_1 + L_2) s + a_0 + L_1 \tag{4.20}$$

Identifying Equations (4.16) and (4.20) leads to the expression of each component of the observer gain $L_o$:

$$\begin{cases} p_0 = a_0 + L_1 \\ p_1 = a_1 + L_2 \\ \quad \vdots \\ p_{n-1} = a_{n-1} + L_n \end{cases} \Leftrightarrow L_o = \begin{bmatrix} L_1 \\ L_2 \\ \vdots \\ L_n \end{bmatrix} = \begin{bmatrix} p_0 - a_0 \\ p_1 - a_1 \\ \vdots \\ p_{n-1} - a_{n-1} \end{bmatrix} \tag{4.21}$$
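Formula (4.21) can be sketched numerically (hypothetical second-order plant and desired observer polynomial): the gain obtained by coefficient matching indeed places the eigenvalues of $A_o - L_o C_o$ at the desired locations:

```python
import numpy as np

# Hypothetical plant with det(sI - A) = s^2 + 3s + 2 and desired observer
# polynomial (s + 10)^2 = s^2 + 20s + 100
a = np.array([2.0, 3.0])        # [a0, a1]
p = np.array([100.0, 20.0])     # [p0, p1]
Lo = (p - a).reshape(-1, 1)     # Equation (4.21): [[98], [17]]

# Observable canonical form of the plant
Ao = np.array([[0.0, -a[0]], [1.0, -a[1]]])
Co = np.array([[0.0, 1.0]])
poles = np.sort(np.linalg.eigvals(Ao - Lo @ Co))   # both at -10
```

Subtracting the plant coefficients from the desired coefficients is all the design requires once the system is in observable canonical form; the general case then only adds the change of basis $L = P_o L_o$.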
$$P_o = Q_o^{-1} Q_{oo} \tag{4.23}$$
Thus:

$$\begin{aligned} & P_o^{-1} \dot{\hat{x}}(t) = A_o P_o^{-1} \hat{x}(t) + B_o u(t) + L_o \left( y(t) - \hat{y}(t) \right) \\ \Leftrightarrow \; & \dot{\hat{x}}(t) = P_o A_o P_o^{-1} \hat{x}(t) + P_o B_o u(t) + P_o L_o \left( y(t) - \hat{y}(t) \right) \end{aligned} \tag{4.25}$$

That is:

$$\dot{\hat{x}}(t) = A \hat{x}(t) + B u(t) + L \left( y(t) - \hat{y}(t) \right) \tag{4.26}$$
where:

$$L = P_o L_o \tag{4.27}$$

and:

$$\begin{cases} A = P_o A_o P_o^{-1} \\ B = P_o B_o \end{cases} \tag{4.28}$$
− and $Q_{oo}$ the observability matrix expressed in the observable canonical basis (which is readily obtained through $\det(sI - A)$):

$$Q_{oo} = \begin{bmatrix} C_o \\ C_o A_o \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 0 & 1 \end{bmatrix} \\ \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & -2 \\ 1 & -3 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & -3 \end{bmatrix} \tag{4.35}$$
Thus:

$$\begin{aligned} P_o = Q_o^{-1} Q_{oo} &= \begin{bmatrix} 3 & 5 \\ -3 & -10 \end{bmatrix}^{-1} \begin{bmatrix} 0 & 1 \\ 1 & -3 \end{bmatrix} \\ &= -\frac{1}{15} \begin{bmatrix} -10 & -5 \\ 3 & 3 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & -3 \end{bmatrix} \\ &= -\frac{1}{15} \begin{bmatrix} -5 & 5 \\ 3 & -6 \end{bmatrix} = \frac{1}{15} \begin{bmatrix} 5 & -5 \\ -3 & 6 \end{bmatrix} \end{aligned} \tag{4.36}$$
We finally get:

$$L = P_o L_o = \frac{1}{15} \begin{bmatrix} 5 & -5 \\ -3 & 6 \end{bmatrix} \begin{bmatrix} 198 \\ 27 \end{bmatrix} = \begin{bmatrix} 57 \\ -28.8 \end{bmatrix} \tag{4.37}$$
$$\begin{cases} \mathbf{A}_o = \mathbf{P}_o^{-1}\mathbf{A}\mathbf{P}_o \\ \mathbf{B}_o = \mathbf{P}_o^{-1}\mathbf{B} \\ \mathbf{C}_o = \mathbf{C}\mathbf{P}_o \end{cases} \quad (4.39)$$
$$\mathbf{A}_o - \mathbf{L}_o\mathbf{C}_o = \mathbf{P}_o^{-1}\mathbf{A}\mathbf{P}_o - \mathbf{L}_o\mathbf{C}\mathbf{P}_o = \mathbf{P}_o^{-1}\left(\mathbf{A} - \mathbf{P}_o\mathbf{L}_o\mathbf{C}\right)\mathbf{P}_o \quad (4.40)$$
This equation indicates that the observer gain matrix L in arbitrary state-
space representation reads:
L = Po Lo (4.41)
We have seen in the chapter dedicated to Realization of transfer functions that $\mathbf{P}_o$ is a constant nonsingular change-of-basis matrix which is obtained through the state matrix $\mathbf{A}$ and vector $\underline{q}_o$:
$$\mathbf{P}_o = \begin{bmatrix} \underline{q}_o & \mathbf{A}\underline{q}_o & \cdots & \mathbf{A}^{n-1}\underline{q}_o \end{bmatrix} \quad (4.42)$$
Note that χA−LC (Ao ) is not equal to 0 because χA−LC (s) is not the
characteristic polynomial of matrix Ao .
Thanks to Equation (4.21) and the relationship $p_{i-1} = a_{i-1} + L_i$ we get:
Thus we get the expression of the observer matrix Lo when we use the
observable canonical form.
We multiply Equation (4.52) by $\mathbf{P}_o$ and use the fact that $\mathbf{A}_o^k = \left(\mathbf{P}_o^{-1}\mathbf{A}\mathbf{P}_o\right)^k = \mathbf{P}_o^{-1}\mathbf{A}^k\mathbf{P}_o$ to get the expression of the observer gain matrix $\mathbf{L}$ in arbitrary state-space representation:
$$
\mathbf{L} = \mathbf{P}_o\mathbf{L}_o
= \mathbf{P}_o\,\chi_{\mathbf{A}-\mathbf{LC}}(\mathbf{A}_o)\,\underline{u}
= \mathbf{P}_o\,\chi_{\mathbf{A}-\mathbf{LC}}\left(\mathbf{P}_o^{-1}\mathbf{A}\mathbf{P}_o\right)\underline{u}
= \mathbf{P}_o\mathbf{P}_o^{-1}\,\chi_{\mathbf{A}-\mathbf{LC}}(\mathbf{A})\,\mathbf{P}_o\,\underline{u}
= \chi_{\mathbf{A}-\mathbf{LC}}(\mathbf{A})\,\mathbf{P}_o\,\underline{u} \quad (4.53)
$$
Because $\underline{u}$ is the vector defined by $\underline{u} = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix}^T$ we get, using (4.42):
$$\mathbf{P}_o\,\underline{u} = \underline{q}_o \quad (4.54)$$
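Combining (4.53) and (4.54) gives $\mathbf{L} = \chi_{\mathbf{A}-\mathbf{LC}}(\mathbf{A})\,\underline{q}_o$, the dual of Ackermann's formula. A minimal numpy sketch, assuming $\underline{q}_o$ is recovered as the last column of the inverse observability matrix (consistent with (4.42) and (4.54)); the numerical values are assumed for illustration and happen to anticipate the observer example of section 5.4:

```python
import numpy as np

def acker_observer(A, C, des_poles):
    """Dual Ackermann formula sketch: L = chi(A) @ inv(Qo) @ e_n,
    where chi is the desired characteristic polynomial and Qo the
    observability matrix stacked as [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    Qo = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    coeffs = np.poly(des_poles)           # [1, p_{n-1}, ..., p0]
    chiA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
    e_n = np.zeros((n, 1)); e_n[-1] = 1.0
    return chiA @ np.linalg.solve(Qo, e_n)   # chi(A) @ Qo^{-1} @ e_n

A = np.array([[1.0, 0.0], [0.0, 2.0]])
C = np.array([[3.0, 5.0]])
L = acker_observer(A, C, [-10.0, -20.0])
# A - L C then has eigenvalues -10 and -20
```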
That is:
$$\begin{bmatrix} \mathbf{A}^T - \lambda_{Li}\mathbb{I} & \mathbf{C}^T \end{bmatrix}\begin{bmatrix} \underline{w}_{Li} \\ \underline{p}_i \end{bmatrix} = \underline{0} \quad (4.68)$$
It is clear from the previous equation that each $(n+p)\times 1$ vector $\begin{bmatrix} \underline{w}_{Li} \\ \underline{p}_i \end{bmatrix}$ belongs to the null-space of matrix $\begin{bmatrix} \mathbf{A}^T - \lambda_{Li}\mathbb{I} & \mathbf{C}^T \end{bmatrix}$. So once any $(n+p)\times 1$ vector belonging to this null-space has been found, its $p$ bottom rows are used to form the parameter vector $\underline{p}_i$. In the MIMO case several possibilities are offered.
− If we choose a complex eigenvalue $\lambda_{Li}$ then its complex conjugate must also be chosen. Let $\lambda_{LiR}$ and $\lambda_{LiI}$ be the real and imaginary parts of $\lambda_{Li}$, $\underline{w}_{LiR}$ and $\underline{w}_{LiI}$ the real and imaginary parts of $\underline{w}_{Li}$, and $\underline{p}_{iR}$ and $\underline{p}_{iI}$ the real and imaginary parts of $\underline{p}_i$, respectively:
$$\begin{cases} \lambda_{Li} = \lambda_{LiR} + j\lambda_{LiI} \\ \underline{w}_{Li} = \underline{w}_{LiR} + j\underline{w}_{LiI} \\ \underline{p}_i = \underline{p}_{iR} + j\underline{p}_{iI} \end{cases} \quad (4.71)$$
− In the SISO case the observer gain matrix $\mathbf{L}$ no longer depends on the parameter vectors $\underline{p}_i$. Indeed, in that case the observer gain matrix $\mathbf{L}$ is obtained as follows:
$$\mathbf{L}^T = \begin{bmatrix} 1 & \cdots & 1 \end{bmatrix}\begin{bmatrix} \left(\mathbf{A}^T - \lambda_{L1}\mathbb{I}\right)^{-1}\mathbf{C}^T & \cdots & \left(\mathbf{A}^T - \lambda_{Ln}\mathbb{I}\right)^{-1}\mathbf{C}^T \end{bmatrix}^{-1} \quad (4.76)$$
To get this result we start by observing that in the SISO case the parameter vectors are scalars; they will be denoted $p_i$. Let vector $\underline{l}_i$ be defined as follows:
$$\underline{l}_i = -\left(\mathbf{A}^T - \lambda_{Li}\mathbb{I}\right)^{-1}\mathbf{C}^T \quad (4.77)$$
Let's rearrange the term $\begin{bmatrix} \underline{l}_1 p_1 & \cdots & \underline{l}_n p_n \end{bmatrix}^{-1}$ as follows:
$$
\begin{bmatrix} \underline{l}_1 p_1 & \cdots & \underline{l}_n p_n \end{bmatrix}^{-1}
= \left(\begin{bmatrix} \underline{l}_1 & \cdots & \underline{l}_n \end{bmatrix}\mathrm{diag}\left(p_1,\ldots,p_n\right)\right)^{-1}
= \mathrm{diag}\!\left(\frac{\prod_{i\neq 1}p_i}{\prod_{i=1}^n p_i},\ldots,\frac{\prod_{i\neq n}p_i}{\prod_{i=1}^n p_i}\right)\begin{bmatrix} \underline{l}_1 & \cdots & \underline{l}_n \end{bmatrix}^{-1} \quad (4.79)
$$
Multiplying this expression by $-\begin{bmatrix} p_1 & \cdots & p_n \end{bmatrix}$ leads to the expression of $\mathbf{L}^T$:
$$
\begin{aligned}
\mathbf{L}^T &= -\begin{bmatrix} p_1 & \cdots & p_n \end{bmatrix}\begin{bmatrix} \underline{l}_1 p_1 & \cdots & \underline{l}_n p_n \end{bmatrix}^{-1} \\
&= -\begin{bmatrix} p_1 & \cdots & p_n \end{bmatrix}\mathrm{diag}\!\left(\frac{\prod_{i\neq 1}p_i}{\prod_{i=1}^n p_i},\ldots,\frac{\prod_{i\neq n}p_i}{\prod_{i=1}^n p_i}\right)\begin{bmatrix} \underline{l}_1 & \cdots & \underline{l}_n \end{bmatrix}^{-1} \\
&= -\begin{bmatrix} 1 & \cdots & 1 \end{bmatrix}\begin{bmatrix} \underline{l}_1 & \cdots & \underline{l}_n \end{bmatrix}^{-1}
\end{aligned} \quad (4.80)
$$
Using the expression of vector $\underline{l}_i = -\left(\mathbf{A}^T - \lambda_{Li}\mathbb{I}\right)^{-1}\mathbf{C}^T$ provided by (4.77) we finally get:
$$\mathbf{L}^T = \begin{bmatrix} 1 & \cdots & 1 \end{bmatrix}\begin{bmatrix} \left(\mathbf{A}^T - \lambda_{L1}\mathbb{I}\right)^{-1}\mathbf{C}^T & \cdots & \left(\mathbf{A}^T - \lambda_{Ln}\mathbb{I}\right)^{-1}\mathbf{C}^T \end{bmatrix}^{-1} \quad (4.81)$$
We conclude that in the SISO case the observer gain matrix $\mathbf{L}$ no longer depends on the parameter vectors $\underline{p}_i$.
$$\mathbf{C}\mathbf{P} = \begin{bmatrix} \mathbb{I}_p & \mathbf{0}_{p,n-p} \end{bmatrix} \quad (4.83)$$
Indeed, let $\bar{\mathbf{C}}$ be a $(n-p)\times n$ matrix such that matrix $\begin{bmatrix} \mathbf{C} \\ \bar{\mathbf{C}} \end{bmatrix}$ is nonsingular. Then a possible choice for $\mathbf{P}$ is the following:
$$\mathbf{P} = \begin{bmatrix} \mathbf{C} \\ \bar{\mathbf{C}} \end{bmatrix}^{-1} \quad (4.84)$$
4.7. Reduced-order observer 139
where:
x∗ (t) = P−1 x(t) ⇔ x(t) = Px∗ (t) (4.86)
Hence, mapping the system onto the new state vector $x^*(t)$ via the similarity transformation $\mathbf{P}$, we obtain the following state-space representation:
$$\begin{cases} \dot{x}^*(t) = \mathbf{P}^{-1}\mathbf{A}\mathbf{P}x^*(t) + \mathbf{P}^{-1}\mathbf{B}u(t) \\ y(t) = \mathbf{C}\mathbf{P}x^*(t) \end{cases} \quad (4.87)$$
From the fact that $\mathbf{C}\mathbf{P} = \begin{bmatrix} \mathbb{I}_p & \mathbf{0}_{p,n-p} \end{bmatrix}$, it can be seen that the first $p$ components of the new state vector $x^*(t)$ are equal to $y(t)$. Thus we can write:
$$\mathbf{C}\mathbf{P} = \begin{bmatrix} \mathbb{I}_p & \mathbf{0}_{p,n-p} \end{bmatrix} \Rightarrow x^*(t) = \begin{bmatrix} y(t) \\ x_r(t) \end{bmatrix} \quad (4.88)$$
Since the first $p$ components of the new state vector $x^*(t)$ are equal to $y(t)$, they are available through measurements and there is thus no need to estimate them. Consequently the reduced-order observer focuses on the estimation of the remaining state vector $x_r(t)$.
The state equation (4.87) can be written as follows:
$$
\begin{cases}
\dot{x}^*(t) := \begin{bmatrix} \dot{y}(t) \\ \dot{x}_r(t) \end{bmatrix}
= \begin{bmatrix} \mathbf{A}_{11}^* & \mathbf{A}_{12}^* \\ \mathbf{A}_{21}^* & \mathbf{A}_{22}^* \end{bmatrix}\begin{bmatrix} y(t) \\ x_r(t) \end{bmatrix} + \begin{bmatrix} \mathbf{B}_1^* \\ \mathbf{B}_2^* \end{bmatrix}u(t) \\
y(t) = \begin{bmatrix} \mathbb{I}_p & \mathbf{0}_{p,n-p} \end{bmatrix}\begin{bmatrix} y(t) \\ x_r(t) \end{bmatrix} := \mathbf{C}^* x^*(t)
\end{cases} \quad (4.89)
$$
To design an observer for $x_r(t)$, we use the knowledge that an observer has the same structure as the system plus a driving feedback term whose role is to reduce the estimation error to zero3. Hence, an observer for $x_r(t)$ reads:
$$\dot{\hat{x}}_r(t) = \mathbf{A}_{21}^* y(t) + \mathbf{A}_{22}^* \hat{x}_r(t) + \mathbf{B}_2^* u(t) + \mathbf{L}_r\left(y(t) - \hat{y}(t)\right) \quad (4.92)$$
3
Zoran Gajic, Introduction to Linear and Nonlinear Observers, Rutgers University,
https://www.ece.rutgers.edu/gajic/
140 Chapter 4. Observer design
Unfortunately, since $y(t) = \hat{y}(t)$, the difference $y(t) - \hat{y}(t)$ does not carry any information about $x_r(t)$. Nevertheless, by taking the time derivative of $y(t)$, we get the first equation of (4.89), which does carry information about $x_r(t)$:
$$\dot{y}(t) = \mathbf{A}_{11}^* y(t) + \mathbf{A}_{12}^* x_r(t) + \mathbf{B}_1^* u(t) \Rightarrow \mathbf{A}_{12}^* x_r(t) = \dot{y}(t) - \mathbf{A}_{11}^* y(t) - \mathbf{B}_1^* u(t) \quad (4.93)$$
Regarding $\underline{y}_r(t) := \mathbf{A}_{12}^* x_r(t)$ as a virtual output of the reduced state equation, the observer for $x_r(t)$ finally reads:
$$\begin{cases} \dot{\hat{x}}_r(t) = \mathbf{A}_{21}^* y(t) + \mathbf{A}_{22}^* \hat{x}_r(t) + \mathbf{B}_2^* u(t) + \mathbf{L}_r\left(\underline{y}_r(t) - \mathbf{A}_{12}^*\hat{x}_r(t)\right) \\ \underline{y}_r(t) = \dot{y}(t) - \mathbf{A}_{11}^* y(t) - \mathbf{B}_1^* u(t) \end{cases} \quad (4.94)$$
Furthermore the dynamics of the error $e_r(t) = x_r(t) - \hat{x}_r(t)$ reads as follows:
$$
\begin{aligned}
e_r(t) = x_r(t) - \hat{x}_r(t) \Rightarrow \dot{e}_r(t) &= \dot{x}_r(t) - \dot{\hat{x}}_r(t) \\
&= \mathbf{A}_{21}^* y(t) + \mathbf{A}_{22}^* x_r(t) + \mathbf{B}_2^* u(t) \\
&\quad - \left(\mathbf{A}_{21}^* y(t) + \mathbf{A}_{22}^* \hat{x}_r(t) + \mathbf{B}_2^* u(t) + \mathbf{L}_r\left(\underline{y}_r(t) - \mathbf{A}_{12}^*\hat{x}_r(t)\right)\right) \\
&= \left(\mathbf{A}_{22}^* - \mathbf{L}_r\mathbf{A}_{12}^*\right)e_r(t)
\end{aligned} \quad (4.95)
$$
Consequently, designing the observer gain Lr such that the characteristic
polynomial of matrix A∗22 − Lr A∗12 is Hurwitz leads to the asymptotic
convergence of the estimates x̂r (t) towards xr (t). Such a design is always
possible as soon as the pair (A∗22 , A∗12 ) is observable, which is a consequence of
the observability of the pair (A, C) (this can be shown using PBH test3 ).
Since it is not wise to use $\dot{y}(t)$, because in practice the differentiation process introduces noise, we will estimate vector $\hat{x}_{ry}(t)$ rather than $x_r(t)$. Vector $\hat{x}_{ry}(t)$ is defined as follows:
$$\hat{x}_{ry}(t) := \hat{x}_r(t) - \mathbf{L}_r y(t) \quad (4.96)$$
From (4.94), we get the following observer for x̂ry (t):
Assembling the previous results, the estimation of the state vector $x(t)$ finally reads as follows, where the dynamics of $\hat{x}_{ry}(t)$ is given by (4.97):
$$\hat{x}(t) = \mathbf{P}\hat{x}^*(t) = \mathbf{P}\begin{bmatrix} y(t) \\ \hat{x}_r(t) \end{bmatrix} = \mathbf{P}\begin{bmatrix} y(t) \\ \hat{x}_{ry}(t) + \mathbf{L}_r y(t) \end{bmatrix} \quad (4.99)$$
Chapter 5
Controller design
5.1 Introduction
Controllers enable us to obtain stable closed-loop systems which meet performance specifications. In the case where the full state vector x(t) is available, controller design involves state feedback. In the more usual case where only some components of the state vector are available through the output vector y(t), controller design involves output feedback in association with a state observer.
This chapter focuses on controller design. More specifically, state feedback controllers for SISO systems in controllable canonical form, state feedback controllers for SISO systems in arbitrary state-space representation, and static state feedback and static output feedback controllers for MIMO systems will be presented. We will also present controllers with integral action.
systems there are additional degrees of freedom which may be used for other purposes like eigenstructure assignment;
On the other hand, using (5.2) the output equation $y(t) = \mathbf{C}x(t) + \mathbf{D}u(t)$ reads:
$$u(t) = -\mathbf{K}x(t) + \mathbf{H}r(t) \Rightarrow y(t) = \left(\mathbf{C} - \mathbf{D}\mathbf{K}\right)x(t) + \mathbf{D}\mathbf{H}r(t) \quad (5.5)$$
Then matrix $\mathbf{H}$ is computed such that the closed-loop system has no steady-state error for any constant value of the reference input $r(t)$. Imposing $y(t) = r(t)$ at steady state leads to the following expression of the feedforward gain matrix $\mathbf{H}$:
$$y(t) = r(t) \Rightarrow \mathbf{H} = \left(\mathbf{D} - \left(\mathbf{C} - \mathbf{D}\mathbf{K}\right)\left(\mathbf{A} - \mathbf{B}\mathbf{K}\right)^{-1}\mathbf{B}\right)^{-1} \quad (5.7)$$
We will see in section 5.7 that adding an integral action within the controller is an alternative method which avoids the computation of the feedforward gain matrix $\mathbf{H}$.
In the following we will assume that the system is controllable, or at least
stabilizable, such that it is possible to design a state feedback controller. Indeed
Wonham1 has shown that controllability of an open-loop system is equivalent
to the possibility of assigning an arbitrary set of poles to the transfer matrix
of the closed-loop system, formed by means of suitable linear feedback of the
state.
1
Wonham W., On pole assignment in multi-input controllable linear systems, IEEE
Transactions on Automatic Control, vol. 12, no. 6, pp. 660-665, December 1967. doi:
10.1109/TAC.1967.1098739
5.3. Control of SISO systems 143
$$\mathbf{K}_c = \begin{bmatrix} K_1 & \cdots & K_n \end{bmatrix} \quad (5.12)$$
Using the duality principle we can infer the expression of the state feedback controller for SISO systems in controllable canonical form:
$$\mathbf{K}_c = \mathbf{L}_o^T = \begin{bmatrix} p_0 - a_0 & \cdots & p_{n-1} - a_{n-1} \end{bmatrix} \quad (5.13)$$
144 Chapter 5. Controller design
To check this, notice that when the controllable canonical form of the system is used, matrix $\mathbf{A}_c - \mathbf{B}_c\mathbf{K}_c$ reads:
$$
\mathbf{A}_c - \mathbf{B}_c\mathbf{K}_c =
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_0 & -a_1 & -a_2 & \cdots & -a_{n-1}
\end{bmatrix}
-
\begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
\begin{bmatrix} K_1 & \cdots & K_n \end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_0 - K_1 & -a_1 - K_2 & -a_2 - K_3 & \cdots & -a_{n-1} - K_n
\end{bmatrix}
\quad (5.14)
$$
Since this matrix still remains in the controllable canonical form its characteristic polynomial is readily written as follows:
$$\mathbf{P}_c^{-1} = \mathbf{Q}_{cc}\mathbf{Q}_c^{-1} \quad (5.18)$$
Where:
$$\mathbf{Q}_c = \begin{bmatrix} \mathbf{B} & \mathbf{A}\mathbf{B} & \cdots & \mathbf{A}^{n-1}\mathbf{B} \end{bmatrix} \quad (5.19)$$
That is:
$$u(t) = -\mathbf{K}x(t) + \mathbf{H}r(t) \quad (5.21)$$
Where:
$$\mathbf{K} = \mathbf{K}_c\mathbf{P}_c^{-1} \quad (5.22)$$
Example 5.1. Design a state feedback controller for the following unstable plant:
$$\begin{cases} \dot{x}(t) = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}x(t) + \begin{bmatrix} 1 \\ 2 \end{bmatrix}u(t) \\ y(t) = \begin{bmatrix} 3 & 5 \end{bmatrix}x(t) \end{cases} \quad (5.23)$$
Now let's compute the matrix $\mathbf{P}_c^{-1}$, the inverse of the similarity transformation which yields the controllable canonical form.
$$\mathbf{P}_c^{-1} = \mathbf{Q}_{cc}\mathbf{Q}_c^{-1} \quad (5.27)$$
Where:
− $\mathbf{Q}_c$ is the controllability matrix in the actual basis:
$$\mathbf{Q}_c = \begin{bmatrix} \mathbf{B} & \mathbf{A}\mathbf{B} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 2 & 4 \end{bmatrix} \quad (5.28)$$
Thus:
$$\mathbf{P}_c^{-1} = \mathbf{Q}_{cc}\mathbf{Q}_c^{-1} = \begin{bmatrix} 0 & 1 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 2 & 4 \end{bmatrix}^{-1} = \begin{bmatrix} 0 & 1 \\ 1 & 3 \end{bmatrix}\frac{1}{2}\begin{bmatrix} 4 & -1 \\ -2 & 1 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} -2 & 1 \\ -2 & 2 \end{bmatrix} \quad (5.30)$$
We finally get:
$$\mathbf{K} = \mathbf{K}_c\mathbf{P}_c^{-1} = \begin{bmatrix} 0 & 6 \end{bmatrix}\frac{1}{2}\begin{bmatrix} -2 & 1 \\ -2 & 2 \end{bmatrix} = \frac{1}{2}\begin{bmatrix} -12 & 12 \end{bmatrix} = \begin{bmatrix} -6 & 6 \end{bmatrix} \quad (5.31)$$
Starting from a state-space representation $\left(\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D}\right)$ in an arbitrary basis, the controllable canonical form is obtained through the following relationships:
$$\begin{cases} \mathbf{A}_c = \mathbf{P}_c^{-1}\mathbf{A}\mathbf{P}_c \\ \mathbf{B}_c = \mathbf{P}_c^{-1}\mathbf{B} \\ \mathbf{C}_c = \mathbf{C}\mathbf{P}_c \end{cases} \quad (5.34)$$
$$\mathbf{A}_c - \mathbf{B}_c\mathbf{K}_c = \mathbf{P}_c^{-1}\mathbf{A}\mathbf{P}_c - \mathbf{P}_c^{-1}\mathbf{B}\mathbf{K}_c = \mathbf{P}_c^{-1}\left(\mathbf{A} - \mathbf{B}\mathbf{K}_c\mathbf{P}_c^{-1}\right)\mathbf{P}_c \quad (5.35)$$
This equation indicates that the controller gain matrix $\mathbf{K}$ in arbitrary state-space representation reads:
$$\mathbf{K} = \mathbf{K}_c\mathbf{P}_c^{-1} \quad (5.36)$$
We have seen in the chapter dedicated to Realization of transfer functions that $\mathbf{P}_c^{-1}$ is a constant nonsingular change-of-basis matrix which is obtained through the state matrix $\mathbf{A}$ and vector $\underline{q}_c^T$:
$$\mathbf{P}_c^{-1} = \begin{bmatrix} \underline{q}_c^T \\ \underline{q}_c^T\mathbf{A} \\ \vdots \\ \underline{q}_c^T\mathbf{A}^{n-1} \end{bmatrix} \quad (5.37)$$
Note that χA−BK (Ac ) is not equal to 0 because χA−BK (s) is not the
characteristic polynomial of matrix Ac .
Thanks to Equation (5.16) and the relationship $p_{i-1} = a_{i-1} + K_i$ we get:
Due to the fact that the coefficients $K_i$ are scalar we can equivalently write:
$$\underline{u}^T = \begin{bmatrix} 1 & 0 & \cdots & 0 \end{bmatrix} \quad (5.46)$$
$$\begin{cases} \underline{u}^T\mathbf{A}_c = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \end{bmatrix} \\ \underline{u}^T\mathbf{A}_c^2 = \begin{bmatrix} 0 & 0 & 1 & 0 & \cdots \end{bmatrix} \\ \quad\vdots \\ \underline{u}^T\mathbf{A}_c^{n-1} = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix} \end{cases} \quad (5.47)$$
Thus we get the expression of the controller matrix Kc when we use the
controllable canonical form.
We multiply Equation (5.48) by $\mathbf{P}_c^{-1}$ and use the fact that $\mathbf{A}_c^k = \left(\mathbf{P}_c^{-1}\mathbf{A}\mathbf{P}_c\right)^k = \mathbf{P}_c^{-1}\mathbf{A}^k\mathbf{P}_c$ to get the expression of the controller gain matrix:
$$\underline{u}^T\mathbf{P}_c^{-1} = \underline{q}_c^T \quad (5.50)$$
Consequently Equation (5.49) reduces to Ackermann's formula (5.33):
$$\mathbf{K} = \underline{q}_c^T\,\chi_{\mathbf{A}-\mathbf{BK}}(\mathbf{A}) \quad (5.51)$$
$$
\left(s\mathbb{I} - \mathbf{A}_c + \mathbf{B}_c\mathbf{K}_c\right)^{-1}\mathbf{B}_c
= \frac{1}{\det\left(s\mathbb{I} - \mathbf{A}_c + \mathbf{B}_c\mathbf{K}_c\right)}
\begin{bmatrix} \ast & \cdots & \ast & 1 \\ \ast & \cdots & \ast & s \\ \vdots & & \vdots & \vdots \\ \ast & \cdots & \ast & s^{n-1} \end{bmatrix}
\begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
$$
$$
\Rightarrow \mathbf{C}_c\left(s\mathbb{I} - \mathbf{A}_c + \mathbf{B}_c\mathbf{K}_c\right)^{-1}\mathbf{B}_c\mathbf{H}
= \frac{\mathbf{C}_c\begin{bmatrix} 1 \\ s \\ \vdots \\ s^{n-1} \end{bmatrix}}{\det\left(s\mathbb{I} - \mathbf{A}_c + \mathbf{B}_c\mathbf{K}_c\right)}\,\mathbf{H} \quad (5.52)
$$
A practical use of this observation is that the state feedback gain $\mathbf{K}$ can be used to annihilate some zeros with negative real part (i.e., stable zeros).
$$\begin{cases} \dot{\hat{x}}(t) = \mathbf{A}\hat{x}(t) + \mathbf{B}u(t) + \mathbf{L}\left(y(t) - \left(\mathbf{C}\hat{x}(t) + \mathbf{D}u(t)\right)\right) \\ u(t) = \mathbf{H}r(t) - \mathbf{K}\hat{x}(t) \end{cases} \quad (5.54)$$
Gain matrices L, K and H are degrees of freedom which shall be set to
achieve some performance criteria.
The block diagram corresponding to the observer-based controller is shown
in Figure 5.1.
The estimation error e(t) is dened as follows:
Combining the dynamics of the state vector $x(t)$ and of the estimation error $e(t)$, and using the fact that $\hat{x}(t) = x(t) - e(t)$, yields the following state-space representation for the closed-loop system:
$$\begin{bmatrix} \dot{x}(t) \\ \dot{e}(t) \end{bmatrix} = \mathbf{A}_{cl}\begin{bmatrix} x(t) \\ e(t) \end{bmatrix} + \begin{bmatrix} \mathbf{B}\mathbf{H} \\ \mathbf{0} \end{bmatrix}r(t) \quad (5.57)$$
5.4. Observer-based controller 151
where:
$$\mathbf{A}_{cl} = \begin{bmatrix} \mathbf{A} - \mathbf{B}\mathbf{K} & \mathbf{B}\mathbf{K} \\ \mathbf{0} & \mathbf{A} - \mathbf{L}\mathbf{C} \end{bmatrix} \quad (5.58)$$
Gain matrices L and K shall be chosen such that the eigenvalues of matrices
A − BK and A − LC are situated in the left half complex plane so that the
closed-loop system is asymptotically stable.
Furthermore it is worth noticing that matrix $\mathbf{A}_{cl}$ is block triangular. Consequently we can write:
$$\det\left(s\mathbb{I} - \mathbf{A}_{cl}\right) = \det\begin{bmatrix} s\mathbb{I} - \mathbf{A} + \mathbf{B}\mathbf{K} & -\mathbf{B}\mathbf{K} \\ \mathbf{0} & s\mathbb{I} - \mathbf{A} + \mathbf{L}\mathbf{C} \end{bmatrix} = \det\left(s\mathbb{I} - \mathbf{A} + \mathbf{B}\mathbf{K}\right)\det\left(s\mathbb{I} - \mathbf{A} + \mathbf{L}\mathbf{C}\right) \quad (5.59)$$
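The separation of the spectra in (5.59) is easy to check numerically on the running example, using K from (5.31) and the observer gain L computed in (5.69):

```python
import numpy as np

# Closed-loop matrix (5.58) with the gains of the running example:
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[3.0, 5.0]])
K = np.array([[-6.0, 6.0]])       # state feedback gain (5.31)
L = np.array([[-77.0], [52.8]])   # observer gain (5.69)

Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])
poles = sorted(np.linalg.eigvals(Acl).real)
# poles = controller poles {-1, -2} union observer poles {-10, -20}
```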
5.4.2 Example
Design an output feedback controller for the following unstable plant:
$$\begin{cases} \dot{x}(t) = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}x(t) + \begin{bmatrix} 1 \\ 2 \end{bmatrix}u(t) \\ y(t) = \begin{bmatrix} 3 & 5 \end{bmatrix}x(t) \end{cases} \quad (5.60)$$
The poles of the controller shall be chosen to render the closed-loop stable
and to satisfy some specications. We choose (for example) to locate the poles
of the controller at λK1 = −1 and λK2 = −2.
First we check that the system is observable and controllable.
We have seen in example 5.1 how to design a state feedback controller. By the separation principle, the observer which estimates the state vector $\hat{x}(t)$ feeding the controller can be designed separately from the controller. We have obtained:
$$\begin{cases} \mathbf{K} = \begin{bmatrix} -6 & 6 \end{bmatrix} \\ \mathbf{H} = -0.125 \end{cases} \quad (5.61)$$
Thus:
$$\mathbf{P}_o = \mathbf{Q}_o^{-1}\mathbf{Q}_{oo} = \begin{bmatrix} 3 & 5 \\ 3 & 10 \end{bmatrix}^{-1}\begin{bmatrix} 0 & 1 \\ 1 & 3 \end{bmatrix} = \frac{1}{15}\begin{bmatrix} 10 & -5 \\ -3 & 3 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 3 \end{bmatrix} = \frac{1}{15}\begin{bmatrix} -5 & -5 \\ 3 & 6 \end{bmatrix} \quad (5.68)$$
We finally get:
$$\mathbf{L} = \mathbf{P}_o\mathbf{L}_o = \frac{1}{15}\begin{bmatrix} -5 & -5 \\ 3 & 6 \end{bmatrix}\begin{bmatrix} 198 \\ 33 \end{bmatrix} = \begin{bmatrix} -77 \\ 52.8 \end{bmatrix} \quad (5.69)$$
We get:
U (s) = C(s)Y (s) (5.71)
Where:
C(s) = −K (sI − A + BK + LC)−1 L (5.72)
On the other hand, the realization of the controller transfer function C(s) is assumed to be the following, where gain matrices $\mathbf{K}$ and $\mathbf{L}$ are the design parameters of the controller:
$$\begin{cases} \dot{x}_c(t) = \left(\mathbf{A} - \mathbf{B}\mathbf{K} - \mathbf{L}\mathbf{C}\right)x_c(t) + \mathbf{L}\left(r(t) - y(t)\right) \\ u(t) = \mathbf{K}x_c(t) \end{cases} \quad (5.74)$$
Thus the state-space realization of the unity feedback loop reads:
$$\begin{cases} \dot{x}_p(t) = \mathbf{A}x_p(t) + \mathbf{B}\mathbf{K}x_c(t) \\ \dot{x}_c(t) = \left(\mathbf{A} - \mathbf{B}\mathbf{K} - \mathbf{L}\mathbf{C}\right)x_c(t) + \mathbf{L}\left(r(t) - \mathbf{C}x_p(t)\right) \\ y(t) = \mathbf{C}x_p(t) \end{cases} \quad (5.76)$$
That is:
$$\begin{cases} \begin{bmatrix} \dot{x}_p(t) \\ \dot{x}_c(t) \end{bmatrix} = \begin{bmatrix} \mathbf{A} & \mathbf{B}\mathbf{K} \\ -\mathbf{L}\mathbf{C} & \mathbf{A} - \mathbf{B}\mathbf{K} - \mathbf{L}\mathbf{C} \end{bmatrix}\begin{bmatrix} x_p(t) \\ x_c(t) \end{bmatrix} + \begin{bmatrix} \mathbf{0} \\ \mathbf{L} \end{bmatrix}r(t) \\ y(t) = \begin{bmatrix} \mathbf{C} & \mathbf{0} \end{bmatrix}\begin{bmatrix} x_p(t) \\ x_c(t) \end{bmatrix} \end{cases} \quad (5.77)$$
Now we will use the fact that adding one column/row to another column/row does not change the value of the determinant. Thus adding the second row to the first row of $\begin{bmatrix} s\mathbb{I} - \mathbf{A} & -\mathbf{B}\mathbf{K} \\ \mathbf{L}\mathbf{C} & s\mathbb{I} - \mathbf{A} + \mathbf{B}\mathbf{K} + \mathbf{L}\mathbf{C} \end{bmatrix}$ leads to the following expression of $\chi_{\mathbf{A}_{cl}}(s)$:
$$\chi_{\mathbf{A}_{cl}}(s) = \det\begin{bmatrix} s\mathbb{I} - \mathbf{A} + \mathbf{L}\mathbf{C} & s\mathbb{I} - \mathbf{A} + \mathbf{L}\mathbf{C} \\ \mathbf{L}\mathbf{C} & s\mathbb{I} - \mathbf{A} + \mathbf{B}\mathbf{K} + \mathbf{L}\mathbf{C} \end{bmatrix} \quad (5.80)$$
χAcl (s) = det (sI − Acl ) = det (sI − A + LC) det (sI − A + BK) (5.82)
Setting $\mathbf{A}_{11} = \mathbf{\Phi}^{-1}(s)$, $\mathbf{A}_{21} = -\mathbf{K}$, $\mathbf{A}_{12} = \mathbf{B}$ and $\mathbf{A}_{22} = \mathbb{I}$ we get:
$$
\begin{aligned}
\det\left(s\mathbb{I} - \mathbf{A} + \mathbf{B}\mathbf{K}\right) &= \det\left(\mathbf{\Phi}^{-1}(s) + \mathbf{B}\mathbf{K}\right) \\
&= \det\begin{bmatrix} \mathbf{\Phi}^{-1}(s) & \mathbf{B} \\ -\mathbf{K} & \mathbb{I} \end{bmatrix} \\
&= \det(\mathbf{A}_{11})\det\left(\mathbf{A}_{22} - \mathbf{A}_{21}\mathbf{A}_{11}^{-1}\mathbf{A}_{12}\right) \\
&= \det\left(\mathbf{\Phi}^{-1}(s)\right)\det\left(\mathbb{I} + \mathbf{K}\mathbf{\Phi}(s)\mathbf{B}\right) \\
&= \det\left(s\mathbb{I} - \mathbf{A}\right)\det\left(\mathbb{I} + \mathbf{K}\mathbf{\Phi}(s)\mathbf{B}\right)
\end{aligned} \quad (5.87)
$$
It is worth noticing that the same result can be obtained by using the following properties of the determinant: $\det(\mathbb{I} + \mathbf{M}_1\mathbf{M}_2\mathbf{M}_3) = \det(\mathbb{I} + \mathbf{M}_3\mathbf{M}_1\mathbf{M}_2) = \det(\mathbb{I} + \mathbf{M}_2\mathbf{M}_3\mathbf{M}_1)$ and $\det(\mathbf{M}_1\mathbf{M}_2) = \det(\mathbf{M}_2\mathbf{M}_1)$. Indeed:
$$
\begin{aligned}
\det\left(s\mathbb{I} - \mathbf{A} + \mathbf{B}\mathbf{K}\right) &= \det\left(\left(s\mathbb{I} - \mathbf{A}\right)\left(\mathbb{I} + \left(s\mathbb{I} - \mathbf{A}\right)^{-1}\mathbf{B}\mathbf{K}\right)\right) \\
&= \det\left(\left(s\mathbb{I} - \mathbf{A}\right)\left(\mathbb{I} + \mathbf{\Phi}(s)\mathbf{B}\mathbf{K}\right)\right) \\
&= \det\left(s\mathbb{I} - \mathbf{A}\right)\det\left(\mathbb{I} + \mathbf{\Phi}(s)\mathbf{B}\mathbf{K}\right) \\
&= \det\left(s\mathbb{I} - \mathbf{A}\right)\det\left(\mathbb{I} + \mathbf{K}\mathbf{\Phi}(s)\mathbf{B}\right)
\end{aligned} \quad (5.88)
$$
This relationship does not lead to the value of gain $\mathbf{K}$ because $\mathbf{N}_{ol}(\lambda_{Ki})\underline{\omega}_i$ is a vector, which is not invertible. Nevertheless, $n$ denoting the order of the state matrix $\mathbf{A}$, we can apply this relationship for the $n$ desired closed-loop eigenvalues. We get:
$$\mathbf{K}\begin{bmatrix} \underline{v}_{K1} & \cdots & \underline{v}_{Kn} \end{bmatrix} = -\begin{bmatrix} \underline{p}_1 & \cdots & \underline{p}_n \end{bmatrix} \quad (5.94)$$
We finally retrieve expression (5.136) of the static state feedback gain matrix $\mathbf{K}$:
$$\mathbf{K} = -\begin{bmatrix} \underline{p}_1 & \cdots & \underline{p}_n \end{bmatrix}\begin{bmatrix} \underline{v}_{K1} & \cdots & \underline{v}_{Kn} \end{bmatrix}^{-1} \quad (5.96)$$
3
L. S. Shieh, H. M. Dib and R. E. Yates, Sequential design of linear quadratic state
regulators via the optimal root-locus techniques, IEE Proceedings D - Control Theory and
Applications, vol. 135, no. 4, pp. 289-294, July 1988.
5.6. Pre-filtering applied to SISO plants 157
We have seen in section 1.5 that the (transmission) zeros of the open-loop transfer function $\mathbf{F}(s) = \mathbf{C}\left(s\mathbb{I} - \mathbf{A}\right)^{-1}\mathbf{B} + \mathbf{D}$ are defined as the values of $s$ such that the rank of Rosenbrock's system matrix $\mathbf{R}(s) = \begin{bmatrix} s\mathbb{I} - \mathbf{A} & -\mathbf{B} \\ \mathbf{C} & \mathbf{D} \end{bmatrix}$ is lower than its normal rank, meaning that the rank of $\mathbf{R}(s)$ drops.
Now, assume that we apply the following feedback on the plant:
The (transmission) zeros of the closed-loop transfer function $\mathbf{G}(s)$ are defined as the values of $s$ such that the rank of Rosenbrock's system matrix $\mathbf{R}_{cl}(s)$ is lower than its normal rank, where $\mathbf{R}_{cl}(s)$ is defined as follows:
$$\mathbf{R}_{cl}(s) = \begin{bmatrix} s\mathbb{I} - \left(\mathbf{A} - \mathbf{B}\mathbf{K}\right) & -\mathbf{B}\mathbf{H} \\ \mathbf{C} - \mathbf{D}\mathbf{K} & \mathbf{D}\mathbf{H} \end{bmatrix} \quad (5.101)$$
Thus, assuming that R(s) is a square matrix, we can write det (Rcl (s)) =
det (R(s)) det (H), from which it follows that the (transmission) zeros of a plant
are invariant under state feedback.
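The invariance of the transmission zeros can be spot-checked numerically: Rcl(s) factors as R(s) times a block matrix whose determinant is det(H), so the two determinants differ by a constant factor only. A sketch with arbitrary illustrative matrices:

```python
import numpy as np

# Arbitrary SISO plant and gains (illustrative values, not from the text):
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
K = np.array([[4.0, 5.0]])
H = np.array([[2.0]])
s = 0.3

R = np.block([[s * np.eye(2) - A, -B], [C, D]])                       # open loop
Rcl = np.block([[s * np.eye(2) - (A - B @ K), -B @ H], [C - D @ K, D @ H]])
# det(Rcl) = det(R) * det(H): the zeros (where det R drops) coincide
```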
As shown in Figure 5.4, the prefilter Cpf(s) is a controller which is situated outside the feedback loop.
What is the purpose of the prefilter? Once the state feedback gain $\mathbf{K}$ is designed, the eigenvalues of the closed-loop state matrix $\mathbf{A} - \mathbf{B}\mathbf{K}$ are set, but not the zeros of the closed-loop transfer function $\mathbf{G}(s)$:
$$\mathbf{G}(s) = \frac{Z(s)}{R_{pf}(s)} = \mathbf{N}\left(s\mathbb{I} - \left(\mathbf{A} - \mathbf{B}\mathbf{K}\right)\right)^{-1}\mathbf{B} \quad (5.104)$$
$$\frac{Z(s)}{R(s)} = \frac{K_{pf}}{D_{cl}(s)} \quad (5.106)$$
As a consequence the transfer function of the prefilter reads:
$$C_{pf}(s) = \frac{K_{pf}}{N_{cl}(s)} \quad (5.107)$$
Note that this is only possible because the roots of $N_{cl}(s)$ have negative real parts, meaning $C_{pf}(s)$ is stable.
5.7. Control with integral action 159
Figure 5.5: State feedback loop with prefilter inside the closed-loop
Usually the constant $K_{pf}$ is set such that the static gain of $\frac{Z(s)}{R(s)}$ is unitary, meaning that the position error is zero:
$$\left.\frac{Y(s)}{R(s)}\right|_{s=0} = 1 \Rightarrow K_{pf} = D_{cl}(0) \quad (5.108)$$
Additionally the numerator of the prefilter may also cancel some slow stable poles (poles in the left half-plane) of the closed-loop system when they are not placed by the controller $\mathbf{K}$. In this case, the numerator of the prefilter $C_{pf}(s)$ is no longer a constant.
Equivalently, the pre-filter may be inserted inside the closed loop, as shown in Figure 5.5.
Figures 5.4 and 5.5 are equivalent as soon as the following relationship holds:
$$C_{pf}(s)\mathbf{G}(s) = \frac{C_2(s)\mathbf{G}(s)}{1 + C_2(s)\mathbf{G}(s)} \quad (5.109)$$
$$\begin{cases} \begin{bmatrix} \dot{x}(t) \\ \dot{e}_I(t) \end{bmatrix} = \begin{bmatrix} \mathbf{A} & \mathbf{0} \\ \mathbf{T}\mathbf{C} & \mathbf{0} \end{bmatrix}\begin{bmatrix} x(t) \\ e_I(t) \end{bmatrix} + \begin{bmatrix} \mathbf{B} \\ \mathbf{0} \end{bmatrix}u(t) + \begin{bmatrix} \mathbf{0} \\ -\mathbf{T} \end{bmatrix}r(t) \\ \begin{bmatrix} z(t) \\ e_I(t) \end{bmatrix} = \begin{bmatrix} \mathbf{C} & \mathbf{0} \\ \mathbf{0} & \mathbb{I} \end{bmatrix}\begin{bmatrix} x(t) \\ e_I(t) \end{bmatrix} \end{cases} \quad (5.114)$$
The regulation problem deals with the case where $r(t) = 0$. In that situation the preceding augmented state-space realization has the same structure as the state-space realization (5.111). Thus the same techniques may be applied for the purpose of regulator design.
On the other hand the tracking problem deals with the case where $r(t) \neq 0$. Let's denote $x_a(t)$ the augmented state vector:
$$x_a(t) = \begin{bmatrix} x(t) \\ e_I(t) \end{bmatrix} \quad (5.115)$$
Using the output equation y(t) = Cx(t) + Du(t) the control input u(t) can
be expressed as a function of the state vector x(t) and the reference input r(t):
Substituting the control law (5.119) into the state equation (5.117) of the
system reads:
It is worth noticing that in the special case where the feedforward gain matrix $\mathbf{D}$ is zero ($\mathbf{D} = \mathbf{0}$) and where the output matrix $\mathbf{C}$ is equal to identity ($\mathbf{C} = \mathbb{I}$), the static output feedback controller $\mathbf{K}$ reduces to a static state feedback controller.
Let $\lambda_{K1}, \cdots, \lambda_{Kn}$ be $n$ distinct specified eigenvalues of the closed-loop state matrix $\mathbf{A}_{cl}$. Furthermore we assume that the eigenvalues of matrix $\mathbf{A}$ are different from the eigenvalues $\lambda_{Ki}$ of the closed-loop state matrix $\mathbf{A}_{cl}$. Let $\underline{v}_{Ki}$ be an eigenvector of the closed-loop state matrix $\mathbf{A}_{cl}$ corresponding to the eigenvalue $\lambda_{Ki}$:
$$\left(\mathbf{A} - \mathbf{B}\left(\mathbb{I} + \mathbf{K}\mathbf{D}\right)^{-1}\mathbf{K}\mathbf{C}\right)\underline{v}_{Ki} = \lambda_{Ki}\underline{v}_{Ki} \quad (5.122)$$
AV − VΛp + BP = 0 (5.126)
Where:
$$\begin{cases} \underline{p}_i = \begin{bmatrix} p_{i,1} & \cdots & p_{i,m} \end{bmatrix}^T \\ \mathbf{W}_i = -\left(\mathbf{A} - \lambda_{Ki}\mathbb{I}\right)^{-1}\mathbf{B} \end{cases} \quad (5.129)$$
$$\mathbf{P} = -\left(\mathbb{I} + \mathbf{K}\mathbf{D}\right)^{-1}\mathbf{K}\mathbf{C}\mathbf{V} \Leftrightarrow \left(\mathbb{I} + \mathbf{K}\mathbf{D}\right)\mathbf{P} = -\mathbf{K}\mathbf{C}\mathbf{V} \Leftrightarrow \mathbf{K}\left(\mathbf{C}\mathbf{V} + \mathbf{D}\mathbf{P}\right) = -\mathbf{P} \quad (5.130)$$
Or equivalently:
$$\mathbf{K} = -\begin{bmatrix} \underline{p}_1 & \cdots & \underline{p}_n \end{bmatrix}\left(\mathbf{C}\begin{bmatrix} \underline{v}_{K1} & \cdots & \underline{v}_{Kn} \end{bmatrix} + \mathbf{D}\begin{bmatrix} \underline{p}_1 & \cdots & \underline{p}_n \end{bmatrix}\right)^{-1} \quad (5.133)$$
It is clear that each $(n+m)\times 1$ vector $\begin{bmatrix} \underline{v}_{Ki} \\ \underline{p}_i \end{bmatrix}$ belongs to the kernel of matrix $\begin{bmatrix} \mathbf{A} - \lambda_{Ki}\mathbb{I} & \mathbf{B} \end{bmatrix}$. So once any $(n+m)\times 1$ vector which belongs
To get this result we start by observing that in the SISO case where $\mathbf{D} = \mathbf{0}$ and $\mathbf{C} = \mathbb{I}$ the parameter vectors are scalars; they will be denoted $p_i$. Let vector $\underline{K}_i$ be defined as follows:
$$\underline{K}_i = -\left(\mathbf{A} - \lambda_{Ki}\mathbb{I}\right)^{-1}\mathbf{B} \quad (5.142)$$
of $\mathbf{K}$:
$$
\begin{aligned}
\mathbf{K} &= -\begin{bmatrix} p_1 & \cdots & p_n \end{bmatrix}\begin{bmatrix} \underline{K}_1 p_1 & \cdots & \underline{K}_n p_n \end{bmatrix}^{-1} \\
&= -\begin{bmatrix} p_1 & \cdots & p_n \end{bmatrix}\mathrm{diag}\!\left(\frac{\prod_{i\neq 1}p_i}{\prod_{i=1}^n p_i},\ldots,\frac{\prod_{i\neq n}p_i}{\prod_{i=1}^n p_i}\right)\begin{bmatrix} \underline{K}_1 & \cdots & \underline{K}_n \end{bmatrix}^{-1} \\
&= -\begin{bmatrix} 1 & \cdots & 1 \end{bmatrix}\begin{bmatrix} \underline{K}_1 & \cdots & \underline{K}_n \end{bmatrix}^{-1}
\end{aligned} \quad (5.145)
$$
Using the expression of vector $\underline{K}_i = -\left(\mathbf{A} - \lambda_{Ki}\mathbb{I}\right)^{-1}\mathbf{B}$ provided by Equation (5.142) we finally get:
$$\mathbf{K} = \begin{bmatrix} 1 & \cdots & 1 \end{bmatrix}\begin{bmatrix} \left(\mathbf{A} - \lambda_{K1}\mathbb{I}\right)^{-1}\mathbf{B} & \cdots & \left(\mathbf{A} - \lambda_{Kn}\mathbb{I}\right)^{-1}\mathbf{B} \end{bmatrix}^{-1} \quad (5.146)$$
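Formula (5.146) can be checked against the plant of Example 5.1, where the desired poles -1 and -2 reproduce the gain K = [-6 6] of (5.31):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [2.0]])
poles = [-1.0, -2.0]

# Columns K_i = (A - lam_i I)^{-1} B, then K = [1 ... 1] [K_1 ... K_n]^{-1}
Ki = [np.linalg.solve(A - lam * np.eye(2), B) for lam in poles]
M = np.hstack(Ki)
K = np.ones((1, 2)) @ np.linalg.inv(M)
# K ≈ [[-6, 6]]
```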
AX + XB + C + XDX = 0 (5.147)
Matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ and $\mathbf{D}$ are known whereas matrix $\mathbf{X}$ has to be determined. The general algebraic Lyapunov equation is obtained as a special case of the algebraic Riccati equation by setting $\mathbf{D} = \mathbf{0}$.
The general algebraic Riccati equation can be solved5 by considering the following $2n\times 2n$ matrix $\mathbf{H}$:
$$\mathbf{H} = \begin{bmatrix} \mathbf{B} & \mathbf{D} \\ -\mathbf{C} & -\mathbf{A} \end{bmatrix} \quad (5.148)$$
We will focus our attention on the first equation and split matrix $\mathbf{M}_1$ as follows:
$$\mathbf{M}_1 = \begin{bmatrix} \mathbf{M}_{11} \\ \mathbf{M}_{12} \end{bmatrix} \quad (5.151)$$
Assuming that matrix $\mathbf{M}_{11}$ is nonsingular, we can check that a solution $\mathbf{X}$ of the general algebraic Riccati equation (5.147) reads:
$$\mathbf{X} = \mathbf{M}_{12}\mathbf{M}_{11}^{-1} \quad (5.153)$$
Indeed:
$$
\begin{cases} \mathbf{B}\mathbf{M}_{11} + \mathbf{D}\mathbf{M}_{12} = \mathbf{M}_{11}\mathbf{\Lambda}_1 \\ \mathbf{C}\mathbf{M}_{11} + \mathbf{A}\mathbf{M}_{12} = -\mathbf{M}_{12}\mathbf{\Lambda}_1 \\ \mathbf{X} = \mathbf{M}_{12}\mathbf{M}_{11}^{-1} \end{cases}
\Rightarrow
\begin{aligned}
\mathbf{A}\mathbf{X} + \mathbf{X}\mathbf{B} + \mathbf{C} + \mathbf{X}\mathbf{D}\mathbf{X} &= \mathbf{A}\mathbf{M}_{12}\mathbf{M}_{11}^{-1} + \mathbf{M}_{12}\mathbf{M}_{11}^{-1}\mathbf{B} + \mathbf{C} + \mathbf{M}_{12}\mathbf{M}_{11}^{-1}\mathbf{D}\mathbf{M}_{12}\mathbf{M}_{11}^{-1} \\
&= \left(\mathbf{A}\mathbf{M}_{12} + \mathbf{C}\mathbf{M}_{11}\right)\mathbf{M}_{11}^{-1} + \mathbf{M}_{12}\mathbf{M}_{11}^{-1}\left(\mathbf{B}\mathbf{M}_{11} + \mathbf{D}\mathbf{M}_{12}\right)\mathbf{M}_{11}^{-1} \\
&= -\mathbf{M}_{12}\mathbf{\Lambda}_1\mathbf{M}_{11}^{-1} + \mathbf{M}_{12}\mathbf{M}_{11}^{-1}\mathbf{M}_{11}\mathbf{\Lambda}_1\mathbf{M}_{11}^{-1} = \mathbf{0}
\end{aligned}
$$
Let's consider a static output feedback where the control u(t) is proportional
to output y(t) through gain K as well as reference input r(t):
Then, given any set $\Lambda_p$, there exists a static output feedback gain $\mathbf{K}$ such that the eigenvalues of $\mathbf{A} - \mathbf{B}\mathbf{K}\mathbf{C}$ are precisely the elements of the set $\Lambda_p$. Furthermore, in view of (5.158), the same methodology as in section 5.5.1 can be applied to compute $\mathbf{K}$.
Let $\mathbf{N}_{ol}(s) := \mathrm{adj}\left(s\mathbb{I} - \mathbf{A}\right)\mathbf{B} \in \mathbb{R}^{n\times m}$, where $\mathrm{adj}\left(s\mathbb{I} - \mathbf{A}\right)$ stands for the adjugate matrix of $s\mathbb{I} - \mathbf{A}$, and $D(s) := \det\left(s\mathbb{I} - \mathbf{A}\right)$ is the determinant of $s\mathbb{I} - \mathbf{A}$, that is the characteristic polynomial of the plant:
$$\left(s\mathbb{I} - \mathbf{A}\right)^{-1}\mathbf{B} = \frac{\mathrm{adj}\left(s\mathbb{I} - \mathbf{A}\right)\mathbf{B}}{\det\left(s\mathbb{I} - \mathbf{A}\right)} := \frac{\mathbf{N}_{ol}(s)}{D(s)} \quad (5.159)$$
Consequently, we get from (5.158) the expression of the characteristic
polynomial of the closed-loop transfer function G(s):
This relationship does not lead to the value of gain $\mathbf{K}$ because $\mathbf{N}_{ol}(\lambda_{Ki})\underline{\omega}_i$ is a vector, which is not invertible. Nevertheless, $n$ denoting the order of the state matrix $\mathbf{A}$, we can apply this relationship for the $p$ closed-loop eigenvalues given by $\Lambda_p$. We get:
$$\mathbf{K}\mathbf{C}\begin{bmatrix} \underline{v}_{K1} & \cdots & \underline{v}_{Kp} \end{bmatrix} = -\begin{bmatrix} \underline{p}_1 & \cdots & \underline{p}_p \end{bmatrix} \quad (5.164)$$
We finally retrieve expression (5.136) of the static state feedback gain matrix $\mathbf{K}$ to get the $p$ closed-loop eigenvalues given by $\Lambda_p$:
$$\mathbf{K} = -\mathbf{P}\left(\mathbf{C}\mathbf{V}\right)^{-1} \quad (5.166)$$
where:
$$\begin{cases} \mathbf{P} = \begin{bmatrix} D(\lambda_{K1})\,\underline{\omega}_1 & \cdots & D(\lambda_{Kp})\,\underline{\omega}_p \end{bmatrix} := \begin{bmatrix} \underline{p}_1 & \cdots & \underline{p}_p \end{bmatrix} \\ \mathbf{V} = \begin{bmatrix} \mathbf{N}_{ol}(\lambda_{K1})\,\underline{\omega}_1 & \cdots & \mathbf{N}_{ol}(\lambda_{Kp})\,\underline{\omega}_p \end{bmatrix} := \begin{bmatrix} \underline{v}_{K1} & \cdots & \underline{v}_{Kp} \end{bmatrix} \end{cases} \quad (5.167)$$
Then relationship (5.167) still holds when vectors $\underline{v}_{Ki}$ and $\underline{p}_i$ are defined as follows, where vector $\underline{\nu}_i \neq 0 \in \mathbb{C}^p$ is used as a design parameter:
$$\begin{cases} \underline{v}_{Ki} = \mathbf{N}_{dol}(\lambda_{Ki})\,\underline{\nu}_i \\ \underline{p}_i = D_d(\lambda_{Ki})\,\underline{\nu}_i \end{cases} \quad \forall\, i = 1,\cdots,m \quad (5.169)$$
6
L. S. Shieh, H. M. Dib and R. E. Yates, Sequential design of linear quadratic state
regulators via the optimal root-locus techniques, IEE Proceedings D - Control Theory and
Applications, vol. 135, no. 4, pp. 289-294, July 1988.
7
G. R. Duan, Parametric eigenstructure assignment via output feedback based on singular
value decompositions, Proceedings of the 40th IEEE Conference on Decision and Control (Cat.
No.01CH37228), Orlando, FL, USA, 2001, pp. 2665-2670 vol.3.
5.9. Static output feedback 169
Furthermore, and assuming that $\mathrm{rank}\left(\mathbf{B}\right) = m$ and $\mathrm{rank}\left(\mathbf{C}\right) = p$, the remaining $n-p$ eigenvalues of the closed-loop matrix $\mathbf{A} - \mathbf{B}\mathbf{K}\mathbf{C}$ can be achieved by selecting parameter vectors $\underline{\omega}_i \neq 0$ and $\underline{\nu}_j \neq 0$ such that the following constraints hold:
$$\underline{\nu}_j^T\,\mathbf{N}_{ji}\,\underline{\omega}_i = 0 \quad \text{where} \quad \begin{cases} \underline{\omega}_i \neq 0 \in \mathbb{C}^{m\times 1},\ i = 1,\cdots,p \\ \underline{\nu}_j \neq 0 \in \mathbb{C}^{p\times 1},\ j = p+1,\cdots,n \end{cases} \quad (5.171)$$
Matrices $\mathbf{N}_{dol}(\lambda_{Ki})$ and $\mathbf{N}_{ol}(\lambda_{Kj})$ are defined in (5.159) and (5.170).
The last component of each parameter vector is set as follows:
− If the eigenvalues $\lambda_{Ki}$ and $\lambda_{Kj}$ are complex conjugate, the last component of parameter vectors $\underline{\omega}_i$ and $\underline{\nu}_i$ is set to $1+j$ whereas the last component of parameter vectors $\underline{\omega}_j$ and $\underline{\nu}_j$ is set to $1-j$;
− More generally, Duan7 has shown that to fulfill (5.171) parameter vectors $\underline{\omega}_i$ and $\underline{\nu}_i$ are real as soon as $\lambda_{Ki}$ is real. But if $\lambda_{Ki}$ and $\lambda_{Kj}$ are complex conjugate, that is $\lambda_{Ki} = \bar{\lambda}_{Kj}$, then $\underline{\omega}_i = \bar{\underline{\omega}}_j$ and $\underline{\nu}_i = \bar{\underline{\nu}}_j$.
Alexandridis et al.8 have shown that given a set $\Lambda$ of $n$ eigenvalues $\lambda_{Ki}$ for the closed-loop system, we have to determine $p$ parameter vectors $\underline{\omega}_i$ such that there exist $n-p$ parameter vectors $\underline{\nu}_i$ which solve the set of bilinear algebraic equations (5.171).
From (5.171) there are $p\times(n-p)$ equality constraints which shall be fulfilled. On the other hand, $p$ parameter vectors $\underline{\omega}_i$ with $m-1$ free parameters (the last component is set) and $n-p$ parameter vectors $\underline{\nu}_j$ with $p-1$ free parameters (the last component is set) have to be found. A necessary condition for constraints (5.171) to be solvable is that the number of equations be equal to or less than the sum of the free parameters:
$$p\times(n-p) \leq p\times(m-1) + (n-p)\times(p-1) \Leftrightarrow m\times p \geq n \quad (5.173)$$
8
A. T. Alexandridis and P. N. Paraskevopoulos, A new approach to eigenstructure
assignment by output feedback, IEEE Transactions on Automatic Control, vol. 41, no. 7,
pp. 1046-1050, July 1996
where:
$$\begin{cases} \mathbf{U}_j \text{ and } \mathbf{V}_i \text{ are unitary matrices} \\ \sigma_i \in \mathbb{R}^+\ \forall i = 1,2,\cdots,q \\ \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_q > 0 \\ q = \min(m,p) \text{ assuming that } \mathbf{N}_{ji} \text{ has no zero singular value} \end{cases} \quad (5.175)$$
In all cases, and assuming that $\underline{\omega}_i$ and possibly $\underline{\nu}_j$ have been chosen such that $\det\left(\mathbf{C}\mathbf{V}\right) \neq 0$, the static output feedback gain $\mathbf{K}$ is computed thanks to (5.166).
Thus:
$$\dot{x}_a(t) = \mathbf{A}_a x_a(t) + \mathbf{B}_a u(t) + \begin{bmatrix} \mathbf{0} \\ -\mathbb{I} \end{bmatrix}r(t) \quad (5.180)$$
where:
$$\begin{cases} \mathbf{A}_a = \begin{bmatrix} \mathbf{A} & \mathbf{0} \\ \mathbf{C} & \mathbf{0} \end{bmatrix} \\ \mathbf{B}_a = \begin{bmatrix} \mathbf{B} \\ \mathbf{0} \end{bmatrix} \end{cases} \quad (5.181)$$
Furthermore, assuming that $\dot{r}(t) = 0$, we have:
$$\dot{r}(t) = 0 \Rightarrow \frac{d}{dt}e(t) = \mathbf{C}\dot{x}(t) = \mathbf{C}\mathbf{A}x(t) + \mathbf{C}\mathbf{B}u(t) \quad (5.182)$$
Using the definition of $x_a(t)$, the PID controller reads:
$$
\begin{aligned}
u(t) &= -\left(\mathbf{K}_p e(t) + \mathbf{K}_i \int_0^t e(\tau)d\tau + \mathbf{K}_d \frac{d}{dt}e(t)\right) \\
&= -\mathbf{K}_p\mathbf{C}x(t) + \mathbf{K}_p\mathbf{C}r(t) - \mathbf{K}_i \int_0^t e(\tau)d\tau - \mathbf{K}_d\left(\mathbf{C}\mathbf{A}x(t) + \mathbf{C}\mathbf{B}u(t)\right) \\
&= -\mathbf{K}_p\begin{bmatrix} \mathbf{C} & \mathbf{0} \end{bmatrix}x_a(t) - \mathbf{K}_i\begin{bmatrix} \mathbf{0} & \mathbb{I} \end{bmatrix}x_a(t) - \mathbf{K}_d\begin{bmatrix} \mathbf{C}\mathbf{A} & \mathbf{0} \end{bmatrix}x_a(t) - \mathbf{K}_d\mathbf{C}\mathbf{B}u(t) + \mathbf{K}_p\mathbf{C}r(t) \\
&= -\begin{bmatrix} \mathbf{K}_p & \mathbf{K}_i & \mathbf{K}_d \end{bmatrix}\begin{bmatrix} \mathbf{C} & \mathbf{0} \\ \mathbf{0} & \mathbb{I} \\ \mathbf{C}\mathbf{A} & \mathbf{0} \end{bmatrix}x_a(t) - \mathbf{K}_d\mathbf{C}\mathbf{B}u(t) + \mathbf{K}_p\mathbf{C}r(t)
\end{aligned} \quad (5.183)
$$
We will assume that $\mathbb{I} + \mathbf{K}_d\mathbf{C}\mathbf{B}$ is invertible and define $\mathbf{C}_a$ and $\mathbf{K}_a$ as follows:
$$\begin{cases} \mathbf{C}_a = \begin{bmatrix} \mathbf{C} & \mathbf{0} \\ \mathbf{0} & \mathbb{I} \\ \mathbf{C}\mathbf{A} & \mathbf{0} \end{bmatrix} \\ \mathbf{K}_a = \left(\mathbb{I} + \mathbf{K}_d\mathbf{C}\mathbf{B}\right)^{-1}\begin{bmatrix} \mathbf{K}_p & \mathbf{K}_i & \mathbf{K}_d \end{bmatrix} \end{cases} \quad (5.184)$$
Let $\tilde{\mathbf{K}}_p$, $\tilde{\mathbf{K}}_i$ and $\tilde{\mathbf{K}}_d$ be defined as follows:
$$\begin{cases} \tilde{\mathbf{K}}_p = \left(\mathbb{I} + \mathbf{K}_d\mathbf{C}\mathbf{B}\right)^{-1}\mathbf{K}_p \\ \tilde{\mathbf{K}}_i = \left(\mathbb{I} + \mathbf{K}_d\mathbf{C}\mathbf{B}\right)^{-1}\mathbf{K}_i \\ \tilde{\mathbf{K}}_d = \left(\mathbb{I} + \mathbf{K}_d\mathbf{C}\mathbf{B}\right)^{-1}\mathbf{K}_d \end{cases} \quad (5.185)$$
Assuming that $\tilde{\mathbf{K}}_p$, $\tilde{\mathbf{K}}_i$ and $\tilde{\mathbf{K}}_d$ are known, gains $\mathbf{K}_p$, $\mathbf{K}_i$ and $\mathbf{K}_d$ are obtained as follows, where it can be shown9 that matrix $\mathbb{I} - \mathbf{C}\mathbf{B}\tilde{\mathbf{K}}_d$ is always invertible:
$$\begin{cases} \mathbf{K}_d = \tilde{\mathbf{K}}_d\left(\mathbb{I} - \mathbf{C}\mathbf{B}\tilde{\mathbf{K}}_d\right)^{-1} \\ \mathbf{K}_p = \left(\mathbb{I} + \mathbf{K}_d\mathbf{C}\mathbf{B}\right)\tilde{\mathbf{K}}_p \\ \mathbf{K}_i = \left(\mathbb{I} + \mathbf{K}_d\mathbf{C}\mathbf{B}\right)\tilde{\mathbf{K}}_i \end{cases} \quad (5.186)$$
Thus the problem of PID controller design is changed into the following static output feedback problem:
$$\begin{cases} \dot{x}_a(t) = \mathbf{A}_a x_a(t) + \mathbf{B}_a u(t) \\ \underline{y}_a(t) = \mathbf{C}_a x_a(t) \\ u(t) = -\mathbf{K}_a\underline{y}_a(t) + \left(\mathbb{I} + \mathbf{K}_d\mathbf{C}\mathbf{B}\right)^{-1}\mathbf{K}_p\mathbf{C}r(t) \end{cases} \quad (5.187)$$
It is worth noticing that the same results are obtained, but without the assumption that $\dot{r}(t) = 0$, when a PI-D controller is used; for such a controller the term multiplied by $\mathbf{K}_d$ is $y(t)$ rather than $e(t)$:
$$u(t) = -\left(\mathbf{K}_p e(t) + \mathbf{K}_i\int_0^t e(\tau)d\tau + \mathbf{K}_d\frac{d}{dt}y(t)\right) \quad (5.188)$$
Brasch & Pearson10 have computed the number ni of integrators that can
be added to increase the size of the output vector:
ni = min(pc − 1, po − 1) (5.190)
where pc is the controllability index of the plant and po the observability index
of the plant.
The controllability index $p_c$ of the plant is the smallest integer such that:
$$\mathrm{rank}\begin{bmatrix} \mathbf{B} & \mathbf{A}\mathbf{B} & \cdots & \mathbf{A}^{p_c-1}\mathbf{B} \end{bmatrix} = n \quad (5.191)$$
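Definition (5.191) translates directly into code. A minimal sketch; the double-integrator plant at the end is an assumed illustrative example:

```python
import numpy as np

def controllability_index(A, B):
    """Smallest pc with rank [B AB ... A^{pc-1} B] = n, as in (5.191).
    Returns None if the plant is not controllable."""
    n = A.shape[0]
    blocks = []
    for k in range(n):
        blocks.append(np.linalg.matrix_power(A, k) @ B)
        if np.linalg.matrix_rank(np.hstack(blocks)) == n:
            return k + 1
    return None

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
pc = controllability_index(A, B)
# pc = 2: both B and AB are needed to span the state space
```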
Furthermore the control u(t) of the augmented plant, that is the plant and
the ni integrators, will be taken to be the actual input u(t) of the plant and the
10
F. Brasch and J. Pearson, Pole placement using dynamic compensators, IEEE
Transactions on Automatic Control, vol. 15, no. 1, pp. 34-43, February 1970.
5.10. Mode decoupling 173
Then we define matrices $\mathbf{A}_{ni}$, $\mathbf{B}_{ni}$ and $\mathbf{C}_{ni}$ of the augmented plant as follows, where $\mathbf{0}_{n_i}$ is the null matrix of size $n_i$ and $\mathbb{I}_{n_i}$ is the identity matrix of size $n_i$:
$$\begin{cases} \mathbf{A}_{ni} = \begin{bmatrix} \mathbf{A} & \mathbf{0} \\ \mathbf{0} & \mathbf{0}_{n_i} \end{bmatrix} \\ \mathbf{B}_{ni} = \begin{bmatrix} \mathbf{B} & \mathbf{0} \\ \mathbf{0} & \mathbb{I}_{n_i} \end{bmatrix} \\ \mathbf{C}_{ni} = \begin{bmatrix} \mathbf{C} & \mathbf{0} \\ \mathbf{0} & \mathbb{I}_{n_i} \end{bmatrix} \end{cases} \quad (5.195)$$
Alternatively matrices $\mathbf{A}_{ni}$, $\mathbf{B}_{ni}$ and $\mathbf{C}_{ni}$ of the augmented plant can be defined as follows, where $\mathbf{0}_{p\times r}$ is the null matrix of size $p\times r$ and $\mathbb{I}_{n_i}$ is the identity matrix of size $n_i$:
$$\begin{cases} \mathbf{A}_{ni} = \begin{bmatrix} \mathbf{A} & \mathbf{0}_{n\times n_i} \\ \mathbf{C}_i & \mathbf{0}_{1\times n_i} \\ \mathbf{0}_{(n_i-1)\times n} & \begin{matrix} \mathbb{I}_{n_i-1} & \mathbf{0}_{(n_i-1)\times 1} \end{matrix} \end{bmatrix} \\ \mathbf{B}_{ni} = \begin{bmatrix} \mathbf{B} \\ \mathbf{0} \end{bmatrix} \\ \mathbf{C}_{ni} = \begin{bmatrix} \mathbf{C} & \mathbf{0} \\ \mathbf{0} & \mathbb{I}_{n_i} \end{bmatrix} \end{cases} \quad (5.196)$$
Let's assume that $u(t)$ can be split into $\begin{bmatrix} \underline{u}_1(t) & \underline{u}_2(t) \end{bmatrix}^T$; similarly we assume that $y(t)$ can be split into $\begin{bmatrix} \underline{y}_1(t) & \underline{y}_2(t) \end{bmatrix}^T$. Thus the state-space representation reads:
$$\begin{cases} \dot{x}(t) = \mathbf{A}x(t) + \begin{bmatrix} \mathbf{B}_1 & \mathbf{B}_2 \end{bmatrix}\begin{bmatrix} \underline{u}_1(t) \\ \underline{u}_2(t) \end{bmatrix} \\ \begin{bmatrix} \underline{y}_1(t) \\ \underline{y}_2(t) \end{bmatrix} = \begin{bmatrix} \mathbf{C}_1 \\ \mathbf{C}_2 \end{bmatrix}x(t) \end{cases} \quad (5.199)$$
Thus input u1 (t) and output y 2 (t) will be decoupled as soon as transfer
function Fu1 y2 (s) is null:
We conclude that transfer function Fu1 y2 (s) is null as soon as the following
relationship holds:
Where:
$$\begin{cases} \mathbf{A}_n = \mathbf{P}_n^{-1}\mathbf{A}\mathbf{P}_n := \begin{bmatrix} \tilde{\mathbf{A}}_{11} & \mathbf{0} \\ \tilde{\mathbf{A}}_{21} & \tilde{\mathbf{A}}_{22} \end{bmatrix} \\ \mathbf{B}_n = \begin{bmatrix} \tilde{\mathbf{B}}_1 & \tilde{\mathbf{B}}_2 \end{bmatrix} \text{ where } \tilde{\mathbf{B}}_1 = \mathbf{P}_n^{-1}\mathbf{B}_1 := \begin{bmatrix} \mathbf{0} \\ \tilde{\mathbf{B}}_{21} \end{bmatrix} \\ \mathbf{C}_n = \begin{bmatrix} \tilde{\mathbf{C}}_1 \\ \tilde{\mathbf{C}}_2 \end{bmatrix} \text{ where } \tilde{\mathbf{C}}_2 = \mathbf{C}_2\mathbf{P}_n := \begin{bmatrix} \tilde{\mathbf{C}}_{21} & \mathbf{0} \end{bmatrix} \end{cases} \quad (5.211)$$
Similarly to the open-loop case, the transfer function $\mathbf{G}_K(s)$ of the closed-loop system when the control is $u(t) = -\mathbf{K}x(t) + \mathbf{H}r(t)$ reads:
$$\mathbf{G}_K(s) = \mathbf{C}\left(s\mathbb{I} - \left(\mathbf{A} - \mathbf{B}\mathbf{K}\right)\right)^{-1}\mathbf{B}\mathbf{H} \quad (5.216)$$
As in the open-loop case, the transfer function $\mathbf{G}_K(s)$ of the closed-loop system may be expressed as a function of the closed-loop eigenvalues $\lambda_{Ki}$ and the left and right eigenvectors of matrix $\mathbf{A} - \mathbf{B}\mathbf{K}$. Assuming that matrix $\mathbf{A} - \mathbf{B}\mathbf{K}$ is diagonalizable we have:
$$\mathbf{G}_K(s) = \sum_{i=1}^n \frac{\mathbf{C}\underline{v}_{Ki}\underline{w}_{Ki}^T\mathbf{B}\mathbf{H}}{s - \lambda_{Ki}} \quad (5.217)$$
Figure 5.6 presents the modal decomposition of the transfer function where $x_m(t)$ is the state vector expressed in the modal basis and matrices $\mathbf{\Lambda}_{cl}$, $\mathbf{P}$ and $\mathbf{P}^{-1}$ are defined as follows:
$$\begin{cases} \mathbf{\Lambda}_{cl} = \mathrm{diag}\left(\lambda_{K1},\ldots,\lambda_{Kn}\right) \\ \mathbf{P} = \begin{bmatrix} \underline{v}_{K1} & \cdots & \underline{v}_{Kn} \end{bmatrix} \\ \mathbf{P}^{-1} = \begin{bmatrix} \underline{w}_{K1}^T \\ \vdots \\ \underline{w}_{Kn}^T \end{bmatrix} \end{cases} \quad (5.218)$$
− Mode $\lambda_{Ki}$ will not appear in the $j$-th component of the state vector $x(t)$ if the following relationship holds:
$$\underline{f}_j^T\underline{v}_{Ki} = 0 \quad (4.219)$$
For instance, if $\underline{v}_{Ki} = \begin{bmatrix} \ast & \ast & 0 & \ast \end{bmatrix}^T$, where $\ast$ represents unspecified components, then:
$$\underline{v}_{Ki} = \begin{bmatrix} \ast & \ast & 0 & \ast \end{bmatrix}^T \Rightarrow \underline{f}_j^T\underline{v}_{Ki} = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}\underline{v}_{Ki} = 0 \quad (5.221)$$
− Mode λKi will not appear in the j th component of the output vector y(t)
if the following relationship holds:
f Tj Cv Ki = 0 (5.222)
− Similarly mode λKi will not be excited by the j th component of the control
vector u(t) if the following relationship holds:
f Tj Kv Ki = 0 (5.223)
This relationship comes from the fact that the control vector $u(t)$ is built from the state feedback $-\mathbf{K}x(t)$.
− Finally mode λKi will not be excited by the j th component of the reference
input r(t) if the following relationship holds:
Denoting $\mathbf{\Sigma} = \mathrm{diag}\left(\sigma_{i1},\ldots,\sigma_{in}\right)$ we get:
$$\mathbf{S}(\lambda_{Ki}) = \mathbf{U}\begin{bmatrix} \mathbf{\Sigma} & \mathbf{0} \end{bmatrix}\mathbf{V}^T \Leftrightarrow \mathbf{S}(\lambda_{Ki})\mathbf{V} = \mathbf{U}\begin{bmatrix} \mathbf{\Sigma} & \mathbf{0} \end{bmatrix} = \begin{bmatrix} \mathbf{U}\mathbf{\Sigma} & \mathbf{0} \end{bmatrix} \quad (5.228)$$
$$\mathbf{V} = \begin{bmatrix} \underline{v}_{i,1} & \cdots & \underline{v}_{i,n} & \underline{v}_{i,(n+1)} & \cdots & \underline{v}_{i,(n+m)} \end{bmatrix} \quad (5.229)$$
From (5.228) it is clear that the set of vectors $\underline{v}_{i,(n+1)}, \cdots, \underline{v}_{i,(n+m)}$ satisfy the following relationship:
$$\mathbf{R}(\lambda_{Ki}) = \begin{bmatrix} \underline{v}_{i,(n+1)} & \cdots & \underline{v}_{i,(n+m)} \end{bmatrix} \quad (5.231)$$
14
P. Kocsis, R. Fonod, Eigenstructure Decoupling in State Feedback Control Design, ATP
Journal plus, HMH s.r.o., 2012, ATP Journal plus, 2, pp.34-39. <hal-00847146>
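The SVD-based computation of the kernel basis R(λKi) in (5.228)-(5.231) can be sketched with numpy; the numerical matrices below reuse A and B of the Fossard example (5.250) with λK1 = -1:

```python
import numpy as np

def allowed_eigenvector_basis(A, B, lam):
    """Basis of the null space of S(lam) = [A - lam*I  B] via SVD:
    the last m right-singular vectors, i.e. the columns
    v_{i,(n+1)}, ..., v_{i,(n+m)} of (5.229)/(5.231)."""
    n, m = A.shape[0], B.shape[1]
    S = np.hstack([A - lam * np.eye(n), B])
    _, _, Vt = np.linalg.svd(S)
    return Vt[n:, :].T      # (n+m) x m matrix spanning ker S

A = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
R = allowed_eigenvector_basis(A, B, -1.0)
# every column r of R satisfies [A + I  B] @ r = 0
```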
v Ki = N(λKi ) z i (5.234)
− Finally decompose matrix $\mathbf{B}$ as follows, where $\mathbf{Y}$ is a nonsingular matrix and where $\mathbf{U} = \begin{bmatrix} \mathbf{U}_0 & \mathbf{U}_1 \end{bmatrix}$ is an orthogonal matrix such that:
$$\mathbf{B} = \begin{bmatrix} \mathbf{U}_0 & \mathbf{U}_1 \end{bmatrix}\begin{bmatrix} \mathbf{Y} \\ \mathbf{0} \end{bmatrix} \quad (5.235)$$
One possible way to derive this decomposition is to use the singular value decomposition of $\mathbf{B}$:
$$\mathbf{B} = \mathbf{U}\begin{bmatrix} \mathbf{\Sigma} \\ \mathbf{0} \end{bmatrix}\mathbf{V}^T \quad (5.236)$$
$$\begin{cases} \mathbf{Y} = \mathbf{\Sigma}\mathbf{V}^T \\ \mathbf{U} = \begin{bmatrix} \mathbf{U}_0 & \mathbf{U}_1 \end{bmatrix} \end{cases} \quad (5.237)$$
$$\begin{bmatrix} \cdots & \underline{v}_{Ki} & \bar{\underline{v}}_{Ki} & \cdots \end{bmatrix} \rightarrow \begin{bmatrix} \cdots & \mathrm{Re}(\underline{v}_{Ki}) & \mathrm{Im}(\underline{v}_{Ki}) & \cdots \end{bmatrix} \quad (5.244)$$
Furthermore eigenvalues $\lambda_{Ki}$ and $\bar{\lambda}_{Ki}$ in the diagonal matrix $\mathbf{\Lambda}_{cl}$ are replaced by $\mathrm{Re}(\lambda_{Ki})$ and $\mathrm{Im}(\lambda_{Ki})$ as follows:
$$\begin{bmatrix} \ddots & & \\ & \begin{matrix} \lambda_{Ki} & \\ & \bar{\lambda}_{Ki} \end{matrix} & \\ & & \ddots \end{bmatrix} \rightarrow \begin{bmatrix} \ddots & & \\ & \begin{matrix} \mathrm{Re}(\lambda_{Ki}) & \mathrm{Im}(\lambda_{Ki}) \\ -\mathrm{Im}(\lambda_{Ki}) & \mathrm{Re}(\lambda_{Ki}) \end{matrix} & \\ & & \ddots \end{bmatrix} \quad (5.245)$$
That is:
$$\mathbf{A}_{cl}\begin{bmatrix} \mathrm{Re}(\underline{v}_{Ki}) & \mathrm{Im}(\underline{v}_{Ki}) \end{bmatrix} = \begin{bmatrix} \mathrm{Re}(\underline{v}_{Ki}) & \mathrm{Im}(\underline{v}_{Ki}) \end{bmatrix}\begin{bmatrix} \mathrm{Re}(\lambda_{Ki}) & \mathrm{Im}(\lambda_{Ki}) \\ -\mathrm{Im}(\lambda_{Ki}) & \mathrm{Re}(\lambda_{Ki}) \end{bmatrix} \quad (5.248)$$
5.10.4 Example
Following an example provided by A. Fossard16 we consider the following system:
$$\begin{cases} \dot{x}(t) = \mathbf{A}x(t) + \mathbf{B}u(t) \\ y(t) = \mathbf{C}x(t) \end{cases} \quad (5.249)$$
where:
$$\begin{cases} \mathbf{A} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} \\ \mathbf{B} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} \\ \mathbf{C} = \begin{bmatrix} 0 & 1 & -1 \\ 1 & 0 & 0 \end{bmatrix} \end{cases} \quad (5.250)$$
This system has $m = 2$ inputs, $n = 3$ states and $p = 2$ outputs and is both controllable and observable. We wish to find a state feedback matrix $\mathbf{K}$ such that the closed-loop eigenvalues are $\lambda_{K1} = -1$, $\lambda_{K2} = -2$, $\lambda_{K3} = -3$.
Moreover it is desired that the rst output y1 (t) of y(t) is decoupled from
the rst mode λK1 whereas the second output y2 (t) of y(t) is decoupled from
the last two modes λK2 , λK3 .
The decoupling specifications lead to the following expression of the product
CP, where ∗ represents unspecified components:
CP = C [v_K1 v_K2 v_K3] = [0 ∗ ∗; ∗ 0 0]   (5.251)
[0 1 −1] v_K1 = 0 ⇒ [1 0] C v_K1 = f_1^T C v_K1 = 0   (5.252)
16 A. Fossard, Commande modale des systèmes dynamiques, lecture notes, Sup'Aéro, 1994
As far as each matrix R(λ_Ki) here reduces to a single vector, we set the
non-zero parameter z_i to 1; as a consequence vector v_Ki = N(λ_Ki) z_i is set to
N(λKi ):
v_K1 = N(λ_K1) = [−0.2970443; −0.1980295; −0.1980295]
v_K2 = N(λ_K2) = [0; 0.5070926; −0.1690309]   (5.257)
v_K3 = N(λ_K3) = [0; 0.3405026; −0.0851257]
Furthermore no update of vector v Ki has to be considered because the
number of columns of N(λKi ) is equal to 1.
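The directions in (5.257) can be cross-checked numerically. The sketch below (Python with NumPy/SciPy; the helper `decoupled_direction` is our naming, not from the lecture notes) appends the decoupling row c^T to S(λ_Ki) = [A − λ_Ki I  B] and extracts the kernel of the stacked matrix:

```python
import numpy as np
from scipy.linalg import null_space

# Plant data from (5.250)
A = np.array([[1., 0., 0.], [1., 0., 1.], [0., 1., 1.]])
B = np.array([[0., 1.], [1., 0.], [0., 1.]])
C = np.array([[0., 1., -1.], [1., 0., 0.]])

def decoupled_direction(lam, c_row):
    # Stack S(lambda) = [A - lambda*I  B] with the decoupling constraint [c^T, 0]
    S = np.hstack([A - lam * np.eye(3), B])
    constraint = np.hstack([c_row, np.zeros(2)])
    N = null_space(np.vstack([S, constraint]))
    return N[:3, 0]   # state part of the (here one-dimensional) kernel

v1 = decoupled_direction(-1., C[0])   # y1 decoupled from lambda_K1
v2 = decoupled_direction(-2., C[1])   # y2 decoupled from lambda_K2
print(v1)   # proportional to [-0.2970443, -0.1980295, -0.1980295]
```

Up to sign (the kernel basis returned by `null_space` is normalized but its sign is arbitrary), these reproduce the values listed in (5.257).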
Finally a singular value decomposition of B is performed:
B = U [Σ; 0] V^T
  = [0.7071068 0 −0.7071068; 0 −1 0; 0.7071068 0 0.7071068] [1.4142136 0; 0 1; 0 0] [0 1; −1 0]   (5.258)
Then we define Y = Σ V^T and suitably split U = [U0 U1] such that U0 has m = 2 columns:
Y = Σ V^T = [1.4142136 0; 0 1] [0 1; −1 0] = [0 1.4142136; −1 0]   (5.259)
U0 = [0.7071068 0; 0 −1; 0.7071068 0]
Assuming that compensator C(s) has the same dimension as plant F(s),
that is nc = n, and from the following settings:
Bcy = L
Bcr = B
Cc = −K   (5.265)
Dcy = 0
Dcr = I
We get:
ẋc(t) = Ac xc(t) + L y(t) + B r(t)
u(t) = −K xc(t) + r(t)   (5.266)
17 G. Radman, Design of a dynamic compensator for complete pole-zero placement, The Twentieth Southeastern Symposium on System Theory, Charlotte, NC, USA, 1988, pp. 176-177
From the second relationship we get r(t) = u(t) + K xc(t). Thus the previous
state space representation reads:
ẋc(t) = (Ac + B K) xc(t) + B u(t) + L y(t)
u(t) = −K xc(t) + r(t)   (5.267)
Combining the dynamics of the state vector x(t) and of the estimation error
e(t) leads to the following state-space representation for the closed-loop system:
[ẋ(t); ė(t)] = [A − BK  BK; A − LC − (Ac + BK)  Ac + BK] [x(t); e(t)] + [B; 0] r(t)   (5.271)
Then, setting Ac such that:
Ac = A − LC − BK   (5.272)
we get:
[ẋ(t); ė(t)] = [A − BK  BK; 0  A − LC] [x(t); e(t)] + [B; 0] r(t)   (5.273)
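The block-triangular structure of (5.273) — the spectrum of the closed loop is the union of the state feedback and observer spectra — can be sketched numerically on the plant (5.250); the observer poles −4, −5, −6 below are hypothetical choices for this illustration, not values from the text:

```python
import numpy as np
from scipy.signal import place_poles

# Plant (5.250)
A = np.array([[1., 0., 0.], [1., 0., 1.], [0., 1., 1.]])
B = np.array([[0., 1.], [1., 0.], [0., 1.]])
C = np.array([[0., 1., -1.], [1., 0., 0.]])

K = place_poles(A, B, [-1., -2., -3.]).gain_matrix        # state feedback gain
L = place_poles(A.T, C.T, [-4., -5., -6.]).gain_matrix.T  # observer gain

# Closed loop in (x, e) coordinates with Ac = A - LC - BK, cf. (5.272)-(5.273):
# block triangular, so eig(Acl) = eig(A - BK) union eig(A - LC)
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((3, 3)), A - L @ C]])
ev = np.sort(np.linalg.eigvals(Acl).real)
print(ev)
```

The printed eigenvalues are the six requested poles, illustrating the separation between controller and observer design.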
To get:
U(s) = C(s) [R(s); Y(s)]   (5.276)
When setting:
Ac = Bcy = Bcr = Cc = 0
Dcy = −Kc   (5.280)
Dcr = H
− Ba is a matrix of size (n + nc) × (m + nc);
− Ca is a matrix of size (p + nc) × (n + nc);
− Ka is a matrix of size (m + nc) × (p + nc).
If we wish to apply Roppenecker's formula (5.133) to set the static output
feedback gain Ka so that np predefined closed-loop eigenvalues are achieved, we
have to notice that matrix Ca Va is a (p + nc) × np matrix. Consequently matrix
Ca Va is square and possibly invertible as soon as:
p + nc = np   (5.289)
The transfer function G(s) of the closed-loop system between the output
vector y(t) and the reference input vector r(t) reads:
y(t) = G(s) r(t)   (5.294)
Where:
G(s) = [C 0] (sI − (Aa − Ba Ka Ca))^−1 Ba Ha
     = [C 0] (sI − (Aa − Ba Ka Ca))^−1 [B Dcr; Bcr]   (5.295)
     = [C 0] [sI − (A − B Dcy C)  B Cc; Bcy C  sI + Ac]^−1 [B Dcr; Bcr]
where:
P^−1 Acl P = diag(λ_K1, · · · , λ_Kn)   (5.300)
19 H. Sirisena, S. Choi, Pole placement in prescribed regions of the complex plane using output feedback, IEEE Transactions on Automatic Control, 1975, pp. 810-812
20 https://en.wikipedia.org/wiki/Bauer-Fike_theorem
The induced matrix 2-norm ∥P∥2 is defined as the largest singular value of
P, that is the square root of the largest eigenvalue of P^T P (or P P^T); similarly
∥∆Acl∥2 is the largest singular value of ∆Acl.
According to the preceding equation, to guarantee a small variation of the
assigned poles against possible perturbations, one has to achieve a small
condition number κ(P) of the eigenvector matrix.
To get this result we first rewrite the relationship which links the eigenvalue
λ_Ki and the corresponding right eigenvector v_Ki:
Acl v_Ki = λ_Ki v_Ki
On the other hand the relationship which links the eigenvalue λ_Ki and the
corresponding left eigenvector w_Ki is the following:
w_Ki^T Acl = λ_Ki w_Ki^T
As far as the left and right eigenvectors are normalized such that w_Ki^T v_Ki = 1,
we get:
∆λ_Ki = w_Ki^T ∆Acl v_Ki   (5.307)
By taking the norm of the preceding relationship we finally obtain:
Vi = [v_K1 · · · v_K(i−1) v_K(i+1) · · · v_Kn]   (5.313)
Thus Ji can be interpreted as the inverse of the sine of the angle between
v_Ki and the span of Vi. Minimizing the sensitivity of the eigenvalues of Acl = A − BK to
perturbations can be done by choosing a set of eigenvectors v_Ki so that each is
maximally orthogonal to the space spanned by the remaining vectors. In other
words eigenvectors v_Ki are shaped such that they are as orthogonal as possible to
the remaining eigenvectors, which consequently minimizes the condition number
κ(P) where P = [v_K1 · · · v_Kn].
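A minimal numerical illustration of this sensitivity bound (the Bauer-Fike theorem of footnote 20); the closed-loop matrix and its mildly ill-conditioned eigenvector basis P are hand-picked assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical closed-loop matrix built from a chosen eigenvector basis P
P = np.array([[1., 0.9], [0., 1.]])             # columns = eigenvectors
Acl = P @ np.diag([-1., -2.]) @ np.linalg.inv(P)

dA = 1e-6 * rng.standard_normal((2, 2))         # small perturbation of Acl
ev = np.linalg.eigvals(Acl + dA)

kappa = np.linalg.cond(P, 2)                    # condition number kappa(P)
bound = kappa * np.linalg.norm(dA, 2)           # Bauer-Fike bound on the shift
shift = max(min(abs(e - l) for l in (-1., -2.)) for e in ev)
print(shift <= bound)   # True: each perturbed eigenvalue stays within the bound
```

A well-conditioned P (orthogonal eigenvectors, κ(P) = 1) would give the tightest possible bound, which is exactly the motivation for the robust eigenstructure assignment described above.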
S(λ_Ki) = [A − λ_Ki I  B]   (5.314)
For complex conjugate eigenvalues λ_Ki and λ̄_Ki, the corresponding free
parameter matrices z(λ_Ki) and z(λ̄_Ki) shall be chosen to be equal:
R(Z) = [R(λ_K1) · · · R(λ_Kn)] × Z   (5.318)
− Then Schmid et al.22 have shown that for almost every choice of the
parameter matrix Z the rank of matrix N(Z) is equal to n as well as the
rank of matrix Z. Furthermore the m × n gain matrix K such that the
eigenvalues of Acl = A − BK read (λK1 , · · · , λKn ) is given by:
K = −M(Z)N(Z)−1 (5.320)
22
R. Schmid, P. Pandey, T. Nguyen, Robust Pole Placement With Moore's Algorithm,
IEEE Trans. Automatic Control, 2014, 59(2), 500-505
Appendices
Appendix A

Refresher on linear algebra
A.2 Vectors
A.2.1 Denitions
A column vector, or simply a vector, is a set of numbers which are written in a
column form:
x = [x1; x2; · · · ; xn]   (A.1)
The transpose of a column vector is a row vector:
x^T = [x1 x2 · · · xn]   (A.2)
1 https://www.researchgate.net/publication/242366881_Linear_Algebra_Primer
2 https://atmos.washington.edu/~hakim/591/LA_primer.pdf
Commutative:
x + y = y + x   (A.4)
Associative:
(x + y) + z = x + (y + z)   (A.5)
− The sum (or subtraction) of two vectors which are not of the same size is
undefined.
− The inner product (or dot product) x^T y of two vectors x and y of the same
size is obtained by multiplying the vectors element-wise and summing the results:
x = [x1; x2; · · · ; xn], y = [y1; y2; · · · ; yn] ⇒ x^T y = Σ_i x_i y_i   (A.7)
A.3 Matrices
A.3.1 Denitions
A n × m matrix is a rectangular array of numbers formed by n rows and m
columns:
A = [a11 · · · a1m; ...; an1 · · · anm]   (A.8)
Number aij refers to the number which is situated on the ith row and the
jth column.
Matrix and vectors can be used to represent a system of equations in a
compact form:
a11 x1 + · · · + a1m xm = b1
...
an1 x1 + · · · + anm xm = bn
⇔ [a11 · · · a1m; ...; an1 · · · anm] [x1; ...; xm] = [b1; ...; bn]   (A.9)
⇔ A x = b
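In practice such systems are solved with a linear algebra routine rather than by hand; a minimal NumPy sketch with hypothetical coefficients:

```python
import numpy as np

# The system {x1 + 2*x2 = 5, 3*x1 + 4*x2 = 6} in the compact form A x = b
A = np.array([[1., 2.], [3., 4.]])
b = np.array([5., 6.])
x = np.linalg.solve(A, b)   # solves A x = b without forming A^{-1}
print(x)
```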
− A square matrix is a matrix with the same number of rows and columns;
− A diagonal matrix is a square matrix in which the numbers outside the
main diagonal are all zero;
− The identity matrix I is a diagonal matrix having only ones along the main
diagonal:
I = [1 0 · · · 0; 0 1 · · · 0; ...; 0 · · · 0 1]   (A.10)
− The transpose of a matrix A has rows and columns which are interchanged:
the first row becomes the first column, the second row becomes the second
column and so on. The transpose of a matrix A is denoted A^T:
A = [a11 · · · a1m; ...; an1 · · · anm] ⇒ A^T = [a11 · · · an1; ...; a1m · · · anm]   (A.11)
A.3.3 Properties
For any matrices A, B and C the following hold:
− A+B=B+A
− (A + B) + C = A + (B + C)
− IA = AI = A
− (AB) C = A (BC)
− A (B + C) = AB + AC
− A0 = I
− (AB)T = BT AT
The inverse of a square matrix A is the matrix denoted A^−1 such that:
A A^−1 = A^−1 A = I
The number on the ith row and jth column of the adjoint matrix adj(A) is the
cofactor of a_ji (that is, adj(A) is the transpose of the matrix of cofactors). The
cofactor of aij is the determinant of the submatrix Aij
obtained by removing the ith row and the jth column from A, multiplied by
(−1)^(i+j).
For a 2 × 2 square matrix we get:
A = [a11 a12; a21 a22] ⇒ det(A) = a11 a22 − a21 a12
                         adj(A) = [a22 −a12; −a21 a11]   (A.22)
⇒ A^−1 = 1/(a11 a22 − a21 a12) [a22 −a12; −a21 a11]
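The 2 × 2 formula (A.22) can be checked against a library inverse; a minimal sketch:

```python
import numpy as np

def inv2x2(A):
    # A^{-1} = adj(A) / det(A) for a 2x2 matrix, cf. (A.22)
    a11, a12 = A[0]
    a21, a22 = A[1]
    det = a11 * a22 - a21 * a12
    adj = np.array([[a22, -a12], [-a21, a11]])
    return adj / det

A = np.array([[4., 7.], [2., 6.]])
print(inv2x2(A))   # matches np.linalg.inv(A)
```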
It can be shown that:
− If det(A) ≠ 0 then A is nonsingular;
s x = A x   (A.24)
(sI − A) x = 0   (A.25)
Appendix B

Overview of Lagrangian Mechanics
where:
q = [q1 , · · · , qn ]T (B.2)
− The Lagrangian L denotes the difference between the kinetic energy, which
is denoted T(q, q̇), and the potential energy, which is denoted V(q). The
kinetic energy T(q, q̇) depends on the generalized coordinates q and also
on their derivatives q̇ whereas the potential energy V(q) is a function of
the generalized coordinates q only:
− For a rigid body with mass m and moment of inertia I the kinetic energy
T is obtained as the sum between the kinetic energy due to the linear
velocity v of the body and its angular velocity ω , both velocities being
expressed in an inertial frame:
T = (1/2) m v^T v + (1/2) ω^T I ω   (B.4)
It is worth noticing that the kinetic and the potential energy have to be
evaluated in an inertial frame.
T = (1/2) m v_b^T v_b + (1/2) ω_b^T I ω_b   (B.5)
v = Rib(η) v_b   (B.6)
ω_b = W(η) η̇   (B.7)
(B.9)
T = (1/2) m v^T v + (1/2) η̇^T J(η) η̇   (B.10)
Finally, let IP be the inertia matrix with respect to a point P of the rigid
body, v_P^b the linear velocity of P expressed in the non-inertial frame, ω_P^b
its angular velocity expressed in the non-inertial frame and r_PG the vector
between the rigid body centre of mass G and P. Then, denoting by ×
the cross product between two vectors, the kinetic energy T of P reads as
follows1:
T = (1/2) m (v_P^b)^T v_P^b + (1/2) (ω_P^b)^T IP ω_P^b + m (v_P^b)^T (ω_P^b × r_PG)   (B.12)
1 A. Tastemirov, A. Lecchini-Visintini, R. M. Morales-Viviescas, Complete dynamic model of the Twin Rotor MIMO System (TRMS) with experimental validation, Control Engineering Practice 66 (2017) 89-98
− The term C(q, q̇) q̇ is the so-called Coriolis (terms involving products
q̇i q̇j, i ≠ j) and centrifugal (terms involving products q̇i²) forces matrix.
It is worth noticing that the kth row of matrix C(q, q̇), which will be
denoted c_k^T(q, q̇), can be obtained thanks to the following relationship:
c_k^T(q, q̇) = q̇^T S_k(q)
S_k(q) = (1/2) (∂J_k(q)/∂q + (∂J_k(q)/∂q)^T − ∂J(q)/∂q_k)   (B.16)
Assume now that the generalized coordinates q are not all independent but
subject to m constraints:
gj (q) = 0 j = 1, · · · , m (B.17)
Then the variations of δqi are not free but must obey to the following
relationships:
δg_j(q) = Σ_(i=1)^(n) (∂g_j(q)/∂q_i) δq_i = 0,  j = 1, · · · , m   (B.18)
The coordinates of the centre of gravity within the inertial frame read:
OG(t) = [xG(t); yG(t)] = [l sin(θ(t)); −l cos(θ(t))]   (B.20)
By taking the derivative we get the components of the velocity vector as well
as the square of its norm:
v(t) = (d/dt) OG(t) = [l θ̇ cos(θ); l θ̇ sin(θ)] ⇒ v(t)^T v(t) = l² θ̇²   (B.21)
The kinetic energy T (q, q̇) and the potential energy V (q) read:
The non-conservative generalized forces (forces and torques) are here the
torque u(t) applied by the motor as well as the friction torque −k θ̇ which is
proportional to the angular velocity θ̇:
Q = u(t) − k θ̇ (B.25)
That is:
(ml2 + I)θ̈ + k θ̇ + mgl sin (θ) = u(t) (B.27)
It is clear that the preceding equation can be written as J(q)q̈ + D(q)q̇ +
G(q) = Q (cf. (B.14)) where the term D(q)q̇ corresponds to the friction torque
k θ̇.
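Equation (B.27) is easy to integrate numerically. The sketch below (the numerical parameter values are hypothetical) simulates the unforced pendulum and checks that the friction torque −k θ̇ dissipates mechanical energy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical pendulum parameters
m, l, I, k, g = 1.0, 0.5, 0.1, 0.2, 9.81

def dynamics(t, x, u=0.0):
    theta, dtheta = x
    # (m l^2 + I) theta'' + k theta' + m g l sin(theta) = u, cf. (B.27)
    ddtheta = (u - k * dtheta - m * g * l * np.sin(theta)) / (m * l**2 + I)
    return [dtheta, ddtheta]

sol = solve_ivp(dynamics, (0., 10.), [0.5, 0.], rtol=1e-8, atol=1e-8)

def energy(theta, dtheta):
    # Kinetic energy plus potential energy (zero at the downward position)
    return 0.5 * (m * l**2 + I) * dtheta**2 + m * g * l * (1 - np.cos(theta))

E = energy(sol.y[0], sol.y[1])
print(E[0], E[-1])   # energy decreases under friction
```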
B.3 Quadrotor
The quadcopter structure is presented in Figure B.2. It shows angular velocities
ωi and forces fi created by the four rotors, numbered from i = 1 to i = 4. Torque
direction is opposite to velocities ωi .
Rbi(η) = Rφ Rθ Rψ
       = [1 0 0; 0 cφ sφ; 0 −sφ cφ] [cθ 0 −sθ; 0 1 0; sθ 0 cθ] [cψ sψ 0; −sψ cψ 0; 0 0 1]   (B.29)
       = [cθcψ  cθsψ  −sθ;
          (sφsθcψ − cφsψ)  (sφsθsψ + cφcψ)  sφcθ;
          (cφsθcψ + sφsψ)  (cφsθsψ − sφcψ)  cφcθ]
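The composition (B.29) can be checked numerically; the sketch below builds the product of elementary rotations and verifies that the result is orthogonal with unit determinant, as any rotation matrix must be:

```python
import numpy as np

def R_ib(phi, theta, psi):
    # Product of elementary rotations R_phi R_theta R_psi, cf. (B.29)
    c, s = np.cos, np.sin
    Rphi = np.array([[1, 0, 0], [0, c(phi), s(phi)], [0, -s(phi), c(phi)]])
    Rtheta = np.array([[c(theta), 0, -s(theta)], [0, 1, 0], [s(theta), 0, c(theta)]])
    Rpsi = np.array([[c(psi), s(psi), 0], [-s(psi), c(psi), 0], [0, 0, 1]])
    return Rphi @ Rtheta @ Rpsi

R = R_ib(0.3, -0.2, 1.1)    # arbitrary test angles
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```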
The time derivative of Rib(η) can be expressed thanks to the skew-symmetric
matrix Ω(ν) of the angular velocities in the body frame:
(d/dt) Rib(η) = Rib(η) Ω(ν)  where  Ω(ν) = −Ω(ν)^T = [0 −r q; r 0 −p; −q p 0]   (B.35)
Conversely we have:
η̇ = W(η)^−1 ν   (B.36)
where:
W(η)^−1 = [1  sin(φ)tan(θ)  cos(φ)tan(θ); 0  cos(φ)  −sin(φ); 0  sin(φ)/cos(θ)  cos(φ)/cos(θ)]   (B.37)
− d is the distance between the rotor and the centre of mass of the
quadcopter, that is the arm length basically;
− fi is the thrust force created by each rotor in the direction of the body
zb -axis;
Let vector f_a^i be the thrust force created by all rotors in the inertial frame:
f_a^i = Rib(η) [0; 0; ft] = Rib(η) [0; 0; Σ_(i=1)^(4) Cl ωi²]   (B.39)
Where Rib(η) denotes the rotation matrix from the body frame to the
inertial frame.
τ_a^b = [τφ; τθ; τψ] = [d Cl (ω4² − ω2²); d Cl (ω3² − ω1²); Cd (−ω1² + ω2² − ω3² + ω4²)]   (B.40)
τ_g^b = −Ir (d/dt) [0; 0; Σ_(i=1)^(4) sgn(ωi) ωi] + [p; q; r] × Ir [0; 0; Σ_(i=1)^(4) sgn(ωi) ωi]
      = [Ir q (ω1 − ω2 + ω3 − ω4); −Ir p (ω1 − ω2 + ω3 − ω4); Ir (ω̇1 − ω̇2 + ω̇3 − ω̇4)]   (B.41)
where sgn(ωi) = +1 for counterclockwise propeller rotation and
sgn(ωi) = −1 for clockwise propeller rotation.
We finally get:
τ b = τ ba + τ bg
T(q, q̇) = (1/2) m ξ̇^T ξ̇ + (1/2) ν^T I ν
        = (1/2) m ξ̇^T ξ̇ + (1/2) η̇^T W(η)^T I W(η) η̇   (B.45)
        = (1/2) m ξ̇^T ξ̇ + (1/2) η̇^T J(η) η̇
where we use the symmetric matrix J(η) defined as follows:
J(η) = W(η)^T I W(η) = J(η)^T   (B.46)
From (B.34) and (B.44) matrix J(η) reads:
J(η) = [1 0 0; 0 cφ −sφ; −sθ sφcθ cφcθ] [Ix 0 0; 0 Iy 0; 0 0 Iz] [1 0 −sθ; 0 cφ sφcθ; 0 −sφ cφcθ]
     = [Ix  0  −Ix sθ;
        0  (Iy cφ² + Iz sφ²)  (Iy − Iz) cφ sφ cθ;
        −Ix sθ  (Iy − Iz) cφ sφ cθ  (Ix sθ² + Iy sφ² cθ² + Iz cφ² cθ²)]   (B.47)
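The closed form (B.47) can be checked against the definition J(η) = W(η)^T I W(η) at arbitrary angles; the inertia values below are hypothetical:

```python
import numpy as np

Ix, Iy, Iz = 0.02, 0.03, 0.04      # hypothetical inertias
phi, theta = 0.4, -0.3             # arbitrary test angles
c, s = np.cos, np.sin

# W(eta) such that nu = W(eta) etadot (z-y-x Euler angle kinematics)
W = np.array([[1, 0, -s(theta)],
              [0, c(phi), s(phi) * c(theta)],
              [0, -s(phi), c(phi) * c(theta)]])
I = np.diag([Ix, Iy, Iz])
J = W.T @ I @ W                    # definition (B.46)

# Closed form (B.47)
J_ref = np.array([
    [Ix, 0., -Ix * s(theta)],
    [0., Iy * c(phi)**2 + Iz * s(phi)**2, (Iy - Iz) * c(phi) * s(phi) * c(theta)],
    [-Ix * s(theta), (Iy - Iz) * c(phi) * s(phi) * c(theta),
     Ix * s(theta)**2 + Iy * s(phi)**2 * c(theta)**2 + Iz * c(phi)**2 * c(theta)**2]])
print(np.allclose(J, J_ref))   # True
```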
Thus:
(1/2) η̇^T J(η) η̇ = (1/2) Ix (φ̇ − ψ̇ sin θ)² + (1/2) Iy (θ̇ cos φ + ψ̇ sin φ cos θ)²
                 + (1/2) Iz (θ̇ sin φ − ψ̇ cos φ cos θ)²   (B.48)
Kinetic energy T(q, q̇) as a function of the chosen generalized coordinates
finally reads:
T(q, q̇) = (1/2) m (ẋ² + ẏ² + ż²) + (1/2) Ix (φ̇ − ψ̇ sin θ)²
        + (1/2) Iy (θ̇ cos φ + ψ̇ sin φ cos θ)² + (1/2) Iz (θ̇ sin φ − ψ̇ cos φ cos θ)²   (B.49)
It can be shown that the determinant of the symmetric matrix J(η) reads as
follows and that J(η) is positive definite ∀ θ ≠ (2k + 1) π/2, k = 1, 2, · · ·:
det J(η) = Ix Iy Iz (cos(θ))²   (B.50)
B.3.8 Lagrangian
Consequently Lagrangian L reads:
L = T(q, q̇) − V(q)
  = (1/2) m ξ̇^T ξ̇ + (1/2) η̇^T J(η) η̇ − m g [0 0 1] ξ
  = (1/2) m (ẋ² + ẏ² + ż²) + (1/2) Ix (φ̇ − ψ̇ sin θ)²   (B.52)
  + (1/2) Iy (θ̇ cos φ + ψ̇ sin φ cos θ)² + (1/2) Iz (θ̇ sin φ − ψ̇ cos φ cos θ)²
  − m g z
The preceding equation can be rewritten as follows where C(η, η̇) η̇ is the
Coriolis and centrifugal forces matrix:
The expression of J(η) has been provided in (B.47) whereas the expressions
of the coefficients Cij of matrix C(η, η̇) are the following:
C(η, η̇) = [C11 C12 C13; C21 C22 C23; C31 C32 C33]   (B.58)
where:
C11 = 0
C12 = (Iy − Iz) θ̇ cφ sφ + (1/2) ψ̇ cθ ((Iy − Iz)(sφ² − cφ²) − Ix)
C13 = (Iz − Iy) ψ̇ cφ sφ cθ² + (1/2) θ̇ cθ ((Iy − Iz)(sφ² − cφ²) − Ix)
C21 = (Iz − Iy) θ̇ cφ sφ + (1/2) ψ̇ cθ ((Iz − Iy)(sφ² − cφ²) + Ix)
C22 = (Iz − Iy) φ̇ cφ sφ   (B.59)
C23 = (Iy sφ² + Iz cφ² − Ix) ψ̇ sθ cθ + (1/2) φ̇ cθ ((Iz − Iy)(sφ² − cφ²) + Ix)
C31 = (Iy − Iz) ψ̇ cθ² sφ cφ + (1/2) θ̇ cθ ((Iy − Iz)(cφ² − sφ²) − Ix)
C32 = (Iz − Iy) θ̇ cφ sφ sθ + (Ix − Iy sφ² − Iz cφ²) ψ̇ sθ cθ + (1/2) φ̇ cθ ((Iy − Iz)(cφ² − sφ²) − Ix)
C33 = (Iy − Iz) φ̇ cφ sφ cθ² + Ix θ̇ cθ sθ − (Iy sφ² + Iz cφ²) ψ̇ cθ sθ
It is worth noticing that the kth row of matrix C(η, η̇), which will be
denoted c_k(η, η̇), can be obtained thanks to the following relationship:
c_k(η, η̇) = η̇^T S_k(η)
S_k(η) = (1/2) (∂J_k(η)/∂η + (∂J_k(η)/∂η)^T − ∂J(η)/∂η_k)   (B.60)
Where:
Ω(ν) = −Ω(ν)^T = [0 −r q; r 0 −p; −q p 0]   (B.63)
The preceding Newton-Euler equations are equivalent to equations (B.55)
and (B.61) obtained through the Euler-Lagrange formalism:
v_b = Rbi(η) ξ̇
m v̇_b + m Ω(ν) v_b = f_b          ξ̈ = (1/m) Rib(η) [0; 0; Σ_(i=1)^(4) fi] + [0; 0; −g]
η̇ = W(η)^−1 ν                ⇔
ν̇ = I^−1 (τ_b − Ω(ν) I ν)         η̈ = J(η)^−1 (τ_i − C(η, η̇) η̇)
(B.64)
Where:
τ_i = W(η)^T τ_b   (B.65)
The equivalence of the translational equations of motion is easily verified
thanks to the kinematic relationships.
As far as the Newton-Euler equations related to the rotational motion are
concerned, we get:
ν = W(η) η̇
I ν̇ + Ω(ν) I ν = τ_b
⇒ ν̇ = Ẇ(η) η̇ + W(η) η̈   (B.66)
⇒ I (Ẇ(η) η̇ + W(η) η̈) + Ω(ν) I W(η) η̇ = τ_b
⇒ I W(η) η̈ + (I Ẇ(η) + Ω(ν) I W(η)) η̇ = τ_b
components of v_b, that is the velocity of the center of mass when the center of
mass is considered (that is when ∆x = ∆y = ∆z = 0), reads as follows2:
[m I  −∆; ∆  I] [v̇_b; ν̇] + [m Ω(ν)  −Ω(ν)∆; Ω(ν)∆  Ω(ν)I − V∆] [v_b; ν] = [f_b; τ_b]   (B.68)
Where:
Ω(ν) = [0 −r q; r 0 −p; −q p 0]
∆ = [0  −m∆z  m∆y; m∆z  0  −m∆x; −m∆y  m∆x  0]   (B.69)
V = [0  −wb  vb; wb  0  −ub; −vb  ub  0]
where:
v_b = [u; v; w]   (B.71)
Rotation matrix Rib (η) is given by (B.30).
Taking the time derivative of the velocity in the inertial frame, v_i, we get:
Σ f_b = m (Rbi(η) Ṙib(η) v_b + v̇_b + Rbi(η) ẇ)
⇔ v̇_b = Σ f_b / m − Rbi(η) Ṙib(η) v_b − Rbi(η) ẇ   (B.75)
2 Barton J. Bacon and Irene M. Gregory, General Equations of Motion for a Damaged Asymmetric Aircraft, NASA Langley Research Center, Hampton, VA, 23681
where Rbi(η) Ṙib(η) := Ω(ν) has been seen previously. We finally get the
following equation of motion, taking into account the wind component w:
v̇_b = Σ f_b / m − Ω(ν) v_b − Rbi(η) ẇ   (B.76)
Furthermore the wind is assumed not to be constant but dependent on time
t as well as on the quadcopter location ξ := [x, y, z]^T. So we have: w := w(t, ξ).
Taking into account the chain rule we have:
ẇ(t, ξ) = ∂w(t, ξ)/∂t + (∂w(t, ξ)/∂ξ) (∂ξ/∂t)   (B.77)
Taking into account that the time derivative of the location of the quadcopter
is its velocity expressed in the inertial frame, we have:
∂ξ/∂t = v_i = Rib(η) v_b + w   (B.78)
v̇_b = Σ f_b / m − (Ω(ν) + Ω(w)) v_b − Rbi(η) (∂w(t, ξ)/∂t + (∂w(t, ξ)/∂ξ) w)   (B.79)
where:
Ω(w) = Rbi(η) (∂w(t, ξ)/∂ξ) Rib(η)   (B.80)
I ν̇ + Ω(ν) I ν = τ_b
⇔ ν̇ = I^−1 (τ_b − Ω(ν) I ν)   (B.81)
We have seen that angular velocities in the inertial frame are expressed in
the body frame through the transformation matrix W(η)−1 :
η̇ = W(η)−1 ν (B.82)
Taking the time derivative of (B.82) leads to the expression of η̈:
η̈ = (d(W(η)^−1)/dt) ν + W(η)^−1 ν̇   (B.83)
η = 0 ⇒ W(η)−1 ≈ I ⇒ η̈ ≈ ν̇ ⇒ η̇ ≈ ν (B.85)
Appendix C

Singular perturbations and hierarchical control

Where:
As = A11 − A12 L
Af = A22 + L A12   (C.3)
and:
Bf = L B1 + B2
Cs = C1 − C2 L   (C.4)
Conversely:
[xs; xf] = [I −M; 0 I] [I 0; L I] [x1; x2] = [I − ML  −M; L  I] [x1; x2]   (C.9)
Assuming that polynomials χAs(s) and χAf(s) are coprime (no common
root), the (n − ns) × ns matrix L and the ns × (n − ns) matrix M can be obtained as
follows1:
L = −T S^−1
M = U (V + L U)^−1   (C.18)
Matrices S, T, U and V belong to the nullspace (or kernel) of χAs(A)
and χAf(A) respectively (notice that in the characteristic polynomial the scalar
variable s has been replaced by the n × n state matrix A), each nullspace being
partitioned appropriately:
[S; T] = ker(χAs(A))
[U; V] = ker(χAf(A))   (C.19)
Conversely:
[x1; x2] = [I  M; −L  I − LM] [xs; xf]   (C.26)
2 Jaw-Kuen Shiau & Der-Ming Ma, An autopilot design for the longitudinal dynamics of a low-speed experimental UAV using two-time-scale cascade decomposition, Transactions - Canadian Society for Mechanical Engineering, 2009, 33(3):501-521, DOI: 10.1139/tcsme-2009-0034
the real part of the eigenvalues λi of A are sorted in a descending manner, the
value of ns which delimits the border between the slow and the fast modes can
be obtained by finding the minimum of the ratio |λ_ns| / |λ_(ns+1)|.
The slow subsystem is obtained by setting ẋf = 0 in (C.14). Physically, it
means that the fast components of the state vector have reached their equilibrium
well before the slow components of the state vector. We get from (C.14):
ẋs = As xs + Bs u
ẋf := 0 = Af xf + Bf u   (C.32)
y = Cs xs + Cf xf
The so-called fast outputs are the outputs for which the Bode magnitude
plot of Ff (s) and F(s) match for high frequencies. In the time domain, the
impulse response of Ff (s) and F(s) match on the fast scale time.
Finally the following property holds:
The eigenvalues of A are λf , λ̄f = −1.919 ± 2.176j , which are the fast
eigenvalues, and λs , λ̄s = −0.007 ± 0.041j , which are the slow eigenvalues.
Figure C.1 shows the Bode magnitude plot and the impulse response of the
fast outputs, namely α and q: it can be seen that the Bode magnitude plots of
Ff(s) and F(s) match for high frequencies. In the time domain, the impulse
responses of Ff(s) and F(s) match on the fast time scale (here 5 seconds).
On the other hand, Figure C.2 shows the Bode magnitude plot and the
impulse response of the slow outputs, namely Vp and θ: contrary to the fast
outputs, the Bode magnitude plots of Ff(s) and F(s) do not match for high
frequencies. The mismatch is also clear between the impulse responses of
Ff(s) and F(s) on the fast time scale (here 5 seconds).
Furthermore it can be noticed that the Bode magnitude and phase plots of
Fs (s) and F(s) match pretty well for low frequencies, both for fast and for slow
outputs.
Figure C.1: Bode magnitude plot and impulse response of fast outputs
Figure C.2: Bode magnitude plot and impulse response of slow outputs
Thus the state matrix Acl of the closed loop system reads as follows:
Acl = [As − Bs Ks  −Bs Kf; −Bf Ks  Af − Bf Kf] := [Ã11 Ã12; Ã21 Ã22]   (C.44)
Then, assuming that the feedbacks Kf and Ks maintain the
distinction between the slow and the fast modes (in other words the real parts
of the closed loop eigenvalues shall be chosen with the same order of magnitude
as the open loop eigenvalues), the eigenvalues of Acl can be approximated as
follows3:
λ(Acl) = λ([Ã11 Ã12; Ã21 Ã22]) ≈ λ(Ã22) ∪ λ(Ã0)   (C.45)
       ≈ λ(Af − Bf Kf) ∪ λ(Ã0)
Ã0 = Ã11 − Ã12 Ã22^−1 Ã21
   = As − Bs Ks − Bs Kf (Af − Bf Kf)^−1 Bf Ks   (C.46)
   = As − Bs (I + Kf (Af − Bf Kf)^−1 Bf) Ks
The preceding relationship indicates that the closed loop eigenvalues are
obtained by the union of two sets:
− the set of the fast closed loop eigenvalues λ(Af − Bf Kf). This
corresponds to putting the feedback u = −Kf xf on the following fast
subsystem:
ẋf = Af xf + Bf u   (C.47)
− the set of the slow closed loop eigenvalues λ(Ã0). This corresponds to
putting the feedback u = −Ks xs on the following slow subsystem:
ẋs = As xs + B̃s u   (C.48)
where B̃s = Bs (I + Kf (Af − Bf Kf)^−1 Bf)
3 Hassan K. Khalil, On the robustness of output feedback control methods to modeling errors, IEEE Transactions on Automatic Control, Vol. AC-26, April 1981, pp. 524-526
Coming back to the physical states x1 and x2 of the initial system (C.1), we
finally get:
u = −K1 x1 − K2 x2 + r := −K x + r   (C.49)
Where:
K := [K1 K2] = [Ks Kf] P^−1
             = [Ks Kf] [I − ML  −M; L  I]   (C.50)
The core of this result is the Schur complement, which is stated hereafter:
det([X11 X12; X21 X22]) = det(X22) det(X11 − X12 X22^−1 X21)   (C.51)
The Schur complement applied to the closed loop state matrix Acl reads:
det(sI − Acl) = det([sI − Ã11  −Ã12; −Ã21  sI − Ã22])
             = det(sI − Ã22) det(sI − Ã11 − Ã12 (sI − Ã22)^−1 Ã21)   (C.52)
For values of s of the order of magnitude of the slow eigenvalues (formally, in
the limit ε → 0 of the time-scale separation parameter), (sI − Ã22)^−1 ≈ −Ã22^−1,
so that the preceding determinant can be approximated as follows:
det([sI − Ã11  −Ã12; −Ã21  sI − Ã22])
≈ det(sI − Ã22) det(sI − Ã11 + Ã12 Ã22^−1 Ã21)   (C.53)
= det(sI − Ã22) det(sI − Ã0)
Example C.2. We extend example C.1 in order to achieve the following closed
loop eigenvalues:
λclf, λ̄clf = −1 ± j
λcls, λ̄cls = −0.01 ± 0.01j   (C.54)
It is worth noticing that this choice of the closed loop eigenvalues maintains
the distinction between the slow and the fast modes (in other words the real
parts of the closed loop eigenvalues have been chosen with the same order of
magnitude as the open loop eigenvalues).
The block diagonal form of (C.39) is obtained with the change of basis matrix
P set as follows:
P = [1.  0.  0.0002809  −0.3419851;
     0.  1.  0.0000081  0.000029;   (C.55)
     9.8571029  −122755.35  0.0001549  −0.1884762]
We get:
Af = [−96.657765  119379.91; −0.0752234  92.819752]
Bf = [−35266.671; −51.728931]
As = [−96.700593  1204088.4; −0.0077649  96.686006]   (C.56)
Bs = [−8.4994064; −0.0006476]
Then state feedback gains Kf and Ks are set as follows:
− state feedback gain Kf is set such that the eigenvalues of Af − Bf Kf are
equal to {λclf, λ̄clf}. This leads to the following value of Kf:
Kf = [−5.91356221 × 10^−5  7.58482289 × 10^−2]   (C.57)
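The fast design can be checked numerically from the data of (C.56) and (C.57); given the rounded coefficients printed in the text, the target eigenvalues are recovered only approximately:

```python
import numpy as np

# Fast subsystem data (C.56) and gain Kf (C.57)
Af = np.array([[-96.657765, 119379.91], [-0.0752234, 92.819752]])
Bf = np.array([[-35266.671], [-51.728931]])
Kf = np.array([[-5.91356221e-5, 7.58482289e-2]])

ev = np.linalg.eigvals(Af - Bf @ Kf)
print(ev)   # close to -1 +/- 1j
```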
(C.60)
We check that the eigenvalues of A − BK of the whole system are close to
the expected eigenvalues {λclf, λ̄clf, λcls, λ̄cls}. Indeed:
λ(A − BK) = {−0.99748 ± 0.99930j, −0.01045 ± 0.00955j}   (C.61)
Appendix D

Introduction to fractional systems

D.1 Pre-filtering
A prefilter Cpf(s) is a controller which is situated outside the feedback loop as
shown in Figure D.1.
What is the purpose of the prefilter? Once the controller C(s) is designed,
the poles of the feedback loop transfer function Y(s)/Rpf(s) are set. Nevertheless the
numerator of this transfer function is not mastered and the zeros of Y(s)/Rpf(s) may
cause undesirable overshoots in the transient response of the closed loop system.
The purpose of the prefilter Cpf(s) is to reduce or eliminate such overshoots in
the closed loop system.
Let Ncl(s) be the numerator of transfer function Y(s)/Rpf(s) and Dcl(s) its
denominator:
Y(s)/Rpf(s) = Ncl(s)/Dcl(s)   (D.1)
The prefilter Cpf(s) is then designed such that its poles cancel the zeros of
the closed loop system, that is, the roots of Ncl(s). Furthermore the numerator
of the prefilter is usually set to be a constant Kpf such that the transfer function
of the full system reads:
Y(s)/R(s) = Kpf/Dcl(s)   (D.2)
Cpf(s) = Kpf/Ncl(s)   (D.3)
Usually constant Kpf is set such that the static gain of Y(s)/R(s) is unitary,
meaning that the position error is zero:
Y(s)/R(s)|s=0 = 1 ⇒ Kpf = Dcl(0)   (D.4)
− Design the controller C(s) such that the transfer function of the feedback loop
without prefiltering (Cpf(s) = 1) has the desired denominator Dcl(s).
In other words controller C(s) is used to set the poles of the controlled
system.
− Design the prefilter Cpf(s) such that the transfer function of the full system
does not have any zero:
Y(s)/R(s) = Kpf/Dcl(s)   (D.6)
In other words prefilter Cpf(s) is used to shape the numerator of the
transfer function of the controlled system.
Obviously the plant is not stable; indeed there is one pole at +2. In order to
stabilize the plant we decide to use the following PD controller (we do not use
an integral action because the plant F(s) already has an integral term):
C(s) = Kp + Kd s   (D.8)
Y(s)/Rpf(s) = (Kp + Kd s)/(s² + s(Kd − 2) + Kp) = (2 + 7s)/(s² + 5s + 2)   (D.12)
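The prefilter recipe (D.3)-(D.4) applied to (D.12) gives Kpf = Dcl(0) = 2 and thus Y(s)/R(s) = 2/(s² + 5s + 2); a sketch with SciPy checking the unit static gain:

```python
import numpy as np
from scipy.signal import TransferFunction, step

# Feedback loop (D.12): Y/Rpf = (7s + 2)/(s^2 + 5s + 2), i.e. Kp = 2, Kd = 7
Ncl = [7., 2.]
Dcl = [1., 5., 2.]

# Prefilter (D.3)-(D.4): Cpf(s) = Kpf/Ncl(s) with Kpf = Dcl(0)
Kpf = np.polyval(Dcl, 0.)                  # = 2
# Cascading Cpf with the loop cancels the zero: Y/R = Kpf/Dcl(s)
prefiltered = TransferFunction([Kpf], Dcl)

t, y = step(prefiltered, T=np.linspace(0., 20., 1000))
print(y[-1])   # unit static gain: final value close to 1
```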
Taking now into account prefilter Cpf(s), transfer function Y(s)/R(s) reads:
Suppose that the feedback loop transfer function P(s) = Y(s)/Rpf(s) has a positive
real zero of order one at s = z > 0, that is P(z) = 0 and dP(s)/ds|s=z ≠ 0. Such
a transfer function can be decomposed as follows, where Pmp(s) is a minimum
phase transfer function:
P(s) = (1 − s/z) Pmp(s)
That is:
1 − s/z = (1 − (s/z)^(1/M)) Π_(k=0)^(log2(M/2)) (1 + (s/z)^(2^k/M))   (D.17)
where log2(M/2) is the base 2 logarithm of M/2 and M is any power of 2.
The positive real zero z can then be partially compensated through the
term DM(s) = Π_(k=0)^(log2(M/2)) (1 + (s/z)^(2^k/M)), which will appear in the denominator
of the transfer function of the prefilter. Indeed it can be shown that
PM(s) = P(s)/DM(s) = (1 − (s/z)^(1/M)) Pmp(s).
The next step consists in approximating, through a state space fractional
system, the following transfer function Pf(s) which will appear in the prefilter:
Pf(s) = 1 / Π_(k=0)^(log2(M/2)) (1 + (s/z)^(2^k/M))   (D.18)
We recall that as far as the approximated transfer function Gαi(s) of s^αi has
distinct real poles λi, its partial fraction expansion reads:
s^αi ≈ Gαi(s) = N(s)/D(s) + d
             = N(s)/((s − λ1)(s − λ2) · · · (s − λn)) + d   (D.21)
             = r1/(s − λ1) + r2/(s − λ2) + · · · + rn/(s − λn) + d
Number ri is called the residue of transfer function Gαi(s) in λi. When the
multiplicity of the pole (or eigenvalue) λi is 1 we have seen that residue ri can
be obtained thanks to the following formula:
ri = bi ci   (D.23)
3 Mansouri Rachid, Bettayeb Maamar, Djennoune Said, Comparison between two approximation methods of state space fractional systems, Signal Processing 91 (2011) 461-469
where:
Aαi = diag(λ1, λ2, · · · , λn)
Bαi = [b1; b2; · · · ; bn]   (D.25)
Cαi = [c1 c2 · · · cn]
Dαi = d
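A diagonal realization of the form (D.25) can be checked against its partial fraction expansion (D.21); the poles, residues and direct term below are hypothetical values for the sketch (choosing b_i = r_i, c_i = 1 so that b_i c_i = r_i as in (D.23)):

```python
import numpy as np
from scipy.signal import StateSpace, freqresp

# Hypothetical poles/residues of a rational approximation G(s) = sum r_i/(s - l_i) + d
lam = np.array([-0.1, -1., -10.])
res = np.array([0.5, 1.2, 4.0])
d = 0.3

# Diagonal realization (D.25): A = diag(l_i), B = [r_i], C = [1 ... 1], D = d
sys = StateSpace(np.diag(lam), res.reshape(-1, 1), np.ones((1, 3)), [[d]])

# Compare the state-space frequency response with the partial fraction expansion
w = np.logspace(-2, 2, 50)
_, H = freqresp(sys, w)
H_ref = np.array([np.sum(res / (1j * wk - lam)) + d for wk in w])
print(np.allclose(H, H_ref))   # True
```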
Similarly the Laplace transform of the fractional integration operator I^αi
is s^−αi. The approximation of the fractional integration operator s^−αi can be
obtained by exploiting the following equality4:
s^−αi = (1/s) s^(1−αi)   (D.26)
Because 0 ≤ 1 − αi ≤ 1 as soon as αi ∈ (0, 1), the fractional operator
s^(1−αi) can be approximated by a transfer function similar to (D.19). Then the
finite dimension rational model is multiplied by 1/s, which leads to a strictly
proper approximation of the fractional order integration operator s^−αi:
s^−αi = (1/s) s^(1−αi) ≈ (1/s) G_(1−αi)(s)   (D.27)
If all fractional orders are multiples of the same real number α ∈ (0, 1)
(commensurate fractional order systems), operator D^α x(t) simplifies as follows:
D^α x(t) = [D^α x1(t) · · · D^α xn(t)]^T   (D.34)
G(s) = C (I/s) (I − A (I/s))^−1 B + D   (D.43)
Denoting by I^αi the fractional integration operator, state space
representation (D.42) can be extended to the fractional case as follows3:
where:
I^α w(t) = [I^α1 w1(t) · · · I^αn wn(t)]^T   (D.45)
Block-diagonal matrices A−α ∈ R(2N +2)·n×(2N +2)·n , B−α ∈ R(2N +2)·n×n and
C−α ∈ Rn×(2N +2)·n are obtained from (D.28) as follows:
A−α = diag A−α1 · · · A−αn
B−α = diag B−α1 · · · B−αn (D.47)
C−α = diag C−α1 · · · C−αn
Using the expression of w(t) provided in the first equation of (D.46) within
the expression of ż(t) in the third equation of (D.46), and using the
approximation I^α w(t) ≈ C−α z(t) provided in the fourth equation, yields:
Example D.3. Coming back to Figure D.1, let's consider the following non-
minimum phase transfer function:
P(s) = Y(s)/Rpf(s) = (10s − 1)/(s² + 1.4s + 1)   (D.50)
It is clear that in order to obtain no static error, that is Y(s)/R(s) = 1/(s² + 1.4s + 1),
we shall choose the prefilter Cpf(s) as follows:
Y(s)/R(s) = 1/(s² + 1.4s + 1) = P(s) Cpf(s)
⇒ Cpf(s) = 1/(10s − 1) = −1/(1 − s/0.1)   (D.51)
Obviously Cpf(s) is not a stable system and this prefilter cannot be
implemented.
Alternatively we can write P(s) as follows, where Pmp(s) is a minimum phase
transfer function:
P(s) = (1 − s/z) Pmp(s) = (1 − s/0.1) × (−1/(s² + 1.4s + 1))   (D.52)
Then, choosing for example M = 4, we write:
1 − s/0.1 = (1 − (s/0.1)^(1/M)) Π_(k=0)^(log2(M/2)) (1 + (s/0.1)^(2^k/M))
          = (1 − (s/0.1)^0.25) (1 + (s/0.1)^0.25) (1 + (s/0.1)^0.5)   (D.53)
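The factorization used in (D.53) can be checked numerically on the positive real axis (with x standing for s/z):

```python
import numpy as np

# Check (D.17) for M = 4: (1 - x) = (1 - x^(1/4)) (1 + x^(1/4)) (1 + x^(1/2))
x = np.linspace(0.01, 5., 200)
lhs = 1. - x
rhs = (1. - x**0.25) * (1. + x**0.25) * (1. + x**0.5)
print(np.allclose(lhs, rhs))   # True
```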
Figure D.4 shows the step response of the plant with the rational prefilter
Cpf(s): it can be seen that the non-minimum phase effect has been reduced but
the response time has been greatly increased compared with the result obtained
with the static prefilter Cpf(s) = −1, which leads to Y(s)/R(s) = −(10s − 1)/(s² + 1.4s + 1) = −P(s).
Figure D.4: Step response with approximated fractional prelter Cpf (s)