
Linear System Theory and Design

Zhu Fanglai

Chapter 6 Controllability and Observability
6.1 Introduction
• This chapter introduces the concepts of controllability and observability.
• Controllability deals with whether or not the state of the state-space equation can be controlled from the input.
• Observability deals with whether or not the initial state can be observed from the output.
• These concepts can be illustrated using the network shown in Fig. 6.1.

Fig. 6.1 Network


• From the network we know that:
  - The input has no effect on x2, i.e., it cannot control x2, because of the open circuit across y.
  - The current through the 2 Ω resistor always equals the current source u; therefore the response excited by the initial state x1 will not appear in y, that is, the initial state x1 cannot be observed from the output y.
6.2 Controllability
Consider the n-dimensional p-input state
equation
$\dot{x} = Ax + Bu$    (6.1)
where A and B are n × n and n × p real constant
matrices, respectively.

Definition 6.1: The state equation (6.1), or the pair (A, B), is said to be controllable if for any initial state x(0) = x0 and any final state x1 there exists an input that transfers the state from x0 to x1 in finite time. Otherwise (6.1) is said to be uncontrollable.
• Example 6.1
Consider the network shown in Fig. 6.2(a).

Fig. 6.2 Uncontrollable network


• If x(0) = 0, then x(t) = 0 for all t ≥ 0 no matter what input u is applied, because of the symmetry of the network.
• So the input u has no effect on the voltage across the capacitor.
• That is, the system is uncontrollable.
Next we consider the network shown in Fig. 6.2(b).

Fig. 6.2 Uncontrollable network

• If x1(0) = x2(0) = 0, then no matter what input is applied, x1(t) always equals x2(t) for all t ≥ 0.
• This means that there exists no control u that transfers the zero state to a state with x1 ≠ x2.
• Theorem 6.1
The following statements are equivalent.
1. The n-dimensional pair (A, B) is controllable.
2. The n×n matrix
$W_c(t) = \int_0^t e^{A\tau} B B^T e^{A^T\tau}\, d\tau = \int_0^t e^{A(t-\tau)} B B^T e^{A^T(t-\tau)}\, d\tau$    (6.2)
is nonsingular for any t > 0.
3. The n×np controllability matrix
$C = [\,B \ \ AB \ \ A^2B \ \ \cdots \ \ A^{n-1}B\,]$    (6.3)
has rank n (full row rank).
4. The n×(n+p) matrix $[\,A - \lambda I \ \ B\,]$ has full row rank at every eigenvalue λ of A.
5. If, in addition, all eigenvalues of A have negative real parts, then the unique solution of
$A W_c + W_c A^T = -B B^T$    (6.4)
is positive definite. The solution is called the controllability Gramian and is given by
$W_c = \int_0^\infty e^{A\tau} B B^T e^{A^T\tau}\, d\tau$    (6.5)
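Before turning to the proof, statements 3 and 5 are convenient to check numerically. The following is a minimal sketch (assuming NumPy and SciPy are available); the pair (A, B) is an illustrative example rather than one from the text, and the helper name ctrb is our own.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1)B] as in (6.3)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Illustrative stable pair (not from the text).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])

C_mat = ctrb(A, B)
print(np.linalg.matrix_rank(C_mat))          # 2 -> controllable (statement 3)

# Statement 5: A Wc + Wc A^T = -B B^T has a positive definite solution.
Wc = solve_continuous_lyapunov(A, -B @ B.T)
print(np.linalg.eigvalsh(Wc))                # all eigenvalues positive
```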

Proof: 2 ⇒ 1: The response of (6.1) at time t1 is
$x(t_1) = e^{At_1} x(0) + \int_0^{t_1} e^{A(t_1-\tau)} B u(\tau)\, d\tau$    (6.6)
We verify that for any x(0) = x0 and any x(t1) = x1, the input
$u(t) = -B^T e^{A^T(t_1-t)}\, W_c^{-1}(t_1)\,[\,e^{At_1} x_0 - x_1\,]$    (6.7)
transfers x0 to x1 at time t1. In fact, substituting (6.7) into (6.6) yields
$x(t_1) = e^{At_1} x_0 - \int_0^{t_1} e^{A(t_1-\tau)} B B^T e^{A^T(t_1-\tau)}\, W_c^{-1}(t_1)[\,e^{At_1}x_0 - x_1\,]\, d\tau$
$\qquad\; = e^{At_1} x_0 - \int_0^{t_1} e^{A(t_1-\tau)} B B^T e^{A^T(t_1-\tau)}\, d\tau \cdot W_c^{-1}(t_1)[\,e^{At_1}x_0 - x_1\,]$
$\qquad\; = e^{At_1} x_0 - W_c(t_1)\, W_c^{-1}(t_1)[\,e^{At_1}x_0 - x_1\,] = x_1$
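The open-loop input (6.7) can also be exercised numerically: approximate Wc(t1) by quadrature, build u(t), and integrate (6.1). This is a rough sketch of that check (assuming NumPy and SciPy); the controllable pair, the boundary states x0 and x1, and the time t1 are all made-up illustrative values, not taken from the text.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -1.0]])   # illustrative controllable pair
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
x1 = np.array([-1.0, 2.0])
t1 = 2.0

# Wc(t1) by midpoint quadrature of (6.2).
taus = np.linspace(0.0, t1, 2001)
Wc = np.zeros((2, 2))
for lo, hi in zip(taus[:-1], taus[1:]):
    tau = 0.5 * (lo + hi)
    E = expm(A * (t1 - tau)) @ B
    Wc += (hi - lo) * (E @ E.T)

d = np.linalg.solve(Wc, expm(A * t1) @ x0 - x1)

def u(t):
    # Input (6.7): u(t) = -B^T e^{A^T (t1 - t)} Wc^{-1}(t1) [e^{A t1} x0 - x1]
    return -B.T @ expm(A.T * (t1 - t)) @ d

sol = solve_ivp(lambda t, x: A @ x + (B @ u(t)).ravel(), (0.0, t1), x0,
                rtol=1e-9, atol=1e-12)
print(sol.y[:, -1], x1)   # x(t1) should agree closely with the target x1
```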

1 ⇒ 2: We prove this by contradiction. Suppose Wc(t1) is singular for some t1 > 0. Then there exists an n×1 nonzero vector v such that
$0 = v^T W_c(t_1)\, v = \int_0^{t_1} v^T e^{A\tau} B B^T e^{A^T\tau} v\, d\tau = \int_0^{t_1} \big\| B^T e^{A^T\tau} v \big\|^2\, d\tau$
which implies that
$B^T e^{A^T\tau} v \equiv 0 \quad\text{or}\quad v^T e^{A\tau} B \equiv 0 \quad \text{for all } \tau \in [0, t_1]$    (6.8)
Because (6.1) is controllable, there must exist an input that transfers the initial state
$x(0) = e^{-At_1} v$
to 0 at time t1. For this input, (6.6) becomes
$0 = x(t_1) = v + \int_0^{t_1} e^{A(t_1-\tau)} B u(\tau)\, d\tau$
Premultiplying by $v^T$ and using (6.8),
$0 = v^T v + \int_0^{t_1} v^T e^{A(t_1-\tau)} B u(\tau)\, d\tau = \|v\|^2 + 0$
so v = 0. But v ≠ 0 by assumption. This contradiction shows that Wc(t1) is nonsingular.
2 ⇒ 3: Suppose that the controllability matrix C does not have full row rank. Then there exists an n-dimensional nonzero column vector v such that v^T C = 0, i.e.,
$v^T A^k B = 0, \qquad k = 0, 1, \ldots, n-1$
By the Cayley–Hamilton theorem, every $A^k$ with k ≥ n can be expressed as a linear combination of $I, A, A^2, \ldots, A^{n-1}$. For this reason, the above equations extend to
$v^T A^k B = 0, \qquad k = 0, 1, 2, \ldots$
Multiplying the k-th equation by $t^k/k!$ and summing over k gives, for every t,
$0 = v^T \sum_{k=0}^{\infty} \frac{t^k A^k}{k!}\, B = v^T e^{At} B, \qquad t \in [0, t_1]$
Then we have
$v^T W_c(t_1)\, v = \int_0^{t_1} v^T e^{A\tau} B B^T e^{A^T\tau} v\, d\tau = 0$
so Wc(t1) is singular. This contradicts the assumption that Wc(t) is nonsingular.
3 ⇒ 2: Suppose that C has full row rank but Wc(t1) is singular for some t1 > 0. Then there exists a nonzero vector v such that (6.8) holds. Differentiating (6.8) k times (k = 0, 1, …, n−1) with respect to τ, we have
$v^T A^k e^{A\tau} B = 0$
Setting τ = 0 in these equations leads to
$v^T A^k B = 0, \qquad k = 0, 1, \ldots, n-1$
which is equivalent to
$v^T [\,B \ \ AB \ \ A^2B \ \ \cdots \ \ A^{n-1}B\,] = v^T C = 0$
This contradicts the assumption that C has full row rank. So we have established 2 ⇔ 3.
4 ⇒ 3: Suppose that Rank(C) < n; we want to show that Rank[A − λ1I  B] < n for some eigenvalue λ1 of A. By Theorem 6.6, which will be given later, there is an equivalence transformation $\bar{x} = Px$ such that the equivalent pair $\{\bar{A}, \bar{B}\}$ has the form
$\bar{A} = PAP^{-1} = \begin{bmatrix} \bar{A}_C & \bar{A}_{12} \\ 0 & \bar{A}_{\bar C} \end{bmatrix}, \qquad \bar{B} = \begin{bmatrix} \bar{B}_C \\ 0 \end{bmatrix}$
where $\bar{A}_C$ is an m×m matrix with m < n. Let λ1 be an eigenvalue of $\bar{A}_{\bar C}$ and let q1 be a corresponding 1×(n−m) nonzero left eigenvector, i.e., $q_1 \bar{A}_{\bar C} = \lambda_1 q_1$, or $q_1(\bar{A}_{\bar C} - \lambda_1 I) = 0$. For the 1×n nonzero vector q = [0  q1], it is not difficult to verify that
$q\,[\,\bar{A} - \lambda_1 I \ \ \bar{B}\,] = [\,0 \ \ q_1\,]\begin{bmatrix} \bar{A}_C - \lambda_1 I & \bar{A}_{12} & \bar{B}_C \\ 0 & \bar{A}_{\bar C} - \lambda_1 I & 0 \end{bmatrix} = 0$
So $\operatorname{Rank}[\bar{A} - \lambda_1 I \ \ \bar{B}] < n$, and since $[\bar{A} - \lambda_1 I \ \ \bar{B}] = P\,[A - \lambda_1 I \ \ B]\,\operatorname{diag}(P^{-1}, I)$, it follows that $\operatorname{Rank}[A - \lambda_1 I \ \ B] < n$.
Example 6.2
Consider the inverted pendulum studied in Example 2.8. Its state equation was developed in (2.27). Suppose that, for a given pendulum, the equation becomes
$\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 5 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \\ 0 \\ -2 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix} x$    (6.11)
We compute
$C = [\,B \ \ AB \ \ A^2B \ \ A^3B\,] = \begin{bmatrix} 0 & 1 & 0 & 2 \\ 1 & 0 & 2 & 0 \\ 0 & -2 & 0 & -10 \\ -2 & 0 & -10 & 0 \end{bmatrix}$
Clearly Rank(C) = 4, so the system is controllable.
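A quick numerical cross-check of this computation (a sketch assuming NumPy is available):

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, -1, 0],
              [0, 0, 0, 1],
              [0, 0, 5, 0]], dtype=float)
b = np.array([[0], [1], [0], [-2]], dtype=float)

# Controllability matrix [b, Ab, A^2 b, A^3 b] of (6.11).
C = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(4)])
print(C)                          # matches the matrix above
print(np.linalg.matrix_rank(C))   # 4 -> controllable
```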
Example 6.3
Consider the linear system
$\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 5 & 0 \end{bmatrix} x + \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 1 \\ -2 & 0 \end{bmatrix} u$
For this system, we have
$[\,\lambda I - A \ \ B\,] = \begin{bmatrix} \lambda & -1 & 0 & 0 & 0 & 1 \\ 0 & \lambda & 1 & 0 & 1 & 0 \\ 0 & 0 & \lambda & -1 & 0 & 1 \\ 0 & 0 & -5 & \lambda & -2 & 0 \end{bmatrix}$
The eigenvalues of A are $\lambda_1 = \lambda_2 = 0$, $\lambda_3 = \sqrt{5}$, $\lambda_4 = -\sqrt{5}$.
When $\lambda = \lambda_1 = \lambda_2 = 0$,
$\operatorname{Rank}[\,\lambda I - A \ \ B\,] = \operatorname{Rank}\begin{bmatrix} 0 & -1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 & 0 & 1 \\ 0 & 0 & -5 & 0 & -2 & 0 \end{bmatrix} = 4$
When $\lambda = \lambda_3 = \sqrt{5}$,
$\operatorname{Rank}[\,\lambda I - A \ \ B\,] = \operatorname{Rank}\begin{bmatrix} \sqrt{5} & -1 & 0 & 0 & 0 & 1 \\ 0 & \sqrt{5} & 1 & 0 & 1 & 0 \\ 0 & 0 & \sqrt{5} & -1 & 0 & 1 \\ 0 & 0 & -5 & \sqrt{5} & -2 & 0 \end{bmatrix} = 4$
When $\lambda = \lambda_4 = -\sqrt{5}$,
$\operatorname{Rank}[\,\lambda I - A \ \ B\,] = \operatorname{Rank}\begin{bmatrix} -\sqrt{5} & -1 & 0 & 0 & 0 & 1 \\ 0 & -\sqrt{5} & 1 & 0 & 1 & 0 \\ 0 & 0 & -\sqrt{5} & -1 & 0 & 1 \\ 0 & 0 & -5 & -\sqrt{5} & -2 & 0 \end{bmatrix} = 4$
So by statement 4 of Theorem 6.1, we know that the system is controllable.
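The same conclusion follows from a short numerical PBH check; the sketch below (assuming NumPy) simply evaluates statement 4 at the computed eigenvalues of this example's A and B.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, -1, 0],
              [0, 0, 0, 1],
              [0, 0, 5, 0]], dtype=float)
B = np.array([[0, 1],
              [1, 0],
              [0, 1],
              [-2, 0]], dtype=float)

for lam in np.linalg.eigvals(A):
    M = np.hstack([lam * np.eye(4) - A, B])          # PBH matrix [lambda I - A, B]
    print("lambda =", np.round(lam, 4), " rank =", np.linalg.matrix_rank(M))
# every rank is 4, so (A, B) is controllable by statement 4 of Theorem 6.1
```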
6.2.1 Controllability Indices
Let A and B be n×n and n×p constant matrices, and assume that Rank(B) = p (that is, B has full column rank). We also assume that (A, B) is controllable, that is,
$\operatorname{Rank} C = \operatorname{Rank}[\,B \ \ AB \ \ A^2B \ \ \cdots \ \ A^{n-1}B\,] = n$
Let $b_i$ be the i-th column of B. Then C can be rewritten as
$C = [\,b_1 \ \cdots \ b_p \ | \ Ab_1 \ \cdots \ Ab_p \ | \ \cdots \ | \ A^{n-1}b_1 \ \cdots \ A^{n-1}b_p\,]$    (6.13)
Let us search the linearly independent columns of C from left to right, and let $\mu_m$ be the number of linearly independent columns associated with $b_m$ in C. That is, the columns
$b_m, \ Ab_m, \ \ldots, \ A^{\mu_m - 1} b_m$
are linearly independent in C, while $A^j b_m$ (j ≥ μ_m) depend on the columns to their left. Obviously, we have
$\mu_1 + \mu_2 + \cdots + \mu_p = n$    (6.14)
• Remark: Because of the pattern of C, if $A^i b_m$ depends on the columns to its left, then $A^j b_m$ (j > i) also depends on the columns to its left.
Definition 6.A1: The set $\{\mu_1, \mu_2, \ldots, \mu_p\}$ is called the set of controllability indices, and
$\mu = \max(\mu_1, \mu_2, \ldots, \mu_p)$
is called the controllability index of (A, B).
Definition 6.A2: The controllability index can also be defined as the least integer μ such that
$\rho(C_\mu) = \rho([\,B \ \ AB \ \ \cdots \ \ A^{\mu-1}B\,]) = n$
Example 6.5: Consider the system
$\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 3 & 0 & 0 & 2 \\ 0 & 0 & 0 & 1 \\ 0 & -2 & 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} x$    (6.18)
The controllability matrix is
$[\,B \ \ AB \ \ A^2B \ \ A^3B\,] = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 2 & -1 & 0 \\ 1 & 0 & 0 & 2 & -1 & 0 & 0 & -2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 0 & -4 \\ 0 & 1 & -2 & 0 & 0 & -4 & 2 & 0 \end{bmatrix}$
Searching the linearly independent columns of C from left to right, we obtain
$b_1 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad b_2 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \quad Ab_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ -2 \end{bmatrix}, \quad Ab_2 = \begin{bmatrix} 0 \\ 2 \\ 1 \\ 0 \end{bmatrix}$
So the controllability indices are μ1 = 2 and μ2 = 2, and the controllability index is μ = 2.
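The left-to-right search is easy to mechanize. The sketch below (assuming NumPy; the helper name controllability_indices is ours) reproduces μ1 = μ2 = 2 for the pair in (6.18).

```python
import numpy as np

def controllability_indices(A, B):
    """Scan the columns of [B, AB, ..., A^(n-1)B] from left to right and
    count, for each input column b_m, how many of b_m, Ab_m, ... are kept."""
    n, p = B.shape
    mu = [0] * p
    kept = np.zeros((n, 0))
    block = B.copy()
    for _ in range(n):                       # powers A^0 B, A^1 B, ...
        for m in range(p):                   # columns within one power, left to right
            cand = np.hstack([kept, block[:, [m]]])
            if np.linalg.matrix_rank(cand) > kept.shape[1]:
                kept = cand
                mu[m] += 1
        block = A @ block
    return mu

A = np.array([[0, 1, 0, 0], [3, 0, 0, 2], [0, 0, 0, 1], [0, -2, 0, 0]], dtype=float)
B = np.array([[0, 0], [1, 0], [0, 0], [0, 1]], dtype=float)
print(controllability_indices(A, B))   # [2, 2] -> controllability index mu = 2
```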
Theorem 6.A1: Let $\bar{n}$ be the degree of the minimal polynomial of A. Then
$n/p \ \leq\ \mu \ \leq\ \min(\bar{n},\ n - p + 1)$    (6.16)
Proof: $C_\mu$ is a matrix of dimensions n×μp with $\rho(C_\mu) = n$, so n ≤ μp, that is, μ ≥ n/p.
On the other hand, suppose that $\mu = \max(\mu_1, \mu_2, \ldots, \mu_p) = \mu_1$. Then
$\mu = \mu_1 = n - (\mu_2 + \cdots + \mu_p) \ \leq\ n - \underbrace{(1 + \cdots + 1)}_{p-1} = n - (p-1) = n - p + 1$
Finally, suppose that the minimal polynomial of A is
$\phi(\lambda) = \lambda^{\bar n} + \alpha_1 \lambda^{\bar n - 1} + \cdots + \alpha_{\bar n - 1}\lambda + \alpha_{\bar n}$
Since φ(A) = 0, we have $A^{\bar n} = -\alpha_1 A^{\bar n - 1} - \cdots - \alpha_{\bar n - 1} A - \alpha_{\bar n} I$, which implies that $A^{\bar n} B$ can be written as a linear combination of $\{B, AB, A^2B, \ldots, A^{\bar n - 1}B\}$. Thus μ ≤ n̄.

Corollary 6.1: The n-dimensional pair (A, B), with Rank(B) = p, is controllable if and only if the matrix
$C_{n-p+1} := [\,B \ \ AB \ \ \cdots \ \ A^{n-p}B\,]$
has rank n, or equivalently, if and only if the n×n matrix $C_{n-p+1} C_{n-p+1}^T$ is nonsingular.
Theorem 6.2: Controllability is invariant under any equivalence transformation.
Theorem 6.3: The set of controllability indices of (A, B) is invariant under any equivalence transformation and any reordering of the columns of B.
6.3 Observability
• The concept of observability is dual to that of controllability.
• Controllability studies the possibility of steering the state from the input.
• Observability studies the possibility of estimating the state from the output.
• These two concepts are defined under the assumption that the state equation or, equivalently, all of A, B, C and D are known.
Consider the n-dimensional p-input q-output state equation
$\dot{x} = Ax + Bu, \qquad y = Cx + Du$    (6.22)
where A, B, C and D are, respectively, n×n, n×p, q×n and q×p constant matrices.
Definition 6.O1 The state equation (6.22) is said to
be observable if for any unknown initial state x(0),
there exists a finite t1>0 such that the knowledge of
the input u and the output y over [0, t1] suffices to
determine uniquely the initial state x(0). Otherwise,
the equation is said to be unobservable.

The response of (6.22) excited by the initial state x(0) and the input u(t) is
$y(t) = Ce^{At}x(0) + C\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t)$    (6.23)
or
$Ce^{At}x(0) = \bar{y}(t)$    (6.24)
where
$\bar{y}(t) := y(t) - C\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau - Du(t)$
is a known function.

Definition 6.A1: The state equation (6.22) is said to be observable if and only if the initial state x(0) can be determined uniquely from its zero-input response over a finite time interval.
• Theorem 6.4
The state equation (6.22) is observable if and only if the n×n matrix
$W_o(t) = \int_0^t e^{A^T\tau} C^T C e^{A\tau}\, d\tau$    (6.25)
is nonsingular for any t > 0.


Proof: We premultiply (6.24) by $e^{A^Tt}C^T$ and then integrate over [0, t1] to yield
$\left(\int_0^{t_1} e^{A^Tt} C^T C e^{At}\, dt\right) x(0) = \int_0^{t_1} e^{A^Tt} C^T \bar{y}(t)\, dt$
If Wo(t1) is nonsingular, then
$x(0) = W_o^{-1}(t_1) \int_0^{t_1} e^{A^Tt} C^T \bar{y}(t)\, dt$    (6.26)
This yields a unique x(0), which means that if Wo(t) is nonsingular for any t > 0, then (6.22) is observable.
By its form, Wo(t) is always positive semidefinite. So if Wo(t1) is singular, there exists an n×1 nonzero constant vector v such that
$v^T W_o(t_1)\, v = \int_0^{t_1} v^T e^{A^Tt} C^T C e^{At} v\, dt = \int_0^{t_1} \big\| Ce^{At} v \big\|^2\, dt = 0$
which implies
$Ce^{At}v \equiv 0$    (6.27)
for all t in [0, t1]. If u ≡ 0, then the two different initial states x1(0) = v and x2(0) = 0 both yield the same zero-input response
$y(t) = Ce^{At}x_i(0) \equiv 0$
By Definition 6.A1, (6.22) is then not observable, and this completes the proof of Theorem 6.4.
• Observability is a property of the pair (A, C) and is independent of B and D.
• If Wo(t) is nonsingular for some t, then it is nonsingular for every t, and the initial state can be computed from (6.26) using any nonzero time interval, as in the sketch below.
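As a sketch of how (6.26) recovers the initial state in practice, the following fragment (assuming NumPy and SciPy; the pair (A, C) and x(0) are made-up illustrative values) builds Wo(t1) and the weighted output integral by simple quadrature and then solves for x(0).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative observable pair
C = np.array([[1.0, 0.0]])
x0_true = np.array([0.7, -1.3])
t1 = 1.5

# Quadrature of Wo(t1) = int_0^t1 e^{A^T t} C^T C e^{A t} dt and of the
# right-hand side int_0^t1 e^{A^T t} C^T ybar(t) dt, with ybar(t) = C e^{At} x(0).
ts = np.linspace(0.0, t1, 2001)
Wo = np.zeros((2, 2))
rhs = np.zeros(2)
for lo, hi in zip(ts[:-1], ts[1:]):
    t = 0.5 * (lo + hi)
    Phi = expm(A * t)
    ybar = C @ Phi @ x0_true                 # sampled zero-input response
    Wo  += (hi - lo) * (Phi.T @ C.T @ C @ Phi)
    rhs += (hi - lo) * (Phi.T @ C.T @ ybar)

x0_hat = np.linalg.solve(Wo, rhs)            # equation (6.26)
print(x0_hat)                                # recovers [0.7, -1.3]
```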
• Theorem 6.5 (theorem of duality)
The pair (A, B) is controllable if and only if the pair (A^T, B^T) is observable.
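Duality means that any controllability routine doubles as an observability test. A small sketch (NumPy, with illustrative matrices):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])    # illustrative values
C = np.array([[1.0, 0.0]])

# Theorem 6.5 with A, B replaced by A^T, C^T:
# (A^T, C^T) controllable  <=>  (A, C) observable.
ctrb_dual = np.hstack([np.linalg.matrix_power(A.T, k) @ C.T for k in range(2)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(2)])
print(np.linalg.matrix_rank(ctrb_dual), np.linalg.matrix_rank(obsv))   # both 2
```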
• Theorem 6.O1
The following statements are equivalent.
1. The n-dimensional pair (A, C) is observable.
2. The n×n matrix
$W_o(t) = \int_0^t e^{A^T\tau} C^T C e^{A\tau}\, d\tau$    (6.28)
is nonsingular for any t > 0.
3. The nq×n observability matrix
$O = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$    (6.29)
has rank n (full column rank).
4. The (n+q)×n matrix
$\begin{bmatrix} A - \lambda I \\ C \end{bmatrix}$
has full column rank at every eigenvalue λ of A.
5. If, in addition, all eigenvalues of A have negative real parts, then the unique solution of
$A^T W_o + W_o A = -C^T C$    (6.30)
is positive definite. The solution is called the observability Gramian and can be expressed as
$W_o = \int_0^\infty e^{A^T\tau} C^T C e^{A\tau}\, d\tau$    (6.31)
6.4 Canonical Decomposition
• This section discusses the canonical decomposition of system equations.
• This fundamental result will be used to establish the relationship between the state-space description and the transfer-matrix description.
Consider
$\dot{x} = Ax + Bu, \qquad y = Cx + Du$    (6.38)
• Theorem 6.6
Consider the n-dimensional state equation (6.38) with
$\rho(C) = \rho([\,B \ \ AB \ \ \cdots \ \ A^{n-1}B\,]) = n_1 < n$
We define the n×n matrix
$P^{-1} = Q = [\,q_1 \ \cdots \ q_{n_1} \ \cdots \ q_n\,]$
where the first n1 columns are any n1 linearly independent columns of the controllability matrix C, and the remaining columns can be chosen arbitrarily as long as Q (and hence P) is nonsingular. Then the equivalence transformation $\bar{x} = Px$ transforms (6.38) into
$\begin{bmatrix} \dot{\bar{x}}_C \\ \dot{\bar{x}}_{\bar C} \end{bmatrix} = \begin{bmatrix} \bar{A}_C & \bar{A}_{12} \\ 0 & \bar{A}_{\bar C} \end{bmatrix}\begin{bmatrix} \bar{x}_C \\ \bar{x}_{\bar C} \end{bmatrix} + \begin{bmatrix} \bar{B}_C \\ 0 \end{bmatrix} u$    (6.40)
$y = [\,\bar{C}_C \ \ \bar{C}_{\bar C}\,]\begin{bmatrix} \bar{x}_C \\ \bar{x}_{\bar C} \end{bmatrix} + Du$
where $\bar{A}_C$ is n1×n1 and $\bar{A}_{\bar C}$ is (n−n1)×(n−n1). The n1-dimensional subequation of (6.40),
$\dot{\bar{x}}_C = \bar{A}_C \bar{x}_C + \bar{B}_C u, \qquad y = \bar{C}_C \bar{x}_C + Du$    (6.41)
is controllable and has the same transfer matrix as (6.38).
Proof: Let
$P = Q^{-1} = \begin{bmatrix} p_1^T \\ \vdots \\ p_n^T \end{bmatrix}$
Because PQ = I, that is,
$\begin{bmatrix} p_1^T \\ \vdots \\ p_n^T \end{bmatrix}[\,q_1 \ \cdots \ q_{n_1} \ \cdots \ q_n\,] = \big[\,p_i^T q_j\,\big]_{i,j=1,\ldots,n} = I$
we have
$p_i^T q_j = 0 \quad \text{for all } i \neq j$    (6.42)
By the form of C (and the Cayley–Hamilton theorem), for any column a of C the vector Aa lies again in the column space of C, which is spanned by $\{q_1, q_2, \ldots, q_{n_1}\}$. For this reason, $Aq_j$ is a linear combination of $\{q_1, q_2, \ldots, q_{n_1}\}$ for j = 1, 2, …, n1. Combining this with (6.42), we have
$p_i^T A q_j = 0, \qquad i = n_1+1, \ldots, n; \quad j = 1, 2, \ldots, n_1$
Now we can compute
$\bar{A} = PAP^{-1} = \big[\,p_i^T A q_j\,\big]_{i,j=1,\ldots,n} = \begin{bmatrix} \bar{A}_C & \bar{A}_{12} \\ 0 & \bar{A}_{\bar C} \end{bmatrix}$
where the lower-left block is zero precisely because $p_i^T A q_j = 0$ for $i > n_1$ and $j \le n_1$. Since every column of B also lies in the column space of C, it too is a linear combination of $\{q_1, q_2, \ldots, q_{n_1}\}$, so
$\bar{B} = PB = \begin{bmatrix} p_1^T B \\ \vdots \\ p_{n_1}^T B \\ p_{n_1+1}^T B \\ \vdots \\ p_n^T B \end{bmatrix} = \begin{bmatrix} \bar{B}_C \\ 0 \end{bmatrix}$
The matrix $\bar{C}$ has no special form; it is simply
$\bar{C} = CP^{-1} = [\,Cq_1 \ \cdots \ Cq_{n_1} \ | \ Cq_{n_1+1} \ \cdots \ Cq_n\,] = [\,\bar{C}_C \ \ \bar{C}_{\bar C}\,]$
This completes the proof.
The system (6.41) is called the controllable subsystem of (6.38).
Example 6.8: Consider the three-dimensional state equation
$\dot{x} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} x + \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} x$    (6.43)
We use $C_{n-p+1} = C_2$ instead of C to check the controllability of (6.43). Since
$\rho(C_2) = \rho([\,B \ \ AB\,]) = \rho\begin{bmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 \end{bmatrix} = 2 < 3$
the state equation (6.43) is not controllable. Let us choose
$P^{-1} := Q = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \ \Rightarrow\ P = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & -1 \end{bmatrix}$
Let $\bar{x} = Px$. We compute
$\bar{A} = PAP^{-1} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & -1 \end{bmatrix}\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}\begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} = \left[\begin{array}{cc|c} 1 & 0 & 0 \\ 1 & 1 & 0 \\ \hline 0 & 0 & 1 \end{array}\right]$
$\bar{B} = PB = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & -1 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix} = \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \hline 0 & 0 \end{array}\right]$
$\bar{C} = CP^{-1} = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}\begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} = \left[\begin{array}{cc|c} 1 & 2 & 1 \end{array}\right]$
The controllable subsystem is
$\dot{\bar{x}}_c = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\bar{x}_c + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 2 \end{bmatrix}\bar{x}_c$
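The arithmetic of this example can be replayed in a few lines (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1, 1, 0], [0, 1, 0], [0, 1, 1]], dtype=float)
B = np.array([[0, 1], [1, 0], [0, 1]], dtype=float)
C = np.array([[1, 1, 1]], dtype=float)

Q = np.array([[0, 1, 1], [1, 0, 0], [0, 1, 0]], dtype=float)   # columns q1, q2, q3
P = np.linalg.inv(Q)

print(np.linalg.matrix_rank(np.hstack([B, A @ B])))   # 2 < 3: not controllable
print(P @ A @ Q)     # block triangular: [[1,0,0],[1,1,0],[0,0,1]]
print(P @ B)         # [[1,0],[0,1],[0,0]]
print(C @ Q)         # [[1,2,1]]
```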
• Theorem 6.O6
Consider the n-dimensional state equation (6.38) with
$\rho(O) = \rho\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} = n_2 < n$
We form the n×n matrix
$P = \begin{bmatrix} p_1 \\ \vdots \\ p_{n_2} \\ \vdots \\ p_n \end{bmatrix}$
where the first n2 rows are any n2 linearly independent rows of O, and the remaining rows can be chosen arbitrarily as long as P is nonsingular. Then the equivalence transformation $\bar{x} = Px$ transforms (6.38) into
$\begin{bmatrix} \dot{\bar{x}}_o \\ \dot{\bar{x}}_{\bar o} \end{bmatrix} = \begin{bmatrix} \bar{A}_o & 0 \\ \bar{A}_{21} & \bar{A}_{\bar o} \end{bmatrix}\begin{bmatrix} \bar{x}_o \\ \bar{x}_{\bar o} \end{bmatrix} + \begin{bmatrix} \bar{B}_o \\ \bar{B}_{\bar o} \end{bmatrix} u$    (6.44)
$y = [\,\bar{C}_o \ \ 0\,]\begin{bmatrix} \bar{x}_o \\ \bar{x}_{\bar o} \end{bmatrix} + Du$
where $\bar{A}_o$ is n2×n2 and $\bar{A}_{\bar o}$ is (n−n2)×(n−n2). The n2-dimensional subequation of (6.44),
$\dot{\bar{x}}_o = \bar{A}_o \bar{x}_o + \bar{B}_o u, \qquad y = \bar{C}_o \bar{x}_o + Du$
is observable and has the same transfer matrix as (6.38).
• Theorem 6.7
Every state-space equation can be transformed, by an equivalence transformation, into the following canonical form:
$\begin{bmatrix} \dot{\bar{x}}_{co} \\ \dot{\bar{x}}_{c\bar o} \\ \dot{\bar{x}}_{\bar c o} \\ \dot{\bar{x}}_{\bar c \bar o} \end{bmatrix} = \begin{bmatrix} \bar{A}_{co} & 0 & \bar{A}_{13} & 0 \\ \bar{A}_{21} & \bar{A}_{c\bar o} & \bar{A}_{23} & \bar{A}_{24} \\ 0 & 0 & \bar{A}_{\bar c o} & 0 \\ 0 & 0 & \bar{A}_{43} & \bar{A}_{\bar c \bar o} \end{bmatrix}\begin{bmatrix} \bar{x}_{co} \\ \bar{x}_{c\bar o} \\ \bar{x}_{\bar c o} \\ \bar{x}_{\bar c \bar o} \end{bmatrix} + \begin{bmatrix} \bar{B}_{co} \\ \bar{B}_{c\bar o} \\ 0 \\ 0 \end{bmatrix} u$    (6.45)
$y = [\,\bar{C}_{co} \ \ 0 \ \ \bar{C}_{\bar c o} \ \ 0\,]\,\bar{x} + Du$
where $\bar{x}_{co}$ is the part of the state vector that is controllable and observable, $\bar{x}_{c\bar o}$ is controllable but not observable, $\bar{x}_{\bar c o}$ is observable but not controllable, and $\bar{x}_{\bar c \bar o}$ is neither controllable nor observable. Furthermore, the state equation is zero-state equivalent to the controllable and observable state equation
$\dot{\bar{x}}_{co} = \bar{A}_{co}\bar{x}_{co} + \bar{B}_{co} u, \qquad y = \bar{C}_{co}\bar{x}_{co} + Du$    (6.46)
and the transfer matrix is
$\hat{G}(s) = \bar{C}_{co}(sI - \bar{A}_{co})^{-1}\bar{B}_{co} + D$
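Zero-state equivalence can be spot-checked by evaluating $\hat G(s) = C(sI-A)^{-1}B + D$ at a test point for the full equation (6.43) of Example 6.8 and for its controllable subsystem (a sketch assuming NumPy; s = 2 is an arbitrary evaluation point, and the helper name G is ours):

```python
import numpy as np

def G(s, A, B, C, D=None):
    """Transfer matrix C (sI - A)^(-1) B + D evaluated at the point s."""
    n = A.shape[0]
    out = C @ np.linalg.solve(s * np.eye(n) - A, B)
    return out if D is None else out + D

A  = np.array([[1, 1, 0], [0, 1, 0], [0, 1, 1]], dtype=float)   # full equation (6.43)
B  = np.array([[0, 1], [1, 0], [0, 1]], dtype=float)
C  = np.array([[1, 1, 1]], dtype=float)

Ac = np.array([[1, 0], [1, 1]], dtype=float)     # controllable subsystem of (6.43)
Bc = np.eye(2)
Cc = np.array([[1, 2]], dtype=float)

print(G(2.0, A, B, C))      # [[3. 2.]]
print(G(2.0, Ac, Bc, Cc))   # [[3. 2.]]  -> identical transfer matrices at s = 2
```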

6.5 Conditions in Jordan-Form Equations
• Controllability and observability are invariant under any equivalence transformation.
• If a state equation is transformed into Jordan form, the controllability and observability conditions become very simple.
Consider the state equation
$\dot{x} = Jx + Bu, \qquad y = Cx$    (6.47)
where J is in Jordan form. To simplify the discussion, we assume that J has only two distinct eigenvalues λ1 and λ2 and can be written as
J = diag(J1, J2)
where J1 consists of all Jordan blocks associated with λ1, and J2 of all Jordan blocks associated with λ2. Again to simplify the discussion, we assume that J1 has three Jordan blocks and J2 has two, or
J1 = diag(J11, J12, J13),   J2 = diag(J21, J22)
An example of this situation is
$J = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \lambda_1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \lambda_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \lambda_2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \lambda_2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \lambda_2 \end{bmatrix}, \qquad B = \begin{bmatrix} b_1 \\ b_2 \\ b_3\,(b_{l11}) \\ b_4\,(b_{l12}) \\ b_5 \\ b_6\,(b_{l13}) \\ b_7 \\ b_8\,(b_{l21}) \\ b_9\,(b_{l22}) \end{bmatrix}$
Here J11 occupies rows 1–3, J12 row 4, J13 rows 5–6, J21 rows 7–8 and J22 row 9. The row of B corresponding to the last row of Jij is denoted by $b_{lij}$. The column of C corresponding to the first column of Jij is denoted by $c_{fij}$.
• Theorem 6.8
1. The state equation (6.47) is controllable if and only if the three row vectors {b_l11, b_l12, b_l13} are linearly independent and the two row vectors {b_l21, b_l22} are linearly independent.
2. The state equation (6.47) is observable if and only if the three column vectors {c_f11, c_f12, c_f13} are linearly independent and the two column vectors {c_f21, c_f22} are linearly independent.
• Example 6.10: Consider the Jordan-form state equation
$\dot{x} = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & \lambda_1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \lambda_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \lambda_2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & \lambda_2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & \lambda_2 \end{bmatrix} x + \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & 1 \\ 1 & 2 & 3 \\ 0 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} u$    (6.48)
$y = \begin{bmatrix} 1 & 1 & 2 & 0 & 0 & 2 & 1 \\ 1 & 0 & 1 & 2 & 0 & 1 & 1 \\ 1 & 0 & 2 & 3 & 0 & 2 & 0 \end{bmatrix} x$
The eigenvalue λ1 has three Jordan blocks; their last rows correspond to rows 2, 3 and 4 of B, namely [1 0 0], [0 1 0] and [1 1 1], which are linearly independent, and the last row of the single λ2 block corresponds to row 7 of B, [1 1 1], which is nonzero. By Theorem 6.8 the equation is therefore controllable. The first columns of the λ1 blocks correspond to columns 1, 3 and 4 of C, which are linearly independent, but the first column of the λ2 block corresponds to column 5 of C, which is zero; hence the equation is not observable.
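Because λ1 and λ2 are symbolic, a numerical check has to plug in values. With the assumed illustrative choices λ1 = 1 and λ2 = 2 (any two distinct values behave the same), the rank tests of Theorems 6.1 and 6.O1 agree with the conclusion above (sketch assuming NumPy):

```python
import numpy as np

l1, l2 = 1.0, 2.0          # assumed illustrative eigenvalues, any distinct pair works
J = np.diag([l1, l1, l1, l1, l2, l2, l2])
J[0, 1] = 1.0              # superdiagonal 1's of the Jordan blocks in (6.48)
J[4, 5] = 1.0
J[5, 6] = 1.0
B = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1],
              [1, 2, 3], [0, 1, 0], [1, 1, 1]], dtype=float)
C = np.array([[1, 1, 2, 0, 0, 2, 1],
              [1, 0, 1, 2, 0, 1, 1],
              [1, 0, 2, 3, 0, 2, 0]], dtype=float)

ctrb = np.hstack([np.linalg.matrix_power(J, k) @ B for k in range(7)])
obsv = np.vstack([C @ np.linalg.matrix_power(J, k) for k in range(7)])
print(np.linalg.matrix_rank(ctrb))   # 7 -> controllable
print(np.linalg.matrix_rank(obsv))   # 6 < 7 -> not observable
```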
• Corollary 6.8
A single-input Jordan-form state equation is controllable if and only if there is only one Jordan block associated with each distinct eigenvalue and every entry of B corresponding to the last row of each Jordan block is nonzero.
• Corollary 6.O8
A single-output Jordan-form state equation is observable if and only if there is only one Jordan block associated with each distinct eigenvalue and every entry of C corresponding to the first column of each Jordan block is nonzero.
• Example 6.11: Consider the state equation
$\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -2 \end{bmatrix} x + \begin{bmatrix} 10 \\ 9 \\ 0 \\ 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 0 & 0 & 2 \end{bmatrix} x$    (6.52)
Here the eigenvalue 0 has a single Jordan block (the first three states) and the eigenvalue −2 has a single block (the last state). The entry of B corresponding to the last row of the first block is b3 = 0, so by Corollary 6.8 the equation is not controllable. The entries of C corresponding to the first columns of the two blocks are 1 and 2, both nonzero, so by Corollary 6.O8 the equation is observable.
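A direct rank check (a sketch assuming NumPy) agrees with the two corollaries:

```python
import numpy as np

A = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, -2]], dtype=float)
b = np.array([[10], [9], [0], [1]], dtype=float)
c = np.array([[1, 0, 0, 2]], dtype=float)

ctrb = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(4)])
obsv = np.vstack([c @ np.linalg.matrix_power(A, k) for k in range(4)])
print(np.linalg.matrix_rank(ctrb))   # 3 < 4 -> not controllable (since b3 = 0)
print(np.linalg.matrix_rank(obsv))   # 4     -> observable
```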
