
Chapter 4

Controllability

This chapter develops the fundamental results about controllability and pole assignment.

4.1 Reachable States


We study the linear system
\[
\dot{x} = Ax + Bu, \qquad t \ge 0,
\]
where $x(t) \in \mathbb{R}^n$ and $u(t) \in \mathbb{R}^m$. Thus $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$. We begin our study of
controllability with a question: What states can we get to from the origin by choice of input? Are
there states in $\mathbb{R}^n$ that are unreachable? This is a question of control authority. For example, if $B$
is the zero matrix, there is no control authority, and if we start $x$ at the origin, we'll stay there. By
contrast, if $B = I$ it will turn out that we can reach every state.
Fix a time $t_1 > 0$. We say a vector $v$ in $\mathbb{R}^n$ is reachable (at time $t_1$) if there exists an input
$u(\cdot)$ that steers the state from the origin at $t = 0$ to $v$ at $t = t_1$.
To characterize reachability we have to recall the integral form of the above differential equation.
It was derived in ECE356 that
\[
x(t) = e^{tA}x(0) + \int_0^t e^{(t-\tau)A} B u(\tau)\, d\tau.
\]

If $x(0) = 0$ and $t = t_1$, then
\[
x(t_1) = \int_0^{t_1} e^{(t_1-\tau)A} B u(\tau)\, d\tau.
\]
Thus a vector $v$ is reachable at time $t_1$ iff
\[
(\exists u(\cdot)) \quad v = \int_0^{t_1} e^{(t_1-\tau)A} B u(\tau)\, d\tau.
\]

Let us introduce the space $U$ of all signals $u(\cdot)$ defined on the time interval $[0, t_1]$. The smoothness
of $u(\cdot)$ is not really important, but to be specific, let's assume $u(\cdot)$ is continuous. Also, let us
introduce the reachability operator
\[
R : U \to \mathbb{R}^n, \qquad Ru = \int_0^{t_1} e^{(t_1-\tau)A} B u(\tau)\, d\tau.
\]


In words, $R$ is the linear transformation (LT) that maps the input signal to the state at time $t_1$
starting from $x(0) = 0$.
The LT $R$ is not the same as a matrix because $U$ isn't finite dimensional. But its image is well
defined and $\operatorname{Im} R$ is a subspace of $\mathbb{R}^n$, as is easy to prove.
It's time now to introduce the controllability matrix
\[
W_c = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}.
\]
In the single-input case, $B$ is $n \times 1$ and $W_c$ is square, $n \times n$. The importance of this matrix comes
from the theorem to follow.

Note The object $B$ is a matrix. However, associated with it is an LT, namely the LT that maps a
vector $u$ to the vector $Bu$. Instead of introducing more notation, we shall write $\operatorname{Im} B$ for the image
of this LT. That is, the symbol $B$ will stand for the matrix or the LT depending on the context.
Likewise for other matrices such as $A$ and $W_c$.

Theorem 4.1.1 $\operatorname{Im} R = \operatorname{Im} W_c$, i.e., the subspace of reachable states equals the column span of
$W_c$.

Let's postpone the proof and instead note the conclusion: A vector $v$ is reachable at time $t_1$ iff
it belongs to the column span of $W_c$, i.e.,
\[
\operatorname{rank} W_c = \operatorname{rank} \begin{bmatrix} W_c & v \end{bmatrix}.
\]
Notice that reachability turns out to be independent of $t_1$. Also, every vector is reachable iff
$\operatorname{rank} W_c = n$.

Example Consider this setup of 2 carts, 2 forces:

[Figure: two carts $M_1$ (position $y_1$, force $u_1$) and $M_2$ (position $y_2$, force $u_2$), coupled by a spring $K$.]

The equations are
\[
M_1 \ddot{y}_1 = u_1 + K(y_2 - y_1), \qquad M_2 \ddot{y}_2 = u_2 + K(y_1 - y_2).
\]
Taking the state $x = (y_1, \dot{y}_1, y_2, \dot{y}_2)$ and $M_1 = 1$, $M_2 = 1/2$, $K = 1$ we have the state model
\[
A = \begin{bmatrix}
0 & 1 & 0 & 0 \\
-1 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
2 & 0 & -2 & 0
\end{bmatrix}, \qquad
B = \begin{bmatrix}
0 & 0 \\
1 & 0 \\
0 & 0 \\
0 & 2
\end{bmatrix}.
\]
We compute using Scilab/MATLAB (or by hand) that $W_c$ is $4 \times 8$ and its rank equals 4. Thus
every state is reachable from the origin. That is, every position and velocity of the carts can be
produced at any time by an appropriate open-loop control. In this sense, two forces give enough
control authority. $\square$
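
As a quick check in Scilab/MATLAB, here is a minimal sketch (the ctrb call assumes MATLAB's
Control System Toolbox; the controllability matrix can just as well be built by hand):

A = [0 1 0 0; -1 0 1 0; 0 0 0 1; 2 0 -2 0];
B = [0 0; 1 0; 0 0; 0 2];
Wc = ctrb(A, B);          % the 4 x 8 matrix [B A*B A^2*B A^3*B]
rank(Wc)                  % returns 4: every state is reachable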

Example Now consider 2 carts, 1 common force:

[Figure: the same two carts, now driven by a single force $u$: $+u$ on $M_1$ and $-u$ on $M_2$.]

Now $u_1 = u$ and $u_2 = -u$. So $A$ is as before, while
\[
B = \begin{bmatrix} 0 \\ 1 \\ 0 \\ -2 \end{bmatrix}.
\]
Then
\[
W_c = \begin{bmatrix}
0 & 1 & 0 & -3 \\
1 & 0 & -3 & 0 \\
0 & -2 & 0 & 6 \\
-2 & 0 & 6 & 0
\end{bmatrix}.
\]
The rank equals 2. The set of reachable states is the 2-dimensional subspace spanned by the first
two columns. So we don't have complete control authority in this case. $\square$

Example Two pendula balanced on one hand:

[Figure: two pendula of masses $M_1$, $M_2$ and lengths $L_1$, $L_2$, with angles $\theta_1$, $\theta_2$, balanced on a hand
whose position is $d$.]

The linearized equations of motion are
\[
M_1(\ddot{d} + L_1 \ddot{\theta}_1) = M_1 g \theta_1, \qquad
M_2(\ddot{d} + L_2 \ddot{\theta}_2) = M_2 g \theta_2.
\]
Take the state and input to be
\[
x = (\theta_1, \dot{\theta}_1, \theta_2, \dot{\theta}_2), \qquad u = \ddot{d}.
\]
Find $W_c$ and show that every state is reachable iff $L_1 \ne L_2$. $\square$
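
As a numerical check of this exercise, here is a sketch that builds a state model from the linearized
equations above with $u = \ddot{d}$; the sign convention in $B$ is an assumption of this sketch and does
not affect the rank test.

g = 10;
pend = @(L1, L2) deal([0 1 0 0; g/L1 0 0 0; 0 0 0 1; 0 0 g/L2 0], ...
                      [0; -1/L1; 0; -1/L2]);   % state (theta1, theta1dot, theta2, theta2dot)
[A, B] = pend(1, 0.5);  rank(ctrb(A, B))       % L1 ~= L2: rank 4, controllable
[A, B] = pend(1, 1);    rank(ctrb(A, B))       % L1 == L2: rank 2, not controllable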

The proof of the theorem requires the Cayley-Hamilton theorem, which we discuss now. Consider
the matrix
\[
A = \begin{bmatrix} 0 & 1 \\ -1 & -1 \end{bmatrix}.
\]
Its characteristic polynomial is
\[
s^2 + s + 1.
\]
Substitute $s = A$ into this polynomial, regarding the constant as $s^0$. You get the matrix
\[
A^2 + A + I.
\]
Verify that this equals the zero matrix:
\[
A^2 + A + I = 0.
\]
Thus $A^2$ is a linear combination of $\{I, A\}$. Likewise for the higher powers $A^3$, $A^4$, etc.
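
This is easy to check numerically; a minimal MATLAB sketch (poly returns the characteristic-polynomial
coefficients, polyvalm evaluates the polynomial at a matrix):

A = [0 1; -1 -1];
p = poly(A);            % coefficients of det(sI - A), highest power first
norm(polyvalm(p, A))    % essentially zero: A satisfies its own characteristic polynomial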

Theorem 4.1.2 If $p(s)$ denotes the characteristic polynomial of $A$, then $p(A) = 0$. Thus $A^n$ is a
linear combination of lower powers of $A$.

Proof Here's a proof for $n = 3$; the proof carries over for higher $n$. We have the identity
\[
(sI - A)^{-1} = \frac{1}{p(s)} N(s),
\]
where $p(s) = \det(sI - A)$ and $N(s)$ is the adjoint of $sI - A$. We can manipulate this to read
\[
p(s) I = (sI - A) N(s).
\]
Say $p(s) = s^3 + a_3 s^2 + a_2 s + a_1$. Then $N(s)$ must have the form $s^2 I + s N_2 + N_1$, where $N_1, N_2$ are
constant matrices. Thus we have
\[
(s^3 + a_3 s^2 + a_2 s + a_1) I = (sI - A)(s^2 I + s N_2 + N_1).
\]
Equating coefficients of powers of $s$, we get
\begin{align*}
a_3 I &= N_2 - A \\
a_2 I &= N_1 - A N_2 \\
a_1 I &= -A N_1.
\end{align*}
Multiply the first equation by $A^2$ and the second by $A$, and then add all three: You get
\[
a_3 A^2 + a_2 A + a_1 I = -A^3,
\]
or
\[
A^3 + a_3 A^2 + a_2 A + a_1 I = 0.
\]

Proof of Theorem 4.1.1 We first show $\operatorname{Im} R \subset \operatorname{Im} W_c$. Let $v \in \operatorname{Im} R$. Then there exists $u \in U$
such that $v = Ru$, i.e.,
\begin{align*}
v &= \int_0^{t_1} e^{(t_1-\tau)A} B u(\tau)\, d\tau \\
  &= \int_0^{t_1} \left[ I + (t_1-\tau)A + \frac{(t_1-\tau)^2}{2!}A^2 + \cdots \right] B u(\tau)\, d\tau \\
  &= B \int_0^{t_1} u(\tau)\, d\tau + AB \int_0^{t_1} (t_1-\tau) u(\tau)\, d\tau + \cdots.
\end{align*}
Thus $v$ belongs to the column span of
\[
\begin{bmatrix} B & AB & A^2B & \cdots \end{bmatrix}.
\]
By the Cayley-Hamilton theorem, the column span terminates at
\[
\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}.
\]
Now we show $\operatorname{Im} W_c \subset \operatorname{Im} R$. An equivalent condition is in terms of orthogonal complements:
$(\operatorname{Im} R)^\perp \subset (\operatorname{Im} W_c)^\perp$. So let $v$ be a vector orthogonal to $\operatorname{Im} R$. Thus for every $u(\cdot)$
\[
v^T \int_0^{t_1} e^{(t_1-\tau)A} B u(\tau)\, d\tau = 0.
\]
That is,
\[
\int_0^{t_1} v^T e^{(t_1-\tau)A} B u(\tau)\, d\tau = 0.
\]
Since this is true for every $u(\cdot)$, it must be that
\[
v^T e^{(t_1-\tau)A} B = 0 \qquad \forall \tau,\ 0 \le \tau \le t_1.
\]
This implies that
\[
v^T e^{tA} B = 0 \qquad \forall t,\ 0 \le t \le t_1
\]
and hence that
\[
v^T \left( I + tA + \frac{t^2}{2}A^2 + \cdots \right) B = 0 \qquad \forall t,\ 0 \le t \le t_1.
\]
Thus
\[
v^T B = 0, \quad v^T AB = 0, \ \ldots.
\]
That is, $v$ is orthogonal to every column of $W_c$. $\square$

An auxiliary question is, if $v$ is a reachable state, what control input steers the state from
$x(0) = 0$ to $x(t_1) = v$? That is, if $v \in \operatorname{Im} R$, solve $v = Ru$ for $u$. There are infinitely many such $u$
because the nullspace of $R$ is nonzero. We can get one input as follows. We want to solve
\[
v = \int_0^{t_1} e^{(t_1-\tau)A} B u(\tau)\, d\tau
\]
for the function $u$. Without any motivation, let's look for a solution of the form
\[
u(\tau) = B^T e^{(t_1-\tau)A^T} w,
\]
where $w$ is a constant vector. Then the equation to be solved is
\[
v = \int_0^{t_1} e^{(t_1-\tau)A} B B^T e^{(t_1-\tau)A^T} w\, d\tau.
\]
Now $w$ can be brought outside the integral:
\[
v = \left[ \int_0^{t_1} e^{(t_1-\tau)A} B B^T e^{(t_1-\tau)A^T} d\tau \right] w.
\]
In square brackets is a square matrix:
\[
L_c = \int_0^{t_1} e^{(t_1-\tau)A} B B^T e^{(t_1-\tau)A^T} d\tau.
\]
If $L_c$ is invertible, we're done, because $w = L_c^{-1} v$. It can be shown that $L_c$ is in fact invertible if
$W_c$ has full rank.
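
Here is a rough numerical sketch of this construction for the 2-carts, 2-forces example; the time
horizon, target state, and trapezoidal quadrature are arbitrary choices of this sketch, not part of
the notes.

A = [0 1 0 0; -1 0 1 0; 0 0 0 1; 2 0 -2 0];
B = [0 0; 1 0; 0 0; 0 2];
t1 = 2;  v = [1; 0; -1; 0];                   % some target state
tau = linspace(0, t1, 2001);  dt = tau(2) - tau(1);
Lc = zeros(4);
for k = 1:numel(tau)                          % trapezoidal approximation of L_c
    E = expm((t1 - tau(k))*A)*B;
    wk = dt;  if k == 1 || k == numel(tau), wk = dt/2; end
    Lc = Lc + wk*(E*E');
end
w = Lc \ v;                                   % w = L_c^{-1} v
u = @(t) B'*expm((t1 - t)*A')*w;              % open-loop input steering 0 to v at time t1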

Recap: The set of all states reachable starting from the origin is a subspace, the image (column
span) of Wc , the controllability matrix. Thus, if this matrix has rank n, every state is reachable. If
a state is reachable at some time, then it’s reachable at any time. Of course you’ll have to use a
big control if the time is very short.

Finally, we say that the pair of matrices (A, B) is a controllable pair if every state is reachable,
equivalently, the rank of Wc equals n. Go over the examples in this section and see which are
controllable.

4.2 Properties of Controllability


Invariance under change of basis
The state vector of a system is certainly not unique. For example, if $x$ is a state vector, so is $Vx$
for any square invertible matrix $V$. Suppose we have the state model
\begin{align*}
\dot{x} &= Ax + Bu \\
y &= Cx + Du
\end{align*}

and we define a new state vector, $\tilde{x} = Vx$. The new equations are
\begin{align*}
\dot{\tilde{x}} &= VAV^{-1}\tilde{x} + VBu \\
y &= CV^{-1}\tilde{x} + Du.
\end{align*}
Under the change of state $x \mapsto Vx$, the $A, B$ matrices change like this
\[
(A, B) \mapsto (VAV^{-1}, VB)
\]
and the controllability matrix changes like this
\[
\begin{bmatrix} B & AB & A^2B & \cdots \end{bmatrix} \mapsto V \begin{bmatrix} B & AB & A^2B & \cdots \end{bmatrix}.
\]
Thus, $(A, B)$ is controllable iff $(VAV^{-1}, VB)$ is controllable.
A transformation of the form $A \mapsto VAV^{-1}$ is called a similarity transformation; we say $A$
and $VAV^{-1}$ are similar.

Invariance under state feedback


Consider applying the control law $u = Fx + v$ to the system $\dot{x} = Ax + Bu$. Here $F \in \mathbb{R}^{m \times n}$ and $v$
is a new independent input. The new state model is
\[
\dot{x} = (A + BF)x + Bv.
\]
That is, if $u = Fx + v$ (or $u \mapsto Fx + u$), then
\[
(A, B) \mapsto (A + BF, B).
\]
Notice that the implementation of such a control law requires that there be a sensor for every
state variable. To emphasize this, consider the maglev example. Think of the sensors required to
implement $u = Fx + v$.
You can prove that under state feedback, the set of reachable states remains unchanged;
controllability can be neither created nor destroyed by state feedback.

Decomposition
If we have a model ẋ = Ax + Bu where (A, B) is not controllable, it is natural to try to decompose
the system into a controllable part and an uncontrollable part. Let’s do an example.

Example 2 carts, 1 force

We had these matrices:
\[
A = \begin{bmatrix}
0 & 1 & 0 & 0 \\
-1 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
2 & 0 & -2 & 0
\end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 1 \\ 0 \\ -2 \end{bmatrix}.
\]
And the controllability matrix is
\[
W_c = \begin{bmatrix}
0 & 1 & 0 & -3 \\
1 & 0 & -3 & 0 \\
0 & -2 & 0 & 6 \\
-2 & 0 & 6 & 0
\end{bmatrix}.
\]

The rank equals 2. The set of reachable states is the 2-dimensional subspace spanned by the first
two columns. Let {e1 , e2 } denote these two columns; thus {e1 , e2 } is a basis for Im Wc . Add two
more vectors to get a basis for R4 , say

e3 = (0, 0, 1, 0), e4 = (0, 0, 0, 1).

Now, the matrix A is the matrix representation of an LT in the standard basis. Write the matrix
of the same LT but in the basis {e1 , . . . , e4 }. As you recall, you proceed as follows: Write Ae1 in
the new basis and stack up the coefficients as the first column:

Ae1 = e2 .

So the first column of the new matrix is (0, 1, 0, 0). Repeat for the other basis vectors. The result
is the matrix
2 3
0 3 1 0
6 1 0 0 0 7
6 7.
4 0 0 0 1 5
0 0 0 0
There’s a more streamlined way to describe this transformation. Form the matrix V by putting
{e1 , . . . , e4 } as its columns:
2 3
0 1 0 0
6 1 0 0 0 7
V =6 4 0
7.
2 1 0 5
2 0 0 1

Now transform the state via x = V x̃. Then A, B transform to V 1 AV, V 1 B:

2 3 2 3
0 3 1 0 1
6 1 0 0 0 7 6 7
V 1 AV = 6 7 , V 1B = 6 0 7 .
4 0 0 0 1 5 4 0 5
0 0 0 0 0
These matrices have a very nice structure, indicated by the partition lines:
2 3 2 3
0 3 1 0 1
6 1 0 0 0 7 6 7
V 1 AV = 6 7 , V 1B = 6 0 7 .
4 0 0 0 1 5 4 0 5
0 0 0 0 0
Let us write the blocks like this:
 
1 A11 A12 1 B1
V AV = , V B= .
A21 A22 B2

Thus, the state x has been transformed to x̃ = V 1 x, and the state equation ẋ = Ax + Bu to
x̃˙ 1 = A11 x̃1 + A12 x̃2 + B1 u
x̃˙ 2 = A21 x̃1 + A22 x̃2 + B2 u.

Note these key features: B2 = 0, A21 = 0, and (A11 , B1 ) is controllable. With these, the model is
actually
x̃˙ 1 = A11 x̃1 + A12 x̃2 + B1 u
x̃˙ 2 = A22 x̃2 .

In this form, x̃2 represents the uncontrollable part of the system—the second equation has no input.
And x̃1 represents the controllable part. There is coupling only from x̃2 to x̃1 . ⇤
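
The same decomposition can be reproduced numerically; a sketch (ctrb assumes the Control System
Toolbox, and the basis completion below simply reuses $e_3$, $e_4$ from the example):

A = [0 1 0 0; -1 0 1 0; 0 0 0 1; 2 0 -2 0];
B = [0; 1; 0; -2];
Wc = ctrb(A, B);
V  = [Wc(:,1), Wc(:,2), [0;0;1;0], [0;0;0;1]];  % {e1, e2} from Im Wc, completed to a basis
V\A*V                                           % block upper triangular [A11 A12; 0 A22]
V\B                                             % [B1; 0]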

Now we describe the general theory. The matrix Wc is the controllability matrix and the
subspace Im Wc is the set of reachable states. It’s convenient to rename this subspace as the
controllable subspace. Now, if we have a model ẋ = Ax + Bu where (A, B) is not controllable,
we can decompose the system into a controllable part and an uncontrollable part.
The decomposition construction follows the example. Let {e1 , . . . , ek } be a basis for Im Wc .
Complement it to get a full basis for state space:

{e1 , . . . , ek , . . . , en }.

Let $V$ denote the square matrix with columns $e_1, \ldots, e_n$. Then
\[
V^{-1}AV = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}, \qquad
V^{-1}B = \begin{bmatrix} B_1 \\ 0 \end{bmatrix}.
\]
Furthermore, $(A_{11}, B_1)$ is controllable. The lower-left block of the new $A$ equals zero because $\operatorname{Im} W_c$
is $A$-invariant; the lower block of the new $B$ equals zero because $\operatorname{Im} B \subset \operatorname{Im} W_c$.

4.3 The PBH (Popov-Belevitch-Hautus) Test


If we have the model $\dot{x} = Ax + Bu$ and we want to check if $(A, B)$ is controllable, that is, if there
is enough control authority, we can check the rank of $W_c$. This section develops an alternative test
that is frequently more insightful.
Observe that the $n \times n$ matrix $A - \lambda I$ is invertible iff $\lambda$ is not an eigenvalue. In other words
\[
\operatorname{rank}(A - \lambda I) = n \iff \lambda \text{ is not an eigenvalue of } A.
\]
The PBH test concerns the $n \times (n + m)$ matrix
\[
\begin{bmatrix} A - \lambda I & B \end{bmatrix}.
\]

Theorem 4.3.1 $(A, B)$ is controllable iff
\[
\operatorname{rank} \begin{bmatrix} A - \lambda I & B \end{bmatrix} = n \quad \forall \text{ eigenvalues } \lambda \text{ of } A.
\]

Proof
($\Longrightarrow$) Assume $\operatorname{rank}\begin{bmatrix} A - \lambda I & B \end{bmatrix} < n$ for some eigenvalue $\lambda$. Then there exists $x \ne 0$ ($x$ will be
complex if $\lambda$ is) such that
\[
x^* \begin{bmatrix} A - \lambda I & B \end{bmatrix} = 0
\]
(i.e., $x \perp$ every column of $\begin{bmatrix} A - \lambda I & B \end{bmatrix}$), where $*$ denotes complex-conjugate transpose. Thus
\[
x^* A = \lambda x^*, \qquad x^* B = 0.
\]
So
\[
x^* A^2 = \lambda x^* A = \lambda^2 x^*
\]
etc.
\[
\Longrightarrow \quad x^* A^k = \lambda^k x^*.
\]
Thus
\[
x^* \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}
= \begin{bmatrix} x^*B & \lambda x^*B & \cdots & \lambda^{n-1} x^*B \end{bmatrix} = 0.
\]
So $(A, B)$ is not controllable.

($\Longleftarrow$) Assume $(A, B)$ is not controllable. As in the preceding section, there exists $V$ such that
\[
(\tilde{A}, \tilde{B}) = (V^{-1}AV, V^{-1}B)
\]
have the form
\[
\tilde{A} = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}, \qquad
\tilde{B} = \begin{bmatrix} B_1 \\ 0 \end{bmatrix}.
\]
Thus $\operatorname{rank}\begin{bmatrix} \tilde{A} - \lambda I & \tilde{B} \end{bmatrix} < n$ for $\lambda$ an eigenvalue of $A_{22}$. So $\operatorname{rank}\begin{bmatrix} A - \lambda I & B \end{bmatrix} < n$ since
\[
\begin{bmatrix} \tilde{A} - \lambda I & \tilde{B} \end{bmatrix}
= V^{-1} \begin{bmatrix} A - \lambda I & B \end{bmatrix}
\begin{bmatrix} V & 0 \\ 0 & I \end{bmatrix}.
\]

In view of this theorem, it makes sense to define an eigenvalue $\lambda$ of $A$ to be controllable if
\[
\operatorname{rank} \begin{bmatrix} A - \lambda I & B \end{bmatrix} = n.
\]
Then $(A, B)$ is controllable iff every eigenvalue of $A$ is controllable.

Example 2 carts, 1 force

We had these matrices:
\[
A = \begin{bmatrix}
0 & 1 & 0 & 0 \\
-1 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
2 & 0 & -2 & 0
\end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 1 \\ 0 \\ -2 \end{bmatrix}.
\]
The eigenvalues of $A$ are $\{0, 0, \pm\sqrt{3}\,j\}$. The controllable ones are $\{\pm\sqrt{3}\,j\}$, which are the eigenvalues
of $A_{11}$, the controllable part of $A$. $\square$
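
A direct way to run the PBH test numerically is to check the rank at each eigenvalue; a minimal
sketch for this example:

A = [0 1 0 0; -1 0 1 0; 0 0 0 1; 2 0 -2 0];
B = [0; 1; 0; -2];
n = size(A, 1);
for lam = eig(A).'                      % loop over the eigenvalues of A
    r = rank([A - lam*eye(n), B]);      % PBH rank test at this eigenvalue
    fprintf('%7.3f %+7.3fj : rank %d of %d\n', real(lam), imag(lam), r, n);
end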

4.4 Controllability from a Single Input


In this section we ask, when is a system controllable from a single input? That is, we consider
$(A, B)$ pairs where $A$ is $n \times n$ and $B$ is $n \times 1$.
A matrix of the form
\[
\begin{bmatrix}
0 & 1 & & & \\
0 & 0 & \ddots & & \\
\vdots & & & \ddots & \\
0 & & & & 1 \\
-a_1 & -a_2 & \cdots & -a_{n-1} & -a_n
\end{bmatrix}
\]
is called a companion matrix. Its characteristic polynomial is
\[
s^n + a_n s^{n-1} + \cdots + a_2 s + a_1.
\]

Companion matrices arise naturally in going from a differential equation model to a state model.

Example Consider the system modeled by
\[
\dddot{y} + a_3 \ddot{y} + a_2 \dot{y} + a_1 y = b_3 \ddot{u} + b_2 \dot{u} + b_1 u.
\]
The transfer function is
\[
\frac{b_3 s^2 + b_2 s + b_1}{s^3 + a_3 s^2 + a_2 s + a_1}
\]
and then the controllable realization is
\[
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_1 & -a_2 & -a_3 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},
\]
\[
C = \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix}, \qquad D = 0.
\]

Notice that if $A$ is a companion matrix, then there exists a vector $B$ such that $(A, B)$ is
controllable, namely,
\[
B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.
\]

Example The controllability matrix of
\[
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_1 & -a_2 & -a_3 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\]
is
\[
W_c = \begin{bmatrix} B & AB & A^2B \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & -a_3 \\ 1 & -a_3 & a_3^2 - a_2 \end{bmatrix}.
\]
The rank of $W_c$ equals 3, so $(A, B)$ is controllable. $\square$

Now we'll see that if $(A, B)$ is controllable and $B$ is $n \times 1$, then $A$ is similar to a companion
matrix.

Theorem 4.4.1 Suppose $(A, B)$ is controllable and $B$ is $n \times 1$. Let the characteristic polynomial
of $A$ be
\[
s^n + a_n s^{n-1} + \cdots + a_1.
\]
Define
\[
\tilde{A} = \begin{bmatrix}
0 & 1 & & & \\
0 & 0 & \ddots & & \\
\vdots & & & \ddots & \\
0 & & & & 1 \\
-a_1 & -a_2 & \cdots & -a_{n-1} & -a_n
\end{bmatrix}, \qquad
\tilde{B} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.
\]
Then there exists a $W$ such that
\[
W^{-1}AW = \tilde{A}, \qquad W^{-1}B = \tilde{B}.
\]

Proof Assume $n = 3$ to simplify the notation. The characteristic poly of $A$ is
\[
s^3 + a_3 s^2 + a_2 s + a_1.
\]
By Cayley-Hamilton,
\[
A^3 + a_3 A^2 + a_2 A + a_1 I = 0.
\]
Multiply by $B$:
\[
A^3 B + a_3 A^2 B + a_2 AB + a_1 B = 0.
\]
Hence
\[
A^3 B = -a_1 B - a_2 AB - a_3 A^2 B.
\]
Using this equation, you can verify that
\[
A \begin{bmatrix} B & AB & A^2B \end{bmatrix}
= \begin{bmatrix} B & AB & A^2B \end{bmatrix}
\begin{bmatrix} 0 & 0 & -a_1 \\ 1 & 0 & -a_2 \\ 0 & 1 & -a_3 \end{bmatrix}. \tag{4.1}
\]
Define the controllability matrix $W_c := \begin{bmatrix} B & AB & A^2B \end{bmatrix}$ and the new matrix
\[
M = \begin{bmatrix} 0 & 0 & -a_1 \\ 1 & 0 & -a_2 \\ 0 & 1 & -a_3 \end{bmatrix}.
\]
Note that $M^T$ is the companion matrix corresponding to the characteristic polynomial of $A$. From
(4.1) we have
\[
W_c^{-1} A W_c = M. \tag{4.2}
\]
Regarding $B$, we have
\[
B = \begin{bmatrix} B & AB & A^2B \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},
\]
so
\[
W_c^{-1} B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}. \tag{4.3}
\]
Now define
\[
\tilde{A} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_1 & -a_2 & -a_3 \end{bmatrix}, \qquad
\tilde{B} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.
\]
These are the matrices we want to transform $A$, $B$ to. Define their controllability matrix
\[
\tilde{W}_c := \begin{bmatrix} \tilde{B} & \tilde{A}\tilde{B} & \tilde{A}^2\tilde{B} \end{bmatrix}.
\]
Then, as in (4.2) and (4.3),
\[
\tilde{W}_c^{-1} \tilde{A} \tilde{W}_c = M \quad \text{(same } M\text{!)}, \tag{4.4}
\]
\[
\tilde{W}_c^{-1} \tilde{B} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}. \tag{4.5}
\]
From (4.2), (4.4) and (4.3), (4.5),
\[
W_c^{-1} A W_c = \tilde{W}_c^{-1} \tilde{A} \tilde{W}_c, \qquad
W_c^{-1} B = \tilde{W}_c^{-1} \tilde{B}.
\]
Define $W = W_c \tilde{W}_c^{-1}$. Then
\[
W^{-1} A W = \tilde{A}, \qquad W^{-1} B = \tilde{B}.
\]

Let's summarize the steps to transform $(A, B)$ to $(\tilde{A}, \tilde{B})$:

Procedure
Step 1 Find the characteristic poly of $A$:
\[
s^n + a_n s^{n-1} + \cdots + a_1.
\]
Step 2 Define
\[
W_c = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}, \qquad
\tilde{W}_c = \begin{bmatrix} \tilde{B} & \tilde{A}\tilde{B} & \cdots & \tilde{A}^{n-1}\tilde{B} \end{bmatrix}, \qquad
W = W_c \tilde{W}_c^{-1}.
\]
Then $W^{-1}AW = \tilde{A}$, $W^{-1}B = \tilde{B}$.

Example
\[
A = \begin{bmatrix} 3 & 2 & -9 \\ 2 & 2 & -7 \\ 1 & 1 & -4 \end{bmatrix}, \qquad
B = \begin{bmatrix} 3 \\ 3 \\ 1 \end{bmatrix}
\]
\[
\text{char poly } A = s^3 - s^2 - 2s + 1
\]
\[
\tilde{A} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -1 & 2 & 1 \end{bmatrix}, \qquad
\tilde{B} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\]
\[
W_c = \begin{bmatrix} 3 & 6 & 10 \\ 3 & 5 & 8 \\ 1 & 2 & 3 \end{bmatrix}, \qquad
\tilde{W}_c = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 3 \end{bmatrix}
\]
\[
W = \begin{bmatrix} -2 & 3 & 3 \\ -3 & 2 & 3 \\ -1 & 1 & 1 \end{bmatrix}
\]
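
A quick numerical check of this example (a sketch): build the two controllability matrices, form
$W$, and confirm that the similarity transformation recovers $(\tilde{A}, \tilde{B})$.

A  = [3 2 -9; 2 2 -7; 1 1 -4];   B  = [3; 3; 1];
At = [0 1 0; 0 0 1; -1 2 1];     Bt = [0; 0; 1];
Wc  = [B  A*B  A^2*B];
Wct = [Bt At*Bt At^2*Bt];
W = Wc / Wct                     % W = Wc * inv(Wct); gives [-2 3 3; -3 2 3; -1 1 1]
W\A*W                            % equals At
W\B                              % equals Bt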

4.5 Pole Assignment


This section presents the most important result about controllability: that eigenvalues¹ can be
arbitrarily assigned by state feedback. This result is very useful and is the key to state-space
control design methods. The result was first proved by W.M. Wonham, a professor in our own ECE
department.
As we saw, the control law $u = Fx + v$ transforms $(A, B)$ to $(A + BF, B)$. This leads us to pose
the pole assignment problem:

Given $A$, $B$.

Design $F$ so that the eigs of $A + BF$ are in a desired location.

¹ It is customary to call the eigenvalues of $A$ its "poles." Of course this more properly refers to the poles of the
relevant transfer function.

We might like to specify the eigenvalues of $A + BF$ exactly, or we might be satisfied to have them
in a specific region in the complex plane, such as a truncated cone for fast transient response and
good damping:

[Figure: a truncated-cone region in the left half-plane indicating the desired pole locations.]

The fact is that you can arbitrarily assign the eigs of $A + BF$ iff $(A, B)$ is controllable. We'll prove
this, and also get a procedure to design $F$.

Single-Input Case

Example
\[
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -1 & -1 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
\]
This $A$ is in companion form, and also $B$ is as in Theorem 4.4.1. Let us design $F = \begin{bmatrix} F_1 & F_2 & F_3 \end{bmatrix}$
to place the eigs of $A + BF$ at $-1, -2, -3$. We have $A + BF$ is a companion matrix with final
row
\[
\begin{bmatrix} 1 + F_1 & -1 + F_2 & -1 + F_3 \end{bmatrix}.
\]
Thus its characteristic poly is
\[
s^3 + (1 - F_3)s^2 + (1 - F_2)s + (-1 - F_1).
\]
But the desired char poly is
\[
(s + 1)(s + 2)(s + 3) = s^3 + 6s^2 + 11s + 6.
\]
Equating coefficients in these two equations, we get the unique $F$:
\[
F = \begin{bmatrix} -7 & -10 & -5 \end{bmatrix}.
\]

The example extends to the general case of $A$ a companion matrix with final row
\[
\begin{bmatrix} -a_1 & \cdots & -a_n \end{bmatrix}
\]
and
\[
B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
\]
as follows: Let $\{\lambda_1, \ldots, \lambda_n\}$ be the desired set of eigenvalues, occurring in complex-conjugate pairs.
The matrix $F$ has the form $F = \begin{bmatrix} F_1 & \cdots & F_n \end{bmatrix}$, so $A + BF$ is a companion matrix with final row
\[
\begin{bmatrix} -a_1 + F_1 & \cdots & -a_n + F_n \end{bmatrix}.
\]
Equate the coefficients of the two polynomials
\[
s^n + (a_n - F_n)s^{n-1} + \cdots + (a_1 - F_1), \qquad (s - \lambda_1) \cdots (s - \lambda_n),
\]
and solve for $F_1, \ldots, F_n$.


Let us turn to the general case of $(A, B)$ controllable, $B$ is $n \times 1$. The procedure to compute $F$
to assign the eigenvalues of $A + BF$ is as follows:

Step 1 Using Theorem 4.4.1, compute $W$ so that
\[
\tilde{A} := W^{-1}AW, \qquad \tilde{B} := W^{-1}B
\]
have the form that $\tilde{A}$ is a companion matrix and
\[
\tilde{B} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.
\]
Step 2 Compute $\tilde{F}$ to assign the eigs of $\tilde{A} + \tilde{B}\tilde{F}$ to the desired locations.

Step 3 Set $F = \tilde{F}W^{-1}$.

To see that $A + BF$ and $\tilde{A} + \tilde{B}\tilde{F}$ have the same eigs, simply note that
\[
W^{-1}(A + BF)W = \tilde{A} + \tilde{B}\tilde{F}.
\]

Example 2 pendula
We had
\[
x = \begin{bmatrix} \theta_1 \\ \dot{\theta}_1 \\ \theta_2 \\ \dot{\theta}_2 \end{bmatrix}, \qquad u = \ddot{d}.
\]
Taking $L_1 = 1$, $L_2 = 1/2$, $g = 10$, we have
\[
A = \begin{bmatrix}
0 & 1 & 0 & 0 \\
10 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 20 & 0
\end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ -1 \\ 0 \\ -2 \end{bmatrix}.
\]
Suppose the desired eigs are $\{-1, -1, -2 \pm j\}$.

Step 1
\[
\text{eigs } A = \left\{ \pm\sqrt{10}, \pm\sqrt{20} \right\}
\]
\[
\text{char poly } A = s^4 - 30s^2 + 200
\]
\[
\tilde{A} = \begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
-200 & 0 & 30 & 0
\end{bmatrix}, \qquad
\tilde{B} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
\]
\[
W_c = \begin{bmatrix}
0 & -1 & 0 & -10 \\
-1 & 0 & -10 & 0 \\
0 & -2 & 0 & -40 \\
-2 & 0 & -40 & 0
\end{bmatrix}
\]
\[
\tilde{W}_c = \begin{bmatrix}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 30 \\
1 & 0 & 30 & 0
\end{bmatrix}
\]
\[
W = W_c \tilde{W}_c^{-1} = \begin{bmatrix}
20 & 0 & -1 & 0 \\
0 & 20 & 0 & -1 \\
20 & 0 & -2 & 0 \\
0 & 20 & 0 & -2
\end{bmatrix}
\]
Step 2 Desired char poly is
\[
(s + 1)^2(s + 2 - j)(s + 2 + j) = s^4 + 6s^3 + 14s^2 + 14s + 5.
\]
Char poly of $\tilde{A} + \tilde{B}\tilde{F}$ is
\[
s^4 - \tilde{F}_4 s^3 + (-30 - \tilde{F}_3)s^2 - \tilde{F}_2 s + (200 - \tilde{F}_1).
\]
Equate coeffs and solve:
\[
\tilde{F} = \begin{bmatrix} 195 & -14 & -44 & -6 \end{bmatrix}.
\]
Step 3
\[
F = \tilde{F}W^{-1} = \begin{bmatrix} -24.5 & -7.4 & 34.25 & 6.7 \end{bmatrix}.
\]
This result is the same as via the MATLAB command acker. $\square$
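
For reference, a sketch of the acker check (Control System Toolbox; note that acker uses the
$u = -Kx$ convention, so $F = -K$ in the $u = Fx$ convention of these notes):

A = [0 1 0 0; 10 0 0 0; 0 0 0 1; 0 0 20 0];
B = [0; -1; 0; -2];
p = [-1, -1, -2+1j, -2-1j];      % desired eigenvalues (the repeated -1 rules out place here)
F = -acker(A, B, p)              % should reproduce the F computed above
eig(A + B*F)                     % check: equals p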

We have now proved sufficiency of the single-input pole assignment theorem:

Theorem 4.5.1 The eigs of $A + BF$ can be arbitrarily assigned iff $(A, B)$ is controllable.

The direction ($\Longrightarrow$) is left as an exercise. It says, if $(A, B)$ is not controllable, then some
eigenvalues of $A + BF$ are fixed, that is, they do not move as $F$ varies.
Note that in the single-input case, $F$ is uniquely determined by the set of desired eigenvalues.

Multi-Input Case
Now we turn to the general case of $(A, B)$ where $B$ is $n \times m$, $m > 1$. As before, if $(A, B)$ is not
controllable, then some of the eigs of $A + BF$ are fixed; if $(A, B)$ is controllable, the eigs can be
freely assigned.
To prove this, and construct $F$, we first see that the system can be made controllable from a
single input by means of a preliminary feedback.

Lemma 4.5.1 (Heymann 1968) Assume $(A, B)$ is controllable. Let $B_1$ be any nonzero column of
$B$ (could be the first one). Then there exists $F$ such that $(A + BF, B_1)$ is controllable.

Before proving the lemma, let's see how it's used.

Theorem 4.5.2 (Wonham 1967) The eigs of $A + BF$ are freely assignable iff $(A, B)$ is controllable.

Proof ($\Longrightarrow$) For you to do.

($\Longleftarrow$)

Step 1 Select, say, the first column $B_1$ of $B$. If $(A, B_1)$ is controllable, set $\tilde{F} = 0$ and go to
Step 3.

Step 2 Choose $\tilde{F}$ so that $(A + B\tilde{F}, B_1)$ is controllable.

Step 3 Compute $\tilde{\tilde{F}}$ so that $A + B\tilde{F} + B_1\tilde{\tilde{F}}$ has the desired eigs.

Step 4 Set
\[
F = \tilde{F} + \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \tilde{\tilde{F}}.
\]
Then
\[
A + BF = A + B\tilde{F} + B_1\tilde{\tilde{F}}.
\]

Two comments: A random $\tilde{F}$ will work in Step 2; in general $F$ is not uniquely determined by
the desired eigenvalues.
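
Here is a sketch of these four steps as a MATLAB function (the name assign_multi is ours; ctrb
and acker assume the Control System Toolbox, and a robust implementation would re-draw the
random $\tilde{F}$ if the rank check fails):

function F = assign_multi(A, B, p)
    % Assign the eigenvalues in p to A + B*F, where B has m > 1 columns.
    n = size(A, 1);  m = size(B, 2);
    B1 = B(:, 1);                          % Step 1: first column of B
    if rank(ctrb(A, B1)) == n
        Ft = zeros(m, n);
    else
        Ft = randn(m, n);                  % Step 2: a random Ftilde works almost surely
    end
    Ftt = -acker(A + B*Ft, B1, p);         % Step 3: single-input assignment (u = Fx convention)
    e1 = zeros(m, 1);  e1(1) = 1;
    F = Ft + e1*Ftt;                       % Step 4: A + B*F = A + B*Ft + B1*Ftt
end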
Now we turn to the proof of the lemma.
Proof (Hautus '77)
The proof concerns an artificial discrete-time state model, namely,
\[
x(k + 1) = Ax(k) + Bu(k), \qquad x(1) = B_1. \tag{4.6}
\]
Suppose we can prove this: There exists an input sequence $\{u(1), \ldots, u(n-1)\}$ such that the
resulting states $\{x(1), \ldots, x(n)\}$ span $\mathbb{R}^n$, that is, they are linearly independent. Then define $F$ as
follows:
\[
u(n) = \text{anything}
\]
\[
F \begin{bmatrix} x(1) & \cdots & x(n) \end{bmatrix} = \begin{bmatrix} u(1) & \cdots & u(n) \end{bmatrix}. \tag{4.7}
\]
To see that $(A + BF, B_1)$ is controllable, note from (4.6) and (4.7) that
\[
x(k + 1) = (A + BF)x(k), \qquad x(1) = B_1,
\]
so
\[
\begin{bmatrix} x(1) & \cdots & x(n) \end{bmatrix}
= \begin{bmatrix} B_1 & (A + BF)B_1 & \cdots & (A + BF)^{n-1}B_1 \end{bmatrix}.
\]
Thus the controllability matrix of $(A + BF, B_1)$ has rank $n$.

So it suffices to show, with respect to (4.6), that there exist $\{u(1), \ldots, u(n-1)\}$ such that the
set $\{x(1), \ldots, x(n)\}$ is lin. indep. This is proved by induction.
First, $\{x(1)\} = \{B_1\}$ is lin. indep. since $B_1 \ne 0$.
For the induction hypothesis, suppose we've chosen $\{u(1), \ldots, u(k-1)\}$ such that $\{x(1), \ldots, x(k)\}$
is lin. indep. Define
\[
\mathcal{V} = \operatorname{Im} \begin{bmatrix} x(1) & \cdots & x(k) \end{bmatrix}.
\]
If $\mathcal{V} = \mathbb{R}^n$ we're done. So assume
\[
\mathcal{V} \ne \mathbb{R}^n. \tag{4.8}
\]
We must now show there exists $u(k)$ such that $x(k+1) \notin \mathcal{V}$. (Note that $x(k+1) \notin \mathcal{V}$ is equivalent
to $\{x(1), \ldots, x(k+1)\}$ is lin. indep.) That is, we must show (from (4.6))
\[
(\exists u(k)) \quad Ax(k) + Bu(k) \notin \mathcal{V}. \tag{4.9}
\]
Suppose, to the contrary, that
\[
(\forall u \in \mathbb{R}^m) \quad Ax(k) + Bu \in \mathcal{V}.
\]
Setting $u = 0$ we get
\[
Ax(k) \in \mathcal{V}. \tag{4.10}
\]
Then we also get
\[
(\forall u \in \mathbb{R}^m) \quad Bu \in \mathcal{V}. \tag{4.11}
\]
Now we will show that
\[
A\mathcal{V} \subset \mathcal{V}, \tag{4.12}
\]
i.e.,
\[
Ax(i) \in \mathcal{V}, \qquad i = 1, \ldots, k.
\]
This is true for $i = k$ by (4.10); for $i < k$ we have from (4.6) that
\[
Ax(i) = x(i+1) - Bu(i)
\]
where $x(i+1) \in \mathcal{V}$ by definition of $\mathcal{V}$, and $Bu(i) \in \mathcal{V}$ by (4.11). This proves (4.12).

Using (4.11) and (4.12) we get that $\forall u \in \mathbb{R}^m$
\begin{align*}
ABu &\in A\mathcal{V} \subset \mathcal{V} \\
A^2Bu &\in A^2\mathcal{V} \subset A\mathcal{V} \subset \mathcal{V}
\end{align*}
etc.

Thus every column of the controllability matrix of $(A, B)$ belongs to $\mathcal{V}$. Hence $\operatorname{Im} W_c \subset \mathcal{V}$. But
$\operatorname{Im} W_c = \mathbb{R}^n$, by controllability, so $\mathcal{V} = \mathbb{R}^n$. This contradicts (4.8), so (4.9) is true after all.

Finally, the following test for controllability is fairly good numerically:

1. compute the eigs of $A$

2. choose $F$ at random

3. compute the eigs of $A + BF$

Then $(A, B)$ is controllable $\iff$ $\{\text{eigs } A\}$ and $\{\text{eigs } A + BF\}$ are disjoint.
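
A sketch of this randomized test (the tolerance and the random gain make it probabilistic rather
than a proof; the distance computation relies on implicit array expansion in recent MATLAB):

function tf = is_controllable_random(A, B)
    F  = randn(size(B, 2), size(A, 1));    % random state-feedback gain
    e0 = eig(A);
    e1 = eig(A + B*F);
    d  = min(abs(e0 - e1.'), [], 2);       % distance from each eig of A to the nearest eig of A+BF
    tf = all(d > 1e-6*max(1, norm(A)));    % disjoint (up to a tolerance) => controllable
end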

4.6 Stabilizability
Recall that $A$ is stable if all its eigenvalues have negative real parts. This is the same as saying
that for $\dot{x} = Ax$, $x(t) \to 0$ as $t \to \infty$ for every $x(0)$. Under state feedback, $u = Fx + v$, the system
$\dot{x} = Ax + Bu$ is transformed to
\[
\dot{x} = (A + BF)x + Bv.
\]
We say $(A, B)$ is stabilizable if there exists $F$ such that $A + BF$ is stable. By the pole assignment theorem,
$(A, B)$ controllable $\Longrightarrow$ $(A, B)$ stabilizable, i.e., controllability is sufficient (a stronger property).
What exactly is a test for stabilizability? The following theorem answers this.

Theorem 4.6.1 The following three conditions are equivalent:



1. $(A, B)$ is stabilizable

2. The uncontrollable part of $A$, $A_{22}$, is stable.

3. $\operatorname{rank}\begin{bmatrix} A - \lambda I & B \end{bmatrix} = n$ for every eigenvalue $\lambda$ of $A$ with $\operatorname{Re}\lambda \ge 0$.

The 2 carts, 1 force example is not stabilizable, because the uncontrollable eigenvalues, $0, 0$,
aren't stable.
Suppose $(A, B)$ is stabilizable but not controllable, and we want to find an $F$ to stabilize $A + BF$.
The way is clear: Transform to see the controllable part and stabilize that. Here are the details:
Let {e1 , . . . , ek } be a basis for Im Wc . Complement it to get a full basis for state space:

{e1 , . . . , ek , . . . , en }.

Let $V$ denote the square matrix with columns $e_1, \ldots, e_n$. Then
\[
V^{-1}AV = \begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}, \qquad
V^{-1}B = \begin{bmatrix} B_1 \\ 0 \end{bmatrix},
\]
where $(A_{11}, B_1)$ is controllable. Choose $F_1$ to stabilize $A_{11} + B_1F_1$. Then
\[
\begin{bmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{bmatrix}
+ \begin{bmatrix} B_1 \\ 0 \end{bmatrix}
\begin{bmatrix} F_1 & 0 \end{bmatrix}
= \begin{bmatrix} A_{11} + B_1F_1 & A_{12} \\ 0 & A_{22} \end{bmatrix}
\]
is stable. Transform the feedback matrix back to the original coordinates:
\[
F = \begin{bmatrix} F_1 & 0 \end{bmatrix} V^{-1}.
\]
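
These details translate directly into a short MATLAB sketch (the orthonormal basis from orth/null
is an implementation choice standing in for $\{e_1, \ldots, e_n\}$, and the chosen pole locations are
arbitrary; ctrb and place assume the Control System Toolbox):

function F = stabilize_by_decomposition(A, B)
    n  = size(A, 1);
    Wc = ctrb(A, B);
    k  = rank(Wc);
    V  = [orth(Wc), null(Wc')];            % basis of Im Wc, completed to a basis of R^n
    At = V\A*V;  Bt = V\B;
    A11 = At(1:k, 1:k);  B1 = Bt(1:k, :);
    F1 = -place(A11, B1, -(1:k));          % stabilize the controllable block (u = Fx convention)
    F  = [F1, zeros(size(B, 2), n - k)] / V;   % F = [F1 0] * V^(-1)
end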

4.7 Problems
1. Write a Scilab/MATLAB program to verify the Cayley-Hamilton theorem. Run it for
2 3
1 0 1
A=4 0 0 1 5.
0 1 2

2. Consider the state model ẋ = Ax + Bu with


2 3 2 3
3 2 2 2 2
A=4 1 0 1 5, B = 4 1 1 5.
5 1 4 3 2

(a) Find a basis for the controllable subspace.


(b) Is the vector (1, 1, 0) reachable from the origin?
(c) Find a nonzero vector that is orthogonal to every vector that is reachable from the origin.

3. Show that the matrix $A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$ has the property that $(A, B)$ is controllable for all
nonzero $2 \times 1$ matrices $B$.

4. Consider the setup of two identical systems with a common input:

[Figure: a single input $u$ driving two copies of a system $S$ in parallel.]

(An example is two identical pendula balanced on one hand.) Let the upper and lower systems
be modeled by, respectively,
\[
\dot{x}_1 = Ax_1 + Bu, \qquad \dot{x}_2 = Ax_2 + Bu.
\]
Assume $(A, B)$ is controllable.



(a) Find a state model for the overall system, taking the state to be $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$.
(b) Find a basis for the controllable subspace of the overall system.

5. Consider the state model ẋ = Ax + Bu with


2 3 2 3
1 0 0 1 0
A=4 0 0 1 5, B = 4 2 1 5.
0 1 2 2 1

(a) Transform x, A, and B so that the controllable part of the system is exhibited explicitly.
(b) What are the controllable eigenvalues?

6. Consider the system with


2 3 23
0 0 1 0 0
6 0 0 0 1 7 6 0 7
A=6 4 0 0
7, B=6 7
4 1 5.
1 1 5
0 0 2 2 0

(a) What are the eigenvalues of A?


(b) It is desired to design a state-feedback matrix $F$ to place the eigenvalues of $A + BF$ at
$\{-1, -1, -2 \pm j\}$. Is this possible? If so, do it.
(c) It is desired to design a state-feedback matrix $F$ to place the eigenvalues of $A + BF$ at
$\{0, 0, -2 \pm j\}$. Is this possible? If so, do it.

7. Consider a pair $(A, B)$ where $A$ is $n \times n$ and $B$ is $n \times 1$ (single input). Show that $(A, B)$
cannot be controllable if $A$ has two linearly independent eigenvectors for the same eigenvalue.

8. Consider the two pendula. Find the controllable subspace when the lengths are equal and
when they’re not.

9. Continue with the two-pendula problem. Give numerical values to $L_1 \ne L_2$ and stabilize by
state feedback.
10. Let
2 3 2 3
1 2 0 2 0 0
6 2 1.5 1.5 1.5 7 6 1 1 7
A=6
4
7, B=6 7.
1 2.5 0.5 2.5 5 4 1 1 5
1 1 1 2 0 0
(a) Check that (A, B) is controllable but that (A, Bi ) is not controllable for either column
Bi of B.
(b) Compute an $F$ to assign the eigenvalues of $A + BF$ to be $-1 \pm j$, $-2 \pm j$.
11. Let $A$ be an $n \times n$ real matrix in companion form. Then $(A, B)$ is controllable for a certain
column vector $B$. What does this imply about the Jordan form of $A$?
12. Consider the system model ẋ = Ax + Bu, x(0) = 0 with
2 3 2 3
3 2 1 0
A= 4 11 6 2 5, B = 4 1 5.
9 5 2 1
Does there exist an input such that x(1) = (1, 1, 1)?
13. Consider the system model ẋ = Ax + Bu with
2 3 2 3
0 1 0 0 0 0 0
6 0 0 0 0 0 7 7 6 7
6 6 1 1 7
A=6 6 0 0 2 0 0 7, B = 6 0
7 6 0 7.
7
4 0 0 0 0 1 5 4 0 0 5
0 0 0 0 0 1 1
(a) It is desired to design a state feedback matrix $F$ so that the eigenvalues of $A + BF$ all
have negative real part. Is this possible?
(b) Is it possible to design $F$ so that each eigenvalue of $A + BF$ satisfies $\operatorname{Re}\lambda \le -4$?
14. Consider the following 2-cart system:

[Figure: two carts $M_1$, $M_2$ with positions $y_1$, $y_2$, coupled by a spring $K$, driven by forces $u_1$ and
$u_2$ as described below.]

There are two inputs: a force $u_1$ applied to the first cart; a force $u_2$ applied to the two carts
in opposite directions as shown. Take the state variables to be $y_1, y_2, \dot{y}_1, \dot{y}_2$
in that order, and take $M_1 = 2$, $M_2 = 1$, $K = 2$.

(a) Find (A, B) in the state model.


(b) According to the Cayley-Hamilton theorem, $A^4$ can be expressed as a linear combination
of lower powers of $A$. Derive this expression for the 2-cart system.

15. Many mechanical systems can be modeled by the equation
\[
M\ddot{q} + D\dot{q} + Kq = u,
\]
where $q$ is a vector of positions (such as joint angles on a robot), $u$ is a vector of inputs, $M$
is a symmetric positive definite matrix, and $D$ and $K$ are two other square matrices. Find a
state model by taking $x = \begin{bmatrix} q \\ \dot{q} \end{bmatrix}$. What can you say about controllability of this state model?

16. (a) Let
2 3 32
0 1 0 0 0
6 0 0 1 0 7 6 0 7
A=6
4
7, B=6 7
4 0 5.
0 0 0 1 5
2 1 0 2 1

Find $F$ so that the eigenvalues of $A + BF$ are
\[
-1 \pm j, \quad -2 \pm j.
\]

(b) Take the same A but


2 3
1
6 0 7
B=6 4 2 5.
7

Find the controllable part of the system.

17. Consider the system


2 3 2 3
1 1 0 0
ẋ = 4 0 1 0 5x + 4 1 5u
0 0 1 1

Is the vector (4, 1, 4) reachable from the origin?

18. Is the following (A, B) pair controllable?


2 3 2 3
0 6 0 4 1 1
6 1 4 1 0 7 6 0 0 7
A=6 4 1
7, B = 6 7
8 0 0 5 4 0 1 5
1 11 0 0 1 0
2 3 2 3
0 1 1 1
19. Let A = 4 1 1 0 5, B = 4 0 5
0 2 0 1

(i) Check that (A, B) is controllable.


(ii) Find a feedback law $u = Fx$ such that the closed-loop poles are all at $-1$.

20. Prove that (A, B) is controllable if and only if (A + BF, B) is controllable for some F . Prove
that (A, B) is controllable if and only if (A + BF, B) is controllable for every F .
