
2

STATE-SPACE FUNDAMENTALS

Chapter 1 presented the state-space description for linear time-invariant
systems. This chapter establishes several fundamental results that follow
from this representation, beginning with a derivation of the state equation
solution. In the course of this analysis we encounter the matrix exponen-
tial, so named because of many similarities with the scalar exponential
function. In terms of the state-equation solution, we revisit several famil-
iar topics from linear systems analysis, including decomposition of the
complete response into zero-input and zero-state response components,
characterizing the system impulse response that permits the zero-state
response to be cast as the convolution of the impulse response with the
input signal, and the utility of the Laplace transform in computing the
state-equation solution and defining the system transfer function.
The chapter continues with a more formal treatment of the state-space
realization issue and an introduction to the important topic of state coor-
dinate transformations. As we will see, a linear transformation of the state
vector yields a different state equation that also realizes the system’s input-
output behavior represented by either the associated impulse response or
transfer function. This has the interesting consequence that state-space
realizations are not unique; beginning with one state-space realization,
other realizations of the same system may be derived via a state coor-
dinate transformation. We will see that many topics in the remainder of
this book are facilitated by the flexibility afforded by this nonuniqueness.


For instance, in this chapter we introduce the so-called diagonal canon-
ical form that specifies a set of decoupled, scalar, first-order ordinary
differential equations that may be solved independently.
This chapter also illustrates the use of MATLAB in supporting the compu-
tations encountered earlier. As in all chapters, these demonstrations will
revisit the MATLAB Continuing Example along with Continuing Examples 1
and 2.

2.1 STATE EQUATION SOLUTION

From Chapter 1, our basic mathematical model for a linear time-invariant
system consists of the state differential equation and the algebraic output
equation:

ẋ(t) = Ax(t) + Bu(t) x(t0 ) = x0


y(t) = Cx(t) + Du(t) (2.1)

where we assume that the n × n system dynamics matrix A, the n × m


input matrix B, the p × n output matrix C, and the p × m direct transmis-
sion matrix D are known constant matrices. The first equation compactly
represents a set of n coupled first-order differential equations that must be
solved for the state vector x(t) given the initial state x(t0 ) = x0 and input
vector u(t). The second equation characterizes a static or instantaneous
dependence of the output on the state and input. As we shall see, the real
work lies in deriving a solution expression for the state vector. With that
in hand, a direct substitution into the second equation yields an expression
for the output.
Prior to deriving a closed-form solution of Equation (2.1) for the n–di-
mensional case as outlined above, we first review the solution of scalar
first-order differential equations.

Solution of Scalar First-Order Differential Equations


Consider the one–dimensional system represented by the scalar differen-
tial equation
ẋ(t) = ax(t) + bu(t) x(t0 ) = x0 (2.2)

in which a and b are scalar constants, and u(t) is a given scalar input
signal. A traditional approach for deriving a solution formula for the scalar
state x(t) is to multiply both sides of the differential equation by the
integrating factor e^{-a(t-t_0)} to yield

\[ \frac{d}{dt}\left(e^{-a(t-t_0)}x(t)\right) = e^{-a(t-t_0)}\dot{x}(t) - e^{-a(t-t_0)}ax(t) = e^{-a(t-t_0)}bu(t) \]

We next integrate from t0 to t and invoke the fundamental theorem of
calculus to obtain

\[ e^{-a(t-t_0)}x(t) - x(t_0) = \int_{t_0}^{t} \frac{d}{d\tau}\left(e^{-a(\tau-t_0)}x(\tau)\right)d\tau = \int_{t_0}^{t} e^{-a(\tau-t_0)}bu(\tau)\,d\tau \]

After multiplying through by e^{a(t-t_0)} and some manipulation, we get

\[ x(t) = e^{a(t-t_0)}x_0 + \int_{t_0}^{t} e^{a(t-\tau)}bu(\tau)\,d\tau \qquad (2.3) \]

which expresses the state response x(t) as a sum of terms, the first owing
to the given initial state x(t0 ) = x0 and the second owing to the specified
input signal u(t). Notice that the first component characterizes the state
response when the input signal is identically zero. We therefore refer to
the first term as the zero-input response component. Similarly, the sec-
ond component characterizes the state response for zero initial state. We
therefore refer to the second term as the zero-state response component.
The Laplace transform furnishes an alternate solution strategy. For this,
we assume without loss of generality that t0 = 0 and transform the differ-
ential equation, using linearity and time-differentiation properties of the
Laplace transform, into

sX(s) − x0 = aX(s) + bU (s)

in which X(s) and U (s) are the Laplace transforms of x(t) and u(t),
respectively. Straightforward algebra yields

\[ X(s) = \frac{1}{s-a}x_0 + \frac{b}{s-a}U(s) \]

From the convolution property of the Laplace transform, we obtain


\[ x(t) = e^{at}x_0 + e^{at} * bu(t) = e^{at}x_0 + \int_{0}^{t} e^{a(t-\tau)}bu(\tau)\,d\tau \]

which agrees with the solution (2.3) derived earlier for t0 = 0.


If our first-order system had an associated scalar output signal y(t)
defined by the algebraic relationship

y(t) = cx(t) + d u(t) (2.4)

then by simply substituting the state response we obtain

\[ y(t) = ce^{at}x_0 + \int_{0}^{t} ce^{a(t-\tau)}bu(\tau)\,d\tau + du(t) \]

which also admits a decomposition into zero-input and zero-state response


components. In the Laplace domain, we also have
\[ Y(s) = \frac{c}{s-a}x_0 + \frac{cb}{s-a}U(s) \]
We recall that the impulse response of a linear time-invariant system is
the system’s response to an impulsive input u(t) = δ(t) when the system
is initially at rest, which in this setting corresponds to zero initial state
x0 = 0. By interpreting the initial time as t0 = 0− , just prior to when
the impulse occurs, the zero-state response component of y(t) yields the
system's impulse response, that is,

\[ h(t) = \int_{0^-}^{t} ce^{a(t-\tau)}b\,\delta(\tau)\,d\tau + d\delta(t) = ce^{at}b + d\delta(t) \qquad (2.5) \]

where we have used the sifting property of the impulse to evaluate the inte-
gral. Now, for any input signal u(t), the zero-state response component
of y(t) can be expressed as
\[ \int_{0^-}^{t} ce^{a(t-\tau)}bu(\tau)\,d\tau + du(t) = \int_{0^-}^{t} \left[ce^{a(t-\tau)}b + d\delta(t-\tau)\right]u(\tau)\,d\tau = \int_{0^-}^{t} h(t-\tau)u(\tau)\,d\tau = h(t) * u(t) \]

which should look familiar to the reader. Alternatively, in the Laplace
domain, the system's transfer function H(s) is, by definition,

\[ H(s) = \left.\frac{Y(s)}{U(s)}\right|_{\text{zero initial state}} = \frac{cb}{s-a} + d \]

and so the impulse response h(t) and transfer function H (s) form a
Laplace transform pair, as we should expect.
Our approach to deriving state-equation solution formulas for the n-
dimensional case and discussing various systems-related implications in
both the time and Laplace domains is patterned after the preceding devel-
opment, but greater care is necessary to tackle the underlying matrix-
vector computations correctly. Before proceeding, the reader is encouraged
to ponder, however briefly, the matrix-vector extensions of the preceding
computations.

State Equation Solution


In this subsection we derive a closed-form solution to the n-dimensional
linear time invariant state equation (2.1) given a specified initial state
x(t0 ) = x0 and input vector u(t).

Homogeneous Case We begin with a related homogeneous matrix


differential equation

Ẋ(t) = AX(t) X(t0 ) = I (2.6)

where I is the n × n identity matrix. We assume an infinite power series


form for the solution


\[ X(t) = \sum_{k=0}^{\infty} X_k (t-t_0)^k \qquad (2.7) \]

Each term in the sum involves an n × n matrix Xk to be determined and


depends only on the elapsed time t − t0 , reflecting the time-invariance of
the state equation. The initial condition for Equation (2.6) yields X(t0 ) =
X0 = I . Substituting Equation (2.7) into Equation (2.6), formally differ-
entiating term by term with respect to time, and shifting the summation
index gives

\[ \sum_{k=0}^{\infty} (k+1)X_{k+1}(t-t_0)^k = A\sum_{k=0}^{\infty} X_k (t-t_0)^k \]

By equating like powers of t − t0, we obtain the recursive relationship

\[ X_{k+1} = \frac{1}{k+1}AX_k, \qquad k \ge 0 \]

which, when initialized with X0 = I, leads to

\[ X_k = \frac{1}{k!}A^k, \qquad k \ge 0 \]
Substituting this result into the power series (2.7) yields

\[ X(t) = \sum_{k=0}^{\infty} \frac{1}{k!}A^k (t-t_0)^k \]

We note here that the infinite power series (2.7) has the requisite con-
vergence properties so that the infinite power series resulting from term-
by-term differentiation converges to Ẋ(t), and Equation (2.6) is satisfied.
Recall that the scalar exponential function is defined by the following
infinite power series

\[ e^{at} = 1 + at + \tfrac{1}{2}a^2t^2 + \tfrac{1}{6}a^3t^3 + \cdots = \sum_{k=0}^{\infty} \frac{1}{k!}a^k t^k \]

Motivated by this, we define the so-called matrix exponential via

\[ e^{At} = I + At + \tfrac{1}{2}A^2t^2 + \tfrac{1}{6}A^3t^3 + \cdots = \sum_{k=0}^{\infty} \frac{1}{k!}A^k t^k \qquad (2.8) \]

from which the solution to the homogeneous matrix differential


equation (2.6) can be expressed compactly as

X(t) = eA(t−t0 )

It is important to point out that e^{At} is merely notation used to represent
the power series in Equation (2.8). Beyond the scalar case, the matrix
exponential never equals the matrix of scalar exponentials corresponding
to the individual elements in the matrix A. That is,

\[ e^{At} \neq \left[e^{a_{ij}t}\right] \]
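A quick numerical comparison makes this point concrete. The following MATLAB sketch (a minimal illustration; the 2 × 2 matrix is chosen here only for demonstration) contrasts expm, which computes the matrix exponential, with exp, which merely exponentiates each element:

A = [0 1; -2 -3];          % example system dynamics matrix
t = 0.5;                   % fixed evaluation time
Em = expm(A*t)             % matrix exponential e^{At}
Ee = exp(A*t)              % element-wise exponential [e^{a_ij t}]; differs from Em

The two results agree only in the scalar case, consistent with the statement above.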

Properties that are satisfied by the matrix exponential are collected in


the following proposition.

Proposition 2.1 For any real n × n matrix A, the matrix exponential eAt
has the following properties:

1. eAt is the unique matrix satisfying

\[ \frac{d}{dt}e^{At} = Ae^{At}, \qquad e^{At}\big|_{t=0} = I_n \]

2. For any t1 and t2, e^{A(t_1+t_2)} = e^{At_1}e^{At_2}. As a direct consequence, for
any t

\[ I = e^{A \cdot 0} = e^{A(t-t)} = e^{At}e^{-At} \]

Thus e^{At} is invertible (nonsingular) for all t with inverse

\[ \left(e^{At}\right)^{-1} = e^{-At} \]

3. A and e^{At} commute with respect to matrix multiplication, that is,
Ae^{At} = e^{At}A for all t.

4. [e^{At}]^T = e^{A^T t} for all t.

5. For any real n × n matrix B, e^{(A+B)t} = e^{At}e^{Bt} for all t if and only
if AB = BA, that is, A and B commute with respect to matrix multipli-
cation. □

The first property asserts the uniqueness of X(t) = e^{A(t−t0)} as a solution
to Equation (2.6). This property is useful in situations where we must ver-
ify whether a given time-dependent matrix X(t) is the matrix exponential
for an associated matrix A. To resolve this issue, it is not necessary to com-
pute e^{At} from scratch via some means. Rather, it suffices to check whether
Ẋ(t) = AX(t) and X(0) = I . If a candidate for the matrix exponential is
not provided, then it must be computed directly. The defining power series
is, except in special cases, not especially useful in this regard. However,
there are special cases in which closed-form solutions can be deduced, as
shown in the following two examples.

Example 2.1 Consider the 4 × 4 matrix with ones above the main diag-
onal and zeros elsewhere:

\[ A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} \]

As called for by the power series (2.8), we compute powers of A:

\[ A^2 = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \qquad
A^3 = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \qquad
A^4 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \]

from which it follows that Ak = 0, for k ≥ 4, and consequently, the power


series (2.8) contains only a finite number of nonzero terms:

\[ \begin{aligned}
e^{At} &= I + At + \tfrac{1}{2}A^2t^2 + \tfrac{1}{6}A^3t^3 \\
&= \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{bmatrix}
+ \begin{bmatrix} 0&t&0&0\\ 0&0&t&0\\ 0&0&0&t\\ 0&0&0&0 \end{bmatrix}
+ \begin{bmatrix} 0&0&\tfrac{1}{2}t^2&0\\ 0&0&0&\tfrac{1}{2}t^2\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix}
+ \begin{bmatrix} 0&0&0&\tfrac{1}{6}t^3\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{bmatrix} \\
&= \begin{bmatrix} 1 & t & \tfrac{1}{2}t^2 & \tfrac{1}{6}t^3 \\ 0 & 1 & t & \tfrac{1}{2}t^2 \\ 0 & 0 & 1 & t \\ 0 & 0 & 0 & 1 \end{bmatrix}
\end{aligned} \]

Inspired by this result, we claim the following outcome for the n–di-
mensional case:

\[ A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}
\;\Rightarrow\;
e^{At} = \begin{bmatrix} 1 & t & \tfrac{1}{2}t^2 & \cdots & \dfrac{t^{n-1}}{(n-1)!} \\ 0 & 1 & t & \cdots & \dfrac{t^{n-2}}{(n-2)!} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & t \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix} \]

the veracity of which can be verified by checking that the first property
of Proposition 2.1 is satisfied, an exercise left for the reader. 
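For readers who want a numerical confirmation, the sketch below (an illustration only; the evaluation time is arbitrary) builds the 4 × 4 matrix of Example 2.1 and checks that the truncated series matches MATLAB's expm, since A^k = 0 for k ≥ 4:

A = diag(ones(3,1),1);                            % 4 x 4 matrix with ones above the main diagonal
t = 2;                                            % arbitrary evaluation time
Eseries = eye(4) + A*t + (A*t)^2/2 + (A*t)^3/6;   % finite series; higher powers vanish
Eexpm   = expm(A*t);                              % matrix exponential computed by MATLAB
max(abs(Eseries(:) - Eexpm(:)))                   % zero to machine precision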

Example 2.2 Consider the diagonal n × n matrix:

\[ A = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda_2 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1} & 0 \\ 0 & 0 & \cdots & 0 & \lambda_n \end{bmatrix} \]

Here, the power series (2.8) will contain an infinite number of terms when
at least one λi ≠ 0, but since diagonal matrices satisfy

\[ A^k = \begin{bmatrix} \lambda_1^k & 0 & \cdots & 0 & 0 \\ 0 & \lambda_2^k & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \lambda_{n-1}^k & 0 \\ 0 & 0 & \cdots & 0 & \lambda_n^k \end{bmatrix} \]

each term in the series is a diagonal matrix, and

\[ e^{At} = \sum_{k=0}^{\infty} \frac{1}{k!}\begin{bmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{bmatrix} t^k
= \begin{bmatrix} \displaystyle\sum_{k=0}^{\infty}\tfrac{1}{k!}\lambda_1^k t^k & 0 & \cdots & 0 \\ 0 & \displaystyle\sum_{k=0}^{\infty}\tfrac{1}{k!}\lambda_2^k t^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \displaystyle\sum_{k=0}^{\infty}\tfrac{1}{k!}\lambda_n^k t^k \end{bmatrix} \]

On observing that each diagonal entry specifies a power series converging
to a scalar exponential function, we have

\[ e^{At} = \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_{n-1} t} & 0 \\ 0 & 0 & \cdots & 0 & e^{\lambda_n t} \end{bmatrix} \]

Another useful property of the matrix exponential is that the infinite
power series definition (2.8) can be reduced to a finite power series

\[ e^{At} = \sum_{k=0}^{n-1} \alpha_k(t)A^k \qquad (2.9) \]

involving scalar analytic functions α0(t), α1(t), . . . , αn−1(t). As shown
in Rugh (1996), the existence of the requisite functions can be verified by
equating

\[ \frac{d}{dt}e^{At} = \frac{d}{dt}\left[\sum_{k=0}^{n-1}\alpha_k(t)A^k\right] = \sum_{k=0}^{n-1}\dot{\alpha}_k(t)A^k \]

and

\[ A\left[\sum_{k=0}^{n-1}\alpha_k(t)A^k\right] = \sum_{k=0}^{n-1}\alpha_k(t)A^{k+1} \]

We invoke the Cayley-Hamilton theorem (see Appendix B, Section 8)
which, in terms of the characteristic polynomial |λI − A| = λ^n + a_{n−1}λ^{n−1} + · · · + a_1λ + a_0, allows us to write

\[ A^n = -a_0 I - a_1 A - \cdots - a_{n-1}A^{n-1} \]

Substituting this identity into the preceding summation yields

\[ \begin{aligned}
\sum_{k=0}^{n-1}\alpha_k(t)A^{k+1} &= \sum_{k=0}^{n-2}\alpha_k(t)A^{k+1} + \alpha_{n-1}(t)A^n \\
&= \sum_{k=0}^{n-2}\alpha_k(t)A^{k+1} - \sum_{k=0}^{n-1}a_k\alpha_{n-1}(t)A^k \\
&= -a_0\alpha_{n-1}(t)I + \sum_{k=1}^{n-1}\left[\alpha_{k-1}(t) - a_k\alpha_{n-1}(t)\right]A^k
\end{aligned} \]

By equating the coefficients of each power of A in the finite series
representation for (d/dt)e^{At} and the preceding expression for Ae^{At}, we
obtain

\[ \begin{aligned}
\dot{\alpha}_0(t) &= -a_0\alpha_{n-1}(t) \\
\dot{\alpha}_1(t) &= \alpha_0(t) - a_1\alpha_{n-1}(t) \\
\dot{\alpha}_2(t) &= \alpha_1(t) - a_2\alpha_{n-1}(t) \\
&\;\;\vdots \\
\dot{\alpha}_{n-1}(t) &= \alpha_{n-2}(t) - a_{n-1}\alpha_{n-1}(t)
\end{aligned} \]

This coupled set of first-order ordinary differential equations can be writ-
ten in matrix-vector form to yield the homogeneous linear state equation

\[ \begin{bmatrix} \dot{\alpha}_0(t) \\ \dot{\alpha}_1(t) \\ \dot{\alpha}_2(t) \\ \vdots \\ \dot{\alpha}_{n-1}(t) \end{bmatrix}
= \begin{bmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{bmatrix}
\begin{bmatrix} \alpha_0(t) \\ \alpha_1(t) \\ \alpha_2(t) \\ \vdots \\ \alpha_{n-1}(t) \end{bmatrix} \]

Using the matrix exponential property e^{A·0} = I, we are led to the initial
values α0(0) = 1 and α1(0) = α2(0) = · · · = αn−1(0) = 0, which form the
initial state vector

\[ \begin{bmatrix} \alpha_0(0) \\ \alpha_1(0) \\ \alpha_2(0) \\ \vdots \\ \alpha_{n-1}(0) \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \]

We have thus characterized coefficient functions that establish the finite


power series representation of the matrix exponential in Equation (2.9) as
the solution to this homogeneous linear state equation with the specified
initial state.
Several technical arguments in the coming chapters are greatly facil-
itated merely by the existence of the finite power series representation
in Equation (2.9) without requiring explicit knowledge of the functions
α0 (t), α1 (t), . . . , αn−1 (t). In order to use Equation (2.9) for computational
purposes, the preceding discussion is of limited value because it indirectly
characterizes these coefficient functions as the solution to a homoge-
neous linear state equation that, in turn, involves another matrix expo-
nential. Fortunately, a more explicit characterization is available which
we now discuss for the case in which the matrix A has distinct eigen-
values λ1 , λ2 , . . . , λn . A scalar version of the preceding argument allows
us to conclude that the coefficient functions α0 (t), α1 (t), . . . , αn−1 (t) also
provide a finite series expansion for the scalar exponential functions eλi t ,
that is,

\[ e^{\lambda_i t} = \sum_{k=0}^{n-1} \alpha_k(t)\lambda_i^k, \qquad i = 1, \ldots, n \]

which yields the following system of equations

\[ \begin{bmatrix} 1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{n-1} \\ 1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \lambda_n & \lambda_n^2 & \cdots & \lambda_n^{n-1} \end{bmatrix}
\begin{bmatrix} \alpha_0(t) \\ \alpha_1(t) \\ \vdots \\ \alpha_{n-1}(t) \end{bmatrix}
= \begin{bmatrix} e^{\lambda_1 t} \\ e^{\lambda_2 t} \\ \vdots \\ e^{\lambda_n t} \end{bmatrix} \]

The n × n coefficient matrix is called a Vandermonde matrix that is non-


singular when and only when the eigenvalues λ1 , λ2 , . . . , λn are distinct.
In this case, this system of equations can be solved to uniquely determine
the coefficient functions α0 (t), α1 (t), . . . , αn−1 (t).
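At any fixed time this Vandermonde solve is a one-line computation. The sketch below is a minimal numerical illustration (it uses the matrix of Example 2.3, which follows, purely for concreteness) that recovers α0(t), α1(t), α2(t) and reassembles e^{At} from the finite series (2.9):

A = [0 -2 1; 0 -1 -1; 0 0 -2];       % distinct eigenvalues 0, -1, -2 (Example 2.3)
lam = eig(A);                        % eigenvalues of A
t = 1.5;                             % fixed evaluation time
V = [ones(3,1) lam lam.^2];          % Vandermonde coefficient matrix
alpha = V \ exp(lam*t);              % coefficient functions evaluated at time t
E = alpha(1)*eye(3) + alpha(2)*A + alpha(3)*A^2;  % finite series (2.9)
norm(E - expm(A*t))                  % agrees with expm to machine precision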

Example 2.3 Consider the upper-triangular 3 × 3 matrix

\[ A = \begin{bmatrix} 0 & -2 & 1 \\ 0 & -1 & -1 \\ 0 & 0 & -2 \end{bmatrix} \]

with distinct eigenvalues extracted from the main diagonal: λ1 = 0, λ2 =
−1, and λ3 = −2. The associated Vandermonde matrix is

\[ \begin{bmatrix} 1 & \lambda_1 & \lambda_1^2 \\ 1 & \lambda_2 & \lambda_2^2 \\ 1 & \lambda_3 & \lambda_3^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & -1 & 1 \\ 1 & -2 & 4 \end{bmatrix} \]

which has a determinant of −2 and is therefore nonsingular. This yields the


coefficient functions

\[ \begin{bmatrix} \alpha_0(t) \\ \alpha_1(t) \\ \alpha_2(t) \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 1 & -1 & 1 \\ 1 & -2 & 4 \end{bmatrix}^{-1}
\begin{bmatrix} e^{\lambda_1 t} \\ e^{\lambda_2 t} \\ e^{\lambda_3 t} \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ \tfrac{3}{2} & -2 & \tfrac{1}{2} \\ \tfrac{1}{2} & -1 & \tfrac{1}{2} \end{bmatrix}
\begin{bmatrix} 1 \\ e^{-t} \\ e^{-2t} \end{bmatrix}
= \begin{bmatrix} 1 \\ \tfrac{3}{2} - 2e^{-t} + \tfrac{1}{2}e^{-2t} \\ \tfrac{1}{2} - e^{-t} + \tfrac{1}{2}e^{-2t} \end{bmatrix} \]

The matrix exponential is then

\[ \begin{aligned}
e^{At} &= \alpha_0(t)I + \alpha_1(t)A + \alpha_2(t)A^2 \\
&= (1)\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
+ \left(\tfrac{3}{2} - 2e^{-t} + \tfrac{1}{2}e^{-2t}\right)\begin{bmatrix} 0 & -2 & 1 \\ 0 & -1 & -1 \\ 0 & 0 & -2 \end{bmatrix}
+ \left(\tfrac{1}{2} - e^{-t} + \tfrac{1}{2}e^{-2t}\right)\begin{bmatrix} 0 & 2 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 4 \end{bmatrix} \\
&= \begin{bmatrix} 1 & -2 + 2e^{-t} & \tfrac{3}{2} - 2e^{-t} + \tfrac{1}{2}e^{-2t} \\ 0 & e^{-t} & -e^{-t} + e^{-2t} \\ 0 & 0 & e^{-2t} \end{bmatrix} \qquad\square
\end{aligned} \]

The interested reader is referred to Reid (1983) for modifications to this
procedure to facilitate the treatment of complex conjugate eigenvalues and
to cope with the case of repeated eigenvalues. Henceforth, we will rely
mainly on the Laplace transform for computational purposes, and this will
be pursued later in this chapter.
The linear time-invariant state equation (2.1) in the unforced case [u(t)
≡ 0] reduces to the homogeneous state equation

ẋ(t) = Ax(t) x(t0 ) = x0 (2.10)

For the specified initial state, the unique solution is given by

x(t) = eA(t−t0 ) x0 (2.11)

which is easily verified using the first two properties of the matrix expo-
nential asserted in Proposition 2.1.
A useful interpretation of this expression is that the matrix exponential
eA(t−t0 ) characterizes the transition from the initial state x0 to the state x(t)
at any time t ≥ t0 . As such, the matrix exponential eA(t−t0 ) is often referred
to as the state-transition matrix for the state equation and is denoted by
Φ(t, t0). The component φij(t, t0) of the state-transition matrix is the time
response of the ith state variable resulting from an initial condition of 1 on
the j th state variable with zero initial conditions on all other state variables
(think of the definitions of linear superposition and matrix multiplication).

General Case Returning to the general forced case, we derive a solu-


tion formula for the state equation (2.1). For this, we define

z(t) = e−A(t−t0 ) x(t)

from which z(t0 ) = e−A(t0 −t0 ) x(t0 ) = x0 and


\[ \begin{aligned}
\dot{z}(t) &= \frac{d}{dt}\left[e^{-A(t-t_0)}\right]x(t) + e^{-A(t-t_0)}\dot{x}(t) \\
&= (-A)e^{-A(t-t_0)}x(t) + e^{-A(t-t_0)}\left[Ax(t) + Bu(t)\right] \\
&= e^{-A(t-t_0)}Bu(t)
\end{aligned} \]

Since the right-hand side above does not involve z(t), we may solve for
z(t) by applying the fundamental theorem of calculus

\[ z(t) = z(t_0) + \int_{t_0}^{t} \dot{z}(\tau)\,d\tau = z(t_0) + \int_{t_0}^{t} e^{-A(\tau-t_0)}Bu(\tau)\,d\tau \]

From our original definition, we can recover x(t) = e^{A(t−t0)}z(t) and
x(t0) = z(t0) so that

\[ \begin{aligned}
x(t) &= e^{A(t-t_0)}\left[z(t_0) + \int_{t_0}^{t} e^{-A(\tau-t_0)}Bu(\tau)\,d\tau\right] \\
&= e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-t_0)}e^{-A(\tau-t_0)}Bu(\tau)\,d\tau \qquad (2.12) \\
&= e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} e^{A(t-\tau)}Bu(\tau)\,d\tau
\end{aligned} \]

This constitutes a closed-form expression (the so-called variation-of-


constants formula) for the complete solution to the linear time-invariant
state equation that we observe is decomposed into two terms. The first
term is due to the initial state x(t0 ) = x0 and determines the solution in
the case where the input is identically zero. As such, we refer to it as the
zero-input response xzi (t) and often write

xzi (t) = eA(t−t0 ) x(t0 ) (2.13)

The second term is due to the input signal and determines the solution in
the case where the initial state is the zero vector. Accordingly, we refer
to it as the zero-state response xzs(t) and denote

\[ x_{zs}(t) = \int_{t_0}^{t} e^{A(t-\tau)}Bu(\tau)\,d\tau \qquad (2.14) \]

so that by the principle of linear superposition, the complete response


is given by x(t) = xzi (t) + xzs (t). Having characterized the complete
state response, a straightforward substitution yields the complete output
response

\[ y(t) = Ce^{A(t-t_0)}x(t_0) + \int_{t_0}^{t} Ce^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t) \qquad (2.15) \]

which likewise can be decomposed into zero-input and zero-state response
components, namely,

\[ y_{zi}(t) = Ce^{A(t-t_0)}x(t_0) \qquad y_{zs}(t) = \int_{t_0}^{t} Ce^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t) \qquad (2.16) \]

2.2 IMPULSE RESPONSE

We assume without loss of generality that t0 = 0 and set the initial state
x(0) = 0 to focus on the zero-state response, that is,

\[ y_{zs}(t) = \int_{0^-}^{t} Ce^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t) = \int_{0^-}^{t} \left[Ce^{A(t-\tau)}B + D\delta(t-\tau)\right]u(\tau)\,d\tau \]

Also, we partition coefficient matrices B and D column-wise

\[ B = \begin{bmatrix} b_1 & b_2 & \cdots & b_m \end{bmatrix} \qquad D = \begin{bmatrix} d_1 & d_2 & \cdots & d_m \end{bmatrix} \]

With {e1, . . . , em} denoting the standard basis for R^m, an impulsive input
on the ith input u(t) = ei δ(t), i = 1, . . . , m, yields the response

\[ \int_{0^-}^{t} Ce^{A(t-\tau)}Be_i\,\delta(\tau)\,d\tau + De_i\,\delta(t) = Ce^{At}b_i + d_i\delta(t), \qquad t \ge 0 \]

This forms the ith column of the p × m impulse response matrix

h(t) = CeAt B + Dδ(t) t ≥0 (2.17)

in terms of which the zero-state response component has the familiar
characterization

\[ y_{zs}(t) = \int_{0}^{t} h(t-\tau)u(\tau)\,d\tau \qquad (2.18) \]

2.3 LAPLACE DOMAIN REPRESENTATION

Taking Laplace transforms of the state equation (2.1) with t0 = 0, using


linearity and time-differentiation properties, yields

sX(s) − x0 = AX(s) + BU (s)

Grouping X(s) terms on the left gives

(sI − A)X(s) = x0 + BU (s)



Now, from basic linear algebra, the determinant |sI − A| is a degree-n
monic polynomial (i.e., the coefficient of s^n is 1), and so it is not the
zero polynomial. Also, the adjoint of (sI − A) is an n × n matrix of
polynomials having degree at most n − 1. Consequently,

\[ (sI - A)^{-1} = \frac{\operatorname{adj}(sI - A)}{|sI - A|} \]

is an n × n matrix of rational functions in the complex variable s. More-


over, each element of (sI − A)−1 has numerator degree that is guaranteed
to be strictly less than its denominator degree and therefore is a strictly
proper rational function.
The preceding equation now can be solved for X(s) to obtain

X(s) = (sI − A)−1 x0 + (sI − A)−1 BU (s) (2.19)

As in the time domain, we can decompose X(s) into zero-input and


zero-state response components X(s) = Xzi (s) + Xzs (s) in which

Xzi (s) = (sI − A)−1 x0 Xzs (s) = (sI − A)−1 BU (s) (2.20)

Denoting a Laplace transform pair by f (t) ↔ F (s), we see that since

xzi (t) = eAt x0 ↔ Xzi (s) = (sI − A)−1 x0

holds for any initial state, we can conclude that

eAt ↔ (sI − A)−1 (2.21)

This relationship suggests an approach for computing the matrix exponen-


tial by first forming the matrix inverse (sI − A)−1 , whose elements are
guaranteed to be rational functions in the complex variable s, and then
applying partial fraction expansion element by element to determine the
inverse Laplace transform that yields eAt .
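This route is convenient to carry out symbolically. The sketch below is a small illustration that assumes the Symbolic Math Toolbox is available (for syms and ilaplace); the matrix is the one used in Example 2.5 later in this chapter, chosen only for concreteness:

syms s t
A = [0 1; -2 -3];               % example system dynamics matrix
Phi = inv(s*eye(2) - A);        % resolvent (sI - A)^{-1}, a matrix of rational functions in s
eAt = ilaplace(Phi, s, t)       % inverse Laplace transform, applied element by element, yields e^{At}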
Taking Laplace transforms through the output equation and substituting
for X(s) yields

\[ Y(s) = CX(s) + DU(s) = C(sI - A)^{-1}x_0 + \left[C(sI - A)^{-1}B + D\right]U(s) \qquad (2.22) \]

from which zero-input and zero-state response components are identified
as follows:

\[ Y_{zi}(s) = C(sI - A)^{-1}x_0 \qquad Y_{zs}(s) = \left[C(sI - A)^{-1}B + D\right]U(s) \qquad (2.23) \]

Focusing on the zero-state response component, the convolution property
of the Laplace transform indicates that

\[ y_{zs}(t) = \int_{0}^{t} h(t-\tau)u(\tau)\,d\tau \;\leftrightarrow\; Y_{zs}(s) = \left[C(sI - A)^{-1}B + D\right]U(s) \qquad (2.24) \]
yielding the familiar relationship between the impulse response and trans-
fer function

h(t) = CeAt B + Dδ(t) ↔ H (s) = C(sI − A)−1 B + D (2.25)

where, in the multiple-input, multiple-output case, the transfer function is


a p × m matrix of rational functions in s.

Example 2.4 In this example we solve the linear second-order ordinary


differential equation ÿ(t) + 7ẏ(t) + 12y(t) = u(t), given that the input
u(t) is a step input of magnitude 3 and the initial conditions are y(0) =
0.10 and ẏ(0) = 0.05. The system characteristic polynomial is s 2 + 7s +
12 = (s + 3)(s + 4), and the system eigenvalues are s1,2 = −3, −4. These
eigenvalues are distinct, negative real roots, so the system is overdamped.
Using standard solution techniques, we find the solution is

y(t) = 0.25 − 0.55e−3t + 0.40e−4t t ≥0

We now derive this same solution using the techniques of this chapter.
First, we must derive a valid state-space description. We define the state
vector as

\[ x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} y(t) \\ \dot{y}(t) \end{bmatrix} \]

Then the state differential equation is

\[ \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -12 & -7 \end{bmatrix}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u(t) \]

We are given that u(t) is a step input of magnitude 3, and the initial
state is x(0) = [y(0), ẏ(0)]T = [0.10, 0.05]T . We can solve this problem
in the Laplace domain, by first writing

\[ X(s) = (sI - A)^{-1}X(0) + (sI - A)^{-1}BU(s) \]

\[ (sI - A) = \begin{bmatrix} s & -1 \\ 12 & s+7 \end{bmatrix} \qquad (sI - A)^{-1} = \frac{1}{s^2 + 7s + 12}\begin{bmatrix} s+7 & 1 \\ -12 & s \end{bmatrix} \]

where the denominator s^2 + 7s + 12 = (s + 3)(s + 4) is |sI − A|, the
characteristic polynomial. Substituting these results into the preceding
expression we obtain:

\[ X(s) = \frac{1}{s^2 + 7s + 12}\begin{bmatrix} s+7 & 1 \\ -12 & s \end{bmatrix}\begin{bmatrix} 0.10 \\ 0.05 \end{bmatrix}
+ \frac{1}{s^2 + 7s + 12}\begin{bmatrix} s+7 & 1 \\ -12 & s \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}\frac{3}{s} \]

where the Laplace transform of the unit step function is 1/s. Simplifying,
we find the state solution in the Laplace domain:

\[ X(s) = \begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix}
= \frac{1}{(s+3)(s+4)}\begin{bmatrix} 0.10s + 0.75 + \dfrac{3}{s} \\[1ex] 0.05s + 1.80 \end{bmatrix}
= \begin{bmatrix} \dfrac{0.10s^2 + 0.75s + 3}{s(s+3)(s+4)} \\[1ex] \dfrac{0.05s + 1.80}{(s+3)(s+4)} \end{bmatrix} \]

A partial fraction expansion of the first state variable yields the residues

C1 = 0.25
C2 = −0.55
C3 = 0.40

The output y(t) equals the first state variable x1(t), found by the inverse
Laplace transform:

\[ y(t) = x_1(t) = \mathcal{L}^{-1}\{X_1(s)\} = \mathcal{L}^{-1}\left\{\frac{0.25}{s} - \frac{0.55}{s+3} + \frac{0.40}{s+4}\right\} = 0.25 - 0.55e^{-3t} + 0.40e^{-4t}, \qquad t \ge 0 \]

which agrees with the stated solution. Now we find the solution for the sec-
ond state variable x2 (t) in a similar manner. A partial fraction expansion
of the second state variable yields the residues:

C1 = 1.65
C2 = −1.60

Then the second state variable x2(t) is found from the inverse Laplace
transform:

\[ x_2(t) = \mathcal{L}^{-1}\{X_2(s)\} = \mathcal{L}^{-1}\left\{\frac{1.65}{s+3} - \frac{1.60}{s+4}\right\} = 1.65e^{-3t} - 1.60e^{-4t}, \qquad t \ge 0 \]

We can check this result by verifying that x2 (t) = ẋ1 (t):

ẋ1 (t) = −(−3)0.55e−3t + (−4)0.40e−4t = 1.65e−3t − 1.60e−4t

which agrees with the x2 (t) solution. Figure 2.1 shows the state response
versus time for this example.
We can see that the initial state x(0) = [0.10, 0.05]T is satisfied and
that the steady state values are 0.25 for x1 (t) and 0 for x2 (t). 
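The closed-form solution above is easily cross-checked numerically. The sketch below is a minimal verification run (assuming the Control System Toolbox for ss and lsim); the variable names are illustrative only:

A = [0 1; -12 -7];  B = [0; 1];  C = [1 0];  D = 0;
sys = ss(A, B, C, D);                              % state-space model for Example 2.4
t = (0:0.01:3)';                                   % time vector
u = 3*ones(size(t));                               % step input of magnitude 3
x0 = [0.10; 0.05];                                 % initial conditions [y(0); ydot(0)]
y = lsim(sys, u, t, x0);                           % numerical output response
yexact = 0.25 - 0.55*exp(-3*t) + 0.40*exp(-4*t);   % closed-form solution derived above
max(abs(y - yexact))                               % small error confirms agreement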

Example 2.5 Consider the two-dimensional state equation

\[ \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u(t) \qquad x(0) = x_0 = \begin{bmatrix} -1 \\ 1 \end{bmatrix} \]

\[ y(t) = \begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} \]
FIGURE 2.1 Second-order state responses for Example 2.4.

From

\[ sI - A = \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix} = \begin{bmatrix} s & -1 \\ 2 & s+3 \end{bmatrix} \]

we find, by computing the matrix inverse and performing partial fraction
expansion on each element,

\[ \begin{aligned}
(sI - A)^{-1} &= \frac{\operatorname{adj}(sI - A)}{|sI - A|} = \frac{1}{s^2 + 3s + 2}\begin{bmatrix} s+3 & 1 \\ -2 & s \end{bmatrix} \\[1ex]
&= \begin{bmatrix} \dfrac{s+3}{(s+1)(s+2)} & \dfrac{1}{(s+1)(s+2)} \\[1ex] \dfrac{-2}{(s+1)(s+2)} & \dfrac{s}{(s+1)(s+2)} \end{bmatrix}
= \begin{bmatrix} \dfrac{2}{s+1} + \dfrac{-1}{s+2} & \dfrac{1}{s+1} + \dfrac{-1}{s+2} \\[1ex] \dfrac{-2}{s+1} + \dfrac{2}{s+2} & \dfrac{-1}{s+1} + \dfrac{2}{s+2} \end{bmatrix}
\end{aligned} \]
It follows directly that

\[ e^{At} = \mathcal{L}^{-1}\left[(sI - A)^{-1}\right] = \begin{bmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{bmatrix}, \qquad t \ge 0 \]

For the specified initial state, the zero-input response components of the
state and output are

\[ x_{zi}(t) = e^{At}x_0 = \begin{bmatrix} -e^{-t} \\ e^{-t} \end{bmatrix} \qquad y_{zi}(t) = Ce^{At}x_0 = Cx_{zi}(t) = 0, \qquad t \ge 0 \]

For a unit-step input signal, the Laplace domain representation of the
zero-state components of the state and output response are

\[ \begin{aligned}
X_{zs}(s) &= (sI - A)^{-1}BU(s)
= \frac{1}{s^2 + 3s + 2}\begin{bmatrix} s+3 & 1 \\ -2 & s \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}\frac{1}{s} \\[1ex]
&= \begin{bmatrix} \dfrac{1}{(s+1)(s+2)} \\[1ex] \dfrac{s}{(s+1)(s+2)} \end{bmatrix}\frac{1}{s}
= \begin{bmatrix} \dfrac{1}{s(s+1)(s+2)} \\[1ex] \dfrac{1}{(s+1)(s+2)} \end{bmatrix}
= \begin{bmatrix} \dfrac{1/2}{s} - \dfrac{1}{s+1} + \dfrac{1/2}{s+2} \\[1ex] \dfrac{1}{s+1} - \dfrac{1}{s+2} \end{bmatrix}
\end{aligned} \]

\[ Y_{zs}(s) = CX_{zs}(s) + DU(s)
= \begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} \dfrac{1/2}{s} - \dfrac{1}{s+1} + \dfrac{1/2}{s+2} \\[1ex] \dfrac{1}{s+1} - \dfrac{1}{s+2} \end{bmatrix} + [0]\frac{1}{s}
= \frac{1/2}{s} - \frac{1/2}{s+2} \]
from which

\[ x_{zs}(t) = \begin{bmatrix} \tfrac{1}{2} - e^{-t} + \tfrac{1}{2}e^{-2t} \\ e^{-t} - e^{-2t} \end{bmatrix} \qquad y_{zs}(t) = \tfrac{1}{2}\left(1 - e^{-2t}\right), \qquad t \ge 0 \]

and complete state and output responses then are

\[ x(t) = \begin{bmatrix} \tfrac{1}{2} - 2e^{-t} + \tfrac{1}{2}e^{-2t} \\ 2e^{-t} - e^{-2t} \end{bmatrix} \qquad y(t) = \tfrac{1}{2}\left(1 - e^{-2t}\right), \qquad t \ge 0 \]

Finally, the transfer function is given as

\[ H(s) = C(sI - A)^{-1}B + D
= \begin{bmatrix} 1 & 1 \end{bmatrix}\frac{1}{s^2 + 3s + 2}\begin{bmatrix} s+3 & 1 \\ -2 & s \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} + 0
= \frac{s+1}{s^2 + 3s + 2} = \frac{s+1}{(s+1)(s+2)} = \frac{1}{s+2} \]

with associated impulse response

\[ h(t) = e^{-2t}, \qquad t \ge 0 \qquad\square \]
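The pole-zero cancellation observed here is easy to reproduce numerically. The following sketch is an illustration only (it assumes the Control System Toolbox for ss, tf, and minreal): tf reports the second-order transfer function containing the common (s + 1) factor, and minreal removes it:

A = [0 1; -2 -3];  B = [0; 1];  C = [1 1];  D = 0;
sys = ss(A, B, C, D);
H  = tf(sys)            % (s + 1)/(s^2 + 3s + 2), before cancellation
Hm = minreal(H)         % 1/(s + 2), after the pole-zero cancellation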

2.4 STATE-SPACE REALIZATIONS REVISITED

Recall that at the conclusion of Section 1.3 we called a state equation a


state-space realization of a linear time-invariant system’s input-output
behavior if, loosely speaking, it corresponds to the Laplace domain
relationship Y (s) = H (s)U (s) involving the system’s transfer function.
Since this pertains to the zero-state response component, we see that
Equation (2.25) implies that a state-space realization is required to satisfy

C(sI − A)−1 B + D = H (s)

or, equivalently, in terms of the impulse response in the time domain,

CeAt B + Dδ(t) = h(t)

Example 2.6 Here we extend to arbitrary dimensions the state-space


realization analysis conducted for the three–dimensional system of
Example 1.6. Namely, we show that the single-input, single-output
n–dimensional strictly proper transfer function

\[ H(s) = \frac{b(s)}{a(s)} = \frac{b_{n-1}s^{n-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0} \qquad (2.26) \]

has a state-space realization specified by coefficient matrices

\[ A = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0 \\
0 & 0 & 0 & \cdots & 0 & 1 \\
-a_0 & -a_1 & -a_2 & \cdots & -a_{n-2} & -a_{n-1}
\end{bmatrix}
\qquad
B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ 1 \end{bmatrix} \qquad (2.27) \]

\[ C = \begin{bmatrix} b_0 & b_1 & b_2 & \cdots & b_{n-2} & b_{n-1} \end{bmatrix} \qquad D = 0 \]

Moreover, by judiciously ordering the calculations, we avoid the
unpleasant prospect of symbolically rendering (sI − A)^{-1}, which is
seemingly at the heart of the identity we must verify. First, observe that

\[ \begin{aligned}
(sI - A)\begin{bmatrix} 1 \\ s \\ s^2 \\ \vdots \\ s^{n-2} \\ s^{n-1} \end{bmatrix}
&= \begin{bmatrix}
s & -1 & 0 & \cdots & 0 & 0 \\
0 & s & -1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & -1 & 0 \\
0 & 0 & 0 & \cdots & s & -1 \\
a_0 & a_1 & a_2 & \cdots & a_{n-2} & s + a_{n-1}
\end{bmatrix}
\begin{bmatrix} 1 \\ s \\ s^2 \\ \vdots \\ s^{n-2} \\ s^{n-1} \end{bmatrix} \\[1ex]
&= \begin{bmatrix}
s \cdot 1 + (-1) \cdot s \\
s \cdot s + (-1) \cdot s^2 \\
\vdots \\
s \cdot s^{n-3} + (-1) \cdot s^{n-2} \\
s \cdot s^{n-2} + (-1) \cdot s^{n-1} \\
a_0 \cdot 1 + a_1 \cdot s + a_2 \cdot s^2 + \cdots + a_{n-2} \cdot s^{n-2} + (s + a_{n-1})s^{n-1}
\end{bmatrix} \\[1ex]
&= \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0 \end{bmatrix}
= B\,a(s)
\end{aligned} \]
72 STATE-SPACE FUNDAMENTALS

Rearranging to solve for (sI − A)^{-1}B and substituting into
C(sI − A)^{-1}B + D yields

\[ \begin{aligned}
C(sI - A)^{-1}B + D
&= \begin{bmatrix} b_0 & b_1 & b_2 & \cdots & b_{n-2} & b_{n-1} \end{bmatrix}
\frac{1}{a(s)}\begin{bmatrix} 1 \\ s \\ s^2 \\ \vdots \\ s^{n-2} \\ s^{n-1} \end{bmatrix} + 0 \\[1ex]
&= \frac{b_0 \cdot 1 + b_1 \cdot s + b_2 \cdot s^2 + \cdots + b_{n-2} \cdot s^{n-2} + b_{n-1}s^{n-1}}{a(s)}
= \frac{b(s)}{a(s)} = H(s)
\end{aligned} \]
as required. 
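A numerical spot check of this construction is sketched below (an illustration for one particular third-order transfer function chosen here; it assumes the Control System Toolbox for tf and ss). The realization (2.27) is assembled directly from the coefficients of a(s) and b(s), and its transfer function is compared with the original:

a = [1 7 14 8];                 % a(s) = s^3 + 7s^2 + 14s + 8 (example denominator)
b = [2 5 1];                    % b(s) = 2s^2 + 5s + 1 (example numerator)
n = length(a) - 1;
A = [zeros(n-1,1) eye(n-1); -fliplr(a(2:end))];   % companion-form A of Equation (2.27)
B = [zeros(n-1,1); 1];
C = fliplr(b);                  % C = [b0 b1 ... b_{n-1}]
D = 0;
H1 = tf(b, a)                   % transfer function b(s)/a(s)
H2 = tf(ss(A, B, C, D))         % same transfer function recovered from the realization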

2.5 COORDINATE TRANSFORMATIONS


Here we introduce the concept of a state coordinate transformation and
study the impact that such a transformation has on the state equation itself,
as well as various derived quantities. Especially noteworthy is what does not
change with the application of a state coordinate transformation. The reader
is referred to Appendix B (Sections B.4 and B.6) for an overview of linear
algebra topics (change of basis and linear transformations) that underpin our
development here. Once having discussed state coordinate transformations
in general terms, we consider a particular application: transforming a given
state equation into the so-called diagonal canonical form.

General Coordinate Transformations


For the n-dimensional linear time-invariant state equation (2.1), any non-
singular n × n matrix T defines a coordinate transformation via
x(t) = T z(t) z(t) = T −1 x(t) (2.28)
Associated with the transformed state vector z(t) is the transformed state
equation

\[ \begin{aligned}
\dot{z}(t) &= T^{-1}\dot{x}(t) = T^{-1}\left[Ax(t) + Bu(t)\right] = T^{-1}AT z(t) + T^{-1}Bu(t) \\
y(t) &= CT z(t) + Du(t)
\end{aligned} \]

That is, the coefficient matrices are transformed according to

\[ \hat{A} = T^{-1}AT \qquad \hat{B} = T^{-1}B \qquad \hat{C} = CT \qquad \hat{D} = D \qquad (2.29) \]

and we henceforth write

\[ \begin{aligned}
\dot{z}(t) &= \hat{A}z(t) + \hat{B}u(t), \qquad z(t_0) = z_0 \\
y(t) &= \hat{C}z(t) + \hat{D}u(t)
\end{aligned} \qquad (2.30) \]

where, in addition, the initial state is transformed via z(t0) = T^{-1}x(t0).
For the system dynamics matrix, the coordinate transformation yields
Â = T^{-1}AT. This is called a similarity transformation, so the new system
dynamics matrix Â has the same characteristic polynomial and eigenval-
ues as A. However, the eigenvectors are different but are linked by the
coordinate transformation.
The impact that state coordinate transformations have on various quan-
tities associated with linear time-invariant state equations is summarized
in the following proposition.

Proposition 2.2 For the linear time-invariant state equation (2.1) and
coordinate transformation (2.28):

1. |sI − Â| = |sI − A|
2. (sI − Â)^{-1} = T^{-1}(sI − A)^{-1}T
3. e^{Ât} = T^{-1}e^{At}T
4. Ĉ(sI − Â)^{-1}B̂ + D̂ = C(sI − A)^{-1}B + D
5. Ĉe^{Ât}B̂ + D̂δ(t) = Ce^{At}B + Dδ(t)  □

Item 1 states that the system characteristic polynomial and thus the sys-
tem eigenvalues are invariant under the coordinate transformation (2.28).
This proposition is proved using determinant properties as follows:

\[ \begin{aligned}
|sI - \hat{A}| &= |sI - T^{-1}AT| = |sT^{-1}T - T^{-1}AT| = |T^{-1}(sI - A)T| \\
&= |T^{-1}||sI - A||T| = |T^{-1}||T||sI - A| = |sI - A|
\end{aligned} \]

where the last step follows from |T^{-1}||T| = |T^{-1}T| = |I| = 1. Therefore,
since a nonsingular matrix and its inverse have reciprocal determinants,
Â and A have the same characteristic polynomial and eigenvalues.
Items 4 and 5 indicate that the transfer function and impulse response
are unaffected by (or invariant with respect to) any state coordinate trans-
formation. Consequently, given one state-space realization of a transfer
function or impulse response, there are infinitely many others (of the
same dimension) because there are infinitely many ways to specify a state
coordinate transformation.
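The invariance asserted in items 4 and 5 is easy to observe numerically. The sketch below is a minimal illustration (using the system of Example 2.5 and one particular nonsingular T; ss2ss is the Control System Toolbox function for state coordinate transformations and uses the convention z = Tx, so inv(T) is passed to match x = Tz in (2.28)):

A = [0 1; -2 -3];  B = [0; 1];  C = [1 1];  D = 0;
sys = ss(A, B, C, D);
T = [1 -1; -1 2];               % any nonsingular coordinate transformation matrix
sysT = ss2ss(sys, inv(T));      % transformed realization
tf(sys)                         % transfer function of the original realization
tf(sysT)                        % identical transfer function after the transformation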

Diagonal Canonical Form


There are some special realizations that can be obtained by applying the
state coordinate transformation (2.28) to a given linear state equation. In
this subsection we present the diagonal canonical form for the single-input,
single-output case.
Diagonal canonical form is also called modal form because it yields
a decoupled set of n first-order ordinary differential equations. This is
clearly convenient because in this form, n scalar first-order differential
equation solutions may be formulated independently, instead of using cou-
pled system solution methods. Any state-space realization with a diagonal-
izable A matrix can be transformed to diagonal canonical form (DCF) by

x(t) = TDCF z(t)

where the diagonal canonical form coordinate transformation matrix TDCF


= [ v1 v2 · · · vn ] consists of eigenvectors vi of A arranged column-
wise (see Appendix B, Section B.8 for an overview of eigenvalues and
eigenvectors). Because A is assumed to be diagonalizable, the n eigenvec-
tors are linearly independent, yielding a nonsingular TDCF . The diagonal
canonical form is characterized by a diagonal A matrix with eigenvalues
appearing on the diagonal, where eigenvalue λi is associated with the
eigenvector vi, i = 1, 2, . . . , n:

\[ A_{\mathrm{DCF}} = T_{\mathrm{DCF}}^{-1} A T_{\mathrm{DCF}} = \begin{bmatrix}
\lambda_1 & 0 & 0 & \cdots & 0 \\
0 & \lambda_2 & 0 & \cdots & 0 \\
0 & 0 & \lambda_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \lambda_n
\end{bmatrix} \qquad (2.31) \]

B_DCF = T_DCF^{-1}B, C_DCF = C T_DCF, and D_DCF = D have no particular form.
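In MATLAB the required modal matrix comes directly from eig, as sketched below (a minimal illustration assuming A is diagonalizable; the matrices are those of Example 2.8, which follows, used here only for concreteness):

A = [8 -5 10; 0 -1 1; -8 5 -9];
B = [-1; 0; 1];  C = [1 -2 4];  D = 0;
[V, E] = eig(A);        % columns of V are eigenvectors of A; E holds the eigenvalues
Tdcf = V;               % diagonal canonical form transformation matrix
Adcf = Tdcf \ (A*Tdcf)  % diagonal (possibly complex) matrix of eigenvalues
Bdcf = Tdcf \ B;
Cdcf = C*Tdcf;
Ddcf = D;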

Example 2.7 We now compute the diagonal canonical form via a state
coordinate transformation for the linear time-invariant state equation in
Example 2.5. For this example, the state coordinate transformation (2.28)
given by

\[ T_{\mathrm{DCF}} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} \qquad T_{\mathrm{DCF}}^{-1} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \]

yields transformed coefficient matrices

\[ \begin{aligned}
A_{\mathrm{DCF}} &= T_{\mathrm{DCF}}^{-1} A T_{\mathrm{DCF}} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -2 \end{bmatrix} \\[1ex]
B_{\mathrm{DCF}} &= T_{\mathrm{DCF}}^{-1} B = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \\[1ex]
C_{\mathrm{DCF}} &= C T_{\mathrm{DCF}} = \begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \end{bmatrix} \\[1ex]
D_{\mathrm{DCF}} &= D = 0
\end{aligned} \]

Note that this yields a diagonal ADCF matrix so that the diagonal canonical
form represents two decoupled first-order ordinary differential equations,
that is,
ż1 (t) = −z1 (t) + u(t) ż2 (t) = −2z2 (t) + u(t)

which therefore can be solved independently to yield complete solutions

\[ z_1(t) = e^{-t}z_1(0) + \int_{0}^{t} e^{-(t-\tau)}u(\tau)\,d\tau \qquad
z_2(t) = e^{-2t}z_2(0) + \int_{0}^{t} e^{-2(t-\tau)}u(\tau)\,d\tau \]

We also must transform the initial state given in Example 2.5 using
z(0) = T_DCF^{-1} x(0):

\[ \begin{bmatrix} z_1(0) \\ z_2(0) \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \end{bmatrix} \]

which together with a unit step input yields

\[ z_1(t) = 1 - 2e^{-t} \qquad z_2(t) = \tfrac{1}{2}\left(1 - e^{-2t}\right), \qquad t \ge 0 \]

The complete solution in the original coordinates then can be recovered
from the relationship x(t) = T_DCF z(t):

\[ \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 1 - 2e^{-t} \\ \tfrac{1}{2}(1 - e^{-2t}) \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} - 2e^{-t} + \tfrac{1}{2}e^{-2t} \\ 2e^{-t} - e^{-2t} \end{bmatrix} \]

which agrees with the result in Example 2.5. Note also from C_DCF that
y(t) = z2(t), which directly gives

\[ y_{zs}(t) = \int_{0}^{t} e^{-2(t-\tau)}u(\tau)\,d\tau \]

from which we identify the impulse response and corresponding transfer
function as

\[ h(t) = e^{-2t}, \quad t \ge 0 \quad \leftrightarrow \quad H(s) = \frac{1}{s+2} \]

This agrees with the outcome observed in Example 2.5, in which a pole-
zero cancellation resulted in the first-order transfer function above. Here,
a first-order transfer function arises because the transformed state equation
is decomposed into two decoupled first-order subsystems, each associated
with one of the state variables z1 (t) and z2 (t). Of these two subsystems, the
first is disconnected from the output y(t) so that the system’s input-output
behavior is directly governed by the z2 subsystem alone.
The previously calculated zero-state output responses in both Laplace
and time domains are verified easily. It is interesting to note that the
preceding pole-zero cancellation results in a first-order transfer function
having the one–dimensional state space realization

ż2 (t) = −2z2 (t) + u(t)


y(t) = z2 (t)

We can conclude that not only do there exist different state-space real-
izations of the same dimension for a given transfer function, but there
also may exist realizations of different and possibly lower dimension (this
will be discussed in Chapter 5). 

Example 2.8 Given the three–dimensional single-input, single-output
linear time-invariant state equation specified below, we calculate the diag-
onal canonical form.

\[ A = \begin{bmatrix} 8 & -5 & 10 \\ 0 & -1 & 1 \\ -8 & 5 & -9 \end{bmatrix} \qquad B = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \qquad C = \begin{bmatrix} 1 & -2 & 4 \end{bmatrix} \qquad D = 0 \]

The characteristic polynomial is s^3 + 2s^2 + 4s + 8; the eigenvalues of
A are the roots of this characteristic polynomial, ±2i, −2. The diago-
nal canonical form transformation matrix T_DCF is constructed from three
eigenvectors vi arranged column-wise.

\[ T_{\mathrm{DCF}} = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} = \begin{bmatrix} 5 & 5 & -3 \\ 2i & -2i & -2 \\ -4+2i & -4-2i & 2 \end{bmatrix} \]

The resulting diagonal canonical form state-space realization is

\[ \begin{aligned}
A_{\mathrm{DCF}} &= T_{\mathrm{DCF}}^{-1} A T_{\mathrm{DCF}} = \begin{bmatrix} 2i & 0 & 0 \\ 0 & -2i & 0 \\ 0 & 0 & -2 \end{bmatrix}
\qquad B_{\mathrm{DCF}} = T_{\mathrm{DCF}}^{-1} B = \begin{bmatrix} -0.0625 - 0.0625i \\ -0.0625 + 0.0625i \\ 0.125 \end{bmatrix} \\[1ex]
C_{\mathrm{DCF}} &= C T_{\mathrm{DCF}} = \begin{bmatrix} -11 + 4i & -11 - 4i & 9 \end{bmatrix}
\qquad D_{\mathrm{DCF}} = D = 0
\end{aligned} \]

If one were to start with a valid diagonal canonical form realization,


TDCF = In because the eigenvectors can be taken to be the standard basis
vectors. 
As seen in Example 2.8, when A has complex eigenvalues occurring
in a conjugate pair, the associated eigenvectors can be chosen to form a
conjugate pair. The coordinate transformation matrix TDCF formed from
linearly independent eigenvectors of A will consequently contain complex
elements. Clearly, the diagonal matrix ADCF will also contain complex
elements, namely, the complex eigenvalues. In addition, the matrices
BDCF and CDCF computed from TDCF will also have complex entries
in general. To avoid a state-space realization with complex coefficient
matrices, we can modify the construction of the coordinate transforma-
tion matrix TDCF as follows. We assume for simplicity that λ1 = σ + j ω
and λ2 = σ − j ω are the only complex eigenvalues of A with associated
eigenvectors v1 = u + j w and v2 = u − j w. It is not difficult to show that


linear independence of the complex eigenvectors v1 and v2 is equivalent to
linear independence of the real vectors u = Re(v1) and w = Im(v1). Let-
ting λ3 , . . . , λn denote the remaining real eigenvalues of A with associated
real eigenvectors v3 , . . . , vn , the matrix

\[ T_{\mathrm{DCF}} = \begin{bmatrix} u & w & v_3 & \cdots & v_n \end{bmatrix} \]

is real and nonsingular. Using this to define a state coordinate transfor-
mation, we obtain

\[ A_{\mathrm{DCF}} = T_{\mathrm{DCF}}^{-1} A T_{\mathrm{DCF}} = \begin{bmatrix}
\sigma & \omega & 0 & \cdots & 0 \\
-\omega & \sigma & 0 & \cdots & 0 \\
0 & 0 & \lambda_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \lambda_n
\end{bmatrix} \]

which is a real matrix but no longer diagonal. However, ADCF is a block


diagonal matrix that contains a 2 × 2 block displaying the real and imag-
inary parts of the complex conjugate eigenvalues λ1 , λ2 . Also, because
T_DCF now is a real matrix, B_DCF = T_DCF^{-1}B and C_DCF = C T_DCF are guaran-
teed to be real yielding a state-space realization with purely real coefficient
matrices. This process can be generalized to handle a system dynamics
matrix A with any combination of real and complex-conjugate eigenval-
ues. The real ADCF matrix that results will have a block diagonal structure
with each real eigenvalue appearing directly on the main diagonal and a
2 × 2 matrix displaying the real and imaginary part of each complex con-
jugate pair. The reader is invited to revisit Example 2.8 and instead apply
the state coordinate transformation specified by

\[ T_{\mathrm{DCF}} = \begin{bmatrix} 5 & 0 & -3 \\ 0 & 2 & -2 \\ -4 & 2 & 2 \end{bmatrix} \]

2.6 MATLAB FOR SIMULATION AND COORDINATE TRANSFORMATIONS

MATLAB and the accompanying Control Systems Toolbox provide many


useful functions for the analysis, simulation, and coordinate transforma-
tions of linear time-invariant systems described by state equations. A
subset of these MATLAB functions is discussed in this section.

MATLAB for Simulation of State-Space Systems


Some MATLAB functions that are useful for analysis and simulation of
state-space systems are

eig(A) Find the eigenvalues of A.


poly(A) Find the system characteristic polynomial coef-
ficients from A.
roots(den) Find the roots of the characteristic polynomial.
damp(A) Calculate the second-order system damping
ratio and undamped natural frequency (for
each mode if n > 2) from the system dynam-
ics matrix A.
damp(den) Calculate the second-order system damping
ratio and undamped natural frequency (for
each mode if n > 2) from the coefficients den
of the system characteristic polynomial.
impulse(SysName) Determine the unit impulse response for a sys-
tem numerically.
step(SysName) Determine the unit step response for a system
numerically.
lsim(SysName,u,t,x0) General linear simulation; calculate the output
y(t) and state x(t) numerically given the sys-
tem data structure.
expm(A*t) Evaluate the state transition matrix at time
t seconds.
plot(x,y) Plot dependent variable y versus independent
variable x.

One way to invoke the MATLAB function lsim with left-hand side argu-
ments is

[y,t,x] = lsim(SysName,u,t,x0)

The lsim function inputs are the state-space data structure SysName,
the input matrix u [length(t) rows by number of inputs m columns], an
evenly spaced time vector t supplied by the user, and the n × 1 initial
state vector x0. No plot is generated, but the lsim function yields the
system output solution y [a matrix of length(t) rows by number of outputs
p columns], the same time vector t, and the system state solution x [a
matrix of length(t) rows by number of states n columns]. The matrices y,
x, and u all have time increasing along rows, with one column for each
component, in order. The user then can plot the desired output and state
components versus time after the lsim command has been executed.

MATLAB for Coordinate Transformations and Diagonal
Canonical Form
Some MATLAB functions that are useful for coordinate transformations and
the diagonal canonical form realization are

canon MATLAB function for canonical forms (use the modal switch for
diagonal canonical form)
ss2ss Coordinate transformation of one state-space realization to
another.

The canon function with the modal switch handles complex conjugate
eigenvalues using the approach described following Example 2.8 and
returns a state-space realization with purely real coefficient matrices.

Continuing MATLAB Example


State-Space Simulation For the Continuing MATLAB Example [single-
input, single-output rotational mechanical system with input torque τ (t)
and output angular displacement θ (t)], we now simulate the open-loop
system response given zero input torque τ (t) and initial state x(0) =
[0.4, 0.2]T . We invoke the lsim command which numerically solves for
the state vector x(t) from ẋ(t) = Ax(t) + Bu(t) given the zero input u(t)
and the initial state x(0). Then lsim also yields the output y(t) from y(t) =
Cx(t) + Du(t). The following MATLAB code, in combination with that in
Chapter 1, performs the open-loop system simulation for this example.
Appendix C summarizes the entire Continuing MATLAB Example m-file.

%-----------------------------------------------
% Chapter 2. Simulation of State-Space Systems
%-----------------------------------------------
t = [0:.01:4]; % Define array of time
% values
U = [zeros(size(t))]; % Zero single input of
% proper size to go with t
x0 = [0.4; 0.2]; % Define initial state
% vector [x10; x20]

CharPoly = poly(A) % Find characteristic


% polynomial from A
Poles = roots(CharPoly) % Find the system poles

EigsO = eig(A); % Calculate open-loop


% system eigenvalues
damp(A); % Calculate eigenvalues,
% zeta, and wn from ABCD

[Yo,t,Xo] = lsim(JbkR,U,t,x0);% Open-loop response


% (zero input, given ICs)

Xo(101,:)                     % State vector value at
                              % t=1 sec
X1 = expm(A*1)*x0             % Compare with state
                              % transition matrix
                              % method

figure;                       % Open-loop State Plots
subplot(211), plot(t,Xo(:,1)); grid;
axis([0 4 -0.2 0.5]);
set(gca,'FontSize',18);
ylabel('{\itx}_1 (\itrad)')
subplot(212), plot(t,Xo(:,2)); grid; axis([0 4 -2 1]);
set(gca,'FontSize',18);
xlabel('\ittime (sec)'); ylabel('{\itx}_2 (\itrad/s)');

This m-file, combined with the m-file from Chapter 1, generates the
following results for the open-loop characteristic polynomial, poles, eigen-
values, damping ratio ξ and undamped natural frequency ωn , and the value
of the state vector at 1 second. The eigenvalues of A agree with those from
the damp command, and also with roots applied to the characteristic
polynomial.

CharPoly =
1.0000 4.0000 40.0000

Poles =
-2.0000 + 6.0000i
-2.0000 - 6.0000i

EigsO =
-2.0000 + 6.0000i
-2.0000 - 6.0000i

FIGURE 2.2 Open-loop state responses for the Continuing MATLAB Example.

Eigenvalue                   Damping      Freq. (rad/s)
-2.00e+000 + 6.00e+000i      3.16e-001    6.32e+000
-2.00e+000 - 6.00e+000i      3.16e-001    6.32e+000

X1 =
0.0457
0.1293

The m-file also generates the open-loop state response of Figure 2.2.

Coordinate Transformations and Diagonal Canonical Form For


the Continuing MATLAB Example, we now compute the diagonal canonical
form state-space realization for the given open-loop system. The following
MATLAB code, along with the MATLAB code from Chapter 1, which also
appears in Appendix C, performs this computation.

%----------------------------------------------------
% Chapter 2. Coordinate Transformations and Diagonal
% Canonical Form
%----------------------------------------------------

[Tdcf,E] = eig(A); % Transform to DCF


% via formula
Adcf = inv(Tdcf)*A*Tdcf;
Bdcf = inv(Tdcf)*B;
Cdcf = C*Tdcf;
Ddcf = D;

[JbkRm,Tm] = canon(JbkR,'modal'); % Calculate DCF


% using MATLAB canon
Am = JbkRm.a
Bm = JbkRm.b
Cm = JbkRm.c
Dm = JbkRm.d

This m-file, combined with the Chapter 1 m-file, produces the diagonal
canonical form realization for the Continuing MATLAB Example:

Tdcf =
-0.0494 - 0.1482i -0.0494 + 0.1482i
0.9877 0.9877

Adcf =
-2.0000 + 6.0000i 0 - 0.0000i
0.0000 - 0.0000i -2.0000 - 6.0000i

Bdcf =
0.5062 + 0.1687i
0.5062 - 0.1687i

Cdcf =
-0.0494 - 0.1482i -0.0494 + 0.1482i

Ddcf =
0

Tm =
0 1.0124
-6.7495 -0.3375

Am =
-2.0000 6.0000
-6.0000 -2.0000

Bm =
1.0124
-0.3375
Cm =
-0.0494 -0.1482

Dm =
0

We observe that Am is a real 2 × 2 matrix that displays the real and


imaginary parts of the complex conjugate eigenvalues −2 ± 6i. The MAT-
LAB modal transformation matrix Tm above is actually the inverse of our
coordinate transformation matrix given in Equation (2.28). Therefore, the
inverse of this matrix, for use in our coordinate transformation, is

inv(Tm) =
-0.0494 -0.1482
0.9877 0

The first column of inv(Tm) is the real part of the first column of Tdcf,
which is an eigenvector corresponding to the eigenvalue −2 + 6i. The
second column of inv(Tm) is the imaginary part of this eigenvector.

2.7 CONTINUING EXAMPLES FOR SIMULATION AND COORDINATE TRANSFORMATIONS

Continuing Example 1: Two-Mass Translational Mechanical


System
Simulation The constant parameters in Table 2.1 are given for Contin-
uing Example 1 (two-mass translational mechanical system):
For case a, we simulate the open-loop system response given zero
initial state and step inputs of magnitudes 20 and 10 N, respectively, for
u1 (t) and u2 (t).
For case b, we simulate the open-loop system response given zero input
u2 (t) and initial state x(0) = [0.1, 0, 0.2, 0]T [initial displacements of
y1 (0) = 0.1 and y2 (0) = 0.2 m, with zero initial velocities].

TABLE 2.1 Numerical Parameters for Continuing Example 1

i    mi (kg)    ci (Ns/m)    ki (N/m)
1    40         20           400
2    20         10           200

Case a. For case a, we invoke lsim to numerically solve for the state
vector x(t) given the inputs u(t) and the zero initial state x(0); lsim also
yields the output y(t). The state-space coefficient matrices, with parameter
values from Table 2.1 above, are

\[ A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -15 & -0.75 & 5 & 0.25 \\ 0 & 0 & 0 & 1 \\ 10 & 0.5 & -10 & -0.5 \end{bmatrix} \qquad
B = \begin{bmatrix} 0 & 0 \\ 0.025 & 0 \\ 0 & 0 \\ 0 & 0.05 \end{bmatrix} \]

\[ C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \qquad
D = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \]

The plots of outputs y1 (t) and y2 (t) versus time are given in Figure 2.3.
We see from Figure 2.3 that this system is lightly damped; there is sig-
nificant overshoot, and the masses are still vibrating at 40 seconds. The
vibratory motion is an underdamped second-order transient response, set-
tling to final nonzero steady-state values resulting from the step inputs.
The four open-loop system eigenvalues of A are s1,2 = −0.5 ± 4.44i
and s3,4 = −0.125 ± 2.23i. The fourth-order system characteristic poly-
nomial is

\[ s^4 + 1.25s^3 + 25.25s^2 + 10s + 100 \]

This characteristic polynomial was found using the MATLAB function


poly(A); the roots of this polynomial are identical to the system


FIGURE 2.3 Open-loop output response for Continuing Example 1, case a.



eigenvalues. There are two modes of vibration in this two-degrees-


of-freedom system; both are underdamped with ξ1 = 0.112 and ωn1 =
4.48 rad/s for s1,2 and ξ2 = 0.056 and ωn2 = 2.24 rad/s for s3,4 . Note
that each mode contributes to both y1 (t) and y2 (t) in Figure 2.3. The
steady-state values are found by setting ẋ(t) = 0 in ẋ(t) = Ax(t) + Bu(t)
to yield xss = −A−1 Bu. As a result of the step inputs, the output
displacement components do not return to zero in steady-state, as the
velocities do: xss = [0.075, 0, 0.125, 0]T .
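This steady-state value is a one-line computation, sketched below (an illustration only; A and B are the case a coefficient matrices given above, and u collects the two step magnitudes):

A = [0 1 0 0; -15 -0.75 5 0.25; 0 0 0 1; 10 0.5 -10 -0.5];
B = [0 0; 0.025 0; 0 0; 0 0.05];
u = [20; 10];                 % step input magnitudes for u1(t) and u2(t)
xss = -A\(B*u)                % solves 0 = A*xss + B*u, giving [0.075; 0; 0.125; 0]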
Although we focus on state-space techniques, for completeness, the matrix
of transfer functions H (s) [Y (s) = H (s)U (s)] is given below for Contin-
uing Example 1, Case a (found from the MATLAB function tf):

\[ H(s) = \begin{bmatrix}
\dfrac{0.025s^2 + 0.0125s + 0.25}{s^4 + 1.25s^3 + 25.25s^2 + 10s + 100} & \dfrac{0.0125s + 0.25}{s^4 + 1.25s^3 + 25.25s^2 + 10s + 100} \\[2ex]
\dfrac{0.0125s + 0.25}{s^4 + 1.25s^3 + 25.25s^2 + 10s + 100} & \dfrac{0.05s^2 + 0.0375s + 0.75}{s^4 + 1.25s^3 + 25.25s^2 + 10s + 100}
\end{bmatrix} \]

Note that the denominator polynomial in every element of H(s) is the


same and agrees with the system characteristic polynomial derived from
the A matrix and presented earlier. Consequently, the roots of the system
characteristic polynomial are identical to the eigenvalues of A.

Case b. For Case b, we again use lsim to solve for the state vector x(t)
given the zero input u2 (t) and the initial state x(0). The state-space coeffi-
cient matrices, with specific parameters from above, are (A is unchanged
from Case a):

\[ B = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0.05 \end{bmatrix} \qquad C = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix} \qquad D = 0 \]

The plots for states x1 (t) through x4 (t) versus time are given in Figure 2.4.
We again see from Figure 2.4 that this system is lightly damped. The
vibratory motion is again an underdamped second-order transient response
to the initial conditions, settling to zero steady-state values for zero
input u2 (t). The open-loop system characteristic polynomial, eigenvalues,
damping ratios, and undamped natural frequencies are all identical to the
Case a results.

FIGURE 2.4 Open-loop state response for Continuing Example 1, case b.
In Figure 2.4, we see that states x1 (t) and x3 (t) start from the given initial
displacement values of 0.1 and 0.2, respectively. The given initial veloci-
ties are both zero. Note that in this Case b example, the final values are all
zero because after the transient response has died out, each spring returns
to its equilibrium position referenced as zero displacement.
When focusing on the zero-input response, we can calculate the state
vector at any desired time by using the state transition matrix Φ(t, t0) =
e^{A(t−t0)}. For instance, at time t = 20 sec:

\[ x(20) = \Phi(20, 0)x(0) = e^{A(20)}x(0) = \begin{bmatrix} 0.0067 \\ -0.0114 \\ 0.0134 \\ -0.0228 \end{bmatrix} \]

These values, although difficult to see on the scale of Figure 2.4, agree
with the MATLAB data used in Figure 2.4 at t = 20 seconds.
Although we focus on state-space techniques, for completeness the transfer
function is given below for Continuing Example 1, case b (found from
the MATLAB function tf):
\[ H(s) = \frac{Y_1(s)}{U_2(s)} = \frac{0.0125s + 0.25}{s^4 + 1.25s^3 + 25.25s^2 + 10s + 100} \]

where the system characteristic polynomial is again the same as that given
in case a above. Note that this scalar transfer function relating output y1 (t)
to input u2 (t) is identical to the (1,2) element of the transfer function
matrix presented for the multiple-input, multiple-output system in case a.
This makes sense because the (1,2) element in the multiple-input, multiple-
output case captures the dependence of output y1 (t) on input u2 (t) with
u1 (t) set to zero.

Coordinate Transformations and Diagonal Canonical Form


We now calculate the diagonal canonical form for Continuing Example 1,
case b. If we allow complex numbers in our realization, the transforma-
tion matrix to diagonal canonical form is composed of eigenvectors of A
arranged column-wise:

    TDCF = [  0.017 + 0.155i    0.017 − 0.155i   −0.010 − 0.182i   −0.010 + 0.182i
             −0.690            −0.690              0.408             0.408
             −0.017 − 0.155i   −0.017 + 0.155i   −0.020 − 0.365i   −0.020 + 0.365i
              0.690             0.690              0.817             0.817         ]

Applying this coordinate transformation, we obtain the diagonal canonical form:

    ADCF = TDCF^(−1) A TDCF = [ −0.50 + 4.44i        0                0                0
                                      0        −0.50 − 4.44i          0                0
                                      0              0         −0.125 + 2.23i          0
                                      0              0                0         −0.125 − 2.23i ]

    BDCF = TDCF^(−1) B = [ 0.0121 + 0.0014i
                           0.0121 − 0.0014i
                           0.0204 + 0.0011i
                           0.0204 − 0.0011i ]

    CDCF = C TDCF = [ 0.017 + 0.155i   0.017 − 0.155i   −0.010 − 0.182i   −0.010 + 0.182i ]

    DDCF = D = 0

Note that, as expected, the eigenvalues of the system appear on the diag-
onal of ADCF .
The MATLAB canon function with the switch modal yields

    Am = [ −0.50    4.44     0         0
           −4.44   −0.50     0         0
            0        0      −0.125     2.23
            0        0      −2.23     −0.125 ]

    Bm = [  0.024
           −0.003
            0.041
           −0.002 ]

    Cm = [ 0.017   0.155   −0.010   −0.182 ]

    Dm = D = 0

which is consistent with our preceding discussions.
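A sketch of how both forms can be computed, assuming the Case b matrices A, B, C, and D are in the workspace; note that the eigenvector columns returned by eig may differ from TDCF above by complex scale factors:

    % complex diagonal canonical form from the eigenvectors of A
    [Tdcf, Lambda] = eig(A);
    Adcf = Tdcf\A*Tdcf;   Bdcf = Tdcf\B;   Cdcf = C*Tdcf;   Ddcf = D;

    % real block-diagonal (modal) form
    [sysm, Tm] = canon(ss(A, B, C, D), 'modal');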

Continuing Example 2: Rotational Electromechanical System


Simulation The constant parameters in Table 2.2 are given for Contin-
uing Example 2 [single-input, single-output rotational electromechanical
system with input voltage v(t) and output angular displacement θ (t)].
We now simulate the open-loop system response given zero initial state
and unit step voltage input. We use the lsim function to solve for the state

TABLE 2.2  Numerical Parameters for Continuing Example 2

    Parameter    Value    Units     Name
    L            1        H         Circuit inductance
    R            2        Ω         Circuit resistance
    J            1        kg-m2     Motor shaft polar inertia
    b            1        N-m-s     Motor shaft damping constant
    kT           2        N-m/A     Torque constant

vector x(t) given the unit step input u(t) and zero initial state x(0); y(t)
also results from the lsim function. The state-space coefficient matrices,
with parameter values from Table 2.2 above, are

    A = [ 0    1    0
          0    0    1
          0   −2   −3 ]        B = [ 0
                                     0
                                     2 ]        C = [ 1   0   0 ]        D = 0
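A sketch of the corresponding open-loop simulation, assuming these matrices are defined as above:

    sysCE2 = ss(A, B, C, D);
    t  = (0:0.01:7)';                      % time vector spanning the transient and steady state
    u  = ones(size(t));                    % unit step voltage input
    x0 = zeros(3, 1);                      % zero initial state
    [y, t, x] = lsim(sysCE2, u, t, x0);    % x contains x1(t), x2(t), x3(t)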

Plots for the three state variables versus time are given in Figure 2.5.
We see from Figure 2.5 (top) that the motor shaft angle θ (t) = x1 (t)
increases linearly in the steady state, after the transient response has died
out. This is to be expected: If a constant voltage is applied, the motor
angular displacement should continue to increase linearly because there is
no torsional spring. Then a constant steady-state current and torque result.
The steady-state linear slope of x1 (t) in Figure 2.5 is the steady-state value
of θ̇ (t) = x2 (t), namely, 1 rad/s. This x2 (t) response is an overdamped
second-order response. The third state response, θ̈ (t) = x3 (t), rapidly rises
from its zero initial condition value to a maximum of 0.5 rad/s2 ; in steady
state, θ̈(t) is zero owing to the constant angular velocity θ̇ (t) of the motor
shaft. The three open-loop system eigenvalues of A are s1,2,3 = 0, −1, −2.
The third-order system characteristic polynomial is

    s^3 + 3s^2 + 2s = s(s^2 + 3s + 2) = s(s + 1)(s + 2)

FIGURE 2.5 Open-loop response for Continuing Example 2: states x1 (rad), x2 (rad/s), and x3 (rad/s2) versus time (sec).

This was found using the MATLAB function poly(A); the roots of this poly-
nomial are identical to the system eigenvalues. The zero eigenvalue cor-
responds to the rigid-body rotation of the motor shaft; the remaining two
eigenvalues, −1 and −2, lead to the conclusion that the shaft angular velocity
θ̇(t) = ω(t) response is overdamped. Note that we cannot calculate steady-state
values from xss = −A^(−1)Bu as in Continuing Example 1, because the system
dynamics matrix A is singular owing to the zero eigenvalue.
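These observations are simple to confirm numerically; a brief sketch, assuming A is defined as above:

    charPoly = poly(A)    % coefficients of s^3 + 3s^2 + 2s
    lambda   = eig(A)     % eigenvalues 0, -1, -2
    rank(A)               % rank 2 < 3, so A is singular and xss = -A\(B*u) does not apply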
For completeness the scalar transfer function is given below for this
example (found via the MATLAB function tf):
    H(s) = θ(s)/V(s) = 2 / (s^3 + 3s^2 + 2s) = 2 / (s(s + 1)(s + 2))
Note the same characteristic polynomial results as reported earlier. The
roots of the denominator polynomial are the same as the eigenvalues of
A. The preceding transfer function H (s) relates output motor angular
displacement θ (t) to the applied voltage v(t). If we wish to consider the
motor shaft angular velocity ω(t) as the output instead, we must differenti-
ate θ (t), which is equivalent to multiplying by s, yielding the overdamped
second-order system discussed previously:
    H2(s) = ω(s)/V(s) = 2 / ((s + 1)(s + 2))
We could develop an associated two-dimensional state-space realization if we wished to control ω(t) rather than θ(t) as the output:

    x1(t) = ω(t)
    x2(t) = ω̇(t) = ẋ1(t)

    [ ẋ1(t) ]   [     0              1          ] [ x1(t) ]   [    0     ]
    [ ẋ2(t) ] = [ −Rb/(LJ)   −(Lb + RJ)/(LJ)    ] [ x2(t) ] + [ kT/(LJ)  ] v(t)

                [  0    1 ] [ x1(t) ]   [ 0 ]
              = [ −2   −3 ] [ x2(t) ] + [ 2 ] v(t)

    ω(t) = [ 1   0 ] [ x1(t)   + [0] v(t)
                       x2(t) ]
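A quick check that this reduced-order realization reproduces H2(s), using the parameter values of Table 2.2:

    A2 = [0 1; -2 -3];   B2 = [0; 2];   C2 = [1 0];   D2 = 0;
    H2 = tf(ss(A2, B2, C2, D2))    % expected: 2/(s^2 + 3s + 2) = 2/((s + 1)(s + 2))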

Coordinate Transformations and Diagonal Canonical Form


We now present the diagonal canonical form for Continuing Example 2.
The coordinate transformation matrix for diagonal canonical form is
composed of eigenvectors of A arranged column-wise:

    TDCF = [ 1   −0.577    0.218
             0    0.577   −0.436
             0   −0.577    0.873 ]

Applying this coordinate transformation, we obtain the diagonal canonical form:

    ADCF = TDCF^(−1) A TDCF = [ 0    0    0
                                0   −1    0
                                0    0   −2 ]

    BDCF = TDCF^(−1) B = [ 1
                           3.464
                           4.583 ]

    CDCF = C TDCF = [ 1   −0.577   0.218 ]

    DDCF = D = 0

Note that the system eigenvalues appear on the main diagonal of diagonal
matrix ADCF , as expected.
The MATLAB canon function with the modal switch yields identical
results because the system eigenvalues are real.
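A minimal sketch of this computation, assuming A, B, C, and D for Continuing Example 2 are defined; eig may order or scale the eigenvector columns differently than TDCF above:

    [Tdcf, Lambda] = eig(A);
    Adcf = Tdcf\A*Tdcf;   Bdcf = Tdcf\B;   Cdcf = C*Tdcf;
    sysm = canon(ss(A, B, C, D), 'modal');   % diagonal here as well, since the eigenvalues are real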

2.8 HOMEWORK EXERCISES

Numerical Exercises
NE2.1 Solve 2ẋ(t) + 5x(t) = u(t) for x(t), given that u(t) is the unit
step function and initial state x(0) = 0. Calculate the time con-
stant and plot your x(t) result versus time.
NE2.2 For the following systems described by the given state equations,
derive the associated transfer functions.

    a.  [ẋ1(t); ẋ2(t)] = [ −3   0;   0   −4 ] [x1(t); x2(t)] + [ 1; 1 ] u(t)
        y(t) = [ 1   1 ] [x1(t); x2(t)] + [0] u(t)

    b.  [ẋ1(t); ẋ2(t)] = [  0   1;  −3   −2 ] [x1(t); x2(t)] + [ 0; 1 ] u(t)
        y(t) = [ 1   0 ] [x1(t); x2(t)] + [0] u(t)

    c.  [ẋ1(t); ẋ2(t)] = [  0  −2;   1  −12 ] [x1(t); x2(t)] + [ 1; 0 ] u(t)
        y(t) = [ 0   1 ] [x1(t); x2(t)] + [0] u(t)

    d.  [ẋ1(t); ẋ2(t)] = [  1   2;   3    4 ] [x1(t); x2(t)] + [ 5; 6 ] u(t)
        y(t) = [ 7   8 ] [x1(t); x2(t)] + [9] u(t)
NE2.3 Determine the characteristic polynomial and eigenvalues for the
systems represented by the following system dynamics matrices.

    a.  A = [ −1    0;    0   −2 ]
    b.  A = [  0    1;  −10  −20 ]
    c.  A = [  0    1;  −10    0 ]
    d.  A = [  0    1;    0  −20 ]
NE2.4 For the given homogeneous system below, subject only to the
initial state x(0) = [2, 1]^T, calculate the matrix exponential and
the state vector at time t = 4 seconds.

    [ẋ1(t); ẋ2(t)] = [ 0   1;  −6  −12 ] [x1(t); x2(t)]

NE2.5 Solve the two-dimensional state equation below for the state vector
x(t), given that the input u(t) is a unit step input and zero
initial state. Plot both state components versus time.

    [ẋ1(t); ẋ2(t)] = [ 0   1;  −8  −6 ] [x1(t); x2(t)] + [ 0; 1 ] u(t)

NE2.6 Solve the two-dimensional state equation

    [ẋ1(t); ẋ2(t)] = [ 0   1;  −2  −2 ] [x1(t); x2(t)] + [ 0; 1 ] u(t)
    [x1(0); x2(0)] = [ 1; −1 ]

for a unit step u(t).


NE2.7 Solve the two-dimensional state equation

    [ẋ1(t); ẋ2(t)] = [ 0   1;  −5  −2 ] [x1(t); x2(t)] + [ 0; 1 ] u(t)
    [x1(0); x2(0)] = [ 1; −1 ]

for a unit step u(t).


NE2.8 Solve the two-dimensional state equation

    [ẋ1(t); ẋ2(t)] = [ 0   1;  −1  −2 ] [x1(t); x2(t)] + [ 0; 1 ] u(t)
    [x1(0); x2(0)] = [ 1; −2 ]
    y(t) = [ 2   1 ] [x1(t); x2(t)]

for u(t) = e^(−2t), t ≥ 0.


NE2.9 Calculate the complete response y(t) for the state equation

    [ẋ1(t); ẋ2(t)] = [ −2   0;   0  −3 ] [x1(t); x2(t)] + [ 1; 1 ] u(t)
    x(0) = [ 2/3; 1/2 ]
    y(t) = [ −3   4 ] [x1(t); x2(t)]

for the input signal u(t) = 2e^t, t ≥ 0. Identify zero-input and
zero-state response components.

NE2.10 Diagonalize the following system dynamics matrices A using
coordinate transformations.

    a.  A = [ 0    1;  −8  −10 ]
    b.  A = [ 0    1;  10    6 ]
    c.  A = [ 0  −10;   1   −1 ]
    d.  A = [ 0   10;   1    0 ]

Analytical Exercises

AE2.1 If A and B are n × n matrices, show that

    e^{(A+B)t} = e^{At} + ∫_0^t e^{A(t−τ)} B e^{(A+B)τ} dτ

You may wish to use the Leibniz rule for differentiating an integral:

    d/dt ∫_{a(t)}^{b(t)} X(t, τ) dτ = X[t, b(t)] ḃ(t) − X[t, a(t)] ȧ(t) + ∫_{a(t)}^{b(t)} ∂X(t, τ)/∂t dτ

AE2.2 Show that for any n × n matrix A and any scalar γ,

    e^{(γ I + A)t} = e^{γ t} e^{At}

AE2.3 A real n × n matrix A is called skew symmetric if A^T = −A. A
real n × n matrix R is called orthogonal if R^(−1) = R^T. Given a
skew symmetric n × n matrix A, show that the matrix exponential
e^{At} is an orthogonal matrix for every t.

AE2.4 Show that the upper block triangular matrix

    A = [ A11   A12
           0    A22 ]

has the matrix exponential

    e^{At} = [ e^{A11 t}    ∫_0^t e^{A11(t−τ)} A12 e^{A22 τ} dτ
                   0                    e^{A22 t}              ]

AE2.5 Show that the matrix exponential satisfies

    e^{At} = I + A ∫_0^t e^{Aτ} dτ

Use this to derive an expression for ∫_0^t e^{Aτ} dτ in the case where
A is nonsingular.
AE2.7 For n × n matrices A and Q, show that the matrix differential
equation

    Ẇ(t, t0) = A W(t, t0) + W(t, t0) A^T + Q        W(t0, t0) = 0

has the solution

    W(t, t0) = ∫_{t0}^t e^{A(t−τ)} Q e^{A^T (t−τ)} dτ

AE2.8 Verify that the three-dimensional state equation

    ẋ(t) = Ax(t) + Bu(t)

specified by

    A = [  0     1     0          B = [ 1    0    0 ]^(−1) [ b2          C = [ 1   0   0 ]
           0     0     1                a2   1    0          b1
         −a0   −a1   −a2 ]              a1   a2   1 ]        b0 ]

is a state-space realization of the transfer function

    H(s) = (b2 s^2 + b1 s + b0) / (s^3 + a2 s^2 + a1 s + a0)
AE2.9 Verify that the three-dimensional state equation

    ẋ(t) = Ax(t) + Bu(t)
    y(t) = Cx(t)

specified by

    A = [ 0   0   −a0          B = [ 1
          1   0   −a1                0
          0   1   −a2 ]              0 ]

    C = [ b2   b1   b0 ] [ 1   a2   a1 ]^(−1)
                         [ 0    1   a2 ]
                         [ 0    0    1 ]

is a state-space realization of the transfer function

    H(s) = (b2 s^2 + b1 s + b0) / (s^3 + a2 s^2 + a1 s + a0)
AE2.10 Show that if the multiple-input, multiple-output state equation

    ẋ(t) = Ax(t) + Bu(t)
    y(t) = Cx(t) + Du(t)

is a state-space realization of the transfer function matrix H(s),
then the so-called dual state equation

    ż(t) = −A^T z(t) − C^T v(t)
    w(t) = B^T z(t) + D^T v(t)

is a state-space realization of H^T(−s).


AE2.11 Let the single-input, single-output state equation

    ẋ(t) = Ax(t) + Bu(t)        x(0) = x0
    y(t) = Cx(t) + Du(t)

be a state-space realization of the transfer function H(s). Suppose
that z0 ∈ C is not an eigenvalue of A and is such that

    [ z0 I − A    −B
         C         D ]

is singular. Show that z0 is a zero of the transfer function H(s).
Furthermore, show that there exist nontrivial x0 ∈ C^n and u0 ∈ C
such that x(0) = x0 and u(t) = u0 e^{z0 t} yield y(t) = 0 for all t ≥ 0.
AE2.12 For the single-input, single-output state equation

    ẋ(t) = Ax(t) + Bu(t)        x(0) = x0
    y(t) = Cx(t) + Du(t)

with D ≠ 0, show that the related state equation

    ż(t) = (A − BD^(−1)C) z(t) + BD^(−1) v(t)        z(0) = z0
    w(t) = −D^(−1)C z(t) + D^(−1) v(t)

has the property that if z0 = x0 and v(t) = y(t), then w(t) = u(t).
AE2.13 For the m-input, m-output state equation

    ẋ(t) = Ax(t) + Bu(t)        x(0) = x0
    y(t) = Cx(t)

with the m × m matrix CB nonsingular, show that the related state equation

    ż(t) = (A − B(CB)^(−1)CA) z(t) + B(CB)^(−1) v(t)        z(0) = z0
    w(t) = −(CB)^(−1)CA z(t) + (CB)^(−1) v(t)

has the property that if z0 = x0 and v(t) = ẏ(t), then w(t) = u(t).

Continuing MATLAB Exercises

CME2.1 For the system given in CME1.1:


a. Determine and plot the impulse response.

b. Determine and plot the unit step response for zero initial
state.
c. Determine and plot the zero input response given x0 =
[1, −1]T .
d. Calculate the diagonal canonical form.

CME2.2 For the system given in CME1.2:


a. Determine and plot the impulse response.
b. Determine and plot the unit step response for zero initial
state.
c. Determine and plot the zero input response given x0 =
[1, 2, 3]T .
d. Calculate the diagonal canonical form.

CME2.3 For the system given in CME1.3:


a. Determine and plot the impulse response.
b. Determine and plot the unit step response for zero initial
state.
c. Determine and plot the zero input response given x0 =
[4, 3, 2, 1]T .
d. Calculate the diagonal canonical form.

CME2.4 For the system given in CME1.4:


a. Determine and plot the impulse response.
b. Determine and plot the unit step response for zero initial
state.
c. Determine and plot the zero input response given x0 =
[1, 2, 3, 4]T .
d. Calculate the diagonal canonical form.

Continuing Exercises

CE2.1a Use the numerical parameters in Table 2.3 for this and all ensu-
ing CE1 assignments (see Figure 1.14).
Simulate and plot the resulting open-loop output displacements
for three cases (for this problem, use the state-space realizations
of CE1.1b):
i. Multiple-input, multiple-output: three inputs ui (t) and three
outputs yi (t), i = 1, 2, 3.

TABLE 2.3  Numerical Parameters for CE1 System

    i    mi (kg)    ci (Ns/m)    ki (N/m)
    1    1          0.4          10
    2    2          0.8          20
    3    3          1.2          30
    4               1.6          40

(a) Step inputs of magnitudes u1(t) = 3, u2(t) = 2, and u3(t) = 1 (N).
Assume zero initial state.
(b) Zero inputs u(t). Assume initial displacements y1(0) = 0.005,
y2(0) = 0.010, and y3(0) = 0.015 (m); assume zero initial
velocities. Plot all six state components.
ii. Multiple-input, multiple-output: two unit step inputs u1 (t)
and u3 (t), three displacement outputs yi (t), i = 1, 2, 3.
Assume zero initial state.
iii. Single-input, single-output: unit step input u2 (t) and output
y3 (t). Assume zero initial state. Plot all six state compo-
nents.
For each case, simulate long enough to demonstrate the steady-
state behavior. For all plots, use the MATLAB subplot function
to plot each variable on separate axes, aligned vertically with
the same time range. What are the system eigenvalues? These
define the nature of the system transient response. For case i(b)
only, check your state vector results at t = 10 seconds using the
matrix exponential.
Since this book does not focus on modeling, the solution for
CE1.1a is given below:

    m1 ÿ1(t) + (c1 + c2)ẏ1(t) + (k1 + k2)y1(t) − c2 ẏ2(t) − k2 y2(t) = u1(t)
    m2 ÿ2(t) + (c2 + c3)ẏ2(t) + (k2 + k3)y2(t) − c2 ẏ1(t) − k2 y1(t) − c3 ẏ3(t) − k3 y3(t) = u2(t)
    m3 ÿ3(t) + (c3 + c4)ẏ3(t) + (k3 + k4)y3(t) − c3 ẏ2(t) − k3 y2(t) = u3(t)

One possible solution for CE1.1b (system dynamics matrix


A only) is given below. This A matrix is the same for all
input-output cases, while B, C, and D will be different for each
case. First, the state variables associated with this realization are

    x1(t) = y1(t)              x3(t) = y2(t)              x5(t) = y3(t)
    x2(t) = ẏ1(t) = ẋ1(t)      x4(t) = ẏ2(t) = ẋ3(t)      x6(t) = ẏ3(t) = ẋ5(t)

    A = [        0                   1               0               0               0                0
          −(k1 + k2)/m1      −(c1 + c2)/m1        k2/m1           c2/m1              0                0
                 0                   0               0               1               0                0
              k2/m2               c2/m2       −(k2 + k3)/m2   −(c2 + c3)/m2        k3/m2            c3/m2
                 0                   0               0               0               0                1
                 0                   0            k3/m3           c3/m3      −(k3 + k4)/m3   −(c3 + c4)/m3 ]

CE2.1b Calculate the diagonal canonical form realization for the case iii
CE1 system. Comment on the structure of the results.
CE2.2a Use the numerical parameters in Table 2.4 for this and all ensu-
ing CE2 assignments (see Figure 1.15).
Simulate and plot the open-loop state variable responses for
three cases (for this problem use the state-space realizations of
CE1.2b); assume zero initial state for all cases [except Case i(b)
below]:
i. Single-input, single-output: input f (t) and output θ (t).
(a) unit impulse input f (t) and zero initial state.
(b) zero input f (t) and an initial condition of θ (0) = 0.1 rad
(zero initial conditions on all other state variables).
ii. Single-input, multiple-output: impulse input f (t) and two
outputs w(t) and θ (t).
iii. Multiple-input, multiple-output: two unit step inputs f (t)
and τ (t) and two outputs w(t) and θ (t).
Simulate long enough to demonstrate the steady-state behavior.
What are the system eigenvalues? Based on these eigenvalues
and the physical system, explain the system responses.

TABLE 2.4  Numerical Parameters for CE2 System

    Parameter    Value    Units    Name
    m1           2        kg       cart mass
    m2           1        kg       pendulum mass
    L            0.75     m        pendulum length
    g            9.81     m/s2     gravitational acceleration

Since this book does not focus on modeling, the solution for
CE1.2a is given below:
Coupled Nonlinear Differential Equations

    (m1 + m2)ẅ(t) − m2 L cos θ(t) θ̈(t) + m2 L sin θ(t) θ̇^2(t) = f(t)
    m2 L^2 θ̈(t) − m2 L cos θ(t) ẅ(t) − m2 g L sin θ(t) = 0

Coupled Linearized Differential Equations

    (m1 + m2)ẅ(t) − m2 L θ̈(t) = f(t)
    −m2 ẅ(t) + m2 L θ̈(t) − m2 g θ(t) = 0

Coupled Linearized Differential Equations with Torque Motor Included

    (m1 + m2)ẅ(t) − m2 L θ̈(t) = f(t)
    −m2 L ẅ(t) + m2 L^2 θ̈(t) − m2 g L θ(t) = τ(t)

Note that the coupled nonlinear differential equations could have


been converted first to state-space form and then linearized
about a nominal trajectory, as described in Section 1.4; a natu-
ral choice for the nominal trajectory is zero pendulum angle and
rate, plus zero cart position (center of travel) and rate. Consider
this as an alternate solution to CE1.2b—you will get the same
A, B, C, and D matrices.
One possible solution for CE1.2b (system dynamics matrix
A only) is given below. This A matrix is the same for all input-
output cases, whereas B, C, and D will be different for each
case. First, the state variables associated with this realization are

    x1(t) = w(t)               x3(t) = θ(t)
    x2(t) = ẇ(t) = ẋ1(t)       x4(t) = θ̇(t) = ẋ3(t)

    A = [ 0   1          0            0
          0   0       m2 g/m1         0
          0   0          0            1
          0   0   (m1 + m2)g/(m1 L)   0 ]

TABLE 2.5  Numerical Parameters for CE3 System

    Parameter    Value       Units           Name
    L            0.0006      H               armature inductance
    R            1.40        Ω               armature resistance
    kB           0.00867     V/deg/s         motor back emf constant
    JM           0.00844     lbf-in-s2       motor shaft polar inertia
    bM           0.00013     lbf-in/deg/s    motor shaft damping constant
    kT           4.375       lbf-in/A        torque constant
    n            200         unitless        gear ratio
    JL           1           lbf-in-s2       load shaft polar inertia
    bL           0.5         lbf-in/deg/s    load shaft damping constant

CE2.2b For the case i CE2 system, try to calculate the diagonal
canonical form realization (diagonal canonical form cannot be
found—why?).
CE2.3a Use the numerical parameters in Table 2.5 for this and all ensu-
ing CE3 assignments (see Figure 1.16).
Simulate and plot the open-loop state variable responses for
two cases (for this problem, use the state-space realizations of
CE1.3b):
i. Single-input, single-output: input armature voltage vA (t) and
output robot load shaft angle θ L (t).
(a) Unit step input armature voltage vA (t); plot all three state
variables given zero initial state.
(b) Zero input armature voltage vA (t); plot all three state
variables given initial state θL (0) = 0, θ̇L (0) = 1, and
θ̈L (0) = 0.
ii. Single-input, single-output: unit step input armature voltage
vA (t) and output robot load shaft angular velocity ωL (t); plot
both state variables. For this case, assume zero initial state.
Simulate long enough to demonstrate the steady-state behavior.
What are the system eigenvalues? Based on these eigenvalues
and the physical system, explain the system responses.
Since this book does not focus on modeling, the solution for
CE1.3a is given below; the overall transfer function is
    G(s) = ΘL(s)/VA(s) = (kT/n) / (LJ s^3 + (Lb + RJ)s^2 + (Rb + kT kB)s)

where J = JM + JL/n^2 and b = bM + bL/n^2 are the effective polar
inertia and viscous damping coefficient reflected to the motor

shaft. The associated single-input, single-output ordinary differential
equation is

    LJ d^3θL(t)/dt^3 + (Lb + RJ)θ̈L(t) + (Rb + kT kB)θ̇L(t) = (kT/n) vA(t)
One possible solution for CE1.3b (case i) is given below. The
state variables and output associated with the solution below are:

    x1(t) = θL(t)
    x2(t) = θ̇L(t) = ẋ1(t)                y(t) = θL(t) = x1(t)
    x3(t) = θ̈L(t) = ẍ1(t) = ẋ2(t)

The state differential and algebraic output equations are

    [ ẋ1(t) ]   [ 0           1                      0            ] [ x1(t) ]   [     0      ]
    [ ẋ2(t) ] = [ 0           0                      1            ] [ x2(t) ] + [     0      ] vA(t)
    [ ẋ3(t) ]   [ 0   −(Rb + kT kB)/(LJ)     −(Lb + RJ)/(LJ)      ] [ x3(t) ]   [ kT/(LJ n)  ]

    y(t) = [ 1   0   0 ] [x1(t); x2(t); x3(t)] + [0] vA(t)

The solution for case ii is similar, but of reduced (second) order:

    x1(t) = ωL(t)                y(t) = ωL(t) = x1(t)
    x2(t) = ω̇L(t) = ẋ1(t)

    [ ẋ1(t) ]   [        0                      1          ] [ x1(t) ]   [     0      ]
    [ ẋ2(t) ] = [ −(Rb + kT kB)/(LJ)    −(Lb + RJ)/(LJ)    ] [ x2(t) ] + [ kT/(LJ n)  ] vA(t)

    y(t) = [ 1   0 ] [x1(t); x2(t)] + [0] vA(t)

CE2.3b Calculate the diagonal canonical form realization for the case i
CE3 system. Comment on the structure of the results.
CE2.4a Use the numerical parameters in Table 2.6 for this and all ensu-
ing CE4 assignments (see Figure 1.8).
Simulate and plot the open-loop state variables in response to
an impulse torque input τ (t) = δ(t) Nm and p(0) = 0.25 m
with zero initial conditions on all other state variables. Simulate
long enough to demonstrate the steady-state behavior. What are
the system eigenvalues? Based on these eigenvalues and the
physical system, explain the system response to these initial
conditions.
A valid state-space realization for this system is given in
Example 1.7, linearized about the nominal trajectory discussed
there. This linearization was performed for a horizontal beam
with a ball trajectory consisting of an initial ball position and
constant ball translational velocity. However, the realization in
Example 1.7 is time varying because it depends on the nominal
ball position p̃(t). Therefore, for CE4, place a further constraint
on this system linearization to obtain a linear time-invariant
system: Set the constant ball velocity to zero (v0 = 0) and set
p̃(t) = p0 = 0.25 m. Discuss the likely real-world impact of
this linearization and constraint.
CE2.4b Calculate the diagonal canonical form realization for the CE4
system.
CE2.5a Use the numerical parameters in Table 2.7 (Bupp et al., 1998)
for this and all ensuing CE5 assignments (see Figure 1.17).

TABLE 2.6  Numerical Parameters for CE4 System

    Parameter    Value       Units    Name
    L            1           m        beam length (rotates about center)
    J            0.0676      kg-m2    beam mass moment of inertia
    m            0.9048      kg       ball mass
    r            0.03        m        ball radius
    Jb           0.000326    kg-m2    ball mass moment of inertia
    g            9.81        m/s2     gravitational acceleration

TABLE 2.7  Numerical Parameters for CE5 System

    Parameter    Value        Units    Name
    M            1.3608       kg       cart mass
    k            186.3        N/m      spring stiffness constant
    m            0.096        kg       pendulum-end point mass
    J            0.0002175    kg-m2    pendulum mass moment of inertia
    e            0.0592       m        pendulum length

Simulate and plot the open-loop state variables in response to


the initial conditions q(0) = 0.05 m, q̇(0) = 0, θ (0) = π/4 rad,
and θ̇ (0) = 0 rad/s. Simulate long enough to demonstrate the
steady-state behavior. What are the system eigenvalues? Based
on these eigenvalues and the physical system, explain the system
response to these initial conditions.
Since this book does not focus on modeling, the solution for
CE1.5a is given below:

    (M + m)q̈(t) + kq(t) + me(θ̈(t) cos θ(t) − θ̇^2(t) sin θ(t)) = 0
    (J + me^2)θ̈(t) + me q̈(t) cos θ(t) = n(t)

A valid state-space realization for this system is given below:

    x1(t) = q(t)               x3(t) = θ(t)
    x2(t) = q̇(t) = ẋ1(t)       x4(t) = θ̇(t) = ẋ3(t)

    A = [          0              1    0    0             B = [          0
          −k(J + me^2)/d(θ̃)       0    0    0                   −me cos(θ̃)/d(θ̃)
                   0              0    0    1                            0
           k me cos(θ̃)/d(θ̃)       0    0    0 ]                  (M + m)/d(θ̃)    ]

    C = [ 1   0   0   0 ]        D = 0

where d(θ̃) = (M + m)(J + me^2) − (me cos(θ̃))^2. Note that
this linearized state-space realization depends on the zero-torque

equilibrium for which the linearization was performed. For CE5,


place a further constraint on this system linearization to obtain
a linear time-invariant system: Set the nominal pendulum angle
to θ̃ = π/4. Discuss the likely impact of this linearization and
constraint.
Note: The original system of CE1.5 is nonlinear (because
the pendulum can rotate without limit); in order to control it
properly, one must use nonlinear techniques that are beyond
the scope of this book. Please see Bernstein (1998), a special
nonlinear control journal issue with seven articles that survey
different nonlinear control approaches applied to this benchmark
problem.
CE2.5b Calculate the diagonal canonical form realization for the CE5
system.
