CHAPTER 3

MODELS OF LINEAR SYSTEMS

The nonlinear system models that are the topic of this book are all generalizations, in
one way or another, of linear models. Thus, to understand nonlinear models, it is first
necessary to appreciate the structure and behavior of the linear system models from which
they evolved. Consequently, this chapter will review a variety of mathematical models
used for linear systems.
The fundamental properties of linear systems will be defined first and then the most
commonly used models for linear systems will be introduced. Methods for identifying
models of linear systems will be presented in Chapter 5.

3.1 LINEAR SYSTEMS

Linear systems must obey both the principles of proportionality (1.5) and superposition
(1.6). Thus, if N is a linear system, then

N(k1 u1 (t) + k2 u2 (t)) = k1 y1 (t) + k2 y2 (t)

where k1 and k2 are any two scalar constants and

y1 (t) = N(u1 (t))


y2 (t) = N(u2 (t))

are any two input–output pairs.
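
These two properties are easy to check numerically for any candidate system. The following sketch uses a simple FIR filter as a stand-in for N; the filter weights, test signals, and scalars are arbitrary choices for illustration only:

    import numpy as np

    # A stand-in linear system N: a short FIR filter (arbitrary illustrative weights).
    def N(u):
        h = np.array([0.5, 0.3, 0.2])
        return np.convolve(u, h)[:len(u)]     # causal filtering, truncated to the input length

    rng = np.random.default_rng(0)
    u1, u2 = rng.standard_normal(100), rng.standard_normal(100)
    k1, k2 = 2.0, -0.7

    lhs = N(k1 * u1 + k2 * u2)                # response to the weighted sum of inputs
    rhs = k1 * N(u1) + k2 * N(u2)             # weighted sum of the individual responses
    print(np.allclose(lhs, rhs))              # True: proportionality and superposition hold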

Identification of Nonlinear Physiological Systems, by David T. Westwick and Robert E. Kearney. ISBN 0-471-27456-9, © 2003 Institute of Electrical and Electronics Engineers.


3.2 NONPARAMETRIC MODELS

The principles of superposition and proportionality may be used to develop nonparametric models of linear systems. Conceptually, the idea is to select a set of basis functions
capable of describing any input signal. The set of the system’s responses to these basis
functions can then be used to predict the response to any input; that is, it provides a
model of the system.
The decomposition of an arbitrary input into a set of scaled basis functions is illustrated
in Figure 3.1; in this case the basis functions consist of a series of delayed pulses, shown
in the left column. The input, shown at the top of the figure, is projected onto the delayed
pulses to yield the expansion consisting of scaled copies of the basis elements, shown
in the right column. The input signal may be reconstructed by summing the elements of
the expansion.
Knowing the system’s response to each basis function, the response to each scaled
basis function can be determined using proportionality (1.5). Then, by superposition (1.6),
the individual response components may be summed to produce the overall response.
This is illustrated in Figure 3.2, where the scaled basis functions and the corresponding
responses of a linear system are shown in the first three rows. The bottom row shows
that summing these generates the original input signal and the resulting response.
Note that this approach will work with any set of basis functions that spans the set of
input signals. However, it is desirable to select a set of basis functions that are orthog-
onal since this leads to the most concise signal representations and best conditioning of
identification algorithms.

Figure 3.1 Expansion of a signal onto a basis of delayed pulses. The input signal (top) is projected
onto a series of delayed pulses (left), resulting in the decomposition shown on the right.

Figure 3.2 Linear systems and superposition. (A, C, E) Scaled and delayed unit pulse inputs.
(B, D, F) Responses of a linear, time-invariant system to the scaled pulses. (G) The sum of the
three scaled, delayed pulses. (H) The system’s response to the input G. By superposition, this is
the sum of the responses to the individual pulses.

3.2.1 Time Domain Models


Using the set of delayed pulses as basis functions leads to a nonparametric time domain
model. Consider a pulse of width Δt and of unit area,

d(t, Δt) = 1/Δt   for |t| < Δt/2
d(t, Δt) = 0      otherwise        (3.1)

as illustrated in Figure 3.2A. Let the response of the linear system, N, to a single pulse be

N(d(t, Δt)) = h(t, Δt)        (3.2)

Next, consider an input signal consisting of a weighted sum of delayed pulses,

u(t) = Σ_{k=−∞}^{∞} u_k d(t − kΔt, Δt)        (3.3)

By superposition, the output generated by this “staircase” input will be

y(t) = N(u(t))
     = Σ_{k=−∞}^{∞} u_k N(d(t − kΔt, Δt))
     = Σ_{k=−∞}^{∞} u_k h(t − kΔt, Δt)        (3.4)

The relationships among these signals are illustrated in Figure 3.2. Thus, if the input
can be represented as the weighted sum of pulses, then the output can be written as the
equivalently weighted sum of the pulse responses.
Now, consider what happens in the limit, as Δt → 0, and the unit-area pulses become impulses,

lim_{Δt→0} d(t, Δt) = δ(t)        (3.5)

The pulse response becomes the impulse response, h(t), and the sum of equation (3.4) becomes the convolution integral,

y(t) = ∫_{−∞}^{∞} h(τ)u(t − τ) dτ        (3.6)

Thus, the response of the system to an arbitrary input can be determined by convolving it
with the system’s impulse response. Hence, the impulse response function (IRF) provides
a complete model of a system’s dynamic response in the time domain.∗

∗ The convolution operation, (3.6), is often abbreviated using a centered asterisk—that is, y(t) = h(τ) ∗ u(t).

Theoretically, the limits of the integration in (3.6) extend from τ = −∞ to τ = ∞.
However, in practice, the impulse response will usually be of finite duration so that

h(τ ) = 0 for τ ≤ T1 or τ ≥ T2 (3.7)

The value of T1 , the lower integration limit in the convolution, determines whether
the system is causal or noncausal. If T1 ≥ 0, the IRF starts at the same time or after
the impulse. In contrast, if T1 < 0, the IRF starts before the impulse is applied and the
system is noncausal. Any physically realizable system must be causal but, as discussed
in Section 1.2.2, there are important, practical situations where noncausal responses are
observed. These include behavioral experiments with predictable inputs, signals measured
from inside feedback loops, and situations where the roles of the system input and output
are deliberately reversed for analysis purposes.
The value of T2 , the upper integration limit in the convolution (3.6), determines the
memory length of the system. This defines how long the response to a single impulse
lasts or, conversely, how long a “history” must be considered when computing the cur-
rent output.
Thus, for a finite memory, causal system, the convolution integral simplifies to

y(t) = ∫_0^T h(τ)u(t − τ) dτ        (3.8)

If the sampling rate is adequate, the convolution integral can be converted to discrete
time using rectangular integration, to give the sum:

y(t) = Σ_{τ=0}^{T−1} h(τ)u(t − τ) Δt        (3.9)

where Δt is the sampling interval. Note that in this formulation the time variable, t, and time lag, τ, are discrete variables that are integer multiples of the sampling interval. Notice that although the upper limit of the summation is T − 1, the memory length is described as “T,” the number of samples between 0 and T − 1 inclusive.
In practice, the impulse response is often scaled to incorporate the effect of the sampling increment, Δt. Thus, equation (3.9) is often written as

y(t) = Σ_{τ=0}^{T−1} g(τ)u(t − τ)

where g(τ) = Δt · h(τ).
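
In code, the discrete convolution sum (3.9) is a pair of nested loops, or a single library call; the sketch below (the function and variable names are arbitrary) implements both forms:

    import numpy as np

    def convolve_irf(h, u, dt):
        """Evaluate y(t) = sum_{tau=0}^{T-1} h(tau) u(t - tau) dt, equation (3.9)."""
        T, y = len(h), np.zeros(len(u))
        for t in range(len(u)):
            for tau in range(min(T, t + 1)):      # only past and present samples contribute
                y[t] += h[tau] * u[t - tau] * dt
        return y

    # Equivalent, using the scaled IRF g(tau) = dt * h(tau):
    #   y = np.convolve(dt * h, u)[:len(u)]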


The linear IRF will be generalized, in Sections 4.1 and 4.2, to produce the Volterra
and Wiener series models for nonlinear systems. Furthermore, the IRF is the basic
building block for block-structured models and their generalizations, to be discussed
in Sections 4.3–4.5.

3.2.1.1 Example: Human Ankle Compliance Model—Impulse Response


Throughout this chapter, a running example will be used to illustrate the linear system
models. The system used in this running example is the dynamic compliance of the human
ankle, which defines the dynamic relationship between torque, Tq(t), and position, θ(t).
Under some conditions, this relation is well modeled by a causal, linear, time-invariant
system (Kearney and Hunter, 1990). The example used in this chapter is based on a
transfer function model (see Section 3.3.1 below) of ankle compliance using parameter
values from a review article on human joint dynamics (Kearney and Hunter, 1990).
Figure 3.3 shows a typical ankle compliance IRF. Inspection of this IRF provides
several insights into the system. First, the largest peak is negative, indicating that the
output position, at least at low frequencies, will have the opposite sign from the input.
Second, the system is causal with a memory of less than 100 ms. Finally, the decaying
oscillations in the IRF suggest that the system is somewhat resonant.

Figure 3.3 Impulse response of the dynamic compliance of the human ankle.

3.2.2 Frequency Domain Models


Sinusoids are another set of basis functions commonly used to model signals—in this
case in the frequency domain. The superposition and scaling properties of linear systems
guarantee that the steady-state response to a sinusoidal input will be a sinusoid at the
same frequency, but with a different amplitude and phase. Thus, the steady-state response
of a linear system may be fully characterized in terms of how the amplitude and phase of
its sinusoidal response change with the input frequency. Normally, these are expressed as
the complex-valued frequency response of the system, H (j ω). The magnitude, |H (j ω)|,
describes how the amplitude of the input is scaled, whereas the argument (a.k.a. angle or
phase), φ(H (j ω)), defines the phase shift. Thus, the output generated by the sinusoidal
input, u(t) = sin(ωt), is given by

y(t) = |H (j ω)| sin(ωt + φ(H (j ω))) (3.10)

The Fourier transform


U(jω) = ∫_{−∞}^{∞} u(t) e^{−jωt} dt        (3.11)

expands time domain signals onto an infinite basis of sinusoids. It provides a convenient
tool for using the frequency response to predict the response to an arbitrary input, u(t).
Thus, the Fourier transform of the input, U (j ω), is computed and then multiplied by the
frequency response, H (j ω), to give the Fourier transform of the output,

Y (j ω) = H (j ω)U (j ω) (3.12)

Taking the inverse Fourier transform of Y (j ω) gives the time domain response, y(t).
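
For sampled data the same prediction can be carried out with the FFT standing in for the Fourier transform. The sketch below assumes h and u are sampled at interval dt and zero-pads so that the circular convolution implied by the DFT behaves like a linear one:

    import numpy as np

    def predict_output(h, u, dt):
        """Predict y(t) from Y(jw) = H(jw) U(jw), using the FFT."""
        n = len(u) + len(h) - 1                  # pad to avoid wrap-around effects
        H = np.fft.rfft(h, n) * dt               # frequency response (rectangular integration)
        U = np.fft.rfft(u, n)                    # Fourier transform of the input
        y = np.fft.irfft(H * U, n)               # inverse transform of the output spectrum
        return y[:len(u)]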
Methods for estimating the frequency response will be discussed in Section 5.3. The
nonlinear generalization of the frequency response, along with methods for its identifi-
cation, will be presented in Section 6.1.3.

3.2.2.1 Example: Human Ankle Compliance—Frequency Response


Figure 3.4 shows the frequency response, the gain and phase plotted as functions of
frequency, of the human ankle compliance dynamics corresponding to the IRF presented
in Figure 3.3. The gain plot shows that the response is almost constant at about −50 dB at low frequencies, has a slight resonance at about 25 Hz, and falls off rapidly at higher frequencies.

Figure 3.4 Frequency response of the dynamic compliance of the human ankle. (A) Transfer func-
tion magnitude |H (j ω)|. (B) Transfer function phase φ(H (j ω)). The system’s impulse response
is shown in Figure 3.3.

The phase plot shows that at low frequencies the output has the opposite sign to the
input. In contrast, at higher frequencies, the output is in phase with the input, although
greatly attenuated.
This behavior is consistent with a physical interpretation of a system having elastic,
viscous, and inertial properties.

3.2.2.2 Relationship to the Impulse Response   Consider the Fourier transform (3.11) of the convolution integral (3.6),

Y(jω) = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} h(τ)u(t − τ) dτ ] e^{−jωt} dt
      = ∫_{−∞}^{∞} h(τ) [ ∫_{−∞}^{∞} u(t − τ) e^{−jωt} dt ] dτ
      = U(jω) ∫_{−∞}^{∞} h(τ) e^{−jωτ} dτ
      = U(jω)H(jω)

where H (j ω) is the Fourier transform of the impulse response, h(τ ).


This relation is identical to equation (3.12), demonstrating that the frequency response,
H (j ω), is equal to the Fourier Transform of the impulse response, h(τ ). Conversely, the
inverse Fourier transform of the frequency response is equal to the impulse response.
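
This equivalence is easily verified numerically. The sketch below uses an arbitrary first-order system, h(τ) = e^{−τ}, whose frequency response is known analytically to be 1/(1 + jω):

    import numpy as np

    dt = 0.001
    tau = np.arange(0, 10, dt)
    h = np.exp(-tau)                              # IRF of an arbitrary first-order system

    H_fft = np.fft.rfft(h) * dt                   # Fourier transform of the sampled IRF
    omega = 2 * np.pi * np.fft.rfftfreq(len(h), dt)
    H_exact = 1.0 / (1.0 + 1j * omega)            # analytical frequency response

    print(np.max(np.abs(H_fft - H_exact)))        # small; differences are discretization error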

3.3 PARAMETRIC MODELS

Any causal, linear, time-invariant, continuous-time system can be described by a differential equation with constant coefficients of the form

d^n y(t)/dt^n + a_{n−1} d^{n−1}y(t)/dt^{n−1} + · · · + a_1 dy(t)/dt + a_0 y(t)
      = b_m d^m u(t)/dt^m + b_{m−1} d^{m−1}u(t)/dt^{m−1} + · · · + b_1 du(t)/dt + b_0 u(t)        (3.13)

where n ≥ m. Equation (3.13) is often abbreviated as

A(D)y = B(D)u

where A and B are polynomials in the differential operator, D = d/dt.

3.3.1 Parametric Frequency Domain Models


Taking the Laplace transform of (3.13) gives

(s^n + a_{n−1}s^{n−1} + · · · + a_1 s + a_0)Y(s) = (b_m s^m + b_{m−1}s^{m−1} + · · · + b_1 s + b_0)U(s)

which may be manipulated to obtain the transfer function,

H(s) = Y(s)/U(s) = (b_m s^m + · · · + b_1 s + b_0) / (s^n + a_{n−1}s^{n−1} + · · · + a_1 s + a_0) = B(s)/A(s)        (3.14)

Note that equation (3.14) can be factored to give


H(s) = K (s − z_1) · · · (s − z_m) / ((s − p_1) · · · (s − p_n))        (3.15)
where the zeros (zi ) and poles (pi ) of the transfer function may be real, zero, or com-
plex (if complex they come as conjugate pairs). Furthermore, for a physically realizable
system, n ≥ m, that is, the system cannot have more zeros than poles.
It is straightforward to convert a transfer function into the equivalent nonparametric
model. Thus, the impulse response function is determined from Y (s) = H (s)U (s),
by setting U (s) = L(δ) = 1 and taking the inverse Laplace transform. The frequency
response may be determined by setting U (s) = ω/(s 2 + ω2 ), the Laplace transform
of u(t) = sin(ωt), and then taking the inverse Laplace transform. Operationally, this corresponds to making the substitution s = jω in the transfer function.

Figure 3.5 Pole-zero map of a continuous-time, parametric model of the human ankle compliance. Notice that the system has two poles and no zeros.
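
In practice, such conversions are usually done numerically. The following sketch uses scipy.signal with the ankle compliance transfer function of the example that follows; the time and frequency grids are arbitrary choices:

    import numpy as np
    from scipy import signal

    # H(s) = -76 / (s^2 + 114 s + 26700), the ankle compliance model of Section 3.3.1.1
    H = signal.TransferFunction([-76.0], [1.0, 114.0, 26700.0])

    t = np.linspace(0, 0.1, 200)
    t, h = signal.impulse(H, T=t)                 # impulse response (compare Figure 3.3)

    w = 2 * np.pi * np.logspace(0, 3, 200)        # 1 Hz to 1 kHz, in rad/s
    w, Hjw = signal.freqresp(H, w)                # frequency response (compare Figure 3.4)
    gain_db = 20 * np.log10(np.abs(Hjw))
    phase_deg = np.angle(Hjw, deg=True)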

3.3.1.1 Example: Human Ankle Compliance—Transfer Function   Figure 3.5 shows a pole-zero map of a parametric model of the human ankle compliance. The transfer function is given by

H(s) = Θ(s)/T(s) ≈ −76 / (s^2 + 114s + 26,700)

This model has the same form as that of a mass, spring, and damper,

H(s) = Θ(s)/T(s) = 1 / (I s^2 + B s + K)
Consequently, it is possible to obtain some physical insight from the parameter values.
For example, a second-order transfer function is often written as

H(s) = Gωn^2 / (s^2 + 2ζωn s + ωn^2)
where ωn is the undamped natural frequency of the system, G is its DC gain, and ζ is
the damping factor. For this model we have

ωn = 163.4 rad/s = 26 Hz
G = −0.0028
ζ = 0.35

In fact, this model was constructed using values for these parameters obtained from a
review on human joint dynamics (Kearney and Hunter, 1990). This transfer function was
used to generate all the compliance models presented in this chapter.
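
Recovering these physical parameters from the polynomial coefficients is a one-line calculation each; a sketch using the coefficients quoted above:

    import numpy as np

    # H(s) = -76 / (s^2 + 114 s + 26700); the denominator is s^2 + 2*zeta*wn*s + wn^2
    b0, a1, a0 = -76.0, 114.0, 26700.0

    wn = np.sqrt(a0)                  # undamped natural frequency: 163.4 rad/s (about 26 Hz)
    zeta = a1 / (2.0 * wn)            # damping factor: about 0.35
    G = b0 / a0                       # DC gain H(0): about -0.0028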

3.3.2 Discrete-Time Parametric Models


Linear systems may also be modeled in discrete time using difference equations. In this
case, the fundamental operators are the forward and backward shift operators, defined as∗

zu(t) = u(t + 1) (3.16)


z−1 u(t) = u(t − 1)        (3.17)

∗ Note that in this context, z and z−1 are operators that modify signals. Throughout this text, the symbol z(t) is used in both discrete and continuous time to denote the measured output of a system. The context should make it apparent which meaning is intended.

The forward difference is defined as

[z − 1]u(t) = zu(t) − u(t) = u(t + 1) − u(t)

Even though there is an analogy between the derivative and the forward difference,
it is the backward shift operator, z−1 , that is most commonly employed in difference
equation models. Thus, by analogy to equation (3.13), any deterministic, linear, time-
invariant, discrete-time system can be modeled using a difference equation of the form

A(z−1 )y(t) = B(z−1 )u(t) (3.18)

It must be remembered that the transformation from continuous to discrete time is not achieved by simply replacing derivatives with forward differences. A variety of
more complex transformations are used including the bilinear transform, Padé approxi-
mations, and the impulse invariant transform. These transformations differ chiefly in the
assumptions made regarding the behavior of the signals between sampling instants. For
example, a zero-order hold assumes that the signals are “staircases” that remain constant
between samples. This is equivalent to using rectangular (Euler) integration. In addition,
a discrete-time model’s parameters will depend on the sampling rate; as it increases there
will be less change between samples and so the backward differences will be smaller. The
interested reader should consult a text on digital signal processing, such as Oppenheim
and Schafer (1989), for a more detailed treatment of these issues.
Adding a white noise component, w(t), to the output of a difference equation model
(3.18) gives the output error (OE) model.

A(z−1 )y(t) = B(z−1 )u(t)


z(t) = y(t) + w(t)

This is usually written more compactly as

z(t) = [B(z−1)/A(z−1)] u(t) + w(t)        (3.19)

The difference equation (3.18) and output error (3.19) models are sometimes referred
to as autoregressive moving average (ARMA) models, or as ARMA models with additive
noise in the case of an OE model. However, the term ARMA model is more correctly
used to describe the stochastic signal model described in the next section.
Finally, note that if A(z−1 ) = 1, the difference equation reduces to a finite impulse
response (FIR) filter,

y(t) = B(z−1)u(t)
     = b_0 u(t) + b_1 u(t − 1) + · · · + b_m u(t − m)        (3.20)

3.3.2.1 Example: Human Ankle Compliance—Discrete-Time Transfer Function   A z-domain transfer function description of human ankle compliance is given by

H(z) = Θ(z)/T(z) ≈ (−3.28z^2 − 6.56z − 3.28) × 10⁻⁴ / (z^2 − 1.15z + 0.606)

for a sampling frequency of 200 Hz. Figure 3.6 shows the corresponding pole-zero map.

Figure 3.6 Pole-zero map of the discrete-time, parametric frequency domain representation of the dynamic ankle compliance, at a sampling frequency of 200 Hz. Note the presence of two zeros at z = −1.
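
A continuous-to-discrete conversion of this kind can be reproduced numerically. The sketch below assumes a bilinear (Tustin) transformation, which at 200 Hz yields coefficients close to those quoted above, including the double zero at z = −1; other methods (for example, a zero-order hold) would give slightly different values:

    from scipy import signal

    # Continuous-time ankle compliance model: H(s) = -76 / (s^2 + 114 s + 26700)
    num, den = [-76.0], [1.0, 114.0, 26700.0]
    fs = 200.0                                               # sampling frequency (Hz)

    num_d, den_d, dt = signal.cont2discrete((num, den), 1.0 / fs, method='bilinear')
    print(num_d)   # approximately [-3.3e-4, -6.5e-4, -3.3e-4]: a double zero at z = -1
    print(den_d)   # approximately [1, -1.15, 0.607]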

3.3.2.2 Parametric Models of Stochastic Signals   Linear difference equations may also be used to model stochastic signals as the outputs of linear dynamic systems
driven by unobserved white Gaussian noise sequences. The simplest of these models is
the autoregressive (AR) model,

A(z−1 )y(t) = w(t) (3.21)

where w(t) is filtered by the all-pole filter, 1/A(z−1 ). In a systems context, w(t) would
be termed a process disturbance since it excites the dynamics of a system. The origin of
the term autoregressive can best be understood by examining the difference equation of
an AR model:

y(t) = w(t) − a1 y(t − 1) − · · · − an y(t − n)

which shows that the current value of the output depends on the current noise sample,
w(t), and n past output values.
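
Simulating an AR model amounts to filtering a white noise sequence with the all-pole filter 1/A(z−1); a sketch with an arbitrary second-order polynomial:

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    w = rng.standard_normal(1000)          # unobserved white Gaussian disturbance

    A = [1.0, -1.5, 0.7]                   # arbitrary A(z^-1) with stable roots
    y = signal.lfilter([1.0], A, w)        # y(t) = w(t) + 1.5 y(t-1) - 0.7 y(t-2)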

A more general signal description extends the filter to include both poles and zeros,

A(z−1 )y(t) = C(z−1 )w(t) (3.22)

This structure is known as an autoregressive moving average (ARMA) model and has
the same form as the deterministic difference equation, (3.18). However, in this case the
input is an unmeasured, white-noise, process disturbance so the ARMA representation
is a stochastic model of the output signal. In contrast, equation (3.18) describes the
deterministic relationship between measured input and output signals.

3.3.2.3 Parametric Models of Stochastic Systems   Parametric discrete-time models can also represent systems having both stochastic and deterministic components.
For example, the autoregressive exogenous input (ARX) model given by

A(z−1 )y(t) = B(z−1 )u(t) + w(t) (3.23)

incorporates terms accounting for both a measured (exogenous) input, u(t), and an unobserved process disturbance, w(t). Consequently, its output will contain both deterministic and stochastic components.

Figure 3.7 Two equivalent representations of a closed-loop control system with controlled input, u(t), and two process disturbances, w1(t) and w2(t). (A) Block diagram explicitly showing the feedback loop including the plant Gp(z) and controller Gc(z). (B) Equivalent representation comprising three open-loop transfer functions, one for each input: Gc(z)Gp(z)/(1 + Gc(z)Gp(z)) for u(t), Gp(z)/(1 + Gc(z)Gp(z)) for w1(t), and 1/(1 + Gc(z)Gp(z)) for w2(t). Note that the three open-loop transfer functions all share the same denominator.
The ARMA model can be generalized in a similar manner to give the ARMAX model,

A(z−1 )y(t) = B(z−1 )u(t) + C(z−1 )w(t) (3.24)

which can be written as

y(t) = [B(z−1)/A(z−1)] u(t) + [C(z−1)/A(z−1)] w(t)

This makes it evident that the deterministic input, u(t), and the noise, w(t), are filtered
by the same dynamics, the roots of A(z−1 ). This is appropriate if the noise is a process
disturbance. For example, consider the feedback control system illustrated in Figure 3.7.
Regardless of where the disturbance (or control signal) enters the closed-loop system, the
denominator of the transfer function will be 1+Gc (z)Gp (z). Thus, both the deterministic
and noise models will have the same denominators, and an ARX or ARMAX model
structure will be appropriate.
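
Generating data from these structures is just a matter of filtering the two inputs and summing; the sketch below simulates an ARX model (3.23) with arbitrary illustrative polynomials:

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(1)
    u = rng.standard_normal(1000)                    # measured (exogenous) input
    w = 0.1 * rng.standard_normal(1000)              # unobserved process disturbance

    A = [1.0, -1.5, 0.7]                             # arbitrary A(z^-1)
    B = [0.0, 1.0, 0.5]                              # arbitrary B(z^-1), with a one-sample delay

    # ARX: A(z^-1) y(t) = B(z^-1) u(t) + w(t),  i.e.  y = [B/A] u + [1/A] w
    y = signal.lfilter(B, A, u) + signal.lfilter([1.0], A, w)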
However, if the noise source is outside the feedback loop, or if the process generating
the disturbance input contains additional poles not found in the closed-loop dynamics,
the ARMAX structure is not appropriate. The more general Box–Jenkins structure

y(t) = [B(z−1)/A(z−1)] u(t) + [C(z−1)/D(z−1)] w(t)        (3.25)

addresses this problem. Here the deterministic and stochastic inputs are filtered by sep-
arate dynamics, so the effects of process and measurement noise may be combined
into the single term, w(t). The Box–Jenkins model is the most general parametric linear
system model; all other linear parametric models are special cases of the Box–Jenkins
model. This is illustrated in Figure 3.8 as follows:

1. The output error model (Figure 3.8B) is obtained by removing the noise filter from the Box–Jenkins model, so that C(z−1) = D(z−1) = 1.
2. The ARMAX model (Figure 3.8C) is obtained by equating the denominator polynomials in the deterministic and noise models (i.e., A(z−1) = D(z−1)).
3. Setting the numerator of the noise model to 1, C(z−1) = 1, reduces the ARMAX model to an ARX structure (Figure 3.8D).
4. The ARMA model (Figure 3.8E) can be thought of as a Box–Jenkins (or ARMAX) structure with no deterministic component.
5. Setting the numerator of the noise model to 1, C(z−1) = 1, reduces the ARMA model to an AR model (Figure 3.8F).
Figure 3.8 Taxonomy of the different parametric difference equation-type linear system models.
The input, u(t), and output, z(t), are assumed to be measured signals. The disturbance, w(t), is an
unobserved white noise process. (A) The Box–Jenkins model is the most general structure. (B–F)
Models that are special cases of the Box–Jenkins structure.

3.4 STATE-SPACE MODELS

An nth-order differential equation describing a linear time-invariant system (3.13) can also be expressed as a system of n coupled first-order differential equations. These are expressed conveniently in matrix form as

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)        (3.26)

This representation is known as a state-space model where the state of the system
is defined by an n element vector, x(t). The n × n matrix A maps the n-dimensional
state onto its time derivative. For a single-input single-output (SISO) system, B is an
n × 1 (column) vector that maps the input onto the derivative of the n-dimensional state.
Similarly, C will be a 1×n (row) vector, and D will be a scalar. Multiple-input multiple-
output (MIMO) systems may be modeled with the same structure simply by increasing
the dimensions of the matrices B, C and D. The matrices B and D will have one column
per input, while C and D will have one row per output.
In discrete time, the state-space model becomes a set of n coupled difference equations.

x(t + 1) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)        (3.27)

where x(t) is an n-dimensional vector, and A, B, C, D are time-invariant matrices of appropriate dimensions.
State-space models may be extended to include the effects of both process and mea-
surement noise, as follows:
x(t + 1) = Ax(t) + Bu(t) + w(t)
y(t) = Cx(t) + Du(t) + v(t)        (3.28)

Note that the terms representing the process disturbance, w(t), and measurement noise,
v(t), remain distinct. This contrasts with the Box–Jenkins model, where they are com-
bined into a single disturbance term.
The impulse response of a discrete-time, state-space model may be generated by setting
the initial state, x(0), to zero, applying a discrete impulse input, and solving equations
(3.27) for successive values of t. This gives

h = [ D   CB   CAB   CA^2B   . . . ]^T        (3.29)

where h is a vector containing the impulse response, h(τ ).
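
Equation (3.29) translates directly into a short recursion; the sketch below is a minimal implementation (the function name is arbitrary), and applied to the realization given in Section 3.4.1 it should reproduce the IRF of Figure 3.9:

    import numpy as np

    def ss_impulse_response(A, B, C, D, n_lags):
        """h = [D, CB, CAB, CA^2 B, ...], equation (3.29)."""
        h = [float(D)]
        x = B.copy()                     # state one step after the unit impulse is applied
        for _ in range(n_lags - 1):
            h.append((C @ x).item())     # output term C A^k B
            x = A @ x                    # propagate the state one sample forward
        return np.array(h)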


Now, consider the effects of transforming the matrices of the state-space system as
follows
A_T = T^{−1}AT,   B_T = T^{−1}B
C_T = CT,   D_T = D        (3.30)

where T is an invertible matrix. The impulse response of this transformed system will be

h_T = [ D_T   C_T B_T   C_T A_T B_T   . . . ]^T
    = [ D   CTT^{−1}B   CTT^{−1}ATT^{−1}B   . . . ]^T
    = [ D   CB   CAB   . . . ]^T

which is the same as the original system. Hence, T is a similarity transformation that does
not alter the input–output behavior of the system. Thus, a system’s state-space model will
have many different realizations with identical input–output behaviors. Consequently, it
is possible to choose the realization that best suits a particular problem. One approach is
to seek a balanced realization where all states have the same order of magnitude; this is
best suited to fixed precision arithmetic applications. An alternative approach is to seek
a minimal realization, one that minimizes the number of free parameters; this is most
efficient for computation since it minimizes the number of nonzero matrix elements.
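
The invariance of the input–output behavior under a similarity transformation is easy to confirm numerically; the sketch below applies a random invertible T to an arbitrary third-order system and compares the two impulse responses (the impulse-response helper from the previous sketch is repeated so the snippet stands alone):

    import numpy as np

    def ss_impulse_response(A, B, C, D, n_lags):
        h, x = [float(D)], B.copy()
        for _ in range(n_lags - 1):
            h.append((C @ x).item())
            x = A @ x
        return np.array(h)

    rng = np.random.default_rng(2)
    n = 3                                            # arbitrary system order
    A = 0.3 * rng.standard_normal((n, n))            # scaled to keep the system stable
    B = rng.standard_normal((n, 1))
    C = rng.standard_normal((1, n))
    D = 0.1

    T = rng.standard_normal((n, n))                  # a random (almost surely invertible) T
    Ti = np.linalg.inv(T)
    AT, BT, CT, DT = Ti @ A @ T, Ti @ B, C @ T, D    # transformed realization, equation (3.30)

    h1 = ss_impulse_response(A, B, C, D, 50)
    h2 = ss_impulse_response(AT, BT, CT, DT, 50)
    print(np.allclose(h1, h2))                       # True: identical input-output behavior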

3.4.1 Example: Human Ankle Compliance—Discrete-Time, State-Space Model

One discrete-time, state-space realization of the ankle-compliance model, considered throughout this chapter, is

A = [ 1.15  −0.606 ;  1  0 ],   B = [ 0.0313 ;  0 ]
C = [ −0.0330  −0.00413 ],   D = −3.28 × 10⁻⁴        (3.31)
In this case x2 (t) = x1 (t − 1), and the input drives only the first state, x1 (t). Thus, in
this realization the states act as a sort of delay line.
Figure 3.9 shows the discrete impulse response obtained from the state-space real-
ization shown above. Compare this with the continuous time IRF shown in Figure 3.3.
Notice that the continuous-time IRF is smoother than the discrete-time version. This
jagged appearance is largely due to the behavior of the IRF between sample points.

Figure 3.9 Impulse response computed from the discrete state-space model of the dynamic compliance of the human ankle.

3.5 NOTES AND REFERENCES

1. Bendat and Piersol (1986) is a good reference for nonparametric models of linear
systems, in both the time and frequency domains.
2. The relationships between continuous- and discrete-time signals and systems are
dealt with in texts on digital signal processing, such as Oppenheim and Schafer
(1989), and on digital control systems, such as Ogata (1995).
3. There are several texts dealing with discrete-time, stochastic, parametric models.
Ljung (1999), and Söderström and Stoica (1989) in particular are recommended.
4. For more information concerning state-space systems, the reader should consult
Kailath (1980).

3.6 THEORETICAL PROBLEMS

1. Suppose that the output of a system is given by

y(t) = ∫_0^T h(t, τ)u(t − τ) dτ

• Is this system linear?
• Is this system causal?
• Is it time-invariant?
Justify your answers.
2. Compute and plot the frequency response of the following transfer function,
H(s) = (s − 2)/(s + 2)
What does this system do? Which frequencies does it pass, which does it stop? Com-
pute its complex conjugate, H ∗ (s). What is special about H (j ω)H ∗ (j ω)?
3. Draw a pole-zero map for the deterministic, discrete-time parametric model,

y(t) = u(t) − 0.25u(t − 1) + 0.5y(t − 1)

Next, compute its impulse response. How long does it take for the impulse response
to decay to 10% of its peak value? How long does it take before it reaches less than
1%? How many lags would be required for a reasonable FIR approximation to this
system?
4. Consider the following state-space system

x(t + 1) = Ax(t) + Bu(t)
w(t − 1) = Gw(t) + Hu(t)
y(t) = Cx(t) + Ew(t) + Du(t)

Is this system linear? time-invariant? causal? deterministic? Compute the system's response to a unit impulse. Can you express the output as a convolution?

3.7 COMPUTER EXERCISES

1. Create a linear system object. Check superposition. Is the system causal?


2. Transform a linear system object from TF to IRF to SS.
3. Excite a linear system with a cosine—look at the first few points. Why isn’t the
response a perfect sinusoid?
4. Generate two linear system objects, G and H. Compute G(H(u)) and H(G(u)). Are they equivalent? Explain why or why not.
