Digital
Control Systems
With 159 Figures
Revised and enlarged edition of the German book 'Digitale Regelsysteme', 1977, translated by the author in cooperation with Dr. D. W. Clarke, Oxford, U.K.
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned,
specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine
or similar means, and storage in data banks. Under § 54 of the German Copyright Law where copies are made for
other than private use a fee is payable to 'Verwertungsgesellschaft Wort', Munich.
The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific
statement, that such names are exempt from the relevant protective laws and regulations and therefore free for
general use.
Preface
Compared with analog control systems, control systems using process computers or process microcomputers have the following characteristics:
Many of the methods, developments and results have been worked out in a research project funded by the Bundesminister für Forschung und Technologie (DV 5.505) within the project 'Prozesslenkung mit DV-Anlagen (PDV)' and in research projects funded by the Deutsche Forschungsgemeinschaft in the Federal Republic of Germany.
The first edition of the book was published in 1977 in German with the title 'Digitale Regelsysteme'. This book is a translation of the first edition and contains several supplements. To ease the introduction into the basic mathematical treatment of linear sampled-data systems, chapter 3 has been extended. The design of multivariable control systems has been supplemented by section 18.1.5, chapter 20 (matrix polynomial approach) and sections 21.2 to 21.4 (state control approach). The old chapters 21 and 22 on signal filtering and state estimation have moved to chapters 27 and 15; therefore all chapters from chapter 22 onwards have a number one less than in the first edition. Because of the progress in recursive parameter estimation, section 23.8 has been added. Chapter 25 has been extended considerably, taking into account the recent development in parameter-adaptive control. Finally, chapter 30 on case studies of digital control has been added.
1. Introduction 1
3. Discrete-time Systems 14
   3.1 Discrete-time Signals 14
       3.1.1 Discrete-time Functions, Difference Equations 14
       3.1.2 Impulse Trains 18
   3.2 Laplace-transformation of Discrete-time Functions 20
       3.2.1 Laplace-transformation 20
       3.2.2 Sampling Theorem 20
       3.2.3 Holding Element 23
   3.3 z-Transform 24
       3.3.1 Introduction of z-Transform 24
       3.3.2 z-Transform Theorems 26
       3.3.3 Inverse z-Transform 26
   3.4 Convolution Sum and z-Transfer Function 27
       3.4.1 Convolution Sum 27
       3.4.2 Pulse Transfer Function and z-Transfer Function 28
       3.4.3 Properties of the z-Transfer Function 31
   3.5 Poles and Stability 33
       3.5.1 Location of Poles in the z-Plane 33
       3.5.2 Stability Condition 37
       3.5.3 Stability Criterion through Bilinear Transformation 37
   3.6 State Variable Representation 39
   3.7 Mathematical Models of Processes 51
       3.7.1 Basic Types of Technical Process 52
       3.7.2 Determination of Discrete-time Models from Continuous-Time Models 54
5. Parameter-optimized Controllers 74
   7.3 Choice of the Sample Time for Deadbeat Controllers 131
Appendix 528
Literature 535
List of Abbreviations and Symbols 552
Subject Index 557
1. Introduction
[Figure: the five levels of process control, LEVEL 1 (lowest) to LEVEL 5 (highest)]
At the second level the process is monitored. The functions of the process are checked, for example by testing whether certain variables exceed given limits. Monitoring can be restricted to current variables, but predicted future variables can also be taken into account; the outputs of the monitoring level are alarms to the plant personnel. If steps are automatically taken to counteract a disturbance or to shut down the plant, this is called security control.
The upper level, here the fifth level, is for management. A whole system of processes (a factory, interconnected networks, large economic units) is organised for planning with regard to the market, the given raw materials and the given personnel.
By 1959 the first process computers were used on-line, but mainly in open loop for data-logging, data reduction and process monitoring. The direct control of process variables was performed by analog equipment, principally because of the unsatisfactory reliability of process computers at that time. Then the reference values (set points) of analog controllers were given by the process computers (supervisory control), for example according to time schedules or for process optimization. Process computers for direct digital control (DDC) in an on-line, closed-loop fashion have been used since 1962 for chemical and power plants [1.1], [1.2], [1.3], [1.4], [1.5], [1.6]. Microcomputers then began to take over the lower level functions of analog devices and minicomputers. Further developments are underway; microcomputers will have a major influence on measurement and control engineering.
This book considers digital control at the lowest level of process control. However, many of our methods for designing algorithms, for obtaining process models, for estimation of states and parameters, for noise filtering and actuator control can also be applied to the synthesis of digital monitoring, optimization and coordination systems.
The Contents
This book deals with the design of digital control systems with reference to process computers and microcomputers. The book starts with part A, 'Processes and Process Computers'.
The general signal flow for digital control systems, and the steps taken for the design of digital control systems, are explained in chapter 2. A short introduction to the mathematical treatment of linear discrete-time systems follows in chapter 3. The basic types of technical processes and the ways to obtain mathematical process models for discrete-time signals are also discussed.
The process and signal models used in this book are mainly parametric, in the form of difference equations or vector difference equations, since modern synthesis procedures are based on these models. Processes lend themselves to compact description by a few parameters, and methods of time-domain synthesis are obtained with a small amount of calculation and provide structurally optimal controllers. These models are the direct result of parameter estimation methods and can be used directly for state observers or state estimators. Nonparametric models such as transient functions or frequency responses in tabular form do not offer these advantages. They restrict the possibilities for synthesis, particularly with regard to computer-aided design and to adaptive control algorithms.
Chapter 8 deals with the design of state controllers and state observers. As well as other topics, the design for external, constantly acting disturbances is treated, and further modifications are indicated for computer applications. The design is based on minimization of quadratic performance functions as well as on pole assignment.
The control of processes with large time delays, including the predictor controller, is taken up in chapter 9.
[Figure 1.2 is a survey table: SISO and interconnected MIMO control systems (parameter-optimized, cancellation, deadbeat, state and predictor controllers, linear controllers with pole assignment, cascaded, feedforward and matrix polynomial control, with the relevant chapter numbers), fixed and adaptive control algorithms (tuning rules, computer aided design, self-optimizing parameter-adaptive control algorithms, actuator control, noise filter), and the information used on processes and signals (measurable signals, process models, parameter estimation, state observer and state estimation).]

Fig. 1.2 Survey of the control system structures under discussion, the information used by them on the processes and signals, and the adjustment of control systems and processes. ( ): chapter number; c.: control algorithm (c.f. chapter 2). SISO: single input/single output; MIMO: multi input/multi output.
In data processing with process computers, signals are sampled and digitized, resulting in discrete (discontinuous) signals which are quantized in amplitude and time, as shown in Fig. 2.1.

[Figure 2.1: a continuous signal y(t) and its sampled, amplitude-quantized counterpart]
The samplers of the input and output signal do not operate synchronously, but are displaced by an interval T_R. This interval results from the A/D-conversion and the data processing within the central processing unit. Since this interval is usually small in comparison with the time constants of the actuators, processes and sensors, it can often be neglected. Synchronous sampling at process computer input and output can therefore be assumed. Also the quantization of the signal is small for computers with a wordlength of 16 bits and more and A/D-converters with at least 10 bits, so that the signal amplitudes can initially be regarded as continuous.
For the design of digital control systems as described in this book the following stages are considered (compare also the survey scheme, Fig. 1.2).
The basis for the systematic design of control systems is the available information on the processes and their signals, which can be provided for example by
- directly measurable inputs, outputs and state variables,
- process models and signal models,
- state estimates of processes and signals.
Process and signal models can be obtained by identification and parameter estimation methods and, in the case of process models, by theoretical modelling as well. Nonmeasurable state variables can be reconstructed by observers or state estimators.
4. Noise filtering
Finally, for all control algorithms and filter algorithms the effects of amplitude quantization have to be considered. In Fig. 2.4 a scheme for the design of digital control systems is given. If tuning rules are applied to the adjustment of simple parameter-optimized control algorithms, simple process models are sufficient. For a single computer-aided design, exact process/signal models are required, which most appropriately can be obtained by identification and parameter estimation. If the acquisition of information and the control algorithm design are performed continuously (on-line, real-time), then self-optimizing adaptive control systems can be realized.
[Figure 2.4: scheme for the design of digital control systems — process/signal analysis and control algorithm adjustment act on a loop consisting of control algorithm, actuator, process and noise filter, with noise entering at the process output]
3. Discrete-time Systems

3.1 Discrete-time Signals

3.1.1 Discrete-time Functions, Difference Equations

[Figure 3.1.1: continuous signal x(t) and the discrete-time signal x_T(t) produced by sampling with sample time T_0 and switch duration h << T_0]

x_T(t) = x(kT_0) for t = kT_0
x_T(t) = 0 for kT_0 < t < (k+1)T_0
k = 0, 1, 2, ...   (3.1-1)
Example 3.1.1
a) The exponential function x(t) = e^(-at) gives the discrete-time function
x(kT_0) = e^(-akT_0),   k = 0, 1, 2, ...
b) The integration
x(t) = (1/T) ∫_0^t w(t) dt
is to be performed numerically by a staircase approximation of the function w(t). This leads to
x(kT_0) = (T_0/T) Σ_{v=0}^{k-1} w(vT_0).
In this case, x(kT_0) depends on a second function w.
The next example shows how a difference equation can be obtained for an
implicit function.
Example 3.1.2
In example 3.1.1 b)
x((k+1)T_0) = (T_0/T) Σ_{v=0}^{k} w(vT_0)
is also valid. Subtracting the equation for x(kT_0) yields
x((k+1)T_0) - x(kT_0) = (T_0/T) w(kT_0)
or
x(k+1) + a_1 x(k) = b_1 w(k)
or
x(k) + a_1 x(k-1) = b_1 w(k-1)
with a_1 = -1 and b_1 = T_0/T. This is a first order linear difference equation. □
The argument kT_0 has now been replaced by k. The current output at time k can be calculated recursively by
x(k) = b_0 w(k) + b_1 w(k-1) + ... + b_m w(k-m) - a_1 x(k-1) - ... - a_m x(k-m)   (3.1-3)
if the current input w(k) and m past inputs w(k-1), ..., w(k-m) and m past outputs x(k-1), ..., x(k-m) are known.
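As an aside, the recursion Eq. (3.1-3) is directly programmable. A minimal sketch in Python; the coefficient values are illustrative, chosen to reproduce Example 3.1.2 with T_0/T = 0.5:

```python
def difference_step(a, b, w_hist, x_hist):
    """One step of Eq. (3.1-3):
    x(k) = b0 w(k) + ... + bm w(k-m) - a1 x(k-1) - ... - am x(k-m).

    a = [a1, ..., am], b = [b0, ..., bm],
    w_hist = [w(k), w(k-1), ..., w(k-m)], x_hist = [x(k-1), ..., x(k-m)].
    """
    return (sum(bi * wi for bi, wi in zip(b, w_hist))
            - sum(ai * xi for ai, xi in zip(a, x_hist)))

def simulate(a, b, w):
    """Run the recursion over an input sequence w, zero initial conditions."""
    m = len(a)
    x = []
    for k in range(len(w)):
        w_hist = [w[k - i] if k - i >= 0 else 0.0 for i in range(m + 1)]
        x_hist = [x[k - i] if k - i >= 0 else 0.0 for i in range(1, m + 1)]
        x.append(difference_step(a, b, w_hist, x_hist))
    return x

# First order example from Example 3.1.2: x(k) - x(k-1) = (T0/T) w(k-1)
T0_over_T = 0.5
x = simulate(a=[-1.0], b=[0.0, T0_over_T], w=[1.0] * 5)
print(x)  # staircase integration of a unit step: [0.0, 0.5, 1.0, 1.5, 2.0]
```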
The differential quotient
dx(t)/dt = lim_{Δt→0} [x(t) - x(t-Δt)]/Δt
corresponds, for discrete-time signals, to the first-order difference
Δx(k) = x(k) - x(k-1).
Example 3.1.3
The differential equation
T dx(t)/dt = w(t)
can be converted into a difference equation by replacing the derivative with the first-order difference Δx(k). In the same way, an m-th order differential equation leads to a difference equation of the form
a_m Δ^m x(k) + a_{m-1} Δ^(m-1) x(k) + ... + a_1 Δx(k) + x(k) = w(k).

3.1.2 Impulse Trains

The δ-impulse is defined by
δ(t) = 0 for t ≠ 0   (3.1-5)
with unit area ∫ δ(t) dt = 1.
If the switch duration h becomes very small compared with the sample time, h << T_0, the pulses of the pulse train x_p(t) with area x(t)h can be approximated by impulses δ(t) with the same area.

[Figure 3.1.3: approximation of the pulse train x_p(t) by an impulse train x_δ(t) for h << T_0]

Since the impulse train only exists for t = kT_0, Eq. (3.1-7) becomes Eq. (3.1-8). The switch duration h cancels for transfer systems with identical synchronously operating switches at the input and the output. Furthermore, different values of h do not affect the result if a data hold follows the switch. Therefore, for simplicity, the switch duration will be ignored (or we choose h = 1 sec), which leads to the "starred" impulse train
x*(t) = Σ_{k=0}^{∞} x(kT_0) δ(t - kT_0).   (3.1-9)
The Laplace-transform of the δ-impulse is
L{δ(t)} = ∫_0^∞ δ(t) e^(-st) dt = 1   (3.2-2)
and, by the shifting theorem,
L{δ(t - kT_0)} = e^(-kT_0 s).   (3.2-3)
The Laplace-transform of the impulse train, Eq. (3.1-9), then becomes
x*(s) = L{x*(t)} = Σ_{k=0}^{∞} x(kT_0) e^(-kT_0 s)   (3.2-4)
since the transform can be taken term by term.
If the continuous signal x(t) is sampled with a small sample time T_0 = Δt, the Laplace-transform of the continuous signal, Eq. (3.2-1), can be approximated by the sum over the samples, i.e.
[Figure 3.2.1: a) continuous signal x(t) and its band-limited amplitude spectrum |x(iω)|; b) impulse train x*(t) and its periodic spectrum |x*(iω)|; c) ideal low-pass filter |G(iω)| with cut-off at ω_max]

x(s) ≈ T_0 x*(s)   (3.2-9)

if T_0 is sufficiently small.
Consider a signal x(t) which is band-limited:
x(iω) ≠ 0 for -ω_max ≤ ω ≤ ω_max
x(iω) = 0 for ω < -ω_max and ω > ω_max
(see Fig. 3.2.1a). This band-limited signal is sampled with sample time T_0 and approximated by the impulse train x*(t). If T_0 is sufficiently small, the Fourier-transform then consists of a "basic spectrum" in -ω_max ≤ ω ≤ ω_max and of "side spectra" shifted by νω_0, ν = ±1, ±2, ... (see Fig. 3.2.1b). The continuous signal can then be recovered from the samples by an ideal low-pass filter with
|G(iω)| ≠ 0 for -ω_max ≤ ω ≤ ω_max
|G(iω)| = 0 for ω < -ω_max and ω > ω_max.   (3.2-10)
""--
1T
w (3.2-11)
max
(3.2-12)
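The content of the sampling theorem can be checked numerically: two sinusoids whose angular frequencies differ by the sampling frequency ω_0 produce identical samples, so the higher one cannot be recovered. The frequency values below are illustrative:

```python
import math

T0 = 1.0                      # sample time (illustrative)
w0 = 2 * math.pi / T0         # sampling angular frequency
w1 = 0.3 * w0                 # below the Shannon limit w0/2
w2 = w1 + w0                  # differs by one full sampling frequency

# Samples of sin(w1*t) and sin(w2*t) at t = k*T0 coincide: aliasing.
s1 = [math.sin(w1 * k * T0) for k in range(10)]
s2 = [math.sin(w2 * k * T0) for k in range(10)]
max_diff = max(abs(p - q) for p, q in zip(s1, s2))
print(max_diff)  # ~0: after sampling, w2 is indistinguishable from w1
```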
Figure 3.2.2 Sampler with zero-order hold

The zero-order hold converts the impulse train x*(s) into a staircase signal with Laplace-transform
m(s) = H(s) x*(s)
where
H(s) = (1 - e^(-T_0 s))/s   (3.2-13)
is the transfer function of the zero-order hold.
3.3 z-Transform

With the abbreviation
z = e^(T_0 s)   (3.3-1)
the z-transform of a discrete-time function x(kT_0) is defined as
x(z) = Z{x(kT_0)} = Σ_{k=0}^{∞} x(kT_0) z^(-k) = x(0) + x(1)z^(-1) + x(2)z^(-2) + ...   (3.3-2)
This infinite series converges if |x(kT_0)| is restricted to finite values and if |z| > 1. Since the real part of s can be chosen appropriately, convergence is possible for many functions x(kT_0). The assumptions made for the Laplace-transform, especially x(kT_0) = 0 for k < 0, are also valid for the z-transform.
Examples 3.3.1:
a) Step function, x(kT_0) = 1 for k ≥ 0:
x(z) = 1 + z^(-1) + z^(-2) + ... = 1/(1 - z^(-1)) = z/(z-1) if |z| > 1.
b) Exponential function, x(kT_0) = e^(-akT_0) (a is real):
x(z) = 1 + e^(-aT_0) z^(-1) + e^(-2aT_0) z^(-2) + ... = z/(z - e^(-aT_0)).
c) Sine function, x(kT_0) = sin ω_1 kT_0:
x(z) = (1/2i) [z/(z - e^(iω_1 T_0)) - z/(z - e^(-iω_1 T_0))] = z sin ω_1 T_0 / (z² - 2z cos ω_1 T_0 + 1).
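The closed form of example b) can be verified numerically by truncating the series, for any |z| > 1; the values of a, T_0 and z below are arbitrary test values:

```python
import math

a, T0, z = 0.5, 1.0, 2.0      # illustrative values with |z| > 1

# Partial sum of x(z) = sum_k e^(-a k T0) z^(-k), Eq. (3.3-2)
series = sum(math.exp(-a * k * T0) * z ** (-k) for k in range(200))

# Closed form from example b): z / (z - e^(-a T0))
closed_form = z / (z - math.exp(-a * T0))
print(series, closed_form)    # both ~1.435
```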
These examples have shown how the z-transforms of some simple functions can be obtained. In this way, a table of transforms of common functions can be assembled. A short table of corresponding continuous time functions, Laplace-transforms and z-transforms is given in the Appendix. This table shows:
a) There is a direct correspondence between the denominators of x(s) and x(z): a pole s_1 of x(s) maps into a pole
z_1 = e^(T_0 s_1)
of x(z), i.e. a factor (s - s_1) corresponds to a factor (z - z_1).
Some important theorems for the application of the z-transform are listed below. For their derivation, the reader is referred to the textbooks given at the beginning of this chapter.
a) Linearity
Z{a x_1(k) + b x_2(k)} = a x_1(z) + b x_2(z)
b), c) Shifting: for a shift of d ≥ 0 sample units to the right,
Z{x(k-d)} = z^(-d) x(z)
d) Damping
Z{x(k) e^(-akT_0)} = x(z e^(aT_0))
e) Final value theorem
lim_{k→∞} x(kT_0) = lim_{z→1} [(z-1)/z] x(z) = lim_{z→1} (z-1) x(z)
3.4 Convolution Sum and z-Transfer Function

For a linear discrete-time system the output is given by the convolution sum
y(kT_0) = Σ_{q=0}^{k} u(qT_0) g((k-q)T_0)   (3.4-2)
where g(kT_0) is the impulse response.

Figure 3.4.2 A linear process with zero-order hold and sampled input and output
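Eq. (3.4-2) translates directly into a short program. A sketch in Python with an illustrative impulse response g(k) = 0.5 · 0.8^k and a unit step input (these values are not from the text):

```python
def convolution_sum(g, u, k):
    """y(kT0) = sum_{q=0}^{k} u(qT0) * g((k-q)T0), Eq. (3.4-2)."""
    return sum(u[q] * g[k - q] for q in range(k + 1))

# Illustrative first-order-lag impulse response and unit step input
g = [0.5 * 0.8 ** k for k in range(20)]
u = [1.0] * 20
y = [convolution_sum(g, u, k) for k in range(20)]
print(y[0], y[1])  # 0.5 0.9
```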
For the sampled output it follows that
y*(s) = G*(s) u*(s)
with the pulse transfer function
G*(s) = y*(s)/u*(s) = Σ_{q=0}^{∞} g(qT_0) e^(-qT_0 s).   (3.4-6)
With the abbreviation z = e^(T_0 s) the z-transfer function is defined as
G(z) = y(z)/u(z) = Σ_{q=0}^{∞} g(qT_0) z^(-q).   (3.4-7)
Example 3.4.1
For a first order lag with s-transfer function
G(s) = K/(1 + Ts) = K'/(a + s),  K' = K/T,  a = 1/T
the impulse response is
g(t) = K' e^(-at).
Now, letting t = kT_0,
g(kT_0) = K' e^(-akT_0)
and with Eq. (3.4-7)
G(z) = Σ_{k=0}^{∞} K' e^(-akT_0) z^(-k) = K' / (1 - e^(-aT_0) z^(-1)) = b_0 / (1 + a_1 z^(-1))   (3.4-8)
with
b_0 = K' = K/T = 0.1333;  a_1 = -e^(-aT_0) = -0.5866.
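The quoted coefficients can be reproduced numerically. The parameter values K = 1, T = 7.5 sec and T_0 = 4 sec are an assumption (the text does not restate them at this point), but they yield exactly the quoted b_0 and a_1:

```python
import math

# Assumed parameter values; they reproduce the coefficients quoted in the text.
K, T, T0 = 1.0, 7.5, 4.0
a = 1.0 / T
b0 = K / T                    # b0 = K' = K/T
a1 = -math.exp(-a * T0)       # a1 = -e^(-a T0)
print(round(b0, 4), round(a1, 4))  # 0.1333 -0.5866
```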
The symbol Z{x(s)} means that one looks up the corresponding x(z) in the z-transform table directly. If the process is preceded by a zero-order hold, Eq. (3.2-13), the z-transfer function becomes
HG(z) = Z{H(s) G(s)} = Z{[(1 - e^(-T_0 s))/s] G(s)} = (1 - z^(-1)) Z{G(s)/s}.   (3.4-10)
Applying Eq. (3.4-10) and the z-transform table in the Appendix to the first order lag of Example 3.4.1 again yields a_1 = -e^(-aT_0) = -0.5866.
Examples for higher order systems are given in section 3.7.2. The dynamic behaviour of linear, time invariant systems with lumped parameters and continuous input and output signals u(t) and y(t) is described by differential equations, or by the corresponding s-transfer function
G(s) = y(s)/u(s) = B(s)/A(s) = (b_0 + b_1 s + ... + b_{m-1} s^(m-1)) / (a_0 + a_1 s + ... + a_m s^m).   (3.4-12)
Discrete-time systems are correspondingly described by difference equations
y(k) + a_1 y(k-1) + ... + a_m y(k-m) = b_0 u(k) + b_1 u(k-1) + ... + b_m u(k-m).   (3.4-13)
Applying the z-transform and the shifting theorem yields
y(z)[1 + a_1 z^(-1) + ... + a_m z^(-m)] = u(z)[b_0 + b_1 z^(-1) + ... + b_m z^(-m)]
and hence the z-transfer function
G(z) = y(z)/u(z) = (b_0 + b_1 z^(-1) + ... + b_m z^(-m)) / (1 + a_1 z^(-1) + ... + a_m z^(-m)) = B(z^(-1))/A(z^(-1)).   (3.4-14)
Proportional Behaviour
Integral Behaviour
Dead time
A dead time of d sample units (d = 1, 2, ...) has the z-transfer function
D(z) = z^(-d)   (3.4-18)
so that a process with dead time is described by
DG(z) = y(z)/u(z) = G(z) z^(-d).   (3.4-19)
Realizability
a) A z-transfer function G(z^(-1)) = B(z^(-1))/A(z^(-1)) is realizable if its division yields a power series
G(z^(-1)) = g(0) + g(1) z^(-1) + g(2) z^(-2) + ...
(c.f. Eq. (3.4-7)) not containing members with z^1, z^2, ..., which requires
if b_0 ≠ 0 then a_0 ≠ 0.   (3.4-20)
b) For a z-transfer function written in positive powers of z,
G(z) = y(z)/u(z) = (b_0' + b_1' z + ... + b_m' z^m) / (a_0' + a_1' z + ... + a_n' z^n),
the corresponding difference equation is
a_0' y(k) + ... + a_n' y(k+n) = b_0' u(k) + ... + b_m' u(k+m).
The term y(k+n) must not depend on future values of the input, invoking the causality principle. Therefore, the realizability condition for this form of G(z) is
m ≤ n   (3.4-21)
for a_n' ≠ 0.
The impulse response g(k) results from the difference equation, Eq. (3.4-13), with
u(0) = 1,  u(k) = 0 for k > 0
because this describes a unit pulse at the input, see Eq. (3.4-1). It follows that
g(0) = b_0
g(1) = b_1 - a_1 g(0)
g(2) = b_2 - a_1 g(1) - a_2 g(0)   (3.4-22)
and so on.
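The recursion Eq. (3.4-22) can be cross-checked against a direct simulation of the difference equation Eq. (3.4-13) driven by a unit pulse; the second order coefficients below are illustrative:

```python
def impulse_response(b, a, n):
    """g(k) via Eq. (3.4-22): g(k) = b_k - sum_{i=1..k} a_i g(k-i).

    b = [b0, ..., bm], a = [a1, ..., am] of G(z^-1) = B(z^-1)/A(z^-1)."""
    g = []
    for k in range(n):
        bk = b[k] if k < len(b) else 0.0
        g.append(bk - sum(a[i - 1] * g[k - i]
                          for i in range(1, min(k, len(a)) + 1)))
    return g

# Illustrative second order system
b, a = [0.0, 0.1, 0.08], [-1.5, 0.56]
g = impulse_response(b, a, 5)

# Cross-check: unit pulse through the difference equation, Eq. (3.4-13)
u = [1.0] + [0.0] * 4
y = []
for k in range(5):
    yk = sum(bi * u[k - i] for i, bi in enumerate(b) if k - i >= 0)
    yk -= sum(ai * y[k - i] for i, ai in enumerate(a, start=1) if k - i >= 0)
    y.append(yk)

ok = all(abs(gi - yi) < 1e-12 for gi, yi in zip(g, y))
print(ok)  # True
```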
Cascaded Systems
Real Poles
It was shown in Example 3.4.1 that a first order lag has the z-transfer function
G(z) = y(z)/u(z) = K' z/(z - z_1) = K'/(1 - a_1 z^(-1))   (3.5-2)
with pole
z_1 = a_1 = e^(-aT_0).   (3.5-3)
b) The closed loop consisting of the process with hold, HG_p(z), and the controller G_R(z) has the transfer function
G_w(z) = y(z)/w(z) = HG_p(z) G_R(z) / [1 + HG_p(z) G_R(z)].
For the first order system with pole z_1 = a_1 the homogeneous solution follows recursively,
y(1) = a_1 y(0)
y(2) = a_1 y(1) = a_1² y(0)
...
y(k) = a_1^k y(0).   (3.5-6)
This first order system converges to zero, and is therefore asymptotically stable, only for |a_1| < 1. The time behaviour of y(k) depends on the location of the pole a_1.
Since the poles in the s-plane and the z-plane are related by
z_1 = a_1 = e^(-aT_0),
the real s-poles for -∞ < a < +∞ lead to the z-poles ∞ > z_1 > 0, i.e. only positive z-poles. Therefore, a negative z-pole z_1 = a_1 < 0 has no corresponding s-pole.
A second order system with the conjugate complex s-poles s_{1,2} = -a ± iω_1, with
a = D/T and ω_1² = (1/T²)(1 - D²),
leads to the homogeneous difference equation
y(k) - 2α cos ω_1 T_0 y(k-1) + α² y(k-2) = 0.   (3.5-7)
Here α = e^(-aT_0). For the initial values y(0) = 1 and y(1) = α cos ω_1 T_0, the solution of this equation is
y(k) = α^k cos ω_1 T_0 k.   (3.5-8)
[Figure: time behaviour y(k) for different pole locations in the z-plane — monotonic decay for real poles 0 < α < 1, alternating decay for -1 < α < 0, monotonic divergence for α > 1, alternating divergence for α < -1, and damped oscillations for conjugate complex poles, e.g. ω_1 T_0 = 150°]
The bilinear transformation
w = (z-1)/(z+1)   (3.5-11)
maps the unit circle of the z-plane onto the imaginary axis of the w-plane; the inside of the unit circle maps onto the left half w-plane. Because the w-plane plays the same role as the s-plane for continuous time systems, the Hurwitz or Routh stability criterion can be applied. For this purpose the inverse transformation
z = (1+w)/(1-w)   (3.5-12)
is introduced into the characteristic equation
A(z) = z^m + a_1 z^(m-1) + ... + a_m   (3.5-13)
of a transfer function
G(z) = B(z)/A(z)   (3.5-14)
leading to
A(w) = ((1+w)/(1-w))^m + a_1 ((1+w)/(1-w))^(m-1) + ... + a_m.   (3.5-15)
The Hurwitz criterion states that all coefficients have to exist and have to carry the same algebraic sign; then the system is not monotonically unstable. To exclude oscillatory instability, for systems higher than second order the Hurwitz determinants have to be positive (or the Routh criterion has to be fulfilled).
Example 3.5.1:
A(z) = z² + a_1 z + a_2.
Then
A(w) = ((1+w)/(1-w))² + a_1 (1+w)/(1-w) + a_2
and, after multiplication by (1-w)²,
A'(w) = (1+w)² + a_1 (1+w)(1-w) + a_2 (1-w)²
= (1 - a_1 + a_2) w² + 2(1 - a_2) w + (1 + a_1 + a_2).
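For the second order case this yields a complete stability test: all three coefficients of A'(w) must be positive. A sketch that cross-checks the coefficient test against the pole magnitudes (the test polynomials are illustrative):

```python
import cmath

def stable_second_order(a1, a2):
    """Stability of z^2 + a1 z + a2 via the bilinear transform:
    all coefficients of A'(w) = (1-a1+a2)w^2 + 2(1-a2)w + (1+a1+a2)
    must be positive (Hurwitz condition for second order)."""
    c2, c1, c0 = 1 - a1 + a2, 2 * (1 - a2), 1 + a1 + a2
    return c2 > 0 and c1 > 0 and c0 > 0

def stable_by_roots(a1, a2):
    """Reference test: both roots of z^2 + a1 z + a2 inside the unit circle."""
    disc = cmath.sqrt(a1 * a1 - 4 * a2)
    z1, z2 = (-a1 + disc) / 2, (-a1 - disc) / 2
    return abs(z1) < 1 and abs(z2) < 1

# Illustrative polynomials, clearly inside/outside the stability region
for a1, a2 in [(-1.5, 0.56), (0.2, 0.5), (-2.1, 1.2), (0.5, -1.6)]:
    assert stable_second_order(a1, a2) == stable_by_roots(a1, a2)
print("bilinear test matches root test")
```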
Consider a process with z-transfer function
G(z) = y(z)/u(z)   (3.6-2)
of order n. Choosing the state variables such that
y(k+n-1) = x_n(k) = x_{n-1}(k+1)
y(k+n) = x_n(k+1),
Eq. (3.6-4) and Eq. (3.6-5) lead to the vector difference equation
x_1(k+1) = x_2(k)
x_2(k+1) = x_3(k)
...
x_n(k+1) = -a_n x_1(k) - a_{n-1} x_2(k) - ... - a_1 x_n(k) + u(k)   (3.6-6)
with the output equation
y(k) = [1 0 ... 0] [x_1(k) x_2(k) ... x_n(k)]^T.   (3.6-7)
Setting b_n = 1 and b_0, ..., b_{n-1} = 0, Eq. (3.6-2) and Eq. (3.6-3) lead to this representation directly. If, however, b_n ≠ 1 and b_0, ..., b_{n-1} ≠ 0, then Eq. (3.6-2) and Eq. (3.6-10) give Eq. (3.6-11) and Eq. (3.6-12); here x_n(k+1) comes from Eq. (3.6-5), so finally the state representation below is obtained.
y(k) = c^T x(k) + d u(k)   (3.6-15)
Fig. 3.6.1 shows a block diagram in "regulator form" of the state representation for a difference equation, directly taken from Eq. (3.6-4), Eq. (3.6-5) and Eq. (3.6-12).

[Figure 3.6.1: block diagram of the regulator form, input u(k), output y(k)]

For a linear continuous-time process the state representation is
dx(t)/dt = A x(t) + b u(t)
y(t) = c^T x(t) + d u(t).
Its solution involves the transition matrix
Φ(t) = e^(At) = Σ_{v=0}^{∞} (At)^v / v!   (3.6-21)
For sampled input and output signals, the state representation can be simply derived from Eq. (3.6-18) and Eq. (3.6-19) if the linear process is followed by a zero-order hold as in Fig. 3.4.2. Then, for the input signal held constant over one sampling interval, the state equation becomes, for initial state x(kT_0) and kT_0 ≤ t < (k+1)T_0,
x(t) = Φ(t - kT_0) x(kT_0) + u(kT_0) ∫_{kT_0}^{t} Φ(t - τ) b dτ.   (3.6-22)
If the solution for only t = (k+1)T_0 is of interest, then
x((k+1)T_0) = Φ(T_0) x(kT_0) + u(kT_0) ∫_{kT_0}^{(k+1)T_0} Φ((k+1)T_0 - τ) b dτ.
With the substitution q = (k+1)T_0 - τ and dq = -dτ,
x(k+1) = Φ(T_0) x(k) + u(k) ∫_0^{T_0} Φ(q) b dq.   (3.6-23)
Hence the discrete-time state representation has the matrices
A = Φ(T_0)   (3.6-24)
b = ∫_0^{T_0} Φ(q) b dq   (3.6-25)
while d remains unchanged. For the calculation of Eq. (3.6-24) and (3.6-25) see e.g. [2.19].
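Eqs. (3.6-21), (3.6-24) and (3.6-25) can be evaluated by truncating the series. A sketch for the scalar case dx/dt = a_c x + b_c u (parameter values illustrative), where the exact results e^(a_c T_0) and (e^(a_c T_0) - 1)/a_c are available for comparison:

```python
import math

def discretize_first_order(a_c, b_c, T0, terms=30):
    """ZOH discretization of the scalar process dx/dt = a_c x + b_c u:
    A = Phi(T0) = e^(a_c T0), b = integral_0^T0 Phi(q) b_c dq,
    Eqs. (3.6-24)/(3.6-25), with the series (3.6-21) summed term by term."""
    A = sum((a_c * T0) ** v / math.factorial(v) for v in range(terms))
    # term-by-term integration of the exponential series:
    b = b_c * sum(a_c ** v * T0 ** (v + 1) / math.factorial(v + 1)
                  for v in range(terms))
    return A, b

a_c, b_c, T0 = -0.5, 1.0, 0.4   # illustrative values
A, b = discretize_first_order(a_c, b_c, T0)
print(A, b)  # A = e^(-0.2) ~ 0.8187, b = (A - 1)/a_c ~ 0.3625
```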
Canonical Forms
With a nonsingular transformation matrix T, new state variables
x_t = T x   (3.6-28)
can be introduced, leading to
x_t(k+1) = A_t x_t(k) + b_t u(k)
y(k) = c_t^T x_t(k) + d u(k)   (3.6-30)
with
A_t = T A T^(-1),  b_t = T b,  c_t^T = c^T T^(-1).   (3.6-31)
[Table: canonical forms A_t, b_t, c_t^T. Companion canonical form: A_t carries the coefficients -a_m, ..., -a_1 in one column next to a shifted identity block, with c_t^T = [c_1 c_2 ... c_m] resp. b_t = [g(1) g(2) ... g(m)]^T, where g(i) = c^T A^(i-1) b are the Markov parameters. Controllable canonical form (regulator form): A_t carries the coefficients -a_m, ..., -a_1 in its last row with b_t = [0 ... 0 1]^T.]
[Figure: structural diagrams (block diagrams) of the canonical forms of the preceding table, each with input u(k), states x_1(k) ... x_m(k) and output y(k)]
A dead time of d sampling units can be modelled as a shift register with state vector x_u(k) and the matrices
A_u = shift matrix with ones on the first superdiagonal and zeros elsewhere,
b_u = [0 0 ... 0 1]^T   (3.6-38)
c_u^T = [1 0 ... 0].
Combining the process state equation with the dead time states x_u(k) leads to the augmented representation
[x(k+1); x_u(k+1)] = [[A, b c_u^T]; [0, A_u]] [x(k); x_u(k)] + [0; b_u] u(k).   (3.6-39)
If the controllable canonical form is chosen for A, the extended system matrix A_d again has companion structure: the upper left block is the canonical form of A with the row -a_m ... -a_1, a single coupling element 1 connects this row to the shift register block of the dead time, and the lower right block is A_u (Eq. (3.6-43)). In the following, solutions of the vector difference equation are presented.
The solution of the vector difference equation is
x(k) = A^k x(0) + Σ_{i=1}^{k} A^(i-1) b u(k-i)   (3.6-44)
where the first term is the homogeneous solution and the second term the particular solution (convolution sum), and A^k = A·A·...·A (k factors). y(k) can finally be obtained from Eq. (3.6-27). If u(k) is given explicitly as a z-transform, a second possible solution can be used.
Applying the z-transform Z{x(k)} = x(z) to the state equation gives
z x(z) - z x(0) = A x(z) + b u(z)
or
x(z) = (zI - A)^(-1) z x(0) + (zI - A)^(-1) b u(z).   (3.6-46)
Comparing Eq. (3.6-46) and Eq. (3.6-44), the correspondence
Z{A^k} = (zI - A)^(-1) z   (3.6-48)
is obtained. For x(0) = 0 the output becomes
y(z) = [c^T (zI - A)^(-1) b + d] u(z)   (3.6-50)
or, in the time domain,
y(k) = Σ_{i=1}^{k} c^T A^(i-1) b u(k-i) + d u(k).   (3.6-51)
Introducing the unit pulse
u(k) = 1 for k = 0,  u(k) = 0 for k > 0
gives the impulse response
g(0) = d
g(k) = c^T A^(k-1) b for k > 0.   (3.6-52)
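Eq. (3.6-52) can be checked on a small example, here the regulator form of the illustrative second order difference equation y(k) - 1.5 y(k-1) + 0.56 y(k-2) = u(k-1) (these coefficients are not from the text):

```python
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def impulse_weights(A, b, c, d, n):
    """g(0) = d, g(k) = c^T A^(k-1) b for k > 0, Eq. (3.6-52)."""
    g, v = [d], b[:]                 # v holds A^(k-1) b
    for _ in range(1, n):
        g.append(sum(ci * vi for ci, vi in zip(c, v)))
        v = mat_vec(A, v)
    return g

# Regulator form of y(k) - 1.5 y(k-1) + 0.56 y(k-2) = u(k-1)
A = [[0.0, 1.0], [-0.56, 1.5]]
b = [0.0, 1.0]
c = [0.0, 1.0]
g = impulse_weights(A, b, c, d=0.0, n=5)
print(g)  # follows the recursion g(k) = 1.5 g(k-1) - 0.56 g(k-2)
```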
Controllability
A process is controllable if it can be driven from any initial state x(0) to any final state x(N) in finite time. To obtain the required input u(k) one can start with Eq. (3.6-44). Then, for a process with one input,
x(N) = A^N x(0) + [b, A b, ..., A^(N-1) b] u_N   (3.6-54)
with
u_N^T = [u(N-1) u(N-2) ... u(0)].   (3.6-55)
For N = m this can be solved:
u_m = Q_s^(-1) [x(m) - A^m x(0)]   (3.6-56)
with the controllability matrix
Q_s = [b, A b, ..., A^(m-1) b].   (3.6-57)
A unique solution exists if
det Q_s ≠ 0   (3.6-58)
or
Rank Q_s = m   (3.6-59)
with m as the order of A. For N < m no solution exists for u, and for N > m no unique solution.
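The condition Eq. (3.6-58) is easy to evaluate; a sketch for a second order example in plain Python (matrix values illustrative):

```python
def controllability_matrix_2x2(A, b):
    """Q_s = [b, A b] for a second order system, Eq. (3.6-57)."""
    Ab = [A[0][0] * b[0] + A[0][1] * b[1],
          A[1][0] * b[0] + A[1][1] * b[1]]
    return [[b[0], Ab[0]], [b[1], Ab[1]]]

A = [[0.0, 1.0], [-0.56, 1.5]]
b = [0.0, 1.0]
Qs = controllability_matrix_2x2(A, b)
det = Qs[0][0] * Qs[1][1] - Qs[0][1] * Qs[1][0]
print(det != 0)  # True -> det Q_s != 0, the pair (A, b) is controllable
```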
Observability
A process with the output equation
y(k) = c^T x(k)
is observable if the state x(k) can be determined from N measured outputs y(k), y(k+1), ..., y(k+N-1) and the inputs
u^T = [u(k+N-1) ... u(k+1) u(k)].   (3.6-61)
Stacking the output equations leads to the observability matrix
Q_B = [c^T; c^T A; ...; c^T A^(m-1)]   (3.6-62)
whence x(k) can be determined uniquely if Rank Q_B = m.   (3.6-63)
[Figure 3.7.1: signal types — continuous and discrete in time, each with continuous, discrete or binary amplitude]
o Continuous Processes
Materials, energy, information flow in continuous streams
- Once-through operation
Signals: many combinations as in Fig. 3.7.1 are possible
- Mathematical models: linear and nonlinear, ordinary or partial
differential equations or difference equations
- Examples: pipeline, electrical power plant, electrical cable for
analog signals. Many processes in power and chemical industries.
o Batch Processes
Materials, energy, information flow in "packets" or in interrupted streams
- Process operation in a closed space
- Signals: many combinations such as in Fig. 3.7.1 are possible
- Mathematical models: mostly nonlinear, ordinary or partial differential equations or difference equations
- Examples: processes in chemical engineering: processes for chemical reactions, washing, dyeing, vulcanising.
o Piece-good Processes
Materials, energy, information are transported in "pieces"
- Process operation: piecewise
- Signals: mostly discrete (binary) amplitude. Continuous or
discrete time.
- Mathematical models: flow schemes, digital simulation programs
- Examples: many processes in manufacturing technology. Processing
of work pieces, transport of parts, transport in storages.
For small sample times, derivatives can simply be replaced by difference quotients, e.g. df(t)/dt ≈ [f(k) - f(k-1)]/T_0.
For larger sample times the difference equations and the z-transfer
functions are calculated most appropriately by the use of z-transform
tables. For this, either the impulse response g(t) = f(t) in analyti-
cal form or the s-transfer function G(s) = f(s) is required, and the
corresponding f(z) = G(z) are taken from the z-transform tables. A par-
tial-fraction expansion has to be performed for higher-order processes
to obtain those terms of G(s) which are tabled. If there is a zero-order
hold, Eq. (3.4-10) has to be used, and from the table G(s)/s has to be
taken. (In the following, G(s) has to be replaced by G(s)/s, as in
For a transfer function with a p-fold pole s_0 and simple poles s_i,
G(s) = Σ_j c_j s^j / [(s - s_0)^p Π_{i=1}^{l} (s - s_i)],   (3.7-2)
the partial-fraction expansion is
G(s) = Σ_{q=1}^{p} A_{0q}/(s - s_0)^q + Σ_{i=1}^{l} A_i/(s - s_i)
with the coefficients
A_{0q} = [1/(p-q)!] [d^(p-q)/ds^(p-q) ((s - s_0)^p G(s))]_{s=s_0}   (3.7-3)
A_i = lim_{s→s_i} (s - s_i) G(s).
The poles of G(s) and of G(z) are directly mapped through z = e^(T_0 s), as in section 3.5.1.
Example 3.7.1
Problem:
The z-transfer function of the process
G(s) = y(s)/u(s) = K / [(1 + T_1 s)(1 + T_2 s) ... (1 + T_m s)]
with zero-order hold is to be calculated.
Solution:
1. Partial-fraction expansion of G(s)/s:
G(s)/s = K (1/T_1)(1/T_2)...(1/T_m) / [s (s + 1/T_1)(s + 1/T_2) ... (s + 1/T_m)]
= A_0/s + A_1/(s + 1/T_1) + ... + A_m/(s + 1/T_m)
with
A_0 = K
A_i = -K Π_{j=1}^{m} (1/T_j) / [(1/T_i) Π_{j=1, j≠i}^{m} (1/T_j - 1/T_i)],  i = 1, ..., m.
2. With the z-transform table and Eq. (3.4-10) it follows that
HG(z) = [A_0 Π_{i=1}^{m} (1 - e^(-T_0/T_i) z^(-1)) + (1 - z^(-1)) Σ_{i=1}^{m} A_i Π_{j=1, j≠i}^{m} (1 - e^(-T_0/T_j) z^(-1))] / Π_{i=1}^{m} (1 - e^(-T_0/T_i) z^(-1)).
For the third order process of Table 3.7.1, the parameters of G(z) are given for different sample times T_0. With increasing sample time the following trends can be recognized:
For larger sample times we have |a_3| << 1 + Σa_i and |b_3| << Σb_i, so that a_3 and b_3 can be neglected. In practice this means that a second order model is obtained. □
Table 3.7.1 Parameters of the z-transfer function G(z) for the process
G(s) = 1/[(1 + 10s)(1 + 7.5s)(1 + 5s)] with zero-order hold, for the sampling times T_0 = 2, 4, 6, 8, 10 and 12 sec. [Table body not reproduced.]
Another method for calculating HG(z) from G(s), which makes no use of z-transform tables, consists in the following approximation due to Tustin [3.3]. Consider the integration
y(t) = (1/T) ∫_0^t u(t) dt   (3.7-5)
with the transfer function
y(s)/u(s) = 1/(Ts).   (3.7-6)
Rectangular (staircase) integration gives
y(k) ≈ (T_0/T) Σ_{i=1}^{k} u(i-1)
y(k-1) ≈ (T_0/T) Σ_{i=1}^{k-1} u(i-1)
and, by subtraction,
y(k) - y(k-1) ≈ (T_0/T) u(k-1)
or, after z-transformation,
y(z)(1 - z^(-1)) ≈ (T_0/T) u(z) z^(-1)
y(z)/u(z) ≈ T_0 z^(-1) / [T (1 - z^(-1))] = T_0 / [T (z - 1)].   (3.7-7)
Through correspondence of Eq. (3.7-6) and Eq. (3.7-7), for small sample times we have
s ≈ (1/T_0)(z - 1).
Trapezoidal integration gives correspondingly
y(k) ≈ (T_0/T) Σ_{i=1}^{k} (1/2)[u(i) + u(i-1)]
y(k-1) ≈ (T_0/T) Σ_{i=1}^{k-1} (1/2)[u(i) + u(i-1)]
y(k) - y(k-1) ≈ (T_0/2T)[u(k) + u(k-1)]
and hence
y(z)/u(z) ≈ (T_0/2T)(z + 1)/(z - 1)   (3.7-8)
which corresponds to the substitution
s ≈ (2/T_0)(z - 1)/(z + 1).   (3.7-9)
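The difference between the two substitutions can be seen by integrating a ramp with both rules (T, T_0 and the horizon below are illustrative); the trapezoidal (Tustin) rule is exact for this input:

```python
# Discretize the integrator 1/(T s), Eq. (3.7-6), and compare both
# approximations against the exact integral of the ramp u(t) = t.
T, T0, n = 1.0, 0.1, 50
t = [k * T0 for k in range(n + 1)]
u = list(t)                         # ramp input
exact = 0.5 * t[-1] ** 2            # integral of t dt from 0 to t_n

y_rect, y_trap = 0.0, 0.0
for k in range(1, n + 1):
    y_rect += (T0 / T) * u[k - 1]                   # rectangular, Eq. (3.7-7)
    y_trap += (T0 / (2 * T)) * (u[k] + u[k - 1])    # trapezoidal, Eq. (3.7-8)

print(exact, y_rect, y_trap)  # exact 12.5; rectangular ~12.25; Tustin ~12.5
```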
This is Tustin's approximation; it is the first term of the series expansion
s = (1/T_0) ln z = (2/T_0) [(z-1)/(z+1) + (z-1)³/(3(z+1)³) + ...].   (3.7-10)
Example 3.7.2
For the process
G(s) = 1/[(1 + 10s)(1 + 5s)]
with zero-order hold, Table 3.7.2 gives the exact parameters of the z-transfer function HG(z) and the parameters resulting from the approximation Eq. (3.7-4),
H̃G(z) = G(s)|_{s = (2/T_0)(z-1)/(z+1)}.

Table 3.7.2 Parameters of HG(z) and H̃G(z) for s = (2/T_0)(z-1)/(z+1) and the resulting maximum error of the transient function for H̃G(z); G(s) = 1/[(1+10s)(1+5s)].

T_0 [sec]     b_0       b_1      b_2      a_1       a_2      Σb_i     (Δy/y_∞)_max  for t [sec]
1 (exact)     -         0.00906  0.00819  -1.72357  0.74082  0.01725  -             -
1 (Tustin)    0.00433   0.00866  0.00433  -1.72294  0.74026  0.01732  +0.024        6
Defining the settling time such that the output reaches 95 % of the final value y(∞) of the transient function, for T_95 = 37 sec Table 3.7.2 shows that the approximation reproduces the exact parameters closely, with ratios T_95/T_0 from 17.5 to 8. □
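The exact row of Table 3.7.2 can be reproduced by carrying out the partial-fraction procedure of section 3.7.2 for the special case of two first order lags; the closed-form coefficient expressions below were derived for this sketch and cover only this case:

```python
import math

def zoh_two_lags(T1, T2, T0):
    """Zero-order-hold discretization of G(s) = 1/((1+T1 s)(1+T2 s)),
    via HG(z) = (1 - z^-1) Z{G(s)/s} with the partial fractions
    G(s)/s = 1/s - [T1/(T1-T2)]/(s+1/T1) + [T2/(T1-T2)]/(s+1/T2)."""
    z1, z2 = math.exp(-T0 / T1), math.exp(-T0 / T2)
    a1, a2 = -(z1 + z2), z1 * z2
    b1 = 1 - z1 - z2 + (T1 * z2 - T2 * z1) / (T1 - T2)
    b2 = z1 * z2 + (T2 * z1 - T1 * z2) / (T1 - T2)
    return b1, b2, a1, a2

b1, b2, a1, a2 = zoh_two_lags(10.0, 5.0, 1.0)
print(round(b1, 5), round(b2, 5), round(a1, 5), round(a2, 5))
# 0.00906 0.00819 -1.72357 0.74082  (the exact row of Table 3.7.2)

# Static gain check, Eq. (3.7-15): sum(b_i)/sum(a_i) = G(0) = 1
print(round((b1 + b2) / (1 + a1 + a2), 6))  # 1.0
```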
For high order processes with a transfer function of the form
G(s) = y(s)/u(s) = Π_{β=1}^{m}(1 + T_β s) e^(-T_t s) / [(1 + 2DTs + T²s²) Π_{α=1}^{m-2}(1 + T_α s)]   (3.7-11)
the dynamic behaviour is approximately the same in both open and closed loop if the generalized sum of time constants
2DT + Σ_{α=1}^{m-2} T_α   (3.7-12)
remains constant, [3.4], [3.5]. That means that the sum of energy, mass or momentum which is stored during a transient process has to remain constant.
Now, some rules for the simplification of discrete-time models are given. A z-transfer function of the form
G(z) = y(z)/u(z) = (b_0 + b_1 z^(-1) + ... + b_m z^(-m)) / (a_0 + a_1 z^(-1) + ... + a_m z^(-m))   (3.7-13)
responds to a step input of height u_0 with the stationary value
y(k) ≈ K u_0 for large k
with the gain
K = Σ_{i=0}^{m} b_i / Σ_{i=0}^{m} a_i.   (3.7-15)
A further characteristic value is the area between input and output for a step input of height u_0,
A = (1/u_0) Σ_{k=0}^{∞} [u(k) - y(k)].   (3.7-16)
Writing out the difference equation for the step input,
y(0) = b_0 u(0)
y(1) + a_1 y(0) = b_0 u(1) + b_1 u(0)
...
and summing the equations up to k = l + m leads to
Σ_{i=0}^{m} a_i Σ_{k=0}^{l} y(k) = (l+1) u_0 Σ_{i=0}^{m} b_i + u_0 [m b_0 + (m-1) b_1 + ... + b_{m-1}]   (3.7-17)
from which, with u(k) = u_0 for k ≥ 0 and Eq. (3.7-15), an explicit expression
A = [...] / Σ_{i=0}^{m} a_i   (3.7-18)
for the area follows.
Differentiating this expression gives the sensitivities
∂A/∂a_i = (m-i) - A
∂A/∂a_0 = m - A   (3.7-21)
∂A/∂b_i = m - i
∂A/∂b_0 = m.
b) Using
(∂A/∂b_i)/(∂A/∂a_i) = (m-i)/[A - (m-i)]
leads to ratios greater or smaller than K for A smaller or greater than 2(m-i).
From Eq. (3.7-15) it follows for small parameter changes that the gain changes by
ΔK ≈ Σ_{i=0}^{m} [(∂K/∂a_i) Δa_i + (∂K/∂b_i) Δb_i] = [Σ_{i=0}^{m} (Δb_i - K Δa_i)] / Σ_{i=0}^{m} a_i.   (3.7-22)
A first solution is obtained directly from Eq. (3.7-20) and Eq. (3.7-22):
Δa_i = Δb_i,  i = 0, 1, ..., m
and
Σ_{i=0}^{m} Δa_i = 0.   (3.7-23)
Hence, for A ≈ constant and K ≈ constant, small parameter changes are permitted if Δa_1 = Δb_1, Δa_2 = Δb_2, ..., Δa_m = Δb_m and Δa_1 + Δa_2 + ... + Δa_m = 0.
For the sampling time T_0 = 10 sec one obtains the parameters of the third order model from Table 3.7.1. Now a_3 = 0 and b_3 = 0 are set, i.e. Δa_3 = -a_3 and Δb_3 = -b_3. In the two conditions for constant A and constant K, four unknowns remain, i.e. two variables can be chosen freely. It is assumed, for example, that only a_1 and b_2 are corrected. Then
-0.75 Δa_1 - Δb_2 = 0.0131
and
Δa_1 - Δb_2 = -0.0181
which gives Δa_1 = -0.0178 and Δb_2 = +0.0003.
y(z) = [B(z^(-1))/A(z^(-1))] z^(-d) u(z) + [D(z^(-1))/C(z^(-1))] v(z)   (3.7-24)
(c.f. Eq. (3.4-14) and Eq. (12.2-31)) are assumed and the unknown para-
meters of the process and the disturbance models are estimated based on
the measured signals u(k) and y(k) [3.13]. For parameter estimation, methods
such as the following can be used: least squares, instrumental variables,
maximum likelihood, in nonrecursive or recursive form. In re-
cent years, using on-line and off-line computers, methods of process
identification have been extensively developed and tested in practice.
Many linear and nonlinear processes with and without perturbation signals,
in open and closed loop, can be identified with sufficient accuracy.
There are program packages which are easy to operate and which contain
methods for the determination of the model order and the dead time
(c.f. chapters 23, 24 and 29).
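As a minimal illustration of the nonrecursive least squares method mentioned above (a sketch, not one of the program packages referred to), the parameters a_1 and b_1 of a first-order model y(k) = -a_1 y(k-1) + b_1 u(k-1) are estimated from recorded u(k), y(k):

```python
# Sketch: nonrecursive least squares estimation of a first-order model
# y(k) = -a1*y(k-1) + b1*u(k-1) from measured input/output data.
import random

a1_true, b1_true = -0.8, 0.5
random.seed(1)
u = [random.choice((-1.0, 1.0)) for _ in range(200)]   # PRBS-like test signal
y = [0.0]
for k in range(1, len(u)):
    y.append(-a1_true * y[k - 1] + b1_true * u[k - 1])

# Normal equations with data vector psi(k) = [-y(k-1), u(k-1)]:
s11 = s12 = s22 = r1 = r2 = 0.0
for k in range(1, len(u)):
    p1, p2 = -y[k - 1], u[k - 1]
    s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
    r1 += p1 * y[k]; r2 += p2 * y[k]
det = s11 * s22 - s12 * s12
a1_hat = (s22 * r1 - s12 * r2) / det
b1_hat = (s11 * r2 - s12 * r1) / det
```

With noise-free data the estimates recover the true parameters; the program packages mentioned in the text additionally determine model order and dead time and handle noisy, closed-loop data.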
B Control Systems for Deterministic
Disturbances
For terminal control systems a definite final state ~(N) of the process
has to be reached and held at a prescribed or free final time point N.
For both reference and terminal control systems, the influence of ini-
tial values ~(0) or disturbances v(k) of the processes has to be com-
pensated for as much as possible. The control problem, moreover, is
such that for unstable processes a stable overall system is to be ob-
tained through the feedback.
Figure 4.1 Block diagrams of the most important control system structures for one controlled variable
In this book the design of linear control systems for linearizable time-
invariant processes with sampled signals is considered. A schematic pre-
sentation of the most important control systems and their design prin-
ciples is given in Fig. 4.3.
Figure 4.3 Schematic presentation of the most important linear controllers and their design principles: parameter-optimized controllers of zero, first, second and higher order (P-, PI-, PD-, PID- and general linear controllers; design by tuning rules, pole assignment or a performance criterion), cancellation controllers (prescribed command behaviour), deadbeat controllers (finite settling time), minimum variance controllers (performance criterion), and state controllers (design by pole assignment (modal control) or a performance criterion), each suited to deterministic and/or stochastic external disturbances
These quadratic criteria are suited for both deterministic and stochas-
tic signals, so are preferred in this book.
lim_{z→1} (z-1) e(z) = 0

requires

lim_{z→1} 1 / [1 + G_R(z) G_P(z)] = 0

i.e.:

a) lim_{z→1} G_P(z) ≠ ∞  →  lim_{z→1} G_R(z) = ∞  →  lim_{z→1} G_R(z) G_P(z) = ∞

b) lim_{z→1} G_P(z) = ∞  →  lim_{z→1} G_R(z) ≠ 0.   (4-4)

In case a) the controller therefore has to contain a pole at z = 1:

G_R(z) = Q(z) / [P'(z)(z-1)]   (4-5)
u(t) = K [e(t) + (1/T_I) ∫_0^t e(τ) dτ + T_D de(t)/dt]   (5.1-1)
with parameters:
K   gain
T_I integration time
T_D derivative time
For small sample times T0 this equation can be turned into a difference
equation by discretization. The derivative is simply replaced by a dif-
ference of first order and the integral by a sum. The continuous inte-
gration may be approximated by rectangular or trapezoidal integration,
as in section 3.2.
u(k) - u(k-1) = q_0 e(k) + q_1 e(k-1) + q_2 e(k-2)   (5.1-4)

with parameters (rectangular integration)

q_0 = K(1 + T_D/T_0);  q_1 = -K(1 + 2T_D/T_0 - T_0/T_I);  q_2 = K T_D/T_0   (5.1-5)

or, with trapezoidal integration,

q_0 = K(1 + T_0/(2T_I) + T_D/T_0);  q_1 = -K(1 + 2T_D/T_0 - T_0/(2T_I));  q_2 = K T_D/T_0.   (5.1-7)
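The velocity-form algorithm translates directly into code. The sketch below uses the rectangular-integration parameter formulas commonly given for this algorithm (treat them as an assumption here) and advances one control step per call:

```python
# Sketch: discrete PID control algorithm in velocity form,
#   u(k) - u(k-1) = q0*e(k) + q1*e(k-1) + q2*e(k-2).
# Rectangular-integration parameters (an assumption, see text):
#   q0 = K*(1 + TD/T0), q1 = -K*(1 + 2*TD/T0 - T0/TI), q2 = K*TD/T0.

class VelocityPID:
    def __init__(self, K, TI, TD, T0):
        self.q0 = K * (1 + TD / T0)
        self.q1 = -K * (1 + 2 * TD / T0 - T0 / TI)
        self.q2 = K * TD / T0
        self.e1 = self.e2 = 0.0   # stored e(k-1), e(k-2)
        self.u = 0.0              # stored u(k-1)

    def step(self, e):
        self.u += self.q0 * e + self.q1 * self.e1 + self.q2 * self.e2
        self.e2, self.e1 = self.e1, e
        return self.u

pid = VelocityPID(K=1.0, TI=10.0, TD=2.0, T0=1.0)
us = [pid.step(1.0) for _ in range(4)]   # constant error e = 1
```

For a constant error the increments settle to q_0 + q_1 + q_2 = K T_0/T_I per sample, i.e. pure integral action, as expected.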
Quadratic performance criteria of the form

S_eu = Σ_{k=0}^{M} [e²(k) + r Δu²(k)]   (5.2-6)
are used for parameter optimization (c.f. chapter 4). Here
Δu(k) = u(k) - u(∞). The optimal controller parameters follow from the condition

∂S_eu/∂q_i = 0.   (5.2-9)
u(0) = q_0;   u(1) = 2q_0 + q_1;   u(2) = 3q_0 + 2q_1 + q_2   (5.2-11)
(5.2-13)
(5.2-14)
The resulting step response is shown in Fig. 5.2.2a) and the resulting
parameter ranges in Fig. 5.2.3. The parameter q 0 determines the mani-
pulated variable u(O) after the step input.
K = q_0 - q_2           gain
c_D = q_2 / K           lead coefficient   (5.2-15)
c_I = (q_0 + q_1 + q_2) / K   integration coefficient

with c_D = T_D/T_0 and c_I = T_0/T_I.
For small sample times the gains agree exactly; c_D is the ratio of lead
time to sample time and c_I the ratio of sample time to integration time.
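The conversions of Eq. (5.2-15), and their inverse (which also appears with the tuning diagrams of section 5.6), can be sketched in a few lines:

```python
# Sketch: conversion between controller parameters q0, q1, q2 and the
# characteristic values of Eq. (5.2-15):
#   K = q0 - q2, cD = q2/K, cI = (q0 + q1 + q2)/K,
# and the inverse relations q0 = K(1+cD), q1 = K(cI - 2cD - 1), q2 = K*cD.

def to_characteristics(q0, q1, q2):
    K = q0 - q2
    return K, q2 / K, (q0 + q1 + q2) / K

def to_parameters(K, cD, cI):
    return K * (1 + cD), K * (cI - 2 * cD - 1), K * cD

K, cD, cI = to_characteristics(3.0, -4.9, 2.0)
q = to_parameters(K, cD, cI)   # round trip back to (3.0, -4.9, 2.0)
```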
(5.2-18)
(5.2-19)
u(1) - u(0) = u(2) - u(1) = … = q_0 + q_1   (5.2-20)
For u(1) > u(O) the first-order control algorithm can be compared with
a continuous PI-controller with no additional lag. With q_0 > 0 we ob-
tain q_0 + q_1 > 0, or q_1 > -q_0.
K = q_0              gain   (5.2-21)
c_I = (q_0 + q_1)/K  integration coefficient.   (5.2-22)
Figure 5.2.2 Step response of the control algorithm, indicating the gain K, the lead coefficient c_D and the integration coefficient c_I over k
(5.2-23)
(5.2-26)
(5.2-27)
The transfer function between the reference and the manipulated variable
in closed loop is

u(z)/w(z) = G_R(z) / [1 + G_R(z) G_P(z)]   (5.2-28)

Introducing the process transfer function, Eq. (5.2-1), and the second-
order controller transfer function, Eq. (5.2-10), and setting b_0 = 0,
results in a numerator

(q_0 + q_1 z^{-1} + q_2 z^{-2})(1 + a_1 z^{-1} + … + a_m z^{-m}) w(z)
The first two manipulated variable values after a step change of
the command variable w(k) = 1(k) are:
1. Case d = 0
u(O)
u ( 1) (5.2-31)
2. Case d ≥ 1
u(O)
u (1) (5.2-32)
Independently of the dead time d, the value of u(O) for a step change
of the command variable depends only on the controller parameter q 0 .
Therefore by prescribing the manipulated variable u(O), the parameter
q 0 can be fixed.
The correspondence between the first manipulated variable and the con-
troller parameter q 0 is useful during design when considering the allow-
able range of manipulated variable change. One has only to select a
certain operating point of the control loop and the maximum process in-
put change u(O) for the (worst) case of a step change w0 of the referen-
ce variable w(k) (or the error e(k)) and one simply sets q 0 = u(O)/w 0 .
From Eq. (5.2-31) and Eq. (5.2-32) it follows that for u(1) ≤ u(0):

d = 0:  q_1 ≤ -q_0 (1 - q_0 b_1)
d ≥ 1:  q_1 ≤ -q_0.   (5.2-33)
u(k) - u(k-1) = K[-y(k) + y(k-1) + (T_0/T_I) e(k-1) + (T_D/T_0)(-y(k) + 2y(k-1) - y(k-2))]   (5.3-3)
With this algorithm it is then more appropriate to use e(k) instead of
e(k-1) (c.f. page 75). These modified algorithms are less sensitive to
the higher frequency signals of w(k) than to those of y(k). Therefore,
for the same type of disturbance at e.g. the process input and the com-
mand variable, the differences between the controller parameters ob-
tained by parameter optimization become smaller (c.f. [5.8]). Large
(T_D/T_0) [e(k) - 2e(k-1) + e(k-2)]

is taken and then all approximations to the first derivative are ave-
raged in relation to e(k). The derivative term for the nonrecursive form
therefore becomes

(T_D/(6T_0)) [e(k) + 3e(k-1) - 3e(k-2) - e(k-3)]   (5.3-4)

and for the recursive form

(T_D/(6T_0)) [e(k) + 2e(k-1) - 6e(k-2) + 2e(k-3) + e(k-4)].   (5.3-5)
(5.3-6)

with parameters

p_1 = -4c_1 / (1 + 2c_1)
p_2 = (2c_1 - 1) / (1 + 2c_1)
q_0 = K[1 + 2(c_1 + c_D) + (c_I/2)(1 + 2c_1)] / (1 + 2c_1)
q_1 = K[c_I - 4(c_1 + c_D)] / (1 + 2c_1)
q_2 = K[c_1(2 - c_I) + 2c_D + c_I/2 - 1] / (1 + 2c_1)
with

K = 1;  T_1 = 4 sec;  T_2 = 10 sec.

G(z) = (b_1 z^{-1} + b_2 z^{-2}) / (1 + a_1 z^{-1} + a_2 z^{-2})   (5.4-2)

G(z) = (b_0 + b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3}) z^{-d} / (1 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3})   (5.4-4)
        T_0 = 1 sec   T_0 = 4 sec   T_0 = 8 sec   T_0 = 16 sec
d        4             1             1             1
b_0      0             0             0.06525       0.37590
b_1      0.00462       0.06525      0.25598       0.32992
b_2      0.00169       0.04793      0.02850       0.00767
b_3     -0.00273      -0.00750     -0.00074      -0.00001
a_1     -2.48824      -1.49863     -0.83771      -0.30842
a_2      2.05387       0.70409      0.19667       0.02200
a_3     -0.56203      -0.09978     -0.00995      -0.00010
For a step change of the reference variable, the control performance ex-
pressed in the form of

S_e = √[ (1/(N+1)) Σ_{k=0}^{N} e²(k) ]   (quadratic average of the control deviation)   (5.4-5)

and the manipulating effort

S_u = √[ (1/(N+1)) Σ_{k=0}^{N} Δu²(k) ]   (5.4-8)
Both the control performance S_e and the "manipulating effort" S_u are functions of the sample time T_0 and the
weighting factor r of the manipulated variable in the optimization cri-
terion Eq. (5.2-6). For the simulations a settling time TN = 128 sec
was chosen to be large enough that the control deviation becomes prac-
tically zero. Therefore we have N = 128 sec/T 0 • For Se and Su the term
"quadratic average" was chosen; this value is equal to the "effective
value" and to the "root of the corresponding effective power" for a one-
Ohm resistance.
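The quadratic averages themselves are one line of code each; a sketch with made-up signal values (not taken from the simulations of this section):

```python
# Sketch: quadratic averages Se (Eq. (5.4-5)) and Su (Eq. (5.4-8)) of the
# control deviation e(k) and of du(k) = u(k) - u(infinity).
from math import sqrt

def quadratic_average(x):
    return sqrt(sum(v * v for v in x) / len(x))

e = [1.0, 0.4, -0.1, 0.05, 0.0]     # illustrative control deviations
u = [3.0, 1.1, 1.2, 1.05, 1.0]      # illustrative inputs, u(infinity) = 1.0
Se = quadratic_average(e)
Su = quadratic_average([ui - 1.0 for ui in u])
```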
Figure 5.4.2 shows the discrete values of the control and the manipula-
ted variables for both processes after a step change of the command
variable for the sample times T0 = 1; 4; 8 and 16 sec and for r = 0.
For the relatively small sample time of T0 = 1 sec one obtains an ap-
proximation to the control behaviour of a continuous PID-controller.
For T0 = 4 sec the continuous signal of the control variable can still
be estimated fairly well for both processes. However, this is no lon-
ger valid for T0 = 8 sec for process II and for T0 = 16 sec for both
processes. This means that the value of S_e, Eq. (5.4-5), which is de-
fined for discrete signals, should be used with caution as a measure
of the control performance for T0 > 4 sec. However, as the parameter
optimization is based on the discrete-time signals (for computational
reasons) Se is used in the comparisons.
In Fig. 5.4.3 the control performance and the manipulating effort are
shown as functions of the sample time. For process II the quadratic
mean of the control deviation Se, the overshoot ym and the settling
time k 1 increase with increasing sample time T0 , i.e. the control per-
formance becomes worse. The manipulating effort Su is at a minimum for
T0 = 4 sec and increases for T0 > 4 sec and T0 < 4 sec. For process III
all three characteristic values deteriorate with increasing sample time.
The manipulating effort is at a minimum for T0 = 8 sec. The improvement
of the control performance for T0 < 8 sec is due to the fact that the
Figure 5.4.2 Control variable y(k) and manipulated variable u(k) for step changes of the command variable; sample times T_0 = 1, 4, 8 and 16 sec; r = 0

Figure 5.4.3 Control performance S_e, manipulating effort S_u, overshoot y_m and settling time k_1 as functions of the sample time T_0
Table 5.4.3 gives the controller parameters. With increasing sample time
the parameters q_0, q_1 and q_2 become smaller. The controller gain K hard-
ly changes for T_0 ≤ 4 sec, the lead factor c_D reduces and the integra-
tion factor c_I increases. The inequalities Eq. (5.2-14) or Eq. (5.2-17)
are satisfied for T_0 = 1, 4 and 8 sec, so that a control algorithm with
normal PID-behaviour emerges.
For the sample time T_0 = 1 sec Figure 5.4.4 shows step responses to
changes of the reference variable as functions of the weighting factor r
in the optimization criterion. A change from r = 0 to r = 0.1 leads to
a more restrained control behaviour than the change from r = 0.1 to
r = 0.25.
Figure 5.4.4 Step responses for changes of the reference variable for T_0 = 1 sec and r = 0, 0.1, 0.25

Figure 5.4.5 Control performance S_e, manipulating effort S_u, overshoot y_m and settling time k_1 as functions of the weighting factor r, for T_0 = 1, 4 and 8 sec
Table 5.4.3 Controller parameters for different sample times T_0 (4, 8 and 16 sec) and r = 0

Table 5.4.4 Controller parameters for the sample times T_0 = 4 sec and 8 sec and r = 0, 0.1 and 0.25
Table 5.4.4 shows the controller parameters for the sample times T_0 =
4 sec and 8 sec. With increasing weighting r of the manipulated variable
the parameters q_0, q_1 and q_2 decrease. K and c_D also decrease, whilst
c_I hardly changes.
In section 5.2.2 it was shown that for a step change of the reference va-
riable by 1 or w_0, the parameter q_0 of the control algorithm is equal
to the manipulated variable u(0) or u(0)/w_0. By properly
choosing u(0), taking into account an allowable region of the manipula-
ted variable, the parameter q_0 can be readily determined. Then only two
parameters q_1 and q_2 have to be optimized. The control algorithm is
therefore called 3 PC-2.
In Fig. 5.4.6 the responses to step changes in the command variable are
shown for different values of the initial manipulated variable u(0) = q_0.
Starting with a value q_{0,opt}, which results from optimization of all pa-
rameters for r = 0, a decrease of the chosen manipulated variable u(0)
results in a more restrained control behaviour. The overshoot y_m decrea-
ses. In Fig. 5.4.6 b) q_0 has been reduced such that both of the first
two manipulated variables u(0) and u(1) are equal. However, the resul-
ting overshoot increases again. The same result was obtained for pro-
cess II.
Figure 5.4.6 Responses to step changes in the command variable for different initial manipulated variables u(0) = q_0 (3 PC-3: q_0 = 4.55)
Fig. 5.4.7 shows all the characteristic values of the control perfor-
mance and the manipulating effort for the practically significant sample
times T0 = 4 sec and T0 = 8 sec. With a reduction in the chosen manipu-
lated variable u(O), i.e. decreasing q 0 , the manipulating effort decrea-
ses and the control performance S_e increases slightly. The overshoot y_m
and the response time k 1 also decrease for T0 = 8 sec. For T0 = 4 sec,
Figure 5.4.7 Control performance S_e, manipulating effort S_u, overshoot y_m and settling time k_1 for different chosen initial manipulated variables u(0) = q_0; T_0 = 4 sec and T_0 = 8 sec
the same trend occurs at first for both processes. If q 0 is chosen too
small then both values increase again. A minimum occurs for ym and k 1 .
A good choice of the initial manipulated variable u(O) produces not only
good control performance but also fewer computations in the parameter
optimization. Table 5.4.5 shows the controller parameters for different
values of u(O). The parameters q 1 and q 2 follow the trend of q 0 but ci
hardly changes. For the same q 0 the other controller parameters hardly
vary for both processes.
For parameter-optimized control algorithms of first and second order the
coefficients K, c_I and c_D of the gain, the integral and the lead action
can be simply determined using the parameters q_0, q_1 and q_2 in Eq.
(5.2-15). These coefficients need not be derived from the corresponding
coefficients of the analog controller differential equations. The rela-
tions Eq. (5.2-15) are also valid for large sample times.
The smaller the sample time the better the control behaviour. However,
if the sample time becomes very small, further improvement of the con-
trol behaviour can only be obtained by a considerable increase in the
manipulating effort. Therefore, too small a sample time should not be
chosen. For the selection of proper sample times the following rules
can be used:
Table 5.4.5 Controller parameters for different chosen manipulated variables u(0) = q_0 for T_0 = 4 sec and 8 sec
(5.4-11)
where K = G_P(1) is the process gain. The larger the sample time the
smaller the influence of r.
If the sample time is not too small, u(O) can be chosen according to
which is obtained for the modified deadbeat controller DB(v+1) from Eq.
(7.2-13).
The choice of sample time depends not only on the achievable control
performance but also on:
The process dynamics have a great influence on the sample time in terms
of both the transfer function structure and its time constants. Rules
for the sample time, therefore, are given in Table 5.5.1 as functions
of the time delay, dead time, sum of time constants etc. In general,
the larger the time constant the larger the sample time.
Now the dependence of the sample time on the disturbance signal spectrum
or its bandwidth is considered. As is well known, for control loops
three frequency ranges can be distinguished [5.14] (c.f. section 11.4):
Table 5.5.1 Rules for the choice of the sample time (middle column: resulting T_0 for process III in sec)

[5.10], [5.3]:       T_0 ≈ (1/8 … 1/16)(1/f)                        3 … 1
[5.10], [5.3]:       T_0 ≈ (1/4 … 1/8) T_t                          -       processes with dominant dead time
[5.11], [5.17]:      T_0 ≈ (1.2 … 0.35) T_u  (0.1 ≤ T_u/T ≤ 1.0)    4.5     settling time about 15 % larger than
                     T_0 ≈ (0.35 … 0.22) T_u (1.0 ≤ T_u/T ≤ 10)             with a continuous PI-controller
disturbance spectrum: T_0 = π/ω_max, ω_max chosen such that for      8 … 2   compensation of disturbances up to
                     the process |G(ω_max)| = 0.01 … 0.1                    ω_max as in the continuous loop
simulation, [5.7],   T_0 ≈ (1/6 … 1/15) T_95                        8 … 3
section 5.4:
identification [3.13]: T_0 ≈ (1/6 … 1/12) T_95                      8 … 4

f    : eigenfrequency of the closed loop in cycles/sec
T_t  : dead time
T_95 : 95 % settling time of the step response
T_u  : delay time, c.f. Table 5.6.1
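One of the commonly quoted rules, T_0 ≈ (1/6 … 1/15) T_95, reduces to one line; a minimal sketch (the bracket is the one attributed to [5.7], the numeric settling time is illustrative):

```python
# Sketch: sample-time bracket from the 95 % settling time,
# T0 ~ (1/6 ... 1/15) * T95 (rule attributed to [5.7] in Table 5.5.1).
def sample_time_range(T95):
    return T95 / 15.0, T95 / 6.0

lo, hi = sample_time_range(45.0)   # illustrative T95 = 45 sec -> 3 ... 7.5 sec
```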
The high frequency range (ω_2 < ω < ∞): disturbances are not
affected by the loop.
Control loops in general have to be designed such that the medium fre-
quency range comes within that range of the disturbance signal spectrum
where the magnitude of the spectrum is small. In addition, disturbances
with high and medium frequency components must be filtered in order to
avoid unnecessary variations in the manipulated variable. If disturban-
ces up to the frequency wmax = w1 have to be controlled approximately
as in a continuous loop, the sample time has to be chosen in accordance
with Shannon's sampling theorem, i.e. T_0 ≤ π/ω_max.
The sampling theorem can also be used to determine the sample time if
an eigenvalue with the greatest eigenfrequency wmax is known. Then this
frequency is the highest frequency to be detected by the sampled data
controller. Particularly with an actuator having a long rise-time it
is inappropriate in general to take too small a sample time, since it
can happen that the previous manipulated variable has not been acted
upon when a new one arrives. If the measurement equipment furnishes
time discrete signals, as in chemical analysers or rotating radar an-
tennae, the sample time is already determined. An operator generally
wants a quick response of the manipulated and control variable after
a change of the reference variable at an arbitrary time. Therefore, the
sample time should not be larger than a few seconds. Moreover, in a
dangerous situation such as after an alarm, one is basically interested
in a small sample time. To minimise the computational load or the costs
for each control loop, the sample time should be as large as possible.
This discussion shows that the sample time has to be chosen according
to many requirements which partially are contradictory. Therefore sui-
table compromises must be found in each case. In addition, to simplify
the software organization one must use the same sample time for several
control loops. In Table 5.5.1 rules for choosing the sample time are
summarized, based on current literature. Note that rules which are based

5.6 Tuning Rules for Parameter-optimized Control Algorithms

Flow
Pressure      5
Level        10
Temperature  20
The application of these rules in modified form for discrete-time PID-
control algorithms has been attempted. [5.15] gives the controller pa-
rameters for processes which can be approximated by the transfer func-
tion

G(s) = e^{-T_t s} / (1 + Ts)   (5.6-1)
Tuning rules which are based on the characteristics of the process step
response and on experiments at the stability limit, have been treated
in [5.16] for the case of the modified control algorithms according to
Eq. (5.3-3). These are given in Table 5.6.1.
(5.6-2)
(5. 6-3)
(c.f. Eq. (5.2-6)), for step changes of the command variable w(k) with
weighting of the manipulated variable r = 0; 0.1 and 0.25. Then the
characteristic values of the controller K, c_D and c_I given by Eq.
(5.2-15) were determined. The results of these investigations are shown
in Figures 5.6.1 to 5.6.3 (tuning diagrams). The characteristic values
of the controller are shown as functions of the ratio T_u/T_G of the pro-
cess transient functions in Table 5.6.1. The relationship between the
characteristic values T_u/T or T_G/T and T_u/T_G can be taken from Figure
5.6.4.
Table 5.6.1 Ziegler-Nichols type tuning rules for the modified control algorithm Eq. (5.3-3), after [5.16]

Control algorithm:
u(k) - u(k-1) = K[y(k-1) - y(k) + (T_0/T_I)(w(k) - y(k)) + (T_D/T_0)(2y(k-1) - y(k-2) - y(k))]

Step response measurement (T_u, T_G from the transient function; K_P: process gain):

P:   K = T_G / [K_P(T_u + T_0)]
PI:  K = 0.9 T_G / [K_P(T_u + T_0/2)] - 0.135 T_G T_0 / [K_P(T_u + T_0/2)²]
     K T_0/T_I = 0.27 T_G T_0 / [K_P(T_u + T_0/2)²]
PID: K = 1.2 T_G / [K_P(T_u + T_0)] - 0.3 T_G T_0 / [K_P(T_u + T_0/2)²]
     K T_0/T_I = 0.6 T_G T_0 / [K_P(T_u + T_0/2)²]
     K T_D/T_0 = 0.5 T_G / (K_P T_0)

Not applicable for T_u/T_0 ≈ 0.

Oscillation measurement (K_krit and period T_P at the stability limit):

P:   K = 0.5 K_krit
PI:  K = 0.45 K_krit … 0.27 K_krit (smaller values for larger T_0)
     K T_0/T_I = 0.54 K_krit T_0/T_P
PID: K = 0.6 K_krit … (smaller values for larger T_0)
     K T_0/T_I = 1.2 K_krit T_0/T_P
     K T_D/T_0 = (3/40) K_krit T_P/T_0

Range of validity: T_0 ≤ 2T_u; not recommended for T_0 ≥ 4T_u.

T_u: delay time and T_G: build-up time of the transient function (step response measurement); see sketch and DIN 19226.
Figure 5.6.1 Optimal controller parameters of the control algorithm 3 PC-3 (PID-behaviour) due to the performance criterion Eq. (5.6-4) with r = 0 for processes G_P(s) = 1/(1+Ts)^n. Characteristic values according to Eq. (5.2-15): K = q_0 - q_2; c_D = q_2/K; c_I = (q_0 + q_1 + q_2)/K
Figure 5.6.2 Optimal controller parameters of the control algorithm 3 PC-3 (PID-behaviour) due to the performance criterion Eq. (5.6-4) with r = 0.1 for processes G_P(s) = 1/(1+Ts)^n; curves for T_0/T = 0.1, 0.5 and 1.0. Characteristic values according to Eq. (5.2-15): K = q_0 - q_2; c_D = q_2/K; c_I = (q_0 + q_1 + q_2)/K
Figure 5.6.3 Optimal controller parameters of the control algorithm 3 PC-3 (PID-behaviour) due to the performance criterion Eq. (5.6-4) with r = 0.25 for processes G_P(s) = 1/(1+Ts)^n; curves for T_0/T = 0.1, 0.5 and 1.0

Figure 5.6.4 Characteristics of nth order lags with equal time constants, G(s) = 1/(1+Ts)^n. Taken from [3.11]. T_u and T_G: see Table 5.6.1 (DIN 19226).
4. The Figures 5.6.1 to 5.6.3 yield, after choice of the weighting fac-
tor r of the manipulated variable, the characteristic values K_0, c_D
and c_I depending on T_u/T_G and T_0/T. Here, K_0 is the loop gain K_0 = K K_P.
5. From Eq. (5.2-15) and K = K_0/K_P, c_D and c_I, the controller parame-
ters follow:

q_0 = K(1 + c_D)
q_1 = K(c_I - 2c_D - 1)
q_2 = K c_D.
Though the tuning diagrams in Fig. 5.6.1 to 5.6.3 are based on equal time
constants, this procedure for determining the controller parameters can
also be used for low-pass processes with widely differing time constants,
as simulations have shown (c.f. section 3.2.4). This can also be reco-
gnized in Table 5.6.2 where a comparison is made for process III, show-
ing the optimized controller parameters and the parameters based on the
tuning rules of Table 5.6.1 and based on the tuning diagrams in Figures
5.6.1 to 5.6.3. The tuning diagrams yield controller parameters which
compare well with the optimal values. Applying the tuning rules accor-
ding to Table 5.6.1 (left part) the gain K is too large; c_D and c_I,
however, compare well.
Table 5.6.2 Comparison of the results of tuning rules for the controller
parameters based on step response characteristics. Process III;
T_0 = 4 sec; K = 1; T_u/T_G = 6.6 sec/25 sec = 0.264.

                                               K            c_D          c_I
Parameter optimization for r = 0 … 0.25        1.52 … 1.13  0.28 … 0.21  1.99 … 0.81   (Table 5.4.4)
Tuning diagrams Figures 5.6.1 to 5.6.3
for G_P(s) = 1/(1+Ts)^n, r = 0 … 0.25          1.7 … 1.2    0.27 … 0.17  3.8 … 0.85
6. Cancellation Controllers
(6-1)
Figure 6.1 Feedforward control system
For processes with time lags, however, the feedforward element is not
realizable and one has to add a "realizability term" G_R(z):

G_S(z) = [1/G_P(z)] G_R(z)   (6-2)
G_R(z) = [1/G_P(z)] G_w(z) / [1 - G_w(z)]   (6-4)
1-Gw(z)
For the design of these cancellation controllers, many papers have been
published, especially for continuous signals. Discrete cancellation con-
trollers have been described in (6.1], [2.4], [2.14], [6.2], [6.3].
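Eq. (6-4) can be evaluated by simple polynomial manipulation. The following sketch uses an illustrative first-order process and the prescribed command behaviour G_w(z) = z^{-1}, and verifies by closed-loop simulation that the model-matched loop reproduces G_w exactly:

```python
# Sketch: discrete cancellation controller, Eq. (6-4), for an illustrative
# first-order process Gp(z) = b1 z^-1 / (1 + a1 z^-1) and prescribed
# command behaviour Gw(z) = z^-1. Then
#   GR(z) = (1/Gp(z)) * Gw(z)/(1 - Gw(z)) = (1 + a1 z^-1) / (b1 (1 - z^-1)),
# i.e.  u(k) = u(k-1) + (e(k) + a1*e(k-1)) / b1.

a1, b1 = -0.7, 0.3
N = 10
w = [1.0] * N                        # step change of the reference variable
y = [0.0] * N
u = [0.0] * N
e = [0.0] * N
for k in range(N):
    if k >= 1:
        y[k] = -a1 * y[k - 1] + b1 * u[k - 1]          # process
    e[k] = w[k] - y[k]
    u[k] = (u[k - 1] if k >= 1 else 0.0) \
         + (e[k] + a1 * (e[k - 1] if k >= 1 else 0.0)) / b1
# y(k) = w(k-1): the loop follows the reference with one sample delay
```

This works only because the controller model matches the process exactly; the cautions below about poles and zeros near or outside the unit circle apply unchanged.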
a) Realizability

For a transfer function in the general form

G(z) = (s_0 + s_1 z + … + s_n z^n) / (1 + a_1 z + … + a_m z^m)   (6-6)

the indices describe the orders of the single polynomials. With Eq.
(6-3) it follows that the closed-loop behaviour must satisfy

pe(G_w) ≥ (m - n)

This means that because of the realizability condition, the pole excess
of the command transfer function G_w(z) has to be equal to or greater
than the pole excess of the process if the controller order is μ ≥ ν
[2.19].
If the cancellation controller G_R(z) given by Eq. (6-4) and the process
G_P0(z) are in a closed loop, the poles and zeros of the process are
cancelled by the zeros and poles of the controller if the process model
G_P(z) matches the process exactly. Since the process models G_P(z) =
B(z)/A(z) used for the design practically never describe the process be-
haviour exactly, the corresponding poles and zeros will not be cancelled
exactly but only approximately. For poles A^+(z) and zeros B^+(z) which
are "sufficiently spread" inside the unit disc of the z-plane,
this leads in general to only small deviations from the assumed behaviour
G_w(z). However, if the process has poles A^-(z) or zeros B^-(z) near or
outside the unit circle one has to be careful.
A_0^-(z) = A^-(z) + ΔA^-(z)
B_0^-(z) = B^-(z) + ΔB^-(z)   (6-11)

it follows for the resulting command transfer function that:

G_w,res   (6-13)
For ΔA^-(z) = 0 and ΔB^-(z) = 0, the poles of this transfer function are
near or outside the unit circle. They are, however, exactly cancelled
by the zeros. For small differences ΔA^-(z) and ΔB^-(z) the poles change
by small amounts and are therefore no longer cancelled. Then a weakly
damped control behaviour or, if the poles are outside the unit circle,
an unstable behaviour results. Therefore, one should not design cancel-
lation controllers for processes with poles or zeros outside or near the
unit circle in the z-plane. One always has to take into account that
small differences ΔA^-(z) and ΔB^-(z) occur.
G_w(z) = z^{-1}  or  2z^{-1} - z^{-2}  or  3z^{-1} - 3z^{-2} + z^{-3}
where K_P is the process gain. Then one obtains the so-called predictor
controllers which can be of advantage for processes with large dead
times, see chapter 9.
The ripples between the sampling points that can appear with the cancel-
lation controllers treated in chapter 6 can be avoided if a finite sett-
ling time is required for both the controlled variable and the manipula-
ted variable. Jury [7.1], [2.3] has called this behaviour "deadbeat-res-
ponse". For a step change of the reference variable the input and the
output signal of the process have to be in a new steady state after a
definite finite settling time. In the following, methods for the design
of deadbeat controllers are described which are characterized by an es-
pecially simple derivation and for which the resulting synthesis re-
quires little calculation.
w(z) = 1 / (1 - z^{-1})   (7.1-3)
P(z) = y(z)/w(z) = p_1 z^{-1} + … + p_m z^{-m}   (7.1-6)

p_1 = y(1)
p_2 = y(2) - y(1)
⋮
p_m = 1 - y(m-1)

Q(z) = u(z)/w(z) = q_0 + q_1 z^{-1} + … + q_m z^{-m}   (7.1-7)

q_0 = u(0)
q_1 = u(1) - u(0)
⋮
q_m = u(m) - u(m-1).
p_1 + p_2 + … + p_m = 1   (7.1-8)

q_0 + q_1 + … + q_m = u(m) = 1 / G_P(1).   (7.1-9)

G_w(z) = y(z)/w(z) = G_R(z) G_P(z) / [1 + G_R(z) G_P(z)]   (7.1-10)

= P(z).   (7.1-12)

Hence

G_P(z) = P(z) / Q(z)   (7.1-13)
q_0 = u(0) = 1 / (b_1 + … + b_m)   (7.1-15)
G_w(z) = P(z) = p_1 z^{-1} + … + p_m z^{-m} = (p_1 z^{m-1} + … + p_m) / z^m

The characteristic equation is therefore

z^m = 0.   (7.1-16)
Hence the control loop with the deadbeat controller possesses m poles
at the origin of the z-plane.
G_P(z) = (b_1 z^{-1-d} + … + b_m z^{-(m+d)}) / (1 + a_1 z^{-1} + … + a_m z^{-m})
       = (b'_1 z^{-1} + … + b'_{m+d} z^{-(m+d)}) / (1 + a'_1 z^{-1} + … + a'_{m+d} z^{-(m+d)})   (7.1-17)
Considering

b'_1 = b'_2 = … = b'_d = 0;   a'_{m+1} = … = a'_{m+d} = 0;
b'_{1+d} = b_1,  b'_{2+d} = b_2,  … ,  b'_{m+d} = b_m   (7.1-18)
Eq. (7.1-3) to (7.1-15) can be applied using Eq. (7.1-17). Then it fol-
lows from Eq. (7.1-17) and Eq. (7.1-13) that
q_0 = u(0) = 1 / (b_1 + b_2 + … + b_m)

q_1 = a_1 q_0,  q_2 = a_2 q_0,  … ,  q_m = a_m q_0

p_1 = b'_1 q_0 = 0,  … ,  p_d = b'_d q_0 = 0;
p_{d+1} = b_1 q_0,  … ,  p_{d+m} = b_m q_0   (7.1-20)
From Eq. (7.1-20) and Eq. (7.1-21) the transfer function of the deadbeat
controller DB(v) becomes
u (z)
e ( z) -1 -d (7.1-22)
1-q0 B(z )z
Figure 7.1.1 Control variable y(k) and manipulated variable u(k) with deadbeat controller DB(v) for step changes of the reference variable w and the disturbance v. Process III, T_0 = 4 sec
G_w(z) = q_0 B'(z) / z^{m+d}   (7.1-23)

with the characteristic equation

z^{m+d} = 0.   (7.1-24)
It should be noted that the deadbeat controller cancels the process poles.
For the low pass process III, described in section 5.4.1 and in the ap-
pendix, one obtains for T0 = 4 sec the following coefficients of the
deadbeat controller by using Eq. (7.1-20):
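These coefficients follow mechanically from Eq. (7.1-20); a sketch (using the process III coefficients for T_0 = 4 sec listed in section 5.4.1) computes q_0 and verifies by closed-loop simulation that y(k) settles exactly after m + d samples:

```python
# Sketch: deadbeat controller DB(v) via Eq. (7.1-20)/(7.1-22) for process III,
# T0 = 4 sec (coefficients from section 5.4.1; d = 1, m = 3, b0 = 0).
a = [1.0, -1.49863, 0.70409, -0.09978]
b = [0.0, 0.06525, 0.04793, -0.00750]
d, m = 1, 3
q0 = 1.0 / sum(b)                    # q0 = u(0) = 1/(b1 + ... + bm), about 9.46

# Closed loop for a reference step w(k) = 1:
N = 16
w = [1.0] * N
y = [0.0] * N
u = [0.0] * N
e = [0.0] * N
for k in range(N):
    y[k] = sum(b[i] * u[k - i - d] for i in range(1, m + 1) if k - i - d >= 0) \
         - sum(a[i] * y[k - i] for i in range(1, m + 1) if k - i >= 0)
    e[k] = w[k] - y[k]
    # controller u(z)/e(z) = q0 A(z^-1) / (1 - q0 B(z^-1) z^-d):
    u[k] = q0 * sum(a[i] * e[k - i] for i in range(0, m + 1) if k - i >= 0) \
         + q0 * sum(b[i] * u[k - i - d] for i in range(1, m + 1) if k - i - d >= 0)
# finite settling time: y(k) = 1 for k >= m + d = 4
```

The large first input value u(0) = q_0 is of the order of the values visible in Fig. 7.1.1, which motivates the increased-order design of section 7.2.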
If the finite settling time is increased from m to m+1 then one value
of the manipulated variable can be prescribed. Since the first manipu-
lated variable u(O) generally is the largest one, this value should be
reduced by prescribing it [5.7].
In Eq. (7.1-4) and Eq. (7.1-5) one more step is admitted. Then Eq. (7.1-6)
and Eq. (7.1-7) become

P(z) = p_1 z^{-1} + p_2 z^{-2} + … + p_{m+1} z^{-(m+1)}   (7.2-1)

Q(z) = q_0 + q_1 z^{-1} + … + q_{m+1} z^{-(m+1)}   (7.2-2)

and, as before, G_P(z) = P(z)/Q(z) must hold.   (7.2-3)
This equation can only be satisfied if the right-hand term has the same
root in both the denominator and numerator. Hence

P(z) = (p'_1 z^{-1} + … + p'_m z^{-m})(α - z^{-1})
Q(z) = (q'_0 + q'_1 z^{-1} + … + q'_m z^{-m})(α - z^{-1}).   (7.2-4)
If the coefficients in Eq. (7.2-3) are compared one obtains, after di-
viding by q'_0, relations of the form

q_i = α q'_i - q'_{i-1},   p_i = α p'_i - p'_{i-1}   (7.2-6), (7.2-7)

together with

p_1 + … + p_{m+1} = 1.   (7.2-8)
(7 .2-8),
qo u(O) (given)
q1 q 0 (a 1-1l + Eb1
i
a1
q2 qo (a2-a1) + Eb.
~
(7 .2-9)
am-1
qm qo (am-am-1) + Yi:i-:-
~
1
qm+1 am (-qo + Eb.)
~
7.2 Deadbeat Controller with Increased Order 129
(7.2-10)
(7.2-12)
u(O) should not be chosen too small because then u(1) > u(O), which is
unsuitable in most cases.
(7.2-13)
(7.2-15)
For the same process as in example 7.1, Fig. 7.2.1 shows step responses
for given u(O) with controller parameters:
For a step change of the reference variable the desired deadbeat beha-
viour is obtained. The manipulated variable u(O) could be decreased to
40% of the value in Fig. 7.1.1. The control variable needs one more
sample time to reach the finite settling time. For a step change of
the disturbance v, a relatively well damped behaviour can be seen. The
chosen initial manipulated variable u(O) leads to a somewhat worse con-
trol performance. However, this deadbeat controller can be applied more
generally as it produces smaller amplitudes of the manipulated variable.
Figure 7.2.1 Step responses of the controlled variable y and the manipulated variable u for a prescribed u(0), for step changes of the reference variable w and the disturbance v (controller DB(v+1)).
The step responses of the deadbeat controllers used in examples 7.1 and 7.2 are shown in Fig. 7.2.2. For the controller DB(v), a negative u(1) follows the initial large positive manipulated variable u(0); this opposite control is required because of the large value of u(0). The value of u(2) is positive, and after the oscillations have decayed an integral behaviour with increasing u(k) arises.
Figure 7.2.2 Step responses of control algorithms for finite settling time. Process III. a) DB(v)  b) DB(v+1)
To avoid u(0) becoming too large, the sample time for the DB(v)-controller should be T0 ≥ 8 sec. This corresponds to
Here T_Σ is the sum of the time constants and T_95 the 95 % settling time. If that maximum possible u(0) is assumed which can be chosen from the allowable range of the manipulated variable, the sample time for the DB(v+1)-controller can be smaller than for the DB(v)-controller.
Table 7.3.2 Comparison of the sample times T0 [sec] = 2, 4, 6, 8, 10 for parameter optimized and deadbeat controllers for the example of process III. Assumption: u(0)max ≤ 4.5.
These conditions on the weighting matrices S, Q and R result from the conditions for the existence of the optimum of I, and can be discussed as follows. Meaningful solutions in the control engineering sense can only be obtained if all terms have the same sign, e.g. a positive sign. Therefore, all matrices have to be at least positive semidefinite. If S = 0, i.e. the final state x(N) is not weighted, but Q ≠ 0, i.e. all states x(0),...,x(N-1) are weighted, a meaningful optimum also exists. That means that if Q is positive definite, S can also be positive semidefinite. The converse is also true. One should, however, exclude the case where S = 0 and Q = 0, for then the states x(k) would not be weighted and only the manipulated variables would be weighted by R ≠ 0, which is nonsense. R has to be positive definite for continuous-time state controllers, as R^-1 is involved in the control law. For time-discrete state controllers, however, this requirement can be relaxed, as described later.
are not fed back. Instead, we consider the modification of the process eigenbehaviour and stabilization through state feedback. If the optimal manipulated variable u(k) is to be found, then

min I = min over u(k) { x^T(N) Q x(N) + Σ_{k=0}^{N-1} [x^T(k) Q x(k) + u^T(k) R u(k)] },   (8.1-4)

k = 0,1,2,...,N-1.
Remarks

a) According to the optimality principle of Bellman, each final section of an optimal trajectory is also optimal. This means that if the end point is known, one can determine the optimal trajectory in a backward direction.
8.1 Optimal State Controllers for Initial Values 137
min I = min over u(k) [ min over u(N-1) { x^T(N) Q x(N) + Σ_{k=0}^{N-1} [x^T(k) Q x(k) + u^T(k) R u(k)] } ]   (8.1-5)

= min over u(k) [ Σ_{k=0}^{N-2} [x^T(k) Q x(k) + u^T(k) R u(k)] + x^T(N-1) Q x(N-1) + I_{N-1,N} ]   (8.1-6)

as the two first terms are not influenced by u(N-1), and I_{N-1,N} are the costs from k = N-1 to k = N resulting from u(N-1). If the state equation

x(N) = A x(N-1) + B u(N-1)   (8.1-7)

or, transposed,

x^T(N) = x^T(N-1) A^T + u^T(N-1) B^T   (8.1-8)

is introduced, the necessary condition for the minimum is

∂/∂u(N-1) { ... } = 0.   (8.1-9)
138 8. State Controllers
Hence, using the rules for taking derivatives of vectors and matrices given in the appendix,

∂/∂u(N-1) { ... } = 0

and

u(N-1) = - (B^T Q B + R)^-1 B^T Q A x(N-1) = - K(N-1) x(N-1).   (8.1-10)

Here

K(N-1) = (B^T Q B + R)^-1 B^T Q A   (8.1-11)

and
(8.1-12)
I or min I according to Eq. (8.1-5) and Eq. (8.1-6) can be given as a function of x(k), k = 0,...,N-1 and u(k), k = 0,...,N-2. Thus the unknowns x(N) and u(N-1) can be eliminated. In order to perform this elimination, first I_{N-1,N} from Eq. (8.1-13) is substituted into Eq. (8.1-6), resulting in

min over u(N-1) { x^T(N) Q x(N) + Σ_{k=0}^{N-1} [x^T(k) Q x(k) + u^T(k) R u(k)] }

= Σ_{k=0}^{N-1} x^T(k) Q x(k) + Σ_{k=0}^{N-2} u^T(k) R u(k) + x^T(N-1) P_{N-1,N} x(N-1)

= Σ_{k=0}^{N-2} [x^T(k) Q x(k) + u^T(k) R u(k)] + x^T(N-1) Q x(N-1) + x^T(N-1) P_{N-1,N} x(N-1).   (8.1-15)
The abbreviation

P_{N-1} = P_{N-1,N} + Q   (8.1-16)

can be formed, so that

I_{N-1,N} + x^T(N-1) Q x(N-1) = x^T(N-1) P_{N-1} x(N-1).   (8.1-17)

In this abbreviation the costs of the last step and the evaluation of the corresponding initial deviation x(N-1) are included. (This compression allows a simpler formulation of the following equations.) If Eq. (8.1-16) is introduced into Eq. (8.1-15), and if the result is placed into Eq. (8.1-5), it follows that:
min I = min over u(k) [ min over u(N-2) { Σ_{k=0}^{N-2} [x^T(k) Q x(k) + u^T(k) R u(k)] + x^T(N-1) P_{N-1} x(N-1) } ]   (8.1-18)

Instead of min over u(N-1) it now reads min over u(N-2), as the optimal u(N-1) and the resulting state x(N) have been calculated and substituted. For the term min over u(N-2) one obtains, by analogy to Eq. (8.1-6),

min over u(N-2) { ... } = Σ_{k=0}^{N-2} x^T(k) Q x(k) + Σ_{k=0}^{N-3} u^T(k) R u(k) + I_{N-2,N}.   (8.1-19)
I_{N-2,N} describes the costs resulting from the last two stages

I_{N-2,N} = u^T(N-2) R u(N-2) + x^T(N-1) Q x(N-1) + I_{N-1,N}.   (8.1-20)
it follows that

u(N-2) = - (R + B^T P_{N-1} B)^-1 B^T P_{N-1} A x(N-2) = - K_{N-2} x(N-2)   (8.1-22)

with

K_{N-2} = (R + B^T P_{N-1} B)^-1 B^T P_{N-1} A.   (8.1-23)
Therefore the minimal costs I_{N-2,N} for the two last stages become, using Eq. (8.1-21):

I_{N-2,N} = x^T(N-2) P_{N-2,N} x(N-2)   (8.1-24)

with

P_{N-2,N} = A^T P_{N-1} [A - B K_{N-2}].   (8.1-25)

(8.1-26)

If the abbreviation

P_{N-2} = P_{N-2,N} + Q   (8.1-27)

is introduced again, the costs of the two last stages including the weighting of the initial deviation x(N-2) result in

I_{N-2,N} + x^T(N-2) Q x(N-2) = x^T(N-2) (P_{N-2,N} + Q) x(N-2) = x^T(N-2) P_{N-2} x(N-2).   (8.1-28)
Figure 8.1.2 Process state model with optimal state controller K for the control of an initial state deviation x(0). It is assumed that the state vector x(k) is exactly and completely measurable.
K_{N-j} = (R + B^T P_{N-j+1} B)^-1 B^T P_{N-j+1} A   (8.1-30)

P_{N-j,N} = A^T P_{N-j+1} [A - B K_{N-j}]   (8.1-31)

P_{N-j} = P_{N-j,N} + Q   (8.1-32)
with k = 0,1,...,N-1. The minimal value of the quadratic criterion is then a quadratic form in the initial state, and the resulting control law is

u(k) = - K x(k).   (8.1-33)
det [R + B^T P_{N-j+1} B] ≠ 0

has to be satisfied, but because of the recursive calculation and the existence conditions for the optimum, we must also have:

This means that the terms in the brackets have to be positive definite. This is satisfied in general by a positive definite matrix R. R = 0 can, however, also be allowed if the second term B^T P_{N-j+1} B > 0 for j = 1,2,...,N and Q ≥ 0. Since P_{N-j+1} is not known a priori, R > 0 has to be required in general.
For the closed system from Eq. (8.1-1) and Eq. (8.1-33) we have
det [z I - A + B K] = 0.   (8.1-37)
Example 8.1.1

This example uses test process III, which is the low-pass third-order process with deadtime described in section 5.4. Table 8.1.1 gives the coefficients of the matrix P_{N-j}, and Fig. 8.1.3 shows the controller coefficients k^T_{N-j} as functions of k = N-j (see also example 8.7.1).
Figure 8.1.3 Controller coefficients k_1, k_2, k_3 as functions of k = N-j.
The recursive solution of the matrix Riccati equation was started for j = 0, with R = r = 1, N = 29 and a diagonal weighting matrix Q (c.f. section 8.9.1). The coefficients of P_{N-j} and k^T_{N-j} do not change significantly after about ten stages, i.e. the stationary solution is quickly reached.
D
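The backward recursion of example 8.1.1 can be sketched as follows; the recursion implements Eqs. (8.1-30) to (8.1-32). The second-order matrices are illustrative assumptions, not the process III model.

```python
import numpy as np

def riccati_gains(A, B, Q, R, N):
    """Backward recursion, Eqs. (8.1-30) to (8.1-32); returns all gains K."""
    P = Q.copy()                    # recursion start (final state weighted by Q)
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # Eq. (8.1-30)
        P = Q + A.T @ P @ (A - B @ K)                       # Eqs. (8.1-31), (8.1-32)
        gains.append(K)
    return gains, P

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # illustrative second-order model
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
gains, P = riccati_gains(A, B, Q, R, 30)
# the gains settle to their stationary values after a few stages
```

Comparing successive gains in the list shows the fast convergence to the stationary solution noted in the example.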
In this section it was assumed that the state vector x(k) can be measured exactly and completely. This is occasionally true, for example for some mechanical or electrical processes such as aircraft or electrical networks. However, state variables are often incompletely measurable; they then have to be determined by a reference model or observer (see section 8.6).
We now consider the case where constant reference variables w(k) and disturbances n(k) arise. These constant signals can be generated [8.8] from definite initial values by a reference variable model (c.f. Figure 8.2.1). Here dim v(k) = dim w(k); for example, a step change of the reference variable w(k) is generated from a constant initial value v(0).
Figure 8.2.1 State model of a linear process with a reference and disturbance variable model for the generation of constant reference signals w(k) and disturbances n(k). The resulting state controller is illustrated by dashed lines.
λ(k+1) = λ(k)
n(k) = Λ λ(k)

with dim λ(k) = dim n(k) (see Fig. 8.2.1). Based on the initial values λ(0) and n(0), a constant n(k) can be generated.

Using other structures of the preset model with state variables v(k) or λ(k), other classes of external signal can be modelled. For example a linearly increasing signal of first order can be obtained from

v_1(k+1) = v_1(k) + v_2(k),   v_2(k+1) = v_2(k).
The state variables of the process model, the reference variable model and the disturbance model are combined into an error state variable, so that the control deviation can be expressed in terms of it. The overall model is then described by

[ x(k+1)          ]   [ A  0 ] [ x(k)         ]   [ B ]
[ v(k+1) - λ(k+1) ] = [ 0  I ] [ v(k) - λ(k)  ] + [ 0 ] u(k).   (8.2-5)
If

ξ_1(k) = v(k) - λ(k)   (8.2-7)

is introduced, the control task reduces to the control of an initial state deviation by using a feedback state controller. Hence we are left with the synthesis of an optimal feedback state controller for the initial values of the system. Unlike the control of initial values x(0) (section 8.1), this state controller controls the system with initial values ξ(0).
and therefore with Eq. (8.2-6), Eq. (8.2-7) and Eq. (8.2-11)

e(k) = [C I] [ ε(k) ; v(k) - λ(k) ].   (8.2-12)

The overall model of Eq. (8.2-5) can be represented by using the abbreviations

x*(k) = [ ε(k) ; v(k) - λ(k) ],   C* = [C 0]   (8.2-13)

and the corresponding matrices A* and B*, as follows:

x*(k+1) = A* x*(k) + B* u(k)   (8.2-14)

e(k) = C* x*(k)   (8.2-15)

with

K* = [K I].   (8.2-17)

Hence A* is an (m+r)x(m+r)-matrix.
Since the matrix is block triangular with lower diagonal block (z-1)I,

det [z I - A* + B* K*] = det [z I - A + B K] (z-1)^q = 0.   (8.2-18)
Assuming the given models for external disturbances, the control system for q manipulated variables acquires q poles at z = 1, i.e. q "integral actions", which compensate for the offsets. For a single input/single output system the characteristic equation becomes

det [z I - A + B k^T] (z - 1) = 0.

Hence it has (m+1) poles. Note that the characteristic equation has poles at z = 1, as the system is open loop with respect to the additional state variables v(k) and λ(k).
[8.5]

det [z I - A + B K] = 0   (8.3-4)
x(k+1) = [  0     1     0   ...  0
            0     0     1   ...  0
           ...
          -a_m  -a_{m-1}   ... -a_1 ] x(k) + [0; 0; ...; 1] u(k)   (8.3-5)

With the state feedback

u(k) = - k^T x(k)   (8.3-6)

the closed loop becomes

x(k+1) = [A - b k^T] x(k)   (8.3-7)

with

A - b k^T = [      0                1          ...       0
              ...
              (-a_m - k_m)  (-a_{m-1} - k_{m-1}) ... (-a_1 - k_1) ].

Hence, the characteristic equation is

det [z I - A + b k^T] = z^m + (a_1 + k_1) z^{m-1} + ... + (a_{m-1} + k_{m-1}) z + (a_m + k_m) = 0.   (8.3-8)
The desired poles z_i, i.e. the characteristic equation

(z - z_1)(z - z_2) ... (z - z_m) = z^m + α_1 z^{m-1} + ... + α_m = 0,   (8.3-10)

are first determined appropriately; then the coefficients α_i are calculated and the k_i are determined from Eq. (8.3-9), k_i = α_i - a_i. The multivariable system case is treated for example in [2.19]. See also section 21.2.
It should be noted, however, that by placing the poles only the single eigenoscillations are determined. As the interaction of these eigenoscillations and the response to external disturbances is not considered, design methods in which the control and manipulated variables are directly evaluated are generally preferred. The advantage of the above method of pole assignment lies in the especially clear interpretation of the changes of the single coefficients a_i of the characteristic equation caused by the feedback constants k_i. As has been shown in chapter 7, the characteristic equation for deadbeat control is z^m = 0. Eq. (8.3-8) shows that this occurs when a_i + k_i = 0. This state deadbeat control will be considered in section 8.5.
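Pole assignment in the controllable canonical form can be sketched as follows; the coefficients a_i are illustrative assumptions, and the desired coefficients are chosen as α_i = 0 to show the deadbeat case.

```python
import numpy as np

a = [-1.5, 0.7, -0.1]        # a_1, a_2, a_3 (illustrative coefficients)
alpha = [0.0, 0.0, 0.0]      # desired coefficients: z^3 = 0, i.e. deadbeat

k = [al - ai for al, ai in zip(alpha, a)]   # k_i = alpha_i - a_i, Eq. (8.3-9)

m = len(a)
A = np.zeros((m, m))
A[:-1, 1:] = np.eye(m - 1)                  # controllable canonical form
A[-1, :] = [-ai for ai in reversed(a)]      # last row: -a_m ... -a_1
b = np.zeros((m, 1)); b[-1, 0] = 1.0
K = np.array([list(reversed(k))])           # ordered to match A's last row
F = A - b @ K                               # closed-loop matrix, Eq. (8.3-7)
# all eigenvalues of F lie at z = 0 (deadbeat)
```

The same three lines defining k carry over to any desired pole set: expand (z - z_1)...(z - z_m), read off the α_i, and subtract the a_i.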
A linear time-invariant process with multiple input and output signals is considered.
The system is transformed into diagonal form by

Λ = T A T^-1   (8.4-5)
B_t = T B   (8.4-6)
C_t = C T^-1.   (8.4-7)

The characteristic equation

det [z I - Λ] = 0   (8.4-8)

is unchanged, since

det [z I - Λ] = det [z I - T A T^-1] = det T [z I - A] T^-1 = det [z I - A].   (8.4-9)
The diagonal elements of Λ are the eigenvalues of Eq. (8.4-4), which are identical to the eigenvalues of Eq. (8.4-1). The transformation matrix T can be determined as follows [5.17]. Eq. (8.4-5) is written in the form

A T^-1 = T^-1 Λ.   (8.4-10)

With

T^-1 = [v_1 v_2 ... v_m]   (8.4-11)

and

Λ = diag(z_1, z_2, ..., z_m)   (8.4-12)

it follows that

A v_i = z_i v_i   (8.4-13)

or

[z_i I - A] v_i = 0.   (8.4-14)
Eq. (8.4-14) yields m equations for the m unknown vectors v_i which have m elements v_i1, v_i2, ..., v_im. If the trivial solution v_i = 0 is excluded, there is no unique solution of the equation system Eq. (8.4-14). For each i only the direction and not the magnitude of v_i is fixed. The magnitude can be chosen such that in B_t or C_t only elements 0 and 1 appear [2.19]. The vectors v_i are called eigenvectors. For a single input/single output process, the state representation in diagonal form corresponds to the partial fraction expansion of the z-transfer function for m different eigenvalues

G(z) = c_t^T [z I - Λ]^-1 b_t = c_t1 b_t1/(z - z_1) + c_t2 b_t2/(z - z_2) + ... + c_tm b_tm/(z - z_m).   (8.4-15)
This equation also shows that the b_ti and c_ti cannot be uniquely determined. If e.g. b_ti = 1 is chosen, then the c_ti can be calculated by comparison of Eq. (8.4-15) with the partial fraction expansion of G(z).   (8.4-16)

(8.4-17)
(8.4-18)
(8.4-19)

If K_t is also diagonal,

K_t = diag(k_1, k_2, ..., k_m).   (8.4-20)
8.4 Modal State Control 155
The realizable control vector u(k) is calculated from Eq. (8.4-16) and Eq. (8.4-6), which yields:

(8.4-22)

This requires

det B ≠ 0.

This means that the m eigenvalues of A or Λ can only be influenced independently from each other if m different manipulated variables are at our disposal. The process order and the number of the manipulated variables therefore have to be equal.
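The diagonal transformation and the partial fraction expansion of Eq. (8.4-15) can be sketched numerically; the second-order matrices below are illustrative assumptions with distinct real eigenvalues.

```python
import numpy as np

A = np.array([[0.8, 0.1], [0.0, 0.5]])   # illustrative, distinct real eigenvalues
b = np.array([[1.0], [1.0]])
c = np.array([[1.0, 0.0]])

z_i, Tinv = np.linalg.eig(A)             # columns of Tinv are the eigenvectors v_i
T = np.linalg.inv(Tinv)
bt = T @ b                               # Eq. (8.4-6)
ct = c @ Tinv                            # Eq. (8.4-7)

z = 1.3                                  # evaluate G(z) both ways, Eq. (8.4-15)
G_direct = (c @ np.linalg.inv(z * np.eye(2) - A) @ b).item()
G_modal = sum(ct[0, i] * bt[i, 0] / (z - z_i[i]) for i in range(2))
```

Both evaluations agree, illustrating that the residues c_ti b_ti, and not the b_ti and c_ti individually, are fixed by G(z).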
with

(8.4-25)
(8.4-26)
(8.4-27)
(8.4-28)

If all modal state variables are fed back into the single input (b_ti = 1), the closed-loop system matrix becomes

F = [ (z_1 - k_1)   -k_2         ...  -k_m
      -k_1          (z_2 - k_2)  ...  -k_m
      ...
      -k_1          -k_2         ...  (z_m - k_m) ].   (8.4-29)

The single state variables are no longer decoupled, and the eigenvalues of F change compared with Λ in a coupled way, so that the supposed advantage of modal control cannot be attained under the assumption of Eq. (8.4-27).
If, however, a single state variable x_tj is fed back,

F = [ z_1  ...  -k_j         ...  0
      0    ...  (z_j - k_j)  ...  0
      0    ...  -k_j         ...  z_m ]   (8.4-30)

and only the eigenvalue z_j is changed.
As modal state control for controller design considers only pole placement, the remarks made at the end of section 8.3 are also valid here. Note, however, that modal control can advantageously be applied to distributed parameter processes with several manipulated variables [8.11], [3.10], [8.12].
It was shown in section 3.2.2 that this process can be driven from any initial state x(0) to the zero state x(N) = 0 in N = m steps. The required manipulated variable can be calculated using Eq. (3.2-7). It can also be generated by a state feedback

u(k) = - k^T x(k).   (8.5-2)

Then we have

x(N) = [A - b k^T]^N x(0).
(8.5-4)

This requires the closed-loop characteristic coefficients to vanish, which is reached after

N = m

steps with a_i + k_i = 0, i.e. k_i = -a_i, so that

det [z I - A + b k^T] = z^m = 0.   (8.5-7)
Using the controllable canonical form, all state variables x_i are multiplied by a_i in the state controller and are fed back with opposite sign to the input, as in the state model of the process itself (c.f. Figure 3.6.3). Therefore, m times in succession, zeros of the first state variable are generated and shifted forward to the next state variables, so that for k = m all states become zero [2.19]. The deadbeat controller DB(v) described in section 7.1 drives the process from any initial state x(0) ≠ 0 in m steps to a constant output.
where we assume that only the input vector u(k) and the output vector y(k) can be measured without error, and that the state variables x(k) have to be reconstructed. The constant observer feedback matrix H must be chosen such that x̂(k+1) approaches x(k+1) asymptotically as k→∞. Figure 8.6.1 leads to the following observer equation
Figure 8.6.1 Block diagram of the process and the state observer.

For the observer error it is required that

lim x̃(k) = 0   as k → ∞.
8.6 State Observers 161
det [z I - A + H C] = γ_m + γ_{m-1} z + ... + γ_1 z^{m-1} + z^m = 0   (8.6-6)

Since for any square matrix W we have det W = det W^T,

det [z I - A + H C] = det [z I - A^T + C^T H^T].   (8.6-7)

This is the characteristic equation of a system

x(k+1) = A^T x(k) + C^T u(k)   (8.6-8)

with feedback

u(k) = - H^T x(k),

i.e.

x(k+1) = [A^T - C^T H^T] x(k),   (8.6-9)

so that the equations of the state controller design can be used. The observer matrix H can then be determined, e.g. by:
a) Pole assignment

The matrix A - H C can be brought into the observable canonical form as in section 8.3, so that

x̂(k+1) = [ 0  ...  0  (-a_m - h_m)
           1  ...  0  (-a_{m-1} - h_{m-1})
           ...
           0  ...  1  (-a_1 - h_1) ] x̂(k) + B u(k) + H y(k)   (8.6-12)

and analogously to Eq. (8.3-9) we have

h_i = γ_i - a_i,

where the γ_i are the coefficients of Eq. (8.6-6) and must be given.
b) Deadbeat behaviour

Choosing

h_i = - a_i   (8.6-14)

the observer attains a minimal settling time and therefore has deadbeat behaviour (c.f. section 8.5).
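The deadbeat observer choice h_i = -a_i of Eq. (8.6-14) can be sketched as follows; the second-order coefficients are illustrative assumptions, and the output is taken as the last state of the observable canonical form.

```python
import numpy as np

a = [-1.2, 0.32]                          # a_1, a_2 (illustrative)
m = len(a)
A = np.zeros((m, m))
A[1:, :-1] = np.eye(m - 1)                # observable canonical form
A[:, -1] = [-ai for ai in reversed(a)]    # last column: -a_m ... -a_1
c = np.zeros((1, m)); c[0, -1] = 1.0      # y(k) = x_m(k)

h = A[:, [-1]].copy()                     # h_i = -a_i, Eq. (8.6-14)
F = A - h @ c                             # error dynamics of the observer
xerr = np.array([[1.0], [-2.0]])          # arbitrary initial observer error
for _ in range(m):
    xerr = F @ xerr                       # vanishes after m steps (deadbeat)
```

Subtracting h from the last column of A zeroes it, so F is nilpotent and the observer error dies out in exactly m steps.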
Analogously to the recursion of the matrix Riccati equation one obtains

H^T_{N-j} = [R_b + C P_{N-j+1} C^T]^-1 C P_{N-j+1} A^T

P_{N-j} = Q_b + A P_{N-j+1} A^T - H_{N-j} [R_b + C P_{N-j+1} C^T] H^T_{N-j}.
Hence the behaviour of the observer, and therefore its eigenbehaviour, can be chosen in several ways. In practical observer realization, the noise that is always present in the output variable limits the attainable settling time. For the observers described so far, all state variables x̂(k) are calculated. However, some state variables can often be determined directly, e.g. from the output variable y(k), so that reduced-order observers can be derived (see section 8.8). Figure 8.6.1 shows that the state variables of the observer follow the process states with no lag for changes in u(k). However, they lag for initial values x(0), and disturbances affecting the output variable y(k) lead to errors in the observer states.
8.7 State Controllers with Observers 163
For the state controller described in sections 8.1 to 8.5 it was assumed
that the state variables of the process can be measured exactly and com-
pletely. However, this is not the case for most processes, so that in-
stead of the actual process state variables ~(k) (c.f. Eq. (8.1-33)),
state variables reconstructed by the observer have to be used by the
control law. Hence:
Figure 8.7.1 A state controller with an observer for initial values x(0)
The complete state of the closed control system follows from Eq. (8.1-1), Eq. (8.6-3) and Eq. (8.7-1). The states x(k) and x̂(k) influence each other. From Eq. (8.1-36) the eigenbehaviour of the process with state feedback but without observer is described by det [z I - A + B K] = 0. Introducing the observer error x̃(k) = x(k) - x̂(k) by the transformation

T = T^-1 = [ I  0
             I -I ]

gives:

[ x(k+1) ]   [ A - B K     B K    ] [ x(k) ]
[ x̃(k+1) ] = [    0      A - H C  ] [ x̃(k) ]   (8.7-5)

y(k) = [C 0] [ x(k) ; x̃(k) ].   (8.7-6)

The composite system matrix is denoted by A*.
Therefore, the poles of the control system with state controller and observer are the poles of the control system with no observer together with the poles of the observer. The poles of the controller and the poles of the observer can be determined independently, as they do not influence each other. This is the result of the so-called separation theorem. However it should be noted that, of course, the time behaviour of x̃(k) influences x(k). If both the state controller and the observer are designed for deadbeat behaviour,

det [z I - A*] = z^m · z^m = z^{2m}.   (8.7-8)
Hence the steady state after a non-zero initial value is reached only after 2m sampling steps, and not in m steps as with the deadbeat controller given by section 8.5. In this case, the simple deadbeat controller discussed in chapter 7 is faster than the state controller with observer. Section 8.7.2 and section 8.8 show how the observer lags can be partially overcome.
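The separation theorem behind Eq. (8.7-5) can be sketched numerically: the poles of the composite system equal the union of the controller poles eig(A - BK) and the observer poles eig(A - HC). All matrices below are illustrative assumptions.

```python
import numpy as np

A = np.array([[1.0, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.3, 0.9]])       # illustrative controller gain
H = np.array([[1.1], [0.4]])     # illustrative observer gain

# composite matrix A* of Eq. (8.7-5), block triangular
Astar = np.block([[A - B @ K, B @ K],
                  [np.zeros((2, 2)), A - H @ C]])
poles = np.sort_complex(np.linalg.eigvals(Astar))
expected = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                           np.linalg.eigvals(A - H @ C)]))
```

Since A* is block triangular, its spectrum is exactly the two sets combined, which is the content of the separation theorem.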
For processes with m state variables and r outputs, the observer feedback matrix H* has dimension (m+r) x r, and can be determined by the methods given in section 8.6.
Using the observer, the controller equation becomes, from Eq. (8.2-16),
Fig. 8.7.2 shows the resulting block diagram for the case of constant changes of the command variable. Fig. 8.7.3 shows the corresponding scheme with the abbreviations used in Eq. (8.2-13). For changes of disturbances n(k) or reference variables w(k), the unknown state variables x̂*(k) are first determined by the observer, such that the assumed disturbance or reference variable model generates exactly n(k) or w(k) at the output.
Fig. 8.7.2 indicates that for the manipulated variable the component u_1(k) results as

u_1(k) = ζ(k)   (8.7-11)

ζ(k+1) = ζ(k) + H(rxr) Δe(k).

Here, H(rxr) is the corresponding part of H*. For a single output variable (r = 1) it follows that:

u_1(z) = [z^-1 / (1 - z^-1)] h_{m+1} Δe(z).   (8.7-12)
(8.7-13)
Figure 8.7.2 State controller with observer (drawn for constant reference variable changes). Compare with Fig. 8.2.1.
Figure 8.7.3 State controller with observer for constant reference variables w(k) and disturbances n(k)
The controller poles and the observer poles appear separately in this hypothetical system. The real behaviour is obtained by coupling the process of Eq. (8.6-1) with the extended observer Eq. (8.7-9) and the controller of Eq. (8.7-10) and Eq. (8.2-4). This yields the state equations of the coupled overall system in the states x(k) and x̂*(k), with (w(k) - n(k)) as the external input.   (8.7-15)
Hence, after z-transformation
In Figure 8.7.4 the behaviour of the controlled and the manipulated variable for the process III and the state controller designed for external disturbances is shown for a step change in the disturbance n(k). No offset in the controlled variable arises. The manipulated variable, however, is changed only after one sampling interval. This delay occurs as all changes Δe(k) of the control deviation have to pass one element z^-1 in the observer before a change of the manipulated variable can happen (see Figure 8.7.2).
Figure 8.7.4 Controlled variable and manipulated variable for process III with state controller and observer, for a step change of the disturbance n(k).
(see Figures 8.7.5 and 8.7.6). An undelayed output variable y(k) can
also be included by using a reduced-order observer (see section 8.8).
Example 8.7.1
x_d(k+1) = A_d x_d(k) + b_d u(k)

y(k) = [0 0 1 0] [x_1(k)  x_2(k)  x_3(k)  x_4(k)]^T

or

y(k) = c_d^T x_d(k).
A block diagram is shown in Fig. 8.7.5. The observer for step changes of external variables w(k) or n(k) has its parameters given by Eq. (8.7-9) and Eq. (8.2-13): the extended matrices A*, b* and c*^T consist of the process model in canonical form (coefficients a_i and b_i), bordered by the state of the integrating external signal model.
Figure 8.7.5 Block diagram of process III with state controller and observer for external disturbances, with bypass of the initial delay.
With the chosen design matrices the observer feedback becomes

h*^T = [0.061  -0.418  0.984  1.217  1.217].
Fig. 8.7.6 shows the time responses of the resulting control and the
manipulated variables.
The algorithms required for one sample of this process of total order
m+d = 4 are:
(SC(1): r = 0.043;  SC(2): r = 0.18)
Figure 8.7.6 Time response of the controlled variable and the manipulated variable for process III with a state controller for external disturbances and a modified observer with 'bypass' of the initial delay. Step change disturbance n(k). [8.5].
- state estimates:

x̂_1(k) = -a_3 x̂_3(k-1) + b_3 x̂_4(k-1) + h_1* Δe(k-1)
x̂_2(k) = x̂_1(k-1) - a_2 x̂_3(k-1) + b_2 x̂_4(k-1) + h_2* Δe(k-1)
x̂_3(k) = x̂_2(k-1) - a_1 x̂_3(k-1) + b_1 x̂_4(k-1) + h_3* Δe(k-1)
x̂_4(k) = x̂_5(k-1) + u(k-1) + h_4* Δe(k-1)
x̂_5(k) = x̂_5(k-1) + h_5* Δe(k-1)

- manipulated variable:
x(k)  (m x 1)
u(k)  (p x 1)
y(k)  (r x 1).
(8.8-5)
and the transformed process is

A_t = T A T^-1,   B_t = T B,   C_t = C T^-1 = [0 I].   (8.8-6)
(c.f. Eq. (8.6-3)). With the identity observer of complete order m, the output error given by Eq. (8.6-2) is used for the error between the observer and the process. However, as the reduced-order observer does not explicitly calculate y(k), and as y(k) contains no information concerning x_a(k), the observer error ẽ_t(k) must be redefined. Here, Eq. (8.8-11) can be used because it yields an equation error ẽ_t(k) if x̂_a(k) is not yet adapted to the measurable variables y(k), y(k+1) and u(k):
8.8 A State Observer of Reduced Order 177
ẽ_t(k) = y(k+1) - A_t21 x̂_a(k) - A_t22 y(k) - B_t2 u(k).   (8.8-13)
x̂_a(k+1) = A_t11 x̂_a(k) + A_t12 y(k) + B_t1 u(k) + H ẽ_t(k)   (8.8-14)

(8.8-17)

The observer state error is

x̃_a(k+1) = x_a(k+1) - x̂_a(k+1).   (8.8-19)
Figure 8.8.1 Block diagram of the reduced-order state observer with the submatrices A_t11, A_t12, A_t21, A_t22 and the feedback H.
(8.8-20)
(8.8-21)
If the state controller is not designed for finite settling time (deadbeat behaviour), then comparatively many free parameters have to be suitably chosen in designing state controllers, compared with other structure-optimal controllers. For a design with no performance criterion, either the coefficients of the characteristic equation (section 8.3) or the eigenvalues (section 8.4) have to be chosen. The quadratic performance criterion requires the choice of the weighting matrices Q and R. If R is assumed to be diagonal,

R = [ r_1  0   ...  0
      0    r_2
      ...
      0    ...      r_p ]   (8.9-1)
To give a positive definite R, the elements r_i must be positive for all i = 1,2,...,p. In special cases where R can be positive semidefinite, certain r_i can be zero (see section 8.1). If Q is chosen diagonal,

Q = diag(q_1, q_2, ..., q_m),   (8.9-2)

Q has to be at least positive semidefinite, so that q_i ≥ 0 (see section 8.1).
8.9 Choice of Weighting Matrices and Sample Time 181
If instead of the state variables the control variable is weighted by y^T(k) y(k), it follows with y(k) = C x(k) that

Q = C^T C.   (8.9-3)

For a single input/single output process this yields

R = r,   Q = c c^T.   (8.9-4)
case for processes with real poles, but not for processes with conjugate complex poles if the sampling is close to the half period of the natural frequencies [8.16]. Generally, the smallest costs are attained for T0 = 0, i.e. for the continuous state controller, and for very small sample times the control performance differs only little from the continuous case. Only for larger sample times does the control performance deteriorate significantly.
Current experience is that the sample time for state controllers can be
chosen using the rules given in sections 5.5 and 7.3.
Processes with small dead time compared with other process dynamics
have already been discussed in some examples. A small dead time can re-
place several small time constants or can describe a real transport de-
lay. If, however, the dead time is large compared with other process
dynamics some particular cases arise which are considered in this chap-
ter. Large dead times are exclusively pure transport delays. Therefore
one has to distinguish between pure deadtime processes and those which
have additional dynamics.
(or the corresponding difference equation - see Eq. (3.4-13)). Eq. (9.1-1) follows either by the replacement of d by d' = d-1, with b_1 = b and b_2,...,b_m = 0 and a_1,...,a_m = 0,

G_P(z) = b_1 z^-1 z^-d' = b z^-d   (9.1-4)

or simply by taking d = 0 in z^-d and m = d in B(z^-1): b_m = b and b_1,...,b_{m-1} = 0 and a_1,...,a_m = 0,

G_P(z) = b_m z^-m = b z^-d.   (9.1-5)
In all cases A can have different canonical forms (see section 3.6). For Eq. (9.1-6) and Eq. (9.1-8) A has dimension mxm, but for Eq. (9.1-7) the dimension is (m+d)x(m+d). If the dead time is included in the system matrix A, d more state variables must be taken into account. Though the input/output behaviour of all processes is the same, for state controller design the various cases must be distinguished as they lead to different controllers. Inclusion of the dead time at the input or at the output depends on the technological structure of the process and in general can be easily determined. For a pure dead time, including the dead time within the system matrix in controllable canonical form one obtains Eq. (3.6-36) with a state vector x(k) of dimension d. In contrast, for Eq. (9.1-6) and Eq. (9.1-8) A = a = 0 results and d has to be replaced by d' = d-1. In this case one can no longer use a state representation.
Note that as well as dead time at the input or the output, dead time can also arise between the state variables. In the continuous case,
9.2 Deterministic Controllers for Deadtime Processes 185
differential-difference equations result. For discrete-time signals, however, these dead time systems can be reduced to Eq. (9.1-7) by extending the state vector and the system matrix.
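The extension of the state vector by d shift states, as in Eq. (9.1-7), can be sketched as follows; the first-order process is an illustrative assumption.

```python
import numpy as np

def augment_deadtime(A, B, d):
    """Extend (A, B) by d shift states that delay the input by d samples."""
    m = A.shape[0]
    Ad = np.zeros((m + d, m + d))
    Ad[:m, :m] = A
    Ad[:m, m] = B[:, 0]                  # process is driven by the first shift state
    for i in range(d - 1):
        Ad[m + i, m + i + 1] = 1.0       # shift chain for the dead time
    Bd = np.zeros((m + d, 1)); Bd[-1, 0] = 1.0
    return Ad, Bd

A = np.array([[0.9]]); B = np.array([[0.5]]); d = 3
Ad, Bd = augment_deadtime(A, B, d)

x = np.zeros((1 + d, 1)); y = []
u = [1.0] + [0.0] * 7                    # input pulse at k = 0
for k in range(8):
    y.append(x[0, 0])
    x = Ad @ x + Bd * u[k]
# the pulse first appears in the output after d + 1 steps
```

The (m+d) x (m+d) matrix Ad makes the dead time part of the eigenbehaviour, which is why the state-controller designs of chapter 8 can then be applied directly.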
G_w(z) = G_P(z)/K_P = B(z^-1) z^-d / [K_P A(z^-1)]   (9.2-1)

The closed loop transfer function is then equal to the process transfer function with unity gain - a reasonable requirement for dead time processes. The predictor controller then becomes, using Eq. (6-4) and Eq. (9.2-1):

G_R(z) = A(z^-1) / [K_P A(z^-1) - B(z^-1) z^-d]   (9.2-2)
z^d z^m A(z^-1) = z^d [z^m + a_1 z^{m-1} + ... + a_{m-1} z + a_m] = 0.   (9.2-3)

The characteristic equations of the process and the closed loop are identical. Therefore, the predictor controller can only be applied to asymptotically stable processes. In order to decrease the large sensitivity of the predictor controller of Reswick (see section 9.2.2) to changes in the dead time, Smith introduced a modification [9.2], [9.3], [9.4], [5.14] so that the closed loop behaviour

G_w(z) = K_R G_P(z) G'(z)   (9.2-4)
G_R(z) = (1/b) · 1/(1 - z^-d)   (9.2-7)

or the difference equation

u(k) = u(k-d) + (1/b) e(k)

with parameters

q_0' = 1/(2b),   q_1' = - q_0' (d-2)/d   (9.2-9)

i.e. the gain factor 1/(2b) and the integration factor q_0' + q_1' = 1/(bd).
If the assumed dead time is exact, the characteristic equation of the loop is

z^d = 0.   (9.2-10)

If the process has a dead time d+1 instead, it becomes

z^{d+1} - z + 1 = 0.   (9.2-11)
For d ≥ 1 the roots are on or outside the unit circle, giving rise to instability (c.f. Table 9.2.2). If the process has a dead time d-1 we have

z^d + z - 1 = 0.   (9.2-12)

In this case instability occurs for d ≥ 2 (Table 9.2.2). Table 9.2.3 shows the largest magnitudes of the unstable roots for d = 1, 2, 5, 10 and 20; for very large dead time the feedback loop with the cancellation controller is so sensitive to changes of the process dead time by one sampling unit that instability is induced. Therefore in these cases this controller can only be applied if the dead time is known exactly.
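The root magnitudes quoted in Tables 9.2.2 and 9.2.3 can be reproduced numerically. The following is a small sketch for the mismatched characteristic polynomial of Eq. (9.2-11):

```python
import numpy as np

def max_root_magnitude(coeffs):
    """Largest |z_i| of the polynomial with the given coefficients."""
    return max(abs(r) for r in np.roots(coeffs))

# dead time too small by one sample, Eq. (9.2-11): z^(d+1) - z + 1 = 0
m1 = max_root_magnitude([1.0, -1.0, 1.0])        # d = 1: z^2 - z + 1
m2 = max_root_magnitude([1.0, 0.0, -1.0, 1.0])   # d = 2: z^3 - z + 1
# m1 = 1.0 (roots on the unit circle), m2 ≈ 1.325 (cf. Table 9.2.2)
```

The same routine applied to Eq. (9.2-12) and to the PI cases of Eqs. (9.2-14) to (9.2-16) reproduces the remaining table entries.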
If a PI-controller (2PC-2) is used, the characteristic equation becomes

1 - z^-1 + b z^-d (q_0' + q_1' z^-1) = 0   (9.2-13)

or, for the exact dead time d,

2 z^{d+1} - 2 z^d + z - (d-2)/d = 0.   (9.2-14)

For a process dead time d+1 this becomes

2 z^{d+2} - 2 z^{d+1} + z - (d-2)/d = 0   (9.2-15)

and for a dead time d-1

2 z^d - 2 z^{d-1} + z - (d-2)/d = 0.   (9.2-16)
Tables 9.2.4 and 9.2.5 show the magnitudes of the resulting roots. If
the case d=1 is excluded, no instability occurs. Therefore the feedback
loop with the PI-controller is less sensitive to changes in process
Table 9.2.2 Magnitudes |z_i| of the roots of the characteristic equations (9.2-10), (9.2-11), (9.2-12) for dead-time processes with the controller G_R(z) = 1/(1-z^-d)

Process     | d = 1       | d = 2               | d = 5
z^-d        | 0; 0        | 0; 0; 0             | 0 (fivefold)
z^-(d+1)    | 1.0; 1.0    | 1.325; 0.869; 0.869 | 1.126; 1.126; 1.050; 1.050; 0.846; 0.846
z^-(d-1)    | 0.5         | 1.62; 0.618         | 1.151; 1.151; 1.000; 1.000; 0.755
Table 9.2.4 Magnitudes |z_i| of the roots of the characteristic equations (9.2-14), (9.2-15), (9.2-16) for dead-time processes with the PI-controller of Eq. (9.2-9)

Process     | d = 1               | d = 2                    | d = 5
z^-d        | 0.707; 0.707        | 0; 0.707; 0.707          | 0.886; 0.886; 0.829; 0.829; 0.701; ...
z^-(d+1)    | 1.065; 1.065; 0.441 | 0.941; 0.941; 0.565; ... | 0.923; 0.923; 0.858; 0.858; 0.856; 0.856; ...
z^-(d-1)    | 0.333               | 0.500; 0                 | 0.796; 0.796; 0.789; 0.789; 0.760
Table 9.2.3 Largest magnitudes |z_i|max of the roots of the characteristic equation for dead-time processes with controller G_R(z) = 1/(1-z^-d)

Process    | d = 1 | 2     | 5     | 10    | 20
z^-d       | 0     | 0     | 0     | 0     | 0
z^-(d+1)   | 1.0   | 1.325 | 1.126 | 1.068 | 1.034
z^-(d-1)   | 0.5   | 1.618 | 1.151 | 1.076 | 1.036
Table 9.2.5 Largest magnitudes |z_i|max of the roots of the characteristic equation for dead-time processes with the PI-controller of Eq. (9.2-9)

Process    | d = 1 | 2     | 5     | 10    | 20
z^-d       | 0.707 | 0.707 | 0.886 | 0.938 | 0.970
z^-(d+1)   | 1.065 | 0.941 | 0.923 | 0.951 | 0.974
z^-(d-1)   | 0.333 | 0.500 | 0.796 | 0.923 | 0.967
State Controller

If the dead time as in Eqs. (9.1-7) and (3.6-41) is included in the system matrix, and assuming that all state variables are directly measurable, one obtains for the pure dead-time process with a state controller (see Eq. (8.3-8)) the characteristic equation

det [z I - A + b k^T] = z^d + k_1 z^{d-1} + ... + k_{d-1} z + k_d = (z - z_1)(z - z_2) ... (z - z_d) = 0.   (9.2-17)

Hence an open loop dead-time process with vanishing state feedback has the smallest settling time for initial values x(0), corresponding to deadbeat behaviour (z^d = 0). If the state feedback is not to vanish, the poles z_i in Eq. (9.2-17) must be nonzero. As the state variables introduced in the process model Eq. (3.6-41) cannot in general be directly measured, the state variables have to be observed or estimated. Then the question arises as to whether the state controllers with the state observers of sections 8.6 and 8.7, or the state estimators of sections 22.3, 15.2 and 15.3, have advantages over the input/output controllers discussed above. This is considered in the following sections using the results of digital computer simulations.
and the low-pass process III (see Eq. (5.4-4) and Appendix) with dead time d = 10

GP(z^-1) = y(z)/u(z) = B(z^-1)/A(z^-1) z^-d.   (9.3-2)

The resulting closed-loop behaviour for step changes of the set point is shown in Fig. 9.3.1 and Fig. 9.3.2. The root-mean-squared (rms) control error

Se = sqrt( (1/(M+1)) sum_{k=0}^{M} e_w^2(k) )   (9.3-3)

and the rms change of the manipulated variable

Su = sqrt( (1/(M+1)) sum_{k=0}^{M} [u(k) - u(infinity)]^2 )   (9.3-4)

are shown in Fig. 9.3.3 for M = 100 and for the dead time dE chosen for the design (which is exact for dE = d = 10, too small for dE = 8 and 9, or too large for dE = 11 and 12). Table 9.3.1 shows the resulting controller parameters.
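As an illustration of how Se and Su according to Eqs. (9.3-3) and (9.3-4) are obtained, the following sketch simulates the pure dead-time process GP(z) = z^-d with d = 10 under a simple integral-acting controller; the gain q0 = 0.05 is an arbitrary illustrative choice, not one of the controller settings investigated here:

```python
import math

d, q0, w = 10, 0.05, 1.0      # dead time, integral gain (illustrative), set point
N, M = 300, 100               # simulation length, evaluation horizon

y = [0.0] * N
u = [0.0] * N
e = [0.0] * N
e[0] = w - y[0]
for k in range(1, N):
    y[k] = u[k - d] if k >= d else 0.0   # pure dead-time process y(k) = u(k-d)
    e[k] = w - y[k]                      # control error
    u[k] = u[k - 1] + q0 * e[k]          # integral-acting control algorithm

# rms control error, Eq. (9.3-3), and rms manipulation effort, Eq. (9.3-4),
# with u(infinity) approximated by the final value of u
Se = math.sqrt(sum(e[k] ** 2 for k in range(M + 1)) / (M + 1))
Su = math.sqrt(sum((u[k] - u[-1]) ** 2 for k in range(M + 1)) / (M + 1))
print(Se, Su, y[-1])
```

Because the dead time delays any corrective action by d samples, the error stays at its full value for the first d steps, which dominates Se.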
9.3 Comparison of the Control Performance
Figure 9.3.1 Controlled variable y(k) and manipulated variable u(k) for the pure dead-time process GP(z) = z^-d with d = 10 and a step change in the set point (controllers 3PC-3, 2PC-2, DB(v), PREC, DB(v+1), SC)
9. Controllers for Processes with Large Deadtime
[Figure 9.3.2 Controlled variable y(k) and manipulated variable u(k) for process III with dead time d = 10 and a step change in the set point (controllers 3PC-2, 3PC-3, DB(v+1), PREC)]
Figure 9.3.3 The rms control error Se and the change of the manipulated variable Su as functions of the dead time dE used for the design. a) pure dead-time process z^-d with d = 10; b) process III with dead time d = 10
Table 9.3.1 Parameters of the control algorithms for the pure dead-time process GP = z^-d with d = 10 and for process III, GP = B(z^-1)/A(z^-1) z^-d with d = 10

                GP = z^-d               Process III (d = 10)
Parameter       3PC-3     2PC-2         3PC-3      3PC-2      2PC-2
                (r=0)     (r=0)         (r=0)                 (r=0)
q0              1.25      1             3.8810     9.5238     1
q1              -0.25     0             -0.1747    -14.2762   -1.4990
q2              0         0             -5.7265    6.7048     0.7040
q3              0         0             3.5845     -0.9524    -0.1000
q4              0         0             -0.5643    0          0
p0              1         1             1          1          1.0000
p1              0         0             0          0          -1.4990
p2              0         0             0          0          0.7040
p3              0         0             0          0          -0.1000
p4              0         0             0          0          0
p_d             0         -1.25         0          0          0
p_d+1           0         0             -0.2523    -0.6190    0.0650
p_d+2           0         0             -0.5531    -0.4571    0.0480
p_d+3           0         0             -0.2398    0.0762     -0.0451
p_d+4           0         0             0.0451     0          -

                SC (r = 1)              SC (r = 1)
k1              0                       0.0680
k2              0                       0.0473
k3              0                       0.0327
k4              0                       0.0807
k5              0                       0.0691
k6              0                       0.0551
k7              0                       0.0420
k8              0                       0.0311
k9              0                       0.0226
k10             0                       0.0161
k11             1.0                     0.0114
k12             -                       0.0080
k13             -                       0.0056
k14             -                       1.0

(For process III: rb = 5; Qb c.f. Example 8.7.1.)
The best control performance for this low pass process with large time
delay can therefore be achieved with the state controller, the predic-
tor controller or the parameter-optimized controller 3PC-2 (or 3PC-3
with r ~ 1). The predictor controller leads to the smallest, the 3PC-2
to the largest and the state controller to average changes in the mani-
pulated variable. A comparison of the control performance shows that
the open loop transient response to set point changes can hardly be
changed compared with the transient response of the process itself if
very large variations of the input have to be avoided. For larger chan-
ges of the manipulated variable, as for deadbeat controllers, one can
reach a smaller settling time. However, this leads to a higher sensiti-
vity to dead time. Therefore deadbeat controllers cannot be recommended
in general for processes with large dead time. As the predictor control-
ler can be applied only to asymptotically stable processes, state con-
trollers with observer and parameter optimized controllers with PID- or
PI-behaviour are preferred for low-pass processes with large dead time.
10. Control of Variable Processes with
Constant Controllers
The preceding controller design methods assumed that the process model is exactly known. However, this is never the case in practice. Both in theoretical modelling and in experimental identification one must always take into account both the small and often the large differences between the derived process model and the real process behaviour. If, for simplicity, it is assumed that the structure and the order of the process model are chosen exactly, then these differences are manifested as parameter errors. Moreover, during most cases of normal operation, changes of process behaviour arise, for example through changes of the working point (the load) or changes of the energy, mass or momentum storages or flows. When designing controllers we must therefore assume:
- the assumed process model is inexact.
Gw(Theta_n, z) = y(z)/w(z) = GR(z)GP(Theta_n, z) / [1 + GR(z)GP(Theta_n, z)]   (10.1-1)
The feedforward control with the same input/output behaviour has the transfer function Gw(Theta_n, z).   (10.1-3), (10.1-4)
10.1 On the Sensitivity of Closed-loop Systems
it follows that:

dy(z)/dTheta_n |_R = R(Theta_n, z) dy(z)/dTheta_n |_S   (10.1-6)

with

R(Theta_n, z) = 1 / [1 + GR(z)GP(Theta_n, z)].   (10.1-7)
The same equation gives both the ratio of the parameter sensitivity of the feedback and the feedforward control, and the ratio of the influences of the disturbance n(k) on the output variable y(k):

y(z)/n(z) |_R = R(z) y(z)/n(z) |_S.   (10.1-8)
From Eq. (10.1-3) and Eq. (10.1-1) it further follows for the feedback control that

dGw(Theta_n, z)/Gw(Theta_n, z) = R(Theta_n, z) dGP(Theta_n, z)/GP(Theta_n, z),   (10.1-9)

i.e. the sensitivity function of the closed loop is identical to the dynamic control factor R(Theta_n, z).   (10.1-10), (10.1-11)

For the state controller

u_R(k) = -k^T x(k)   (10.1-12)

a corresponding factor R'(z) is obtained.   (10.1-14)
In [10.8], page 132, it is shown that Eq. (10.1-6) gives the parameter sensitivity of the open loop output variable y(k) = c^T x(k) of the state controller, but with R'(z) instead of R(z). Optimal state controllers for continuous signals always have smaller parameter sensitivities at all frequencies than feedforward controllers ([10.2], [8.4] page 314, and [10.8] page 126). State controllers with observers and state controllers for discrete-time signals do not obey this rule ([8.4], pages 419 and 520).
|dGw(z)| = |R(z)|^2 |GR(z)| |dGP(z)|   (10.1-15)

The influence of process changes |dGP| on |dGw| is thus amplified by |R|^2 |GR|. Compare the corresponding relation for the signals.   (10.1-17)

The amplification of |dGP| by |R|^2 |GR| means that changes of |dGP| have a great influence in frequency ranges II and III of the dynamic control factor, as shown in Fig. 11.4.1. For very low frequencies and controllers with integral action |R|^2 |GR| ~ |R|. This means an influence such as that for the control performance given by Eq. (10.1-16).
An insensitive control can be obtained in general by making IR(z) I as
small as possible, particularly for the higher frequencies of range I
and for the ranges II and III if disturbances arise in these ranges.
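The amplification relation |dGw| = |R|^2 |GR| |dGP| can be verified numerically for a small process change. The first-order process and the PI-type controller below are arbitrary illustrative choices, not the processes investigated in this book:

```python
import numpy as np

def Gp(z, a=0.9):            # illustrative first-order process 0.1/(z - a)
    return 0.1 / (z - a)

def GR(z):                   # illustrative PI-type controller (1.5z - 1.2)/(z - 1)
    return (1.5 * z - 1.2) / (z - 1)

def Gw(z, a):                # closed-loop transfer function
    L = GR(z) * Gp(z, a)
    return L / (1 + L)

z = np.exp(1j * 0.3)         # evaluation point on the unit circle
R = 1 / (1 + GR(z) * Gp(z))  # dynamic control factor, Eq. (10.1-7)

dGp = Gp(z, 0.905) - Gp(z, 0.9)          # small change of the process
dGw_exact = Gw(z, 0.905) - Gw(z, 0.9)    # exact change of Gw
dGw_approx = R**2 * GR(z) * dGp          # first-order relation, Eq. (10.1-15)
print(abs(dGw_exact), abs(dGw_approx))
```

For this small pole shift the first-order approximation agrees with the exact change to about one percent.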
From Figure 11.4.2 and Table 11.4.2 it follows that for feedback controls insensitive to low-frequency disturbances the weight r on the manipulated variable must be small, leading to a strong control action. Disturbance signal components n(k) in the vicinity of the resonant frequency, however, require a decrease of the resonance peak and therefore a larger r, or a weaker control action. This case again shows that steps towards an insensitive control depend on the disturbance signal spectrum. If |R(z)|^2 is considered, Figures 11.4.2 and 11.4.3 show that high sensitivity to process changes arises with the following controllers. Range I: 2PC-2. Range II: 2PC-2, DB(v) and SC. Small sensitivities are obtained for range I: SC, and for range II: DB(v+1). Note, however, that parameter-optimized and deadbeat controllers have been designed for step changes of the set point, i.e. for a small excitation in ranges II and III. For step changes of w(k) these results essentially agree with the sensitivity investigations of section 11.3, c).
Until now, only some sensitivity measures have been considered. Other common parameter sensitivity measures, given a nominal parameter vector Theta_n, are
The sensitivity of the output variable follows from the state variable sensitivity

sigma_y = d(c^T x)/dTheta = (dx/dTheta)^T c.   (10.1-21)

To take the parameter sensitivity into account in the design, the combined criterion

I_n + f(sigma)   (10.1-22)

has to be minimized. If the parameter sensitivity of the control performance criterion   (10.1-25)
0 <= eps_i <= 1   (10.2-3)

(10.2-4)

eps_i weights the criteria at the individual operating points. With continuous signals it was proved that there are constant controllers giving stable behaviour at all N operating points. A representative process with parameter matrices A, B and C is defined, for which a controller is calculated that is optimal on average according to Eq. (10.2-3). This requires the solution of N matrix Riccati equations. More recently this problem has been treated as the design of "robust" controllers.
11. Comparison of Different Controllers
for Deterministic Disturbances
At the end of part B the various design methods and the resulting con-
trollers or control algorithms for linear processes with and without
dead time are compared. Section 11.1 considers the controller struc-
tures and in particular the resulting poles and zeros of the closed
loop. Then the control performance for two test processes is compared
quantitatively for different controllers in sections 11.2 and 11.3. The
dynamic control factors for different controllers are compared in sec-
tion 11.4. Finally section 11.5 draws conclusions as to the application
of the various control algorithms.
For the process

GP(z) = B(z^-1) z^-d / A(z^-1)   (11.1-2)

and the controller Eq. (11.1-1) the closed-loop transfer function for setpoint changes

Gw(z) = y(z)/w(z) = Q(z^-1)B(z^-1)z^-d / [P(z^-1)A(z^-1) + Q(z^-1)B(z^-1)z^-d]   (11.1-3)
Gn(z) = y(z)/n(z) = P(z^-1)A(z^-1) / [P(z^-1)A(z^-1) + Q(z^-1)B(z^-1)z^-d]   (11.1-4)

and, for a disturbance v acting at the process input,

Gu(z) = P(z^-1)B(z^-1)z^-d / [P(z^-1)A(z^-1) + Q(z^-1)B(z^-1)z^-d]   (11.1-5)
are obtained. In general form these transfer functions can be written as ratios of polynomials in z (11.1-6), with the controller

GR(z) = Q(z)/P(z)   (11.1-8)

so that the closed-loop transfer functions

G(z) = B(z)/A(z)   (11.1-10), (11.1-11)

follow from
11.1 Comparison of Controller Structures: Poles and Zeros
the characteristic equation   (11.1-15)

(1 + p1 z^-1 + ... + p_mu z^-mu)(1 + a1 z^-1 + ... + a_m z^-m)
+ (q0 + q1 z^-1 + ... + q_nu z^-nu)(b1 z^-1 + ... + b_m z^-m) z^-d = 0.   (11.1-16)
To avoid steady-state offsets, Gw(1) has to be set to one. From Eq. (11.1-3) it follows that P(1)A(1) = 0, and this is generally fulfilled for

sum_{i=1}^{mu} p_i = -1.   (11.1-17)

mu + nu + 1 = l + 1.   (11.1-18)

a) mu >= nu + d -> l = m + mu. Eq. (11.1-18) gives nu = m. Hence mu >= m + d.
b) mu < nu + d -> l = m + d + nu. Eq. (11.1-18) gives mu = m + d. Hence nu >= m.

nu = m and mu = m + d   (11.1-19)
Comparison of the coefficients of the characteristic equation (11.1-16) with those of a prescribed characteristic polynomial with coefficients alpha_1, ..., alpha_(2m+d) leads to a system of linear equations for the controller parameters

R theta_R = alpha   (11.1-20)

where R is a matrix built from the process coefficients a_i and b_i, theta_R^T = [p1 ... p_(m+d)  q0 ... q_m], and alpha contains the coefficients alpha_i - a_i. The controller parameters follow from

theta_R = R^-1 alpha   (11.1-21)

if det R != 0.
The zeros of the closed-loop transfer functions are influenced as well. For nu = m and mu = m + d the zeros of the process appear in Gw(z) and Gu(z), and the process poles appear in the zeros of Gn(z). This means that the process itself dictates some of the zeros of the closed-loop transfer functions.
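For a first-order process (m = 1, d = 0) the coefficient comparison can be carried out explicitly. With p1 = -1 fixed by Eq. (11.1-17), the remaining parameters q0, q1 follow from a linear system; the numbers below are illustrative:

```python
import numpy as np

a1, b1 = -0.8, 0.5                        # illustrative process A = 1 + a1 z^-1, B = b1 z^-1
alpha = np.polymul([1, -0.6], [1, -0.3])  # desired characteristic polynomial (poles 0.6, 0.3)
p1 = -1.0                                 # from sum p_i = -1 (integral action), Eq. (11.1-17)

# coefficient comparison of (1 + p1 z^-1)(1 + a1 z^-1) + (q0 + q1 z^-1) b1 z^-1
# with 1 + alpha1 z^-1 + alpha2 z^-2 gives two linear equations for q0, q1:
M = np.array([[b1, 0.0], [0.0, b1]])
rhs = np.array([alpha[1] - a1 - p1, alpha[2] - p1 * a1])
q0, q1 = np.linalg.solve(M, rhs)

# verification: the closed-loop characteristic polynomial equals alpha
char = np.polymul([1, p1], [1, a1]) + np.concatenate(([0.0], b1 * np.array([q0, q1])))
print(q0, q1, char)
```

The resulting controller (q0 + q1 z^-1)/(1 - z^-1) places the closed-loop poles at the prescribed locations while the process zero remains untouched.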
For the deadbeat controller the characteristic equation

A(z) = A0(z) z^d B(z) + [A(z) z^d B0(z) - A0(z) z^d B(z)] Gw(z) = 0   (11.1-22)

(11.1-23)

follows (see Eq. (7.1-22)) after expansion with z^(m+d) in the numerator and the denominator. Here A(z)z^d and B(z) are polynomials of the process model and A0(z), B0(z) those of the design model. The characteristic equation becomes   (11.1-25)

A(z) ~ z^(m+d) A0(z) z^d = 0   (11.1-26)

and the numerator polynomials

Bn(z) = [z^(m+d) - q0 B0(z)] A0(z) z^d   (11.1-27)

Bu(z) = [z^(m+d) - q0 B0(z)] B0(z)

(11.1-30), (11.1-31)

with A0(z) = A(z). Small changes dB(z) do not seriously affect stability. The zeros of the process can therefore lie outside the unit circle; they are not cancelled by the deadbeat controller.
GR(z) = Q(z^-1)/P(z^-1) = A(z^-1) / [KP A(z^-1) - B(z^-1) z^-d]

or

GR(z) = Q(z)/P(z) = A(z) z^d / [KP A(z) z^d - B(z)].   (11.1-32)

The characteristic equation becomes

A(z) = KP A(z) z^d A0(z) z^d - A0(z) B(z) z^d + A(z) B0(z) z^d = 0.   (11.1-33)

(11.1-34), (11.1-35)

For Gw(z) or Gn(z) the poles A(z)z^d or A0(z)z^d are always cancelled by the corresponding zeros. Closed loops with a predictor controller are therefore only stable for asymptotically stable processes, as Eq. (11.1-34) shows. The process zeros may lie outside the unit circle. Only for the closed-loop response to setpoint changes are the zeros of the closed loop dictated by the process zeros. If the poles of the process are sufficiently far within the unit circle, small differences dB(z) = B(z) - B0(z) do not influence the stability, Eq. (11.1-33).
Gv(z) = y(z)/v(z) = c^T [z I - A + b k^T]^-1 f = c^T adj[z I - A + b k^T] f / det[z I - A + b k^T] = B(z)/A(z).   (11.1-39)
do not change compared with the process (see Eqs. (11.1-38) and (11.1-39)). This holds because the parameters of the numerator polynomial B(z) = B(z) are contained either in c^T or in f, depending on the assumed canonical state representation. If, however, the disturbance influences one state variable, the zeros of the transfer function Gv(z) are also influenced by the state controller.
Example:
The process order m is 2, and the controllable canonical form is chosen. Then

B(z) = [b2  b1] [ z + (a1+k1)    1 ]
                [ -(a2+k2)       z ] f

and with f^T = [0  1]

B(z) = b1 z + b2 = B(z).

The choice of the poles of the control law also determines the zeros in the last case.   []
The control algorithms are investigated for the single input/single out-
put control systems of Figure 5.2.1. The comparison is performed parti-
cularly with regard to the computer aided design of algorithms by the
process computer itself [8.5]. As process computers often have to per-
form other tasks the computational time for the synthesis should be
small. Furthermore the required storage should not be too large consi-
dering the capacity of smaller process computers and micro-computers.
A further criterion is the computational time of the algorithms between
two samples. Not only the computational burden of the synthesis but al-
so that required during operation have to be considered in connection
with characteristic values of the control problem such as, for example,
the control performance, required manipulation power, required manipu-
lation range, sensitivity to inexact process models and to parameter
changes of the process.
Se = sqrt( (1/(M+1)) sum_{k=0}^{M} e^2(k) )   (11.2-1)

d) Overshoot

ym = ymax(k) - w(k)   (11.2-3)

(11.2-4)

This value is described at the end of section 11.3.
For judging the computational effort between two samples the following
measures will be used:
variable u(0) for the deadbeat control algorithm DB(v+1) was set to give u(0) = u(1) in order to minimize the manipulated variable changes. The characteristic values of all algorithms are summarized in Table 11.3.1.
For step changes of the setpoint (the case for which the design has
been made) the most important results are summarized below:
3PC-3 (PID-behaviour)
Choosing r = 0 results in a large u(O) and a relatively weakly damped
behaviour. The rms control error Se is relatively large. The overshoot
ym and the settling time k 3 have average values.
2PC-2 (PI-behaviour)
In comparison to 3PC-3 this controller gives a somewhat larger Se together with a smaller Su, a much smaller u(0), a larger ym, a larger k3 and a somewhat smaller computational effort tE.
Control algorithm parameters (fragment of Table 11.3.1):

                PROCESS II                          PROCESS III
                2PC-1    2PC-2    3PC-2    3PC-3    2PC-1    2PC-2    3PC-2    3PC-3
q4              -        -        0.571    -
p1              -0.595   -        1.436    0        0
p4              -        -        0.244    -        0.076
p5              -        -        -0.046   -
k4              -        -        0.532    0.263
k5              -        -        1.532    1.263
[Figure: transient responses y(k) and u(k) for PROCESS II with the control algorithms 3PC-3, 3PC-2, SC(1), SC(2), DB(v), DB(v+1) after a step change in the set point]

[Figure: transient responses y(k) and u(k) for PROCESS III with the control algorithms 3PC-2, 3PC-3, SC(1), SC(2), DB(v), DB(v+1) after a step change in the set point]
[Figures: characteristic values Se, Su, Seu, u(0), ym, k3 and the error measures e1, e2 of the control algorithms 2PC-1, 2PC-2, 3PC-2, 3PC-3, DB(v), DB(v+1), SC(1), SC(2) for processes II and III, for r = 0.1 and r = 0.25 and for step changes of w and v]
For process III (and for all control algorithms) this leads to a decrease of the mean squared control error Se. Too much manipulation effort or too large u(0) leads to an inferior control performance. The smallest values of Se for still small Su, i.e. relatively good Seu, result for r = 0.1 and 0.25 with the controllers 2PC-2, 3PC-2, 3PC-3, SC-1 and SC-2. Both a small undershoot and a small overshoot can be obtained using 3PC-2.
Seu = (Seu)w + (Seu)v.
signal ratios eta = 0.1 and 0.2 and for three different identification times [3.13]. The synthesis of the control algorithms was then performed with these identified models, and the resulting controlled variable y(k) using the inexact process models and the resulting controlled variable y0(k) using the exact process model (the real process) were calculated. The error caused by the inexactly identified process model,

delta_y = [ sum_{k=0}^{N} dy^2(k) / sum_{k=0}^{N} y0^2(k) ]^(1/2),   (11.3-2)
can be determined. y0(k) is the controlled variable for the exact model with its matched control algorithm. The error of the controlled variable delta_y is considered as a function of the error of the impulse response delta_g of the process model,

delta_g = [ sum dg^2(k) / sum g^2(k) ]^(1/2).   (11.3-3)
From both errors the sensitivity measure

eps_1 = delta_y / delta_g   (11.3-5)

can be determined. The smaller eps_1, the smaller is the influence of the inexact model on the behaviour of the closed loop. Figure 11.3.3 c) shows that the sensitivity of process II is generally larger than that of process III. The smallest sensitivity results for both processes with the control algorithms 3PC-2 and SC-2, the largest sensitivity with 2PC-1. A large sensitivity is also shown by the deadbeat controller DB(v) for process II.
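The impulse-response error measure of Eq. (11.3-3) is easy to compute; the exact and the inexactly identified first-order models below are hypothetical examples, not the identification results reported here:

```python
import numpy as np

k = np.arange(60)
g_exact = 0.5 * 0.90 ** k   # impulse response of the exact process model
g_ident = 0.5 * 0.88 ** k   # impulse response of an inexactly identified model

# error of the impulse response, Eq. (11.3-3)
delta_g = np.sqrt(np.sum((g_ident - g_exact) ** 2) / np.sum(g_exact ** 2))
print(delta_g)
```

Computing delta_y per Eq. (11.3-2) additionally requires simulating the closed loop with both models, as described in the text.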
Synthesis effort depends on the storage and the computation time requi-
red for the design of the control algorithms. Both depend on the soft-
ware system (including mathematical routines) of the digital computer
used. The values given in Table 11.3.2 are for a process computer Hew-
lett-Packard HP 2100 A with 24K core memory, an external disk storage
and hardware floating point arithmetic. The synthesis computation time
is particularly small for deadbeat controllers, medium for state controllers and greatest for parameter-optimized controllers. Note that the parameter optimization used the Hooke-Jeeves method, which requires relatively little storage; the stopping rule was |dq| = 0.01. The storage required for synthesis is, like the synthesis computation time, smallest for the deadbeat controller, medium for the state controller and greatest for the parameter-optimized controller.
                                3PC-3     2PC-2     SC      DB(v)    DB(v+1)
Computation time
between samples tE              6         6         34      14       18
Synthesis computation
time [s]                        20...30   40...60   1       0.004    0.004
Synthesis storage
[words]                         1881      1881      1996    342      342
11.3 Comparison of the Performance of the Control Algorithms
2nd group: 3PC-3, SC-1   u(0) = 3.81 ... 4.56
3rd group: DB(v), DB(v+1)   u(0) = 9.52.
[Figure 11.3.4 Relationship between the control performance Se and the manipulation effort Su for the investigated control algorithms (2PC-1, 2PC-2, 3PC-2, 3PC-3, DB(v), DB(v+1), SC(1), SC(2)) and processes II and III; the unstable region is marked]
2nd group: 3PC-3, SC-1   u(0) = 3.44 ... 3.49.
R(z) = y(z)/n(z) = 1 / [1 + GR(z)GP(z)]   (11.4-1)

y(z) = R(z) n(z)   (11.4-2)

y(z) = R(z) GPv(z) v(z) = R(z) n(z).   (11.4-3)

Eq. (11.4-3) includes Eq. (11.4-2) with v(z) = n(z), GPv(z) = 1, or v(z) = w(z) and GPv(z) = GR(z)GP(z). For deterministic disturbances the amplitude density spectra are evaluated with z = e^(i omega T0) and 0 <= omega <= omega_s, where omega_s is the Shannon frequency omega_s = pi/T0 (see section 27.1). The spectrum of the controlled variable is therefore

S_yy(z) = |R(z)|^2 S_nn(z)   (11.4-6)

where

S_nn(z) = |GPv(z)|^2 S_vv(z).   (11.4-7)
The magnitude of the dynamic control factor |R(z)| or its squared value |R(z)|^2 indicates how much the amplitude or power spectra are reduced by the control loop. Therefore in the following the dependence of |R(z)| on the frequency omega in the range 0 <= omega <= omega_s is shown for different controllers. The effect of different weightings of the manipulated variable is also shown.
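The frequency dependence of |R(z)| is easy to evaluate numerically. The first-order process and PI-type controller below are illustrative choices, not the test process of Eq. (11.4-9):

```python
import numpy as np

T0 = 1.0
w = np.linspace(0.01, np.pi / T0, 500)   # frequency grid up to the Shannon frequency
z = np.exp(1j * w * T0)

Gp = 0.1 / (z - 0.9)                     # illustrative first-order process
GR = (1.5 * z - 1.2) / (z - 1)           # illustrative PI-type controller
R = 1 / (1 + GR * Gp)                    # dynamic control factor, Eq. (11.4-1)
mag = np.abs(R)

print(mag[0], mag.max(), mag[-1])
```

With integral action |R| tends to zero for low frequencies (range I), exceeds one around the resonance (range II), and approaches one for high frequencies (range III).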
The dynamic control factor R(z) = y(z)/n(z) can be derived simply, also for state controllers with observers, in the following way. A low-pass process with several small time constants,

y(s)/u(s) = K / [(1+4.2s)(1+1s)(1+0.9s)(1+0.6s)(1+0.55s)^2],   (11.4-9)

was simulated on an analog computer.
Based on this model various control algorithms were then designed with
the aid of the same process computer for step changes in the set point
(see chapter 29). IR(z) I was then determined experimentally through mea-
surement of the frequency response of the closed loop which consisted
of the analog computer and the process computer, leading to the results
described below. The dynamic control factor can, as is well-known, be
divided into three main regions [5.14] (c.f. Fig. 11.4.1):
[Figure 11.4.1 Magnitude |R| of the dynamic control factor over the frequency range 0 ... omega_s, divided into the ranges I, II and III]
Fig. 11.4.3 shows the dynamic control factor for the different controllers. The weight on the manipulated variable was chosen such that after a step change in the set point the manipulated variable u(0) is about the same, i.e. u(0) ~ 1.93 ... 2.41. |R(z)| does not differ very much for 3PC-3, DB(v+1) and SC. Only 2PC-2 shows a significantly higher resonance peak at lower frequencies. SC is best in region I, DB(v+1) in region II, and in region III SC is best again.
The dynamic control factor is not only useful for evaluating control performance as a function of the disturbance signal spectrum. Eq. (10.1-10) shows that the dynamic control factor is identical to the sensitivity function S(Theta_n, z) of the closed loop, which determines the effect of changes in the process behaviour. A small |R(z)| not only means a good control performance but also a small sensitivity (see chapter 10).
Figure 11.4.2 Magnitude of the dynamic control factor for different controllers and different weightings of the manipulated variable or different u(0): a) parameter-optimized controller 2PC-2 (PI); b) parameter-optimized controller 3PC-3 (PID); d) state controller with observer
11.4 Comparison of the Dynamic Control Factor 237
[Figure 11.4.3 Magnitude of the dynamic control factor |R| for the different controllers]

Controller    2PC-2              3PC-3
parameter     r = 0    r = 0.1   r = 0     r = 0.1
cD            -        -         1.1475    0.7072
omega_res     0.35     0.33      0.55      0.60

Controller    DB(v)    DB(v+1)   SC (r = 0.03)   SC (r = 0.05)
omega_res     0.73     0.58      -               -

|R(z)| becomes:
State-control algorithms ...
Deadbeat-control algorithms ...
xbar = E{x(k)} = lim_{N->inf} (1/N) sum_{k=1}^{N} x(k)   (12.2-1)

E{x(k)x(k+tau)} = lim_{N->inf} (1/N) sum_{k=1}^{N} x(k)x(k+tau)   (12.2-2)

E{[x(k) - xbar][x(k+tau) - xbar]}   (12.2-3)
12.2 Mathematical Models of Stochastic Signal Processes
sigma_x^2 = lim_{N->inf} (1/N) sum_{k=1}^{N} [x(k) - xbar]^2.   (12.2-4)

phi_xy(tau) = E{x(k)y(k+tau)} = lim_{N->inf} (1/N) sum_{k=1}^{N} x(k)y(k+tau)   (12.2-5)

R_xy(tau) = cov[x, y, tau] = E{[x(k) - xbar][y(k+tau) - ybar]} = phi_xy(tau) - xbar ybar   (12.2-6)

cov[x, y, tau] = R_xy(tau) = 0.   (12.2-7)

delta(tau) = { 1 for tau = 0;  0 for |tau| != 0 }   (12.2-10)

x^T(k) = [x1(k)  x2(k)  ...  xn(k)]   (12.2-11)

Example 12.2.1: x1(k) and x2(k) are two different white random signals. Then their covariance matrix is

cov[x, tau=0] = [ sigma_x1^2   0         ]
                [ 0            sigma_x2^2 ],   cov[x, tau != 0] = 0.   []
Covariance or correlation functions are nonparametric models of sto-
chastic signals; the next two sections describe parametric models of
stochastic signal processes.
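The nonparametric estimates (12.2-1) to (12.2-4) can be formed directly from a signal record; here for a first-order autoregressive signal x(k+1) = a x(k) + v(k) with illustrative a = 0.5, for which the theoretical variance is 1/(1 - a^2) with unit-variance white noise:

```python
import numpy as np

a = 0.5
rng = np.random.default_rng(0)
N = 200_000
v = rng.standard_normal(N)     # white noise v(k)

x = np.zeros(N)
for k in range(N - 1):
    x[k + 1] = a * x[k] + v[k]

xbar = x.mean()                                   # mean value, Eq. (12.2-1)
var = np.mean((x - xbar) ** 2)                    # variance, Eq. (12.2-4)
R1 = np.mean((x[:-1] - xbar) * (x[1:] - xbar))    # autocovariance at tau = 1, Eq. (12.2-3)
print(xbar, var, R1)
```

For this signal the theoretical values are xbar = 0, sigma_x^2 = 1/(1 - a^2) ~ 1.333 and R(1) = a sigma_x^2 ~ 0.667; the estimates converge as N grows.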
The conditional probability for the event of value x(k) depends only on the last value x(k-1) and not on any other past value. Therefore a future value will only be influenced by the current value. This definition of a Markov signal process corresponds to a first-order scalar difference equation

x(k+1) = a1 x(k) + v(k)

for which the future value x(k+1) depends only on the current values of both x(k) and v(k). If v(k) is a statistically independent signal (white noise), then this difference equation generates a Markov process. However, if the scalar difference equation has an order greater than one, for example satisfying
x(k) = x1(k)

[ x1(k+1) ]   [ 0    1  ] [ x1(k) ]   [ 0 ]
[ x2(k+1) ] = [ a2   a1 ] [ x2(k) ] + [ 1 ] v(k)   (12.2-18)
depends only on the state x(k) and on v(k), i.e. only on current values. x(k+1) is then a first-order vector Markov signal process. Stochastic signals which depend on finitely many past values can always be described by vector Markov processes by transforming into a first-order vector difference equation. Therefore a wide class of stochastic signals can be represented by vector Markov signal processes in a parametric model, as shown in Figure 12.2.1. If the parameters of A and F are constant and vbar = 0, then the signal is stationary. Nonstationary Markov signals result from A(k), F(k) or vbar(k) which vary.
[Figure 12.2.1 Parametric model of a vector Markov signal process driven by v(k)]

E{v(k)} = vbar

cov[v(k), tau] = { V for tau = 0;  0 for tau != 0 }   (12.2-22)

E{x(0)} = xbar(0),   cov[x(0), tau=0] = X(0),   E{[x(k) - xbar][v(k) - vbar]^T} = 0.

Eq. (12.2-24) is now multiplied with its transpose from the right and the expectation is taken. Then the covariance matrix obeys
If the roots of

det[z I - A] = 0

are within the unit circle of the z-plane, and if A and F are constant matrices, then for k -> inf a stationary signal process with covariance matrix X is obtained, which can be calculated recursively using Eq. (12.2-25), giving

X = A X A^T + F V F^T.   (12.2-26)

With

xbar^T Q xbar = tr[Q xbar xbar^T],   (12.2-27)

where the trace operator tr produces the sum of the diagonal elements, it follows for xbar = 0 that

E{x^T(k) Q x(k)} = E{tr[Q x(k) x^T(k)]} = tr[Q X].   (12.2-28)

If xbar != 0, accordingly

E{x^T(k) Q x(k)} = xbar^T Q xbar + tr[Q X].   (12.2-29)
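Eq. (12.2-26) can be evaluated by simple fixed-point iteration; the matrices below are arbitrary illustrative values whose eigenvalues lie inside the unit circle:

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])      # stable: eigenvalues 0.5 and 0.8
F = np.array([[1.0],
              [1.0]])
V = np.array([[1.0]])           # covariance of the white noise v(k)

X = np.zeros((2, 2))
for _ in range(200):            # iterate X = A X A^T + F V F^T, Eq. (12.2-26)
    X = A @ X @ A.T + F @ V @ F.T

residual = np.linalg.norm(X - (A @ X @ A.T + F @ V @ F.T))
print(X, residual)
```

Because the spectral radius of A is below one, the iteration contracts and converges to the stationary covariance matrix of the Markov signal process.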
n(z)/v(z) = lambda D(z^-1)/C(z^-1)   (12.2-31)

(12.2-35)   p = 1, 2, ...
l: [e 2 (k) + rt~u 2 (k) ] ( 13-1)
k=O
In the following some simulation results are presented which show how
the optimized controller parameters change compared with parameters
obtained for step changes of the disturbances and for test processes
II and III. A three-parameter-control-algorithm
-1 -2
qo+q1z +q2z
( 13-2)
1-z- 1
E{v(k)} = 0 ( 1 3-3)
0.1. ( 13-4)
Then we have n(z) = GP(z)v(z). For this disturbance the controller parameters were determined by minimizing the control performance criterion Eq. (13-1) for M = 240, r = 0, using the Fletcher-Powell method. Table 13.1 gives the resulting controller parameters, the control performance and the manipulation effort.

Table 13.1 Controller parameters, control performance and manipulation effort for stochastic disturbances v(k) (processes II and III)
Compared with the control algorithm 3PC-3 optimized for step changes, the parameters q0 and K for stochastic disturbances decrease for both processes and cD increases, with the exception of process II, T0 = 4 sec. The integration factor cI tends towards zero in all cases, as there is no constant disturbance, i.e. E{v(k)} = 0. The controller action in most cases becomes weaker, as the manipulation effort Su decreases.
Therefore the control performance is improved as shown by the values
of the stochastic control factor K. The inferior control performance
and the increased manipulation effort of the controllers optimized to
step changes indicates that the stochastic disturbances excite the re-
sonance range of the control loop. As the stochastic disturbance n(k)
has a relatively large spectral density for higher frequencies, the K-
values of the stochastic optimized control loops are only slightly be-
low one. The improvement in the effective value of the output due to
the controller is therefore small as compared with the process without
control; this is especially true for process II. For the smaller sample
time T0 = 4 sec, much better control performance is produced for pro-
cess III than with T0 = 8 sec. For process II the control performance
in both cases is about the same. For the controller 3PC-2 the initial input u(0) = q0 was prescribed and only the two parameters q1 and q2 were optimized. For process II q0 was chosen
too large. For process II the control performance is therefore worse
than that of the 3PC-3 controller. In the case of process III for both
sample times T0 = 4 sec and T0 = 8 sec, changes of q 0 compared with
3PC-3 have little effect on performance.
(13-6)
(13-7)
GP(z) = y(z)/u(z) = B(z^-1)/A(z^-1)   (14.1-1)

E{v(k)} = vbar = 0
[Figure: block diagram of the control loop. The noise filter D(z^-1)/C(z^-1) generates the disturbance n from the white noise lambda v; the process B(z^-1)/A(z^-1) is controlled by the controller Q(z^-1)/P(z^-1)]
Now w(k) = 0, i.e. e(k) = -y(k), is assumed. The problem is to design a controller which minimizes the criterion

I = E{y^2(k+1) + r u^2(k)}.   (14.1-4)

The controller must generate an input u(k) such that the errors induced by the noise process {v(k)} are minimized according to Eq. (14.1-4). In the performance function I, y(k+1) is taken and not y(k), as u(k) can only influence the controlled variable at time (k+1) because of the assumption b0 = 0. Therefore y(k+1) must be predicted on the basis of the known signal values y(k), y(k-1), ... and u(k), u(k-1), ... Using Eq. (14.1-1) and Eq. (14.1-2) a prediction of y(k+1) is obtained from

z y(z) = (B(z^-1)/A(z^-1)) z u(z) + lambda (D(z^-1)/C(z^-1)) z v(z)   (14.1-5)

and
14.1 Minimum Variance Controllers for Processes without Deadtime
A(z^-1)C(z^-1) z y(z) = B(z^-1)C(z^-1) z u(z) + lambda A(z^-1)D(z^-1) z v(z)   (14.1-6)

or

(1 + a1 z^-1 + ... + am z^-m)(1 + c1 z^-1 + ... + cm z^-m) z y(z)
 = (b1 z^-1 + ... + bm z^-m)(1 + c1 z^-1 + ... + cm z^-m) z u(z)
 + lambda (1 + a1 z^-1 + ... + am z^-m)(1 + d1 z^-1 + ... + dm z^-m) z v(z).   (14.1-7)
After multiplying and transforming back into the time domain we obtain
At time instant k, all signal values are known with the exception of
u(k) and v(k+1). Therefore only the expectation of v(k+1) must be taken. As in addition v(k+1) is independent of all other signal values,

... + λ[(a_1 + d_1)v(k) + ... + a_m d_m v(k-2m+1)] E{v(k+1)} + r u^2(k).   (14.1-10)

∂I(k+1)/∂u(k) = 2[-(a_1 + c_1)y(k) - ... - a_m c_m y(k-2m+1) + ...] + 2r u(k) = 0.   (14.1-11)
256 14. Minimum Variance Controllers for Stochastic Disturbances
G_RMV1(z) = u(z)/y(z) = - A(z^-1)[D(z^-1) - C(z^-1)]z / {zB(z^-1)C(z^-1) + (r/b_1)A(z^-1)D(z^-1)}   (14.1-13)

(Abbreviation: MV1)
This controller contains the process model with polynomials A(z^-1) and B(z^-1) and the noise model with polynomials C(z^-1) and D(z^-1). With r = 0, the simple form of the minimum variance controller is obtained:
G_RMV2(z) = - A(z^-1)[D(z^-1) - C(z^-1)]z / [zB(z^-1)C(z^-1)]
          = - [zA(z^-1)/(zB(z^-1))] [D(z^-1)/C(z^-1) - 1].   (14.1-14)

(Abbreviation: MV2)
With C(z^-1) = A(z^-1) one obtains

G_RMV3(z) = - [D(z^-1) - A(z^-1)]z / {zB(z^-1) + (r/b_1)D(z^-1)}   (14.1-15)

(Abbreviation: MV3)

and for r = 0

G_RMV4(z) = - [D(z^-1) - A(z^-1)]z / [zB(z^-1)]   (14.1-16)

(Abbreviation: MV4)
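As an illustration, the MV4 control law can be rewritten as a difference equation and evaluated recursively. The following sketch assumes a process of order m with coefficient lists a, b, d; the helper name mv4_control and the numeric values in the usage are hypothetical, not from the book.

```python
# Sketch of the MV4 law u(z)/y(z) = -[D(z^-1)-A(z^-1)]z / (zB(z^-1)),
# i.e. b_1 u(k) = -sum_i (d_i - a_i) y(k-i+1) - sum_{i>=2} b_i u(k-i+1).
# Coefficient lists a, b, d hold a_1..a_m, b_1..b_m, d_1..d_m.

def mv4_control(a, b, d, y_hist, u_hist):
    """One controller step.
    y_hist = [y(k), y(k-1), ...], u_hist = [u(k-1), u(k-2), ...]."""
    m = len(a)
    s = sum((d[i] - a[i]) * y_hist[i] for i in range(m))
    s += sum(b[i] * u_hist[i - 1] for i in range(1, m))
    return -s / b[0]
```

For D(z^-1) = A(z^-1) (no coloured noise to compensate) and zero input history, the controller output is zero, as expected.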
a) Controller order

        numerator   denominator
  MV1   2m-1        2m
  MV2   2m-1        2m-1
  MV3   m-1         m
  MV4   m-1         m-1
Because of the high order of MV1 and MV2, one should assume C(z^-1) = A(z^-1) for modelling the noise and then prefer MV3 or MV4.

b) Cancellation of poles and zeros

MV1: The poles of the process (A(z^-1) = 0) are cancelled. Therefore the controller should not be applied to processes whose poles are near the unit circle or to unstable processes.

MV2: The poles and zeros of the process (A(z^-1) = 0 and B(z^-1) = 0) are cancelled. Therefore the controller should not be used with processes as for MV1 nor with processes with nonminimum phase behaviour.

MV4: The zeros of the process (B(z^-1) = 0) are cancelled. Therefore this controller should not be used with processes with nonminimum phase behaviour.
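Whether MV2 or MV4 may be applied can be checked numerically from the zeros of B(z^-1). A minimal sketch with an illustrative coefficient set (not from the book):

```python
# Check for nonminimum phase behaviour: the zeros of
# B(z^-1) = b_1 z^-1 + b_2 z^-2 are the roots of b_1 z + b_2.
import numpy as np

b = [1.0, -1.2]                 # illustrative: B(z^-1) = z^-1 - 1.2 z^-2
zeros = np.roots(b)             # zero at z = 1.2, outside the unit circle
nonminimum_phase = max(abs(zeros)) >= 1.0
```

If nonminimum_phase is true, MV2 and MV4 would cancel an unstable zero and must not be used.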
c) Stability
The dynamic control factor of the closed loop is, for the controller MV1,

R(z) = [zB(z^-1)C(z^-1) + (r/b_1)A(z^-1)D(z^-1)] / {[(r/b_1)A(z^-1) + zB(z^-1)]D(z^-1)}.   (14.1-18)

For r = 0, i.e. for the controller MV2,

R(z) = C(z^-1)/D(z^-1) = 1/G_Pv(z).   (14.1-19)

Therefore the dynamic control factor for r = 0 is the inverse of the
noise filter. It follows that

y(z)/(λ v(z)) = 1.   (14.1-20)
y(z)/(λ v(z)) = 1 + (r/b_1)A(z^-1)[D(z^-1) - C(z^-1)] / {[(r/b_1)A(z^-1) + zB(z^-1)]C(z^-1)}.   (14.1-21)
f) Special case
From Eq. (14.1-13) it follows that the static behaviour of MV1 satisfies

G_RMV1(1) = - A(1)[D(1) - C(1)] / [B(1)C(1) + (r/b_1)A(1)D(1)]
          = - Σa_i[Σd_i - Σc_i] / [Σb_i Σc_i + (r/b_1)Σa_i Σd_i].   (14.1-23)

Here Σ is to be read as the sum from i = 0 to m. If the process G_P(z) has a proportional action behaviour, i.e. Σa_i ≠ 0 and Σb_i ≠ 0, then the controller MV1 in general has a proportional action static behaviour. For constant disturbances, therefore, offsets occur. This is also the case for the minimum variance controllers MV2, MV3 and MV4. To avoid offsets with minimum variance controllers some modifications must be made, and these are discussed in section 14.3.
[Table: transfer functions of the controllers MV1 to MV4 (MV2 for r = 0, MV3 and MV4 for C = A), the cancelled polynomials (A = 0, B = 0) and the conditions C(1) = 0 or A(1) = 0 under which no offset occurs.]
q_0 = (d_1 - c_1) / (b_1 + r/b_1)   (14.1-24)
G_P(z) = y(z)/u(z) = [B(z^-1)/A(z^-1)] z^-d   (14.2-1)

[Figure 14.2.1: process with deadtime, B(z^-1)z^-d/A(z^-1), and noise filter G_Pv(z) = D(z^-1)/C(z^-1)]

z^(d+1) y(z) = [B(z^-1)/A(z^-1)] z u(z) + λ [D(z^-1)/C(z^-1)] z^(d+1) v(z).   (14.2-3)
As the disturbance signals v(k+1), ..., v(k+d+1) are unknown at the time k for which u(k) must be calculated, this part of the disturbance filter is separated as follows.
As can also be seen from Fig. 14.2.1, the disturbance filter is separated into a part F(z^-1) which describes the parts of n(k) which cannot be controlled by u(k), and a part z^-(d+1) L(z^-1)/C(z^-1) describing the part of n(k) in y(k) which can be influenced by u(k):

D(z^-1)/C(z^-1) = F(z^-1) + z^-(d+1) L(z^-1)/C(z^-1).   (14.2-4)

The corresponding polynomials are

F(z^-1) = 1 + f_1 z^-1 + ... + f_d z^-d   (14.2-5)

L(z^-1) = l_0 + l_1 z^-1 + ... + l_(m-1) z^-(m-1).   (14.2-6)

The coefficients follow by equating coefficients in D(z^-1) = F(z^-1)C(z^-1) + z^-(d+1) L(z^-1).   (14.2-7)
14.2 Minimum Variance Controllers for Processes with Deadtime

Example 14.2.1

For d = 1 and m = 3:

f_1 = d_1 - c_1
l_0 = d_2 - c_2 - c_1 f_1
l_1 = d_3 - c_3 - c_2 f_1
l_2 = - c_3 f_1

For d = 2 and m = 3:

f_1 = d_1 - c_1
f_2 = d_2 - c_2 - c_1 f_1
l_0 = d_3 - c_3 - c_1 f_2 - c_2 f_1
l_1 = - c_2 f_2 - c_3 f_1
l_2 = - c_3 f_2
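The coefficients of F(z^-1) and L(z^-1) follow mechanically from the identity D(z^-1) = F(z^-1)C(z^-1) + z^-(d+1)L(z^-1) by polynomial long division. A small sketch (the function name separate is hypothetical):

```python
def separate(dpoly, cpoly, d):
    """Split D/C = F + z^-(d+1) L/C, with deg F = d and deg L = m-1.
    dpoly, cpoly: coefficient lists in powers of z^-1, leading coeff 1."""
    m = len(cpoly) - 1
    n = d + 1 + m
    dd = list(dpoly) + [0.0] * max(0, n + 1 - len(dpoly))
    # f_i from the series expansion of D/C (f_0 = 1)
    f = [1.0]
    for i in range(1, d + 1):
        f.append(dd[i] - sum(cpoly[j] * f[i - j]
                             for j in range(1, min(i, m) + 1)))
    # remainder R = D - F*C; L collects its coefficients from z^-(d+1) on
    r = list(dd)
    for i, fi in enumerate(f):
        for j, cj in enumerate(cpoly):
            r[i + j] -= fi * cj
    l = [r[d + 1 + i] for i in range(m)]
    return f, l
```

For d = 1 and m = 3 this reproduces the relations of Example 14.2.1, e.g. f_1 = d_1 - c_1 and l_2 = -c_3 f_1.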
After multiplying and transforming back into the time-domain, one obtains from Eq. (14.1-7) to Eq. (14.1-10) I(k+1) and, from ∂I(k+1)/∂u(k) = 0 as in Eq. (14.1-12) with

λ v(z) = [C(z^-1)/D(z^-1)] y(z),

the controller

G_RMV1d(z) = u(z)/y(z)
  = - A(z^-1)[D(z^-1) - F(z^-1)C(z^-1)] z^(d+1) / {zB(z^-1)C(z^-1)F(z^-1) + (r/b_1)A(z^-1)D(z^-1)}
  = - A(z^-1)L(z^-1) / {zB(z^-1)C(z^-1)F(z^-1) + (r/b_1)A(z^-1)D(z^-1)}.   (14.2-10)

(In short: MV1-d)
For r = 0

G_RMV2d(z) = - A(z^-1)L(z^-1) / [zB(z^-1)C(z^-1)F(z^-1)]   (14.2-11)

With C(z^-1) = A(z^-1)

G_RMV3d(z) = - L(z^-1) / {zB(z^-1)F(z^-1) + (r/b_1)D(z^-1)}   (14.2-12)

and with r = 0

G_RMV4d(z) = - L(z^-1) / [zB(z^-1)F(z^-1)].
a) Controller order
c) Stability
zB(z)D(z) = 0. (14.2-15)
They are identical with the characteristic equations for the mini-
mum variance controllers without deadtime, and therefore one reaches
the same conclusions concerning stability.
14.3 Minimum Variance Controllers without Offset 265
e) Controlled variable
The larger the deadtime the larger is the variance of the controlled
variable.
u_1(z)/y(z)   (14.3-2)

by the proportional integral action term

G_PI(z) = u(z)/u_1(z) = 1 + α/(z - 1) = [1 - (1 - α)z^-1]/(1 - z^-1).   (14.3-3)

For α → 0 then
is fulfilled if for the controllers MV1 and MV2 D(1) ≠ C(1), and for MV3 and MV4 D(1) ≠ A(1). If these conditions are not satisfied, additional poles at z = 1 can be assumed
so that the variances around the non-zero operating point [w(k); u_w(k)] are minimized, with

u_w(k) = [A(1)/B(1)] w(k) = (1/K_P) w(k)   (14.3-6)

the value of u(k) for y(k) = w(k), the zero-offset case. A derivation
corresponding to section 14.2 then leads to the modified minimum vari-
ance controller [14.2]
u(z) = G_RMV1-d(z) y(z) + (1 + r/(b_1 K_P)) w(z)   (14.3-7)

with G_RMV1-d(z) as in Eq. (14.2-10).
This controller removes offsets arising from variations in the reference variable w(k). Another very simple possibility in connection with closed loop parameter estimation is shown in section 25.3.
B(z^-1) z^-(d-1)   (14.4-1)

With b_1 z^-1 and the deadtime d-1 as in section 14.2, the following controllers can be derived (c.f. Eq. (9.1-4)):
(14.4-2)

G_RMV2d(z) = - L(z^-1) / [b_1 C(z^-1) F(z^-1)]   (14.4-3)

where now

D(z^-1) = F(z^-1)C(z^-1) + z^-d L(z^-1).   (14.4-5)
With D(z^-1) → C(z^-1):

G_RMV4d(z) = - L(z^-1) / [b_1 F(z^-1)]   (14.4-7)
cannot be considered nor controlled (see Eq. (14.2-4) and Eq. (14.2-19)).
If now the order of D(z^-1) is m = d-1, then D(z^-1) = F(z^-1), and the disturbance signal consists only of the uncontrollable part, so that the minimum variance controller cannot lead to any improvement.
14.5 Simulation Results with Minimum Variance Controllers 269
G(s) = 1 / [(1 + 7.5s)(1 + 5s)]
For the minimum variance controller MV4, Eq. (14.1-15), the quadratic mean values of the disturbance signal n(k), the controlled variable y(k) and the process input u(k) were determined by simulation for weighting factors on the process input of r = 0 ... 0.5 and weighting factors on the integral part of α = 0 ... 0.8, applying Eq. (14.1-25). Then the characteristic value (the stochastic control factor)
K = (14.5-2)
In Figure 14.5.1, N = 150 samples were used. Figure 14.5.1 now shows:
[Figure 14.5.1: stochastic control factor K as a function of the weighting factor r for different values of α, for the controllers MV3 and MV4]
G_RMV4(z) = - (11.0743 - 0.0981z^-1) / (1 + 0.6410z^-1)

G_RMV3(z) = - (5.4296 - 0.0481z^-1) / (1 + 0.569z^-1 + 0.1274z^-2)
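Such a controller transfer function is implemented directly as a difference equation; for the MV4 controller printed above, a sketch (the coefficients are read off the transfer function):

```python
# u(z)(1 + 0.6410 z^-1) = -(11.0743 - 0.0981 z^-1) y(z), i.e.
# u(k) = -0.6410 u(k-1) - 11.0743 y(k) + 0.0981 y(k-1)

def mv4_step(u_prev, y_k, y_prev):
    """One step of the MV4 controller of this simulation example."""
    return -0.6410 * u_prev - 11.0743 * y_k + 0.0981 * y_prev
```

The large leading coefficient explains the large changes in u(k) observed in the simulations below.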
For MV4 it can be seen that the standard deviation of the controlled
variable y is significantly smaller than that of n(k); the peaks of
n(k) are especially reduced. However, this behaviour can only be ob-
tained using large changes in u(k). A weighting factor of r = 0.02 in
the controller MV3 leads to significantly smaller amplitudes and some-
what larger peak values of the controlled variable.
For a = 0.2, Fig. 14.5.2 c), the offset vanishes. The time response of
the manipulated and the controlled variable is more damped compared with
Fig. 14.5.2 b). The transient responses of the various controllers are
shown in Figure 14.5.3.
The simulation results with process III (third order with deadtime) show that with increasing process order it becomes more and more difficult to obtain a satisfactory response to a step change in the reference variable.
Figure 14.5.2 Signals for process VII with minimum variance controllers
MV4 and MV3
a) α = 0    b) α = 0.2
Figure 14.5.3 Transient responses of the controllers MV3 and MV4 and
process VII for different r and a
15. State Controllers for Stochastic
Disturbances
The process model assumed in chapter 8 for the derivation of the state controller for deterministic initial values is now excited by a vector stochastic noise signal v(k):

x(k+1) = A x(k) + B u(k) + F v(k)   (15.1-1)

with

E{v(k)} = 0   (15.1-2)

cov[v(k)] = E{v(i) v^T(j)} = V δ_ij   (15.1-3)

where δ_ij = 1 for i = j and δ_ij = 0 for i ≠ j. For the initial state

E{x(0)} = 0,  cov[x(0)] = X_0.   (15.1-4)

The matrices V and X_0 are positive semidefinite.
A controller is now to be designed such that

E{I} = E{x^T(N) Q x(N) + Σ (k=0 to N-1) [x^T(k) Q x(k) + u^T(k) R u(k)]}   (15.1-5)

becomes a minimum. Here Q is assumed to be positive semidefinite and symmetric, and R to be positive definite and symmetric. As the state
variables and input variables are stochastic, the value of the perfor-
mance criterion I is also a stochastic variable. Therefore the expec-
tation of I is to be minimized, Eq. (15.1-5). As in section 8.1 the
output variable y(k) is not used. Section 15.3 considers the case of
nonmeasurable state variables ~(k) and the use of measurable but dis-
turbed output variables. The literature on stochastic state controllers
started at about 1961, and an extensive treatment can be found in
[12.2], [12.3], [12.4], [12.5], [8.3].
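For N → ∞ the optimal state feedback can be computed by iterating the matrix Riccati recursion of chapter 8 to its stationary solution. A minimal numerical sketch (the matrices are illustrative, not from the book):

```python
# Stationary solution of the linear-quadratic problem by iterating the
# Riccati difference equation; the (certainty-equivalent) feedback is
# u(k) = -K x(k).
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        # K = (R + B'PB)^-1 B'PA,  P = Q + A'P(A - BK)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative process
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = dlqr(A, B, Q, R)
```

The resulting closed loop matrix A - B K is asymptotically stable, and the same K serves for deterministic initial values and for stochastic disturbances.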
together with Eqs. (8.1-30) and (8.1-31). For N → ∞ the stationary solution results. As u(k) depends only on x(k),
but not on x(k-1), x(k-2), ... and u(k-1), u(k-2), ..., and furthermore v(k) is statistically independent of x(k-1), x(k-2), ..., the control law for large N can be restricted to u(k) = f(x(k)) (c.f. Eq. (15.1-9)). For small N both the stochastic initial value x(0) and the stochastic disturbances v(k) have to be controlled, and the resulting optimal controller is Eq. (15.1-8). As the optimal control of a deterministic initial value x(0) leads to the same controller, Eq. (8.1-33) is an optimal state controller for both deterministic and stochastic disturbances if the same weighting matrices are assumed in the respective performance criteria.
(15.1-11)

(15.1-12)
The value of the performance criterion which can be attained with the
optimal state controller, can be determined as follows. If Eq. (15.1-1)
instead of Eq. (8.1-7) is introduced into Eq. (8.1-6), the calculations
of that section follow until Eq. (8.1-19) giving
E{I_{N-1,N}} = E{x^T(N-1) P_{N-1,N} x(N-1) + v^T(N-1) F^T P_N F v(N-1)}

E{I_{N-1}} = E{x^T(N-1) P_{N-1} x(N-1) + v^T(N-1) F^T P_N F v(N-1)}

or

E{I_{N-2}} = E{x^T(N-2) P_{N-2} x(N-2) + v^T(N-2) F^T P_{N-1} F v(N-2) + v^T(N-1) F^T P_N F v(N-1)}

and therefore finally, if v(k) is stationary, for N steps

(15.1-13)

In the steady state P_k = P̄, and instead of the single initial state x(0) the disturbance signals F v(k) can be taken. Then
(15.1-14)
using Eq. (12.2-28).
15.2 Optimal State Controllers with State Estimation 277
or

E{n(k)} = 0

i.e. white noise. In section 15.4 it will be shown that the unknown state variables can be recursively estimated by a state variable filter (Kalman filter) which uses the measured y(k) and u(k) and applies the algorithm
then one again obtains an optimal control system which minimizes the
performance criterion Eq. (15.1-5) [12.4]. For the overall system we
then have
[x(k+1)]   [A        -B K       ] [x(k)]   [F  0] [v(k)  ]
[x̂(k+1)] = [Γ C   A - B K - Γ C ] [x̂(k)] + [0  Γ] [n(k+1)]   (15.2-6)

With the state estimation error x̃(k) = x(k) - x̂(k):

[x(k+1)]   [A - B K    B K   ] [x(k)]   [F   0] [v(k)  ]
[x̃(k+1)] = [0        A - Γ C ] [x̃(k)] + [F  -Γ] [n(k+1)]   (15.2-8)
(15.2-9)
As the state controller is the same for both optimally estimated state variables and exactly measurable state variables, one can speak of a 'certainty equivalence principle'. This means that the state controller can be designed for exactly known states, and then the state variables can be replaced by their estimates using a filter which is also designed on the basis of a quadratic error criterion and which has minimal variances. Compared with directly measurable state variables the control performance deteriorates (Eq. (15.1-14)), because of the time delayed estimation of the states and their error variance [12.4].
15.3 Optimal State Controllers for External Disturbances 279
Note that the certainty equivalence principle is valid only if the controller has no dual property, that means it controls just the current state, and the manipulated variable is simply computed so that future state estimates are not influenced in any definite way [15.1]. A general discussion of the separation and certainty equivalence principles is given in chapter 25.
y(k) = C x(k)

with

cov[n(i), n(j)] = N δ_ij   (15.3-2)

(15.3-5)
can be realized with the disturbance model of Eqs. (15.3-3) and (15.3-4). Here we consider a process with one input and one output. Then, from Eq. (3.2-50), for a disturbance signal we have
(15.3-8)
and, depending on the choice of f_i, one obtains for each ξ_i(z) a disturbance signal filter

G_Pξi(z) = D_i(z)/A(z)    i = 1,2,...,m   (15.3-9)

with

A(z) = det[zI - A].   (15.3-10)

The process transfer function is

G_P(z) = y(z)/u(z) = c^T adj[zI - A] b / A(z) = B(z)/A(z).   (15.3-12)
The denominators of G_P(z) and G_Pξ(z) are therefore identical, and the polynomials D_i(z) and B(z) contain common factors. The following example will show possible forms of D_i(z) for two canonical state representations (c.f. section 3.6).
Example 15.3.1

For a second order process with

A = [0  -a_2]        b = [b_1]
    [1  -a_1]            [b_2]

and F as a diagonal matrix one obtains

[f_2 b_2 z + f_2 a_1 b_2 + a_2 b_1    f_1 b_1 z + f_1 b_2]

i.e. for example d_11 = f_1 b_1 and d_21 = f_1 b_2.
This example shows that with the assumption of white vector disturbance signals v(k) or n(k) with independent disturbance signal components, the parameters of the corresponding disturbance signal polynomials cannot possess arbitrary values. This situation changes, however, if the disturbance signal components are equal:
where
a) Controllable canonical form
(15.3-15)

(15.3-16)
n(z)/ξ(z) = D(z)/A(z)   (15.3-17)
with
(15.3-18)
where e.g.
v = - (15.3-21)
see Figure 15.4.1 and section 12.2.1. This process will be extended subsequently to include a measurable input u(k). Here the following symbols are used:

x(k)  (m x 1) state vector
v(k)  (p x 1) input noise vector, statistically independent, with covariance matrix V
y(k)  (r x 1) measurable output vector
15.4 State Estimation (Kalman Filter) 285
The initial state x(0), the noise n(k+1) and the matrices A, C and F are assumed to be known, with

E{n(k)} = 0   (15.4-3)

where

δ_ij = 1 for i = j,  δ_ij = 0 for i ≠ j.

(15.4-6)

(15.4-7)

The new estimate is formed from the predicted estimate x̂_1 and the new measurement y_2 as

x̂ = x̂_1 + K [y_2 - C x̂_1]

with the covariance matrix of the estimate x̂_1

Q = E{(x̂_1 - E{x̂_1})(x̂_1 - E{x̂_1})^T}   (15.4-9)

(15.4-10)
15.4 State Estimation (Kalman Filter) 287
(15.4-13)
If now we set

S S^T = C Q C^T + N   (15.4-14)

S R^T = C Q   (15.4-15)

then

(K S - R)(K S - R)^T = K[C Q C^T + N]K^T - K[C Q] - [C Q]^T K^T + R R^T.   (15.4-16)

This equation agrees with Eq. (15.4-12) except for R R^T. R R^T can be obtained from Eq. (15.4-14) and Eq. (15.4-15) as follows:

S^T S R^T = S^T C Q
R^T = (S^T S)^-1 S^T C Q
R = Q C^T S (S^T S)^-1
R R^T = Q C^T S (S^T S)^-1 (S^T S)^-1 S^T C Q = Q C^T W C Q

with W = S (S^T S)^-1 (S^T S)^-1 S^T, so that

S^T W S = (S^T S)(S^T S)^-1 (S^T S)^-1 (S^T S) = I
S S^T W S S^T = S I S^T = S S^T.
(15.4-18)
In this equation only the term (K S - R)(K S - R)^T depends on K. The diagonal elements of this term consist only of squares of elements of (K S - R) and therefore can have only positive or zero values. The diagonal elements take their minimal values for

K S = R
K = R S^T (S S^T)^-1
K = Q C^T [C Q C^T + N]^-1.   (15.4-19)

The minimum variance of the estimation error of x̂ is then, using Eq. (15.4-12)

(15.4-20)
(15.4-8)

Here the expectation x̄(k) is used, as the exact value x(k) is unknown. The recursive estimation algorithm is then

Q(k+1) = E{(x(k+1) - E{x(k+1)})(x(k+1) - E{x(k+1)})^T} = A Q̄(k) A^T + F V F^T   (15.4-24)

E{n(k+1) n^T(k+1)} = N   (15.4-25)

(15.4-26)

and with Eq. (15.4-20) the covariance matrix of the estimation error of x̂(k+1) becomes
Starting values
To start the filter algorithm, assumptions on the initial state have to be made. If it is unknown,

x̂(0) = 0

is taken. The initial value of the covariance matrix Q̄(0) must also be assumed. For properly chosen x̂(0) and Q̄(0) their influence vanishes
quickly with time k, so that precise knowledge is unnecessary. See the
example 22.2.1 in [2.22].
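One prediction-correction cycle of the recursive estimator can be sketched as follows (the function name kalman_step is hypothetical; the matrices follow the notation of this section, and the usage values are illustrative):

```python
# One recursion of the Kalman filter of section 15.4: prediction with the
# process model, correction with the gain K = Q C^T [C Q C^T + N]^-1.
import numpy as np

def kalman_step(xhat, Qcov, y_new, A, C, F, V, N):
    # prediction
    x_pred = A @ xhat
    Q_pred = A @ Qcov @ A.T + F @ V @ F.T          # cf. Eq. (15.4-24)
    # correction with the new measurement
    K = Q_pred @ C.T @ np.linalg.inv(C @ Q_pred @ C.T + N)
    x_new = x_pred + K @ (y_new - C @ x_pred)      # innovation feedback
    Q_new = (np.eye(len(xhat)) - K @ C) @ Q_pred   # error covariance
    return x_new, Q_new
```

As stated above, the influence of the chosen starting values x̂(0) and Q̄(0) vanishes quickly: the covariance converges to a steady value after a few recursions.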
Block diagram
In Figure 15.4.1 the estimation algorithm is shown for u(k) = 0. The Kalman filter contains the homogeneous part of the process. The measured y(k+1) and its model predicted value ŷ(k+1) are compared and an error
[Figure: the process with noise n(k+1) and the Kalman filter, which forms the predicted new estimate, the predicted new output ŷ(k+1), the output error (residual, innovation) and the new estimate x̂(k+1)]
Figure 15.4.2 Markov process with Kalman filter for the estimation of ~(k+1)
Orthogonalities
In the original work of Kalman [15.4] the recursive state estimator
was derived by applying the orthogonality condition between the esti-
mates and the measurements
(15.4-35)

(15.4-36)

(15.4-37)
is statistically independent
(15.4-38)
[Figure 16.1: cascade control system with main (major) controller and auxiliary (minor) controller]
The main controller forms the control error as for the single loop by subtracting the (main) controlled variable y_1 from the reference value w_1. The controlled plant of the main controller is then the inner control loop and the process part G_Pu1. The auxiliary control loop is therefore connected in cascade with the main controller. A cascade control system provides a better control performance than the single loop for the following reasons:

1) Disturbances which act on the process part G_Pu2, that means in the input region of the process, are already controlled by the
16. Cascade Control Systems 295
G_w2(z) = y_2(z)/w_2(z) = G_R2(z)G_Pu2(z) / [1 + G_R2(z)G_Pu2(z)]   (16-1)
With
( 16-3)
In addition to the plant GPu(z) of the single loop the plant of the
main controller of the cascade control system now includes a factor
which acts as an acceleration term. Therefore a 'quicker' plant results.
For the closed loop behaviour of a cascade control system it finally results in

y_1(z)/w_1(z) = G_R1(z)G_R2(z)G_Pu(z) / [1 + G_R2(z)G_Pu2(z) + G_R1(z)G_R2(z)G_Pu(z)].   (16-4)
Example 16. 1
The process under consideration has two process parts with the z-transfer functions

G_Pu2(z) = 0.4134z^-1 / (1 - 0.5866z^-1) = 0.4134 / (z - 0.5866)

G_Pu1(z) = (0.1087z^-1 + 0.0729z^-2) / (1 - 1.1197z^-1 + 0.3012z^-2)

so that the overall process is

G_Pu(z) = (0.0186z^-1 + 0.0486z^-2 + 0.0078z^-3) / (1 - 1.7063z^-1 + 0.958z^-2 - 0.1767z^-3).

With a proportional auxiliary controller the inner loop becomes, for two values of the gain q_02,

G_w2(z) = 0.2894/(z - 0.2972)  or  0.5374/(z - 0.0492).
The pole moves toward the origin with increasing q 02 reaching the ori-
gin for q 02 = 1.42. This shows that the settling time of the auxiliary
control loop becomes smaller than that of the process part GPu 2 • The
resulting closed loop behaviour of the cascade control system compared
with that of the single loop becomes better only for q 02 > 1.3. If q 02
is chosen too small then the behaviour of the cascade control system
becomes too slow because of a smaller loop gain compared with that of
the optimized main controller. Notice that the parameters of the main
controller were changed when the gain of the auxiliary control loop
changed. The gain of the auxiliary loop varies for 0 < q_02 ≤ 1.3 over 0 < G_w2(1) ≤ 0.54. It makes more sense to use a PI-controller as auxiliary controller:

G_w2(z) = 0.8268(z - 0.7000) / [(z - 0.7493)(z - 0.0105)].
One pole and one zero approximately cancel and the second pole is near
the origin. The settling time of y 2 (k) becomes smaller than that of the
process part GPu 2 (z), Figure 16.2a). The overall transfer function of
the plant of the main controller is given by Eq. (16-3)
(z + 0.1718)(z + 2.4411) / [(z - 0.5866)(z - 0.6705)(z - 0.4493)].
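The statements about the inner-loop pole can be verified with a one-line computation. For a proportional auxiliary controller of gain q_02 acting on G_Pu2(z) = 0.4134/(z - 0.5866), the closed inner loop is G_w2(z) = 0.4134 q_02/(z - 0.5866 + 0.4134 q_02), so that (a sketch):

```python
# Pole of the inner (auxiliary) control loop of Example 16.1 for a
# proportional auxiliary controller with gain q02.

def inner_loop_pole(q02):
    return 0.5866 - 0.4134 * q02
```

For q_02 = 0.70 this reproduces the inner-loop pole 0.2972 quoted above, and for q_02 = 1.42 the pole reaches the origin.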
Figure 16.2 Transients of a control loop with and without cascade auxiliary controller. The main controller has PID-, the auxiliary controller PI-behaviour. a) Auxiliary controlled variable y_2 without main controller; b) controlled variable y_1; c) controlled variable y_1 with main and auxiliary controller.
Table 16.2.1 shows that the parameters of the main controller are changed by adding the auxiliary controller as follows: K larger, c_D smaller, c_I larger.
[Table 16.2.1: parameters q_0, q_1, q_2, K, c_D, c_I of the main controller 3PC-3 with r = 0.1]

u(0) = q_02 w_2(0)
w_2(0) = q_01 w_1(0)

and therefore

u(0) = q_01 q_02 w_1(0).   (16-9)
[Figure: block diagram of feedforward control — the measurable disturbance v acts through the feedforward controller on the process input and through the process on the output y]
When designing a control system one should always try to control the
effects of measurable disturbances using feedforward, leaving the in-
completely feedforward-controlled parts and the effect of unmeasurable
disturbances on the controlled variable to feedback control.
G_Pu(z) = y(z)/u(z) = [B(z^-1)/A(z^-1)] z^-d = [(b_1 z^-1 + ... + b_m z^-m)/(1 + a_1 z^-1 + ... + a_m z^-m)] z^-d   (17.0-1)

is assumed to be known.
304 17. Feedforward Control
(17.1-1)
and therefore
(17.1-2)
and only the numerator of the process transfer behaviour has to be can-
celled.
If these feedforward controls can be realized and are stable, the influence of the disturbance v(k) on the output y(k) is completely eliminated. One condition for the realizability of Eq. (17.1-2) is that if the element h_0 is present an element f_0 must also be present, and if h_1 is present f_1 must also be present, etc. This means that for the assumed process model structure of Eqs. (17.0-1) and (17.0-2) d = 0 must always be fulfilled. Therefore one can assume d = 0 from the beginning if G_Pv(z) has no jumping property and does not always contain a deadtime d' ≥ d. Then only the part B/A is cancelled.
our on or outside the unit circle in the z-plane (nonminimum phase be-
haviour).
Example 17.1.1
  T_0[s]   a_1   a_2   a_3   b_1   b_2   b_3   d

  T_0[s]   c_1      c_2     d_1     d_2
  2       -1.027   0.264   0.144   0.093    for processes I and II
  4       -0.527   0.070   0.385   0.158    for process III
[Figure: manipulated variable u(k) of the feedforward control, settling to the value 0.133]
Process III (third order with deadtime; model of a low pass process)

The feedforward element given by Eq. (17.1-2) is unrealizable for process III because d ≠ 0. If the deadtime d in Eq. (17.1-2) is omitted, that means only the deadtime-free term B/A is compensated, then
D
17.2 Parameter-optimized Feedforward Control 307
G_S(z) = G_S0(z) G_R(z)   (17.1-4)
u_S(z)/v(z)   (17.2-1)
are assumed. Because the structure need not be correct, one does not
obtain in general an ideal feedforward control, and transient devia-
tions may occur.
hence

dS²_eu/da = 0.   (17.2-4)
with
u = u(∞): final value for e.g. step disturbances

K_S = G_Pv(1)/G_Pu(1) = (h_0 + h_1 + h_2)/(1 + f_1 + f_2)   (17.2-6)
then for l = 2 four parameters and for l = 1 only two parameters have to be determined through optimization.
(17.2-7)

u(0) = h_0
u(1) = (1 - f_1)u(0) + h_1
u(2) = -f_1 u(1) + (1 - f_2)u(0) + h_1 + h_2   (17.2-8)

h_0 = u(0)   (17.2-9)

f_1 = [u(1) - K_S] / [K_S - u(0)]   (17.2-10)

h_1 = u(1) - u(0)(1 - f_1).   (17.2-11)
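These relations can be checked numerically: choosing u(0), u(1) and K_S fixes the feedforward element for l = 1, and its step response must pass through the chosen values and settle at K_S. A sketch with illustrative numbers:

```python
# Parametrization (17.2-9)-(17.2-11) of G_S = (h0 + h1 z^-1)/(1 + f1 z^-1)
# by the chosen values u(0), u(1) and the static gain K_S.

def ff_params(u0, u1, Ks):
    h0 = u0                         # Eq. (17.2-9)
    f1 = (u1 - Ks) / (Ks - u0)      # Eq. (17.2-10)
    h1 = u1 - u0 * (1.0 - f1)       # Eq. (17.2-11)
    return h0, f1, h1

def step_response(h0, f1, h1, n):
    """Response of G_S to a unit step disturbance."""
    u, prev = [], 0.0
    for k in range(n):
        prev = -f1 * prev + h0 + (h1 if k >= 1 else 0.0)
        u.append(prev)
    return u
```

This confirms the design idea of the section: u(1) remains the single free variable, while u(0), the static gain and the remaining coefficients are fixed.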
Here u(1) can now be chosen as the single independent variable in the parameter optimization, and its value is determined such that

dV/d[u(1)] = 0.   (17.2-12)
(17.2-13)
(17.2-15)
Example 17.2.1

Process II

Figure 17.2.1 shows V[u(1)] and Figure 17.2.2 the corresponding time responses of the manipulated and the controlled variable for u(0) = 1.5. Because of the nonminimum phase behaviour, the initial deviation is increased by feedforward control; the deviation for k ≥ 3, however, is improved.

Process III

Figure 17.2.3 shows V[u(1)] for different u(0). The minima are relatively flat for large u(0). Figure 17.2.4 shows that the feedforward control improves the behaviour for k ≥ 2.
□

[Figure 17.2.1: loss function V = Σ (k=0 to 100) y²(k) as a function of u(1)]

[Figure 17.2.2: transient responses of y(k) and u(k) with and without feedforward control]

Figure 17.2.3 Loss function V[u(1)] for process III

Figure 17.2.4 Transient responses of the manipulated variable u(k) and output variable y(k) for process III for f_1 = 0.3; h_0 = 3.0; h_1 = -2.3.
17.4 Minimum Variance Feedforward Control 313
If the state variables x(k) are directly measurable, the state variable deviations x_v(k) are acquired by the state controller of Eq. (8.1-33)

u(k) = -K x(k)

one sample interval later, so that for state control additional feedforward control is unnecessary. With indirectly measurable state variables, the measurable disturbances v(k) can be added to the observer. For observers as in Fig. 8.7.1 or Fig. 8.7.2 the feedforward control algorithm is
is minimized. One notices that the manipulated variable u(k) can at the earliest influence the output variable y(k+1), as b_0 = 0. The derivation is the same as for minimum variance control, giving Eqs. (14.1-5) to (14.1-12). The only difference is that v(k) is measurable, and as a result, instead of a control law u(z)/y(z) = ..., a feedforward control law u(z)/v(z) = ... is of primary interest.
In this case for z y(z) Eq. (14.1-5) is introduced, and for feedforward
control it follows that
G_SMV1(z) = u(z)/v(z) = - λA(z^-1)[D(z^-1) - C(z^-1)]z / {zB(z^-1)C(z^-1) + (r/b_1)A(z^-1)C(z^-1)}   (17.4-3)

This will be abbreviated as SMV1.
If r = 0, then

G_SMV2(z) = - λA(z^-1)[D(z^-1) - C(z^-1)]z / [zB(z^-1)C(z^-1)]   (17.4-4)

With C(z^-1) = A(z^-1),

G_SMV3(z) = - λ[D(z^-1) - A(z^-1)]z / {zB(z^-1) + (r/b_1)A(z^-1)}   (17.4-5)

and for r = 0

G_SMV4(z) = - λ[D(z^-1) - A(z^-1)]z / [zB(z^-1)].   (17.4-6)
The feedforward control elements SMV2 and SMV4 are the same as the minimum variance controllers MV2 and MV4 with the exception of the factor λ. As the discussion of the properties of these feedforward controllers is analogous to that for the minimum variance controllers in chapter 14, in the following only the most important points are summarized.

The feedforward control SMV1 affects the output variable in the following way:
y(z)/v(z) = [λD(z^-1)/C(z^-1)] {1 + [zB(z^-1) / (zB(z^-1) + (r/b_1)A(z^-1))] [C(z^-1)/D(z^-1) - 1]}.   (17.4-7)
C(z^-1) = A(z^-1) has to be set only for SMV3. When r → ∞, G_v(z) → λD(z^-1)/C(z^-1), and the feedforward control is then meaningless. For
(17.4-8)

G_SMV1d(z) = u(z)/v(z) = - λA(z^-1)L(z^-1) / {zB(z^-1)C(z^-1) + (r/b_1)A(z^-1)C(z^-1)}   (17.4-9)

or with r = 0

G_SMV2d(z) = - λA(z^-1)L(z^-1) / [zB(z^-1)C(z^-1)].   (17.4-10)

G_SMV3d(z) = - λL(z^-1) / {zB(z^-1) + (r/b_1)A(z^-1)}   (17.4-11)

or for r = 0

G_SMV4d(z) = - λL(z^-1) / [zB(z^-1)].   (17.4-12)
(17.4-13)
[Figure: schematic of a steam generator — the fuel valve (u_2) acts via the evaporator on the drum pressure p_dr (y_2, pressure gauge), the injection valve (u_1) acts via the superheater on the steam temperature (y_1, temperature gauge); transfer elements G_1 to G_10]
318 18. Structures of Multivariable Processes

Superheater:  G_11(s) = y_1(s)/u_1(s)

Evaporator:  G_22(s) = y_2(s)/u_2(s)

Coupling superheater-evaporator:  G_12(s) = y_2(s)/u_1(s)

Coupling evaporator-superheater:  G_21(s) = y_1(s)/u_2(s) = G_10(s) G_5(s) G_6(s) G_4(s)
G_11 and G_22 are called the 'main transfer elements' and G_12 and G_21 the 'coupling transfer elements'. Assuming that the input and output signals are sampled synchronously with the same sample time T_0, the transfer functions between the samplers are then combined before applying the z-transformation, as shown in the appendix, giving

G_11(z) = y_1(z)/u_1(z) = G_1G_2G_3G_4(z)

G_22(z) = y_2(z)/u_2(z)   (18.1-1)

G_21(z) = y_2(z)/u_1(z)

G_12(z) = y_1(z)/u_2(z)
This example shows that there are common transfer function elements in this input/output representation. The transfer functions can be summarized in a transfer matrix G(z). In this example the numbers of inputs and outputs are equal, leading to a square transfer matrix. If the numbers of inputs and outputs are different, the transfer matrix becomes rectangular. It should be noted that the transfer function elements describe only the controllable and observable part of the process. The non-controllable and non-observable process parts cannot be represented by transfer functions, as is well known.
In the case of the P-canonical structure each input acts on each out-
put, and the summation points are at the outputs; P-canonical multiva-
riable processes are described by Eq. (18.1-2). Changes in one trans-
fer element influence only the corresponding output, and the number of
inputs and outputs can be different. The characteristic of the V-cano-
nical structure is that each input acts directly only on one correspon-
ding output and each output acts on the other inputs; this structure
is defined only for the same number of inputs and outputs. Changes in
one transfer element influence the signals of all other elements. For a twovariable process with V-canonical structure we obtain the following equation

y = G_H {u + G_K y}.   (18.1-3)

If G is non-singular, we have

u = G^-1 y

so that
( 18. 1-6)
(18.1-7)
Both canonical forms can therefore be converted into each other, but realizability must be considered. For a twovariable process the calculation of the transfer function elements is for example given in [18.2]. If one measures frequency responses or impulse responses, then one obtains only the transfer behaviour in a P-canonical structure. If other internal structures are considered, proper parametric models and parameter estimation methods must be used.
The overall structure describes only the signal flow paths. The actual
behaviour of multivariable processes is determined by the transfer
functions of the main and coupling elements including both their signs
and mutual position. One distinguishes between symmetrical multivariable processes, where

G_ij(z) = G_ji(z)    i,j = 1,2,...

With regard to the settling times of the decoupled main control loops, slow process elements G_ii can be coupled with fast process elements G_ij. With lumped parameter processes signals can only appear at the input or output of energy, mass or momentum storages. The main and coupling elements often contain the same storage components, so that a main transfer element and a coupling transfer element possess some common transfer function terms. Hence G_ii ≈ G_ij or G_ii ≈ G_ji can often be observed.
Consider a twovariable process with a P-structure of Eq. (18.1-2) connected with a twovariable controller which consists of only two main controllers. The sample time is assumed to be equal and sampling to be synchronous for all signals. Furthermore w1 = w2 = 0. Then we have

[ 1+G11R11    G21R22  ] [y1]   [0]
[ G12R11     1+G22R22 ] [y2] = [0]

or

(1+G11R11)y1 + G21R22 y2 = 0
G12R11 y1 + (1+G22R22)y2 = 0.

If the first equation is solved for y2 and introduced into the second equation, one obtains
[1 + G11(z)R11(z)][1 + G22(z)R22(z)] - G12(z)R11(z)G21(z)R22(z) = 0.   (18.1-10)
322 18. Structures of Multivariable Processes
The expressions 1+G11R11 and 1+G22R22 are the characteristic polynomials of the uncoupled single control loops formed by the main transfer elements and the main controllers. The term -G12R11G21R22 expresses the influence of the coupling between both single control loops by the coupling elements G12 and G21 on the eigenbehaviour. This term describes the effect on the characteristic equations of the single loops induced by the coupling elements. If G12 = 0 and/or G21 = 0 the characteristic equations of the single control loops are unchanged.
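The influence of the coupling term on the characteristic equation Eq. (18.1-10) can be checked numerically. The following sketch assumes, purely for illustration, first-order elements G_ij(z) = K_ij/(z - p) with a common pole p and proportional main controllers; under that assumption the characteristic equation reduces to a quadratic whose roots can be inspected for stability (|z| < 1):

```python
import numpy as np

def coupled_char_roots(K, p, KR1, KR2):
    """Roots of the characteristic equation of the coupled twovariable
    loop for the illustrative case G_ij(z) = K_ij/(z - p) with
    proportional controllers KR1, KR2:
    (z - p + K11*KR1)(z - p + K22*KR2) - K12*K21*KR1*KR2 = 0."""
    K11, K12, K21, K22 = K
    poly = np.polymul([1.0, -p + K11 * KR1], [1.0, -p + K22 * KR2])
    poly[2] -= K12 * K21 * KR1 * KR2   # coupling correction of the constant term
    return np.roots(poly)
```

With K12 = 0 or K21 = 0 the roots reduce to those of the two uncoupled loops, mirroring the remark above.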
The transfer functions with the reference variables as inputs are introduced,

G_wi(z) = G_ii(z)R_ii(z) / [1 + G_ii(z)R_ii(z)]   i = 1, 2   (18.1-12)

so that Eq. (18.1-10) becomes

[1 + G11(z)R11(z)][1 + G22(z)R22(z)][1 - K(z)G_w1(z)G_w2(z)] = 0   (18.1-13)

where

K(z) = G12(z)G21(z) / [G11(z)G22(z)]

is the dynamic coupling factor. Eq. (18.1-13) shows that the eigenvalues of a multivariable system in P-structure consist of the eigenvalues of the single main control loops and additional eigenvalues caused by the couplings G12 and G21. Again, if G12 = 0 and/or G21 = 0 the eigenvalues of the twovariable control system are identical to those of the single uncoupled loops. From Eq. (18.1-10) it follows after division by (1+G22R22) that
1 + G11(z)R11(z)[1 - K(z)G_w2(z)] = 0.   (18.1-15b)
Under the influence of the coupled control loop the controlled "process" of the main controller changes as follows:

G11 → G11[1 - K(z)G_w2(z)]
G22 → G22[1 - K(z)G_w1(z)].
Now the change in the gain of the controlled "processes" caused by the coupled neighbouring control loop is considered. For the controller R_ii(z) the process gain is G_ii(1) in the case of the open loop j and G_ii(1)[1 - K0 G_wj(1)] in the case of the closed neighbouring loop. The factor [1 - K0 G_wj(1)] = ε_ii describes the change of the gain through the coupled neighbouring loop. K0 is called the static coupling factor

K0 = K(1) = G12(1)G21(1) / [G11(1)G22(1)] = K12K21 / [K11K22].   (18.1-16)

This coupling factor exists for transfer elements with proportional behaviour, or integral behaviour if there are two integral elements G_ii(z) and G_ij(z). In Fig. 18.1.4 the factor ε_ii is shown as a function of K0. For an open neighbouring loop j, ε_ii = 1 is valid. If the neighbouring loop is closed, the following cases can be distinguished [18.7]:

1)  K0 < 0:               ε_ii > 1
2a) 1/G_wj(1) ≥ K0 > 0:   1 > ε_ii ≥ 0
2b) K0 > 1/G_wj(1):       ε_ii < 0.
Therefore a twovariable process can be divided into negatively and positively coupled processes. In case 1), the gain of the controlled "process" increases by closing the neighbouring loop, so that the controller gain must in general be reduced. In case 2a), the gain of the controlled "process" decreases and the controller gain can be increased. Finally, in case 2b) the gain of the controlled "process" changes its sign, so that the sign of the controller R_ii must be changed. Near ε_ii ≈ 0 the control of the variable y_i is not possible in practice.
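The static coupling factor of Eq. (18.1-16) and the resulting gain-change factor can be evaluated directly from the four gains. A small sketch of this classification (the numbers in the test are illustrative):

```python
def static_coupling_factor(K11, K12, K21, K22):
    """Static coupling factor K0 = K12*K21/(K11*K22), Eq. (18.1-16)."""
    return (K12 * K21) / (K11 * K22)

def gain_change_factor(K0, Gwj1):
    """eps_ii = 1 - K0*Gwj(1): factor by which the gain seen by the
    main controller R_ii changes when the neighbouring loop j closes."""
    return 1.0 - K0 * Gwj1

def coupling_case(K0, Gwj1):
    """Classify into the cases 1), 2a), 2b) discussed above."""
    if K0 < 0:
        return "1"    # eps_ii > 1: controller gain must be reduced
    if K0 <= 1.0 / Gwj1:
        return "2a"   # 0 <= eps_ii < 1: controller gain may be increased
    return "2b"       # eps_ii < 0: controller sign must be changed
```

This mirrors the three cases listed above; near eps_ii = 0 the classification boundary also marks where control of y_i becomes impractical.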
Figure 18.1.4 Dependence of the factor ε_ii = 1 - K0 G_wj(1) on the static coupling factor K0 for twovariable control systems with P-canonical structure (negatively coupled: ε_ii > 1; positively coupled without sign change: 0 ≤ ε_ii ≤ 1; positively coupled with sign change: ε_ii < 0)
18.1 Structural Properties of Transfer Function Representations 325
c) Reference variables

In the example of the steam generator of Fig. 18.1.1 these cases correspond to the following disturbances:
Their product yields the static coupling factor K0. Therefore for positive coupling K0 > 0 the groups I and II, and for negative coupling K0 < 0 the groups III and IV of the sign combinations of the gains K11, K12, K21 and K22 can be distinguished:

Table 18.1.1 Groups of sign combinations of the gains of a twovariable process

positive coupling K0 > 0:
  I)   reinforcing:                                K21/K22 > 0 and K12/K11 > 0
  II)  counteracting:                              K21/K22 < 0 and K12/K11 < 0

negative coupling K0 < 0:
  III) R11 reinforces R22, R22 counteracts R11:    K21/K22 < 0 and K12/K11 > 0
  IV)  R11 counteracts R22, R22 reinforces R11:    K21/K22 > 0 and K12/K11 < 0

Four sign combinations of the four gains belong to each group. Simulation of the transient responses
shows that the response of the controlled variable is identical for the
different sign combinations within one group. If only one disturbance
n 1 acts on the output y 1 (and n 2 = 0), then the action of the neighbou-
ring controller R22 is given in Table 18.1.2. The controller R22 coun-
teracts the controller R11 for positive coupling and reinforces it for
negative coupling.
Table 18.1.2 Effect of the main controller R22 on the main controller R11 for one disturbance n1 on the controlled variable y1. Sign combinations and groups as in Table 18.1.1.

positive coupling K0 > 0:   group I:   counteracting
                            group II:  counteracting
negative coupling K0 < 0:   group III: reinforcing
                            group IV:  reinforcing
with

A(z^-1) = A0 + A1 z^-1 + ... + Am z^-m
B(z^-1) = B0 + B1 z^-1 + ... + Bm z^-m   (18.1-18)

and, for example, a diagonal

A(z^-1) = [ A11(z^-1)       0
                0       A22(z^-1) ].   (18.1-19)
To get a first view of the forms of the matrices A, B and C and the corresponding structures of the block diagram, we consider three examples of a twovariable process as in section 18.1.
Fig. 18.2.1 shows two main transfer elements for which the state variables are directly coupled by the matrices A12 and A21. This means physically that all storages and state variables are parts of the main transfer elements. The coupling elements have no independent storage or state variable. The state representation is

[y1(k)]   [ c1^T    0   ] [x1(k)]
[y2(k)] = [  0     c2^T ] [x2(k)].   (18.2-4)
18.2 Structural Properties of the State Representation 331
[y1(k)]   [ c11^T    0      c21^T    0    ] [x11(k)]
[y2(k)] = [   0     c12^T     0     c22^T ] [x12(k)]
                                            [x21(k)]
                                            [x22(k)]   (18.2-6)
In this case all matrices of the main and coupling elements occur in A
as diagonal blocks.
x(k+1) = A x(k) + B u(k)   (18.2-7)

y(k) = C x(k)   (18.2-8)

with the composite state vector x^T(k) = [x11^T(k)  x12^T(k)  x21^T(k)  x22^T(k)].
Chapter 18 has already shown that there are many structures and combi-
nations of process elements and signs for twovariable processes. There-
fore general investigations on twovariable processes are known only for
certain selected structures and transfer functions. The control behavi-
our and the controller parameter settings are described in [19.1],
[19.2], [19.3], [19.4], [19.5] and [18.7] for special P-canonical pro-
cesses with continuous-time signals. Based on these publications, some
results which have general validity and are also suitable for discrete-
time signals, are summarized below.
a) Stability, modes

One distinguishes:

- symmetric processes:    G11 = G22,  G12 = G21
- asymmetric processes:   G11 ≠ G22,  G12 ≠ G21
- coupling factor
    dynamic:  K(z) = G12(z)G21(z) / [G11(z)G22(z)]
    static:   K0 = K(1) = K12K21 / [K11K22]
              negative coupling: K0 < 0
              positive coupling: K0 > 0
In the case of sampled signals the sample time T0 may be the same in both main loops or different. Synchronous and nonsynchronous sampling can also be distinguished.
The next section describes the stability regions and the choice of con-
troller parameters for P-canonical twovariable processes. The results
have been obtained mainly for continuous signals, but they can be qua-
litatively applied for relatively small sample times to the case of
discrete-time signals.
The stability limits are shown in Figures 19.1.1 and 19.1.2 for positive and negative values of the coupling factor [19.1].
19.1 Parameter Optimization of Main Controllers 339
Figure 19.1.1 Stability regions of a symmetrical twovariable control system with negative coupling and P-controllers [19.1]
Figure 19.1.2 Stability regions of a symmetrical twovariable control system with positive coupling and P-controllers [19.1]
The controller gains KRii are related to the critical gains KRiik on the stability limit of the noncoupled loops, i.e. K0 = 0. Therefore the stability limit is a square with KRii/KRiik = 1 for the noncoupled loops. With increasing magnitude of negative coupling K0 < 0 an increasing region develops in the middle part and the peaks at both ends also increase, Figure 19.1.1. For an increasing magnitude of positive coupling K0 > 0 the stability region decreases, Figure 19.1.2, until a triangle remains for K0 = 1. If K0 > 1 the twovariable system becomes monotonically structurally unstable for main controllers with integral action, as is seen from Figure 18.1.3 a). Then Gw1(0) = 1 and Gw2(0) = 1, and with K0 = 1 a positive feedback results. If K0 > 1 the sign of one controller must be changed, or other couplings of manipulated and controlled variables must be taken. Figures 19.1.1 and 19.1.2 show that the stability regions decrease with increasing magnitude of the coupling factor, if the peaks for negative coupling, which are not relevant in practice, are neglected.
Figure 19.1.3 shows, for the case of negative coupling, the change of the stability regions when an integral term is added to the P-controller (PI-controller) and additionally a differentiating term (PID-controller). In the first case the stability region decreases, in the second case it increases.
Figure 19.1.3 Stability regions of a symmetrical twovariable system with negative coupling K0 = -1 for continuous-time P-, PI- and PID-controllers [19.1]
PI-controller:  TI = Tp
PID-controller: TI = Tp, TD = 0.2 Tp
Tp: time period of one oscillation for KRii = KRiik (critical gain on the stability limit), see figure in Table 5.6.1
Figure 19.1.4 Stability regions for the same twovariable system as Fig. 19.1.3, but with discrete-time P-controllers and different sample times T0.
342 19. Parameter-optimized Multivariable Control Systems
[Figure: transient responses of twovariable control systems, arranged according to increasing positive coupling, increasing negative coupling (down to K0 = -1) and increasing asymmetry Tp2/Tp1 of the loop periods; the uncoupled case appears in the middle.]
S = Σ_{i=1}^{p} α_i Σ_{k=0}^{M} [e_i²(k) + r_i Δu_i²(k)].   (19.1-3)
Here the α_i are weighting factors for the main loops, with Σα_i = 1. If these have a unique minimum

dV/dq = 0   (19.1-4)

with

q^T = [q01 q11 ... qν1 ; ... ; q0p q1p ... qνp].   (19.1-5)
Now a rough picture of the stability region is known and also which
case a) to d) is appropriate.
T_pC or T_piik are the time periods of the oscillations at the stability points C, or A for i = 1 or B for i = 2.
These tuning rules can only give rough values; in many cases correc-
tions are required. Though the rules have been given for controllers
for continuous-time signals, they can be used in the same way for dis-
crete-time controllers. The principle of keeping a suitable distance
to the stability limit, remains unchanged.
The control behaviour depends much more on the mutual effect of the main controllers (groups I to IV in Table 18.1.1) than on quantities such as damping. If the system is symmetric, the control becomes worse in the sequence group I → III → IV → II, and if it is asymmetric in the sequence group III → I → IV → II. The best control resulted for negative coupling if R11 reinforces R22 and R22 counteracts R11, and for positive coupling if both controllers reinforce each other. In both cases the main controller of the slower loop is reinforced. The poorest control is for negatively coupled processes, where R11 counteracts R22 and R22 reinforces R11, and especially for positive coupling with counteracting controllers. In these cases the main controller of the slower loop is counteracted. This example also shows that the faster loop is influenced less by the slower loop. It is the effect of the faster loop on the slower loop which plays the significant role.
For external signals the closed twovariable system is described by

y(z) = G_w(z) w(z) + G_v(z) v(z),   (19.2-1)

whereas for missing external signals the modes are described by
19.2 Decoupling by Coupling Controllers (Non-interaction) 347
[I + G_P(z)R(z)] y(z) = 0.   (19.2-2)

For non-interaction with respect to the reference variables the closed loop transfer matrix

G_w(z) = [I + G_P(z)R(z)]^-1 G_P(z)R(z)   (19.2-4)

must be diagonal.
c) Non-interaction of modes

The modes of the single loops do not influence each other if the system has no external disturbance. Then the elements of y are decoupled and Eq. (19.2-2) leads to the requirement that the open loop matrix

G_P(z) R(z)   (19.2-5)

must be diagonal.
The diagonal matrices can be freely chosen within some limits. The transfer functions can be given for example in the same way as for uncoupled loops. Then the coupling controllers R_ij can be calculated and checked for realizability. As a decoupled system for disturbances is difficult to design and is often unrealizable [18.2], in the following only independence of modes, which also leads to non-interaction for the reference variables, is briefly considered.
348 19. Parameter-optimi zed Multivariable Control Systems
The decoupling controller follows from

R = (adj G_P / det G_P) G_O   (19.2-6)

with a diagonal matrix G_O. For a twovariable process

G_P = [ G11   G21
        G12   G22 ]   (19.2-7)

and the controller matrix is

R = [ R11   R21
      R12   R22 ].

Choosing G_O = det G_P · diag[G11R1, G22R2], Eq. (19.2-6) gives

R = [  G22G11R1    -G21G22R2
      -G12G11R1     G11G22R2 ].   (19.2-8)

If the controller is split into the main controllers

R_H = [ R11    0
         0    R22 ]

and the coupling controllers

R_K = [  0    R21
        R12    0  ],

the overall controller

R = R_H + R_K   (19.2-9)

corresponds to Eq. (19.2-6) if the coupling controllers are chosen as

R21 = -(G21/G11) R22   (19.2-10)

R12 = -(G12/G22) R11.   (19.2-11)

The decoupled open loop elements then become

G11 R11 [1 - K(z)]   (19.2-12)

G22 R22 [1 - K(z)]   (19.2-13)

with

det G_P = G11 G22 [1 - K(z)].   (19.2-14)
The matrix polynomial controller is

P(z^-1) u(z) = Q(z^-1) e_w(z)   (20.1-1)

with polynomial matrices

P(z^-1) = P0 + P1 z^-1 + ...   (20.1-2)

Q(z^-1) = Q0 + Q1 z^-1 + ... .   (20.1-3)

For deadbeat behaviour the reference transfer functions

y(z)/w(z)  and  u(z)/w(z)   (20.2-1)

must be polynomials in z^-1 of finite order m+d. The controller equation can also be written as

(20.2-4)
u(z)/e(z) = q0 [1 - z^-1/α] A(z^-1) / {1 - q0 [1 - z^-1/α] B(z^-1) z^-d}

with

q0 = 1/[(1 - a1) B(1)]  and  1/α = a1.
The multivariable analogy (MDB2) is

(20.2-5)

with

(20.2-6)

Q0min = B^-1(1)[I - A1]^-1  and  Q0max = B1^-1,   (20.2-7)

where Q0min satisfies u(1) = u(0). For the smallest process inputs, u(1) = u(0), this requires that

Q0 = B^-1(1)[I - A1]^-1   (20.2-8)

yielding

H = A1.   (20.2-9)
A(z^-1) y(z) = B(z^-1) z^-d u(z) + D(z^-1) v(z)   (20.3-1)

is assumed, with

(20.3-2)

For exact steady-state reference following the input must attain the value

u_w(k) = B^-1(1) A(1) w(k).   (20.3-4)

Corresponding to Eq. (14.2-4), the process and signal model is split up into

z^(d+1) y(z) = ...   (20.3-5)

where the new matrix polynomials are defined by

(20.3-6)

(20.3-7)

(20.3-8)
Eq. (20.3-5) is now transformed into the time domain and, analogously to Eqs. (14.1-7) to (14.1-10), the criterion I(k+d+1) is obtained. Then ∂I(k+d+1)/∂u(k) = 0 is computed, resulting in
~ T[
-1 (z -1 )[~(z -1 )z~(z)+~(z -1 )~(z) J- ~(z) ] + ~[~(z)-~w(z) J = 0
1 ~
(20.3-9)
-1 -1 -1 -1 -d (20. 3-10)
~(z)=Q (z l[~(z )y_(z)-~(z )z ~(z)].
(20.3-11)
with m state variables, p process inputs and r process outputs. The optimal steady-state controller is then

u(k) = -K x(k)   (21.2-1)

and possesses p×m coefficients if each state variable acts on each process input.
resulting in
(21.3-3)
(21.4-4)
(21.4-5)
where x̂(k) is predicted using Eq. (21.4-1). If the deadtime is not included in the system matrix A, the controller equations are [20.1]

(21.4-7)

x̂(k+d) = E{x(k+d)|k} = A^d x(k) + Σ_{i=0}^{d-1} A^(d-1-i) B u(k-d+i).   (21.4-8)
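The prediction of Eq. (21.4-8) can be sketched directly. The code below assumes the deadtime convention x(k+1) = A x(k) + B u(k-d), with the d inputs still "in transit" through the deadtime supplied explicitly:

```python
import numpy as np

def predict_state(A, B, x_k, u_past, d):
    """d-step state prediction, Eq. (21.4-8):
    x_hat(k+d) = A^d x(k) + sum_{i=0}^{d-1} A^(d-1-i) B u(k-d+i).

    u_past[i] holds the input u(k-d+i) for i = 0..d-1."""
    x_hat = np.linalg.matrix_power(A, d) @ x_k
    for i in range(d):
        x_hat = x_hat + np.linalg.matrix_power(A, d - 1 - i) @ B @ u_past[i]
    return x_hat
```

Propagating the plant x(k+1) = A x(k) + B u(k-d) forward d steps reproduces exactly this prediction, which is a simple way to verify the index convention.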
Another version, which corresponds to the minimum variance controllers discussed in chapters 14 and 20.3, is obtained by using a criterion in which the variances of the outputs rather than all the state variables are weighted. Introducing Q = C^T S C in Eq. (21.4-3) yields this minimum variance state controller (MSMV2).
There are many more ways in which adaptive controllers or adaptive control algorithms can be realized with digital computers, i.e. process computers and microcomputers, than with analog techniques. The great progress in the production of cheap digital processors has enabled the implementation of complex control algorithms which would otherwise either not be realized at all or only at unjustifiable expense using analog techniques. In addition there are many advantages in having process models and controllers in discrete-time form compared with continuous-time form, especially in theoretical development and in computational effort. Furthermore, the progress in the field of process identification and in the design of control algorithms since about 1965 was necessary so that adaptive control algorithms could be developed to meet practical requirements. For these reasons interest in adaptive control has increased considerably during the last ten years. Many early papers on adaptive control were published between 1958 and 1968; most of them were based on analog signals and were realized by analog computers. Surveys of these early adaptive control systems are given for example in the papers [22.1] to [22.5] and in the books [22.6] to [22.10]. However, because of the expense of practical realization and particularly because of the lack of universal applicability, the interest in adaptive control subsequently declined.
This section gives a short review and introduction to the most impor-
tant basic structures of adaptive control systems. A comparison of con-
tributions on adaptive control shows that there are many different de-
finitions of the term 'adaptive'. In [22.15] some of these definitions
are summarized and new ones are proposed. The following description
considers adaptive control schemes in an input/output framework. Then
space is devoted to different adaptive control principles and algorithms.
For simplicity only single input/single output processes are treated.
22. Adaptive Control Systems - A Short Review 361
Adaptive controllers with feedback can be divided into two main groups.
Self-optimizing adaptive controllers try to attain an optimal control
performance, subject to the design criterion of the controller and to
the obtainable information on the process and its signals, Fig. 22.2 a).
2. controller calculation
Recent surveys are given in [22.14], [25.12], [25.6]. The other group comprises model reference adaptive controllers, Fig. 22.2 b), which try to obtain a closed loop response close to that of a given reference model for a given input signal. This requires a measurable external signal (e.g. the reference value for servo-systems), and the system then adapts only if this given signal changes. In this case also three stages can be distinguished:

2. controller calculation
(23.1-1)

where

u(k) = U(k) - U00,   y(k) = Y(k) - Y00   (23.1-2)

are the deviations of the absolute signals U(k) and Y(k) from the d.c. ('direct current' or steady-state) values U00 and Y00. d = 0,1,2,... is the discrete deadtime. From Eq. (23.1-1) the z-transfer function becomes

y_u(z)/u(z) = B(z^-1)/A(z^-1) z^-d.   (23.1-3)

Figure 23.1.1 Process and noise model
E{v(k)} = 0,   E{v(k)v(k+τ)} = σv² δ(τ)   (23.1-5)

where σv² is the variance and δ(τ) is the Kronecker delta function. The z-transfer function of the noise filter is

n(z)/v(z) = D(z^-1)/A(z^-1).   (23.1-6)

Eqs. (23.1-3) and (23.1-6) yield the combined process and noise model

y(z) = [B(z^-1) z^-d u(z) + D(z^-1) v(z)] / A(z^-1)   (23.1-8)

or

A(z^-1) y(z) = B(z^-1) z^-d u(z) + D(z^-1) v(z).   (23.1-9)
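For experimenting with the estimation methods of the following sections, the combined process and noise model can be simulated directly in the time domain. A sketch with monic A(z^-1) and D(z^-1) (the coefficient lists and test signals are illustrative assumptions):

```python
import numpy as np

def simulate(a, b, d_coeff, u, dead, v):
    """Simulate A(q^-1) y(k) = B(q^-1) q^-dead u(k) + D(q^-1) v(k) with
    A = 1 + a1 q^-1 + ... + am q^-m,
    B =     b1 q^-1 + ... + bm q^-m,
    D = 1 + d1 q^-1 + ... + dm q^-m."""
    m = len(a)
    y = np.zeros(len(u))
    for k in range(len(u)):
        acc = v[k]                       # white noise enters directly
        for i in range(1, m + 1):
            if k - i >= 0:               # autoregressive and noise terms
                acc += -a[i - 1] * y[k - i] + d_coeff[i - 1] * v[k - i]
            if k - dead - i >= 0:        # delayed input terms
                acc += b[i - 1] * u[k - dead - i]
        y[k] = acc
    return y
```

With v = 0 the routine reproduces the deterministic step response of B z^-dead / A, which is an easy correctness check.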
23.2 The Recursive Least Squares Method (RLS) 367
Considering measured signals y(k) and u(k) up to time point k and the process parameter estimates up to time point (k-1), one obtains from Eq. (23.1-1) an equation in which the equation error (residual) e(k) replaces the "0" of Eq. (23.1-1). This error arises from the noise contaminated outputs y(k) and from the erroneous parameter estimates. In this equation the following term can be interpreted as a one-step-ahead prediction ŷ(k|k-1) of y(k) at time (k-1):

ŷ(k|k-1) = ψ^T(k) Θ̂(k-1).   (23.2-4)
Now inputs and outputs are measured for k = 1,2,...,m+d+N. Then the N+1 equations of the form

y(m+d+N) = Ψ(m+d+N) Θ + e(m+d+N)   (23.2-6)

are obtained, with
368 23. On-line Identification of Dynamical Processes
y^T(m+d+N) = [y(m+d)  y(m+d+1)  ...  y(m+d+N)]   (23.2-7)

Ψ(m+d+N) =
[ -y(m+d-1)    -y(m+d-2)    ...  -y(d)     u(m-1)    u(m-2)   ...  u(0)
  -y(m+d)      -y(m+d-1)    ...  -y(1+d)   u(m)      u(m-1)   ...  u(1)
      ...                                                          ...
  -y(m+d+N-1)  -y(m+d+N-2)  ...  -y(d+N)   u(m+N-1)  u(m+N-2) ...  u(N) ]   (23.2-8)

and therefore

dV/dΘ |Θ=Θ̂ = 0.   (23.2-11)
Θ̂(k+1) = Θ̂(k) + γ(k)[y(k+1) - ψ^T(k+1)Θ̂(k)]   (23.2-14)

The correcting vector is given by

γ(k) = P(k)ψ(k+1) / [1 + ψ^T(k+1)P(k)ψ(k+1)]   (23.2-15)

and

P(k+1) = [I - γ(k)ψ^T(k+1)] P(k).   (23.2-16)
Convergence Conditions

The general requirements for the performance of parameter estimation methods are that the parameter estimates are unbiased,

E{Θ̂ - Θ0} = 0.   (23.2-23)

b) The input signal u(k) = U(k) - U00 must be exactly measurable and U00 must be known.

Φuu(τ) = lim_{N→∞} (1/N) Σ_{k=0}^{N-1} u(k) u(k+τ)  must exist, see [23.15], [23.16].

f) E{e(k)} = 0.
Instead of u(z) and y(z) the differenced signals Δu(z) = u(z)[1 - z^-1] and Δy(z) = y(z)[1 - z^-1] are then used for the parameter estimation. As this special high-pass filtering is applied to both the process input and output, the process parameters can be estimated in the same way as in the case of measuring u(k) and y(k); in the parameter estimation algorithms u(k) and y(k) are simply replaced by Δu(k) and Δy(k).

(23.2-27)

For slowly time varying d.c. values recursive averaging with exponential forgetting leads to

Ŷ00(k) = λ Ŷ00(k-1) + (1 - λ) Y(k)   (23.2-28)

with λ < 1. The same can be applied for U00. The deviations u(k) and y(k) can then be determined by Eq. (23.1-2).
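The recursive averaging of Eq. (23.2-28) is a one-line filter. A sketch (the forgetting factor and the initial value are assumptions to be chosen per application):

```python
def remove_dc(samples, lam=0.95, y00_init=0.0):
    """Track a slowly varying d.c. value by exponential forgetting,
    Y00(k) = lam*Y00(k-1) + (1-lam)*Y(k), and return the deviations
    y(k) = Y(k) - Y00(k) together with the final d.c. estimate."""
    y00 = y00_init
    devs = []
    for Y in samples:
        y00 = lam * y00 + (1.0 - lam) * Y
        devs.append(Y - y00)
    return devs, y00
```

The same routine can be applied to the input sequence U(k) to obtain u(k).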
(23.2-29)

with

(23.2-30)

For a first-order process the model is

y(k) = ψ^T(k)Θ̂(k) + e(k)

with

ψ^T(k) = [-y(k-1)  u(k-1)],   Θ̂(k) = [â1(k)  b̂1(k)]^T.

The new parameter estimates follow from

[â1(k)]   [â1(k-1)]   [γ1(k-1)]
[b̂1(k)] = [b̂1(k-1)] + [γ2(k-1)] e(k)

and the individual computations become, with i = P(k)ψ(k+1):

e)  P(k)ψ(k+1) = [p11(k)  p12(k)] [-y(k)]   [-p11(k)y(k) + p12(k)u(k)]   [i1]
                 [p21(k)  p22(k)] [ u(k)] = [-p21(k)y(k) + p22(k)u(k)] = [i2]

f)  ψ^T(k+1)P(k)ψ(k+1) = [-y(k)  u(k)] [i1]  = -y(k) i1 + u(k) i2
                                       [i2]

h)  P(k+1) = [I - γ(k)ψ^T(k+1)] P(k) = [ p11(k) - γ1 i1    p12(k) - γ1 i2 ]
                                       [ p21(k) - γ2 i1    p22(k) - γ2 i2 ]
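The recursion of Eqs. (23.2-14) to (23.2-16), specialized to this first-order model, can be sketched as follows (the initialisation P(0) = p0·I with large p0 is a common assumption, not prescribed by the text):

```python
import numpy as np

def rls_first_order(u, y, p0=100.0):
    """Recursive least squares for y(k) = -a1*y(k-1) + b1*u(k-1) + e(k),
    following Eqs. (23.2-14) to (23.2-16)."""
    theta = np.zeros(2)                  # [a1_hat, b1_hat]
    P = p0 * np.eye(2)
    for k in range(1, len(y)):
        psi = np.array([-y[k - 1], u[k - 1]])
        gamma = P @ psi / (1.0 + psi @ P @ psi)          # Eq. (23.2-15)
        theta = theta + gamma * (y[k] - psi @ theta)     # Eq. (23.2-14)
        P = (np.eye(2) - np.outer(gamma, psi)) @ P       # Eq. (23.2-16)
    return theta
```

On noise-free data generated by a known first-order process the estimates converge quickly to the true parameters, which is a useful sanity check before adding noise.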
The method of recursive least squares can also be used for the parameter estimation of stochastic signal models, i.e. of a stationary autoregressive moving average process (ARMA). If v(k-1),...,v(k-p) were known, the RLS method could be used as in Eqs. (23.2-14) to (23.2-17), as v(k) in Eq. (23.2-33) can be interpreted as the equation error, which is statistically independent by definition. Now the time after the measurement of y(k) is considered. Here y(k-1),...,y(k-p) are known. Assuming that the estimates v̂(k-1),...,v̂(k-p) and Θ̂(k-1) are known, the most recent noise value v̂(k) can be estimated via Eq. (23.2-33), [23.1], [23.2]:

v̂(k) = y(k) - ψ̂^T(k) Θ̂(k-1)   (23.2-36)

with

ψ̂^T(k) = [-y(k-1) ... -y(k-p)  v̂(k-1) ... v̂(k-p)].

Then also
(23.2-40)
with a correlated signal ε(z) = D(z^-1) e(z) is used, the recursive methods for dynamical processes and for stochastic signals can be combined to form an extended least squares method (RELS) [23.3], [23.2]. Based on

y(k) = ψ̂^T(k) Θ̂(k-1) + e(k)   (23.3-3)

(23.3-5)
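A sketch of the extended least squares idea for a first-order model: the unmeasurable noise values in the data vector are replaced by the running residuals (model orders and the initialisation are illustrative assumptions):

```python
import numpy as np

def rels_first_order(u, y, p0=100.0):
    """Extended least squares (RELS) sketch for
    y(k) = -a1*y(k-1) + b1*u(k-1) + d1*v(k-1) + v(k):
    the unknown v(k-1) is replaced by the previous residual."""
    theta = np.zeros(3)                  # [a1, b1, d1] estimates
    P = p0 * np.eye(3)
    e_prev = 0.0
    for k in range(1, len(y)):
        psi = np.array([-y[k - 1], u[k - 1], e_prev])
        e = y[k] - psi @ theta           # equation error with old parameters
        gamma = P @ psi / (1.0 + psi @ P @ psi)
        theta = theta + gamma * e
        P = (np.eye(3) - np.outer(gamma, psi)) @ P
        e_prev = y[k] - psi @ theta      # posterior residual for the next step
    return theta
```

Structurally this is plain RLS with an extended data vector, which is exactly the point made in the text.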
For convergence of the least squares method the error signal e(k) must be uncorrelated with the elements of ψ^T(k). The instrumental variables method bypasses this condition by replacing the data vector ψ^T(k) by an instrumental vector h^T(k) whose elements are uncorrelated with e(k). This can be obtained if the instrumental variables are correlated as strongly as possible with the undisturbed components of ψ^T(k). Therefore the elements of the instrumental variables vector are taken from the undisturbed output of an auxiliary model with parameters Θ̂_aux(k). The resulting recursive estimation algorithms have the same structure as for RLS, [23.5], [23.6], cf. Table 23.7.1. To make the instrumental variables h(k) less correlated with e(k), the parameter variations of the auxiliary model are delayed by a discrete first order low-pass filter with dead time [23.6].
A(z^-1) y(z) = B(z^-1) z^-d u(z) + D(z^-1) v(z)   (23.5-1)

The abbreviations

(23.5-3)

y(k) = ψ^T(k) Θ + v(k)   (23.5-4)

with ψ^T(k) as defined in Eq. (23.3-4). From the derivation of the nonrecursive maximum likelihood parameter estimation it follows [3.12], [3.13] that the loss function for a normally distributed error signal is the same as for the least squares method

V(Θ) = (1/2) Σ_{k=1}^{N} e²(k)   (23.5-6)
and has to be minimized with respect to the parameters a_i, b_i and d_i. As this loss function is linear in the parameters a_i and b_i but nonlinear in the parameters d_i, the minimization must be performed iteratively, for example by using gradient search algorithms. Therefore the full maximum likelihood method can only be applied nonrecursively. However, after simplifying the gradient algorithm it becomes possible to obtain a recursive algorithm [23.7], [23.8].
(23.5-7)

with V_Θ(Θ) the vector of first derivatives and V_ΘΘ(Θ) the matrix of second derivatives of the loss function,

Θ̂(k+1) = Θ̂(k) - [V_ΘΘ(Θ̂,k+1)]^-1 V_Θ(Θ̂,k+1).

This leads to

V_Θ(Θ,k+1) = V_Θ(Θ,k) + e(Θ,k+1) ∂e(Θ,k+1)/∂Θ   (23.5-10)

V_ΘΘ(Θ,k+1) ≈ V_ΘΘ(Θ,k) + [∂e(Θ,k+1)/∂Θ]^T ∂e(Θ,k+1)/∂Θ

with

γ(k) = P(k+1)ψ'(k+1) = P(k)ψ'(k+1) / [1 + ψ'^T(k+1)P(k)ψ'(k+1)]   (23.5-13)

P(k) = V_ΘΘ^-1(Θ̂(k-1), k).   (23.5-14)
The data vector is

ψ̂^T(k+1) = [-y(k) ... -y(k-m+1)   u(k-d) ... u(k-d-m+1)   e(k) ... e(k-m+1)]

and the required derivatives follow from

z ∂e(z)/∂a_i = y'(z) z^-(i-1)
z ∂e(z)/∂b_i = -u'(z) z^-(i-1) z^-d
z ∂e(z)/∂d_i = -e'(z) z^-(i-1),   i = 1, ..., m   (23.5-21)

so that the correction uses the filtered data vector

ψ'^T(k+1) = [-y'(k) ... -y'(k-m+1)   u'(k-d) ... u'(k-d-m+1)   e'(k) ... e'(k-m+1)]   (23.5-22)

where the filtered signals are generated by 1/D̂(z^-1), e.g.

e'(k) = e(k) - d̂1 e'(k-1) - ... - d̂m e'(k-m).   (23.5-23)
In comparison with the RELS method, the RML method differs in using ψ'(k+1) instead of ψ̂(k+1) in the correcting vector γ(k), cf. Table 23.7.1. Necessary conditions for unbiased and consistent estimates are:

c) The noise filter must be of the form D(z^-1)/A(z^-1), such that e(k) is uncorrelated.

d) The roots of D(z) = 0 must lie within the unit circle of the z-plane, so that Eq. (23.5-23) is stable.
By choice of the forgetting factor λ < 1 the errors e(k) are weighted as shown in Table 23.7.2 for N' = 50. The weighting then increases exponentially to 1 for N'. The recursive estimation algorithms given in Table 23.7.1 are modified as follows:
Table 23.7.1 Recursive parameter estimation algorithms in the unified form Θ̂(k+1) = Θ̂(k) + γ(k) e(k+1)

Method  data vector ψ^T(k+1)                         correcting vector γ(k)                    unbiased and consistent for noise filter

RLS     [-y(k) ... -y(k-m+1)  u(k) ... u(k-m+1)]     P(k)ψ(k+1)/[1+ψ^T(k+1)P(k)ψ(k+1)]         1/A(z^-1)

RIV     instrumental vector
        [-h(k) ... -h(k-m+1)  u(k) ... u(k-m+1)]     P(k)h(k+1)/[1+ψ^T(k+1)P(k)h(k+1)]         C(z^-1)/A(z^-1), C arbitrary

STA     as RLS                                       P(k+1)ψ(k+1) = 1/(k+1) ψ(k+1)             1/A(z^-1)

RELS    [-y(k) ... -y(k-m+1)  u(k) ... u(k-m+1)
         e(k) ... e(k-m+1)]                          as RLS, with ψ̂ instead of ψ               D(z^-1)/A(z^-1)

RML     [-y'(k) ... -y'(k-m+1)  u'(k) ... u'(k-m+1)
         e'(k) ... e'(k-m+1)]                        P(k)ψ'(k+1)/[1+ψ'^T(k+1)P(k)ψ'(k+1)]      D(z^-1)/A(z^-1)
23.7 A Unified Recursive Parameter Estimation Algorithm 383
λ(k+1) = λ0 λ(k) + (1 - λ0)   (23.7-8)

with λ0 < 1 and λ(0) < 1. For λ0 = 0.95 and λ(0) = 0.95 one obtains for example λ(1) = 0.9525, λ(2) = 0.9549, ..., with λ(k) tending to 1. The weightings given by Eq. (23.7-8) and Eq. (23.7-5) can be combined in the algorithm

(23.7-9)

with

lim_{k→∞} λ(k+1) = λ.
For small identification times and larger noise/signal ratios all methods (except STA) lead to parameter estimates of about the same quality. In general RLS is then preferred because of its simplicity and its reliable convergence. The superior performance of the RIV and RML methods only becomes evident for larger identification times.
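The recursion of Eq. (23.7-8) is easily tabulated; it starts at λ(0) < 1 and tends to 1, so that early data are forgotten more strongly than later data (λ0 and λ(0) below are the example values from the text):

```python
def forgetting_sequence(lam0, lam_init, n):
    """Tabulate the time-varying forgetting factor of Eq. (23.7-8):
    lam(k+1) = lam0*lam(k) + (1 - lam0), starting at lam(0) = lam_init."""
    lam = lam_init
    seq = [lam]
    for _ in range(n - 1):
        lam = lam0 * lam + (1.0 - lam0)
        seq.append(lam)
    return seq
```

The sequence is monotonically increasing towards 1, i.e. the forgetting effect fades out as the estimates become reliable.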
23.8 Modifications to Recursive Parameter Estimation Algorithms 385
(23.8-1)

The main idea is now to transform the data matrix into an upper triangular form

(23.8-3)

(23.8-4)

so that the estimate can be computed by back-substitution from triangular factors, which allows a simple recursive calculation. For more details of this discrete square root filtering method see [23.20], [23.19], [23.18]. As there is no initial value to be selected, this method has a quick convergence in both nonrecursive and recursive versions. A modified square root filtering algorithm which is similar to RLS is described in [23.20]. Other numerical modifications to recursive estimation have the goal of reducing the number of calculations after each sample. The resulting methods are called 'fast' algorithms and are based on certain invariance properties of matrices due to shifted time arguments [23.21].
method   operations per sample          example   convergence                     storage
FRLS     (p²+7p+2)n/2 + 2p³ + 3p        1136      sensitive to starting values,   large
                                                  sometimes slow
STA      2n                             34        unreliable, too slow            high
MDSF     (2n² + 11n)/2                  371       reliable, very good             small
The number of operations compared with the usual RLS is only smaller for n > 10 parameters (order m > 5). However, the convergence is sensitive to the starting values and the program storage is much higher than for the other methods. Very good parameter estimates have been obtained with the square root filter algorithms for larger measurement times. The discrete square root filter algorithms show, however, relatively high oscillations of the parameters in the starting phase, which is not good for parameter-adaptive control. The best overall properties are possessed by the modified square root filter algorithm (MDSF). But for typical processes the advantages are small in comparison with RLS. If, however, numerical problems arise with RLS and exact parameter estimates are required, one should try MDSF. Hence, for many applications the simple recursive least squares method or its extensions (RELS, RML) are preferable.
24. Identification in Closed Loop
Case d: Only the input u(k) and output y(k) are measured.

y_u(z)/u(z) = B(z^-1)/A(z^-1) z^-d   (24.1-1)
[Block diagram: the controller Q(z^-1)/P(z^-1) acts on the control deviation e_w = w - y; the process B(z^-1)/A(z^-1) z^-d produces y_u; the noise filter D(z^-1)/A(z^-1), driven by v, produces n, so that]

y(z) = y_u(z) + n(z)
e_w(z) = w(z) - y(z).
y(z)/v(z) = G_v(z)/[1 + G_R(z)G_P(z)]
          = D(z^-1)P(z^-1) / [A(z^-1)P(z^-1) + B(z^-1) z^-d Q(z^-1)]
          = [1 + β1 z^-1 + ... + βr z^-r] / [1 + α1 z^-1 + ... + αl z^-l]   (24.1-4)

with

l = max[m_a + μ ; m_b + ν + d]
r = m_d + μ.   (24.1-5)

The parameters of this ARMA model

(24.1-6)

can be estimated using the methods given in chapter 23, if the roots of 𝒜(z) = 0 lie within the unit circle of the z-plane and if the polynomials D(z^-1) and 𝒜(z^-1) have no common root.
390 24. Identification in Closed Loop
for given α_i and β_i. In order to calculate these parameters uniquely, certain identifiability conditions must be satisfied.

Identifiability Condition 1
A y = B u + D v.

A* y = B* u + D* v

which leads to the same closed loop signals. This shows that the process B/A and the noise model D/A can be replaced by

B*/A* = (BQ - SP)/(AQ + SQ)   and   D*/A* = DQ/(AQ + SQ)   (24.1-10)

with an arbitrary polynomial S(z^-1), without changing the signals u(k) and y(k) for a given v(k). As S(z^-1) is arbitrary, the orders of A and B cannot be uniquely determined based on the measured u(k) and y(k) alone.
24.1 Parameter Estimation without Perturbations 391
Identifiability Condition 2

Eq. (24.1-4) shows that the m_a + m_b unknown process parameters a_i and b_i have to be determined by the l parameters α_i. If the polynomials D and 𝒜 have no common zero, a unique determination of the process parameters requires l ≥ m_a + m_b, or

μ ≥ m_b   or   ν ≥ m_a - d,   (24.1-12)

i.e.

max[μ - m_b ; ν + d - m_a] ≥ 0,   (24.1-13)

which can be satisfied by any controller of sufficiently high order. If 𝒜(z^-1) and D(z^-1) have p common roots, they cannot be identified, and only l-p parameters α_i and r-p parameters β_i can be determined. The identifiability condition 2 for the process parameters a_i and b_i then becomes

max[μ - m_b ; ν + d - m_a] ≥ p.   (24.1-14)
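Conditions (24.1-13) and (24.1-14) reduce to one integer inequality, which can be checked mechanically (argument names are illustrative; mu and nu denote the orders of the controller polynomials P and Q):

```python
def closed_loop_identifiable(mu, nu, m_a, m_b, d, p_common=0):
    """Check identifiability condition 2, Eqs. (24.1-13)/(24.1-14):
    max(mu - m_b, nu + d - m_a) >= p_common, where p_common is the
    number of common roots of the closed loop polynomial and D."""
    return max(mu - m_b, nu + d - m_a) >= p_common
```

The check shows, for example, how a process deadtime d relaxes the required controller order nu.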
Note that only the common roots of 𝒜 and D are of interest, and not those of 𝒜 and 𝒟, as 𝒟 = DP and P is known. Therefore the number of common roots in the numerator and denominator of

y(z)/v(z) = 𝒟(z^-1)/𝒜(z^-1)   (24.1-15)

is of interest. The identifiability can be improved by switching between different controller parameter sets [24.2], [24.3]. One then obtains additional equations for determining the parameters. Some examples illustrate identifiability condition 2.
Example 24.1.1

Comparison of the coefficients of Eq. (24.1-4) yields

â1 + b̂1 q0 = α1 - p1
â1 p1 + â2 + b̂1 q1 + b̂2 q0 = α2 - p2
...   (24.1-16)

or in matrix form

S Θ = α*,   (24.1-17)

where S contains the controller parameters p_i and q_i in Toeplitz bands,

Θ^T = [â1 ... âm  b̂1 ... b̂m]
α*^T = [α1 - p1,  α2 - p2, ...,  α_μ - p_μ,  α_{μ+1}, ...,  α_{2m}].

Again it can be seen that the matrix S must have rank 2m for a unique solution of Eq. (24.1-17), i.e. ν ≥ m or μ ≥ m. If ν > m or μ > m the overdetermined equation system Eq. (24.1-17) can be solved by using the pseudo-inverse

Θ̂ = [S^T S]^-1 S^T α*.   (24.1-19)
If instead the closed loop signals

u(z)/v(z) = -G_R(z) G_Pv(z) / [1 + G_R(z) G_P(z)]   (24.1-20)

and

y(z)/v(z) = G_Pv(z) / [1 + G_R(z) G_P(z)]   (24.1-21)

were used to estimate a transfer function between u and y directly, then

y(z)/u(z) = -1/G_R(z)

would have been obtained, i.e. the negative inverse controller transfer function. The reason is that the undisturbed process output y_u(k) = y(k) - n(k) is not used. If y_u(k) were known, the process

(24.1-23)

could be identified. This shows that for direct closed loop identification the knowledge of the noise filter n(z)/v(z) is required. Therefore the process and noise model

(24.1-24)

is used.
The basic model for indirect process identification is the ARMA model, cf. Eq. (24.1-4),

[A(z^-1)P(z^-1) + B(z^-1) z^-d Q(z^-1)] y(z) = D(z^-1)P(z^-1) v(z).   (24.1-25)

Introducing the controller equation

P(z^-1) u(z) = -Q(z^-1) y(z)   (24.1-26)

results in the process equation, and after cancellation of the polynomial P(z^-1) one obtains the equation of the process model as in open loop, Eq. (24.1-23). The difference from the open loop case is, however, that u(z) or P(z^-1)u(z) depends on y(z) or Q(z^-1)y(z), Eq. (24.1-26), and cannot be freely chosen.
V = Σ_{k=1}^{N} e²(k)   (24.1-28)

(24.1-29)

(24.1-30)
A unique minimum of the loss function V with regard to the unknown process parameters requires a unique dependence of the process parameters in

D̂^-1 [Â + B̂ z^-d Q/P] = [ÂP + B̂ z^-d Q] / [D̂P] = 𝒜̂/𝒟̂   (24.1-31)

on the error signal e. This term is identical to the right-hand side of Eq. (24.1-4), for which the parameters of A, B and D can be uniquely determined based on the transfer function y(z)/v(z), provided that the identifiability conditions 1 and 2 are satisfied. Therefore, in the case of convergence with e(z) = v(z) the same identifiability conditions must be valid for direct closed loop identification. Note that the error signal e(k) is determined by the same equation for both the indirect and the direct process identification; compare Eq. (24.1-4) and Eqs. (24.1-30), (24.1-31). In the case of convergence this gives Â = A, B̂ = B and D̂ = D and therefore in both cases e(k) = v(k).
396 24. Identification in Closed Loop
y(k) = ψᵀ(k)Θ = [−y(k−1) … −y(k−m_a)  u(k−d−1) … u(k−d−m_b)] Θ.   (24.1-32)
ψᵀ(k) is one row of the matrix Ψ of the equation system (23.2-6). Because of the feedback, Eq. (24.1-26), there is a relationship between the elements of ψᵀ(k): u(k−d−1) is therefore linearly dependent on the other elements of ψᵀ(k) if μ ≤ m_b − 1 and ν ≤ m_a − d − 1. Only if μ ≥ m_b or ν ≥ m_a − d does this linear dependence vanish. This holds also for the actual equation system Eq. (23.2-6) for the LS method. This shows that linearly dependent equations are obtained if the identifiability condition 2 is not satisfied,
c.f. Eq. (23.2-5) and Eq. (23.5-5). The convergence condition is that e(k) is statistically independent of the elements of ψᵀ(k). For the LS method this gives

ψᵀ(k) = [−y(k−1) …  u(k−d−1) …]

and for the RML method

ψᵀ(k) = [−y(k−1) …  u(k−d−1) …  v̂(k−1) …].
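The linear dependence caused by a low-order feedback can be checked numerically. A minimal sketch (the first-order process and proportional feedback are assumptions for illustration, not the book's example):

```python
import numpy as np

# Closed-loop identifiability sketch: with u(k) = -q0*y(k) the regressor
# columns [-y(k-1), u(k-1)] are exactly proportional (rank 1); an external
# perturbation restores full rank. Process and controller are assumed values.
rng = np.random.default_rng(0)
a1, b1, q0 = -0.8, 0.5, 0.6

def regressors(n, external=False):
    y = np.zeros(n)
    u = np.zeros(n)
    for k in range(1, n):
        y[k] = -a1*y[k-1] + b1*u[k-1] + rng.normal()   # process with noise
        u[k] = -q0*y[k]                                # proportional feedback
        if external:
            u[k] += rng.normal()                       # external test signal
    return np.column_stack([-y[:-1], u[:-1]])          # rows psi^T(k)

r_closed = np.linalg.matrix_rank(regressors(200))
r_external = np.linalg.matrix_rank(regressors(200, True))
print(r_closed, r_external)
```

Without external excitation the regressor matrix is rank-deficient, so the least-squares equations have no unique solution, exactly as identifiability condition 2 predicts.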
The most important results for closed-loop identification without external perturbation, but assuming a linear, time-invariant, noise-free controller, can be summarized as follows:
(24.2-1)

with

u_R(z) = − [Q(z⁻¹)/P(z⁻¹)] y(z).   (24.2-2)

If G_S(z) = G_R(z) = Q(z⁻¹)/P(z⁻¹) then s(k) = w(k) is the reference value. There are several ways to generate the perturbation u_s(k). It is only important that this perturbation is an external signal which is uncorrelated with the process noise v(k).
[Figure: block diagram of the closed loop with external perturbation u_s; controller Q(z⁻¹)/P(z⁻¹), process B(z⁻¹)z⁻ᵈ/A(z⁻¹), noise filter D(z⁻¹)/A(z⁻¹) and perturbation filter G_S(z⁻¹)]
(24.2-4)

resulting in

[AP + Bz⁻ᵈQ] y(z) = DP v(z) + Bz⁻ᵈP u_s(z),   (24.2-5)

and after cancellation of the polynomial P(z⁻¹) one obtains the process equation

(24.2-6)
Unlike Eq. (24.1-27), u is generated not only from the controller based on y, but from Eq. (24.2-1) also by the perturbation u_s(k). Therefore the feedback, c.f. Eq. (24.1-33), is changed. The ARMA parameters αᵢ and βᵢ, Eq. (24.1-4), can be estimated by the RLS method for stochastic signals, section 23.2.1, or by the method of recursive correlation and least squares (RCOR-LS) [24.3]. However, the parameter estimates converge very slowly with indirect process identification because the number of parameters to be estimated is larger.
If only u(k) and y(k) are used for parameter estimation, and not the
perturbation, the RLS, RELS and RML methods are suitable. A measurable
perturbation can be introduced into the instrumental variables vector
of the RIV method. Then this method can also be applied.
25.1 Introduction
'J
~
---
Jj
controller parameter/
parameter state f--
calculation estimation
~
-
w u y
controller process
-
a) Process models

A(z⁻¹)y(z) − B(z⁻¹)z⁻ᵈu(z) = D(z⁻¹)v(z)   (25.1-1)

A(z⁻¹)y(z) − B(z⁻¹)z⁻ᵈu(z) = v(z)   (25.1-2)
Suitable parameter estimation methods for the closed-loop case are considered in chapters 23 and 24. For state estimation and state observation see sections 8.6 and 15.4.
(25.1-5)
o State estimation:
- state estimates
o Signal estimation:
If a noise model is included in the parameter estimation, the non-measurable vᵢ(k) or nᵢ(k) can be estimated, and the parameters Θ and state variables x₀(k) can then be replaced by their estimates.
Cautious controllers
A controller which employs the separation principle in the design and uses the parameter and state estimates together with their uncertainties is called a cautious controller.
e) Control algorithms
The next section discusses which of the control algorithms meet these requirements for parameter-adaptive control. Within the class of self-tuning controllers the following are considered:
406 25. Parameter-adaptive Controllers
- deadbeat controller DB(ν), DB(ν+1)
- minimum variance controller MV3, MV4
- parameter-optimized controller i-PC-j
(25.2-1)

Its orders are ν = m_a and μ = m_b + d. Eq. (24.1-15) is, for the case of inexactly adjusted controller parameters,
25.2 Suitable Control Algorithms 407
(25.2-2)

D(z⁻¹)/A(z⁻¹).   (25.2-3)

Because of the assumed noise filter D/A, Eq. (24.1-2), only MV3 and MV4 are of interest. The z-transfer function of MV3-d is, Eq. (14.2-12),

G_R(z) = − L̂(z⁻¹) / [z B̂(z⁻¹) z⁻ᵈ F̂(z⁻¹) + (r/b̂₁) D̂(z⁻¹)]   (25.2-4)

with
f₀ = 1
fᵢ = d̂ᵢ − Σ_{p=0}^{i−1} f_p â_{i−p},   i = 1, …, d   (25.2-5)
lᵢ = d̂_{i+d+1} − Σ_{p=0}^{d} â_{i+d+1−p} f_p,   i = 0, …, m−1.
p=O
v = max[md,ma] -
~ max[~ -1,md]
}d 0
(25.2-6)
and for d ~ 1
(25.2-7)
D (25.2-8)
(25.2-9)
and p = m_d common roots appear, which means that the process is no longer identifiable for d = 0. If d ≥ 1, Eq. (25.2-7) leads to
(25.2-10)
(25.2-11)
which is satisfied only for relatively large dead time. With r = 0 the
MV4-d controller arises, Eq. (14.2-13),
(25.2-12)
(25.2-13)
are increased by one and the process becomes identifiable for the same
conditions as for inexactly tuned MV3-d and MV4-d.
a) Pole-assignment design
or
1 + α₁z⁻¹ + α₂z⁻² + … + α_{4+d} z^{−(4+d)} = 0   (25.2-15)
d = 0:
α₁ = −1 + a₁ + q₀b₁
α₂ = a₂ − a₁ + q₀b₂ + q₁b₁
α₃ = −a₂ + q₁b₂ + q₂b₁
α₄ = q₂b₂

d = 1:
α₁ = a₁ − 1
α₂ = a₂ − a₁ + q₀b₁
α₃ = −a₂ + q₀b₂ + q₁b₁
For d = 0 and d = 1 one obtains

q₀ = 1 / [(1 − a₁)(b₁ + b₂)],
c.f. Eq. (7.2-13), and only q 1 and q 2 need be calculated. The design
depends, of course, on the proper placement of selected poles.
for k = 0 and k → ∞, c.f. Figures 5.2.2 and 7.2.2. The step response
u*(k) = Σ_{j=1}^{m+1} p*_{j+d} u*(k−d−j) + Σ_{j=0}^{m+1} q*_j e(k−j)   (25.2-18)

u*(0) = q₀* = 1 / [(1 − a₁) Σᵢ bᵢ]   (25.2-19)
follows by

δ ≈ u*(M) − u*(M−1)
or   (25.2-21)
δ ≈ (1/l)[u*(M) − u*(M−l)]

with

δ ≈ Δu*(k) = u*(k) − u*(k−1)   (25.2-22)

and stop if

Δu*(k) − Δu*(k−1) < ε u*(k)   (25.2-23)

with for example ε = 0.02. The increase δ can also be calculated explicitly using
with i = 1, 2, …, m, which follows from Eq. (25.2-17). The parameters of the PID controller 3PC-2 are obtained as follows, c.f. Figure 5.2.2:

q₀*   (25.2-25)
u*(M) − Mδ
c) q₀ + q₁ + q₂ = δ
Table 25.2.1 (excerpt). Evaluation of control algorithms for parameter-adaptive control:

control algorithm: parameter-optimized controller i-PC-j
identifiability condition 2 satisfied for: ν ≥ m_a − d
danger of closed-loop instability: small
computational effort for parameter calculation: medium (ν = 2)
evaluation for parameter-adaptive control operation: suitable, dependent on proper design
The methods discussed in section 23.2 can be used for treatment of the d.c. values u₀₀ and y₀₀ of the signals. Assuming that only stochastic disturbances with E{v(k)} = 0 act on the loop, the d.c. values can be estimated by simple averaging (method 2 in section 23.2) before the adaptive control starts. Then both minimum variance controllers and state controllers can be applied without additional methods for removing offsets, as no offset occurs. However, if the disturbances have a non-zero mean (as in most cases) and there are also changes in the reference variable w(k), the d.c. value must be taken into account and the compensation for offsets must be considered for controllers without integral action such as minimum variance controllers and state controllers. The simplest way to reduce the d.c. value problem is to use first-order differences Δu(k) and Δy(k) in the parameter estimation (method 1 in section 23.2). Offsets can then be avoided by adding a pole at z = 1 to the estimated process model by multiplication with S/(z−1) and by designing the controller for this extended model. However, this still leads to offsets for constant disturbances at the process input and does not give the best control performance. Another possibility is to replace y(k) by [y(k)−w(k)] and u(k) by Δu(k) = u(k)−u(k−1) in both the parameter estimation and the control algorithm, as proposed in [25.9]. This leads, however, to unnecessary changes of the parameter estimates after setpoint changes and therefore to a negative influence during a transient. Relatively good results have been obtained by the estimation of a constant (method 3 in section 23.2). Using y₀₀ = w(k), the d.c. value u₀₀ can easily be calculated such that offsets do not appear [25.15]. Then controllers without integral action can be used directly.
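Method 3, the estimation of a constant, can be sketched as follows; the first-order process and all numbers are assumptions for illustration, and batch least squares is used instead of the recursive form for brevity:

```python
import numpy as np

# D.c. value handling by estimating a constant (method 3, sketch):
# the model y(k) = -a1*y(k-1) + b1*u(k-1) + C contains the offset C as an
# extra parameter, estimated here jointly with a1 and b1.
rng = np.random.default_rng(1)
a1, b1, C = -0.7, 1.2, 5.0                 # assumed process and offset
n = 400
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = -a1*y[k-1] + b1*u[k-1] + C + 0.01*rng.normal()

Psi = np.column_stack([-y[1:-1], u[1:-1], np.ones(n-2)])   # constant regressor
theta = np.linalg.lstsq(Psi, y[2:], rcond=None)[0]
print(theta)    # estimates of [a1, b1, C]
```

With the offset estimated explicitly, a controller without integral action can compensate it directly, as described above.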
a) Stability
y(z) (25.3-1)
25.3 Appropriate Combinations of Parameter Estimation 417
that the orders m and d are exactly known, that the forgetting factor is λ = 1 and that the reference variable is w(k) = 0. For the assumed sto-
chastic disturbances the minimum variance controllers MV3 and MV4 are
suitable. During the transient phase, where the controllers are not ex-
actly tuned to the process and are assumed to be piecewise constant,
the identifiability condition 2
(24.1-14)

is satisfied, since

Â(z⁻¹) ≠ A(z⁻¹);  B̂(z⁻¹) ≠ B(z⁻¹);  D̂(z⁻¹) ≠ D(z⁻¹).

In the case of convergence the estimates approach

Â(z⁻¹) → A(z⁻¹);  B̂(z⁻¹) → B(z⁻¹);  D̂(z⁻¹) → D(z⁻¹).   (25.3-2)
[Figure: time histories of parameter estimates (e.g. â₁) for 1500 ≤ k ≤ 2000]
Now changes in the reference variable w(k) are considered and the noise is assumed to be zero, v(k) = 0. The process is described by

y(z) = [B(z⁻¹)/A(z⁻¹)] z⁻ᵈ u(z).   (25.3-3)
Further assumptions are that the orders m and d are exactly known, the forgetting factor is λ = 1 and the reference variable w(k) is deterministic.
Extensive simulation and experience with real processes have shown that parameter-adaptive control algorithms are stable if the discussed conditions are satisfied, for example if
Φ̄uu(1) > … > Φ̄uu(m), or if m harmonics are involved. Even if the process signal has the persistently exciting property only for a short period, the improvement in the process model may be sufficient. The results, based on simulation and experience, are not so comprehensive that a general stability proof is possible. Therefore new conditions for the global stability of parameter-adaptive control systems could contribute a lot.
[25.12] gives a review of this stability problem. Based on convergence
analysis of recursive parameter estimation methods some general condi-
tions for the combination of RLS, RELS, RML with MV controllers for
stochastic disturbances are given - see the next section. A further
reference is [25.20].
d) Computational effort
RLS/MV4
This was one of the first proposals [25.7], [25.8], [25.9]. D(z- 1 )=1
is assumed for the process model. Hence
(25.3-4)
(25.3-6)
j = 1-p (25.3-7)
parameters can be estimated. Applying RLS, all m_a + m_b parameters can only be estimated for d ≥ 1. If d = 0, one parameter must be assumed known.
F(z⁻¹)A(z⁻¹)y(z) − B(z⁻¹)F(z⁻¹)z⁻ᵈu(z) = F(z⁻¹)v(z)   (25.3-9)
(25. 3-10)
With Eq. (25.3-5) one obtains
(25.3-11)
This modified model contains the controller parameters qᵢ and pᵢ, which can be directly estimated by applying RLS. For this, Eq. (25.3-11) is written as a difference equation with

Θᵀ = [q₀ … q_ν  p₁ … p_μ]   (25.3-14)

ψᵀ(k−d) = [−y(k−d−1) … −y(k−d−m_a)  p₀u(k−d−2) … p₀u(k−μ)].   (25.3-15)
The RLS method is applied to Eq. (25.3-13). The parameter estimates q̂ᵢ and p̂ᵢ are inserted into the control algorithm Eq. (25.3-5) and the new process input is calculated by
This implies |D(e^{iωT₀}) − 1| < 1, i.e. that the error involved in assuming D(z⁻¹) = 1 should have a frequency response lying within the unit circle and hence does not magnify any frequency.
K̂(k+1) = [1 + Σ_{i=1}^{m} âᵢ] / Σ_{i=1}^{m} b̂ᵢ

u(k+1) = … + p̂_{m+d−1} u(k−m−d+1) − q̂₀ e_w(k) − q̂₁ e_w(k−1) − … − q̂_{m−1} e_w(k−m+1)

5. Cycle
a) Replace y(k+1) by y(k) and u(k+1) by u(k).
b) Step to 1.

Notice that the old parameters Θ̂(k) are used to calculate the process input u(k+1) in order to save computing time between steps 4.a) and e).
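The overall cycle of such a parameter-adaptive controller can be sketched as follows; the first-order process, the RLS recursion and the one-step control law are assumptions for illustration, not the exact algorithm above:

```python
import numpy as np

# Certainty-equivalence adaptive loop (sketch): recursive least squares
# estimation of y(k) = -a1*y(k-1) + b1*u(k-1), then the controller is
# recomputed from the current estimates in every cycle.
rng = np.random.default_rng(2)
a1, b1 = -0.9, 0.4                 # "true" process (assumed values)
theta = np.array([0.0, 1.0])       # estimates; b1_hat != 0 so loop can start
P = 100.0*np.eye(2)                # covariance matrix of the estimates
y, u, w = 0.0, 0.0, 1.0            # signals and reference value
for k in range(300):
    psi = np.array([-y, u])                    # regressor from old signals
    y = -a1*y + b1*u + 0.01*rng.normal()       # process output
    gain = P @ psi / (1.0 + psi @ P @ psi)     # RLS update (lambda = 1)
    theta = theta + gain*(y - psi @ theta)
    P = P - np.outer(gain, psi @ P)
    a1h, b1h = theta                           # controller parameter calc.
    if abs(b1h) > 1e-3:
        u = (w + a1h*y)/b1h       # drives the model output to w in one step
print(theta, y)
```

The estimates converge towards the assumed process parameters while the loop is running, and the output settles near the reference value, illustrating the cycle: estimate, compute controller parameters, apply the new input.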
RLS-DB
A particularly simple parameter-adaptive controller is obtained by combining RLS and DB(ν) or, better, DB(ν+1). The design effort for DB is very small and no offset problem occurs. The adaptive algorithm is generated by combining the parameter estimation algorithms given in Table 23.1 and the controller parameter calculations stated in chapter 8.
Table: suitable combinations of parameter estimation and control algorithms (excerpt)

                      stochastic       deterministic
parameter estimation  MV4    MV3    DB(ν)¹⁾  DB(ν+1)¹⁾  3PC-3  LCPA
RLS                   x³⁾    x³⁾    x        x          x      x
(25.4-1)
GP(s) = (1+3.75s) (1+2.5s)
was used with T 0 = 2 sec (test process VII, see Appendix). The z-trans-
fer function with zero-order hold is
0.1387z- 1 +0.0889z- 2
Y.J.& (25. 4-2)
u(z) 1-1.036z- 1 +0.2636z_ 2 .
Only a second-order model was chosen in order to obtain good parameter estimates, so enabling direct comparison with the exact parameters. (This is difficult for higher-order processes, even if the input/output behaviour fits well [23.9].) The d.c. values are zero.
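The discretization (25.4-2) can be checked by comparing the step response of the difference equation with the analytic step response of G_P(s); a small sketch:

```python
import math

# Step-response check of the ZOH discretization (25.4-2) of test process VII
T1, T2, T0 = 3.75, 2.5, 2.0
a1, a2 = -1.036, 0.2636
b1, b2 = 0.1387, 0.0889

def y_cont(t):
    # analytic unit step response of 1/((1+T1*s)(1+T2*s))
    return 1.0 - (T1*math.exp(-t/T1) - T2*math.exp(-t/T2))/(T1 - T2)

y = [0.0, b1]                  # y(0) = 0; y(1) = b1 for a unit step at k = 0
for k in range(2, 12):
    y.append(-a1*y[k-1] - a2*y[k-2] + b1 + b2)

for k in range(1, 12):
    print(k, round(y[k], 4), round(y_cont(k*T0), 4))
```

The sampled values of the difference equation agree with the continuous step response to within the rounding of the four-digit coefficients.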
Stochastic disturbances

n(z)/v(z) = (1 + 0.05z⁻¹ + 0.8000z⁻²) / (1 − 1.036z⁻¹ + 0.2636z⁻²)
Some examples using two fixed and two parameter-adaptive control algo-
rithms are shown in Figure 25.4.1. The fixed controllers were designed
with the exact process parameters. For the parameter-adaptive algo-
rithms the forgetting factor was chosen as λ = 0.98. This example shows
that:
Figure 25.4.1 Input signal u(k) and output signal y(k) for stochastic disturbances (y(k) is drawn stepwise).
a) no control, y(k) = n(k)
b) fixed controller MV3 (r = 0.01)
c) adaptive controller RML/MV3 (r = 0.01)
d) fixed controller DB(ν)
e) adaptive controller RML/DB(ν)
Figure 25.4.2 Parameter estimates âᵢ(k) and b̂ᵢ(k) and gain factor K̂(k) in the case of stochastic disturbances.
a) RML/MV3 (r = 0.01)   b) RML/DB(ν)
Different processes
Various parameter-adaptive control algorithms were applied to differ-
ent types of stable and unstable, proportional and integral action,
minimum phase and nonminimum phase processes, as given in Table 25.4.2.
Figure 25.4.5 presents the results for step changes in the reference
value with the best algorithms in each case. For all proportional-action and stable processes RLS/DB shows quick adaptation to exact tuning. With RLS/MV3 a stable closed loop can be achieved for the integral-acting and the unstable processes.
These simulations may give a first insight into how the parameter-adap-
tive control algorithms work in conjunction with different signals and
different processes. They have shown that convergence to the true pa-
rameters is not a necessary condition for stable adaptive control.
Figure 25.4.3 Input signal u(k) and output signal y(k) for
a) step changes in the reference value w(k)
b) fixed controller MV3 (r = 0.025), D(z⁻¹) = 1
c) adaptive controller RLS/MV3 (r = 0.025)
d) fixed controller DB(ν)
e) adaptive controller RLS/DB(ν)
Figure 25.4.4 Parameter estimates âᵢ(k) and b̂ᵢ(k) and gain factor K̂(k) with (deterministic) reference value step changes.
a) RLS/MV3 (r = 0.025)   b) RLS/DB(ν)
Table 25.4.2 (excerpt). Simulated processes:

process 1: G₁(s) = 1 / [(1 + 2.5s)(1 + 3.75s)];  sample time T₀ = 2.0;
G₁(z) = (0.1387z⁻¹ + 0.0889z⁻²) / (1 − 1.036z⁻¹ + 0.2636z⁻²);  low-pass behaviour
[Figure 25.4.5: transient responses y(k), u(k) to reference value steps w(k) for the different processes of Table 25.4.2, with the best adaptive controller in each case (RLS/DB1, RLS/DB2, RLS/MV3)]
RLS/DB1²⁾   x   −   x   −   x
RLS/MV3     x   x   x   x   x
RLS/MV4     x   −   −   x   −
RML/MV3     x   x   −   x   x
RML/MV4     x   −   −   x   −
T₀: sample time
λ: forgetting factor
(25.5-1)
(25.5-2)
This indicates that the order need not be exactly known. However, the adaptive control algorithms are sensitive to the choice of the dead time d. If d is unknown or changes with time, the control can become either poor or unstable, but this can be overcome by including dead time estimation [25.21].
(0 ≤ r' ≤ 1)   (25.5-3)

where q₀min = 1/[(1 − â₁)Σb̂ᵢ] and q₀max = 1/Σb̂ᵢ, c.f. sections 7.2 and 20.2. In the case of MV3, q₀ depends hyperbolically on r/b₁² for r/b₁² > 1, Eqs. (14.1-25), (14.1-27). With q₀ = q₀max = l₀/b₁ for r = 0 one obtains, Eq. (14.1-27),

q₀ = q₀max / (1 + r/b₁²).   (25.5-4)
[Figure: process gain K as a function of the manipulated variable for air flows M = 300 m³/h and M = 550 m³/h, and time histories of the adaptive temperature control]

Figure 25.6.3 Adaptive control of the air temperature for constant spray water flow, changing reference values w(k) and changing air flow M. Adaptive controller: RLS/DB(ν+1). m = 3; d = 0; λ = 0.9; T₀ = 70 sec.
25.6 Examples of Applications 441
[Figure: gain K [pH/%] of the pH process as a function of the process input u [%], and time histories of the pH value, acid flow Ṁₐ and base flow Ṁ_b]

Figure 25.6.6 Adaptive control of pH. Ṁₐ: acid flow; Ṁ_b: base flow; Ṁ_w: neutral water flow.
a) RLS/DB(ν+1), r' = 0.5, T₀ = 15 sec, m = 3, d = 2, λ = 0.88, P(0) = 500 I
b) RLS/MV3, r'' = 0.15, T₀ = 15 sec, m = 3, d = 2, λ = 0.88, P(0) = 500 I
(25.7-4)
A(z⁻¹)y(z) = B(z⁻¹)z^{−d_p} u(z) + D(z⁻¹)z^{−d_v} v(z)   (25.7-5)
with
25.7 Parameter-adaptive Feedforward Control 445
A(z⁻¹) = 1 + a₁z⁻¹ + … + a_n z⁻ⁿ
B(z⁻¹) = β₁z⁻¹ + … + β_n z⁻ⁿ      (25.7-6)
D(z⁻¹) = δ₁z⁻¹ + … + δ_n z⁻ⁿ
A(z⁻¹) is the common denominator of G_P(z) and G_v(z), and B(z⁻¹) and D(z⁻¹) are the corresponding extended numerators. As all signals of Eq. (25.7-5) are measurable, the parameters aᵢ, βᵢ and δᵢ can be estimated by recursive least squares (RLS), see section 23.2, using
Θᵀ = [a₁ … a_n  β₁ … β_n  δ₁ … δ_n]   (25.7-7)

ψᵀ(k) = [−y(k−1) … −y(k−n)  u(k−d_p−1) … u(k−d_p−n)  v(k−d_v−1) … v(k−d_v−n)]   (25.7-8)
and the elements in Eq. (25.7-8) then become linearly independent only if

max[μ; ν + (d_p − d_v)] ≥ n   for d_p − d_v ≥ 0
max[μ + (d_v − d_p); ν] ≥ n   for d_p − d_v ≤ 0,      (25.7-10)
Based on the model Eq. (25.7-5) and the parameter estimates Θ̂, feedforward control algorithms can be designed using pole-zero cancellation (section 17.1), minimum variance (section 17.4), the deadbeat principle [25.29] or parameter optimization (section 17.2). The resulting adaptive algorithms are described in [25.29]. They show rapid adaptation, and an example is shown in Figure 25.7.2. The combination of RLS with MV4 was proposed in [25.9].
[Figure 25.7.2: adaptive feedforward control; time histories of the disturbance v(k), process input u(k) and output y(k)]
- p-canonical model
A_ii(z⁻¹) y_i(z) = Σ_{j=1}^{p} B_ij(z⁻¹) z^{−d_ij} u_j(z) + Σ_{j=1}^{r} D_ij(z⁻¹) v_j(z),   i = 1, …, r   (25.8-1)

y*(k) = [y(k) y(k+1) … y(k+n−1)]ᵀ   (25.8-6)
y_* (k) = _g_*~* (k) + ~*£* (k) + y_* (k) + Q*y_* (k) . (25.8-7)
i = 1, …, r   (25.8-9)
[Figure: a) block diagram of a two-input two-output process with second-order coupling transfer functions; b) time histories of w₁(k), w₂(k), y₁(k), y₂(k), u₁(k), u₂(k) for reference value steps]
The results of this chapter have demonstrated that suitably chosen parameter-adaptive control algorithms are asymptotically stable and converge rather quickly if the following conditions are satisfied:
(a) The linear process model and the noise model approximately correspond to the real process.
In practice the process parameters are usually time-varying. The parameter-adaptive control algorithms can then track the process if a forgetting memory (λ < 1) is used in the parameter estimation and the process parameters change slowly compared with the process dynamics. However, to avoid the process model 'falling asleep', it is required that the parameter estimation remains sufficiently excited.
[Figure: time histories of adaptive control for air flow M changing between 300 m³/h and 500 m³/h]
In addition the third feedback level may also supervise the functions
of the adaptive loop, particularly if the stability and convergence
conditions are violated by the operating conditions, for example if the
process parameters change too quickly or no external signal excites the
parameter estimation. In the last case if no external signal is exci-
ting the parameter estimation with a forgetting memory it can happen
that the model dies out ('falls asleep') with time until the control
algorithm is changed such that the loop becomes unstable. A process in-
put is generated which looks like a burst, the parameter estimation is
restarted again (i.e. 'awakes') and the adaptive loop becomes stable,
until the next burst, etc. There are several ways to overcome this,
using functions in the third feedback level. For example the forgetting factor is changed to λ = 1, or the process model is frozen, if the control deviation or the process input is within a certain limit, for example |e_w(k)| = |w(k) − y(k)| < ε_w. Simulation and experience with real pro-
cesses have also shown that the parameter-adaptive control algorithms
can also be applied to processes with large stepwise process parameter
changes [25.16].
[Figure: levels of a parameter-adaptive control system. LEVEL 1: control (controller, process); LEVEL 2: adaption (parameter estimation, controller parameter calculation); LEVEL 3: coordination]
N_R = 2^WL − 1.   (26.1-1)

(26.1-2)

If the largest numerical value is the voltage 10 V = 10000 mV, for word lengths of 7 … 15 bits the smallest representable unit is Δ = 78.7 … 0.305 mV. If a temperature range of 100 °C is considered, this gives Δ = 0.787 … 0.003 °C.
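These quantization units follow directly from Eq. (26.1-1); a quick check:

```python
# Quantization unit delta = full range / (2^WL - 1) for different ADC
# word lengths; the full range 10 V = 10000 mV is taken from the text.
full_range_mV = 10000.0
for WL in (7, 10, 15):
    delta = full_range_mV / (2**WL - 1)
    print(WL, delta)
```

The 10-bit value of about 9.8 mV corresponds to the resolution of roughly 0.1 % mentioned later in the chapter.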
The remainder δy < Δ is either rounded up or down to the next quantization level, i.e. to LΔ, or simply truncated. Both cases give

y_Q = LΔ   (26.1-4)

with the quantization error δ bounded by

|δ| ≤ Δ/2   for rounding   (26.1-5)

−Δ < δ ≤ 0   for truncation.   (26.1-6)
The ADC-discretized signal (y_Q)_AD is transferred to the CPU and is there represented mainly using a larger word length, the word length WLN of the number representation. For a linear control algorithm the following computations are made:
Figure 26.1.1 Simplified block diagram of the nonlinearities in a digital closed loop, caused by amplitude quantization (rounding in A/D conversion and in the CPU).
460 26. The Influence of Amplitude Quantization on Digital Control
For fixed point representation the quantization units shown for the
ADC hold if 8 bits or 16 bits word length CPUs are used. The quantiza-
tion can be decreased by the use of double length working.
(26.1-9)

for example can be represented using two words of 16 bits each, with 7 bits for the exponent E (point after the lowest digit) and 23 bits for the mantissa M (point after the largest digit), within a numerical range of

−0.24651902·10⁻³⁹ ≤ L ≤ 0.14272476·10³⁹.
The above discussions have shown the various places where nonlineari-
ties crop up. As it is hard to treat theoretically the effect of only
one nonlinearity on the dynamic and static behaviour of a control loop
the effects of all the quantizations are difficult to analyze. The
known publications assume either statistically uniformly distributed
quantization errors or a maximal possible quantization error (worst
case) [26.1] to [26.6], [2.17]. The method of describing functions
[5.14], [2.19] and the direct method of Ljapunov [5.17] can be used to
analyze stability. Simulation probably is the only feasible way, for
example [26.3], to investigate several quantizations and nontrivial
processes and control algorithms.
b) The control loop does not return to the zero steady-state position, as offsets occur:

lim_{k→∞} e(k) ≠ 0.
c) An additional stochastic signal, the quantization noise or rounding noise, arises.
One multiple-point characteristic with quantization unit Δ for the ADC is assumed within the loop, as drawn in Figure 26.1.1. The possible quantization errors δ are then given by Eq. (26.1-5) and Eq. (26.1-6) for rounding and truncation.
Quantization noise

If a variable changes stochastically such that different quantization levels are crossed, it can be assumed that the quantization errors δ(k) are statistically independent. As the δ(k) can attain all values within their definition interval, Eq. (26.1-5) and Eq. (26.1-6), a uniform distribution can be assumed, Figure 26.2.1. The digitized signal y_Q then consists of the signal plus a quantization noise.
Figure 26.2.1 Probability density of the quantization error for a) rounding b) truncation.
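For rounding, the uniform distribution on (−Δ/2, Δ/2] gives the quantization noise variance σ² = Δ²/12, a standard result; this can be verified empirically:

```python
import numpy as np

# Empirical check of the quantization noise: rounding errors of a signal
# crossing many quantization levels are ~uniform with variance delta^2/12.
rng = np.random.default_rng(3)
delta = 0.01
y = rng.normal(scale=1.0, size=100_000)   # signal crossing many levels
y_q = np.round(y/delta)*delta             # rounding quantizer
err = y_q - y
print(err.var(), delta**2/12)
```

The empirical variance matches Δ²/12 closely, and no error exceeds Δ/2 in magnitude.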
(26.2-4)
and a P-controller are assumed, c.f. Example 16.1. The control devia-
tion is
In the ADC the measured analog signal y(k) is rounded to the second place after the decimal point, resulting in y_Q(k). The response of the signals without and with rounding to a reference value step w(k) = 1(k), with the initial conditions y(k) = 0 and u(k) = 0 for k < 0 and the gain q₀ = 1.3, is shown in Table 26.2.1.
     without rounding    with rounding in the ADC
k    u(k)     y(k)       u(k)     y(k)     y_Q(k)

q₀ = 1.3:
0    1.3000   0          1.3000   0        0
1    0.6015   0.5373     0.5980   0.5373   0.54
2    0.5670   0.5638     0.5720   0.5640   0.56
3    0.5653   0.5651     0.5720   0.5649   0.56
4    0.5652   0.5652     0.5720   0.5649   0.56
5             0.5652              0.5649   0.56

q₀ = 2.0:
0    2.0000   0          2.0000   0        0
1    0.3468   0.8266     0.3400   0.8266   0.83
2    0.7434   0.6283     0.7400   0.6254   0.63
3    0.6482   0.6759     0.6600   0.6727   0.67
4    0.6711   0.6644     0.6600   0.6675   0.67
5    0.6656   0.6672     0.6800   0.6644   0.66
6    0.6669   0.6665     0.6600   0.6708   0.67
7    ·        0.6667     0.6600   0.6663   0.67
8    ·        ·          0.6600   0.6664   0.67
9    ·        ·          0.6800   0.6637   0.66
10   ·        ·          0.6600   0.6705   0.67
11   ·        ·          0.6600   0.6661   0.67
12   ·        ·          0.6800   0.6636   0.66
13   ·        ·          0.6600   0.6703   0.67
14   ·        ·          0.6600   0.6661   0.67
15   ·        ·          0.6800   0.6636   0.66
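The behaviour can be reproduced qualitatively with a small simulation; the first-order process and gains used here are assumptions for illustration (Example 16.1 itself is not restated in this excerpt):

```python
# ADC rounding in a P-controlled loop (sketch): without rounding the loop
# settles exactly; with rounding, a small limit cycle around the steady
# state remains. Process and gains are assumed values.
def simulate(q0, digits, n=30):
    a1, b1 = -0.5, 0.5
    y, u = 0.0, 0.0
    for _ in range(n):
        y = -a1*y + b1*u                     # process step
        y_q = round(y, digits) if digits is not None else y
        u = q0*(1.0 - y_q)                   # P-control, reference w = 1
    return y

exact = simulate(2.0, None)
rounded = simulate(2.0, 2)                   # ADC rounds to 2 decimals
print(exact, rounded)
```

The exact loop converges to its steady state, while the rounded loop keeps oscillating in a narrow band around it, just as in the table above.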
A third-order process, the test process VI (see Appendix), was simulated together with a P-controller having q₀ = 4. The controlled variable was quantized (ADC) with quantization unit Δ_y = 0.1 and the manipulated variable with Δ_u = 0.3 (DAC). In Figure 26.2.2 the response is shown to the initial condition y(0) = 2.2 and the reference variable w₀ = 3.5. A limit cycle occurs with amplitudes |Δy| ≈ Δ_y and |Δu| ≈ 3Δ_u.
[Figure 26.2.2: time histories of y and u over 0 … 180 sec, showing the limit cycle]
The describing function or the direct method of Ljapunov can be used in
stability investigations for the detection of limit cycles. To deter-
mine the describing function of one multiple point characteristic, for
example two three-point characteristics can be connected in parallel,
to obtain a five-point characteristic, etc. [5.14, chapter 52]. A limit
cycle results if there is an intersection of the negative inverse locus
-1/G(iw) of the remaining linear loop with the describing function.
x(k+1)
y(k)      } (26.2-5)
As with Eq. (26.2-1), only the solution for the superimposed quantization error δ_y as input is considered, and the stability of

x(k+1)   (26.2-6)
is analyzed. Further details are given in [5.17, chapter 12]. After defining a Ljapunov function

V(k) = xᵀ(k) P x(k)

and representing the rounded factors as

q = QΔ + δ_q ;   e = EΔ + δ_e,   (26.2-7)

(26.2-9)
σ²_{δqe} = σ²₁ + σ²₂ ≤ [1 + Δ²(Q² + E²)] σ²_δ ≈ [1 + q² + e²] σ²_δ.   (26.2-11)

This shows that with increasing values of the factors q and e, the factor rounding mainly determines the overall error.
Σ_{i=1}^{μ} σ²_{δpu,i} + Σ_{i=0}^{ν} σ²_{δqe,i}.   (26.2-13)
The same control loop is assumed as in Example 26.2.2. The factors and the product in the control algorithm are rounded to the second decimal place such that the quantization unit is Δ = 0.01. The results are shown in Table 26.2.3. A limit cycle with period M = 3 arises, as with quantization in the ADC. The amplitude is also about the same: |Δy| ≈ 0.0034 and |Δu| = 0.01, though there is only one product.
26.2 Various Quantization Effects 469
     exact              with product rounding
k    u(k)     y(k)      u(k)   y(k)
0    2.0000   0         2.00   0
1    0.3468   0.8266    0.34   0.8266
2    0.7434   0.6283    0.74   0.6255
3    0.6482   0.6759    0.66   0.6728
4    0.6711   0.6644    0.66   0.6675
5    0.6656   0.6672    0.68   0.6644
6    0.6669   0.6665    0.66   0.6708
7    ·        0.6667    0.66   0.6663
8    ·        ·         0.68   0.6637
9    ·        ·         0.66   0.6705
10   ·        ·         0.66   0.6661
11   ·        ·         0.68   0.6636
12   ·        ·         0.66   0.6704
13   ·        ·         0.66   0.6661
14   ·        ·         0.68   0.6636
15   ·        ·         0.66   0.6704
Dead band
If the parameters of feedforward control algorithms or digital filters
lie within certain ranges, offsets in the output variable can arise, by
product rounding, which are multiples of the quantization units of the
products.
Example 26.2.6
Depending on the initial values, the following final values are attained:

u(0) ≤ 0.9640:   lim_{k→∞} u(k) = 0.96
u(0) ≥ 1.0450:   lim_{k→∞} u(k) = 1.05.
470 26. The Influence of Amplitude Quantization on Digital Control
For k ≥ 1, all initial values 0.9639 ≤ u(0) ≤ 1.0449 give a nearby rounded steady-state value within the range 0.97 ≤ u_Q ≤ 1.04. The region 0.96 ≤ u_Q ≤ 1.05 is called a dead band [26.6], which lies around the steady-state value for a constant process input. If, starting with u(0) = 0.96, the input v(k) = 0 is applied, the signal u(k) approaches u(k) = 0.05 for k ≥ 24. The dead band always lies around the new steady state.
In [2.22, chapter 27], it is shown how the dead band can be calculated
for a first order difference equation.
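The dead-band mechanism can be sketched for a first-order difference equation; the coefficient a = 0.95 and quantization unit Δ = 0.01 are assumed values, not those of Example 26.2.6:

```python
import math

# Dead band from product rounding (sketch): the product a*u(k-1) is
# rounded to the quantization unit delta; u(k) then stops inside the dead
# band instead of decaying to the true steady state 0.
delta, a = 0.01, 0.95
quant = lambda x: math.floor(x/delta + 0.5)*delta    # rounding quantizer

u = 0.5
for _ in range(200):
    u = quant(a*u)
print(u)    # without rounding, u would have decayed to about 1.8e-5
```

Once |a·u − u| falls below Δ/2 the rounding maps u onto itself, so the signal freezes at the edge of the dead band.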
The word length of the ADC should be chosen such that its quantization
error is smaller than the static and dynamic errors of the sensors. A
word length of 10 bits (resolution 0.1 %) is usually sufficient. The
word length of the DAC must be coordinated with that of the ADC. For
digital control it can be taken such that one quantization unit of the
manipulated variable results, after transfer through the process, in about one quantization unit of the ADC.
27. Filtering of Disturbances
Some control systems and many measurement techniques require the deter-
mination of signals which are contaminated by noise. Suitable filter-
ing methods then have to separate the signal from the noise. It is as-
sumed that a signal s(k) is contaminated additively by n(k) and only
y(k) = s(k)+n(k) is measurable. If the frequency spectra of the signal
and the noise lie in different ranges, they can be separated by suita-
ble bandpass filters, Figure 27.0.1. This is treated in this chapter
for some important cases in the control field. However, if the spectra
of the signal and the noise have overlapping frequency ranges, estima-
tion methods have to be used to determine the signal. In this case it
is not possible to determine the signal without error. The influence
of the noise can only be minimized. The Wiener filter was developed
first in 1940 for continuous-time signals; the method of least squares
estimation was used. However, there are considerable realizability pro-
blems. A considerable extension to filter design is the Kalman filter,
published in 1960. This filter does not use a nonparametric signal mo-
del but a parametric model instead. It was first derived for discrete-
time signals in state space form. With the aid of the method of least
squares a state estimate x̂(k) of the signal model is obtained which
allows the calculation of the signal s(k). This problem was discussed
in section 15.4.
[Figure 27.0.1: bandpass filter separating the signal s from the measured signal y; filter output s_F]
In section 27.1 the noise sources and the noise spectra which usually
contaminate control systems are considered. Various filters are then
briefly described: analog filters in section 27.2 and digital filters
in section 27.3.
The measurable signal is y(t) = s(t) + n(t), with s(t) the undisturbed
signal and n(t) the noise. y(t) is sampled, so that its power density
spectrum repeats periodically:

S*_yy(ω) = Σ_ν S_yy(ω + ν ω_0).   (27.1-3)

As well as the basic spectrum (ν = 0), side spectra (side bands) at
distance ω_0 appear for ν = ±1, ±2, ... These are shown in Figure 27.1.1
a) for the signal s(t) and in b) for the noise n(t) for ν = +1.
Figure 27.1.1 Power density spectra S(ω) for the signal s(k), the
noise n(k) and their low-pass filtering
a) signal    c) low-pass filter
b) noise     d) filtered signals
ω_0: sample frequency; ω_S = ω_0/2: Shannon frequency
27.1 Noise Sources in Control Systems and Noise Spectra
Using analog techniques, broad band noise with ω > ω_S = π/T_0 can be fil-
tered. For filtering of noise before sampling, low-pass filters must
be used which have sufficient attenuation at ω = ω_S = ω_0/2 of about
1/10 ... 1/100 or -20 ... -40 dB, depending on the noise amplitudes.
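As a sketch of this sizing rule, assume a Butterworth-type magnitude |G_F(iω)| = 1/√(1 + (ω/ω_c)^(2n)) (an assumption for illustration, not a formula from the text) and compute the corner frequency ω_c that yields a prescribed attenuation at the Shannon frequency:

```python
import math

# Sketch: corner frequency w_c of an analog low-pass prefilter so that
# its gain at the Shannon frequency wS = pi/T0 is at most gain_max.
# The Butterworth-type magnitude |G| = 1/sqrt(1 + (w/wc)^(2n)) is an
# assumption for this illustration.

def corner_frequency(T0, n, gain_max):
    wS = math.pi / T0
    # solve 1/sqrt(1 + (wS/wc)^(2n)) = gain_max for wc
    ratio = (1.0 / gain_max ** 2 - 1.0) ** (1.0 / (2 * n))
    return wS / ratio

wc = corner_frequency(T0=1.0, n=2, gain_max=0.1)   # -20 dB at wS
print(wc)
```

A second order filter must place its corner frequency close to the Shannon frequency to reach -20 dB there; a stronger attenuation requirement pushes ω_c further down into the signal band.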
To design frequency responses of low-pass filters there are the follow-
ing possibilities [27.4].
Introducing the normalized frequency Ω = ω/ω_g or the normalized
operator s̄ = s/ω_g   (27.2-2)

the low-pass transfer function becomes

G_F(s̄) = 1/(1 + a_1 s̄ + a_2 s̄² + ... + a_n s̄ⁿ)   (27.2-3)

with magnitude |G_F(iΩ)|.   (27.2-4)
For a filter with linear phase

φ(ω/ω_g) = -c ω/ω_g   (27.2-7)

the time delay caused by the phase shift is Δt = -φ/ω = -φ/(Ω ω_g)
= c/ω_g and is therefore independent of the frequency. This results in a
step response with little overshoot. The amplitude does not descend as
quickly to the asymptote 1/Ωⁿ as for the Butterworth filter.
As analog filters for frequencies f_g < 0.1 Hz become expensive, such
low frequency noise should be filtered by digital methods. This section
first considers digital low-pass filters. Then digital high-pass filters
and some special digital filtering algorithms are reviewed.
A digital filter has the transfer function

G_F(z) = s_F(z)/y(z) = B(z⁻¹)/A(z⁻¹).   (27.3-2)

The continuous first order low-pass filter

G_F(s) = s_F(s)/y(s) = 1/(1 + Ts)   (27.3-3)

leads after discretization to

G_F1(z) = s_F(z)/y(z) = b_0/(1 + a_1 z⁻¹)   (27.3-4)

or

G_F2(z) = s_F(z)/y(z) = (1 + a_1)/(1 + a_1 z⁻¹)   (27.3-5)

with gains G_F1(1) = b_0/(1 + a_1) and G_F2(1) = 1. For

G_F1(z) = b_0/(1 + a_1 z⁻¹)   (27.3-6)

the frequency response follows with z = exp(iωT_0) as

G_F1 = b_0 [(1 + a_1 cos ωT_0) + i a_1 sin ωT_0] /
       [(1 + a_1 cos ωT_0)² + (a_1 sin ωT_0)²]   (27.3-7)

with magnitude

|G_F1| = b_0 / √[(1 + a_1 cos ωT_0)² + (a_1 sin ωT_0)²].   (27.3-8)
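The frequency response formulas (27.3-7)/(27.3-8) can be verified numerically by evaluating G_F1(z) directly on the unit circle. The choices a_1 = -exp(-T_0/T) and the unity-gain numerator b_0 = 1 + a_1 are assumptions for this sketch:

```python
import cmath
import math

# Check of Eqs. (27.3-7)/(27.3-8): direct evaluation of G_F1(z) on the
# unit circle against the closed-form magnitude. The choices
# a1 = -exp(-T0/T) and the unity-gain numerator b0 = 1 + a1 are
# assumptions for this sketch.

T0, T = 1.0, 7.5 / 4.0
a1 = -math.exp(-T0 / T)
b0 = 1.0 + a1

def gain_direct(w):
    z = cmath.exp(1j * w * T0)
    return abs(b0 / (1.0 + a1 / z))

def gain_formula(w):
    c, s = math.cos(w * T0), math.sin(w * T0)
    return b0 / math.sqrt((1.0 + a1 * c) ** 2 + (a1 * s) ** 2)

for w in (0.1, 1.0, math.pi / T0):       # pi/T0 is the Shannon frequency
    assert abs(gain_direct(w) - gain_formula(w)) < 1e-12
print(gain_formula(math.pi / T0))        # first minimum, b0/(1 - a1)
```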
In Figure 27.3.1 the magnitudes of a discrete-time and a continuous-time
filter are shown for T_0/T = 4/7.5, with |G_F1| = 1 for ωT_0 = 0, 2π, 4π, ...
There is good agreement in the low frequency range. At ωT_0 = 1 the
difference is about 4 %. Unlike the continuous filter, the discrete
filter shows a first minimum of the magnitude at the Shannon frequency
ω_S T_0 = π with the magnitude

|G_F1| = b_0/(1 - a_1).   (27.3-9)
A second order low-pass filter

G_F(s) = 1/(1 + Ts)²   (27.3-10)

leads to the discrete-time filter

G_F(z) = (b_1 z⁻¹ + b_2 z⁻²)/(1 + a_1 z⁻¹ + a_2 z⁻²).   (27.3-11)
For noise filtering in control systems for frequencies f_g < 0.1 Hz di-
gital low-pass filters should be applied. They can filter the noise in
the range ω_g < ω < ω_S. Noise with ω > ω_S must be reduced with ana-
log filters. The design of the digital filter, of course, depends much
on the further application of the signals. In the case of noise pre-
filtering for digital control, the location of the Shannon frequency
ω_S = π/T_0 within the graph of the dynamic control factor |R(z)|, sec-
tion 11.4, is crucial, c.f. Figure 27.3.2. If ω_S lies within range III
Figure 27.3.2 Location of the Shannon frequency ω_S = π/T_0 within the
dynamic control factor |R(z)|
a) ω_S in range III (small sample time)
b) ω_S in range II (large sample time)
must be detuned. The graph of the dynamic control factor would change,
possibly leading to a loss of control performance in ranges I and II.
Any improvement that can be obtained by the low-pass filter depends
on the noise spectrum and must be analyzed in each case. The case of
Figure 27.3.2 a) arises if the sample time T_0 is relatively small and
the case of Figure 27.3.2 b) if T_0 is relatively large.
A high-pass filter follows from the continuous transfer function

G_F(s) = T_2 s/(1 + T_1 s)   (27.3-12)

with parameters given by Eq. (27.3-14). In the high frequency range
|G_F(iω)| = 0 for ωT_0 = νπ, with ν = 2, 4, ... For low frequencies the
behaviour is as for the continuous filter.

As well as the above simple low order filters, many other more complex
discrete-time filters can be designed. The reader is referred for exam-
ple to [27.1], [27.2], [2.20].
Recursive averaging
For some tasks only the current average value of the signals is of in-
terest, i.e. the very low frequency component. An example is the d.c.
value estimation in recursive parameter estimation. The following algo-
rithms can be applied.
The least squares performance function

V = Σ_{k=1}^{N} e²(k) = Σ_{k=1}^{N} [y(k) - ŝ]²   (27.3-18)

is minimized by the average

ŝ(N) = (1/N) Σ_{k=1}^{N} y(k).   (27.3-19)

This can be written recursively as

ŝ(k) = ((k-1)/k) ŝ(k-1) + (1/k) y(k)
     = ŝ(k-1) + (1/k) [y(k) - ŝ(k-1)].   (27.3-21)

The z-transfer function of this algorithm with a constant correcting
factor 1/k_1 is

G(z) = b_0/(1 + a_1 z⁻¹)   (27.3-22)

with a_1 = -(k_1 - 1)/k_1 and b_0 = 1/k_1. Hence, this algorithm is the same
as the discrete first order low-pass filter, Eq. (27.3-6).

Averaging over the last N values gives

ŝ(k) = (1/N) Σ_{i=k-N+1}^{k} y(i).   (27.3-23)

Subtraction of ŝ(k-1) gives recursive averaging with limited memory

ŝ(k) = ŝ(k-1) + (1/N) [y(k) - y(k-N)].   (27.3-24)

For averaging with a fading memory one obtains

G(z) = (1 - λ)/(1 - z⁻¹).   (27.3-29)
Note that the pole of Eq. (27.3-22) is close to one, and the poles of
Eqs. (27.3-25) and (27.3-28) are z_1 = 1.
The recursive algorithm with a constant correcting factor has the same
frequency response as a discrete low-pass filter with T_0/T = ln(1/(-a_1)).
Noise with ωT_0 > π cannot be filtered and therefore increases the va-
riance of the average estimate. The frequency response of the recursive
algorithm with limited memory becomes zero at ωT_0 = νπ/N with ν = 2, 4, 6, ...
Noise with these frequencies is eliminated completely, as with the in-
tegrating A/D converter. The magnitudes have a maximum at ωT_0 = νπ/N
with ν = 1, 3, 5, ... Therefore noise with ωT_0 > 2π/N cannot be effective-
ly filtered. The magnitude of the frequency response of the algorithm
with a fading memory is |G(iω)| ≈ (1-λ)/(T_0 ω) for low frequencies. It be-
haves like a continuous integral acting element with integration time
T = T_0/(1-λ). Because of the pole at z_1 = 1 it satisfies |G(iω)| → ∞
for ωT_0 = νπ, with ν = 2, 4, 6, ... Near the Shannon frequency ωT_0 = π
the magnitude behaves as that of the discrete low-pass algorithm. There-
fore averaging with a fading memory can only be recommended if no noise
appears for ωT_0 > π, or if used in conjunction with analog filters.
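The three averaging algorithms can be sketched side by side. The fading-memory recursion ŝ(k) = λ ŝ(k-1) + (1-λ) y(k) is an assumed standard realization, not a formula quoted from the text:

```python
# Sketch of the three averaging algorithms. The fading-memory recursion
# s(k) = lam*s(k-1) + (1-lam)*y(k) is an assumed standard realization,
# not a formula quoted from the text.
from collections import deque

def growing_mean(ys):
    s = 0.0
    for k, y in enumerate(ys, start=1):
        s += (y - s) / k               # Eq. (27.3-21)
    return s

def limited_mean(ys, N):
    window, s = deque(), 0.0
    for y in ys:
        window.append(y)
        if len(window) > N:
            s += (y - window.popleft()) / N    # Eq. (27.3-24)
        else:
            s += (y - s) / len(window)
    return s

def fading_mean(ys, lam):
    s = ys[0]
    for y in ys[1:]:
        s = lam * s + (1.0 - lam) * y
    return s

ys = [1.0, 2.0, 3.0, 4.0]
print(growing_mean(ys), limited_mean(ys, 2), fading_mean(ys, 0.5))
# 2.5 3.5 3.125
```

The growing-memory form converges to the overall mean, the limited-memory form tracks the mean of the last N samples, and the fading-memory form weights recent samples more strongly.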
Filtering of outliers
Sometimes measured values appear which are totally wrong and lie far
away from the normal values. These outliers can arise because of dis-
turbances of the sensor or of the transmission line. As they do not
correspond to a real control deviation they should be ignored by the
controller. A few methods are considered to filter these types of dis-
turbance. It is assumed that the normal signal y(k) consists of the
signal s(k) and the noise n(k), and that outliers in y(k) must be eli-
minated. The following methods can be used:
This section deals with the connection of control algorithms with va-
rious types of actuator. Therefore the way to control the actuators
and the dynamic response of the actuators are considered initially.
Actuator control
If the position U(k) (0 ... 100 %) is transmitted, the DAC requires only
one sign but a relatively large word length (8 ... 12 bits). If the change
u(k) is transmitted the DAC must have both signs, but a smaller word
length (6 ... 8 bits) is sufficient.
Response of actuators
With respect to the dynamic response the following grouping can be ma-
de:
28. Combining Control Algorithms and Actuators
[Figure: actuator interfaces addressed by actuator address and actuator command —
a) selector switch and analog holds
b) selector switch, digital memory, D/A-converter and digital holds
c) selector switch and digital memory]
Various actuator control schemes are used to adjust the actuator posi-
tion change uA(k) to the manipulated variable uR(k) required by the
control algorithm, Figure 28.2:
[Table of actuator types; legible row: pneumatic membrane actuator with spring — input: air pressure 0.2 ... 1.0 kp/cm² via D/A-converter and electr./pneumat. transmitter; time behaviour: proportional with time lag; power 0.01 ... 200 mkp; rising time 1 ... 10 sec]
Scheme a) is the simplest, but gives no feedback of the actuator res-
ponse. Schemes b), c) and d) require position feedback. b) and c) have
the known advantages of a positioner which acts such that the required
position is really attained. c) requires in general a smaller sample
time in comparison with that of the process, which is an additional bur-
den on the CPU. Scheme d) avoids the use of a special position control
algorithm. The calculation of u(k) is based on the real positions of
the actuator. This is an advantage with integral acting control algo-
rithms if the lower or the upper position constraint is reached. Then
no wind-up of the manipulated variable occurs.
Proportional actuators
For the proportional actuators of group I the change of the manipulated
variable calculated by the control algorithm can be used directly to
control the actuator, as in Figure 28.2 a). In the case of actuator
position feedback control the schemes of Figure 28.2 b) and d) are appli-
cable. Figure 28.3 indicates the symbols used.
or
(T_0/T) z⁻¹/(1 - z⁻¹)
Figure 28.2 Various possibilities for actuator control
(shown for an analog controlled actuator)
a) feedforward position control
b) analog feedback position control
c) digital feedback position control
d) position feedback into the control algorithm
[Figure 28.3: actuator characteristic u_A(u_R) with limits u_Amin, u_Amax and operating point (u_R∞, u_A∞)]
(T_0/T) z⁻¹/(1 - z⁻¹)   (28-2)

The actuator then becomes part of the controller. Its integration time
T must be taken into account when determining the controller parameters.
(Note for the mathematical treatment that no sampler follows the actuator.)
the actuator speed must be increased and the switch durations TA < T0
must be calculated and stored. To save computing time in the CPU this
is often performed in special actuator control devices [1.11]. This
actuator control device can also be interpreted as a special D/A-con-
verter outputting rectangular pulses with quantized duration TA. Figure
28.4 shows a simplified block diagram of the transfer behaviour of the
actuator-control device and the actuator.
∫_0^{T_A} u̇_A(t) dt = Δu_A = u_Amax (T_A/T) u_R0/|u_R0|.   (28-5)
from this nonlinearity a stable steady state can be attained, c.f.
[5.14, chapter 52]. To generate the position changes u(k) calculated
by the control algorithm, the actuator control device has to produce
pulses with amplitudes u_R0, 0, -u_R0 and the switch duration T_A(k), i.e.
pulse modulated ternary signals, see Figure 28.4. This introduces a
further nonlinearity. The smallest realizable switch duration T_A0 de-
termines the quantization unit ΔA of the actuator position

ΔA = u_Amax (T_A0/T).   (28-6)

Hence only position changes   (28-8)
with quantization unit ΔA can be realized within one sample time. They
result in the ramps shown in Figure 28.4.
The rampwise step responses of the actuator with three-way switch and
control device can be described approximately by first order time lags
with the amplitude dependent time constant   (28-9)

If these time constants are negligible compared with the process time
constants, a proportional action element without lag can be assumed and
therefore a linearized actuator. Process model simplification by the
neglect of small time constants was investigated in [3.4], [3.5].
For the case of continuous-time PID-controllers, small time constants
T_sm can be neglected for processes with equal time constants T of order
n = 2, 4, 6 or 8, assuming an error of ≤ 20 % of the quadratic performan-
ce index Eq. (4-1) for r = 0, if   (28-10)

where T_Σ = nT is the sum of time constants, c.f. section 3.7.3. Eqs.
(28-9) and (28-10) give position changes for which the actuator can be
linearized.   (28-11)
Note for the application of this rule that the sample time has to be
chosen suitably such that ω_max = ω_S.
Avoidance of wind-up
To avoid this 'wind-up' the following actions can be taken. If the ac-
tuator reaches a constraint at u_Amax or u_Amin, these true positions
must be used in the control algorithm instead of the calculated u(k-1),
u(k-2), ... This can be triggered by end position switches, by posi-
tion feedback or, in the case of a unique relationship between the com-
puter outputs and the actuator, by position counting software. Another
possibility is the feedback of the real actuator position into the con-
trol algorithm described by Eq. (28-1).
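The last possibility can be sketched for a velocity-form PI algorithm that receives the real (constrained) actuator position as its previous output; the gains and limits are illustrative values, not taken from the book:

```python
# Sketch: the velocity-form PI algorithm receives the real (constrained)
# actuator position as u_prev, so its past outputs cannot wind up.
# Gains K, TI and the limits are illustrative values.

def pi_step(e_now, e_last, u_prev, K=1.0, TI=5.0, T0=1.0,
            u_min=0.0, u_max=1.0):
    """One step: u(k) = u(k-1) + K*(e(k)-e(k-1)) + (K*T0/TI)*e(k),
    clamped to the range the actuator can actually reach."""
    du = K * (e_now - e_last) + (K * T0 / TI) * e_now
    return min(max(u_prev + du, u_min), u_max)

# A large constant error drives the output to the constraint u_max and
# no further; when the error reverses, the output reacts immediately.
u = 0.0
for _ in range(50):
    u = pi_step(10.0, 10.0, u)
print(u)          # saturated at u_max
u = pi_step(-10.0, 10.0, u)
print(u)          # back at u_min after one step, no wind-up to unwind
```

Because the clamped (real) position is what enters the next step, there is no accumulated internal sum that must first be "unwound" when the error changes sign.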
29. Computer Aided Control Algorithm Design

Based on the process models, the computer aided control algorithm de-
sign may be organized as follows, if a computer is used in on-line
operation:
Figure 29.2 Closed loop behaviour for an analog simulated process and
CADCA designed control algorithms. Process: G_P(s) = 1.2/
[(1+4.2s)(1+1s)(1+0.9s)], T_0 = 2 sec.
a) process input and output during on-line identification
b) process input and output for 4 control algorithms and
with a step change of the reference value
30. Case Studies of Identification and Digital
Control
In the first case the process model is identified once only and a con-
stant (fixed) control algorithm is designed (on-line or off-line), c.f.
chapter 29. For the second case the process model is identified sequen-
tially and the control algorithm is designed after each identification
(on-line), c.f. chapter 25. Sections 30.1 and 30.2 demonstrate the ap-
plication of ID-CAD to a heat exchanger and a rotary drier, and sec-
tion 30.3 shows the application of ID-CAD and SOAC to a simulated
steam-generator.
[Figure: schematic of the heat exchanger — steam valve, condensate pipe, water valve, cold water inlet, warm water outlet]

Figure 30.1.2 Process input and output signal during on-line identifi-
cation. PRBS: period N = 31. Clock time λ = 1. Sample
time T_0 = 3 sec.
Figure 30.1.3 a) Closed loop response with CADCA designed control algorithms
based on an identified process model. Reference variable steps in both
directions. Controllers: parameter-optimized controller 3PC-3 (PID);
state controller with observer SC.
Figure 30.1.3 b) Closed loop response with CADCA designed control algorithms
based on an identified process model. Reference variable steps in both
directions. (—— measured response, ····· simulated response during design phase)
         q0       q1       q2       q3       q4      q5     | p0      p1      p2       p3       p4       p5
DB(ν)    -8.4448  10.4119  -4.0384  1.0775   0.0000  -      | 1.0000  0.0000  -0.2317  -0.5841  -0.1842  -
DB(ν+1)  -3.8612  0.1770   3.8048   -1.6993  0.5848  0.0000 | 1.0000  0.0000  -0.1059  -0.3928  -0.4013  -0.100
Figure 30.2.1 shows the schematic of a rotary dryer. The oven is hea-
ted by oil. Flue gases from a steam boiler are mixed with the combus-
tion gases to cool parts of the oven. An exhaust fan sucks the gases
through the dryer. The wet pulp (pressed pulp with about 75-85 % mois-
ture content) is fed in by a screw conveyor with variable speed. The
drum is fitted inside with cross-shaped shelves so as to distribute
the pulp within the drum. Because of the rotation of the drum (about
1.5 rpm) the pulp drops from one shelf to another. At the end of the
drum another screw conveyor transports the dried pulp to an elevator.
The heat transmission is performed mainly by convection. Three sections
of drying can be distinguished. In the first section the evaporation
of water takes place at the surface of the pulp, in the second section
the evaporation zone changes to the inner parts of the cossettes and in
the third section the vapour pressure within the cossettes becomes less
than the saturated vapour pressure because of the pulp's hygroscopic
properties.
The goal was to improve the control performance using a computer. Be-
cause of the complicated dynamical behaviour and the large settling
time, computer aided design of the control system was preferred. The
required mathematical models of the plant cannot be obtained by theo-
retical model building as the knowledge of the physical laws descri-
bing the heat and mass transfer and the pulp motion is insufficient.
Therefore a better way is process identification. Because of the strong
disturbances step response measurements give hardly any information on
the process dynamics. Hence parameter estimation using PRBS as input
was applied [30.1], [30.2]. Special digital data processing equipment
based on a microcomputer was used, generating the PRBS, sampling and
digitizing the data for storage on a paper tape. The evaluation of the
data was performed off-line using the parameter estimation method RCOR-
LS of the program package OLID-SISO. The initial identification experi-
ments have shown that the following values are suitable:
Figure 30.2.1 Schematic of the rotary dryer (Sueddeutsche Zucker AG,
Werk Plattling). Measured variables: mass flow of the fuel oil M_F,
revolutions n of the screw conveyor, water content W_PS of the pressed
pulp, gas temperature ϑ_0 at the oven outlet, gas temperature ϑ_M in the
middle of the drum, gas temperature ϑ_A at the exhaust, dry substance
percentage ψ_TS.
drum diameter D_D = 4.6 m               oil mass flow M_Fmax ≈ 4.0 t/h
drum length L_D = 21.0 m                temperatures ϑ_0 ≈ 1050 °C
wet pulp mass flow M_PSmax ≈ 80 t/h                  ϑ_M ≈ 140-210 °C
flue gas mass flow M_KGmax ≈ 8000 Nm³/h              ϑ_A ≈ 110-155 °C
Figure 30.2.2 Block diagram of the rotary dryer

Figure 30.2.3 Data records of an identification experiment with fuel
flow changes
[Figure: measured responses of ϑ_0, ϑ_M, ϑ_A and ψ_TS compared with identified models of various orders m and dead times d]
Figure 30.2.5 Simulated control behaviour of the rotary dryer for step
changes of the screw conveyor speed of Δn = 1 rpm, mea-
suring the dry substance and the flue gas temperatures
ϑ_M and ϑ_A, T_0 = 3 min. Curves compare state control,
cascade control and the response without control.
a) without feedforward control b) with feedforward control
Figure 30.2.6 Block diagram of the cascaded control system implemented
on a process computer (feedforward controller G_F1; drying process
outputs y = Δψ_TS, y_1 = Δϑ_A, y_2 = Δϑ_M)
Figure 30.2.7 Signal records of the rotary dryer. Signals are defined
in Figure 30.2.1. MM: molasses mass flow.
a) manual control
b) digital control with cascaded control system and feedforward
controller G_F1
30.3 Digital Control of a Steam Generator
In all cases program packages based on FORTRAN were used, and these
involved between 6-16 K words of core memory and 25-60 K words of disk
memory.
b) Feedforward controllers

Figure 30.3.1
b) Responses to setpoint steps w_1(k) of the steam temperature.
Two main feedback controllers. Steam temperature controller:
PID, r = 0. Steam pressure controller: PI, r = 0.
c) Responses to setpoint steps w_2(k) of the steam pressure.
Two main feedback controllers (controllers as b)).
d) Responses to a disturbance step v(k) (steam flow). Two main
feedback controllers as in b).
e) Responses to a disturbance step v(k) (steam flow). Two main
feedback controllers as in b) and one proportional feedfor-
ward controller from steam flow v to fuel flow u_2.
[Figure: signal records for the superheater — identification, adaptive control and subsequent fixed control with the parameter-adaptive algorithms RLS-DB2 and RLS-MV3 (r = 0.0096)]
The methods discussed and results for only the superheater (SISO) and
their implementation time requirements are summarized in Table 30.3.1.
Table 30.3.1

Procedure                               Identifi-   Controller   Adapta-    Tests: setpoint  Total
                                        cation      design       tion       steps [min]      time
                                        time [min]  time [min]   time [min] w1     w2        [min]

SISO      1. Identification and c.a.d.
(super-      of control algorithms      60          15           -          20     -         95
heater)   2. Adaptive control           15          -            25         20     -         60
The numbers given may also be valid if small/medium size process dis-
turbances act on the control variables. By using parameter-adaptive
control algorithms far less time is required.
An evaluation of the results of this study and also of the other case
studies of this chapter is given in section 30.4. Specific results for
steam generator control are:
30.4 Conclusions
The case studies discussed in this chapter have shown that the follow-
ing results can be summarized concerning the different methods of iden-
tification and control.
Advantages:
- any control algorithm can be designed and evaluated
- different control schemes can be designed and compared by simulation
- off-line or on-line identification methods can be used
- general open loop model obtained
Disadvantages:
- process should be time invariant during identification and design
time
- requires more time than adaptive algorithms
Therefore this method should be used for the basic design of the con-
trol scheme and for the design of fixed control or feedforward adap-
tive control if the process is time invariant.
Advantages:
- requires less time than identification and c.a.d.
- can be used for slowly time varying processes
Disadvantages:
- control algorithms may have to satisfy closed loop identifiability
conditions
- not small computational effort for identification and controller design
required
- special closed loop model obtained
Therefore parameter-adaptive control algorithms should be used if the
process is slowly time varying and the available design time is small.
These algorithms may be used for tuning of fixed control algorithms
or for self-adaptive control.
Appendix
Various 'test processes' have been used in this book to simulate the
typical dynamical behaviour of processes in order to test control sys-
tems with various control algorithms, identification and parameter es-
timation methods and adaptive control algorithms. These test processes
are models of processes with various pole-zero configurations and dead
times, and were chosen with regard to several viewpoints. The discrete-
time transfer functions G(z) were determined by z-transformation from
the continuous-time transfer function G(s) with a zero-order hold, c.
f. Eq. (3.4-10), if not otherwise indicated.
G_I(z) = (b_1 z⁻¹ + b_2 z⁻²)/(1 + a_1 z⁻¹ + a_2 z⁻²)

a_1 = -1.5; a_2 = 0.75; T_0 = 2 sec.

G_II with T_1 = 4 sec; T_2 = 10 sec:

G_II(z) = (b_1 z⁻¹ + b_2 z⁻²)/(1 + a_1 z⁻¹ + a_2 z⁻²), T_0 = 2 sec
G_III(s) = K(1 + T_4 s) e^(-T_t s) / [(1 + T_1 s)(1 + T_2 s)(1 + T_3 s)]

T_1 = 10 sec; T_2 = 7 sec; T_3 = 3 sec; T_4 = 2 sec; T_t = 4 sec.

G_III(z) = (b_1 z⁻¹ + b_2 z⁻² + b_3 z⁻³) z^(-d) / (1 + a_1 z⁻¹ + a_2 z⁻² + a_3 z⁻³)
T_0 = 40 sec:

G_11(s) = 0.96/[695s(1+15s)] [bar/%]

G_12(s) = 0.0605/695s [bar/%]

T_0 = 20 sec:

G_11(z) = (0.01237 z⁻¹ + 0.00798 z⁻²)/(1 - 1.264 z⁻¹ + 0.264 z⁻²)

G_12(z) = 0.001741 z⁻¹/(1 - z⁻¹)
T_0 = 4 sec:

a_1 = -1.7063; a_2 = 0.9580; a_3 = -0.1767
b_1 = 0.0186; b_2 = 0.0486; b_3 = 0.0078

For T_0 = 2; 6; 8; 10; 12 sec see Table 3.7.1.
(b_1 z⁻¹ + b_2 z⁻²)/(1 + a_1 z⁻¹ + a_2 z⁻²)

T_0 = 4 sec:

a_1 = -1.036; a_2 = 0.2636
b_1 = 0.1387; b_2 = 0.0889
G_VIII(s) = K/[(1 + T_1 s)(1 + T_2 s)]

K = 1; T_1 = 10 sec; T_2 = 5 sec.

G_VIII(z) = (b_1 z⁻¹ + b_2 z⁻²)/(1 + a_1 z⁻¹ + a_2 z⁻²)

T_0 = 4 sec:

a_1 = -1.1197; a_2 = 0.3012
b_1 = 0.1087; b_2 = 0.0729
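The listed coefficients of G_VIII(z) can be checked against the continuous step response, which with a zero-order hold must agree with the discrete step response at the sampling instants:

```python
import math

# Check of the listed coefficients: with a zero-order hold the discrete
# step response of G_VIII(z) must equal the continuous step response
# h(t) = 1 - 2*exp(-t/10) + exp(-t/5) at the sampling instants t = k*T0.

T0 = 4.0
a1, a2 = -1.1197, 0.3012
b1, b2 = 0.1087, 0.0729

def h(t):
    return 1.0 - 2.0 * math.exp(-t / 10.0) + math.exp(-t / 5.0)

y = [0.0, b1]                     # y(0) = 0, y(1) = b1 for a unit step
for k in range(2, 12):
    # y(k) = -a1 y(k-1) - a2 y(k-2) + b1 u(k-1) + b2 u(k-2), u(k) = 1
    y.append(-a1 * y[-1] - a2 * y[-2] + b1 + b2)

err = max(abs(yk - h(k * T0)) for k, yk in enumerate(y))
print(err)   # small: only the rounding of the printed coefficients
```

The residual error stems solely from the four-digit rounding of the printed coefficients.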
The derivative of a vector x with respect to a parameter vector a is
defined not for x itself but only for its transpose x^T. This results in

  ∂x^T/∂a = | ∂x_1/∂a_1  ∂x_2/∂a_1  ...  |
            | ...                        |
            | ∂x_1/∂a_n  ∂x_2/∂a_n  ...  |

For the scalar product v^T w of two vectors v and w which depend on the
parameters a_i the product rule gives

  ∂/∂a [v^T w] = (∂v^T/∂a) w + (∂w^T/∂a) v

    = | ∂v_1/∂a_1 w_1 + ... + ∂v_p/∂a_1 w_p + ∂w_1/∂a_1 v_1 + ... + ∂w_p/∂a_1 v_p |
      | ...                                                                       |
      | ∂v_1/∂a_n w_1 + ... + ∂v_p/∂a_n w_p + ∂w_1/∂a_n v_1 + ... + ∂w_p/∂a_n v_p |

If the elements of v do not depend on the parameters a_i,

  ∂/∂a [v^T w] = (∂w^T/∂a) v.

If, on the other hand, the elements of w do not depend on the parameters
a_i and v = a,

  ∂/∂a [a^T w] = w.

The above pair of equations is also valid for the matrices V and W in-
stead of the vectors v and w, so that

  ∂/∂a [a^T W] = W.

Furthermore

  d/dx [x^T A y] = A y
  d/dy [x^T A y] = A^T x
  d/dx [x^T A x] = 2 A x,   A symmetrical.
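The last three rules can be spot-checked with central finite differences; the symmetric matrix A and the vectors are illustrative:

```python
# Numerical spot check of the last three differentiation rules with
# central finite differences; the symmetric matrix A and the vectors
# are illustrative.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def grad(f, x, h=1e-6):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

A = [[2.0, 1.0], [1.0, 3.0]]               # symmetrical
x = [0.7, -1.2]
y = [0.4, 2.0]

g1 = grad(lambda v: dot(v, matvec(A, y)), x)   # d/dx [x'Ay] = Ay
g2 = grad(lambda v: dot(v, matvec(A, v)), x)   # d/dx [x'Ax] = 2Ax
print(g1, matvec(A, y))
print(g2, [2.0 * c for c in matvec(A, x)])
```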
Table of z-Transforms
The following table contains some frequently used time functions x(t),
their Laplace transforms x(s) and z-transforms x(z). The sample time
is T_0. More functions can be found in [2.15], [2.19], [2.21], [2.11],
[2.13], [2.14].
x(t)                x(s)                    x(z)

1(t)                1/s                     z/(z-1)
t                   1/s²                    T_0 z/(z-1)²
t²/2                1/s³                    T_0² z(z+1)/[2(z-1)³]
e^(-at)             1/(s+a)                 z/(z - e^(-aT_0))
t e^(-at)           1/(s+a)²                T_0 z e^(-aT_0)/(z - e^(-aT_0))²
t² e^(-at)          2/(s+a)³                T_0² z e^(-aT_0)(z + e^(-aT_0))/(z - e^(-aT_0))³
1 - e^(-at)         a/[s(s+a)]              (1 - e^(-aT_0)) z/[(z-1)(z - e^(-aT_0))]
sin ω_1 t           ω_1/(s² + ω_1²)         z sin ω_1T_0/(z² - 2z cos ω_1T_0 + 1)
cos ω_1 t           s/(s² + ω_1²)           z(z - cos ω_1T_0)/(z² - 2z cos ω_1T_0 + 1)
e^(-at) sin ω_1 t   ω_1/[(s+a)² + ω_1²]     z e^(-aT_0) sin ω_1T_0/(z² - 2z e^(-aT_0) cos ω_1T_0 + e^(-2aT_0))
e^(-at) cos ω_1 t   (s+a)/[(s+a)² + ω_1²]   z(z - e^(-aT_0) cos ω_1T_0)/(z² - 2z e^(-aT_0) cos ω_1T_0 + e^(-2aT_0))
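A table entry can be spot-checked numerically, e.g. Z{e^(-at)} = z/(z - e^(-aT_0)) via its defining series:

```python
import math

# Spot check of the table entry Z{e^(-at)} = z/(z - e^(-a*T0)):
# evaluate the defining series sum_{k>=0} e^(-a k T0) z^(-k) at a
# real z with |z| > 1 and compare with the closed form.

a, T0, z = 0.5, 1.0, 2.0
series = sum(math.exp(-a * k * T0) * z ** (-k) for k in range(200))
closed = z / (z - math.exp(-a * T0))
print(series, closed)
```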
Literature
[1.1] Thompson, A.: Operating experience with direct digital control.
IFAC/IFIP Conference on Application of Digital Computers for
Process control, Stockholm 1964, New York: Pergamon Press.
[1.2] Giusti, A.L., Otto, R.E. and Williams, T.J.: Direct digital
computer control. Control Engineering 9 (1962), 104-108.
[1.9] Lee, T.H., Adams, G.E. and Gaines, W.M.: Computer process con-
trol: modeling and optimization. New York: Wiley (1968).
[2.3] Jury, E.I.: Sampled-data control systems. New York: John Wiley
(1958).
[2.7] Tou, J.T.: Digital and sampled-data control systems. New York:
McGraw-Hill (1959).
[2.12] Zypkin, J.S.: Sampling systems theory. New York: Pergamon Press
(1964).
[2.18] Cadzow, J.A. and Martens, H.R.: Discrete-time and computer con-
trol systems. Englewood-Cliffs, N.J.: Prentice Hall (1970).
[3.2] Koepcke, R.W.: On the control of linear systems with pure time
delays. Trans. ASME (1965), 74-80.
[3.6] Campbell, D.P.: Process dynamics. New York: John Wiley (1958).
[3.14] Wilson, R.G., Fisher, D.G. and Seborg, D.E.: Model reduction
for discrete-time dynamic systems. Int. J. Control (1972),
549-558.
[5.1] Bernard, J.W. and Cashen, J.F.: Direct digital control. Instr.
and Contr. Systems 38 (1965) Nr. 9, 151-158.
[5.2] Cox, J.B., Williams, L.J., Banks, R.S. and Kirk, G.J.: A prac-
tical spectrum of DDC chemical process control algorithms.
ISA-Journal 13 (1966) Nr. 10, 65-72.
[5.7] Isermann, R., Bux, D., Blessing, P. and Kneppo, P.: Regel-
und Steueralgorithmen für die digitale Regelung mit Prozeß-
rechnern - Synthese, Simulation, Vergleich. PDV-Bericht Nr. 54,
KFK-PDV, Karlsruhe: Gesellschaft für Kernforschung (1975).
[5.8] Rovira, A.A., Murrill, P.W. and Smith, C.L.: Modified PI algo-
rithm for digital control. Instruments and Control Systems,
Aug. (1970), 101-102.
[5.9] Isermann, R., Bamberger, W., Baur, u., Kneppo, P. and Siebert,
H.: Comparison and evaluation of six on-line identification
methods with three simulated processes. IFAC-Symposium on Iden-
tification, Den Haag 1973, and IFAC-Automatica 10 (1974), 81-103.
[5.12] Beck, M.S. and Wainwright, N.: Direct digital control of che-
mical processes. Control (1968) Part 5, 53-56.
[5.15] Lopez, A.M., Murrill, P.W. and Smith, C.L.: Tuning PI- and
PID-digital controllers. Instrum. and Control Systems 42
(1969), 89-95.
[8.3] Athans, M. and Falb, P.L.: Optimal control. New York: McGraw-Hill (1966).
[9.2] Smith, O.J.M.: Closer control of loops with dead time. Chem. Engng.
Progr. 53 (1957) Nr. 5, 217-219.
[9.3] Smith, O.J.M.: Feedback control systems. New York: McGraw-Hill (1958).
[9.5] Giloi, W.: Zur Theorie und Verwirklichung einer Regelung für
Laufzeitstrecken nach dem Prinzip der ergänzenden Rückführung. Diss. Univ.
Stuttgart (1959).
[10.8] Anderson, B.D.O. and Moore, J.B.: Linear optimal control. En-
glewood Cliffs: Prentice-Hall (1971).
[12.8] Bendat, J.S. and Piersol, A.G.: Random data: analysis and mea-
surement procedures. New York: Wiley Interscience (1971).
[12.9] Box, G.E.P. and Jenkins, G.M.: Time series analysis, fore-
casting and control. San Francisco: Holden Day (1970).
[15.2] Sage, A.P. and Melsa, J.L.: Estimation theory with applications
to communications and control. New York: McGraw-Hill (1971).
[15.3] Nahi, N.E.: Estimation theory and applications. New York: John
Wiley (1969).
[15.5] Kalman, R.E. and Bucy, R.S.: New results in linear filtering
and prediction theory. Trans. ASME, Series D. 83 (1961), 95-108.
[15.7] Bryson, A.E. and Ho, Y.C.: Applied optimal control. Waltham: Ginn
(Blaisdell) (1969).
[18.6] Isermann, R., Baur, U. and Blessing, P.: Test case C for the
comparison of different identification methods. Boston: Proc.
of the 6th IFAC-Congress (1975).
[21.1] Falb, P.L. and Wolovich, W.A.: Decoupling in the design and
synthesis of multivariable control systems. IEEE Trans. AC 12
(1967), 651-659.
[22.6] Mishkin, E. and Braun, L.: Adaptive control systems. New York:
McGraw-Hill (1961).
[22.8] Mendel, J.M. and Fu, K.S.: Adaptive, learning and pattern re-
cognition systems. New York: Academic Press (1970).
[22.11] Maslov, E.P. and Osovskii, L.M.: Adaptive control systems with
models. Automation and Remote Control 27 (1966), 1116.
[22.15] Saridis, G.N., Mendel, J.M. and Nicolic, Z.J.: Report on de-
finitions of self-organizing control processes and learning
systems. IEEE Control System Soc. Newsletters (1973) Nr. 48,
8-13.
[22.16] Gibson, J.: Nonlinear automatic control. New York: McGraw-Hill (1962).
[23.3] Young, P.C.: The use of linear regression and related proce-
dures for the identification of dynamic processes. Proc. 7th
IEEE Symp. on Adaptive Processes. New York: IEEE (1968).
[23.9] Isermann, R., Baur, U., Bamberger, W., Kneppo, P. and Siebert,
H.: Comparison of six on-line identification and parameter
estimation methods. IFAC-Automatica 10 (1974), 81-103.
[23.14] Hannan, E.J.: Multiple time series. New York: J. Wiley (1970).
[23.19] Peterka, V.: A square root filter for real time multivariate
regression. Kybernetika 11 (1975), 53-67.
[23.21] Ljung, L., Morf, M. and Falconer, D.: Fast calculation of gain
matrices for recursive estimation schemes. Int. J. Control 27
(1978), 1-19.
[23.22] Mancher, H.: Vergleich verschiedener Rekursionsalgorithmen für die
Methode der kleinsten Quadrate. Technische Hochschule Darmstadt: Diploma
thesis, Institut für Regelungstechnik (1980).
[24.3] Kurz, H. and Isermann, R.: Methods for on-line process iden-
tification in closed loop. 6th IFAC-Congress, Boston (1975).
[25.4] Gunckel, T.L. and Franklin, G.F.: A general solution for linear
sampled-data control. J. bas. Engng. 85 (1963), 197.
[25.13] Clarke, D.W. and Gawthrop, P.J.: Self tuning controller. Proc.
IEE 122 (1975), 929-934.
[25.16] Kurz, H.: Digitale adaptive Regelung auf der Grundlage rekursiver
Parameterschätzung. Dissertation Technische Hochschule Darmstadt. Karlsruhe:
Ges. f. Kernforschung, Bericht KFK-PDV 188 (1980).
[25.17] Ljung, L.: On positive real transfer functions and the conver-
gence of some recursions. IEEE Trans. AC-22 (1977), 539.
[25.19] Clarke, D.W. and Gawthrop, P.J.: Self tuning control. Proc.
IEE 126 (1979), 633-640.
[26.3] Knowles, J.B. and Edwards, R.: Effect of a finite word length
computer in a sampled-data-feedback system. Proc. IEE, Vol.
112 (1965), 1197-1207 and 2376-2384.
[26.6] Scheel, K.H.: Der Einfluß des Rundungsfehlers beim Einsatz des
Prozeßrechners. Regelungstechnik 19 (1971), 326, 329-331 and 389-392.
[26.7] Blackman, R.B.: Linear data-smoothing and prediction in theory
and practice. Reading, Mass.: Addison-Wesley (1965).
[27.6] Schenk, Ch. and Tietze, U.: Aktive Filter. Elektronik 19 (1970).
[30.2] Mann, W.: Digital control of a rotary drier in the sugar industry.
6th IFAC/IFIP Conference on Digital Computer Applications, Düsseldorf (1980).
[30.3] Mosel, P., Feuerstein, E., Peters, P. and Scholze, G.: Führung einer
Trommeltrockneranlage für Preßschnitzel mit einem Prozeßrechner. Zuckerind.
105 (1980), 554-561.
[30.4] Isermann, R.: Digital control methods for power station plants
based on identified process models. IFAC Symposium on Automatic
Control in Power Generation, Distribution and Protection
Pretoria (1980).
List of Abbreviations and Symbols
Symbols
a, b    parameters of the difference equations of the process
c, d    parameters of the difference equations of stochastic signals
d       dead time d = Tt/T0 = 1, 2, ...
e       control deviation e = w - y (also ew = w - y); or equation error
        for parameter estimation; or the number e = 2.71828...
f       frequency, f = 1/Tp (Tp period); or parameter
g       impulse response (weighting function)
h       parameter
i       integer; or index; or i² = -1
k       discrete time unit k = t/T0 = 0, 1, 2, ...
l       integer; or parameter
m       order of the polynomials A( ), B( ), C( ), D( )
n       disturbance signal (noise)
p       parameters of the difference equation of the controller; or integer
p( )    probability distribution
q       parameters of the difference equation of the controller
r       weighting factor of the manipulated variable; or integer
s       variable of the Laplace transform s = a + iω; or signal
t       continuous time
u       input signal of the process, manipulated variable u(k) = U(k) - U∞
v       nonmeasurable, virtual disturbance signal
w       reference value, command variable, setpoint w(k) = W(k) - W∞
x       state variable
y       output signal of the process, controlled variable y(k) = Y(k) - Y∞
z       variable of the z-transformation z = e^{T0 s}
a, b    parameters of the differential equations of the process
b       control vector
c       output vector
k       parameter vector of the state controller
n       noise vector (r×1)
u       input vector (p×1)
v       noise vector (p×1)
w       reference variable vector (r×1)
x       state variable vector (m×1)
y       output vector (r×1)
α       coefficient
β       coefficient
γ       coefficient; or state variable of the reference variable model
δ       deviation; or error
ε       coefficient
ζ       state variable of the noise model
η       state variable of the noise model; or noise/signal ratio
κ       coupling factor; or stochastic control factor
λ       standard deviation of the noise v(k)
μ       order of P(z)
ν       order of Q(z); or state variable of the reference variable model
π       3.14159...
σ       standard deviation, σ² variance; or related Laplace variable
τ       time shift
ω       angular frequency ω = 2π/TP (TP period)
ẋ       = dx/dt
ẍ       = d²x/dt²
x₀      exact quantity
x̂, Δx   estimated or observed variable
x̃       = x̂ - x₀ estimation error
x̄       average
x∞      value in steady state
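The definition z = e^{T0 s} in the symbol list above connects the s- and
z-planes: a stable continuous-time pole (negative real part) maps inside the
unit circle. A minimal numerical sketch, with sample time and pole locations
chosen arbitrarily for illustration:

```python
import cmath

# z = exp(T0 * s): the symbol list's relation between the Laplace variable s
# and the z-transform variable z.
T0 = 0.2                      # sample time, arbitrary illustrative value
s_stable = -1.0 + 5.0j        # stable s-plane pole (negative real part)
s_unstable = 0.5 + 5.0j       # unstable s-plane pole (positive real part)

z_stable = cmath.exp(T0 * s_stable)
z_unstable = cmath.exp(T0 * s_unstable)

# Stable poles land inside the unit circle, unstable poles outside
print(abs(z_stable) < 1, abs(z_unstable) > 1)
```

Since |e^{T0 s}| = e^{T0 Re{s}}, the magnitude of z depends only on the real
part of s, which is why the imaginary axis maps onto the unit circle.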
Mathematical abbreviations
E { } expectation of a stochastic variable
var [ ] variance
cov [ ] covariance
dim dimension, number of elements
tr trace of a matrix: sum of diagonal elements
adj adjoint
det determinant
Indices
P process
Pu process with input u
Pv process with input v
R or C feedback controller, feedback control algorithm, regulator
S or C feedforward controller, feedforward control algorithm
0 exact value
∞ steady state, d.c. value
Other abbreviations