
Rolf Isermann

Digital Control Systems
With 159 Figures

Springer-Verlag Berlin Heidelberg GmbH 1981


Prof. Dr.-Ing. ROLF ISERMANN
Technische Hochschule Darmstadt
Schloßgraben 1
D-6100 Darmstadt

Revised and enlarged edition of the German book "Digitale Regelsysteme", 1977, translated by the author in cooperation with Dr. D. W. Clarke, Oxford, U.K.

ISBN 978-3-662-02321-1 ISBN 978-3-662-02319-8 (eBook)


DOI 10.1007/978-3-662-02319-8

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned,
specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine
or similar means, and storage in data banks. Under § 54 of the German Copyright Law where copies are made for
other than private use a fee is payable to 'Verwertungsgesellschaft Wort', Munich.

© by Springer-Verlag Berlin Heidelberg 1981


Originally published by Springer-Verlag Berlin Heidelberg New York in 1981
Softcover reprint of the hardcover 1st edition 1981

Library of Congress Cataloging in Publication Data


Isermann, Rolf.
Digital control systems.
Rev. and enl. translation of Digitale Regelsysteme.
Bibliography: p.
Includes index.
1. Digital control systems. I. Title.
TJ213.I64713 629.8'312 81-5599
AACR2

The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific
statement, that such names are exempt from the relevant protective laws and regulations and therefore free for
general use.

2061/3020-543210
Preface

The great advances made in large-scale integration of semiconductors, the resulting cost-effective digital processors and data storage devices, and the development of suitable programming techniques are all having an increasing influence on the techniques of measurement and control and on automation in general. The application of digital techniques to process automation started in about 1960, when the first process computer was installed. From about 1970 computers have become standard equipment for the automation of industrial processes, connected on-line in open or closed loop. The annual increase of installed process computers in the last decade was about 20-30 %. The cost of hardware has shown a tendency to decrease, whereas the relative cost of user software has tended to increase. Because of the relatively high total cost, the first phase of digital computer application to process control is characterized by the centralization of many functions in a single (though sometimes in several) process computer. Such centralization does not permit full utilization of the many advantages of digital signal processing and rapid economic pay-off, as analog back-up systems or parallel standby computers must often be provided to cover possible breakdowns in the central computer.

In 1971 the first microprocessors were marketed which, together with large-scale integrated semiconductor memory units and input/output modules, can be assembled into more cost-effective process microcomputers. These process microcomputers differ from the larger computers in the adaptability of their hardware and software to specialized, less comprehensive tasks. In addition, up to now microprocessors have mostly had a shorter word length, slower operating speed and smaller operational software systems with fewer instructions. They find many applications, resulting in high-volume production and hence lower hardware costs.

Decentralized automated systems can now be built with these process microcomputers. Tasks that were until now carried out centrally by one process computer are now delegated to various process microcomputers. Together with digital buses or ring networks and, eventually, superimposed process computers, many different hierarchically arranged automation structures can be built that can be adapted to the respective process. This avoids heavy computing loads, the need for comprehensive and complex user software and the higher susceptibility to computer breakdowns, all of which are prevalent with centralized machines. In addition, decentralized systems are easier to put into operation step by step, can be provided with mutual redundancy (lower susceptibility to malfunctions), and can lead to savings in wiring, etc. The second phase of digital signal processing which is now beginning to take shape is thus characterized by decentralization.

Besides their use as substations in decentralized automation systems, process microcomputers have found increasing application in individual instruments used for measurement and control. Digital controllers and user-programmable sequence control systems, based on the use of microprocessors, have been on the market since 1975. Digital controllers can replace several analog controllers. They usually require an analog-digital converter at the input, because of the wide use of analog sensors, transducers and signal transmission, and a digital-analog converter at the output to drive actuators designed for analog techniques. It is possible that, in the long run, digitalization will extend to sensors and actuators. This would not only save a-d and d-a converters but would also circumvent certain noise problems, permit the use of sensors with digital output and the reprocessing of signals in digital measuring transducers (for example choice of measurement range, correction of nonlinear characteristics, automatic failure detection, etc.). Actuators with digital control are, for instance, electrical stepping motors. Digital controllers can replace not just one or several analog controllers, however, but can perform additional functions previously exercised by other devices, or new functions. Additional functions include programmed sequence control of setpoints, automatic switching between various controlled and manipulated variables, feedforward adjustment of controller parameters as functions of the operating point, additional monitoring of limit values, etc. Examples of new functions are: communication with other digital controllers, mutual redundancy, automatic failure detection and failure diagnosis, and the possibility of choosing between different control algorithms, in particular adaptive control algorithms. Entire control systems, such as cascade control systems, multivariable control systems with coupling controllers, and control systems with feedforward control, which can easily be changed via software at commissioning time or later, can be effected with a digital controller. Finally, very large ranges of the controller parameters and the sample time can be realized. It can without doubt be assumed, therefore, that digital techniques for measurement and control will take their place alongside the proven analog techniques.

As compared with analog control systems, control systems using process computers or process microcomputers have the following characteristics:

- Feedforward and feedback control functions are realized in the form of programmed algorithms (software or firmware).
- Sampled (discrete-time) signals.
- Discrete-amplitude signals, through the finite word lengths in a-d converters, the central processor unit and d-a converters.

Because of the great flexibility of control algorithms in the form of software one is not limited, as with analog control systems, to standardized modules with P-, I- and D-behaviour, but one can further use more sophisticated algorithms which employ modern design methods for sampled-data control based on mathematical process models. Several books have been published dealing with the theoretical treatment and synthesis of linear sampled-data control, based on difference equations, vector difference equations and the z-transform. So far no in-depth study is available in which the various methods of designing sampled-data controls have been surveyed, compared and presented so that they can be used immediately for designing control algorithms for various types of process. Among other things one must consider the form and accuracy of mathematical process models obtainable in practice, the computational effort in the design, and the properties of the resulting control algorithms, such as the relationship between control performance and the manipulation effort, the behaviour for various processes and various disturbance signals, and the sensitivity to changes in process behaviour. Finally, the changes being effected in control behaviour through sampling and amplitude quantization, as compared with analog control, must also be studied.

This book is directed to engineers in industry and research and to students of engineering who are familiar with the basic design techniques for analog linear control systems and who want to be introduced to the basic theory and application of digital control systems. It is true that at first a certain familiarity is necessary for dealing with linear sampled-data systems. However, this can be acquired with the aid of chapter 3, which gives a short introduction to discrete-time systems by concentrating on the basic relationships required by the practising engineer. Based on these fundamental equations of discrete-time systems, suitable methods of control system design have been chosen, modified and further developed.

The sequel considers parameter-optimized, cancellation and deadbeat control algorithms resulting from classical control theory, as well as the state control algorithms and minimum variance control algorithms derived from modern control theory, based on state-space representation and parametric stochastic process/signal models. In order to investigate the behaviour of the various feedforward and feedback control algorithms, many of the algorithms involved (and the resulting closed loops) were simulated on digital computers, designed with the aid of process computers using program packages, and tried out in practice in on-line operation with analog-simulated processes, pilot processes and industrial processes. Part of the book is dedicated to on-line identification algorithms and to self-optimizing digital adaptive control systems and their application. More practical aspects, such as noise filtering and actuator control, are also considered. It turns out that the synthesis of discrete-time control systems is often simple if mathematical models of the processes are available, and that it is advantageous to use the digital processor itself for obtaining the process models and for designing its control algorithms. Compared with analog control systems described by differential equations, the difference equations describing discrete-time control systems are easier to handle and to program. The book is written such that certain chapters can be omitted at a first reading. Therefore extensive referencing to other chapters is made and sometimes short repetitions are inserted.

Many of the methods, developments and results have been worked out in a research project funded by the Bundesminister für Forschung und Technologie (DV 5.505) within the project 'Prozesslenkung mit DV-Anlagen (PDV)' and in research projects funded by the Deutsche Forschungsgemeinschaft in the Federal Republic of Germany.

The author is also grateful to his coworkers - who had a significant share in the generation of the results through several years of joint effort - for their energetic support in the calculation of examples, assembly of program packages, simulations on digital computers and on-line computers, practical trials with various processes and, finally, proofreading.
The first edition of this book was published in 1977 in German with the title 'Digitale Regelsysteme'. This book is a translation of the first edition and contains several supplements. To ease the introduction into the basic mathematical treatment of linear sampled-data systems, chapter 3 has been extended. The design of multivariable control systems has been supplemented by section 18.1.5, chapter 20 (matrix polynomial approach) and sections 21.2 to 21.4 (state control approach). The old chapters 21 and 22 on signal filtering and state estimation have been moved to chapters 27 and 15. Therefore all chapters from 22 onwards have a number one less than in the first edition. Because of the progress in recursive parameter estimation, section 23.8 has been added. Chapter 25 has been extended considerably, taking into account the recent developments in parameter-adaptive control. Finally, chapter 30 on case studies of digital control has been added.

The author is very grateful to Dr. David W. Clarke, University of Oxford, U.K., for screening and improving the translation.

My thanks also go to Springer-Verlag for the publication of this English edition and for their close cooperation. Finally, my special appreciation goes to Mrs. G. Contag for her careful and excellent typing of the whole text.

Darmstadt, September 1980 Rolf Isermann


Contents

1. Introduction

A Processes and Process Computers ........ 9

2. Control with Digital Computers (Process Computers, Microprocessors) ........ 9

3. Discrete-time Systems ........ 14
3.1 Discrete-time Signals ........ 14
3.1.1 Discrete-time Functions, Difference Equations ........ 14
3.1.2 Impulse Trains ........ 18
3.2 Laplace-transformation of Discrete-time Functions ........ 20
3.2.1 Laplace-transformation ........ 20
3.2.2 Sampling Theorem ........ 20
3.2.3 Holding Element ........ 23
3.3 z-Transform ........ 24
3.3.1 Introduction of z-Transform ........ 24
3.3.2 z-Transform Theorems ........ 26
3.3.3 Inverse z-Transform ........ 26
3.4 Convolution Sum and z-Transfer Function ........ 27
3.4.1 Convolution Sum ........ 27
3.4.2 Pulse Transfer Function and z-Transfer Function ........ 28
3.4.3 Properties of the z-Transfer Function ........ 31
3.5 Poles and Stability ........ 33
3.5.1 Location of Poles in the z-Plane ........ 33
3.5.2 Stability Condition ........ 37
3.5.3 Stability Criterion through Bilinear Transformation ........ 37
3.6 State Variable Representation ........ 39
3.7 Mathematical Models of Processes ........ 51
3.7.1 Basic Types of Technical Process ........ 52
3.7.2 Determination of Discrete-time Models from Continuous-Time Models ........ 54
3.7.3 Simplification of Process Models for Discrete-time Signals ........ 60
3.7.4 Process Modelling and Process Identification ........ 65

B Control Systems for Deterministic Disturbances ........ 67

4. Deterministic Control Systems ........ 67

5. Parameter-optimized Controllers ........ 74
5.1 Discretizing the Differential Equations of Continuous PID-Controllers ........ 74
5.2 Parameter-optimized Discrete Control Algorithms of Low Order ........ 76
5.2.1 Control Algorithms of First and Second Order ........ 79
5.2.2 Control Algorithms with Prescribed Initial Manipulated Variable ........ 83
5.3 Modifications to Discrete PID-Control Algorithms ........ 85
5.4 Simulation Results ........ 87
5.4.1 Test Processes ........ 88
5.4.2 Simulation Results of Second-order Control Algorithms ........ 89
5.5 Choice of Sample Time for Parameter-optimized Control Algorithms ........ 105
5.6 Tuning Rules for Parameter-optimized Control Algorithms ........ 109

6. Cancellation Controllers ........ 117

7. Controllers for Finite Settling Time (Deadbeat) ........ 122
7.1 Deadbeat Controller with Normal Order ........ 122
7.2 Deadbeat Controller with Increased Order ........ 127
7.3 Choice of the Sample Time for Deadbeat Controllers ........ 131

8. State Controllers ........ 134
8.1 Optimal State Controllers for Initial Values ........ 135
8.2 Optimal State Controllers for External Disturbances ........ 145
8.3 State Controllers with a Given Characteristic Equation ........ 150
8.4 Modal State Control ........ 152
8.5 State Controllers for Finite Settling Time (Deadbeat) ........ 157
8.6 State Observers ........ 159
8.7 State Controllers with Observers ........ 163
8.7.1 An Observer for Initial Values ........ 164
8.7.2 An Observer for External Disturbances ........ 165
8.8 A State Observer of Reduced Order ........ 175
8.9 Choice of Weighting Matrices and Sample Time ........ 179
8.9.1 Weighting Matrices for State Controllers and Observers ........ 180
8.9.2 Choice of the Sample Time ........ 181

9. Controllers for Processes with Large Deadtime ........ 183
9.1 Models for Processes with Deadtime ........ 183
9.2 Deterministic Controllers for Deadtime Processes ........ 185
9.2.1 Processes with Large Deadtime and Additional Dynamics ........ 185
9.2.2 Pure Deadtime Processes ........ 187
9.3 Comparison of the Control Performance and the Sensitivity of Different Controllers for Deadtime Processes ........ 192

10. Control of Variable Processes with Constant Controllers ........ 199
10.1 On the Sensitivity of Closed-loop Systems ........ 200
10.2 Control of Processes with Large Parameter Changes ........ 206

11. Comparison of Different Controllers for Deterministic Disturbances ........ 207
11.1 Comparison of Controller Structures: Poles and Zeros ........ 207
11.1.1 A General Linear Controller for Specified Poles ........ 209
11.1.2 Low Order Parameter-optimized Controllers ........ 211
11.1.3 General Cancellation Controller ........ 211
11.1.4 Deadbeat Controller ........ 212
11.1.5 Predictor Controller ........ 213
11.1.6 State Controller ........ 214
11.2 Characteristic Values for Performance Comparison ........ 217
11.3 Comparison of the Performance of the Control Algorithms ........ 219
11.4 Comparison of the Dynamic Control Factor ........ 232
11.5 Conclusion for the Application of the Control Algorithms ........ 239

C Control Systems for Stochastic Disturbances ........ 241

12. Stochastic Control Systems ........ 241
12.1 Preliminary Remarks ........ 241
12.2 Mathematical Models of Stochastic Signal Processes ........ 242
12.2.1 Basic Terms ........ 242
12.2.2 Markov Signal Processes ........ 244
12.2.3 Scalar Stochastic Difference Equations ........ 247

13. Parameter-optimized Controllers for Stochastic Disturbances ........ 249

14. Minimum Variance Controllers for Stochastic Disturbances ........ 253
14.1 Generalized Minimum Variance Controllers for Processes without Deadtime ........ 253
14.2 Generalized Minimum Variance Controllers for Processes with Deadtime ........ 261
14.3 Minimum Variance Controllers without Offset ........ 265
14.3.1 Additional Integral Acting Term ........ 266
14.3.2 Minimization of the Control Error ........ 266
14.4 Minimum Variance Controllers for Processes with Pure Deadtime ........ 267
14.5 Simulation Results with Minimum Variance Controllers ........ 269

15. State Controllers for Stochastic Disturbances ........ 274
15.1 Optimal State Controllers for White Noise ........ 274
15.2 Optimal State Controllers with State Estimation for White Noise ........ 277
15.3 Optimal State Controllers with State Estimation for External Disturbances ........ 279
15.4 State Estimation (Kalman Filter) ........ 284
15.4.1 Weighted Averaging of Two Vector Measurements ........ 286
15.4.2 Recursive Estimation of Vector States ........ 288

D Interconnected Control Systems ........ 293

16. Cascade Control Systems ........ 294

17. Feedforward Control ........ 302
17.1 Cancellation Feedforward Control ........ 304
17.2 Parameter-optimized Feedforward Control ........ 307
17.2.1 Parameter-optimized Feedforward Control without a Prescribed Initial Manipulated Variable ........ 307
17.2.2 Parameter-optimized Feedforward Control with Prescribed Initial Manipulated Variable ........ 308
17.3 State Variable Feedforward Control ........ 313
17.4 Minimum Variance Feedforward Control ........ 313

E Multivariable Control Systems ........ 316

18. Structures of Multivariable Processes ........ 316
18.1 Structural Properties of Transfer Function Representations ........ 317
18.1.1 Canonical Structures ........ 317
18.1.2 The Characteristic Equation and Coupling Factor ........ 321
18.1.3 The Influence of External Signals ........ 325
18.1.4 Mutual Action of the Main Controllers ........ 326
18.1.5 The Matrix Polynomial Representation ........ 329
18.2 Structural Properties of the State Representation ........ 329

19. Parameter-optimized Multivariable Control Systems ........ 335
19.1 Parameter Optimization of Main Controllers without Coupling Controllers ........ 337
19.1.1 Stability Regions ........ 338
19.1.2 Optimization of the Controller Parameters and Tuning Rules for Twovariable Controllers ........ 343
19.2 Decoupling by Coupling Controllers (Non-interaction) ........ 346
19.3 Parameter Optimization of the Main and Coupling Controller ........ 350

20. Multivariable Matrix Polynomial Control Systems ........ 352
20.1 The General Matrix Polynomial Controller ........ 352
20.2 The Matrix Polynomial Deadbeat Controller ........ 353
20.3 Matrix Polynomial Minimum Variance Controllers ........ 354

21. Multivariable State Control Systems ........ 356
21.1 Multivariable Pole Assignment State Controllers ........ 356
21.2 Multivariable Matrix Riccati State Controllers ........ 357
21.3 Multivariable Decoupling State Controllers ........ 357
21.4 Multivariable Minimum Variance State Controllers ........ 358

F Adaptive Control Systems Based on Process Identification ........ 360

22. Adaptive Control Systems - A Short Review ........ 360

23. On-line Identification of Dynamical Processes and Stochastic Signals ........ 364
23.1 Process and Signal Models ........ 365
23.2 The Recursive Least Squares Method (RLS) ........ 367
23.2.1 Dynamical Processes ........ 367
23.2.2 Stochastic Signals ........ 373
23.3 The Recursive Extended Least Squares Method (RELS) ........ 374
23.4 The Recursive Instrumental Variables Method (RIV) ........ 375
23.5 The Recursive Maximum Likelihood Method (RML) ........ 376
23.6 The Stochastic Approximation Method (STA) ........ 380
23.7 A Unified Recursive Parameter Estimation Algorithm ........ 381
23.8 Numerical Modifications to Recursive Parameter Estimation Algorithms ........ 385

24. Identification in Closed Loop ........ 387
24.1 Parameter Estimation without Perturbations ........ 388
24.1.1 Indirect Process Identification ........ 389
24.1.2 Direct Process Identification ........ 394
24.2 Parameter Estimation with Perturbations ........ 397
24.3 Methods for Closed Loop Parameter Estimation ........ 399
24.3.1 Indirect Process Identification without Perturbation ........ 399
24.3.2 Direct Process Identification without Perturbation ........ 399
24.3.3 Direct Process Identification with Perturbation ........ 400

25. Parameter-adaptive Controllers ........ 401
25.1 Introduction ........ 401
25.2 Suitable Control Algorithms ........ 406
25.2.1 Deadbeat Control Algorithms ........ 406
25.2.2 Minimum Variance Controllers ........ 407
25.2.3 Parameter-optimized Controllers ........ 409
25.2.4 General Linear Controller with Pole-assignment ........ 412
25.2.5 State Controller ........ 412
25.3 Appropriate Combinations of Parameter Estimation and Control Algorithms (single input, single output) ........ 414
25.3.1 Certainty Equivalent Parameter-adaptive Controllers ........ 414
25.3.2 Stochastic Parameter-adaptive Controllers ........ 421
25.3.3 Deterministic Parameter-adaptive Controllers ........ 425
25.4 Comparison by Simulation of Different Parameter-adaptive Controllers ........ 426
25.5 Choice of A Priori Factors ........ 436
25.6 Examples of Applications ........ 438
25.6.1 Adaptive Control of an Airheater ........ 438
25.6.2 Adaptive Control of a pH-Process ........ 441
25.7 Parameter-adaptive Feedforward Control ........ 444
25.8 Parameter-adaptive Multivariable Controllers ........ 447

G Digital Control with Process Computers and Microcomputers ........ 456

26. The Influence of Amplitude Quantization on Digital Control ........ 457
26.1 Reasons for Quantization Effects ........ 457
26.1.1 Analog Input ........ 457
26.1.2 The Central Processor Unit ........ 458
26.1.3 Analog Output ........ 461
26.2 Various Quantization Effects ........ 462
26.2.1 Quantization Effects of Variables ........ 462
26.2.2 Quantization Effects of Coefficients ........ 467
26.2.3 Quantization Effects of Intermediate Results ........ 467

27. Filtering of Disturbances ........ 472
27.1 Noise Sources in Control Systems and Noise Spectra ........ 473
27.2 Analog Filtering ........ 476
27.3 Digital Filtering ........ 478
27.3.1 Low-pass Filters ........ 479
27.3.2 High-pass Filters ........ 482
27.3.3 Special Filters ........ 483

28. Combining Control Algorithms and Actuators ........ 488

29. Computer Aided Control Algorithm Design ........ 500

30. Case Studies of Identification and Digital Control ........ 505
30.1 Digital Control of a Heat Exchanger ........ 505
30.2 Digital Control of a Rotary Dryer ........ 510
30.3 Digital Control of a Steam Generator ........ 519
30.3.1 Two Input/Two Output Identification and C.A.D. of Parameter-optimized Controllers ........ 520
30.3.2 Alternating Single Input/Single Output Selftuning Control ........ 521
30.3.3 Two Input/Two Output Adaptive Control ........ 521
30.4 Conclusions ........ 527

Appendix ........ 528
Literature ........ 535
List of Abbreviations and Symbols ........ 552
Subject Index ........ 557
1. Introduction

Multilevel Process Control

The overall control of industrial processes can be described in terms of hierarchical levels, shown in Fig. 1.1.

At the first level directly measurable variables y are controlled using feedforward or feedback methods. The reference values w are either constant or provided by the higher levels. If more than one variable is controlled this is called multivariable control. Automatic start-up and shut-down is also considered to be part of this first level.

Figure 1.1 Multilevel process control (levels 1 to 5)


At the second level the process is monitored. The functions of the process are checked, for example by testing whether special variables exceed certain limits. Monitoring can be restricted to current variables, but predicted future variables can also be taken into account; the outputs of the monitoring level are alarms to the plant personnel. If steps are taken automatically to counter a disturbance or to shut down the plant, this is called security control.

At the third level process optimization can be arranged, in which the efficiency or the throughput is maximized, or the consumption or costs minimized. Frequently optimization of steady-state behaviour is of paramount interest; this is called steady-state optimization. If the optimization is performed on-line, a performance index is calculated from the measured variables y, and its extremum is searched for through systematic changes of inputs, e.g. reference values w, by using an optimization method.

If several processes are coupled they can be coordinated at the fourth level. For a set of thermal and hydromechanical power plants this coordination consists of load dispatching, whilst within a complex of processes in the steel industry it is the mutual adjustment of blast furnaces, the steelworks and rolling mills.

The upper level, here the fifth level, is for management. A whole system of processes (a factory, interconnected networks, large economic units) is organised for planning with respect to the market, the given raw materials and the given personnel.

At all levels the principles of feedforward and feedback control can be applied. If feedback control is used, then as well as control loops one can speak of monitoring loops, optimization loops and coordination loops, or, in general, of multilevel control loops.

Historically, some tasks of process control were performed manually; this remains current practice in some plants. With increasing automation, equipment first took over the lower level tasks. Until about 1960 automatic control was implemented by analog controllers using electrical, pneumatic or hydraulic energy. Sequence control was realized with electrical or pneumatic elements, using binary signals. For monitoring, analog or binary equipment was used. Process optimization or coordination was either performed manually or not at all. The off-line operation of digital computers made it possible to automate the upper levels partially.

Process Computer Applications

The introduction of digital process computers then influenced the automation of process control in terms of both structure and function. The following stages of development have been observed.

By 1959 the first process computers were used on-line, but mainly in open loop for data-logging, data reduction and process monitoring. The direct control of process variables was performed by analog equipment, principally because of the unsatisfactory reliability of process computers at that time. Then the reference values (set points) of analog controllers were given by the process computers (supervisory control), for example according to time schedules or for process optimization. Process computers for direct digital control (DDC) in an on-line, closed-loop fashion have been used since 1962 for chemical and power plants [1.1], [1.2], [1.3], [1.4], [1.5], [1.6].

As a result of the development of more powerful process computers and relevant software, the application of computers for process control has increased considerably. Today, computers are standard components of process automation [1.5], [1.6]. For further details the reader is referred to the books [1.7] to [1.14]. Until recently, process computers have been used mainly for direct control, monitoring and coordination, as well as data logging and data reduction [1.7] to [1.11]. On-line optimization has rarely been applied. One characteristic of the first 15 years of process computer application is the centralization of control within one computer, often requiring parallel analog back-up systems or a back-up computer for higher reliability.

Microcomputer Applications

Cheap microprocessors became available in 1971, and as they can be assembled into microcomputers by adding semiconductor memories and input/output devices, they enable the distribution of process control tasks over several computers. New structures of process control systems became possible, characterized by decentralization. In 1975, microcomputers became available which were designed for feedforward and feedback control of 8 to 16 variables and for monitoring, and they began to take over the lower level functions of analog devices and minicomputers. Further developments are underway; microcomputers will have a major influence on measurement and control engineering.

Digital Control Systems

Unlike conventional analog control or feedforward control with binary elements, signal processing with digital computers is not limited to a few basic functions. Digital computers are programmable and can perform complex calculations. Therefore many new methods can be developed for digital process control, which for the low levels can be realized as programmed algorithms and for the higher levels as programmed problem-solving methods. Since manipulation at all levels is in terms of generalized feedforward and feedback control, multilevel control algorithms have to be designed, selected and adjusted to the processes.

This book considers digital control at the lowest level of process control. However, many of our methods for designing algorithms, for obtaining process models, for estimation of states and parameters, for noise filtering and for actuator control can also be applied to the synthesis of digital monitoring, optimization and coordination systems.

The Contents

This book deals with the design of digital control systems with reference to process computers and microcomputers. The book starts with part A, Processes and Process Computers. The general signal flow for digital control systems, and the steps taken in the design of digital control systems, are explained in chapter 2. A short introduction to the mathematical treatment of linear discrete-time systems follows in chapter 3. The basic types of technical processes and the ways to obtain mathematical process models for discrete-time signals are also discussed.

The remaining subjects are covered by the following parts:

B Control Systems for Deterministic Disturbances
C Control Systems for Stochastic Disturbances
D Interconnected Control Systems
E Multivariable Control Systems
F Adaptive Control Systems Based on Process Identification
G Digital Control with Process Computers and Microcomputers.
The topics discussed in the individual chapters are described by key words in the survey in Fig. 1.2. The method of approach to the design of digital control systems also emerges from this survey and from chapter 2.

The process and signal models used in this book are mainly parametric, in the form of difference equations or vector difference equations, since modern synthesis procedures are based on these models. Processes lend themselves to compact description by a few parameters, and methods of time-domain synthesis are obtained with a small amount of calculation and provide structurally optimal controllers. These models are the direct result of parameter estimation methods and can be used directly for state observers or state estimators. Nonparametric models such as transient functions or frequency responses in tabular form do not offer these advantages. They restrict the possibilities for synthesis, particularly with regard to computer-aided design and to adaptive control algorithms.

A survey of control algorithms designed for deterministic and stochastic disturbances is provided in chapter 4, particularly in Fig. 4.3 and section 12.1.

Chapter 5 discusses the derivation and design, based on conventional analog controllers, of parameter-optimized control algorithms with, for instance, P-, PI- or PID-behaviour, as well as general discrete-time controllers of low order designed directly in the discrete-time domain rather than by discretizing continuous controllers. Rules for setting the controller parameters are compiled from the literature and new suggestions are made, based on the results of several simulations. Hints are also provided for computer-aided design.
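As a rough illustration of what such a parameter-optimized control algorithm looks like when programmed, the following sketch implements the widely used velocity form of a discretized PID controller, delta_u(k) = q0 e(k) + q1 e(k-1) + q2 e(k-2), obtained with rectangular integration and a backward difference for the derivative term. The sketch is in Python; the numerical tuning values are chosen arbitrarily for illustration and are not taken from the book.

```python
# A minimal sketch (not the book's program) of a discrete PID control
# algorithm in velocity form: delta_u(k) = q0*e(k) + q1*e(k-1) + q2*e(k-2).
def make_discrete_pid(K, Ti, Td, T0):
    """Return a control function mapping the error e(k) to the manipulated variable u(k)."""
    # Coefficients from rectangular integration and a backward-difference derivative.
    q0 = K * (1.0 + Td / T0)
    q1 = -K * (1.0 + 2.0 * Td / T0 - T0 / Ti)
    q2 = K * Td / T0
    state = {"u": 0.0, "e1": 0.0, "e2": 0.0}   # u(k-1), e(k-1), e(k-2)

    def control(e):
        u = state["u"] + q0 * e + q1 * state["e1"] + q2 * state["e2"]
        state["u"], state["e2"], state["e1"] = u, state["e1"], e
        return u

    return control

# Hypothetical tuning values, purely for illustration:
pid = make_discrete_pid(K=1.2, Ti=8.0, Td=1.0, T0=1.0)
u0 = pid(1.0)   # manipulated variable after a unit control error at k = 0
```

Only the three coefficients q0, q1, q2 and two past errors need to be stored, which is one reason such algorithms fit easily into process microcomputers.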

The deadbeat controllers described in chapter 7 are distinguished by a very small synthesis effort. The modified deadbeat controller of higher order is of particular importance in adaptive control.

Chapter 8 deals with the design of state controllers and state observers. As well as other topics, the design for external, constantly acting disturbances is treated, and further modifications are indicated for computer applications. The design is based on minimization of quadratic performance functions as well as on pole assignment.

The control of processes with large time delays, including the predictor
controller, is taken up in chapter 9.
Fig. 1.2 Survey of the control system structures under discussion (SISO, interconnected and MIMO control systems), the information on the processes and signals used by them (process and signal models, parameter estimation, state observers and estimators), and the adjustment of the control algorithms (tuning rules, computer-aided design, self-optimizing parameter-adaptive control), together with noise filtering and actuator control. ( ) chapter number; c.: control algorithm (cf. chapter 2). SISO: single input/single output; MIMO: multi input/multi output.
Since changes in process behaviour must nearly always be taken into account for the design of control systems, the sensitivity of various control algorithms is investigated in chapter 10 and suggestions are given for its reduction. Chapter 11 adds a detailed comparison of the most important control algorithms designed for deterministic signals. The resulting closed-loop poles and zeros, the control performance and the manipulation effort are compared. This is followed by suggestions for the choice of control algorithms. After a brief introduction to mathematical models of discrete-time stochastic signals in chapter 12, the setting of optimal parameters for parameter-optimized control algorithms under the influence of stochastic disturbance signals is described in chapter 13, among other topics. Minimum variance controllers, designed on the basis of parametric, stochastic process and signal models, are derived and analyzed in chapter 14. The modified minimum variance controllers were developed in the given parametric form with application to adaptive control in mind. Chapter 15 looks at state controllers for stochastic disturbances and includes an illustrative derivation of state estimation. Several examples are used for the design of interconnected control systems in the form of cascaded control systems in chapter 16 and for feedforward control systems in chapter 17. Various design methods for feedforward control algorithms, for instance through parameter optimization or according to the minimum variance principle, supplement the design of feedback control algorithms.

The structures of multivariable processes are important in the design of control algorithms for multivariable control systems, chapter 18. Transfer function and state representations are treated. In chapter 19 the design of multivariable control systems with parameter-optimized control algorithms considers master controllers, coupling controllers (with a tendency for reinforcement of coupling, or for decoupling), stability regions, mutual influences of the master controllers and rules for tuning of twovariable control systems. Based on the matrix polynomial representation, multivariable deadbeat and minimum variance controllers can be designed, chapter 20. The design of multivariable control systems with state controllers, in chapter 21, includes multivariable pole assignment, matrix Riccati, decoupling and minimum variance state controllers.

Chapters 22 to 25, on adaptive control algorithms, form a further key area of this book. After a brief survey in chapter 22, various methods for the on-line identification of dynamic processes and stochastic signals using recursive parameter estimation algorithms are described and compared in chapter 23. Parameter estimation in closed loop, with and without a perturbation signal, is discussed in chapter 24. Finally, various parameter-adaptive control systems emerge in chapter 25, using suitable combinations of parameter estimation and controller design methods. Particularly appropriate are those control algorithms that require a small design effort and meet the requirements for closed loop identification, i.e. deadbeat controllers and minimum variance controllers. Various parameter-adaptive control algorithms were programmed, tested and compared on-line with analog-simulated and pilot processes. Several examples demonstrate the quick convergence and good control quality of these digital adaptive control algorithms. The principle of parameter-adaptive control is also extended to adaptive feedforward control and adaptive multivariable control. Nonlinearities, introduced through amplitude quantization or rounding errors, and the effects resulting from them, such as offsets and limit cycles in digital feedback control systems and dead bands in digital feedforward control systems and filters, are investigated in chapter 26. Chapter 27 deals with analog and digital filtering of disturbance signals. Discrete high-pass and low-pass filters and recursive averaging are considered. The various possibilities for feedforward and feedback control of the actuators are described in chapter 28. Chapter 29 describes briefly the procedure of computer-aided design (c.a.d.) of control algorithms.

The last chapter presents case studies. The application of process identification and c.a.d. of control algorithms is shown for a heat exchanger and a rotary dryer. Finally, the two basic approaches, process identification/c.a.d. and parameter-adaptive control, are performed and compared for a simulated steam generator.
A Processes and Process Computers

2. Control with Digital Computers


(Process Computers, Microprocessors)
Sampled-data Control

In data processing with process computers, signals are sampled and digitized, resulting in discrete (discontinuous) signals which are quantized in amplitude and time, as shown in Fig. 2.1.

Figure 2.1 An amplitude modulated, discrete-time and discrete-amplitude signal produced by sampling and analog/digital conversion. Sampling makes the time value discrete while the amplitude value remains continuous; analog/digital conversion then makes the amplitude value discrete as well.

Unlike continuous signals, these signals have discrete amplitude values at discrete times. Amplitude modulated pulse series emerge, for which the pulse heights are proportional to the continuous signal values. The pulse heights are rounded up or down, according to the quantization device.

The sampling is usually performed periodically with sampling time T0 by a multiplexer which is constructed together with an effective range selector and an analog/digital converter. The digitized input data are sent to the central processor unit. There, the output data are calculated using programmed algorithms. If an analog signal is required for the actuator, the output data emerge through a digital/analog converter followed by a hold device. Fig. 2.2 shows a simplified block diagram.

Figure 2.2 The process computer as a sampled-data controller: sampler, analog/digital converter, digital computer, sampler, digital/analog converter and hold. k = 0,1,2,3,...

The samplers of the input and output signals do not operate synchronously, but are displaced by an interval TR. This interval results from the A/D-conversion and the data processing within the central processing unit. Since this interval is usually small in comparison with the time constants of the actuators, processes and sensors, it can often be neglected. Synchronous sampling at the process computer input and output can therefore be assumed. Also, the quantization of the signal is small for computers with a word length of 16 bits and more and A/D-converters with at least 10 bits, so that the signal amplitudes can initially be regarded as continuous.
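To indicate the order of magnitude involved, the following short Python sketch (with an arbitrarily chosen measurement range and test signal, not taken from the text) rounds a sampled signal to the grid of a 10-bit analog/digital converter; the resulting error stays below half a quantization unit, i.e. below about 0.05 % of the measurement range.

```python
import numpy as np

def quantize(signal, n_bits=10, y_min=0.0, y_max=10.0):
    """Round a continuous-amplitude signal to the nearest level of an n-bit A/D converter."""
    levels = 2 ** n_bits                    # 1024 levels for 10 bits
    q = (y_max - y_min) / (levels - 1)      # quantization unit
    return np.round((signal - y_min) / q) * q + y_min

# Sampled test signal (hypothetical values): exponential decay sampled with T0 = 1
k = np.arange(20)
y = 10.0 * np.exp(-0.2 * k)

y_q = quantize(y, n_bits=10)
print("maximum quantization error:", np.max(np.abs(y - y_q)))   # about 0.005, i.e. 0.05 % of 10
```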

These simplifications lead us to the block diagram in Fig. 2.3, which shows a control loop with a process computer as a sampled-data controller. The samplers now operate synchronously and generate discrete-time signals. The manipulated variable u is calculated by a control algorithm using the controlled variable y and the reference value w as inputs. Such sampled-data control loops do not only exist in connection with process computers. Sampled data also occur when:

- measured variables are only present at definite instants (e.g. rotating radar antenna, radar distance measurement, gas chromatographs, material sampling followed by later laboratory analysis, socio-economical, biological and meteorological data)
- expensive equipment (cables, channels) is multiplexed.

Figure 2.3 Control loop with a computer as a sampled-data controller (sampler, control algorithm, sampler, hold and process)
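To make the structure of Fig. 2.3 concrete, the following Python sketch simulates one such loop under simple assumptions that are not taken from the book: a first-order process behind a zero-order hold, discretized exactly for the sampling time T0, and a discrete PI control algorithm with arbitrarily chosen settings.

```python
import math

T0 = 1.0             # sampling time
K, T1 = 2.0, 5.0     # gain and time constant of an assumed first-order process
a = math.exp(-T0 / T1)
b = K * (1.0 - a)    # zero-order-hold discretization: y(k+1) = a*y(k) + b*u(k)

Kc, Ti = 0.8, 4.0    # PI controller settings, chosen ad hoc
w = 1.0              # reference value

y, u, e_old = 0.0, 0.0, 0.0
for k in range(15):
    e = w - y                                    # sample the controlled variable, form the error
    u = u + Kc * ((e - e_old) + (T0 / Ti) * e)   # discrete PI algorithm in velocity form
    e_old = e
    y = a * y + b * u                            # process response over one interval; the hold keeps u constant
    print(f"k={k:2d}  u={u:6.3f}  y={y:6.3f}")
```

With these assumed values the loop is stable and the controlled variable y approaches the reference value w within a few sampling intervals.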

Design of Digital Control Systems

With electrical, pneumatic or hydraulic analog controllers, design is restricted mostly to single-purpose elements with P-, I- or D-behaviour for technical and economic reasons. The possibilities for the synthesis of control systems were therefore highly restricted. However, these restrictions are not valid for control algorithms in process computers. Because of the high programming flexibility, much latitude exists in the realization of sophisticated control algorithms. This enables the practical application of modern control theory, but also reinforces the question as to which control algorithms are suited to which application purposes. An answer to this question is possible only if enough is known in the form of mathematical models of the processes and their signals, and if it is known how the various control algorithms compare with each other with respect to control performance, manipulation effort, sensitivity, disturbance signals, design effort, routine computing effort and the process behaviour (linear, non-linear, placement of poles and zeros, dead times, structure of multivariable processes).

An extensive literature exists on the theory of sampled-data systems, and in many books on control engineering some chapters are dedicated to sampled-data control. Special books on the theory of sampled-data systems were first based on difference equations [2.1], [2.2]. They were followed by books which also used the z-transformation [2.3] to [2.13], [2.15], [2.20]. Finally, the state representation was introduced [2.14], [2.17], [2.18], [2.19], [2.21]. Numerous papers from conferences can be added to this list. In the more application-oriented books [1.7]-[1.11] on process computer control, however, only those control algorithms which rely upon discretized analog PID-controllers are treated.

For the design of digital control systems as described in this book, the following stages are considered (compare also the survey scheme, Fig. 1.2).

1. Information on the processes and the signals

The basis for the systematic design of control systems is the available information on the processes and their signals, which can be provided for example by
- directly measurable inputs, outputs and state variables,
- process models and signal models,
- state estimates of processes and signals.
Process and signal models can be obtained by identification and parameter estimation methods and, in the case of process models, by theoretical modelling as well. Nonmeasurable state variables can be reconstructed by observers or state estimators.

2. Control system structure

Depending on the process, and after selection of appropriate manipulated variables and controlled variables, the control system structure has to be designed in the form of, for example:
- single input/single output control systems
- interconnected control systems
- multi input/multi output control systems.

3. Feedforward and feedback control algorithms (design and adjustment)

Finally, the feedforward and feedback control algorithms are to be designed and adjusted (or tuned) to the process. This can be done using:
- simple tuning rules for the parameters
- computer-aided design
- self-optimizing adaptive control algorithms.
Since several control algorithms with different properties are usually available, they have to be compared and selected according to various points of view.

4. Noise filtering

High-frequency noise which contaminates the controlled variables and which cannot be controlled has to be filtered by analog and digital filters.

5. Feedforward or feedback control of the actuators

Depending on the construction of the actuator, various feedforward or feedback controls of the actuator are possible. The control algorithms for the process have to be adjusted to the actuator control.

Finally, for all control algorithms and filter algorithms the effects of amplitude quantization have to be considered. In Fig. 2.4 a scheme for the design of digital control systems is given. If tuning rules are applied to the adjustment of simple parameter-optimized control algorithms, simple process models are sufficient. For a single computer-aided design, exact process/signal models are required, which most appropriately can be obtained by identification and parameter estimation. If the acquisition of information and the control algorithm design are performed continuously (on-line, in real time), then self-optimizing adaptive control systems can be realized.

Figure 2.4 Scheme for the design of digital control systems: information acquisition (process/signal analysis, process/signal models, state estimation), control system synthesis (control system structure, control algorithm design, control algorithm adjustment), and the resulting loop of control algorithm, actuator control, process and noise filter


3. Discrete-time Systems

In this chapter a short introduction to the mathematical treatment of


linear discrete-time systems (sampled-data systems) is given. Only those
basic relationships which are fundamental for the design of digital con-
trol systems are given. For an in-depth study of the theory of discrete-
time systems the reader is referred to the wellknown text books, e.g.
[2.3], [2.4], [2.10]-(2.14], [2.17], [2.18].

3.1 Discrete-time Signals

3.1.1 Discrete-time Functions, Difference Equations

Discrete (discontinuous) signals are quantized in amplitude or time.


In contrast to continuous signals which describe any amplitude value
for any time instant, discrete signals contain only values of discrete
amplitudes for discrete-time points. In the following, only discrete-
time signals are considered. They consist of trains of pulses at cer-
tain time points. Discrete-time signals are usually generated by samp-
ling continuous signals at constant time intervals. The single pulses
of the series can be modulated in several ways, and pulse amplitude,
pulse width and pulse frequency modulation are distinguished. For digi-
tal control systems pulse amplitude modulation is usually of interest,
especially for the case when the pulse height is proportional to the
continuous signal value, the pulse width is constant and the pulses
occur at equidistant sampling instants, Fig. 3.1.1. This type of dis-
crete signal leads to linear relationships in the treatment of linear
dynamic systems, because the theorem of superposition applies. Figure
3.1.1 shows the generation of discrete-time amplit~de modulated pulse
trains through periodic detection of the continuous signal x(t) with
a switch which closes with sampling time T0 for the time period h. If
the switch duration h is very small in comparison to the sampling time
T0 and if the switch is followed by a linear transfer element with time
constants T. >> h, the pulse trains x (t) can be represented by the
~ p
discrete-time signal xT(kT 0 ), Fig. 3.1.2. Then xT(kT 0 ) are the ampli-
tudes for the sampling instants and the switch becomes a sampler.
3.1 Discrete-time Signals 15

X It) .. ~o-tTr-o __ !!.,.l


xP..

Figure 3.1.1 Generationof an amplitude modulated discrete-time signal


through a switch with duration h and sampling time T0

X
X It I
.8h « T0
...
xTit I

XT
'
'

1 2 3

Figure 3.1.2 Discrete-time signal xT(k) for h << T0 , generated by a


sampler

An amplitude modulated discrete-time function xT{t) which is generated


through equidistant sampling of a continuous function x(t) with sampling
time T0 is defined as

x(kT 0 ) for t = kT 0
}k=0,1,2, ... (3.1-1)
o for kT 0 < t < (k+1)T 0

The formation of various types of discrete-time functions is shown by


examples.

Example 3.1.1

a) If the continuous function

X (t) = e -at
16 3. Discrete-time Systems

is sampled, the discrete-time function becomes, with t kT 0

k=0,1,2, ...

Hence an explicit function results

b) The integration
1 t
x(t) = T J w(t)dt
0
is to be performed numerically by a staircase approximation of the
function w(t). This leads to
k-1
T Z T0 w (vT 0 ) .
v=O
In this case, x(kT 0 ) depends on a second function

Hence an implicit function results.


0

The next example shows how a difference equation can be obtained for an
implicit function.

Example 3.1.2

In example 3.1.1 b)
1 k
x((k+1)T 0 ) = T z T0 w(vT 0 )
v=O
is also valid. Subtraction yields
To
T w(kT 0 )

or
x(k+1)+a 1 x(k)

or
x(k)+a 1x(k-1) = b 1w(k-1)

with a 1
This is a first order linear difference equation.
0
3.1 Discrete-time Signals 17

If discrete-time functions depending on another discrete-time function


can be written in recursive form, difference equations are obtained. An
mth order difference equation is

x(k)+a 1 x(k-1)+ ... +amx(k-m) = b 0 w(k)+b 1w(k-1)+ ... +bmw(k-m). (3.1-2)

The argument kT 0 has now been replaced by k. The current output at time
k can be calculated recursively by

( 3. 1-3)

if the current input w(k) and m past inputs w(k-1) , ... ,w(k-m) and m past
outputs x(k-1) , ... ,x(k-m) are known.

Difference equations can also be obtained by discretizing differential


equations. Here a first order differential is approximated by a first
order difference, a second order differential by a second order diffe-
rence, etc. The following relationships result if backward differences
are used:

Continuous function Discrete-time function

first order differential first order difference

dx(t) x(t)-x(t-l:lt)
lim l:lx (k) = x (k) -x (k-1)
~
l:lt+O l:lt

second order differential second order difference


dx(t) dx(t-M)
d 2 x(t) dt
-~
lim ~ ll 2 x(k) llx(k)-llx(k-1)
l:lt+O lit
x(k)-2x(k-1)+x(k-2)

The next example shows the discretization of a first order differential


equation.

Example 3.1.3

A first order differential equation is

T dx ( t) = w ( t) .
dt

Discretizing backwards with sample time T0 yields


To
x(k)-x(k-1) = ' r w(k).
18 3. Discrete-time Systems

If the discretization is done forwards by applying

6x' (k) = x (k+1) -x (k),

the same difference equation as in example 3.1.2 results


To
x(k+1)-x(k) T w(k).
0

These approximations are only satisfactory if the sampling time T0 is


small in comparison with the time constant T.

Eq. (3.1-2) is the standard form of a difference equation. A form corres-


ponding to a differential equation results from introducing differences
up to roth order

rn rn-1
am(', x(k)+arn_ 1 6 x(k)+ •.. +a 1 6x(k)+x(k)

rn rn-1 (3. 1-4)


= Srn6 w(k)+Srn_ 1 6 w(k)+ ... +S 1 6w(k)+S 0 w(k).

In the following, another method for obtaining difference equations


which is also valid for large sample times T0 is described.

3.1.2 Impulse Trains

An expedient mathematical treatment of discrete-time functions is ob-


tained if the pulse train x (t) is approximated by an impulse train,
p
where an impulse is defined by

t'fo
o(tl ={ 0 ( 3. 1-5)
t=O

and has the area

I a<tldt = 1 sec. (3. 1-6)

If the switch duration becomes very small compared with the sample time,
h << T0 , the pulses of the pulse train xp(t) with the area x(t)h can
be approximated by impulses o(t) with the same area

X (t) ~X (t) = X(t)h ~ o(t-kT 0 ). (3. 1-7)


p o 1 sec k=O

The resulting impulse train x 0 (t) is not a realizable signal but an


3.1 Discrete-time Signals 19

approximation of the pulse train x (t). This assumption of an ideal


p
sampler, however, leads to a considerably simplified mathematical des-
cription of the transfer behaviour of discrete-time signals if the
switches are followed by linear transfer elements. Fig. 3.1.3 illustrates
this approximation. The length of the arrows corresponds to the area of
the impulses.

approximation
Xp
h«T 0

t
Figure 3.1.3 Approximation of the pulse train x (t) by an impulse
train xo(t) p

Since the impulse train only exists for t kT 0 , Eq. (3.1-7) becomes

(3. 1-8)

The switch duration h cancels for transfer systems with identical syn-
chronously operating switches at the input and the output. Furthermore,
different values for h do not affect the result if a date. hold follows
the switch. Therefore, for simplicity, the switch duration will be ig-
nored (or we choose h = 1 sec) which leads to the "starred" impulse
train

x* (t) L x(kT 0 )o(t-kT 0 ). ( 3. 1-9)


k=O
Through this approximation and normalisation, the output of the sampler,
Fig. 3.1.2, is just multiplied by impulses o(t-kT 0 ).

Note that this approximation assumes that:


a) h << T 0 ,
b) the sampler is followed by a linear realizable system
G ( s ) = Z ( s ) /N ( s ) .
20 3. Discrete-time Systems

3.2 Laplace-transformation of Discrete-time Functions

3.2.1 Laplace-trans formation

The Laplace-trans formation


00

x(s) = L{x(t)} = J x(t)e-stdt (3.2-1}


0
where s = o+iw, applied to an impulse, yields

L{o(t)} = f
0
o(t)e-stdt (3.2-2)

and for shifted impulses

(3.2-3)

The Laplace-trans form of the impulse train, Eq. (3.1-9), then becomes

-kT 0 s . (3.2-4)
L{x*(t)} = x*(s) I x(kT 0 )e
k=O

This Laplace-trans form of a discrete-time function is now periodic with


frequency

(3.2-5)

since

x*(s+ivw 0 ) = x*(s) (3.2-6)

This can easily be shown by substituting s by s+ivw 0 in Eq. (3.2-4).


With s = o+iw one obtains

x*(o+i(w+vw 0 )) = x*(o+iw) (3.2-7)

which means that x*(s) is repeated for all vw 0 . If x*(o+iw) is known


for all o and -w 0 /2 ~ w < w0 /2, i.e. one strip in the s-plane, it is
known completely. The function x*(s) has the same value at all congru-
ent points in the com?lementary strips for higher frequencies.

3.2.2 Sampling Theorem

If the continuous signal x(t) is sampled with a small sample time T0 =Lt,
the Laplace-trans form of the continuous signal, Eq. (3.2-1), can be
approximated by
3.2 Laplace-transformation of Discrete-time Functions 21

x(t l lx(iw)l

t -Wmax Wmax w
a)

x*(t) lx"liwll

I
/
-- '

t w
b)

IG!iwll

c)
-Wmax Wmax w

Figure 3.2.1 a) Magnitude of the Fourier-transform of a continuous signal


b) Magnitude of the Fourier-transform of a sampled signal
c) Magnitude of the frequency response of an ideal band-
pass filter
22 3. Discrete-time Systems

x(s) ~ L x(k~t)e-k~ts~t. (3.2-8)


k=O

A comparison with Eq. (3.2-4) yields

(3.2-9)

if T0 is sufficiently small.

It is assumed that the continuous signal is band-limited, so that its


Fourier-transform is

x(iw) f 0 for -w
max
,;; w ,;; w
max
x(iw) 0 for w < -w and w > w
max max

(see Fig. 3.2.1a). This band-limited signal is sampled with sam~le time
T0 and approximated by the impulse train x*(t). If T0 is sufficiently
small, the Fourier-transform then exists in a "basic spectrum"

-w ,;; w ,;; w
max max

and periodically with w0 repeating "complementary spectra" (sidebands)

v=±1,±2, ...

as shown in Fig. 3.2.1b. In comparison to the continuous signal, addi-


tional higher frequency components arise from the sampled signal.

If the originally continuous signal has to be reconstructed from the


sampled signal by filtering with the ideal bandpass filter

I G(iw) I -w
max
,;; w ,;; w
max
I G(iw) I 0 w < -w
max
and w > w
max

(see Fig. 3.2.1c), this is only possible without error if w0 /2 ~ wnax·


If the sampling frequency is too small, w0 /2 < wmax' then overlapping
basic and side spectra arise, and an errorfree filtering of the basic
spectrum is not possible and therefore the band-limited continuous sig-
nal cannot be recovered without errors.

In order to recover the continuous band-limited signal which has a maxi-


mum frequency wmax from the sampled signal, the sampling frequency w0
has to be

(3.2-10)
3.2 Laplace-transformation of Discrete-time Functions 23

or, with Eq. (3.2-5), the sample time T0 has to be

""--
1T
w (3.2-11)
max

This is Shannon's sampling theoPem.

It should be mentioned that band-limited continuous signals in reality


do not arise in communication or control systems. However, in sampled
data control, the Shannon frequency (also called Nyquist frequency)

(3.2-12)

plays an important role, at least as a referencefrequency. It is also


the maximum frequency for frequency responses of discrete-time systems.

3.2.3 Holding Element

If a sampler is followed by a zero-order hold which holds the sampled


signals x(kT 0 ) for one sampling interval, then a staircase signal re-
sults, Fig. 3.2.2.

_x_ __..,....a,.. x• .., hold m



sampler
X x" m

t t t
Figure 3.2.2 Sampler with zero-order hold

The transfer function of a zero-order hold can be derived as follows.


For the impulse train as input

x*(t) = E x(kT 0 )o(t-kT 0 )


k=O

with Laplace-transform
24 3. Discrete-time Systems

x* (s)

the staircase output becomes

m(t) = E x(kT 0 )[1 (t-kT 0 )-1 (t-(k+1)T 0 )]


k=O
with Laplace-transform

m(s) = E
k=O

x*(s)

The transfer function of the zero-order hold is then

(3.2-13)

For low frequencies it behaves like a first order low-pass filter

lim H(iw) = lim~ (1 - (3.2-14)


w+O w+O lW

3.3 z-Transform

3.3.1 Introduction of z-Transform

Introducing the abbreviation

(3.3-1)

into Eq. (3.2-4), leads to the z-transform

-k -1 -2
x(z) = }{x(kT0 )} = E x(kT 0 )z = x(O)+x(1)z +x(2)z + ... (3.3-2)
k=O
This infinite series converges if lx(kT 0 ) I is restricted to finite va-
lues and if lzl > 1. Since a can be chosen appropriately, convergence
is possible for many functions x(kT 0 ). The assumptions made for the
Laplace-transform, especially x(kT 0 ) = 0 for k < 0, are also valid for
the z-transform.

Some examples of z-transform calculations are given.


3.3 z-Transform 25

Examples 3.3.1:

a) The step function: x(kT 0 ) = 1 (kT 0 )


According to the definition Eq. (3.3-2)

-1 -2
x(z) = 1+z +z + ...

a power series results which can be written in closed form

1 z
x(z) = ----1 if Iz I > 1.
1-z z-1
-akT
b) The exponential function: x(kT 0 ) = e 0 (a is real)

z z
z-e-aTo

c) The sine function: x(kT 0 )


With

it follows, using the result of b) with a iw 1 , or with a


that

x(z)
2i [z_)w1To

zsinw 1 T0
2
z +a 1 z+1

These examples have shown how the z-transforms of some simple functions
can be obtained. In this way, a table of transforms of common functions
can be assembled. A short table of corresponding continuous time func-
tions, Laplace-transforms and z-transforms, is given in the Appendix.
This table shows:

---
a) There is a direct correspondence of the denominators of x(s) and x(z)

with z 1 = T s
e 0 1
n n
(s-s 1 ) (z-z 1 )

b) There is no direct correspondence of the numerators of x(s) and x(z).


For example, x(z) can possess numerator-polynomials even if x(s) does
not.
26 3. Discrete-time Systems

3.3.2 z-Transform Theorems

Some important theorems for the application of the z-transform are lis-
ted below. For their derivation, it is referred to the textbooks given
at the beginning of this chapter.

a) Linearity

b) Shifting to the right

d <!: 0

c) Shifting to the Zeft

}{x(kT0 +dT 0 )} zd[x(z)-d~1 x(qT 0 )z-q] d <!: 0


q=O

d) Damping

~{x(k)e-akTo} = x(zeaTO)

e) Initial- vaZue theorem

x(+O) lim x(z)


z+oo

f) Final- vaZue theorem

z-1
lim x(kT 0 ) = lim z x(z) lim (z-1) x (z)
k+oo z->-1 z->-1

3.3.3 Inverse z-Transform

In contrast to the Laplace-transform, where the transform x(t) + x(s)


and the inverse transform x(s) + x(t) are unique, the z-transform
x(t) + x(z) and the inverse z-transform x(z) + x(t) are not unique,
because of possible "ripple" between the sample points. However, the
transform of x(kT 0 ) + x(z) and the inverse transform x(z) + x(kT 0 ) are
unique. For practical purposes, the inverse z-transform can be found by
expanding x(z) into a sum of the simple low order terms listed in the
z-transform table or just by dividing the numerator of x(z) by the de-
nominator, yielding a series:
3.4 Convolution Sum and z-Transfer Function 27

A comparison with E~. (3.3-2) results in

3.4 Convolution Sum and z-Transfer Function

3.4.1 Convolution Sum

If a sampler operates at the input of a linear system with transfer


function G(s) or impulse response g(t), as in Fig. 3.4.1, the impulse
train approximation of the input to the system is

u*(t) = L u(kT 0 )o(t-kT0 ). (3.4-1)


k=O
With the impulse response g(t) as response to the impulse o(t), the con-
vol-ution sum

(3.4-2)

results. If also the input and output are sampled synchronously, we


have with t nT 0
00

y(nT 0 ) L u (kT 0 ) g ( (n-k) T0 )


k=O
00

L u ( (n-v) T0 ) g (vT 0 ) (3.4-3)


v=O

Figure 3.4.1 A linear process with sampled input and output


28 3. Discrete-time Systems

I To
~ -1 Ljm~l -j ~\~i I
"h 1::::_ ~ ~.________, 'k=
u{kl Hlsl yltl

t k t

Figure 3.4.2 A linear process with zero-order hold and sampled input
and output

3.4.2 Pulse Transfer Function and z-Transfer Function

For the sampled output one obtains by applying Eq. (3.2-4)


<X>

y*(s) = l: y(nT 0 )e-nTos


n=O

l: l: u(kT 0 )g((n-k)T0 )e-nTos (3.4-4)


n=O k=O

and by substituting q n-k

y*(s)

-qT s l: u(kT 0 )e-kTos


l: g(qT )e 0 (3.4-5)
0
q=O k=O

y*(s) G*(s)u*(s)

Hence, the pulse transfer function is defined by


<X>

~ -qT s
G*(s) l: g(qT 0 )e o . (3.4-6)
u_* (s) q=O
T s
With the abbreviation z = e 0 the z-transfer function is defined
<X>

G(z) Y.J.& l: g(qT 0 )z-q = }{g(q)} (3.4-7)


u(z) q=O

The following examples show the evaluation of z-transfer functions.


3.4 Convolution Sum and z-Transfer Function 29

Example 3.4.1: A first order lag without zero-order hold

A first order lag with the s-transfer function

K K'
G ( s) = 1 +Ts a+s

with a = 1/T and K' = K/T, has the impulse response

-at
g(t) = K'e .

Now, letting t = kT 0

g(kT ) = K'e-akTo
0

this leads with Eq. (3.4-7) and example 3.3.1b) to

K' l: (eaTOz)-q K' z


G ( z)
-aT 0
q=O z-e
bo
-1
1+a 1 z

Taking K = 1, T = 7.5 sec and T0 = 4 sec, yields the parameters

-aT
b 0 = K' = K/T = 0.1333; a 1 = -e o = -0.5866.
0

The above operations can be written as

(3. 4-8)

The symbol l{x(s)} means that one looks for the corresponding x(z) in
the z-transtorm table directly, as

G(z) = 't{G(s)} = l,{~} = K'z (3.4-9)


a+s z-e -aT 0 .

If the sampler at the input is followed by a zero-order hold, as shown


in Fig. 3.4.2, the z-transfer function becomes

1 -Tos
HG ( z) = t {H ( s) G ( s) } =l { -e
s
G ( s) }

(3.4-10)
30 3. Discrete-time Systems

Example 3.4.2: A first order lag with zero-order hold

Applying Eq. (3.4-10) and the z-transform table in the Appendix yields

z-1 K' z-1 ( 1-e -aTO) z K'


HG(z) -z- ~ {s (a+s)} z (z-1) (z-e -aTO) a
b 1z -1
(1-e-aTO) K'
(z-e-aTO) a -1
1-a 1 z

Using the same parameters K, T and T 0 as in Example 3.4.1, we have

-e-aTO = -0.5866

(1-ea T 0) -K' = 0.4134


a

Note the difference from example 3.4.1.


0

Examples for higher order systems are given in section 3.7.2. The dyna-
mic behaviour of linear, time invariant systems with lumped parameters
and continuous input and output signals u(t) and y(t) is described by
differential equations

a y(m) (t) + a (m-1) (t) + + a 1 y(t) + y(t)


m m-ly
= b u(m) (t) + b u (m- 1 ) ( t) + ... + b 1 u(tl + b 0 u(tl. (3.4-11)
m m-1
Laplace-transformation results in the s-transfer function

+ b sm-1
G(s) = ~ m-1 B (s)
(3.4-12)
u(s) m A (s) •
+ a
m
s

Now, for a given difference equation

y(k) + a 1 y(k-1) + .•• + amy(k-m)

(3.4-13)

the z-transfer function is obtained by applying the theorem of shifting


to the right

y(z)[1 + a 1 z
-1
+ ... + amz
-m -1
] = u(z)[b 0 + b 1 z + ... + b z -m]
m
b + b z-' 1 + + b z""'m B(z- 1 )
~ 1 m
G ( z) -1 -m (3.4-14)
u (z) + a 1z + + a z A(z- 1 )
m
3.4 Convolution Sum and z-Transfer Function 31

3.4.3 Properties of the z-Transfer Function

Proportional Behaviour

For processes with proportional behaviour, the gain is obtained by


using the finite value theorem

y(k+oo) bo+b1+ ... +bm


K lim G (z) (3.4-15)
u(k+oo) 1+a 1 + ... +am
z+1

Integral Behaviour

Processes with integral behaviour have a pole at z


-1 -m
b 0 +b 1 z + .•. +bmz
G ( z) (3.4-16)
-1
( 1-z )

The "steady-state" gradient after a step input of height u is


0

bo+b1+ .•. +bm


/',y(k) y(k)-y(k-1) (3.4-17)
1+a;+ ... +a~

If b 0+ 0 the system has a jump discontinuity at k = 0. However, for


most real processes b 0 is zero because, for synchronous sampling at the
input and output, at least the lag behaviour of the actuators and sen-
sors avoids the jump discontinuity.

Dead time

A dead time with s-transfer function

has the z-transfer function

-d
D (z) = z (3.4-18)

according to the shifting theorem, if Tt = dT 0 with d = 1,2,3, ...


A time lag system followed or preceded by a deadtime leads to

DG(z) Y..i&
u(z)
= G(z)z-d (3.4-19)
32 3. Discrete-time Systems

Realizabili~y

The realizability conditions depend on how the z-transfer function is


written in terms of negative or positive powers of z.
-1 -m
b 0 +b 1 z + ... +bmz
a) YM
u(z) 1 -n
ao+a1z + ... +anz

This transfer function and its corresponding difference equation is rea-


lizable if, by long division

-1 -1 -2
G(z ) = g (0) +g ( 1 ) z +g ( 2) z + ...

1 2
is obtained (c.f. Eq. (3.4-7)) not containing members with Z I Z I

because the impulse response has to satisfy the causality principle.


Hence the realizability conditions are

(i) i f bo
+0 then ao +0
i f b1
+0 then a1 +0 (3.4-20)

$
(ii) m :?_
n

m
bc)+bi z+ ... +b~z
b) G (z) ~
u(z)
0 l
a +a z+ ... +a~ z
n

This transfer function is realizable if in the corres9onding difference


equation

0
a y(k)+ ... +a~y(k+n) = b u(k)+ ... +b~u(k+m) 0
the term y(k+n) does not depend on future values u(k+m), invoking the
causality principle. Therefore, the realizability condition for this
form of G(z) is

m :s; n (3.4-21)

for a'
n
f 0.

Correspondence with the impulse response

The impulse response g(k) results from the difference equation, Eq.
(3. 4-13), with
u(O)
u(k) 0 for k > 0

because this describes a unit pulse at the input, see Eq. (3.4-1).
3.5 Poles and Stability 33

Substituting in the difference equation

y(k) = b 0 u(k)+ ... +bmu(k-m)-a 1 y(k-1)- ... -any(k-n)


leads to

g(O) bo
g(1) b 1 - a 1 g(O)
g(2) b 2 - a 1 g(1) - a 2 g(O) (3.4-22)

g(k) bk- a 1 g(k-1) - - akg(O) for k ~ m


g(k) - a 1 g(k-1) amg(k-m) for k > m.

Cascaded Systems

For deriving the z-transfer function of cascaded linear systems, all


elements not separated by a sampler must be multiplied out first. Exam~­
les are shown in Fig. 3.4.3. Note that each sampler leads to a multi-
plication sign.

3.5 Poles and Stability

3.5.1 Location of Poles in the z-Plane

Real Poles

It was shown in Example 3.4.1 that a first order lag with s-transfer
function

yJ.§l K' (3.5-1)


G(s) a+s
u(s)

and with no zero-order hold leads to the z-transfer function

K' z K'
G(z) = ~ -1
(3.5-2)
u(z) z-z 1
1-a 1 z

with pole
-aT
z1 = a1 = e 0 • (3. 5-3)

The corresponding difference equation is

y(k) - a 1 y(k-1) K'u(k). (3. 5-4)


34 3. Discrete-time Systems

a) y ( z) u(z)l{G 1 (s)G 2 (s)}

b) y(z)

c) y (z) = HGP (z)•GR (z)•[w (z) -y (z)]

HG p ( z ) · GR ( z )
G (z)
w
=~
w (z) 1+HG (z) ·GR(z)
p "

Figure 3.4.3 Examples of the derivation of overall z-transfer functions


of cascaded systems

For an initial value y(O) f 0 and for u(k) 0, k ~ 0, the homogeneous


difference equation is

y(k) - a 1 y(k-1) =0 (3.5-5)

giving

y ( 1)
y (2)

y (k) (3.5-6)

T-is first order system only converges to zero and is, therefore, only
asymptotically stable for ia 1 i < 1. The time behaviour of y(k) for diffe-
3.5 Poles and Stability 35

rent positions of the pole a 1 in the z-plane is shown in Fig. 3.5.1.


A negative value of a 1 results in alternating behaviour.

Since the poles in the s-plane and the z-plane are related by

-aT
z1 = a1 = e 0

the s-poles for -oo < a < +00 lead to the z-poles oo > z 1 > 0, i.e. only
positive z-poles. Therefore, a negative z-pole z 1 a 1 < 0 has no cor-
responding s-pole.

Conjugate Complex Poles

The second order a-transfer function


2 2 2 2
K(a +w 1 ) K (a +w 1 )
K
G(s) = ~
u (s) 2
(s+a) +w 1
2

with a = D/T
2
w1 21 (1-D 2 )
T

with no zero-order hold has the z-transfer function


2
a +w 1
(aK sinw 1T0 )z
w1
G(z) Yl!l
u(z)

Here a = e-aTe.

The poles are

(3.5-7)

and the homogeneous difference equation becomes

For initial values y(O) and y(1) = acosw 1 T0 , the solution of this equa-
tion is

y(k) (3.5-8)

The behaviour of y(k) is shown in Fig. 3.5.2 for positive values of a.


Negative a lead to alternating y(k). However, for this case no corres-
ponding pole in the s-plane exists.
36 3. Discrete-time Systems

yh(~.k Im
y { • I~
(1)
I. . . > 1
C(< -1 C(

• • • ~ k

ct=-1 yt (:} • _,+--~-*--+......;~-4-'*-"-


(1} Re Yl_l:l_--
~ ct= 1

~ 1 k

y t
(3}

y~5) -i
~ O<ct<1

YL:.__
-1 <ct<O • k
• k
aoQ

Figure 3.5.1 Real poles and corresponding eigenbehaviour

O<ct<1
yh4) • C( >1
Im
• k
w1T0 =150°
(1} •

~·- .-.
c)

l2l_,X:.y-tt-....7_•,...,........-(2,...,..... =1
t
_,,...._ C(

-1 I 1; e e e e • k

0< C( <1
y~5)
• k
-I
'i ' y...t. .·. . ·-.. . . .~.~,. . ). . . .~.--k O<ct<1

blw 1T0 =90°

Figure 3.5.2 Conjugate complex pole-pair with corresponding eigenbe-


haviour (Shannon frequency: wShTO = n)
3.5 Poles and Stability 37

3.5.2 Stability Condition

A linear system is asymptotically stable if after an initial perturba-


tion it returns to the equilibrium point. Our discussion of the pole
locations has shown that this is the case if the system poles are loca-
ted inside the unit circle. This means that all roots of the characte-
ristic equation

(z-z 1 ) (z-z 2 ) ... (z-zm) 0 (3. 5-9)

are such that

I z.l I < 1 i 1 ,2, ... ,m (3.5-10)

3.5.3 Stability Criterion through Bilinear Transformation

The bilinear transformation

z-1
w = z+1 ( 3. 5-11)

maps the unit circle of the z-plane onto the imaginary axis of the w-
plane. Therefore the lefthalf w-plane maps the inside of the unit cir-
cle. Because the w-plane plays the same role as the s-plane for conti-
nuous time systems the Hurwitz- or Routh-stability criterion can be
applied. For this purpose the inverse transformation

1+w
z = 1-w (3.5-12)

is introduced into the denominator

m m-1
A(z) = z +a 1 z + ... +am ( 3. 5-13)

of a transfer function

B (z)
G(z) = A(z) (3. 5-14)

leading to
1+w m-1
+a1(1-w) + ... +a (3.5-15)
m

Multiplying by (1-w)m yields

A (w) = ( 1+w) m + a 1 ( 1+w) ~1 ( 1-w) + ... +am ( 1-w) m (3. 5-16)

Now the Hurwitz criterion can be applied for the polynomial


:1\(w) = 0. (3.5-17)
38 3. Discrete-time Systems

This criterion states that the coefficients have to exist and have to
carry the same algebraic sign. Then the system is not monotonic unstable.
To avoid oscillatory unstability the Hurwitz-determinants have to be
positive for systems higher than second order (or the Routh criterion
has to be falfilled).

Example 3.5.1:

The second order denominator is

A(z) z2 + a 1z + a 2 •

Then
2
A(w) (1+w) +a (1+w) + a 2
1-w 1 1-w
and
A(w) (1+w) 2 + a 1 (1+w)(1-w) + a 2 (1-w) 2

(1-a 1+a 2 )w 2 + 2(1-a 2 )w + (1+a 1 +a 2 ).

If all coefficients of A(w) do exist and have positive sign it follows


for second order systems in general

A(-1) 1-a 1+a 2 > 0

A(1) 1+a 1+a 2 > 0

For higher order systems the application of this stability criterion


is not so straightforward because of the calculation of the Hurwitz-
determinants. For other stability criteria, as for example that of
Schur-Cohn-Jury, the reader is referred to the literature.
3.6 State Variable Representation 39

3.6 State Variable Representation

In modern control theory and especially in the design of multivariable


control systems, the state variable representation plays a very impor-
tant role. There are several ways to introduce the state variable re-
presentation of discrete-time systems. Two methods are presented here:
in the introduction by direct substitution in the difference equation,
and in solving the continuous-time vector differential equation for a
linear system with zero-order hold.

The Vector Difference Equation Based on the Difference Equation


(Direct Programming Method)

The substitution of k by k+n in the difference equation (3.4-13) leads


to

y(k+n) + a 1y(k+n-1) + ••• + any(k)


= b 0 u(k+n) + b 1u(k+n-1) + ••• + bnu(k). (3.6-1)

The corresponding z-transfer function is

G (z) ~ (3.6-2)
u(z)

The following state variables are introduced


y (k) = x1 (k) (3.6-3)
y (k+1) x 2 (k) x1 (k+1)
y(k+2) = x 3 (k) x 2 (k+1)

y(k+n-1)=xn(k)=xn_ 1 (k+1)
x (k+n) = xn (k+1)

Substitution of Eq. (3.6-4) in Eq. (3.6-1) gives, for bn


••• , bn-1 = o,
y (k+n) = xn (k+1) (3.6-5)
40 3. Discrete-time Systems

Eq. (3.6-4) and Eel. (3.6-5) lead to the vector difference equation

x, (k+1) 0 0 0 x, (k) 0

x 2 (k+1) 0 0 0 x 2 (k) 0
+ u(k) (3. 6-6)

0 0 0

xn (k+1) -a -a -a -a, xn(k)


n n-1 n-2

and the output equation

y(k) (1 0 OJ
l x, 1 (k)
x 2 (k)
(3 .6-7)

xn(k)

or, after introduction of a state vector ~I a system's matrix ~I a con-


trol vector b and an output vector c:

~(k+1) ~ ~(k) + b u(k) (3. 6-8)


T
y(k) ~ ~(k). (3.6-9)

Setting bn = 1 and b 0 , ... ,bn-l = 0, Eq. (3.6-2) and Eq. (3.6-3) lead to

y(z) = n n-l u(z) = x 1 (z). ( 3. 6-10)


z +a 1 z + ... +an

If, however, bn f 1 and b 0 , ... ,bn-l f 0, then Eq. (3.6-2) and Eq. (3.6..,-10)
give

or
(3.6-11)

From Eq. (3.6-4)

(3.6-12)

is also valid. Here xn(k+1) comes from Eq. (3.6-5), so finally we have

y(k) (bn-boan)x 1 (k)+(bn_ 1 -b 0 an_ 1 lx 2 (k)+ ... +(b 1 -b 0 a 1 )xn(k)+b 0 u(k).


(3.6-13)

In vector notation, the extended output equation becomes

y(k) = ((bn-boan) ... (b 1 -b 0 a 1 )J~ 1 (k)l+ b 0 u(k)

lxn(k)J
3.6 State Variable Representation 41

y(k) = ~T~(k) + du(k). (3.6-14)

For b 0 = 0, i.e. for a non-jumping system, Eq. (3.6-14) gives

y(k) (3.6-15)

Fig. 3.6.1 shows a block diagram in "regulator form" of the state re-
presentation for a difference equation directly taken from Eq. (3.6-4),
Eq. (3.6-5) and Eq. (3.6-12). The vector difference equation and the
output equation are

~(k+1) =A ~(k) + ~ u(k) (3.6-16)

y(k) = ~T~(k) + du(k) (3.6-17)

(c.f. Fig. 3.6.2).

..

Figure 3.6.1 Structural diagram of the (regulator form) state repre-


sentation of a difference equation

ulk) ylk)

Figure 3.6.2 Block diagram of a first order vector difference equation


42 3. Discrete-time Systems

The Vector Difference Equation Based on Vector Differential Equation

After defining a state vector ~(t) of dimension rn, the differential


equation (3.4-11) can be written as a vector differential equation

~(t) = i ~(t) + ~ u(t)

with output or observation equation

T
y(t) = ~ ~(t) + d u(t).

The solution of this equation for the initial state ~(0) is


t
~(t) _!(t)~(O) + f _!(t-T)~ U(T)dT, (3.6-19)
0
where
At (3.6-20)
! (t) e-

is a transition matrix, defined by the series expansion

At ; <it)\) (3.6-21)
e-
v=O v!

For sampled input and output signals, the state representation can be
simply derived from Eq. (3.6-18) and Eq. (3.6-19) of the linear process
is followed by a zero-order hold as in Fig. 3.4.2. Then for the input
signal

u(t) = u(kT 0 ) for kT 0 ~ t < (k+1)T 0

the state equation becomes for initial state ~(kT 0 ) for kT 0 ~ t < (k+1)~

t
~(t) = _!(t-kT 0 )~(kT 0 ) + u(kT 0 ) f _!(t-T) ~ dT. (3.6-22)
kT 0
If the solution for only t (k+1)T 0 is of interest, then
(k+1l T 0
~((k+1)T 0 ) = _!(T 0 )~(kT 0 ) + u(kT 0 ) f _!((k+1)T 0 -T) b dT
kT 0
With the substitution q (k+1)T 0 -T and with dq = -dT

To
~ (k+1) ,!(T 0 )~(k) + u(k) f _!(q) b dq. (3.6-23)
0
3.6 State Variable Representation 43

Introducing the abbreviations

A (3.6-24)

b (3.6-25)

the veetor differenee equation is obtained

~(k+1) = ~ ~(k) + ~ u(k), (3.6-26)

using Eq. (3.6-18) the output equation

y(k) = ~T~(k) + d u(k) (3.6-27)

d.

For the calculation of Eq. (3.6-24) and (3.6-25) see e.g. [2.19].

Canonical Forms

For process models in state representation, several realizations are


possible by applying Zinear transformations

~t = :!: x. (3.6-28)

The transformed representation then satisfies:

~t(k+1) = ~t ~t(k) + ~t u(k) (3.6-29)

T
y(k) = ~t ~t(k) + d u(k) (3.6-30)

with

~t T b
} (3.6-31)
T T -1
~t c T .

tanonieaZ forms of state representation are specially structured forms


of ~t' ~t and ~t· Some important canonical forms are given in Table
3.6.1 and Fig. 3.6.3.

Processes with Deadtime

If a process with a dead timed= Tt/T 0 =


1,2, ••• is to be described
by a state model which possesses a dynamic part with lumped parameters,
one has to distinguish whether the dead time exists at the input, at
44 3. Discrete-time Systems

Table 3.6.1 Canonical forms of state representation

~t ~t ~t
remarks

diagonal All eigenvalues z 1 1z 2 1


••• I Zm different.
form
z, 0 ... 0 b1 1 D c1 ID Correspond, to partial
0 z2 ... 0 b2 1 D c2 1 D fraction expansion.
= = ~t ~~llows from: !-1~t=
~! . T
~t or ~t = [ 1 1 ... 1 J
0 0 ... z
rn
b
m 1 D
cm
1 D
i f process controllable
and observable

column 0 ... 0 -a
rn
1 cTb
- -
companion
canonical
1 ... 0 -a
rn-1
0 eTA b
- - -
T
~t =[c1 1 s c2 1 s ... ern Is J
form T
~t=[g ( 1) 1 g(2) 1 ••• g(rn)J

0 1 -a, 0 cTArn-1b
- - -

controlla- 0 1 ... 0 0 b
rn
ble cano-
nical form
(regulator .
form) 0 0 .... 1 0 b2
-a -a 1
rn rn-1 .. -a, b1

row 0 1 ... 0 cTb


- - 1
T
~t=[b1 1 B b2 1 B ... brn 1 BJ
companion eTA b 0 T= [ g ( 1 ) 1 g ( 2 ) 1
canonical - - - !2.t •• •I g (rn) J
form
0 0 1
-a -arn-1 .. -a, cTArn-1b 0
rn - -
-
observa- 0 ... 0 -a
rn
-b
rn
0
ble
canonical
1 ... 0 -a
rn-1
b rn-1 0
form

0 ... 1 -a, b1 1
'-- -
3.6 State Variable Representation 45

STRUCTURAL DIAGRAM

y(k)

z
0
H:;;
z~
4;0
IJ.<Ii;
;:;;
O...:l
u«:
u
ZH
:;;z
::::>0
...:JZ
o«:
uu

~
1"10
,_:jli;
o:1
j ~u_l_k__) <>----1
...:JU
OH
~z
E-<0
zz
04;
uu

;:;;
z~
00
Hli;

~H
P-<4;
:;;u
OH
uz
0
:s:z
o«:
~u

u(k)

Xmlk) y[k)

Figure 3.6.3 Structural diagrams of canonical state representations


46 3. Discrete-time Systems

the output or between the state variables.

For dead time at the input we have

.?! (k+1} ~ .?! (k) + .!?. u (k-d) (3.6-32)

y(k) = ~T,?!(k) (3.2-33)

and for dead time at the output

.?! (k+1) ~ .?! (k) + b u(k) (3.6-34)

y (k+d) ~T.?! (k) • (3.6-35)

The dead time can be represented as a series of d delay elements at the


input or at the output, and the state variables of this series can be
included in the state vector. For a dead time at the input (c.f. Fig.
3.6.4) the following state representation holds

.?!u(k+1) = ~u.?!u(k) + !?_uu(k) (3.6-36)

U (k-d) CTX (k) (3.6-37)


-u-u
with
XT(k) [u(k-d) u (k-d-1) u(k-1)]
-u

0 0 0 0
0 0 1 0 0
A b (3.6-38)
-u -u
0 0 0
0 0 0 0

T 0]
c [1 0 0
-u

u(k~Glu(k-~f:11
~

Figure 3.6.4 State representation of a linear process with dead time


at the input
3.6 State Variable Representation 47

(3.6-32) and Eq. (3.6-36) the state representation gives

lf l
Combining Eq.

X
-u
(k+1)
l
r~(k+1) "r ~ E2~w(k) 0
-
A
-u
X
-u
(k) b
-u
u(k) (3.6-39)

y(k) C.s? .QJ


[ ~ (k) ] (3.6-40)
X (k)
-u
or concisely

~d (k+1) ~d ~d(k) +Ed u(k) (3.6-41)

y(k) = £.dT ~d(k). (3.6-42)

If the controllable canonical form is chosen for ~' the extended system
matrix becomes

0 0 •.. 0 0 0 .•• 0

0 0 0 0 0 0
-a -a .-a 1 I 0 0
m m-1 (3.6-43)
~d ----------~------
0
I
0
I
0 1 . . 0

I
I
0 0
I
0 . . 0 I 0 0 0

The state representation of dead time processes is treated for example


in [3.1], [3.2]. (c.f. section 9.1.).

Solution of the Vector Difference Equation

Two possible solutions of the vector difference equation

~(k+1) =~ ~(k) + .!:?_ u(k) (3.6-26)


T
y(k) = £_ ~(k) + d u(k) (3.6-27)

are presented.

A first possibility which corresponds to the recursive solution of dif-


ference equations for a given input signal u(k) and initial conditions
~ (0) is
48 3. Discrete-time Systems

~(1) !::_ ~(0) + b u(O)


~ (2) !::_ ~(1) + b u(1)
t::_ 2 ~(0) + Ab u(O) + b u(1)

k
~ (k) !::_k~(O) + L Ai- 1 b u(k-i). (3.6-44)
i=1- -
homoge- particular
neous solution (con-
solution volution sum)

where
Ak A•A ... A.
'-....--'
k

y(k) can finally be obtained from Eq. (3.6-27). If u(k) is given expli-
citely as a z-transform, a second possible solution can be used.

The z-transform then furnishes

}{~(k)} = ~(z)

}{~(k+1)} = z[~(z)- ~(0)]


(applying the theorem of shifting to the left) .

Then, it follows from Eq. (3.6-26)

z[~(z) - ~(0) J !::_ ~(z) + b u(z) (3.6-45)

or
(3.6-46)

and with Eq. (3.6-27)

y(z) = ~T[z~-!::_]- 1 z ~(0) + [~T~~-!::_J- 1 Q + d]u(z). (3.6-47)

Comparing Eq. (3.6-46) and Eq. (3.6-44) the following condition is ob-
tained

(3.6-48)
3.6 State Variable Representation 49

Determination of the z-Transfer Function

With initial state ~(0) = Q and Eq. (3.6-47) it follows that

G(z) = ~~~l = ~T[z!-~]- 1 £ + d

cT adj [zf-~] £ + d det [zf-~]


(3 .6-49)
det [zf-~]

The uncancelled denominator of the z-transfer function results in the


characteristic equation

(3.6-50)

Determination of the Impulse Response

Applying Eq. (3.6-27) and Eq. (3.6-44) taking ~(0) 0

k
y(k) = E cT Ai- 1 b u(k-i) + d u(k). (3.6-51)
i=1

Introducing
k=O
u(k) =
{~ k>O

the impulse response is given by

g(O) d
(3.6-52)
g(k) cT Ak- 1 b for k>O.

Here we obtain the following relationship between the impulse response


and the z-transfer function (c.f. Eq. (3.4-22),

G(z) = E g(k) z-k = d + E -k


z (3 .6-53)
k=O k=1

Controllability

A linear dynamic process is said to be controllable if there exists a


realizable control sequence u(k) which will drive the state for the
finite time interval N from any initial state ~(0) to any final state
~(N).

To obtain the input u(k) one can start with Eq. (3.6-44). Then, for a
process with one input
50 3. Discrete-time Systems

N N-1
~{N) ~ ~(0) + [~, A b ••• ~ ~] ~N (3.6-54)

with
T
~N [u(N-1)u(N-2) ... u(O) ]. (3.6-55)

Here, the unknown input can be determined uniquely for N m

-1 m
u Q [x(m) -A x(O)] (3.6-56)
-m -s - --

L~ Ab (3.6-57)
•.C
ll.

Q ;J. 0.
det -s (3.6-58)

gs is called the controttabitity matrix. This matrix should not have


linearly dependent columns or rows. Hence, for a controllable process
we have

Rank Qs = m (3.6-59)

with m as the order of A. For N < m no solution exists for ~, and for
N > m no unique solution.

Observability

A linear dynamic process with output variable y(k) is called observable


if any state ~(k) can be obtained from a finite number of output variab-
les y(k), y(k+1), ... , y(k+N-1). Conditions for the solution of this
observability problem can be derived as follows.

With the output equation

T
y(k) = ~ ~(k)

and the vector difference equation (3.6-26) a system of equations is


obtained

T
y(k) ~ ~(k)

y (k+1) ~T~ ~(k) + cTb u(k)


y(k+2) ~T~ 2 ~(k) + eTA b u(k) + cTb u(k+1)

y(k+N-1) ~T~N-1~(k) + l 0,~Tb-'~T~ _,


b ... T N-2b]
,~ ~ - ~N· (3.6-60)
3.7 Mathematical Models of Processes 51

Here
T
~ = [.u(k+N-1) .•• u(k+1) u(k) ]. (3.6-61)

If the input vector ~~ is completly known, m equations are required for


a unique determination of the m unknowns of the state vector ~(k) in
the system of equations (3.6-60). Hence, N = m. The system of equations
(3.6-60) then becomes

(3.6-62)

whence

y~ [y(k) y(k+1) ••• y(k+m-1)]

uT [u(k+m-1) ••• u(k+1) u(k)]


-m
. ( T T
Q
-B £_ £_ ~ .•• £_T~m-1JT
0

Then the saught state vector is

_!!(k) (3.6-63)

if det gB f o. A dynamic process is, therefore, observable if the obser-


vability matrix gB has
Rank gB = m. (3.6-64}

gB therefore has m linearly independent rows.

3. 7. Mathematical Models of Processes

Since the design of sophisticated and well adjusted control algorithms


presupposes the knowledge of mathematical process models, this section
discusses some ways of obtaining discrete-time process models with lum-
ped parameters, making use of examples.
52 3. Discrete-time Systems

3.7.1 Basic Types of Technical Process

The dynamic behaviour of technical processes has many facets. In the


following, some important characteristics are listed. Technical process-
es are characterized by the transformation and/or the transport of ma-
terials, energy and/or information; they can be classified according to
- the amplitude-time-behaviour of the signals
- the type of transport of materials, energy, information
- the class of mathematical models

a) Amplitude-time-behaviour of the Signals

From Fig. 3.7.1 one can distinguish


o continuous amplitude - continuous time
(+processes with continuous signals)
o continuous amplitude - discrete time
(+processes with discrete-time (sampled) signals.
The pulses appearing can be amplitude modulated,
width modulated or frequency modulated.)
o discrete amplitude - continuous time
(+processes with stepwise signals)
o discrete amplitude - discrete time
(+processes with sampled digital signals)
o binary amplitude - continuous or discrete time
(+processes with binary signals) •

continuous discrete

., continuous X !• • •
I I I
•I •I f ' ' 't

discrete
...
xi • • • • •
t I I I I I ' ' ' •t
binary
~bono. t

Figure 3.7.1 Amplitude-time-behaviour of different signals


3.7 Mathematical Models of Processes 53

b) Type of Materials, Energy and Information Transport

According to the transport of materials, energy and/or information,


technical processes can be divided as follows:

o Continuous Processes
Materials, energy, information flow in continuous streams
- Once-through operation
Signals: many combinations as in Fig. 3.7.1 are possible
- Mathematical models: linear and nonlinear, ordinary or partial
differential equations or difference equations
- Examples: pipeline, electrical power plant, electrical cable for
analog signals. Many processes in power and chemical industries.

o Batch Processes
Materials, energy, information flow in "packets" or in inter-
rupted streams
- Process operation in a closed space
-Signals: many combinations such as in Fig. 3.7.1 are possible
- Mathematical models: mostly nonlinear, ordinary or partial diffe-
rential equations or difference equations
Examples: processes in chemical engineering: processes for che-
mical reactions, washing, dyeing, vulcanising.

o Piece-good Processes
Materials, energy, information are transported in "pieces"
- Process operation: piecewise
- Signals: mostly discrete (binary) amplitude. Continuous or
discrete time.
- Mathematical models: flow schemes, digital simulation programs
- Examples: many processes in manufacturing technology. Processing
of work pieces, transport of parts, transport in storages.

c) Classes of Mathematical Models

The classes of mathematical models which describe the transient be-


haviour of continuous and batch processes can be divided as follows:
Ordinary differential equations Partial differential equations
(lumped parameters) (distributed parameters)
linear nonlinear
linear in the parameters nonlinear in the parameters
time invariant time variant
parametric nonparametric
continuous signals discrete signals
54 3. Discrete-time Systems

The definitions of these ideas are found in many wellknown books on


systems theory and control engineering.

In this book, the design of control algorithms is treated with particu-


lar reference to continuous and batch processes, for which models can
be linearised around an operating point. Since for digital control sys-
tems mainly mathematical models with discrete-time signals are of inte-
rest, some methods for obtaining these models are treated in the next
sections.

3.7.2 Determination of Discrete-time Models from Continuous Time


Models

This section describes how discrete-time models can be derived if the


models of lumped parameter processes are already given for continuous
signals.

For small sample times difference equations can be obtained by discre-


tizing differential equations. Here, backward differences are taken
using the following approximations

f(k) - f(k-1)
To

!::.£ (k) - f..£ (k-1)


(3. 7-1)
T2
0

f(k) - 2f(k-1) + f(k-2)


T2
0

For larger sample times the difference equations and the z-transfer
functions are calculated most appropriately by the use of z-transform
tables. For this, either the impulse response g(t) = f(t) in analyti-
cal form or the s-transfer function G(s) = f(s) is required, and the
corresponding f(z) = G(z) are taken from the z-transform tables. A par-
tial-fraction expansion has to be performed for higher-order processes
to obtain those terms of G(s) which are tabled. If there is a zero-order
hold, Eq. (3.4-10) has to be used, and from the table G(s)/s has to be
taken. (In the following, G(s) has to be replaced by G(s)/s, as in
3.7 Mathematical Models of Processes 55

Example 3.7.1.) For example, the transfer function

p+1
l: c.sj
·=o J
G(s) (3.7-2)
1
(s-s )P II (s-s.)
0 i=1 1

has to be expanded into

~ Aog 1 Ai
G(s) + l: -=--
q=1 (s-s 0 )q i=1 (s-si)

Here, the coefficients can be calculated using the residues

dp-q p ]
A [ - - [(s-s )-G(s)]
oq (p-q)! dsp-q 0
s=s
0
(3.7.3)
A.
1

The poles of G(s) and of G(z) are directly mapped through z eTos, as
in section 3.5.1.

Since the coefficients of the z-transfer function and the difference


equation are identical, the difference equation can be written as soon
as G(z) is known.

Example 3.7.1

Problem:
The z-transfer function of the process

G(s) = ~
u(s) = ~-=---~=K-~-~~---
(1+T s) (1+T s) ... (1+Tms)
1 2
with zero-order hold is to be calculated.

Solution:
1. Partial-fraction expansion of G(s)/s
56 3. Discrete-time Systems

1 1 1
KTT ··· T
G(s) 1 2 m
s
( s+} l
m

A1 A
m
+ --1- + ... + - 1 -
s S+.):; s+T
1 m

m
-K n T.
j=1 J
A. Hi i 1, ••• , m
.]_ m
n (- 1 + ..!_)
T. T.
j=1 .]_ J
jfi

2. Search for the corresponding z-transforms


Ao m A..]_
From G(s)
s +
- l: --1
s
i=1 S+T,
.]_

it follows that

3. From Eq. (3.4-10) we have

HG(z) = (1-z- 1 ll{G~s)}

To To
m T. 1 m -1 m -T:" -1
A 0 IT (1-e .L z- ) + l: ( 1-z ) A. n ( 1-e J z )
i=1 i=1 .Lj=1
---------~=---------jti--------
_To
m T..]_ -1
n ( 1-e z )
i=1

For the parameters

m = 3; K = 1; T 1 = 10 sec; T 2 = 7,5 sec; T 3 = 5 sec

the parameters of G(z) are given in Table 3.7.1 for different sample
times T0 . With increasing sample time the following trends can be reco-
gnized:
3.7 Mathematical Models of Processes 57

a) The magnitudes of the parameters a. decrease.


l.
b) The magnitudes of the parameters bi decrease.
c) The sum of the parameters, Lbi = 1 + Lai, increases.

For larger sample times we have la 3 1 << 1+Lai and lb 3 1 << Lbi' so that
a 3 and b 3 can be neglected. In practice this means that a second order
model is obtained.
D

Table 3.7.1 Parameters of the z-transfer function G(z) for the process
1
G(s) = (l+lOs) (l+?.Ss) (l+Ss) with zero-order hold, for
different sampling times T0 .

T0 [sec 2 4 6 8 10 12

b1 0.00269 0.0186 0.05108 0.09896 0.15867 0.22608

b2 0.00926 0.0486 0. 1086 0.17182 0.22570 0.26433

b3 0.00186 0.0078 0.01391 0.01746 0.01813 0.01672

a, -2.25498 -1.7063 -1.2993 -0.99538 -0.76681 -0.59381

a2 1.68932 0.958 0.54723 0.31484 0.18243 0.10645

a3 -0.42035 -0.1767 -0.07427 -0.03122 -0.01312 -0.00552

Lb. 0.01399 0.0750 0.17362 0.28824 0. 40250 0.50712


l.
= 1+La.
l.

Another method for calculating HG(z) from G(s), which makes no use of
z-transform tables, consists in the following approximation due to
Tustin [3.3]

2 z-1 (3. 7-4)


s ""' T 0 z+1 ·

To evaluate this approximation, an especially simple derivation is con-


sidered.

The integral equation

t
y(t) T J u(t)dt (3.7-5)
0
58 3. Discrete-time Systems

means that after Laplace-transform

.YJ2j_ (3.7-6)
u(s) Ts

If the continuous integration is first replaced by rectangular integra-


tion, then for small T 0 the following equations are valid

To k
u(i-1)
y(k)
""or L
i=1

To k-1
y (k-1)
"" or L
i=1
u(i-1)

To
u(k-1)
y (k) -y (k-1)
""or
-1 To -1
] u (z) z
y(z)(1-z
""or
-1
T0 z To
~ (3.7-7)
u(z) "" T (1-z
-1
) T (z-1)

Through correspondence of Eq. (3.7-6) and Eq. (3.7-7) for small sample
times we have

s + T1 (z-1).
0

A better approximation of the continuous integration is obtained using


trapezoidal integration

To k
1
y(k)
"" or i=1
L
2 (u (i) +u(i-1)]

To k-1 1
Lu(i) + u(i-1)]
y(k-1)
""or i=1
L
2

To
y(k)-y(k-1) (u(k) + u(k-1)]
"" 2T

To z+1
~ (3. 7-8)
u(z) "" 2T z-1

Hence, for small T 0 the correspondence

2 z-1 (3.7-9)
s + T 0 z+1
3.7 Mathematical Models of Processes 59

results. This correspondence can also be obtained by the series expan-


sion of z =eTas,

s = 1 ln z " "2[
- ~ 1 + (z-1) 3 + ..• ] (3.7-10)
To To z+1 3(z+1)3

stopping after the first term.

Example 3.7.2

For the process

1
G(s) = (1+10s) (1+5s)

with zero-order hold, Table 3.7.2 gives the exact parameters of the z-
transfer function HG(z) and the parameters resulting from the approxi-
mation Eq. (3. 7-4)

HG ( z) = G ( s) sI 2 z-1
T0 z+1

for different sample times.

- z-1 2
Table 3.7.2 Parameters of HG(z) and HG(z) for s = z+ 1 and the resul-
~
ting maximum error of the transient fun8tion for HG(z).
1
G(s) = (1+10s) (1+5s) •

To bo b1 b2 a,, a2 Eb.
~
(L'ly/yoo)ma X
[sec] for t[sec ]
1 0.00906 0.00819 -1.72357 0.74082 0.01725
0.00433 0.00866 0.00433 -1.72294 0.74026 0.01732 +0.024 6

2 0.03286 0.02690 -1.48905 0.54881 0.05976


0.01515 o. 03030 0.01515 -1.48485 0.54546 0. 06061 +0.048 6

0.10869 0.07286 -1.11965 0.30119 0.18155


4
0.04762 0.09524 0.04762 -1.09524 0.28571 0.19048 +0.087 8

0.20357 0.11172 -0.85001 0.16530 0.31529


6
0.08654 0.17308 0.08654 -0. 78846 0. 13462 0.34615 +0.124 6

0.30324 0.13625 -0.65123 0.09072 0.43949


8 -0.53968
0.12698 0.25397 0.12698 0.04762 0.50794 +0. 146 8

0.48833 0.15708 -0.39191 0.02732 0. 63541


12
0.20455 0.40909 0.20455 -0.15909 -0.02273 0.81812 +0.20 0
60 3. Discrete-time Systems

In this table, the maximum errors of the resulting transient function


y(k) are also given. Here

~y(k) = y(k) - y(k)

y(oo) = lim y(k).


k+oo
By using this approximation of HG(z), the parameter b 0 appears in +o
HG(z). Hence, a structural difference arises. For small sample times
T0 s 2 sec the parameters a 1 and a 2 agree relatively well and the maxi-
mum errors of the transient function are less than 5 %. However, with
an increasing sample ti~e the errors become larger. If T 95 is the sett-

ling time such that the output reaches 95 % of the final value y(oo) of
the transient function, for T 95 = 37 sec Table 3.7.2 shows that:

If an error in the transient function

is allowed, then the maximum sample time is

17.5 to 8.
0

3.7.3 Simplification of Process Models for Discrete-time Signals

For linear process models with transfer functions of the form

m
IT (1+T 8 s) -T s
G(s) ti& 8=1 e t (3.7-11)
u(s) m-2
(1+2DTs+T 2 s 2 ) IT (1+T s)
cx=1 ex

-T s
e t

the dynamic behaviour is approximately the same in both open and closed
loop if the generalized sum of time constants

m-2
2DT + E T
(l
(3.7-12)
cx=1

remains constant, [3.4], [3.5]. That means that the sum of energy, mass
or momentum which is stored during a transient process has to remain
3.7 Mathematical Models of Processes 61

constant if process models are simplified. It is to be supposed that


this condition, at least for small sampling intervals, is also valid
for discrete-time process models. If only the input/output behaviour
is of interest, the continuous model should be simplified as far as
possible before it is converted to a discrete-time model. For example,
1 small time constants Ti should be replaced by a dead time

T =
t

or poles and zeros which are approximately equal should be cancelled


considering Eq. (3. 7-12), [3.4], L3.5].

If process models are obtained through identification methods, the dead


time has to be chosen such that the resulting model order m becomes as
small as possible.

Now, some rules for the simplification of discrete-time models are gi-
ven. A z-transfer function of form

G(z) ~ (3.7-13)
u(z) -1
a 1z +

is assumed. Changes in the parameters ai and bi result, in contrast to


normalized continuous models, in changes in both the dynamic behaviour
(normalized transient function) and the static behaviour (gain) • There-
fore, conditions for changes of the parameters ai and bi should be de-
rived for which - during a transient process - the stored quantities
and the gain remain constant. The stored quantities for a dynamic pro-
cess with proportional behaviour are, corresponding to the continuous
case,

A' T0 L [u(kl - y(k)]. (3.7-14)


k=O
If a step function with height u 0 is chosen as input signal and if

y(k) ~ K u 0 for k ~ 1

with K as the gain

m m
K L b.
~
I L a.
~
(3.7-15)
i=O i=O

and with A" A' I Touo it follows that:


62 3. Discrete-time Systems

1
A" "" A l: [u(k)-y(kl] (3.7-16)
uO k=O

The difference equation of the process leads to the following system of


equations

b 0 u(O)

+ b 0 u(1) + b 1 u(O)

-a 1 y(m-1)

l+m
a0 l: y(k)
k=O
(3.7-17)
Hence, with u(k) = u 0 fork ~ 0 and Eq. (3.7-15)

m 1 m
l: ai l: y(k) = (1+1)u 0 l: bi+u0 Lmb 0 +(m-1)b 1 + ... +bm_ 1 ]
i=O k=O i=O

Substituting in Eq. (3.7-16) leads to

A= m
(3.7-18)
l: a.
i=O 1

For small parameter changes one obtains with

(3.7-19)

the approximation relation

t>.A ""_1_ (3.7-20)


m
l: a.
i=O 1
3.7 Mathematical Models of Processes 63

Considering changes of the stored quantity A it follows that:

a) The larger i, the smaller the effect of changes ~ai or ~bi

l
3A/3ai (m-i)-A

3A/aa 0 rn-A
(3.7-21)
aA.;ab. m-i
1.

3A/ab 0 m

b) Using

aA.;ab. (m-i)
1.
= K
3A/3ai A- (m-i)

leads to
> >
K for A 2 (m-i).
< <

From Eq. (3.7-15) for small parameter changes it follows that, for the
gain K 11

K m [ 3K
l:-
""" i=O aai
~a.
1.
+ l!S_
3b.
1.
~bi]
m
l: (~b.-~a.). (3.7-22)
m i=O 1. 1.
l: a.
i=O 1.

With ~A = 0 and ~K = 0 one obtains two equations for determining ~ai


and ~bi. Since form~ 1 there are always more than two unknowns ~ai
and ~bi, several solutions are possible.

A first solution is obtained directly from Eq. (3.7-20) and Eq. (3.7-22)

~a.
1.
~b. i 0, 1 1 • • • I m
1.

and
m
l: ~a. 0. (3.7-23)
1.
i=O

Hence, for A """ constant and K """ constant, small parameter changes are
permitted if ~a 1 = ~b 1 , ~a 2
+~a = 0.
m

However, other solutions exist, as shown in the following example.


64 3. Discrete-time Systems

Example of the Simplification of Process Models (Reduction of the Model


Order)

For the sampling time T0 = 10 sec one obtains from Table 3.7.2

0.1587 z- 1 +0.2257 z- 2 +0.0181 z- 3


HG(z)
1-0.7668 z- 1+0.1824 z- 2 -0.0131 z- 3

This process has the characteristic parameters K = 1 and A 2.75. Since


a 3 << 1 + Eai and b 3 << Ebi (see Table 3.7.2) it is supposed that this
process can also be described by a model of m = 2.

It follows from Eq. (3.7-20) that:

and from Eq. (3.7-22):

Now, a3 = 0 and b 3
- = 0 are set, i.e.

In these two equations, four unknowns remain, i.e. two variables can be
chosen freely. It is assumed, for example,

Then it follows that:

2~a 1 + (a 3 -~b 2 ) - A(~a 1 +a 3 -a 3 ) 0

-0.75~a 1 - ~b 2 = 0.0131

and

~a 1 + (a 3 -~b 2 ) + (-a 3 +b 3 ) 0

~a 1 - ~b 2 = -0.0181

and here

-0.0178 -0.0131

+0.0003 0.

Finally, the following approximation is obtained


0.1587z- 1+0.226o- 2
HG(z) = -1 2.
1-0.7847z +0.1693z
3.7 Mathematical Models of Processes 65

This approximation has K 1.000. A has changed by 1 °/oo.


D

For the simplification of discrete-time process models ~A = 0 and ~K =


0 can be assumed, based on the hypothesis thatthe stored physical quan-
tities during a transient do not change. Based on Eq. (3.7-20) and Eq.
(3.7-22) conditions are obtained for model simplification.

A reduction of the model order can also be obtained by neglecting those


eigenvalues having only a small influence [3.14]. However, the decision
as to which eigenvalues can be neglected, remains unanswered and the
residual part of the model is not corrected.

3.7.4 Process Modelling and Process Identification

Mathematical process models can be obtained by theoretical or by expe-


rimental process analysis. [3.4], [3.6] to [3.13].

In theoretical analysis (theoretical modelling) of processes the model


is determined by the stating of balance equations, state equations and
phenomenological laws. One then obtains (for continuous signals) in ge-
neral a system of ordinary and/or partial differential equations which
laeds to a theoretical model of the process with determined structure
and parameters if the model can be solved explicitly. For the deriva-
tion of discrete-time models the following methods are recommended:
approximation of continuous models by lumped-parameter models, sim~li­

fication of the continuous models, then discretization or z-transforma-


tion according to section 3.7.2.

In the case of experimental analyis of the process (process identifi-


cation) the mathematical model of the process is determined by using
measured signals. Input and output signals of the process are evalua-
ted by using identification methods such that their relationships are
expressed by a mathematical model. This model can be nonparametric, as
e.g. transient function or frequency response in tabular form, or para-
metric, for example a parametric differential or difference equation.
Nonparametric models are obtained by evaluation of the measured sig-
nals using Fourier-analysis or correlation analysis, whilst parametric
models are obtained by applying the methods of step response or fre-
quency response fitting or by parameter estimation methods. For the
66 3. Discrete-time Systems

design of control algorithms for digital processors, parametric models


are especially suitable, since modern systems theory is mainly based on
these models as they explicitly contain the parameters, and as the syn-
thesis of control algorithms can be performed directly.

For identification of discrete-time parametric models, parameter estima-


tion methods are especially suitable. Then for linear time invariant
processes, models of the form

-1 -1
y(z) B(z ) z-du(z) + D(z ) v(z) (3.7-24)
A(z- 1 ) C(z- 1 )

process model disturbance


model

(c.f. Eq. (3.4-14) and Eq. (12.2-31)) are assumed, and the unknown parameters of the process and disturbance models are estimated based on the measured signals u(k) and y(k) [3.13]. For parameter estimation, methods such as least squares, instrumental variables or maximum likelihood can be used, in nonrecursive or recursive form. In recent years, using on-line and off-line computers, methods of process identification have been extensively developed and tested in practice. Many linear and nonlinear processes with and without perturbation signals, in open and closed loop, can be identified with sufficient accuracy. There are program packages which are easy to operate and which contain methods for the determination of the model order and the dead time (c.f. chapters 23, 24 and 29).
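To illustrate the parameter estimation idea in present-day notation, the following minimal Python sketch (an added illustration, not one of the program packages referred to above) estimates the coefficients of a second-order difference equation model by ordinary least squares; the process values and names are purely illustrative, and the noise-free case is used, since with noise the instrumental-variable or maximum-likelihood methods mentioned above reduce the bias of the plain least-squares estimate.

import numpy as np

rng = np.random.default_rng(0)

# illustrative "true" process: y(k) = -a1 y(k-1) - a2 y(k-2) + b1 u(k-1) + b2 u(k-2)
a = np.array([-1.5, 0.7])
b = np.array([1.0, 0.5])

N = 200
u = rng.normal(size=N)            # excitation signal
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a[0]*y[k-1] - a[1]*y[k-2] + b[0]*u[k-1] + b[1]*u[k-2]

# regression matrix: each row [-y(k-1), -y(k-2), u(k-1), u(k-2)]
Phi = np.column_stack([-y[1:N-1], -y[0:N-2], u[1:N-1], u[0:N-2]])
Y   = y[2:N]

theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print("estimated [a1, a2, b1, b2]:", np.round(theta, 4))
# recovers [-1.5, 0.7, 1.0, 0.5] for noise-free data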
B Control Systems for Deterministic
Disturbances

4. Deterministic Control Systems


Deterministic control systems are control systems that are designed for
external deterministic disturbances or deterministic initial values.
Deterministic disturbances or initial values are variables which, unlike
stochastic variables, can be described exactly in analytical form.

Common control systems can be classified as reference control systems or terminal control systems. For their discussion a process with one manipulated variable u(k), one controlled variable y(k), the state variables x(k) and the disturbances v(k) is considered, as in Fig. 4.1. With reference control systems the controlled variable y(k) has to follow a reference variable w(k) as closely as possible, resulting in control errors e(k) = w(k) - y(k) that are as small as possible, e(k) ≈ 0. If the reference variable changes with time, a variable reference control system or tracking control system is to be designed. If the controlled variable is a position, velocity or acceleration, this is also called a servo control system. If the reference variable is constant, this is called a regulator.

For terminal control systems a definite final state x(N) of the process has to be reached and held at a prescribed or free final time point N. For both reference and terminal control systems, the influence of initial values x(0) or disturbances v(k) of the process has to be compensated for as much as possible. The control problem, moreover, is such that for unstable processes a stable overall system is to be obtained through feedback.

These problems can be solved in general by applying controllers which make use of feedback of the process output y(k) or of the process states x(k). The effect of feedback can often be improved by additional feedforward control elements. In Fig. 4.1 block diagrams of control systems are presented for the case of one feedback controlled or feedforward
controlled output variable y. The following notation is used: GR for feedback controllers or feedback control algorithms; GS for feedforward controllers or feedforward control algorithms.

Figure 4.1 Block diagrams of the most important control system structures for one controlled variable: a) single loop control, b) feedforward control system, c) cascaded control system, d) state feedback

Fig. 4.1a shows a single-loop control. If the disturbance v is measurable, feedforward can be used as in Fig. 4.1b, in combination with a feedback loop for control of those disturbances which cannot be compensated by the feedforward control. If additional measured variables along the signal flow path between the manipulated variable and the controlled variable can be used for feedback, subsidiary controls or, as shown in Fig. 4.1c, cascaded control systems with major and minor control loops can be realized. A state variable feedback for stabilization or for changing the eigenbehaviour of a process is shown in Fig. 4.1d.

Figure 4.2 On the design of control algorithms

The design of control systems is generally carried out according to Fig. 4.2. Depending on the design method and the application, exact or approximate mathematical models of the processes and the signals (disturbances, reference variables, initial values) are used as the basis for design. Section 3.7 describes methods for obtaining process models. Frequently, models of the signals can only be estimated approximately. For simplicity, step changes are often assumed, though they are rare in practice.

However, by applying modern process computers it is possible to obtain


more exact models of deterministic and stochastic signals without too
much effort.

In this book the design of linear control systems for linearizable time-
invariant processes with sampled signals is considered. A schematic pre-
sentation of the most important control systems and their design prin-
ciples is given in Fig. 4.3.

Two major groups are distinguished: parameter optimized control systems


and structure optimized control systems. In the case of parameter opti-
mized control systems the controller structure, i.e. the form and the
order of the controller equation, is given and the free controller pa-
rameters are adapted to the controlled process, using an optimization
criterion or using tuning rules. Control systems are called structure
optimal if both the controller structure and the controller parameters
are adapted optimally to the structure and the parameters of the pro-
cess model. For both major groups subgroups can be distinguished. In the
case of parameter optimized controllers various lower-order controllers
of PID-type, and in the case of structure optimal controllers cancella-
tion and state controllers can be distinguished. For design, tuning
rules, performance criteria and pole placement are commonly used. Fig.
4.3 also gives the names of the most important controllers and the sui-
tability of the designs for deterministic or stochastic disturbances.

The choice of control performance plays a central role in controller


design. For the cancellation controllers the time behaviour of the con-
trolled variables is specified, either completely or after a final sett-
ling time. Little is specified of the time behaviour of the controlled
variable in the case of pole placement. The poles describe just the
single eigenoscillations. However, their superposition, the resulting
zeros and the behaviour for external disturbances are not included in
this design.

Figure 4.3 Scheme for the design of linear controllers. Major groups: parameter-optimized controllers and structure-optimal controllers. Subgroups: controllers of zero, first, second and higher order (P-, PI-, PD-, PID-type and general linear controllers) versus cancellation and state controllers. Design methods: tuning rules, pole assignment, performance criteria, prescribed behaviour, finite settling time. The scheme also names the most important controllers (cancellation controller, deadbeat controller, minimum variance controller, modal state controller, state controller) and their suitability for deterministic or stochastic external disturbances.

A more comprehensive evaluation of control system behaviour and a more directed controller design is obtained by the introduction of performance criteria. In recent years, integral criteria have been used for the design of continuous control systems by integrating control errors, squares of control errors, absolute magnitudes of control errors etc., each of which could also be weighted with time. For discrete-time signals, these performance criteria are:

I1 = Σ_{k=0}^∞ e(k)           "sum of the error"

I2 = Σ_{k=0}^∞ e^2(k)         "sum of the square of the error"

I3 = Σ_{k=0}^∞ |e(k)|         "sum of the absolute magnitude of the error"

I4 = Σ_{k=0}^∞ k |e(k)|       "sum of time multiplied by the absolute value of the error"

Since I1 cannot be used if the sign of e(k) changes, I2 is used more often. However, I2 leads to heavily oscillating behaviour; better damped behaviour of the control variable is obtained by using I3 or I4.

In analytical design quadratic performance criteria are preferred as they have mathematical advantages. This is due to the fact that when searching for extremal values a single differentiation results in relationships which are linear in e(k). Additional degrees of freedom and the possibility of a more directed influence on the damping of the control system behaviour result from adding quadratic deviations of the manipulated variable with a weighting factor r. Hence, a more general quadratic performance criterion is

I5 = Σ_{k=0}^∞ [e^2(k) + r u^2(k)]                                        (4-1)

which for state control systems leads to the form

I5 = Σ_{k=0}^∞ [x^T(k) Q x(k) + r u^2(k)].                                (4-2)

These quadratic criteria are suited for both deterministic and stochas-
tic signals, so are preferred in this book.
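As a small added illustration (not part of the original text), the following Python sketch evaluates the sum criteria I2 and I5 for given error and input sequences; the sequences themselves are arbitrary examples.

import numpy as np

def I2(e):
    """Sum of squared control errors."""
    return float(np.sum(e**2))

def I5(e, u, r):
    """Quadratic criterion with weighted manipulated variable, Eq. (4-1)."""
    return float(np.sum(e**2 + r * u**2))

# illustrative decaying error and input sequences
k = np.arange(50)
e = 0.8**k
u = 1.0 - 0.8**k
print(I2(e), I5(e, u, r=0.25))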

Independently of the choice of the performance criterion, the steady-state behaviour can be specified. For constant values of the reference variable w(k) and of the disturbances n(k), v(k) and uv(k) (Fig. 4.1a), no offset may remain in general, i.e. lim_{k→∞} e(k) = 0 and therefore, using the final value theorem of the z-transformation,

lim_{z→1} (z-1) e(z) = 0.

For the single control loop

e(z) = [1/(1 + GR(z)GP(z))] [w(z) - n(z)] - [GP(z)/(1 + GR(z)GP(z))] uv(z),      (4-3)

and therefore, for a step change 1(z) = z/(z-1), different conditions result for the various disturbances.

1. w(k) = 1(k) and n(k) = 1(k):

   lim_{z→1} 1/(1 + GR(z)GP(z)) = 0   →   lim_{z→1} GR(z)GP(z) → ∞

2. uv(k) = 1(k) (disturbance at the process input):

   lim_{z→1} GP(z)/(1 + GR(z)GP(z)) = 0

   a) lim_{z→1} GP(z) ≠ ∞:   →   lim_{z→1} GR(z) → ∞   →   lim_{z→1} GR(z)GP(z) → ∞

   b) lim_{z→1} GP(z) = ∞:   →   lim_{z→1} GR(z) → ∞

Therefore in all cases

lim_{z→1} GR(z) → ∞                                                       (4-4)

leads to a zero offset. This is given by a controller pole at z = 1,

GR(z) = Q(z) / [P'(z)(z-1)],                                              (4-5)

i.e. through integral action in the controller. A process pole at z = 1 together with a proportional controller leads to a vanishing offset for w(k) = 1(k) and n(k) = 1(k). However, this is not the case for a constant disturbance uv(k) = 1(k).

Similar requirements can be established for zero offsets in the case of linearly or quadratically changing reference signals. The controller then has to have double or triple poles at z = 1 (see [2.19]).
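The zero-offset condition can also be checked numerically; the following small sympy sketch (an added illustration with an arbitrarily chosen first-order process, not from the book) evaluates lim_{z→1} (z-1)e(z) for a proportional controller and for a controller with a pole at z = 1.

import sympy as sp

z = sp.symbols('z')
GP = sp.Rational(1, 5)/(z - sp.Rational(4, 5))   # illustrative first-order process, GP(1) = 1
w  = z/(z - 1)                                   # step command, w(z) = z/(z-1)

def offset(GR):
    e = w/(1 + GR*GP)                            # e(z) for the single loop, Eq. (4-3)
    return sp.limit((z - 1)*e, z, 1)

GR_P  = sp.Integer(2)                            # proportional controller
GR_PI = 2 + 1/(z - 1)                            # controller with pole at z = 1

print(offset(GR_P))                              # 1/3  (nonzero offset)
print(offset(GR_PI))                             # 0    (zero offset)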
5. Parameter-optimized Controllers

5.1 Discretizing the Differential Equations of Continuous


PID-Controllers

As common parameter-optimized controllers have P-, PI- or PID-behaviour


the first attempts were to transform their equations by discretization.
Then experience with analog controllers could be used, and in principle
their well-known tuning rules could be applied. Furthermore, retraining
of the plant personnel was not necessary [5.1], [5.2], [5.3], [5.4],
[5.5].

The idealized equation of a PID-controller is

1 t de (t)
u(t) = K[e(t) + ~ f e(T)dT + TD ~ ) ( 5. 1-1)
I 0

with parameters:

K gain
TI integration time
T0 derivative time

For small sample times T0 this equation can be converted into a difference equation by discretization. The derivative is simply replaced by a difference of first order and the integral by a sum. The continuous integration may be approximated by rectangular or trapezoidal integration, as in section 3.2.

Applying rectangular integration gives

u(k) = K[e(k) + (T0/TI) Σ_{i=0}^{k} e(i-1) + (TD/T0)(e(k) - e(k-1))].     (5.1-2)

This is a non-recursive control algorithm. For the formation of the sum all past errors e(k) have to be stored. As the whole value u(k) of the manipulated variable is produced, this algorithm is called a "position algorithm", as in [5.1], [5.3].

However, recursive algorithms are more suitable for programming on computers. These algorithms are characterized by the calculation of the current manipulated variable u(k) from the previous manipulated variable u(k-1) and correction terms. To derive the recursive algorithm one subtracts from Eq. (5.1-2)

u(k-1) = K[e(k-1) + (T0/TI) Σ_{i=0}^{k-1} e(i-1) + (TD/T0)(e(k-1) - e(k-2))]     (5.1-3)

and one obtains

u(k) - u(k-1) = q0 e(k) + q1 e(k-1) + q2 e(k-2)                           (5.1-4)

with parameters

q0 = K(1 + TD/T0)
q1 = -K(1 + 2TD/T0 - T0/TI)                                               (5.1-5)
q2 = K TD/T0.

Now, only the current change in the manipulated variable

Δu(k) = u(k) - u(k-1)

is calculated, and so this algorithm is also called a "velocity algorithm".
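A minimal implementation sketch of this velocity algorithm in Python follows (an added illustration; the class name and the chosen values of K, TI, TD, T0 are not from the book).

class VelocityPID:
    """Recursive PID control algorithm, Eq. (5.1-4) with parameters (5.1-5)."""
    def __init__(self, K, TI, TD, T0):
        self.q0 = K*(1 + TD/T0)
        self.q1 = -K*(1 + 2*TD/T0 - T0/TI)
        self.q2 = K*TD/T0
        self.e1 = self.e2 = 0.0          # e(k-1), e(k-2)
        self.u1 = 0.0                    # u(k-1)

    def step(self, e):
        u = self.u1 + self.q0*e + self.q1*self.e1 + self.q2*self.e2
        self.e2, self.e1, self.u1 = self.e1, e, u
        return u

ctrl = VelocityPID(K=1.0, TI=10.0, TD=2.0, T0=1.0)
print([round(ctrl.step(1.0), 3) for _ in range(5)])   # step response of the controller itself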

It should be noticed that by a slightly modified integration in Eq.


(5.1-2) e(i) can be used instead of e(i-1) in the sum. Then, the coeffi-
cients q 0 and q 1 change such that no agreement exists with the coeffi-
cients defined in section 5.2 for large sample times.

If the integration uses the trapezoidal rule, Eq. (5.1-1) leads to

u(k) = K[e(k) + (T0/TI)((e(0) + e(k))/2 + Σ_{i=1}^{k-1} e(i)) + (TD/T0)(e(k) - e(k-1))].     (5.1-6)

After subtraction of the corresponding equation for u(k-1), another recursive relation of the form of Eq. (5.1-4) is obtained, with parameters

q0 = K(1 + T0/(2TI) + TD/T0)
q1 = -K(1 + 2TD/T0 - T0/(2TI))                                            (5.1-7)
q2 = K TD/T0.

For small sample times the parameters q0, q1 and q2 can be calculated from the parameters K, TI and TD of the analog PID-controller by application of Eq. (5.1-5) or Eq. (5.1-7).

5.2 Parameter-optimized Discrete Control Algorithms of


Low Order
For larger sample times the approximations of the continuous controllers
used in section 5.1 are no longer valid. Since, additionally, a direct
z-transformation of the continuous controller equation is not possible
because of the derivative term, in this section the connections with
continuous controllers are dropped.

A simple control loop as shown in Fig. 5.2.1 is considered. The z-transfer function of the controlled process including a zero-order hold is

GP(z) = y(z)/u(z) = (b0 + b1 z^-1 + ... + bm z^-m)/(1 + a1 z^-1 + ... + am z^-m) · z^-d     (5.2-1)

Figure 5.2.1 Single control loop

The general transfer function of the linear controller is

GR(z) = u(z)/e(z) = (q0 + q1 z^-1 + ... + qν z^-ν)/(p0 + p1 z^-1 + ... + pμ z^-μ)     (5.2-2)

This algorithm can be realized if p0 ≠ 0; both ν ≤ μ and ν > μ are possible. Usually q0 ≠ 0, and p0 = 1 is chosen.

For structure-optimized controllers the orders μ and ν are functions of the orders of the process model. For example, in the deadbeat controllers ν = m and μ = m+d. However, for parameter-optimized controllers the controller order can be smaller than the order of the process model, so ν ≤ m and μ ≤ m+d. Parameter-optimized controllers therefore require less on-line computation.

When defining the structure of a parameter-optimized controller, one must generally ensure that changes of the reference variable w(k) and of the disturbances uv(k) and n(k), Fig. 5.2.1, do not lead to offsets in the control deviation e(k). From the final value theorems of the z-transform it follows that the controller must have a pole at z = 1. The simplest control algorithms of order ν therefore have the structure

GR(z) = Q(z^-1)/P(z^-1) = (q0 + q1 z^-1 + ... + qν z^-ν)/(1 - z^-1).      (5.2-3)

For ν = 1 one obtains, by appropriate choice of parameters, a controller of PI-type, for ν = 2 of PID-type, for ν = 3 of PID2-type, etc. The resulting difference equation is

u(k) = u(k-1) + q0 e(k) + q1 e(k-1) + ... + qν e(k-ν).                    (5.2-4)

The parameters q0, q1, q2, ..., qν must be matched to the process to obtain a good control performance, and the following methods are available:

a) Based on a process model, the controller parameters can be obtained by minimizing a performance criterion using parameter optimization. Only for processes and controllers of very low order are analytical solutions possible; in general, numerical methods must be used.

b) Tuning rules can be used which lead to approximately optimal controller parameters based on certain criteria. Here either the characteristic parameters of measured step responses are determined, or oscillation tests with proportional controllers at the stability limit are made (see section 5.6).

c) Starting with small values (giving a low-gain control), the controller parameters are systematically increased during closed-loop operation until the loop damping becomes too small. Then the parameters are decreased by some fraction (trial-and-error method).

If there is no special requirement on the control performance and if


the process has a simple behaviour and a small settling time, then me-
thods b) or c) may suffice. However, for stringent performance require-
ments or complicated or slow or changing process behaviour, one has to
use a). This method is also suitable for computer aided design.

Control behaviour can only be evaluated in conjunction with practical


process considerations. This evaluation, however, is partly subjective.
There are many ways to describe control performance, which depends on
the disturbances as well as on the process and the controller under con-
sideration. For simplification and better interpretation, step changes
of disturbance and command variables are often assumed when tuning con-
troller parameters and comparing different control systems.

When synthesizing parameter-optimized control systems, one is interested in a single characteristic value of the control performance. Therefore, for continuous signals integral criteria (which for discrete-time signals become sum criteria) are especially suitable. The sum of quadratic control errors is preferred for mathematical reasons. It can also be interpreted as an averaged power and so is also suitable for other controller design methods. Hence, in the following, quadratic performance criteria of the form

S_eu^2 = Σ_{k=0}^{M} [e^2(k) + r Δu^2(k)]                                 (5.2-6)

are used for parameter optimization (c.f. chapter 4). Here

e(k) = w(k) - y(k)

is the control error,

Δu(k) = u(k) - ū

is the deviation of the manipulated variable from

   the final value ū = u(∞) for step disturbances, or
   the expectation ū = E{u(k)} for stochastic disturbances,

and r is a weighting factor on the manipulated variable.

In this quadratic performance criterion an averaged quadratic control deviation

S_e^2 = (1/(M+1)) Σ_{k=0}^{M} e^2(k)                                      (5.2-7)

and the averaged quadratic manipulated variable deviation or averaged input power

S_u^2 = (1/(M+1)) Σ_{k=0}^{M} Δu^2(k)                                     (5.2-8)

can be related by the appropriate choice of the weighting factor r. If r is chosen to be small, then a small S_e^2 can be obtained using a large input power S_u^2. The more S_u^2 is weighted by r, the less the input changes and the larger the error, so that the control loop has a more restrained behaviour. By controller parameter optimization the parameters q^T = [q0 q1 ... qν] have to be found such that S_eu^2 is a minimum, i.e.

dS_eu^2/dq = 0.                                                           (5.2-9)

For higher-order processes numerical optimization methods such as simple search methods (equidistant search, Hooke-Jeeves method), gradient methods of first or second order (Newton-Raphson), or combinations of several methods (Fletcher-Powell) must be used [5.6], [5.19], [5.20], [5.21].
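The following Python sketch (an added illustration, not the original Fletcher-Powell program) shows the principle of such a numerical parameter optimization: the closed loop with the second-order control algorithm is simulated for a step in w(k) and the criterion S_eu^2 is minimized by a derivative-free search. Process III with T0 = 4 sec from Table 5.4.2 is assumed; the horizon, starting values and the use of scipy's Nelder-Mead search are choices made here for illustration, with Δu(k) = u(k) - u(∞) and u(∞) = 1 for a unit step and process gain 1.

import numpy as np
from scipy.optimize import minimize

a = [-1.49863, 0.70409, -0.09978]          # process III, T0 = 4 sec (Table 5.4.2)
b = [0.0, 0.06525, 0.04793, -0.00750]
d = 1

def closed_loop_cost(q, r=0.25, M=32, w=1.0):
    q0, q1, q2 = q
    y = np.zeros(M + 10); u = np.zeros(M + 10); e = np.zeros(M + 10)
    cost = 0.0
    for k in range(M):
        # process difference equation, Eq. (5.4-4)
        y[k] = (-sum(a[i]*y[k-1-i] for i in range(3) if k-1-i >= 0)
                + sum(b[j]*u[k-j-d] for j in range(4) if k-j-d >= 0))
        e[k] = w - y[k]
        # second-order control algorithm, Eq. (5.2-11)
        u[k] = u[k-1] + q0*e[k] + q1*e[k-1] + q2*e[k-2]
        cost += e[k]**2 + r*(u[k] - 1.0)**2
    return cost

res = minimize(closed_loop_cost, x0=[2.0, -3.0, 1.0], method='Nelder-Mead')
print(np.round(res.x, 3))    # roughly comparable to the r = 0.25 values of Table 5.4.4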

5.2.1 Control Algorithms of First and Second Order

a) Control Algorithms of Second Order

Control algorithms of second order are considered first. The other parameter-optimized control algorithms are then simplified cases.

With ν = 2, Eq. (5.2-3) gives

GR(z) = (q0 + q1 z^-1 + q2 z^-2)/(1 - z^-1)                               (5.2-10)

and according to Eq. (5.2-4)

u(k) = u(k-1) + q0 e(k) + q1 e(k-1) + q2 e(k-2).                          (5.2-11)

Assuming a step input

e(k) = 1(k) = { 1 for k ≥ 0
              { 0 for k < 0                                               (5.2-12)

one obtains as controller step response

u(0) = q0
u(1) = 2q0 + q1
u(2) = u(1) + q0 + q1 + q2 = 3q0 + 2q1 + q2.                              (5.2-13)

If u(1) < u(0) we obtain a discrete controller which behaves like a continuous PID-controller with an additional first-order lag. For the controller parameters with q0 > 0 we have:

From u(1) < u(0):               q0 + q1 < 0       or   q1 < -q0
From u(k) > u(k-1) for k ≥ 2:   q0 + q1 + q2 > 0  or   q2 > -(q0 + q1)

For a positive controller gain q0 > q2 (see Eq. (5.2-15)). Summarizing the ranges of the various parameters:

q0 > 0,    q1 < -q0,    -(q0 + q1) < q2 < q0.                             (5.2-14)

The resulting step response is shown in Fig. 5.2.2a) and the resulting
parameter ranges in Fig. 5.2.3. The parameter q 0 determines the mani-
pulated variable u(O) after the step input.

The following characterist ic coefficients can be defined:

K qo - q2 gain
q2 I K lead coefficient (5.2-15)
<qo+q1+q2) I K integration coefficient.

These characteristic coefficients are shown in the step response in Fig. 5.2.4. They have been defined so that for small sample times they are related to the parameters of the continuous PID-control algorithm (c.f. Eq. (5.1-5)) as follows:

cD = TD/T0,    cI = T0/TI.                                                (5.2-16)

For small sample times the gains agree exactly. cD is the ratio of lead time to sample time and cI the ratio of sample time to integration time.
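For convenience, the conversion between q0, q1, q2 and K, cD, cI can be written as a small helper (an added illustration; the function names are arbitrary, and the inverse relations are those implied by Eq. (5.2-15)):

def q_to_coeffs(q0, q1, q2):
    K  = q0 - q2                 # gain
    cD = q2 / K                  # lead coefficient
    cI = (q0 + q1 + q2) / K      # integration coefficient
    return K, cD, cI

def coeffs_to_q(K, cD, cI):
    return K*(1 + cD), K*(cI - 1 - 2*cD), K*cD

# example: process II, T0 = 4 sec, r = 0 (values from Table 5.4.3)
print([round(v, 3) for v in q_to_coeffs(2.332, -3.074, 1.105)])
# -> approximately [1.227, 0.901, 0.296]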

Applying Eq. (5.2-14) it follows that for usual PID-behaviour

K > 0,    0 < cI < cD.                                                    (5.2-17)

If these characteristic coefficients are substituted in Eq. (5.2-10), the z-transfer function becomes

GR(z) = K[1 + cD(1 - z^-1) + cI z^-1/(1 - z^-1)].                         (5.2-18)

It should be noticed that the above second-order control algorithm is


only similar to a continuous PID-controller with positive parameters if
conditions (5.2-14) or (5.2-17) are satisfied. The parameters deter-
mined by the optimization can, depending on the process, the choice of
optimization criterion and the disturbance signal, fail to satisfy these
conditions.

b) Control Algorithms of First Order

Setting q2 = 0, the z-transfer function becomes

GR(z) = (q0 + q1 z^-1)/(1 - z^-1)                                         (5.2-19)

and the difference equation is

u(k) = u(k-1) + q 0 e(k) + q 1e(k-1).

The step response values then become

u(0) = q0
u(1) = u(0) + q0 + q1 = 2q0 + q1
u(2) = u(1) + q0 + q1
...
u(k) = u(k-1) + q0 + q1 = (k+1)q0 + k q1.                                 (5.2-20)

For u(1) > u(0) the first-order control algorithm can be compared with a continuous PI-controller with no additional lag. With q0 > 0 we obtain q0 + q1 > 0 or q1 > -q0.

In Fig. 5.2.2b) the corresponding step response is shown. As in Eq. (5.2-15) the following characteristic coefficients can be defined:

K  = q0                  gain                                             (5.2-21)
cI = (q0 + q1)/K         integration coefficient

For PI-behaviour with positive characteristic coefficients

K > 0,    cI > 0,   i.e.   q0 > 0,   q1 > -q0.                            (5.2-22)

These factors, introduced in Eq. (5.2-19), lead to



Figure 5.2.2 Step responses of first- and second-order control algorithms: a) second-order control algorithm with PID-behaviour, b) first-order control algorithm with PI-behaviour

Figure 5.2.3 Ranges of the parameters q0, q1 and q2 for PID-behaviour. For fixed q0, the values of q1 and q2 must lie within the dotted angles (according to lines 1-2-3-4)

Figure 5.2.4 Step response u(k) of a second-order control algorithm, showing the gain K, the lead coefficient cD and the integration coefficient cI

GR(z) = K[1 + cI z^-1/(1 - z^-1)].                                        (5.2-23)

Choosing q0 = 0 results in an integral-action controller with transfer function

GR(z) = q1 z^-1/(1 - z^-1)                                                (5.2-24)

and difference equation

u(k) = u(k-1) + q1 e(k-1).                                                (5.2-25)

Other special cases are the proportional-action controller

GR(z) = q0 = K                                                            (5.2-26)

and the proportional-derivative-action controller

GR(z) = K[1 + cD(1 - z^-1)],                                              (5.2-27)

obtained by putting cI = 0 in Eq. (5.2-10).

5.2.2 Control Algorithms with Prescribed Initial Manipulated Variable

The transfer function between the reference variable and the manipulated variable in the closed loop is

u(z)/w(z) = GR(z)/(1 + GR(z)GP(z)).                                       (5.2-28)

Introducing the process transfer function, Eq. (5.2-1), and the second-order controller transfer function, Eq. (5.2-10), and setting b0 = 0, results in

[(1 - z^-1)(1 + a1 z^-1 + ... + am z^-m) + (q0 + q1 z^-1 + q2 z^-2)(b1 z^-1 + ... + bm z^-m) z^-d] u(z)
      = (q0 + q1 z^-1 + q2 z^-2)(1 + a1 z^-1 + ... + am z^-m) w(z)        (5.2-29)

and for the manipulated variable

u(k) = (1 - a1) u(k-1) + (a1 - a2) u(k-2) + ...
       - q0 b1 u(k-d-1) - (q0 b2 + q1 b1) u(k-d-2) + ...
       + q0 w(k) + (q0 a1 + q1) w(k-1) + (q0 a2 + q1 a1 + q2) w(k-2) + ...        (5.2-30)

The first two values of the manipulated variable after a step change of the command variable w(k) = 1(k) are:

1. Case d = 0:
   u(0) = q0
   u(1) = 2q0 + q1 - q0^2 b1                                              (5.2-31)

2. Case d ≥ 1:
   u(0) = q0
   u(1) = 2q0 + q1                                                        (5.2-32)

Independently of the dead time d, the value of u(O) for a step change
of the command variable depends only on the controller parameter q 0 .
Therefore by prescribing the manipulated variable u(O), the parameter
q 0 can be fixed.

The correspondence between the first manipulated variable and the con-
troller parameter q 0 is useful during design when considering the allow-
able range of manipulated variable change. One has only to select a
certain operating point of the control loop and the maximum process in-
put change u(O) for the (worst) case of a step change w0 of the referen-
ce variable w(k) (or the error e(k)) and one simply sets q 0 = u(O)/w 0 .

In order to avoid a manipulated variable u(1) larger than u(0), an inequality on the controller parameter q1 must be observed. From Eq. (5.2-31) and Eq. (5.2-32) it follows for u(1) ≤ u(0) that

d = 0:   q1 ≤ -q0(1 - q0 b1)
d ≥ 1:   q1 ≤ -q0.                                                        (5.2-33)

These inequalities are also valid for first-order controllers. If a


small u(O) is prescribed and results in a damped loop behaviour, r = 0
can be chosen in the optimization criterion of Eq. (5.2-6).

The determination of q 0 by prescribing u(O) means that for a second-


order controller only two parameters and for a first-order controller
only one parameter must be optimized; this results in fewer computations.
Of course, this design does not produce a controller which considers
a hard restriction on the manipulated variable for all disturbances.
The use of q 0 is only a simple design aid which regards the constraints
on the manipulated variable for only one type of disturbance.

5.3 Modifications to Discrete PID-Control Algorithms

Based on the discretized differential equation of the continuous PID-controller, Eq. (5.1-1) to (5.1-4), many modifications have been published, some of which are given here.

In order to reduce large manipulated variable changes after rapid changes of the reference variable, the reference variable w(k) is not included in the derivative term [5.3]. Instead of the common PID-control algorithm

u(k) - u(k-1) = K[e(k) - e(k-1) + (T0/TI) e(k-1) + (TD/T0)(e(k) - 2e(k-1) + e(k-2))]     (5.3-1)

(c.f. Eq. (5.1-4)), the modified algorithm is

u(k) - u(k-1) = K[e(k) - e(k-1) + (T0/TI) e(k-1) + (TD/T0)(-y(k) + 2y(k-1) - y(k-2))]    (5.3-2)

with

e(k) = w(k) - y(k).

The amplitude changes of the manipulated variable are further reduced if the reference variable is only present in the integration term [5.1], [5.3], [1.12]:

u(k) - u(k-1) = K[-y(k) + y(k-1) + (T0/TI) e(k-1) + (TD/T0)(-y(k) + 2y(k-1) - y(k-2))].  (5.3-3)
With this algorithm it is then more appropriate to use e(k) instead of e(k-1) (c.f. page 75). These modified algorithms are less sensitive to the higher frequency components of w(k) than to those of y(k). Therefore, for the same type of disturbance at e.g. the process input and at the command variable, the differences between the controller parameters obtained by parameter optimization become smaller (c.f. [5.8]). Large changes of the manipulated variable can also be avoided by limiting the velocity of the command variable and/or of the manipulated variable. Since these restrictions are effective for all disturbances, they should be used in preference to the modified control algorithms Eq. (5.3-2) and Eq. (5.3-3).
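A sketch of the modified algorithm Eq. (5.3-2) in Python is given below (an added illustration; the class and parameter names are arbitrary). The derivative term acts on the controlled variable y(k) only, so that reference steps do not produce large derivative kicks.

class ModifiedPID:
    """Modified PID algorithm, Eq. (5.3-2): derivative acts on y(k) only."""
    def __init__(self, K, TI, TD, T0):
        self.K, self.TI, self.TD, self.T0 = K, TI, TD, T0
        self.e1 = 0.0                    # e(k-1)
        self.y1 = self.y2 = 0.0          # y(k-1), y(k-2)
        self.u1 = 0.0                    # u(k-1)

    def step(self, w, y):
        e = w - y
        du = self.K*(e - self.e1
                     + (self.T0/self.TI)*self.e1
                     + (self.TD/self.T0)*(-y + 2*self.y1 - self.y2))
        u = self.u1 + du
        self.e1, self.y2, self.y1, self.u1 = e, self.y1, y, u
        return u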

Other modifications are obtained by different realizations of the derivative term. If the controlled variable contains relatively high frequency noise which cannot be controlled, large unwanted manipulated variable changes can occur if the first-order difference

(TD/T0)(e(k) - e(k-1))

in the nonrecursive form, Eq. (5.1-2), or

(TD/T0)(e(k) - 2e(k-1) + e(k-2))

in the recursive form, Eq. (5.3-1),

is used. The derivative term, however, can be necessary for improving


control performance for medium frequency noise since, if it is not too
large, a process pole can be approximately cancelled, the stability re-
gion can be increased and larger gains can be used. Therefore, one has
to compromise.

A first possibility is to choose TD/T0 smaller than the ideal value. One can also smooth the derivative action by using four values for calculating the difference [5.2]. First an average

ē(k) = (1/4)[e(k) + e(k-1) + e(k-2) + e(k-3)]

is taken, and then all approximations of the first derivative are averaged in relation to ē(k). The derivative term for the nonrecursive form therefore becomes

(TD/(6T0))[e(k) + 3e(k-1) - 3e(k-2) - e(k-3)].                            (5.3-4)

For the recursive form we have:

(TD/(6T0))[e(k) + 2e(k-1) - 6e(k-2) + 2e(k-3) + e(k-4)].                  (5.3-5)
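The smoothing effect of the averaged derivative term can be seen in a small numerical experiment (an added illustration with an arbitrary noisy error signal, not from the book):

import numpy as np

rng = np.random.default_rng(1)
T0, TD = 1.0, 4.0
e = 1.0 - np.exp(-0.1*np.arange(40)) + 0.05*rng.normal(size=40)   # noisy error signal

def d_simple(e, k):                        # first-order difference
    return (TD/T0)*(e[k] - e[k-1])

def d_averaged(e, k):                      # four-point averaged form, Eq. (5.3-4)
    return (TD/(6*T0))*(e[k] + 3*e[k-1] - 3*e[k-2] - e[k-3])

k = np.arange(3, 40)
print(np.std([d_simple(e, i) for i in k]),
      np.std([d_averaged(e, i) for i in k]))   # the averaged form scatters less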

A further choice for small sample times consists in using a derivative term with a first-order lag, as in the continuous transfer function

G(s) = K[1 + 1/(TI s) + TD s/(1 + T1 s)]

and by applying the correspondence s → 2(z-1)/(T0(z+1)) as an approximation [2.19] (as in section 3.7). The resulting control algorithm becomes

GR(z) = (q0 + q1 z^-1 + q2 z^-2)/(1 + p1 z^-1 + p2 z^-2)                  (5.3-6)

with parameters

p1 = -4c1/(1 + 2c1)
p2 = (2c1 - 1)/(1 + 2c1)
q0 = K[1 + 2(c1 + cD) + (cI/2)(1 + 2c1)]/[1 + 2c1]
q1 = K[cI - 4(c1 + cD)]/[1 + 2c1]
q2 = K[c1(2 - cI) + 2cD + cI/2 - 1]/[1 + 2c1]

with c1 = T1/T0.

Further possibilities arise from the filtering of the controlled variable y(k) with filters which act before the control algorithm and therefore influence all terms of the PID-algorithm. This is treated in chapter 27.

5.4 Simulation Results

During the design of control systems appropriate free parameters have


to be selected. In the case of parameter optimized discrete control
algorithms these are the sample time T0 and the weighting factor r of
the manipulated variable in the optimization criterion or the chosen
initial manipulated variable u(0). In order to assist in obtaining first estimates for their selection, this section shows some simulation results [5.7]. Free parameters cannot be chosen independently of the
process under consideration and its technological properties. Therefore,
very general rules cannot be given. However, results for two simulated
test processes will show that qualitative results can be obtained which
may be valid for similar processes.

5.4.1 Test Processes

For investigating control algorithms in closed loop operation, processes


II and III which were proposed in [5.9] as test cases are used. See the
Appendix.

Process II: Process with nonminimum phase behaviour

G_II(s) = K(1 - T1 s)/[(1 + T1 s)(1 + T2 s)]                              (5.4-1)

K = 1;  T1 = 4 sec;  T2 = 10 sec.

G_II(z) = (b1 z^-1 + b2 z^-2)/(1 + a1 z^-1 + a2 z^-2)                     (5.4-2)

Figure 5.4.1a) shows the step response of this process.

Table 5.4.1 Parameters of process II

Sample time T0 [sec]      1            4            8            16

b1                    -0.07289     -0.07357      0.13201      0.55333
b2                     0.09394      0.28197      0.34413      0.23016
a1                    -1.68364     -1.0382      -0.58466     -0.22021
a2                     0.70469      0.2466       0.06081      0.0037
Process III: Process with low-pass behaviour and dead time

G_III(s) = K(1 + T4 s) e^(-Tt s) / [(1 + T1 s)(1 + T2 s)(1 + T3 s)]       (5.4-3)

K = 1;  T1 = 10 sec;  T2 = 7 sec;  T3 = 3 sec;  T4 = 2 sec;  Tt = 4 sec.

G_III(z) = (b0 + b1 z^-1 + b2 z^-2 + b3 z^-3)/(1 + a1 z^-1 + a2 z^-2 + a3 z^-3) · z^-d     (5.4-4)

Table 5.4.2 Parameters of process III

Sample time T0 [sec]      1            4            8            16

d                         4            1            1            1
b0                        0            0            0.06525      0.37590
b1                     0.00462      0.06525      0.25598      0.32992
b2                     0.00169      0.04793      0.02850      0.00767
b3                    -0.00273     -0.00750     -0.00074     -0.00001
a1                    -2.48824     -1.49863     -0.83771     -0.30842
a2                     2.05387      0.70409      0.19667      0.02200
a3                    -0.56203     -0.09978     -0.00995     -0.00010

Figure 5.4.1b) shows the step response of this process.
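As an added illustration, the discrete-time model of process III for T0 = 4 sec (coefficients from Table 5.4.2) can be simulated directly from the difference equation belonging to Eq. (5.4-4); the unit step response settles at the process gain K = 1.

import numpy as np

a = [-1.49863, 0.70409, -0.09978]   # process III, T0 = 4 sec
b = [0.0, 0.06525, 0.04793, -0.00750]
d = 1

N = 20
u = np.ones(N)                      # unit step input
y = np.zeros(N)
for k in range(N):
    y[k] = (-sum(a[i]*y[k-1-i] for i in range(3) if k-1-i >= 0)
            + sum(b[j]*u[k-j-d] for j in range(4) if k-j-d >= 0))
print(np.round(y[:8], 3), "...", round(y[-1], 3))   # tends towards 1.0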

5.4.2 Simulation Results of Second-order Control Algorithms

a) Control algorithms with no prescribed initial manipulated variable

In this section, some results of digital computer simulations of the test processes II and III with second-order control algorithms with no prescribed initial manipulated variable are considered. All three controller parameters are optimized. The abbreviation for this controller, given by Eq. (5.2-10), is 3 PC-3 (3 Parameter Controller with 3 optimized parameters). The quadratic criterion of Eq. (5.2-6) was used as optimization criterion. The controller parameters q0, q1 and q2 were determined through numerical search using the method of Fletcher-Powell. The settling time taken was M = 128.

For a step change of the reference variable, the control performance expressed in the form of

Se = sqrt[ (1/(N+1)) Σ_{k=0}^{N} e^2(k) ]      (quadratic average of the control deviation)     (5.4-5)

ym = ymax(t) - w(t)                            (maximum overshoot)                              (5.4-6)

k1                                             (settling time, until |e(k)| ≤ 0.01 |w(k)|)      (5.4-7)

and the corresponding quadratic average of the manipulated variable deviation

Su = sqrt[ (1/(N+1)) Σ_{k=0}^{N} Δu^2(k) ]                                                      (5.4-8)

the "manipulating effort", are functions of the sample time T0 and of the weighting factor r of the manipulated variable in the optimization criterion Eq. (5.2-6). For the simulations a settling time TN = 128 sec was chosen, large enough that the control deviation becomes practically zero. Therefore we have N = 128 sec/T0. For Se and Su the term "quadratic average" was chosen; this value is equal to the "effective value" and to the "root of the corresponding effective power" for a one-Ohm resistance.

Figure 5.4.1 Step responses of test processes II and III: a) Process II, b) Process III

Comment on the choice of the disturbances:

A step disturbance excites predominantly the lower frequencies and leads


to a larger weighting of the integral action of the controller. In chap-
ter 13 stochastic disturbances are used which contain higher frequency
components and which lead to more stress on the proportional and the
derivative actions.

Influence of the Sample Time T0

Figure 5.4.2 shows the discrete values of the control and the manipula-
ted variables for both processes after a step change of the command
variable for the sample times T0 = 1; 4; 8 and 16 sec and for r = 0.
For the relatively small sample time of T0 = 1 sec one obtains an ap-
proximation to the control behaviour of a continuous PID-controller.
For T0 = 4 sec the continuous signal of the control variable can still
be estimated fairly well for both processes. However, this is no lon-
ger valid for T0 = 8 sec for process II and for T0 = 16 sec for both
processes. This means that the value of Se, Eq. (5.4-5), which is defined for discrete signals, should be used with caution as a measure of the control performance for T0 > 4 sec. However, as the parameter optimization is based on the discrete-time signals (for computational reasons), Se is used in the comparisons.

In Fig. 5.4.3 the control performance and the manipulating effort are
shown as functions of the sample time. For process II the quadratic
mean of the control deviation Se, the overshoot ym and the settling
time k 1 increase with increasing sample time T0 , i.e. the control per-
formance becomes worse. The manipulating effort Su is at a minimum for
T0 = 4 sec and increases for T0 > 4 sec and T0 < 4 sec. For process III
all three characteristic values deteriorate with increasing sample time.
The manipulating effort is at a minimum for T0 = 8 sec. The improvement
of the control performance for T0 < 8 sec is due to the fact that the
Figure 5.4.2 a) Step responses for changes of the reference variable for process II and different sample intervals T0 and r = 0

Figure 5.4.2 b) Step responses for changes of the reference variable for process III and different sample intervals T0 and r = 0

Figure 5.4.3 Control performance and manipulating effort as functions of the sample time T0 for r = 0 (left: process II, right: process III)

manipulation effort increases considerably. Therefore the initial mani-


pulated variable u(O) also increases with decreasing sample time, Fig.
5.4.2.

Based on these simulations it follows that for the given optimization


criterion with r = 0, T0 = 4 sec to 8 sec are appropriate sample inter-
vals for both processes. The smaller sample interval enables a somewhat
better control performance. The sample time T0 = 1 sec is unsuitable
for process III as the manipulating effort becomes too large in compa-
rison with the improvement of the control performance. T0 = 16 sec is
unsuitable for either process because of a poor control performance.

To assist the choice of a suitable sample interval, the behaviour of the performance criterion Seu of Eq. (5.2-6) can also be used. This criterion takes into account both the control performance Se and the manipulating effort Su. For a weighting of the manipulated variable of r = 0.25, the value of Seu is shown in Fig. 5.4.3. This "mixed" criterion shows a flat minimum for process III at T0 = 5 sec and, for process II, a flat behaviour for T0 < 8 sec. Hence, a suitable region for the sample interval is: for process III approximately T0 = 3 ... 8 sec and for process II approximately 1 ... 8 sec. If T95 is the settling time of a step response until 95 % of the final value, the following rules can be used for choosing the sample time:

Process II:   T95/T0 ≈ 4.4 ... 11.7
                                                                           (5.4-10)
Process III:  T95/T0 ≈ 5.6 ... 15.0

Table 5.4.3 gives the controller parameters. With increasing sample time the parameters q0, q1 and q2 become smaller. The controller gain K hardly changes for T0 ≥ 4 sec, the lead factor cD decreases and the integration factor cI increases. The inequalities Eq. (5.2-14) or Eq. (5.2-17) are satisfied for T0 = 1, 4 and 8 sec, so that a control algorithm with normal PID-behaviour emerges.

Influence of the weighting r on the manipulated variable

For the sample time T0 = 1 sec, Figure 5.4.4 shows step responses to changes of the reference variable as functions of the weighting factor r in the optimization criterion. A change from r = 0 to r = 0.1 leads to a more restrained control behaviour than the change from r = 0.1 to r = 0.25.

Figure 5.4.4 a) Step responses to changes in the reference variable for different weighting factors r of the manipulated variable. Process II. Sample time T0 = 1 sec.

Figure 5.4.4 b) Responses to a step change of the reference variable for different weighting factors r of the manipulated variable. Process III. Sample time T0 = 1 sec.

Figure 5.4.5 Control performance and manipulating effort as functions of the weighting factor r of the optimization criterion (for T0 = 1, 4 and 8 sec; left: process II, right: process III)

Table 5.4.3 Controller parameters for different sample times T0 and r = 0

              Process II                               Process III
T0 [sec]      1         4         8         16        1          4         8         16

q0            5.958     2.332     2.000     1.779     19.408     4.549     2.437     1.957
q1          -10.337    -3.074    -2.080    -1.089    -36.623    -7.160    -2.995    -1.660
q2            4.492     1.105     0.748     0.361     17.370     3.030     1.158     0.667
K             1.466     1.227     1.252     1.418      2.038     1.519     1.279     1.290
cD            3.065     0.901     0.597     0.255      8.524     1.994     0.905     0.517
cI            0.077     0.297     0.534     0.742      0.076     0.275     0.469     0.748

Figure 5.4.5 shows the characteristic parameters of the control perfor-


mance and the manipulating effort for T 0 = 1; 4 and 8 sec as functions
of the weighting factor r. For both processes, with increasing r, the
control performance Se increases and the manipulating effort Su decrea-
ses. For process III this effect is greater than for process II. The
selection of the weighting factor r has greater influence on Se and Su
for process III. Furthermore, r has smaller influence the greater the
sample time.

Table 5.4.4 Controller parameters for different weighting factors r, for T0 = 4 sec and 8 sec

T0 = 4 sec        Process II                      Process III
r                 0        0.1      0.25          0        0.1      0.25

q0                2.332    1.933    1.663         4.549    2.688    2.049
q1               -3.076   -2.432   -2.016        -7.160   -3.798   -2.723
q2                1.117    0.816    0.637         3.030    1.398    0.916
K                 1.215    1.117    1.026         1.519    1.290    1.133
cD                0.919    0.730    0.621         1.994    1.083    0.808
cI                0.307    0.284    0.277         0.275    0.223    0.213

T0 = 8 sec        Process II                      Process III
r                 0        0.1      0.25          0        0.1      0.25

q0                2.000    1.714    1.512         2.437    1.944    1.653
q1               -2.080   -1.685   -1.423        -2.995   -2.222   -1.795
q2                0.748    0.557    0.440         1.158    0.780    0.587
K                 1.252    1.175    1.072         1.279    1.164    1.066
cD                0.597    0.481    0.410         0.905    0.669    0.550
cI                0.534    0.507    0.494         0.469    0.431    0.417

The overshoot ym also decreases with increasing r. The response time k1 increases with r for T0 = 1 sec. However, for T0 = 4 sec and 8 sec, k1 first decreases and then increases for greater r. The choice of r influences the characteristic values ym and k1 much more than Se and Su for all sample times. An increase of the weighting factor r of the manipulated variable in the optimization criterion Eq. (5.2-6) therefore results in a decrease of the manipulating effort Su, an increase in Se and a decrease in the overshoot ym. The choice of r depends very much on the application. A suitable compromise between good control performance and small manipulating effort can be obtained in the region 0.1 ≤ r ≤ 0.25 if the process gain is unity.

Table 5.4.4 shows the controller parameters for the sample times T0 = 4 sec and 8 sec. With increasing weighting r of the manipulated variable the parameters q0, q1 and q2 decrease. K and cD also decrease, whilst cI hardly changes.

b) Control algorithms with chosen initial manipulated variable u(0)

In section 5.2.2 it was shown that for a step change of the reference variable by 1 or w0, the parameter q0 of the control algorithm is equal to the manipulated variable u(0) or u(0)/w0, Eq. (5.2-31). By properly choosing u(0), taking into account the allowable region of the manipulated variable, the parameter q0 can be readily determined. Then only the two parameters q1 and q2 have to be optimized. The control algorithm is therefore called 3 PC-2.

Since the control behaviour can be constrained by assuming a relatively small u(0), the weighting factor r in the optimization criterion Eq. (5.2-6) can be taken as zero when choosing the parameters.

Influence of the prescribed manipulated variable

In Fig. 5.4.6 the responses to step changes in the command variable are shown for different values of the initial manipulated variable u(0) = q0. Starting with a value q0,opt, which results from optimization of all parameters for r = 0, a decrease of the chosen manipulated variable u(0) results in a more restrained control behaviour. The overshoot ym decreases. In Fig. 5.4.6 b) q0 has been reduced such that the first two manipulated variables u(0) and u(1) are equal. However, the resulting overshoot then increases again. The same result was obtained for process II.

Figure 5.4.6 Responses to step changes in the reference variable for different values of the chosen manipulated variable u(0) = q0; T0 = 4 sec. a) Process II (q0 = 2.3 and q0 = 1.75), b) Process III (3 PC-3: q0 = 4.55; 3 PC-2: q0 = 2.55 and q0 = 1.55)

Fig. 5.4.7 shows all the characteristic values of the control perfor-
mance and the manipulating effort for the practically significant sample
times T0 = 4 sec and T0 = 8 sec. With a reduction in the chosen manipu-
lated variable u(O), i.e. decreasing q 0 , the manipulating effort decrea-
ses and the control performance Se increases slightly. The overshoot Ym
and the response time k 1 also decrease for T0 = 8 sec. For T0 = 4 sec,
Figure 5.4.7 Control performance and manipulating effort as functions of the prescribed initial manipulated variable u(0) = q0, for T0 = 4 sec and 8 sec. a) Process II, b) Process III

the same trend occurs at first for both processes. If q 0 is chosen too
small then both values increase again. A minimum occurs for ym and k 1 .

If the manipulated variable u(O) = q 0 is prescribed not too small the


control performance Se deteriorates slightly. However, the manipulating
effort Su' the overshoot ym and the response time k 1 decrease conside-
rably.

A good choice of the initial manipulated variable u(O) produces not only
good control performance but also fewer computations in the parameter
optimization. Table 5.4.5 shows the controller parameters for different
values of u(O). The parameters q 1 and q 2 follow the trend of q 0 but ci
hardly changes. For the same q 0 the other controller parameters hardly
vary for both processes.

c) Conclusions based on the simulation results

For parameter-optimized control algorithms of first and second order the coefficients K, cI and cD of the gain, the integral action and the lead action can be simply determined from the parameters q0, q1 and q2 using Eq. (5.2-15). These coefficients need not be derived from the corresponding coefficients of the analog controller differential equation. The relations Eq. (5.2-15) are also valid for large sample times.

From the value of the parameter q0, the initial manipulated variable u(0) after a step change of the control error can be determined. This has the advantage that one fewer parameter needs to be optimized by numerical search methods, so saving computing time. Furthermore, the control behaviour can be simply restrained and r = 0 can be set in the optimization criterion.

The simulation of a second-order process with nonminimum phase behaviour and a third-order low-pass process with dead time leads to the following results for second-order control algorithms:

Choice of the sample time T0

The smaller the sample time the better the control behaviour. However,
if the sample time becomes very small, further improvement of the con-
trol behaviour can only be obtained by a considerable increase in the
manipulating effort. Therefore, too small a sample time should not be
chosen. For the selection of proper sample times the following rules
can be used:
Table 5.4.5 Controller parameters for different chosen manipulated variables u(0) = q0, for T0 = 4 sec and 8 sec

T0 = 4 sec         Process II                          Process III
                   3 PC-2                  3 PC-3      3 PC-2                  3 PC-3

q0                 1.500   1.750   2.000   2.332       1.500   2.000   2.500   4.549
q1                -1.593  -2.039  -2.484  -3.076      -1.499  -2.406  -3.320  -7.160
q2                 0.375   0.591   0.810   1.105       0.223   0.656   1.097   3.030
K                  1.125   1.159   1.190   1.227       1.277   1.344   1.403   1.519
cD                 0.333   0.511   0.681   0.901       0.175   0.488   0.783   1.994
cI                 0.251   0.261   0.274   0.295       0.176   0.186   0.198   0.275

T0 = 8 sec         Process II                          Process III
                   3 PC-2                  3 PC-3      3 PC-2                  3 PC-3

q0                 1.500   1.750   2.250   1.999       1.500   1.750   2.000   2.437
q1                -1.338  -1.717  -2.405  -2.079      -1.451  -1.864  -2.280  -2.995
q2                 0.364   0.556   0.936   0.748       0.372   0.576   0.784   1.158
K                  1.136   1.194   1.314   1.251       1.128   1.174   1.216   1.279
cD                 0.321   0.466   0.712   0.597       0.330   0.490   0.645   0.905
cI                 0.464   0.494   0.594   0.534       0.374   0.393   0.414   0.469
T0 ≈ (1/6 ... 1/15) T95                                                    (5.4-11)

Here T 95 is the settling time of the step response to 95 % of the final


value.

Choice of the weighting factor r

If all three parameters of the second-order control algorithm (3 PC-3) have to be optimized for a step change of the reference variable, a good compromise between good control performance and small manipulating effort is obtained for

r/K^2 ≈ 0.1 ... 0.25,                                                      (5.4-12)

where K = GP(1) is the process gain. The larger the sample time, the smaller the influence of r.

Choice of the initial manipulated variable u(O) = q 0

The magnitude of q 0 depends on the allowable range of the manipulated


variable and on the process under consideration. For the nonminimum
phase process q 0 = 1.75 and for the low-pass process with dead time
q 0 = 2.5 are appropriate values. However, q 0 can be chosen within a
larger region depending on the allowable range of the manipulated va-
riable. For the estimation of q 0 the maximum value of a step change in
the reference variable must be assumed.

If the sample time is not too small, u(0) can be chosen according to

u(0) ≤ 1/[(1 - a1)(b1 + b2 + ... + bm)]                                    (5.4-13)

which is obtained for the modified deadbeat controller DB(ν+1) from Eq. (7.2-13).

5.5 Choice of Sample Time for Parameter-optimized


Control Algorithms
As is well known, sampled-data controllers generally give poorer performance than continuous controllers. This is sometimes explained by the fact that sampled signals contain less information than continuous signals. However, not only the information but also the use of this information is of interest. As the class and the frequency spectrum of the disturbance signals also play an important role, general

remarks on the control performance of sampled data systems are diffi-


cult to make. However, for parameter-optimized controllers one can assume
in general that the control performance deteriorates with increasing
sample time. Therefore, the sample time should be as small as possible
if only the control performance is of importance.

The choice of sample time depends not only on the achievable control
performance but also on:

- the desired control performance


- the process dynamics
- the spectrum of the disturbances
- the actuator and its motor
- the measurement equipment
- the requirements of the operator
- the computational load or costs per control loop
- the identified process model.

These factors will be discussed.

When considering the desired control performance, it can be seen from


Figures 5.4.2 and 5.4.3 that a sample time T0 = 4 sec, compared with
T0 = 1 sec which is a good approximation to the continuous case, leads
only to a small deterioration in control performance. If only the con-
trol performance is of interest the sample time can usually be greater
than required by the approximation of the continuous control loop. Some
rules of thumb for determining the sample time based on the approxima-
tion of the continuous control loop behaviour are given in Table 5.5.1.

The process dynamics have a great influence on the sample time in terms
of both the transfer function structure and its time constants. Rules
for the sample time, therefore, are given in Table 5.5.1 as functions
of the time delay, dead time, sum of time constants etc. In general,
the larger the time constant the larger the sample time.

Now the dependence of the sample time on the disturbance signal spectrum or its bandwidth is considered. As is well known, for control loops three frequency ranges can be distinguished [5.14] (c.f. section 11.4):
The low frequency range (0 ≤ ω ≤ ω1): disturbances of the controlled variable are reduced.

The medium frequency range (ω1 < ω ≤ ω2): disturbances are amplified.

The high frequency range (ω2 < ω < ∞): disturbances are not affected by the loop.

Table 5.5.1 Summary of rules for determining the sample time for low-pass processes

criterion                           literature       rule for the sample time                       T0 for process III [sec]   remarks

eigenfrequency f of the             [5.10], [5.3]    T0 ≈ (1/8 ... 1/16)·1/f                        3 ... 1
closed loop

dead time                           [5.10], [5.3]    T0 ≈ (1/4 ... 1/8)·Tt                          -                          processes with dominant
                                                                                                                               dead time

delay time                          [5.11], [5.17]   T0 ≈ (1.2 ... 0.35)·Tu  for 0.1 ≤ Tu/T ≤ 1.0   4.5                        settling time about 15 %
                                                     T0 ≈ (0.35 ... 0.22)·Tu for 1.0 ≤ Tu/T ≤ 10                               larger than with a
                                                                                                                               continuous PI-controller

compensation of disturbances                         T0 = π/ωmax, with ωmax chosen such that        8 ... 2
up to ωmax, as in the                                |G(ωmax)| = 0.01 ... 0.1
continuous loop

simulation, section 5.4             [5.7]            T0 ≈ (1/6 ... 1/15)·T95                        8 ... 3

identification of the               [3.13]           T0 ≈ (1/6 ... 1/12)·T95                        8 ... 4
process model

f: eigenfrequency of the closed loop in cycles/sec;  Tt: dead time;  T95: 95 % settling time of the step response;  Tu: delay time, c.f. Table 5.6.1

Control loops in general have to be designed such that the medium frequency range falls within that range of the disturbance signal spectrum where the magnitude of the spectrum is small. In addition, disturbances with high and medium frequency components must be filtered in order to avoid unnecessary variations of the manipulated variable. If disturbances up to the frequency ωmax = ω1 have to be controlled approximately as in a continuous loop, the sample time has to be chosen in accordance with Shannon's sampling theorem, T0 ≤ π/ωmax.
The sampling theorem can also be used to determine the sample time if
an eigenvalue with the greatest eigenfrequency wmax is known. Then this
frequency is the highest frequency to be detected by the sampled data
controller. Particularly with an actuator having a long rise-time it
is inappropriate in general to take too small a sample time, since it
can happen that the previous manipulated variable has not been acted
upon when a new one arrives. If the measurement equipment furnishes
time discrete signals, as in chemical analysers or rotating radar an-
tennae, the sample time is already determined. An operator generally
wants a quick response of the manipulated and controlled variables after a change of the reference variable at an arbitrary time. Therefore, the sample time should not be larger than a few seconds. Moreover, in a
dangerous situation such as after an alarm, one is basically interested
in a small sample time. To minimise the computational load or the costs
for each control loop, the sample time should be as large as possible.

If the control design is based on identified process models, and if pa-


rameter estimation methods are used for the identification, then the
sample time should not be too small in order to avoid the numerical
difficulties which result from the approximate linear dependence in the
system equations for small sample times [3.13].

This discussion shows that the sample time has to be chosen according
to many requirements which partially are contradictory. Therefore sui-
table compromises must be found in each case. In addition, to simplify
the software organization one must use the same sample time for several
control loops. In Table 5.5.1 rules for choosing the sample time are
summarized, based on current literature. Note that rules which are based

on approximating the continuous control performance frequently predict


too small a sampling time. Considering only the control performance,
about 6 to 15 samples per settling time T 95 are sufficient, at least
for low pass processes. For some processes in the power and chemical
industries the sample times given in Table 5.5.2 have often been propos-
ed [5.12], [5.13], [5.5].

Table 5.5.2 Recommended sample times for processes in the


power and chemical industry

Control variable Sample interval T0


[sec]

Flow
Pressure 5
Level 10
Temperature 20

5.6 Tuning Rules for Parameter-optimized Control Algorithms

In order to obtain approximately optimal settings of parameters for con-


tinuous time controllers with PID-behaviour, so-called "tuning rules"
are often applied. These rules are mostly given for low pass processes,
and are based on experiments with a P-controller at the stability limit,
or on time constants of processes. A survey of these rules is e.g. given
in [5.14]. Well-known rules are for example those by Ziegler and Nichols
(5.14].

The application of these rules in modified form for discrete time Pin-
control algorithms has been attempted. (5.15] gives the controller pa-
rameters for processes which can be approximated by the transfer func-
tion
-T s
G(s) e t (5.6-1)
1+Ts

However, the resulting controller parameters can also be obtained by


applying the rules for continuous time controllers if the modified dead
time (Tt+T 0 /2) is used instead of the original dead time Tt. Here, T0 /2
is an approximation to the dead time of the sample and hold procedure.
110 5. Parameter-optimized Controllers

Tuning rules which are based on the characteristics of the process step
response and on experiments at the stability limit, have been treated
in [5.16] for the case of the modified control algorithms according to
Eq. (5.3-3). These are given in Table 5.6.1.

To obtain a more detailed view of the dependence of the parameters of


the control algorithm

(5.6-2)

on the process parameters for low-pass processes, on the control perfor-


mance criterion and on the sample time, a digital computer simulation
study [5.18] was made. Processes with the transfer function

(5. 6-3)

with zero order hold, orders n =


2, 3, 4 and 6, and sample times T0 /T =
0.1; 0.5 and 1.0 were assumed and transformed into z-transfer functions.
The controller parameters q 0 , q 1 and q 2 were optimized by using the
Fletcher-Powell method with the quadratic control performance criterion
M
2 2
I: [e (k) + r ~u (k)] (5.6-4)
k=O

(c.f. Eq. (5.2-6)), for step changes of the command variable w(k) with
weighting of the manipulated variable r = 0; 0.1 and 0.25. Then, the
characteristic values of the controller K, cD and c 1 given by Eq.
(5.2-15) were determined. The results of these investigations are shown
in Figures 5.6.1 to 5.6.3 (tuning diagrams). The characteristic values
of the controller are shown as functions of the ratio Tu/TG of the pro-
cess transient functions in Table 5.6.1. The relationship between the
characteristic values Tu/T or TG/T and Tu/TG can be taken from Figure
5.6.4.

These Figures show that:

a) With increasing Tu/TG (increasing order n)


the gain K decreases
- the lead factor cD increases
- the integration factor c 1 decreases

b) With increasing sample time To (continuation see


- K decreases page 11 5)
- CD decreases
- CI increases
Table 5.6.1 Tuning rules for controller parameters according to Takahashi [5.16] based on the rules of Ul

Ziegler-Nich ols m
[ T T
Control algorithm: u(k)-u(k-1) = K y(k-1)-y(k)+ T0 [w(k)-y(k)]+ T 0 [2y(k-1)-y(k -2)-y(k)] ] >-3
1 0 c
::J
1-'·
To TD ::J
To TD I.Q
K K
TI To TI To ;o
c
I-'
p CD
TG Ul
- Kkrit
Tu+To
- -2- - - t-n
0
ti
'U
0, 9 TG 0,135 TGTO 0,27 TGTO P>
o 54 Kkrit To ti
PI - - P>
Tu+T 0 /2 (Tu+To/2)2
[o, 45Kkri t •.. o ,27Kkri t] ~: ' K - s
CD
K(Tu+T 0 /2) 2 Tp
~ 4Tu rt
CD
smaller values for T 0
ti
1,2 TG I
0, 3 TGTO 0,6 TGTO 0, 5 TG 3 0
PID - --- 1 2 Kkrit To Kkrit ::2_ '0
[ 0 ' 6 Kkr it· · • 0 ' 6 Kkr i ' K 40 K rt
(Tu+Tol (Tu+To/2)2 K(Tu+T0/2) 2 K To
J~: Tp To 1-'·
s
1-'·
N
Not applicable for Tu/T 0 ~ 0 Range of validity: T0 ~ 2Tu CD
p.
()
Not recommended for T0 ~ 4Tu 0
::J
rt
y• I ti
/ y. 0
/ I-'
1
~
/ I-'
Q I.Q
I 0
'\J t ti
1-'·
A
~---1 t
r - - Tp_...
rt
::r
sUl
Tu TG
STEP RESPONSE MEASUREMENT
OSCILLATION MEASUREMENT
------
112 5. Parameter-optimized Controllers

12 ~ I
I
!
1\
\
8 \ 32 - - t--
/"-
\
~ ~

0/

6 \ 2~

-i -- I
I~ i/
~l \\ 1E>
I
1',
2 1----o-· o-- 0' t-_8- 8
1/ +-----
!

--- +---

f--!~~-,·~=> t:---..
0- o..- l---i
o_
o ___

n= i j3 4 l ~: o-·-·
o-
o-·-ro-·- ·-·---i
0 0-2 0-~ ().E) () ().2 0-~ 0-6
Tu/TG o, 1/Tr
u G
' .,
I

0.5
'0 -
I
\
\

o--o To/T =().1 r--- 1",.,


.,
o----0 = 0-5 o, ''
().3 ' 0" 1 \>-1
o-·-·-· -·-o = lO
--'1;,---+
I
I

',i
02
Figure 5.6.1 Optimal controller
parameters of the
control algorithm
3 PC-3 (Pro-behaviour) ().1
1----
+ I "-a
due to the perfor-
mance criterion Eq. r- i - o - to-
(5.6-4) with r = 0
for processes 1--1
G
P
(s) = 1
( 1+Ts) n
0
Characteristic
values according to
Eq. ( 5. 2-1 5) .
K = q0-q2
CD = q2/K
CI = (q0+q1+q2)/K
5.6 Tuning Rules for Parameter-optimized Control Algorithms 11 3

2·0 \ 5
-. \
0\\
~\
!

evj iJ
'
'· r-o-, \
!
lL
·o '
'· ..... ~-~ j_
l2 3
lL
"
•o..........
~
·~e
..;-
0

1 l-/
0.8 2
I /
0
//

/
, -~

04 1 J <1'.,_
o'.

~~-
0/

r----c,..
l
0 02 0·4 0·6 0 0-2 0-4 0-6
1/TG
05
jr =0.1j r---- p,\
T0 /T =0-1
\

o o -
' 'o
o---o =0-5 r--~·
'·'
l'o,
o---·-·-·-·-<1 = lO 0.3
·,.
' ' .,
r----o,

0-2 ' o,
....
o...._
1'-.......0
Figure 5.6.2 Optimal controller
0-1
parameters of the
control algorithm
3 PC-3 (PID-behaviour) I o- o- -l
due to the performance
criterion Eq. (5.6-4) 0 0-2 0-4 06
with r = 0.1 for pro-
cesses 1
t/TG
G (s) = --'--
p (1+Ts)n
Characteristic values
according to Eq.
(5.2-15): K = q 0 -q 2
CD = q2/K
CI = (q0+q1+q2)/K
114 5. Parameter-optimized Controllers

20

-~~ ;, i

·..,=~
'-.....
Q ',

l2 ........ 3
~~ /0 I
v
r------ ~
·.;..~
I I

I
0-8 2
Ll ./

04 1 / 0
// ..,.,. ..o

h........
o""'·
..,o/_...-
...
-~ -
g;..·
T
0 0-2 04 0-6 0 0.2 04 0-6
Tu/TG Tu/TG
Ir = 0-251 &
\\
o o T0 /T = 0-1 ·b,_

o---o =O.S \
0-3 'o.. .,
o-·-·-·-·-·-0 = 1-0 '-,
o, 'o
0-2 ' o,·,
o....._
.............. 0

Figure 5.6.3 Optimal controller


0-1
parameters of the
control algorithm
3 PC-3 (PID-behaviour) -~- o- o - -~I
due to the performance
criterion Eq. (5.6-4) 0 02 0-4 0-6
with r = 0.25 for pro-
cesses 1
1/TG
G (s) = ----'--
P (1+Ts)n
Characteristic values
according to Eq.
(5.2-15): K = q 0 -q 2
CD = q2/K
CI = (q0+q1+q2)/K
5.6 Tuning Rules for Parameter-optimized Control Algorithms 11 5

c) With increasing weighting factor r in the performance criterion


- K decreases
- cD decreases
- c 1 decreases

Figure 5.6.4 Characteristics of nth ~rder lags with equal time con-
stants. G(s) = 1/(1+Ts)n. Taken from (3.11]. Tu and TG
see Table 5.6.1 (DIN 19226).

With the aid of these tuning diagrams the controller parameters of a


3-parameter-control algorithm can be determined as follows, on the ba-
sis of a measured process step response.

1. The tangent to the point of inflexion is drawn and the characteris-


tic values Tu' TG and Tu/TG' and the gain KP y(oo)/u(oo) are determined.

2. From Fig. 5.6.4 Tu/T' and TG/T" follow. Then T -} (T'+T").

3. After selecting the sample time T0 , the ratio T0 /T is determined.


116 5. Parameter-optimized Controllers

4. The Figures 5.6.1 to 5.6.3 yield, after choice of the weighting fac-
tor r of the manipulated variable, the characteristic values K0 , cD
and ci depending on Tu/TG and T0 /T. Here, K0 is the loop gain K0 =K KP

5. From Eq. (5.2-15) and K K0 /Kp' cD and ci' the controller parame-
ters follow:
qo K{1+cD)
q1 K(ci-2cD-1)
q2 K CD.

Though the tuning diagrams in Fig. 5.6.1 to 5.6.3 depend on equal time
constants, this procedure for determining the controller parameters can
also be used for low pass processes with widely differing time constants,
as simulations have shown {c.f. section 3.2.4). This can also be reco-
gnized in Table 5.6.2 where a comparison is made for process III, show-
ing the optimized controller parameters and the parameters based on the
tuning rules of Table 5.6.1 and based on the tuning diagrams in Figures
5.6.1 to 5.6.3. The tuning diagrams yield controller parameters which
compare well with the optimal values. Applying the tuning rules accor-
ding to Table 5.6.1 {left part) the gain K is too large. cD and ci'
however, compare well.

Table 5.6.2 Comparison of the results of tuning rules for the controller
parameters based on step response characteristics. Process
III. T0 = 4 sec; K = 1; Tu/TG = 6.6 sec/25 sec= 0.264.

Process III To TD Process


K = CI = CD model
T = 4 sec TI To
0

Parameter optim.
for r=O .•. 0.25 1. 52 ... 1. 13 0.28 . .. 0.21 1.99 ... 0.81 Process
III
{Table 5.4.4)

Takahashi (5.16] 1 -Tus


2.34 0.33 1.3 --e
{Table 5. 6. 1) TGs

. ..
Figures 5.6.1
to 5.6.3 for 1.7 1. 2 0.27 . .. 0.17 3.8 ... 0.85
1
{1+Ts)n
r=O ••• 0.25
6. Cancellation Controllers

The problem in tracking control system design is to make the controlled


variable y follow the command input w as closely as possible. If the
model GP of a stable process is known exactly and if there is no other
disturbance, this problem could be solved by a feedforward control sys-
tem as in Figure 6.1. Ideally, one could require that the output y fol-
lows the input w exactly. This would be the case if

(6-1)

If G~T(z) is realizable, this feedforward element would "compensate"


the process completely since it has the reciprocal transfer behaviour.

y
..
Figure 6.1 Feedforward control system

For processes with time lags, however, the feedforward element is not
realizable and one has to add a "realizability term"

1 R
GlZf Gs(z) (6-2)
p

which leads to deviations between w and y. In considering the cancella-


tion of poles and zeros of the feedforward element and the process, the
effects discussed on page 120 have to be taken into account. If the
assumptions made for the design of this feedforward control element do
not hold, e.g. the process model is not known exactly and disturbances
arise, one has to change the system to a feedback control system as in
Fig. 6.2.
118 6. Cancellation Controllers

~~- IL-G_R~-u-. . . .IL-_G


e ____..
.. . P__,~,__n+-'r---
Figure 6.2 Feedback control system

Unlike a feedforward control system, one cannot require e(t)=w(t)-y(t)=O


for t ~ 0 for a feedback control system. The simple reason is that a
manipulated variable u can only be produced by a control deviation that
is non-zero at least in a transitory state. Therefore, the closed-loop
transfer function
GR (z) GP (z)
G (z)
w
= ~
w(z) (6-3)
1+GR ( z) Gp ( z)

is specified (Gw(z) + 1) and the resulting controller

1 Gw(z)
GR (z) = Gp (z) (6-4)
1-Gw(z)

is determined. The controller transfer function consists of the inverse


process transfer function and an additional term which depends on the
given closed-loop transfer function. Therefore, a part of the controller
cancels the poles and zeros of the process.

The design of these "cancellation" controllers is not only restricted


to the reference variable as input but can also be made for specified
disturbances. For a given Gn(z) = y(z)/n(z) it is e.g.
1 1-Gn(z)
GR(z) = ~ G (z) • (6-5)
P n

For the design of these cancellation controllers, many papers have been
published, especially for continuous signals. Discrete cancellation con-
trollers have been described in (6.1], [2.4], [2.14], [6.2], [6.3].

For prescribing the required closed-loop transfer function Gw(z) or


Gn(z), the following restrictions have to be noted:
6. Cancellation Controllers 11 9

a) Realizability

If a z-transfer function in the form of

s0 + s1 z + ... + Snz
n
G (z) (6-6)
m
1 + a 1 z + ... + amz

is given, then the realizability condition is n $ m if a


m
+0 (see see-
tion 3.4). In the transfer functions

the indices describe the orders of the single polynomials. With Eq.
(6-3) it follows the closed-loop behaviour is given by:

Gw(z) = P~Am + QvBn"


If GR(z) and GP(z) are realizable, i.e. if v $~and n $ m, it follov1s
the orders of the polynomials of G (z) are given by:
w.
G (z): order (v+n)
w order (~+m) ·

The pole excess of Gw(z) is therefore

pe = (~-v) + (m-n) . ( 6-7)

To make this as small as possible, ~ = v is chosen. The pole excess of


the closed-loop transfer function Gw(z) is therefore

pe (Gwl = (m-n)

and so is equal to the pole excess of the process G (z):


p

pe(G ) = (m-n) = pe(G ) . (6-8)


p w

This means that because of the realizability condition, the ?Ole excess
of the command transfer function Gw(z) has to be equal to or greater
than the pole excess of the process if the controller order is ~ ~ v
[2.19].

In general, the z-transfer function of the process given by Eq. (3.2-8)


has b 0 = 0, since for processes without jumping properties at least
the sensor or the actuator has a time lag. Then it follows in Eq. (6-6)
n = m-1, i.e. a pole excess of one, so that e.g.
-1
Gw(z) = z

could be assumed.
120 6. Cancellation Controllers

b) Cancellation of poles and zeros

If the cancellation controller GR(z) given by Eq. (6-4) and the process
GP 0 (z) are in a closed loop, the poles and zeros of the processes are
cancelled by the zeros and poles of the controller if the process model
Gp(z) matches the process exactly. Since the process models GP(z)
B(z)/A(z) used for the design practically never describe the process be-
haviour exactly, the corresponding poles and zeros will not be cancelled
exactly but only approximately. For poles A+(z) and zeros B+(z) which
are "sufficiently spread" in the inner of the unit disc of the z-plane,
this leads to only small deviations of the assumed behaviour Gw(z) in
general. However, if the process has poles A-(z) or zeros B-(z) near or
outside the unit disc one has to be careful.

To discuss this problem [2.4], the process will be described by


+ (z)
B0
- (z)
B0
GPO(z) = ~----~--- (6-9)
A~(z) A;(z)

and the process model by


B+(z) B-(z)
(6-10)
A+(z) A-(z)

If now the cancellation controller compensates ideally for the uncriti-


cal poles and zeros inside the unit circle

A~(z) A-(z)
(6-11)
B~ (z) B- (z)

the closed-loop behaviour becomes:

A-(z) B;(z) Gw(z)


Gw,res(z) (6-12)

and with A-(z) A;(z) + liA-(z)

B (z) B;(z) + 1\B-(z)

it follows that:

(6-13)
Gw,res

For 1\A- (z) 0 and 1\B-(z) = 0, the poles of this transfer function are
near or outside the unit circle. They are, however, exactly cancelled
6. Cancellation Controllers 121

by the zeros. For small differences ~A-(z) and ~B-(z) the poles change
by small amounts and are therefore no longer cancelled. Then a weakly
damped control behaviour or, if the poles are outside the unit circle,
an unstable behaviour results. Therefore, one should not design cancel-
lation controllers for processes with poles or zeros outside or near the
unit circle in the z-plane. One always has to take into account that
small differences ~A-(z) and ~B-(z) occur.

Therefore the design of cancellation controllers according to Eq. (6-4)


has to be restricted to sufficiently damped, asymptotically stable pro-
cesses with minimal phase behaviour.

c) Behaviour between the sampling points

Unlike cancellation controllers for continuous signals, with discrete


time signals the behaviour at only the sampling points is given through
prescribing a certain Gw(z). If Gw(z) is not chosen properly, the desir-
ed behaviour at these sampling points is obtained, but between the samp-
ling points deviations such as oscillations or "ripples" can occur. These
ripples are mostly weakly damped and result in large changes of the mani-
pulated variables for so called "minimal prototype responses"

-1 -1 -2 -1 -2 -3
Gw(z) = z or 2z -z or 3z -3z +z

[2.4], (2.19]. However, this problem can be bypassed through requiring


that

where KP is the process gain. Then one obtains the socalled predictor
controllers which can be of advantage for processes with large dead
times, see chapter 9.

Though the design of cancellation controllers has the advantage of sim-


plicity, it is not recommended in general because of the above restric-
tions. In particular, for processes of higher order the prescribing of
the closed-loop behaviour becomes difficult, and other design methods
are better.
7. Controllers for Finite Settling Time
(Deadbeat)

The ripples between the sampling points that can appear with the cancel-
lation controllers treated in chapter 6 can be avoided if a finite sett-
ling time is required for both the controlled variable and the manipula-
ted variable. Jury [7.1], [2.3] has called this behaviour "deadbeat-res-
ponse". For a step change of the reference variable the input and the
output signal of the process have to be in a new steady state after a
definite finite settling time. In the follwoing, methods for the design
of deadbeat controllers are described which are characterized by an es-
pecially simple derivation and for which the resulting synthesis re-
quires little calculation.

7.1 Deadbeat Controller with Normal Order

A step change of the reference variable at the instant k 0 is assumed,


giving

w(k) = 1 fork= 0,1,2, ••. (7. 1-1)

For dead time d = 0 the requirement for minimal settling time is

y(k) w(k) = 1 for k ~ m (7 .1-2)


u(k) u(m) for k ~ m.

Then the z-transforms of the reference, controlled and the manipulated


variable [7.2] become, for the case where b 0 = 0,

w(z) = (7 .1-3)
( 1-z - 1 )

y (z) y(1)z- 1 + y(2)~- 2 + ••• + 1[z-m + z-(m+ 1 ) + .•• ] (7 .1-4)

u(z) u(O) + u(1)z- 1 + ••• + u(m)[z-m + z-(m+ 1 ) + .•. ] (7 .1-5)

Dividing Eq. (7.1-4) and Eq. (7.1-5) by Eq. (7.1-3), we obtain


7.1 Deadbeat Controller with Normal Order 123

P (z) (7.1-6)

p1 y ( 1)
p2 y(2) - y(1)

Pm 1 - y (m-1)

u(z)
w(z) = qo + q1 z
-1
+ ... + q mz
-m
Q(z) (7 .1-7)

qo u(O)
q1 u ( 1) - u(O)

qm u(m) - u(m-1).

It should be notified that

(7. 1-8)
1 (7. 1-9)
u(m) Gp (1).

The closed-loop transfer function is

GR (z) Gp (z)
~ (7.1-10)
w(z) 1 +GR ( z) Gp ( z)

Hence, the cancellation controller (compare Eq. (6-4)) becomes


1 Gw(z)
(7.1-11)
GR(z) = GP(z) · 1-Gw(z) ·

Comparison of Eq. (7 .1-6) and Eq. (7 .1-10) leads to

P (z). (7.1-12)

Moreover, it follows from Eq. (7.1-6) and Eq. (7.1-7)

P (z)
GP(z) = QTZ) (7.1-13)

and with Eq. (7.1-11) the controller becomes


-1 -m
Q(z) qo+q1z + ... +qmz
GR ( z) = 1-P ( z) (7 .1-14)

The parameters of this controller are obtained through comparison of


the coefficients in Eq. (7.1-13), and with Eq. (7.1-8) and Eq_. (7.1-9)
124 7. Controllers for Finite Settling Time (Deadbeat)

q u(O)
0 + bm

(7.1-15)

The controller parameters, therefore, can be calculated in a very simple


way. It can be seen that the first manipulated variable u(O) depends
only on the sum of the b-parameters of the process. Since this sum de-
creases with decreasing sampling time, the manipulated variable u(O) in-
creases the smaller the sampling time is.

This deadbeat controller can be regarded as a cancellation controller


because of Eq. (7.1-11). However, the closed-loop transfer function, Eq.
(7.1-12) and Eq. (7.1-6), is only determined as a result of the design
and is not prespecified as described in chapter 6. The resulting closed-
loop transfer function becomes, with Eq. (7.1-12) and Eq. (7.1-6),

-1
Gw(z) = P(z) = p 1 z +

m-1
p1z +
m
z
The characteristic equation is therefore

m
z 0. (7.1-16)

Hence the control loop with the deadbeat controller possesses m poles
at the origin of the z-plane.

If d f 0 one has to use the process model [5.7]

+ b z-(m+d)
m
-1 -m
1 + a 1z + + a z
m
- -1
b 1z + ...
-1 (7.1-17)
1 + a 1z +
7.1 Deadbeat Controller with Normal Order 125

Considering

E1 b2 bd 0 am+1 0

b1+d b1

b2+d b2 a 0. (7.1-18)
v

b
m

For the command behaviour it is now required that

y(k) w(k) for k ;;,_ v = m+d


} (7.1-191
u(k) u(m) for k "'- m.

Eq. (7.1-3) to (7.1-15) can be applied using Eq. (7.1-17). Then it fol-
lows from Eq. (7.1-17) and Eq. (7.1-13) that

qo
b1 + b2 + ... + b
m
u(O)

q1 a1q0

q2 a2q0 p1 b1q0 0

qm amqo pd bdqO 0
(7.1-20)

qm+1 am+1q0 0 p1+d b1+dq0 b1q0

qv = avqo = 0 Pv bvqo bmqo

and the controller transfer function is


-1 -m
qo + q1z + + qmz
-(1+d) -(m+d) (7.1-21)
1
- p1+dz - · ·· - Pm+dz

From Eq. (7.1-20) and Eq. (7.1-21) the transfer function of the deadbeat
controller DB(v) becomes

u (z)
e ( z) -1 -d (7.1-22)
1-q0 B(z )z

Therefore, the command transfer function for an exact match of process


model and process is
126 7. Controllers for Finite Settling Time (Deadbeat)

a)

y
1.0 ---- -~--()- -o-~-~-G--~-~-~-G--G--G--
0

b) + +
0-- -·.....:..•---..----- ..·. . .:. .•-·~·~~--
... -.,
9.0 i 5 10 k
7.0
u
5.0
3.0
1.0 . . .L........_. . . . . . . . . . . . ._.........................._............__
c)
0
-1.0
5 10 k
-3.0
-5.0 + - V: _ j
DB{m) o -...... W: _r-

Figure 7.1.1 Responses of the control loop with a deadbeat controller


of order v (normal order) and process III for a step change
of w and a step change of v.
a) block diagram of the control loop
b) behaviour of the controlled variable
c) behaviour of the manipulated variable

q 0 B' (z)
(7.1-23)
z(m+d)

and the characteristic equation becomes

z(m+d) = 0. (7.1-24)

It should be noted that the deadbeat controller cancels the process poles.
7.2 Deadbeat Controller with Increased Order 127

Example 7.1 Deadbeat Controller DB(v). (v m+d).

For the low pass process III, described in section 5.4.1 and in the ap-
pendix, one obtains for T0 = 4 sec the following coefficients of the
deadbeat controller by using Eq. (7.1-20):

9.523 -14.285 6.714 -0.952


0 0.619 0.457 -0.0762.

In Figure 7.1.1 the responses to step changes of the command variable


w and the disturbance variable v can be seen. The required deadbeat be-
haviour for the change of w is produced. 0

7.2 Deadbeat Controller with Increased Order

If the finite settling time is increased from m to m+1 then one value
of the manipulated variable can be prescribed. Since the first manipu-
lated variable u(O) generally is the largest one, this value should be
reduced by prescribing it [5.7].

In Eq. (7.1-4) and Eq. (7.1-5) one more step is admitted. Then Eq. (7.1-6)
and Eq. (7.1-7) become

-1 -2 - (m+ 1)
P ( z) p1z + p2z + ... + Pm+1z ( 7. 2·-1)

-1 -.(m+1) (7. 2-2)


Q (z) qo + q1z + ··· + qm+1z ·

Comparison of the coefficients in Eq. (7.1-13) then leads to

- (m+1)
Pm+1z
(7.2-3)
- (m+1)
+ · · .+ qm+1 z

This equation can only be satisfied if the right hand term has the same
root in both the denominator and numerator. Hence,

-m -1
P (z)
+ ... + p~z ) (a-z )
(7.2-4)
Q(z) m -1
+ .•. + ~z ) (a-z )
128 7. Controllers for Finite Settling Time (Deadbeat)

If the coefficients in Eq. (7.2-3) are compared one obtains, after di-
vi ding by q'0

q' a1q0 p' b1q0


1 1
qi a2q0 p'
2 b2q0
(7 .2-5)

q'm = amqo p' = bmqo


m
.
Now, the numerator and denominator in Eq. (7.2-4) are written fully,
and from the comparison of the coefficients with the right hand side of
Eq. (7.2-3) and Eq. (7.2-4), the following equations are obtained:

(7.2-6)

(aq~-q~-1) Pm (ap~-p~-1)

-q~ Pm+1 -p~ •

From Eq. (7.1-7) we have

(7.2-7)

and with Eq. (7.2-1) or Eq. (7.1-6)

P1 + ••• + Pm+1 = 1 •

Now, it follows from Eq. (7.2-6) and Eq. (7.2-5)

q' (7.2-8)
0

The parameters of the controller now read, using Eq. (7.2-7) and Eq.
(7 .2-8),

qo u(O) (given)

q1 q 0 (a 1-1l + Eb1
i
a1
q2 qo (a2-a1) + Eb.
~
(7 .2-9)
am-1
qm qo (am-am-1) + Yi:i-:-
~

1
qm+1 am (-qo + Eb.)
~
7.2 Deadbeat Controller with Increased Order 129

(7.2-10)

The controller transfer function is now, Eq. (7.1-14),


- (m+1)
Q(z) +. · .+ qm+1z
(7.2-11)
1-P (z) - (m+1)
- ... - Pm+1z

Unlike c9ntroller Eq. (7.1-14), here the initial manipulated variable


u(O) = q 0 is given. The second manipulated variable then becomes (see
Eq. ( 7. 1-7) and Eq. ( 7. 2-9)) :

(7.2-12)

u(O) should not be chosen too small because then u(1) > u(O), which is
unsuitable in most cases.

If u(1) s u(O) one requires

(7.2-13)

Even if u(1) s u(O) is specified it is not, of course, certain that for


k ~ 2 then lu(k) I < lu(O) I. Since the calculation of the parameters is
relatively simple, one proceeds iteratively, i.e. one varies u(O) so
long as there is adequate behaviour. Often choosing u(1) = u(O) gives
a good result.

For processes with deadtime d>O, one proceeds according to Eqs.(7.1-17)


to (7.1-21). Then, based on the equations corresponding to Eq. (7.2-11),
Eq. (7.2-9) and Eq. (7.2-10), the transfer function of the deadbeat
controller DB(v+1) becomes
-1 -1
q 0 A(z )[1 - z fa]
(7.2-14)
1-o B(z- 1 )z-d[1-z- 1 /a]
·o

(7.2-15)

The characteristic equation is


zm+d+1 = O. (7.2-16)
130 7. Controllers for Finite Settling Time (Deadbeat)

Example 7.2 Deadbeat Controller DB(v+1)

For the same process as in example 7.1, Fig. 7.2.1 shows step responses
for given u(O) with controller parameters:

3.810 -0.0012 -5.884 3.647 -0.571

0 0.247 0.554 0.244 -0.046.

For a step change of the reference variable the desired deadbeat beha-
viour is obtained. The manipulated variable u(O) could be decreased to
40% of the value in Fig. 7.1.1. The control variable needs one more
sample time to reach the finite settling time. For a step change of
the disturbance v, a relatively well damped behaviour can be seen. The
chosen initial manipulated variable u(O) leads to a somewhat worse con-
trol performance. However, this deadbeat controller can be applied more
generally as it produces smaller amplitudes of the manipulated variable.
0

y
1.0 ------ Q_-G--G--o--G--()--()--()--G--G--o-
0

+ +
+ +
0 + +
+ +
o-,-·~·-----.,---------~---·~·~·--
9.0 5 10 k

7.0
u
5.0
3.0
1.0 L.. ••••••••-••M•W•M-·•••••·••·--••••••••-••"''''"'

0~~~~------------~
-1.0
. . .J 5 10 k
-3.0
-5.0 +-v:_r-
DB(m+1) o ········ w: ...r-

Figure 7.2.1 Responses of the control loop with deadbeat controller


of (v+1)th order (increased order) and process III to
step changes of w and step changes of v. For the design
u(O) = u(1), Eq. (7.2-13), has been set.
7.3 Choice of the Sample Time for Deadbeat Controllers 1 31

The step responses of the deadbeat controllers used in example 7.1 and
7.2 are shown in Fig. 7.2.2. For the controller DB(v), a negative u(1)
follows the initial large positive manipulated variable u{O); this op-
posite control is required because of the large value of u{O). The va-
lue of u(2) is positive, and after the oscillations have decayed an in-
tegral behaviour with increasing u{k) arises.

ulkl ulk)
Figure 7.2.2 Step responses of
10 10
control algorithms
for finite sett-
ling time. Process
III.
a) DB(V)
b) DB (v+1)

10 20 k 10 20 k

-5 a) -5 b)

Because of the assumption that u(1) = u(O) in closed-loop, and because


of the dead time d = 1 of the process and, therefore, p 1 = 0, u(O) and
u(1) are both equal for the controller DB(v+1). Then a negative value
of u(2) occurs, followed by integral behaviour after some oscillations.

In comparison with the step responses of a PID controller, the deadbeat


algorithms generate a larger lead effect which is followed by decaying
oscillations to produce the finite settling time of the closed loop.

7.3 Choice of the Sample Time for Deadbeat Controllers

From Eq. (7.1-15) the manipulated variable u(O) is inversly proportio-


nal to the sum of the numerator parameters of the process model. How-
ever, this sum increases (c.f. Table 3.7.1) with the sample time, so
that the magnitude of the manipulated variable can be decreased by in-
creasing the sample time, or vice versa for a given range of a reference
variable step a suitable sample time can be determined. Table 7.3.1
shows the manipulated variable u{O) as a function of the sample time
for a process of third order given by Table 3.7.1.
132 7. Controllers for Finite Settling Time (Deadbeat)

Table 7.3.1 Influence of the sample time on the manipulated variable


u(O) for deadbeat controllers with the third order process
of Table 3. 7. 1

Controller T0 (sec] 2 4 6 8 10

DB(v) u(O) = qo 71.5 13.3 5.75 3.81 2.50

-- ( 1-a 1 ) 3.25 2.71 2.30 2.00 1.77

DB(v+1) u(O) = qo 22.0 4.91 2.50 1. 91 1. 41

To avoid u(O) becoming too large, the sample time for the DB(v)-control-
ler should be T0 ~ 8 sec. This corresponds to

Here TE is the sum of the time constants and T 95 the 95 % settling time.

If the settling time is increased by one sample interval, i.e. a DB(v+1)-


controller is used, from Eq. (7.2-13) u(O) can be decreased at most by
the factor (1-a 1 ). This means for the example in Table 7.3.1 a decrease
of about 1.8 to 3.3, depending on the sample time. For the DB(v+1)-con-
troller the sample time should be T0 ~ 5 sec or

If that maximum possible u(O) is assumed which can be chosen from the
allowable range of the manipulated variable, the sample time for the
DB(v+1)-controller can be smaller than for the DB(v)-controller.

Table 7.3.2 compares suitable sample times for a parameter-optimized


controller and for the deadbeat controllers considered for the process
III. The recommended smallest sample time can be about the same for
3PC-3 and DB(v+1). However, for DB(v) a value twice this can be chosen.

Tab,le 7.3.2 Comparison of the sample times for parameter optimized and
deadbeat controllers for the example of process III.
Assumption u(O)max ~ 4.5

3PC-3 (PID) DB(v) DB(V+1)


Controller (r = 0)

TO/T95 ~ 0.12 ~ 0.2 ~ 0.10


7.3 Choice of the Sample Time for Deadbeat Controllers 133

The form of the deadbeat controllers discussed in this section requires


particularly few calculations. Therefore, they should be applied when
the synthesis must be repeated frequently, e.g. in adaptive control sys-
tems. However, as these deadbeat controllers compensate for the poles
of the processes, as in Eq. (7.1-22) and Eq. (7.2-14), they should not
be applied to processes with poles outside or in the vicinity of the unit
circle in the z-plane (see chapter 6,(7.1D. The application of deadbeat
controllers has to be restricted to asymptotically stable processes
(c.f. section 11.1).
8. State Controllers

For designing a controller using the methods described in the previous


chapters either the structure of the controller has to be given and the
controller parameters found by minimizing a performance criterion,
chapter 5, or the structure and the parameters result from the desired
behaviour of the closed loop, chapter 6 and 7. In both cases it is as-
sumed that the closed loop is in a steady state before a disturbance
occurs. These special assumptions need not be made in the design of
state controllers. The structure and the parameters of the state con-
troller result from the optimization of a quadratic control performance
criterion, and the initial and final conditions can differ from zero.
Initially it will be assumed that all state variables are measurable.

Section 8.1 considers the optimal control of a process from a given


initial state into the zero state. The design of optimal state control-
lers for external disturbances is described in section 8.2. In both
cases the calculation of the optimal controller requires the solution
of a matrix Riccati difference equation.

The parameters of a linear state controller can also be determined from


the chosen coefficients of the characteristic equation, section 8.3,
which is computationally simple. After transforming the process equa-
tion into a diagonal form, the controller parameters can be determined
by independently chosing the poles of the closed loop, provided certain
requirements are satisfied. This is called modal state control, section
8.4. The parameters of the state controller can also be simply deter-
mined if the design objective of a finite settling time is applied,
section 8.5.

If some state variables are unmeasurable they must be reconstructed by


an observer, section 8.6. The combination of state controllers and ob-
servers with the process is considered in section 8.7 for both initial
value disturbances and external disturbanqes. Finally, the design of
obervers of reduced order, section 8.8, and the selection of free de-
sign parameters, section 8.9, is considered.
8.1 Optimal State Controllers for Initial Values 135

As the derivation of state controllers for multivariable control sys-


tems and single variable control systems differ only by writing the
manipulated and controlled variables in vector form and by using para-
meter matrices instead of parameter vectors, the general multivariable
case is considered in the following sections.

8.1 Optimal State Controllers for Initial Values

It is assumed that the state equation of the process

~(k+1) !::_ ~(k) +!!. ~(k) ( 8. 1-1)

with constant parameter matrices A and B is given, together with the


initial condition ~(0) (see Figure 8.1.1). It is assumed initially that
all state variables x(k) are exactly measurable.

x(OJ

r- -, y!kl
---=--=t c r:-~..:c-=:;>
L_-:.....J

Figure 8.1.1 State model of a linear process

Now a controller has to be determined which generates a manipulated


variable vector ~(k) from the state variable vector ~(k) so that the
overall system is controlled into the final state ~(N) ~ 0 and the
quadratic performance criterion
N-1
I= ~T(N)£ ~(N) + Z (~T(k)Q ~(k) + ~T(k)~ ~(k)] ( 8. 1-2)
k=O
is minimized. Here

s is positive semidefinite and symmetric,


g is positive semidefinite and symmetric,
R is positive definite and symmetric,

i.e. xTS X 2: 0, ~Tg X 2: 0 and uTR u > 0.


136 8. State Controllers

These conditions on the weighting matrices ~' g and ~ result from the
conditions for the existence of the optimum of I, and can be discussed
as follows. Meaningful solutions in the control engineering sense can
only be obtained if all terms have the same sign, e.g. a positive sign.
Therefore, all matrices have to be at least positive semidefinite. If
~ = Q, i.e. the final state ~(N) is not weighted, but g + Q, i.e. all
states ~(O), ••. ,~(N-1) are weighted, a meaningful optimum also exists.
That means that if ~ is positive definite ~ can also be positive semi-
definite. The converse is also true. One should, however, exclude the
case where S = 0 and g = Q, for then the states ~(k) would not be weigh-
ted and only the manipulated variables would be weighted by ~ +Q,
which is nonsense. R has to be positive definite for continuous-time
- -1
state controllers as R is involved in the control law. For time-dis-
crete state controllers, however, this requirement can be relaxed as
described later.

As only the case where ~(N) ~ Q is considered here, ~ = g is chosen.


In this case, g should also be positive definite. Note that in this
problem the influence of the reference variables and external disturban-
ces is ignored, and that the output variables

y(k) = f ~(k) (8 .1-3)

are not fed back. Instead, we consider the modification of the process
eigen behaviour and stabilization through state feed back. If the opti-
mal manipulated variable u(k) is found then
N-1
min I= min {xT(N)Q ~(N) + E (xT(k)Q x(k) + ,!!T(k)~ ,!!(k)]}
,!!(k) - - k=O - - - ( 8. 1-4)
k = 0,1,2, ••. ,N-1

The calculation of the optimal manipulated variable is a problem of


dynamic optimization which can be solved by variational calculus,
applying the maximum principle of Pontryagin or the Bellman optimiza-
tion principle [8.1]. The solution outlined below was given by Kalman
and Koepcke [8.2] and uses the optimality principle.

Remarks
a) According to the optimality principle of Bellmann each final element
of an optimal trajectory is also optimal. This means that if the
end point is known, one can determine the optimal trajectory in a
backward direction.
8.1 Optimal State Controllers for Initial Values 137

b) Because of the state equation (8.1-1), ~(k) influences the future


states ~(k+1), ~(k+2), . . . . Therefore, one can calculate the opti-
mal ~(k) by backward calculation. Hence, Eq. (8.1-4) is written as:

min I = min
[
min {~T (N) .Q ~ (N)
~(k) ~ (N-1)

k = 0,1, •.. ,N-2 (8.1-5)


N-1 ]
+ k:O[~T(k).Q ~(k) + ~T(k)~ ~(k)]} •
It follows that
N-1 N-2
min { ... } L
xT(k)Q x(k) + L uT(k)R u(k) +
~(N-1) k=O - - - k=O - - -
(8.1-6)
+min {xT(N)Q x(N) + uT(N-1)R u(N-1)}
u(N-1) - --

as the two first terms are not influenced by ~(N-1) and IN- 1 ,N are
the costs of k = N-1 to k = N resulting from ~(N-1). If the state
equation

~(N) ~ ~(N-1) + ~ ~(N-1)

or (8.1-7)
T T T T T
~ (N) = ~ (N-1)~ + ~ (N-1) ~

is considered as a further condition, it follows from Eq. (8.1-6)


that

min {xT(N)Q x(N) + uT(N-1)R u(N-1)} =


~(N-1) - - - - - -

min {xT(N-1)ATQ A x(N-1) + ~ ~T(N-1)~Tg ~ ~(N-1)


~(N-1)- ----
T T T
+ ~ (N-1)~ .Q ~ ~(N-1) + ~ (N-1)~ ~(N-1)} =

xT(N-1)ATQ A x(N-1) +min {2xT(N-1)ATQ B u(N-1)


---- ~(N-1) - ----

(8.1-8)

To minimize Eq. (8.1-8) the following relations are valid

a .Q, o.
min
~(N-1)
{ ... } = a~(N-1){. •• } > (8.1-9)
138 8. State Controllers

Hence, using the rules for taking derivatives of vectors and matrices
given in the appendix

a 0
a~(N-1 > {. • ·}

and
-(~TQ ~ + ~)-1~TQ ~ ~(N- 1 )
- !_(N-1) ~(N-1). (8.1-10)

Here
~(N-1) (8.1-11)

and
(8.1-12)

The costs IN- 1 ,N resulting from ~(N-1) can therefore be formulated as


a function of the initial condition ~(N-1) for this stage:
IN-1,N = ~T(N-1)~TQ ~ ~(N-1)-2~T(N-1)~TQ ~(~TQ ~ + ~)-1~TQ ~ ~(N-1)
+ ~T(N- 1 )~TQ ~(~TQ ~ + ~)-1~TQ ~ ~(N- 1 )
~T(N- 1 )(~TQ ~ _ ~TQ ~(~TQ ~ + ~)-1~TQ ~] ~(N- 1 )
~T(N-1)(~TQ ~- !_T(N-1) (~TQ ~ + ~)!5_(N-1)]~(N-1)
T (8.1-13)
- ~ (N-1) !:'_N_ 1 ,N~ (N-1) .
Here

!:'_N- 1 ,N = ~TQ(! _ ~(~TQ ~ + ~)-1~TQ] ~


= ~TQ ~- ~T(N-1) (~TQ ~ + ~)!_(N-1). (8.1-14)

I or min I according to Eq. (8.1-5) and Eq. (8.1-6) can be given as func-
tion of ~(k), k = o, ... ,N-1 and ~(k), k = o, ... ,N-2. Thus the unknowns
~(N) and ~(N-1) can be eliminated. In order to perform this elimination,
first IN- 1 ,N from Eq. (8.1-13) is substituted in Eq. (8.1-6), resulting
in
N-1 T T
min {xT(N)Q x(N) + L [x (k)Q x(k) + ~ (k)~ ~(k)]}
~ (N-1) - - - k=O - - -
N-1 N-2 T T
L XT(k)Q x(k) + k~O~ (k)~ ~(k) + ~ (N-1)!:'_N- 1 ,N~(N-1)
k=O- - -
N-2
L [xT(k)Q x(k)
k=O - - -

(8.1-15)
8.1 Optimal State Controllers for Initial Values 139

The abbreviation

~N-1 = ~N-1,N + 9 (8.1-16)

is introduced so that in Eq. (8.1-15)

T
IN-1,N + ~ (N-1)9 ~(N-1)
T
~ (N-1)~N- 1 ~(N-1) (8.1-17)

can be formed. In this abbreviation the costs of the last step and the
evaluation of the corresponding initial deviation ~(N-1) are included.
(This compression allows a simpler formulation of the following equa-
tions.) If Eq. (8.1-16) is introduced into Eq. (8.1-15), and if there-
sult is placed into Eq. (8.1-5) it follows that:
N-2 T T
min I min f min { Z (~ (k)9 ~(k) + ~ (k)~ ~(k)]
u(k) u(N-2) k=O

k ~, ... ,N-3- + ~T(N-1)~N- 1 ~(N-1)}J (8.1-18)

Instead of min now it reads min as the optimal ~(N-1) and the re-
u(N-1) u(N-2)
sulting state ~(N) have been calculated and substituted. For the term
min one obtains by analogy to Eq. (8.1-6)
~(N-2)
N-2 N-3
min { ... } Z XT(k)Q ~(k) + Z UT(k)R u(k) +
~(N-2) k=O- - k=O- - -

+min {uT(N-2)R u(N-2) + ~T(N-1)~N- 1 ~(N-1)}.


~(N-2) - --

(8.1-19)

IN- 2 ,N describes the costs resulting from the last two stages
T T (8.1-20}
IN- 2 ,N = ~ (N-2)~ ~(N-2) + ~ (N-1)9 ~(N-1) + IN- 1 ,N.

If now the state equation is again considered

~(N-1) = ~ ~(N-2) + B ~(N-2)

it follows that

IN- 2 ,N =min {uT(N-2) (R + ~T~N- 1 ~)~(N-2) +


~(N-2) - -
T T T T
+ 2~ (N-2)~ ~N- 1 ~ ~(N-2) + ~ (N-2)~ ~N- 1 ~ ~(N-2)} =
~T(N-2)~T~N- 1 ~ x(N-2) +min {~T(N-2) (~+~T~N- 1 ~)~(N-2) +
~(N-2)
T T
+ 2~ (N-2)~ ~N- 1 ~ ~(N-2)}. (8.1-21)
140 8. State Controllers

This results by analogy to Eq. (8.1-10), in

0 T -1 T
~ (N-2) = - (~ + ~ ~N-1~) ~ ~N-1~ ~(N-2) - KN_ 2 ~(N-2). (8.1-22)

Hence, the controller ~N- 2 becomes

T -1 T
~N-2 = (~ + ~ ~N-1~) ~ ~N-1~" (8.1-23)

Therefore the minimal costs IN- 2 ,N for the two last stages become using
Eq. ( 8. 1-21 ) :

IN-2,N =

with

(8.1-25)

Now, the minimum of I with respect to ~(N-2) can be formulated accor-


ding to Eq. ( 8 . 1-1 9)
N-2
min I l: XT(k)Q X(k)
~(N-2) k=O- - -
N-3 T T
Z: [xT(k)Q x(k) + ~ (k)~ ~(k)]+~ (N-2) (~N- 2 ,N+g)~(N-2)
k=O - - -

IN-2 (8.1-26)
If the abbreviation

~N-2 = ~N-2,N + g (8.1-27)

is introduced again, the costs of the two last stages including the
weighting of the initial deviation ~(N-2) results in
T T
!N- 2 ,N + ~ (N-2)2 ~(N-2) ~ (N-2) (~N- 2 ,N + g)~(N-2)

T (8.1-28)
~ (N-2)~N- 2 ~(N-2).
8.1 Optimal State Controllers for Initial Values 141

Considering the state equation, I can now be expressed as a function of


~(k) and ~(k) with k = O, ••• ,N-3. Then ~0 (N-3) can be determined, etc.

In general terms, one obtains a linear, time-variant state aontroZZer

~O(N-j) = - ~-j~(N-j) j = 1,2, ••• ,N (8.1-29)

which is a proportional-action negative-feedback to the process input


via the gain matrix K __ ., Figure 8.1.2.
=-"N-J

~(0)

_\:!(k) x(k) r---, ylkl


-=:>1
1--""-'"'---_- cl: ~-=:.>
L---J

Figure 8.1.2 Process state model with optimal state controller K for the
control of an initial state deviation x(O). It is assumed
that the state vector x(k) is exactly and completely mea-
surable. -

Its parameters are obtained from the recursive equations

~-j (8.1-30)

-PN -].

(8.1-31)

with ~N = Q as the initial matrix. The last equation is a matrix Ricca-


ti difference equation. For the value of the performance criterion Eq.
(8.1-2) we have:

(8.1-32)

with k = O,l, ••• ,N-1. The minimal value of the quadratic criterion can
142 8. State Controllers

therefore be expressed explicitly as a function of the initial state


-x(O). As is shown in a following example, -KN -J. converges for j = 1,2,
•.• ,N, i.e.

~N-1' ~N-2' ... , ~2' ~1' ~0

to a fixed final value when N ~ oo

so that in the limit a time-invariant state controller

- K: ~(kl (8.1-33)

is produced. The controller matrix is obtained from

K: (B: + ~T~ ~)-1~T~ ~ (8.1-34)

with p E.o as solution of the stationary matrix Riccati equation


'F lim p
-N-J
. .Q + ~T~(_! - ~(B: + BTP ~)-1~TE_] A . (8.1-35)
N~oo

The solution of this nonlinear equation is obtained from the recursive


equation (8.1-31). The time-invariant state controller of Eq. (8.1-33)
is most appropriate for practical applications. Because of the matrix
inversion,

det[B: + ~T~ ~] +0
has to be satisfied, but because of the recursive calculation or the
existence conditions for the optimum, we must also have:

det (B: + BTQ B] +0 ->- Eq. ( 8 . 1 -1 2 , Eq. ( 8 . 1 -11 )

det(B: + BTE_N-j+ 1 ~] +0 ~ Eq. (8.1-30).

This means that the terms in the brackets have to be positive definite.
This is satisfied in general by a positive definite matrix R. R 0 can,
T -
however, also be allowed if the second term~ E_N-j+ 1 ~ > 0 for j 1,2,
..• ,Nand g0. Since E_N-j+ 1 is not known a priori, R > 0 has to be
>
required in general.

For the closed system from Eq. (8.1-1) and Eq. (8.1-33) we have

~(k+1) = [~ - ~ ~] ~(k) (8.1-36)


8.1 Optimal State Controllers for Initial Values 143

and therefore the characteristic equation becomes

det(z ! - ~ + ~ EJ = O. (8.1-37)

This closed system is asymptotically stable if the process Eq. (8.1-1)


is completely controllable. If it is not co~letely controllable then
the non controllable part has to have asymptotically stable eigen va-
lues [8.4].

Example 8.1.1

This example uses test process III, which is the low pass third-order
process with deadtime described in section 5.4. Table 8.1.1 gives the
coefficients of the matrix PN ., and Fig. 8.1.3 shows the controller
- -J
coefficients kNT . as functions of k = N-j (see also example 8.7.1).
- -J

Table 8.1.1 Matrix ~-j as a function of kin the recursive solution


of the matrix Riccati equation for process III [8.5].

0.0000 0.0000 0.0000 0.0000


!:.29 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 1 .0000 0.0000
0.0000 0.0000 0.0000 0.0000

0.0000 0.0000 0.0000 0.0000


!:.28 0.0000 1 .0000 1.5000 0.0650
0.0000 1.5000 3.2500 0.0975

0.9958 1.4937 1.5385 0.1449


!:.27 1.4937 3.2405 3.8078 0.2823
1 . 5385 3.8077 5.6270 0.3214
0.1449 0.2823 0.3214 0.0253

6.6588 6.6533 5.8020 0.7108


!:.24 6.6533 7.9940 7.7599 0.8022
5.8020 7.7599 8.9241 0.7529
0.7108 0.8022 0.7529 0.0822

7.8748 7.5319 6.4262 0.8132


!:.21 7.5319 8.6296 8.2119 0.8763
6.4262 8.2119 9.2456 0.8056
0.8132 0.8763 0.8056 0.0908

7.9430 7.5754 6.4540 0.8184


!:.19 7.5754 8.6573 8.2296 0.8796
6.4540 8.2296 9.2570 0.8077
0.8184 0.8796 0. 8077 0.0912

7.9502 7.5796 6.4564 o.8189


!:.1 7.5796 8.6597 8.2310 0.8799
6.4564 8.2310 9.2578 0.8079
0.8189 0.8799 0.8079 0.0913
144 8. State Controllers

kI
10 t
.
08t++++++ ++++++++ ++++++++
1 & & £ • • • a • • • • a a • a a • a a a & • +
-
• +
0.6 -
• k,
04 + k2
- kJ
+
0.2

+
0 •-+-+-
10 20 k 30

kj
1.0 1
0.8 i . . . . . . . . . . . . . . . . . . . . .
1 ++++++++ ++++++++ ++++++"

o.6 I -------- -------- -----: ~


• k,
0.4 + k2
- k3
k,
.
:t
0.2 y

VYYVVYYYYYVYYVVVYYYYYyyy

0-+----- -+------ --+----- .ll<..--.. ..


.
...

10 20 - - k 30

Figure 8.1.3 Coefficients of the state controller~~-· as functions of


k = N-j in the recursive solution of theJmatrix Riccati
equation for process III [8.5]
8.2 Optimal State Controllers for External Disturbances 145

The recursive solution of the matrix Riccati equation was started for
j = o, with R r = 1, N = 29 and

=l~ n
0 0
0 0
.Q
0
0 0

A value of .Q was assumed to give

T
(c.f. section 8.9.1). The coefficients of PN . and kN . do not change
- -J - -J
significantly after about ten stages, i.e. the stationary solution is
quickly reached.
D

In this section i t was assumed that the state vector ~(k) can be measu-
red exactly and completely. This is occasionally true, for example for
some mechanical or electrical processes such as aircraft or electrical
networks. However, state variables are often incompletely measurable;
they then have to be determined by a reference model or observer (see
section 8.6).

8.2 Optimal State Controllers for External Disturbances


The resulting time-invariant state controller for an initial value de-
viation ~(0) has a proportional action. Therefore, constant disturban-
ces to state variables cannot be controlled without offset. For con-
stant disturbances the state controller has to be modified, as in [8.3],
L8.6], (8.7], [8.4].

We now consider the case where aonstant referenae variabtes~(k) and dis-
turbanaes ~(k) arise. These constant signals can be generated [8.8]
from definite initial values ~(0) and y(O) by areferenae variabtemodeZ
(c.f. Figure 8.2.1):

~(k+1) !:_ ~(k) + B y(k)


y(k+1) y(k) } (8.2-1)
~(k) c ~(k).

Here dim y(k) dim ~(k). For example, if a step change of the refe-
146 8. State Controllers

Ylol V( ol

'!! lkl

II
II ~ ( k )

Figure 8.2.1 State model of a linear process with a reference and dis-
turbance variable model for the generation of constant
reference signals w(k) and disturbances n(k). The result-
ing state controller is illustrated by dashed lines.

renee variable w(k) = w0 1 (k) is required for a process with proportio-


nal action, y(O) = ~O is taken. Then element (~'~' f) of the reference

variable model is excited by a step and generates a response w (k) at


-y
its output. The constant difference for a step function ~(k)

-w{k) - -wy {k) = -v


w (k)

can be generated by suitable initial values ~(0). Selecting appropriate


values of .Yo and ~(0) a step in ~(k) can be modelled. The values of .Yo
depend on
~
and on the gain factors of the model (~, ~, £). I f the
gains are unity, Yo = ~o· Giving other values of ~(0) can generate
other signals ~(k) which converge also for k -+ oo to a fixed value ~-

As with the reference variable model a di s tur b ance va r iable modeZ is


assumed to be
8.2 Optimal State Controllers for External Disturbances 147

!1 (k+1) ~ !}(k) + B .f(k)

1 (k+1) .f(k)

£(k) c n.(k)

with dim 1_(k) dim £ (k) (see Fig. 8. 2. 1) . Based on the initial values
1_(0) and !}(0) a constant £(k) can be generated.

Using other structures of the preset model with state variables y(k)
or 1_(k) other classes of external signal can be modelled. For example
a linearly increasing signal of first order can be obtained from

~(k+1) =~ ~(k) + ~ y2(k)

y2 (k+1) y2 (k) + y1 (k)

y1 (k+1) y1 (k).

The state variables of the process model, the reference variable and
the disturbance model are combined into an error state variable ~(k),
so that for the control deviation ~(k) we have

~(k) ~(k) - y(k) - £(k)

~[~(k) - ~(k) - !}(k)] ~ ~(k). (8.2-4)

l=[~ ~lr~(k)
The overall model is then described by

[ ~(k+1) ]-[~]u(k).
Q -
(8.2-5)
y(k+1)-1_(k+1) Q .!_ y(k)-1_(k)

As in the previous section, it is assumed that the state variables ~(k),


y(k) and 1_(k) are all completely measurable. The manipulated variable
is now separated into two terms

."!:!1 (k) + ."!:!2 (k) • (8.2-6)

If
~1 (k) = y(k) - .f(k) (8.2-7)

is taken, the influences of y(k) and of 1_(k) on y(k) are completely


compensated. Therefore, ~ 1 (k) controls the effect of the initial va-
lues y(O) and 1_(0). This corresponds to ideal feedforward control ac-
tion. Then the partial control ~ 2 (k) has to control the effects of the
initial values

£(0) ~(0) - ~(0) - !}(0) (8.2-8)


148 8. State Controllers

by using a feedback state controller. Hence we are left with the syn-
thesis of an optimal feedback state controller for the initial values
of the system

~(k+1) = ~ ~(k) - ~ ~2(k). (8.2-9)

Here, the corresponding quadratic control performance criterion


T N- 1 T T
I ~ (N)g ~(N) + L (€ (k)Q E(k) + ~ 2 (k)~ ~ 2 (k)] (8.2-10)
€ k=O - - -

has to be minimized. However, this problem is like that solved in sec-


tion 8.1, so that the optimal time-invariant state controller given by
Eq. (8 .1-33) is

~2(k) ~ ~(k). (8.2-11)

Unlike the control of initial values ~(0) (section 8.1), this state
controller controls the system with initial values ~(0).

The overall controller for constant disturbances is now composed of

- a state controller for initial values ~(0) + ~ 2 (k)

- a feedforward control of y(k) and ~(k) + ~ 1 (k)

and therefore with Eq. (8.2-6), Eq. (8.2-7) and Eq. (8.2-11)

€ (k) ]
~(k) = (~_!] [ - . (8.2-12)
y(k) - ~(k)

This controller is illustrated in Fig. 8.2.1 by dashed lines.

The overall model of Eq. (8.2-5) can be represented by using the abbre-
viations

~*(k)
--[ ~(k) A*
y(k)
(8.2-13)

B* C* (f .Q]

as follows

~* (k+1) (8.2-14)

~(k) (8.2-15)

and the constant state controller becomes


~ (k) (8.2-16)
8.2 Optimal State Controllers for External Disturbances 149

with
K* = (~ _!]. (8.2-17)

If each output variable yi(k) has a corresponding command variable wi(k)


and disturbance ni(k) then

dim x* dim £ + dim~ =m + r.

Hence~* is a (m+r)x(m+r)-matrix.

The characteristic equation of the optimal state control system for


stepwise external disturbances is

[~
I - A + B K
det [z I - A* - ~*~*] det
(z-1)i]

det [z I - A + B ~](z-1)q

0 (8.2-18)

i f dim }.(k) = dim 1(k) dim .!!1 (k) dim _!!(k) q.

Assuming the given models for external disturbances, the control sys-
tem for q manipulated variables acquires q poles at z = 1, i.e. q "in-
tegral actions", which compensate for the offsets. For a single input/
single output system the characteristic equation becomes

det [z.!- ~ + bkT](z-1) = 0. (8.2-19)

Hence it has (m+1) poles. Note that the characteristic equation has
poles at z = 1 as the system is open loop with respect to the additio-
nal state variables }.(k) and 1(k).

Another way to counteract the offsets due to external disturbances and


reference variables is to add a pole at z = 1 to the process model. For
a process with output

~(k) =f. ~(k)

this corresponds to the introduction of additional state variables

~(k+1) = ~(k) + ~ ~(k) (8.2-20)

i.e. by adding summing or "integration" action terms. Here F is a dia-


gonal matrix. Using rectangular integration the diagonal elements of F
can be interpreted as the ratio of the sample time T0 to the integration
time TI., f .. = T0 /TI .• The state controller is then of the form [8.4],
~ ~~ ~
150 8. State Controllers

[8.5]

~(k) ~ ~(k) - ~I~(k). (8.2-21)

Unlike adding reference variable and disturbance models, the addition


of an integration element has the disadvantage that constant disturban-
ces ~v(k) at the process input cannot be controlled without offset
(c.f. chapter 4). Furthermore, the integration time constants fii can
be chosen arbitrarily, because they are not determined by the criterion
Eq. (8.1-2) used for the controller design. Therefore this integral ac-
tion does not suit the design requirement for a state controller having
a closed-form solution.

If the state variables ~*(k) cannot be measured exactly as assumed for


this section, they have to be reconstructed by a process model. Only
then do the advantages of a state control system as in Fig. 8.2.1 become
clear, see section 8.7.2.

8.3 State Controllers with a Given Characteristic Equation

A controllable process with state equation

~(k+1) = ~ ~(k) + ~ £(k) (8.3-1)

may be changed by state feedback

~(k) ~ ~(k) (8.3-2)

such that the poles of the total system

~(k+1) = [~- ~ ~] ~(k) (8. 3-3)

or the coefficients of the characteristic equation

det [z ! - ~ + ~ ~] 0 (8.3-4)

are given. The procedure of pole assignment of a state feedback con-


trol system will be discussed for a single input/single output process.
The state equation is transformed into the controller canonical form,
as in Table 3.6.1,
8.3 State Controllers with a Given Characteristic Equation 1 51

~ (k+1)
0

-am-1 · · ·
J~(k). m u (k). (8. 3-5)

The state feedback is

T
u(k) - ~ ~(k) (8.3-6)

Eq. (8.3-6) and Eq. (8.3-5) yield


0 0

~.
~(k+1) ~ (k). (8.3-7)

(-am-km)
0 0
(-a
m-1
-k
m-1
) (
1
-k
1
,j
Hence, the characteristic equation is

(8. 3-8)

0.

This equation leads to following relationships for the coefficients of


the feedback vector kT

k. =a. - a. i = 1 , 2, ... ,m. (8.3-9)


l l l

The coefficients ki are zero if the characteristic equation of the pro-


cess is not changed by the feedback, ai = ai. The coefficients ki in-
crease if the coefficients ai of the closed system are changed in a po-
sitive direction compared with the coefficients ai. Therefore, the ma-
nipulated variable u(k) becomes increasingly active when the coeffi-
cients ai are changed by the controller to be further away from the
coefficients ai. The effect of a state feedback on the eigen behaviour
can therefore be clearly interpreted.

In poZe assignment design the poles zi, i = 1, ... ,m

(8.3-10)

are first determined appropriately, and then the ai are calculated and
the ki are determined from Eq. (8.3-9). The multivariable system case
is treated for example in [2.19J. See also section 21.2.
152 8. State Controllers

It should be noted, however, that by placing the poles only single ei-
gen oscillations are determined. As the cooperation of these eigen os-
cillations and the response to external disturbances is not considered,
design methods in which the control and manipulated variables are di-
rectly evaluated are generally preferred. The advantage of the above
method of pole assignment lies in the especially clear interpretation
of the changes of the single coefficients ai of the characteristic
equations caused by the feedback constants ki. As has been shown in
chapter 7, the characteristic equation for deadbeat aontroZ is zm = 0.
Eq. (8.3-8) shows that this occurs when ai = 0. This state deadbeat
control will be considered in section 8.5.

8.4 Modal State Control

In section 8.3 the state representation in controller canonical form


has been used for pole assignment. By changing the coefficients ki of
the feedback matrix, the coefficients ai of the characteristic equation
could be directly influenced. ki influences only ai' so that the ki and
a. for j
J
*
i are decoupled. In this section the pole assignment of a
state control system is considered for state representation in diagonal
form. Since then the ki directly influence the eigenvalues (modes) zi
this is called modal aontrol. For multivariable systems modal control
was originally described by [8.9]. A more detailed treatment can be
found in e.g. [5.17], [8.10].

A linear time invariant process with multi input and output signals

~(k+1) ~ ~(k) + B ~(k) (8.4-1)

y(k) f. ~(k) (8.4-2)

is considered, and it is assumed that the eigenvalues are distinct.


This process is now linearly transformed using Eq. (3.2-29)

~t(k) = T ~(k) (8.4-3)

into the form

~t (k+1) ~t~t (k) + !!t~(k)


(8.4-4)
y(k) f.t~t (k).
8.4 Modal State Control 153

The system

(8.4-5)

is now in diagonal form and

T B (8.4-6)

C T- 1 • (8.4-7)

The characterist ic equation of the original process model is

det [z l - ~] =0 (8.4-8)

and the transformed process equation becomes

det [z l - ~] = det [z l - ~ ~ ~- 1 ]
det ~ [z I - ~] ~- 1 = det [z I - ~] (8.4-9)

(z-z 1 ) (z-z 2 J ••• (z-zm) 0.

The diagonal elements of ~ are the eigenvalues of Eq. (8.4-4) which are
identical to the eigenvalues of Eq. (8.4-1). The transformatio n matrix
T can be determined as follows [5.17]. Eq. (8.4-5) is written in the
form

(8.4-10)

Then T- 1 is partitioned into columns

T- 1 = [_v 1 v v J (8.4-11)
-2 · · · -m

and it follows that

0 z
m

(8.4-12)

For each column we have

A v. z.v. i = 1,2, •.. ,m (8.4-13)


- -~ ~-~
154 8. State Controllers

or
[z.I -A] v. = 0. (8.4-14)
~- - -~

Eq. (8.4-14) yields m equations for the m unknown vectors ~i which have
m elements vi 1 , vi 2 ' .•. , vim· If the trivial solution ~i 0 is exclu-
ded, there is no unique solution of the equation system Eq. (8.4-14).
For each i only the direction and not the magnitude of ~i is fixed. The
magnitude can be chosen such that in ~t or ft only elements 0 and 1 ap-
pear [2.19]. The vectors ~i are called eigenvectors. For a single in-
put/single output process, the state representation in diagonal form
corresponds to the partial fraction expansion of the z-transfer func-
tion for m different eigenvalues
T -1 ct1bt1 ct2bt2 ctmbtm
G(z) = ~t [ z ! - ~t] ~t = z-z 1 + z-z 2 + ... + z-zm (8.4-15)

This equation also shows that the bti and cti cannot be uniquely de-
termined. If e.g. the bti = 1 is chosen then the cti can be calculated
by

(c.f. Eq. (3.7-3)). ~may have conjugate complex zi as well as real


zi. If conjugate complex elements must be avoided see e.g. [5.17].

Also in Eq. (8.4-4) the control vector ~(k) is transformed using

~t(k) (8.4-16)

yielding

(8.4-17)

This process is now extended by the feedback

(8.4-18)

This results in the homogeneous vector difference equation

(8.4-19)

If ~t is also diagonal

~t (8.4-20)
8.4 Modal State Control 155

the characteristic equation becomes

det (z I- (~- ~tl]

= (z- (z 1 -kt 1 )) (z- (z 2 -kt 2 JJ (8.4-21)

The eigenvalues zi of the process can now be shifted independently from


each other by suitable selection of kti' since both Eq. (8.4-17) and
Eq. (8.4-18) have diagonal form. This results in m first-order decou-
pled control loops.

The realizable control vector ~(k) is calculated from Eq. (8.4-16) and
Eq. (8.4-6), which yields:

(8.4-22)

Because the inverse of B is involved, this matrix has to be regular,


i.e. it has to be quadratic and

det B + 0.
This means that the m eigenvalues of ~ or ~ can only be influenced in-
dependently from each other if m different manipulated variables are
at the disposal. The process order and the number of the manipulated
variables therefore have to be equal.

The block-diagram structure of the modal state control is shown in Fig.


8.4.1. The state variables ~t(k) are decoupled using the transformation
of Eq. (8.4-3); this is called modal analysis. The transformed control
vector ~t(k) is generated in "separate paths" by the modal controller
~t· The realizable control vector ~(k) is then formed by back trans-
formation (modal synthesis). As regular control matrices B are rare,
the modal control described above can rarely be applied.

modal synthesis modal modal analysis


controller

Figure 8.4.1 Block diagram of a modal state control system


156 8. State Controllers

For multivariable processes of order m with p inputs, i.e. control ma-


trices ~ of order (mxp), only p eigenvalues can independently be in-
fluenced by a diagonal (pxp) controller matrix K. The remaining m-p
eigenvalues remain unchanged [8.11], [5.17].

A linear process with one input, p = 1, is now considered. Its trans-


formed equation is, from Eq. (8.4-3):

~t (k+1) ~t~t(k) + ~tu(k) (8.4-23)


T (8.4-24)
y(k) ~t~t (k)

with
(8. 4-25)

(8.4-26)

For controlling this transformed process a state feedback

(8.4-27)

is assumed. Substituting into Eq. (8.4-23) leads to

(8.4-28)

If the bti can all be selected to be one, this becomes

(z 1-k 1 ) -k2 -k
m
-k1 (z2-k2) -k
m (8.4-29)
F

-k1 -k2 (zm-km)

The single state variables are no longer decoupled and the eigenvalues
of f change compared with ~ in a coupled way so that the supposed ad-
vantage of modal control cannot be attained by assumption Eq. (8.4-27).
If, however, a single state variable xtj is fed back

u(k) = - k.xt. (k)


J J

one eigenvalue can be changed independently of the other invariant pro-


cess eigenvalues
8.5 State Controllers for Finite Settling Time (Deadbeat) 157

z1 -k. 0
J

F 0 (zj-kj) 0 (8.4-30)

0 -k. z
J m

The characteristic equation is now

det (z .! - !:] (8.4-31)

It was assumed above that the eigenvalues of A are distinct. If there


are multiple eigenvalues one must use Jordan matrices instead of dia-
gonal matrices A (8.10].

As modal state control for controller design considers only pole place-
ment, the remarks made at the end of section 8.3 are also valid here.
Note, however, that the modal control can advantageously be applied to
distributed parameter processes with several manipulated variables
(8.11], (3.10], (8.12].

8.5 State Controllers for Finite Settling Time (Deadbeat)

A controllable process of order m with one manipulated variable is con-


sidered

~(k+1) = ~ ~(k) + £ u(k). (8.5-1)

It was shown in section 3.2.2 that this process can be driven from any
initial state ~(0) to the zero state ~(N) =Q in N = m steps. The re-
quired manipulated variable can be calculated using Eq. (3.2-7). It
can also be generated by a state feedback

T (8.5-2)
u(k) = - ~ ~(k).

Then we have

~(k+1) (~ - £ ~TJ ~(k) ~ ~(k) (8.5-3)


or
~(1) R x(O)
~(2) i 2 ~(0)

~(N) ~N~(O).
158 8. State Controllers

For ~(N) = Q it follows that

(8.5-4)

The characteristic equation of the closed system is

det [.z I- B] =am+ am_ 1 z + ..• + a 1 z m-1 + zm = 0. (8.5-5)

From the Cayley-Hamilton theorem a quadratic matrix satisfies its own


characteristic equation, i.e.

0. (8.5-6)

Eq. (8.5-4) is also satisfied by

N =m
= am = 0.

The characteristic equation therefore becomes

det (z I - B] = zm = 0. (8.5-7)

A multiple pole of order m at z = 0 characterizes a control loop with


deadbeat behaviour (c.f. Eq. (7.1-16)).

If the process is given in controllable canonical form as in Eq. (8.3-5)


then the deadbeat state controller becomes with Eq. (8.3-9) and, there-
fore, with ki = -ai

u(k) =(am am_ 1 ... a 1 ] ~(k). (8 .5-8)

Using the controllable canonical form all state variables xi are multi-
plied by ai in the state controller and are fed back with opposite sign
to the input as in the state model of the process itself, (c.f. Figure
3.6.3). Therefore, for m times, zeros of the first state variable are
generated one after another and are shifted forward to the next state
variables, so that fork =mall states become zero (2.19].

The deadbeat controller DB(v) described in section 7.1 drives the pro-
cessfrom any initialstate ~(0) = 0 in m steps to a constant output of

y(m) = y(m+1) = = y(oo)

for a constant input of

u(m) = u(m+1) = U (oo) •


8.6 State Observers 159

This controller is therefore an "output-deadbeat controller".

The deadbeat controller described in this section drives the process


from any initial state ~(0) +Q to a final state ~(m) 0. Therefore,
it can be called a "state-deadbeat controller".

As both closed systems have the same characteristic equation zm = O,


they have the same behaviour for the same initial disturbances ~(0), be-
cause behaviour after an initial disturbance depends only on the charac-
teristic equation. Hence the deabeat controller DB(v) also drives the
system into the zero state ~(m) = Q after m steps for any initial state
~(0).

8.6 State Observers

As the state variables ~(k)


------
are not directly measurable for many pro-
cesses they have to be determined using measurable quantities. Now we
consider the dynamic process

~(k+1) A ~(k) + _!! ,'!:!(k)


(8.6-1)
y(k) c ~(k)

where we assume that only the input vector ,'!:!(k) and the output vector
y(k) can be measured without error and that the state variables ~(k)

are observable. A model with the same structure is connected in paral-


lel to the process model, as in Fig. 8.6.1. State corrections A~(k) are
generated by feeding back the difference between the output signals of
the model and the process

A~(k) = y(k) - y(k), (8.6-2)

weighted by a matrix~, to the state variables ~(k+1), so that after


convergence the model states follow the process states. This model is
called a Luenberger state observer (8.13], (8.14]; it is an identity
observer if a complete model of the process is applied.

The constant observer feedback matrix H must be chosen such that ~(k+1)
approaches ~(k+1) asymptotically as k~oo. Figure 8.6.1 leads to the
following observer equation
160 8. State Controllers

PROCESS
~------------------- -----1
1 u(k) x!Ol I
I B I
I I
I I
I I
L______________________ _ _ _ _j
~--- - - - - - - - ~ ~(k) - +- -l
1 ~============~ I
I ~~ I
I I
I I
I I
I I
I I
IL_ ____ _ - _ _ _j
OBSERVER

Figure 8.6.1 A dynamic process and its state observer

~ (k+1) ~ g(k) + ~ ~(k) + H 6 ~(k)


~ ~(k) + ~ ~(k) + H (y(k) - ~ ~(k)). (8.6-3)

For the state error one has

~(k+1) = ~(k+1) - g(k+1) (8.6-4)

and with Eq. (8.6-1) and Eq. (8.6-3) it follows that

~(k+1) = (~ - ~ ~] g(k). (8.6-5)

Hence, a homogeneous vector difference equation arises. The state va-


riable error depends only on the initial error ~(0) and is independent
of the input ~(k). For convergence to a zero state

lim ~(k) 0
k-+oo
8.6 State Observers 161

Eq. (8.6-5) has to be asymptotically stable. Therefore, the characteris-


tic equation

det lz I - A + H f] (z-z 1 ) (z-z 2 )

Ym + Ym-1z + (8.6-6)

may have only roots within the unit circle jz.


~
I < 1, i 1,2, ..• ,m, i.e.
only stable observer poles. The poles can be influenced by proper
choice of the matrix ~- The assignment of this feedback matrix can pro-
ceed as in the determination of the state controller matrix. As

det W

we have
T T T
det (z!- ~ + ~ f] = det lz!- ~ + f ~ ]. (8.6-7)

By comparing the characteristic equation of the state controller with


the corresponding process, Eq. (8.1-37), i t follows that one must mo-
dify the design equations of the state controllers

(8.6-8)

to include the feedback matrix H of the observer. Instead of the pro-


cess

~ (k+1) ~ ~(k) + B _!:!(k)


y(k) f ~(k)

with feedback

u(k) = - ~ ~(k)

to determine the observer poles the "transposed auxiliary process" is


introduced

~ (k+1) (8.6-9)

with feedback

£(k) = - ~T ~(k) (8.6-10)

so that the equations of the state controller design can be used. The
observer matrix H can then be determined, e.g. by:
162 8. State Controllers

a) Determination of the characteristic equation according to section 8.3

For scalar u(k) and y(k) and therefore H ~ Q


1 the observer equation is

i<k+l) = [~- Q ~T] i<k) + Q u(k) + h y(k). (8.6-11)

Here the observable canonical form is suitable, due to the controllable

ll
canonical form in section 8.3, so that

I
·ll
0
(-am - h m) m
0
.
..
(-am- 1 - hm-1 ) hm-1
~ (k+1) i<kl + Q ~(k) -t ~1
h
y(k)
0 1 (-a 1 - h 1 )

(8.6-12)
and analogously to Eq. (8.3-9) we have

i = 1,2, ..• ,m, (8. 6-13)

where yi are the coefficients of Eq. (8.6-6) and which must be given.

b) Deadbeat behaviour

Choosing

h.~ =- a.
~
(8.6-14)

the observer attains a minimal settling time and therefore has deadbeat
behaviour (c.f. section 8.5).

c) Minimization of a quadratic performance criterion

In Eq. (8.6-8), h can be chosen such that the quadratic criterion


N-1
IB = lT(N) gb l(N) + ~ [lT(k) gb l(k) + ~T(k) ~ ~(k)] (8.6-15)
k=O
is minimized as described in section 8.1. The resulting recursive solu-
tion equations are (c.f. Eq. (8.1-30} and Eq. (8.1-31)):

[R C P C]T- 1 C P AT
·-b + - -N-j+l - - -N-j+1

P
-N-j
= -b
Q A P AT - H . [R
+- -N-j+l-
C P CT]HT
-N-J -b +- -N-j+l- -N-j"

Hence the behaviour of the observer and therefore its eigen behaviour
can be chosen in several ways. In practical observer realization, the
noise that is always present in the output variable limits the attain-
able settling time. For the observers described so far, all state va-
8.7 State Controllers with Observers 163

riables ~(k) are calculated. However, some state variables can often be
directly determined, e.g. by the OUtpUt Variable z(k) 1 SO that re-
duced-Order observers can be derived (see section 8.8). Figure 8.6.1
shows that the state variables of the observer follow the process
states with no lag for changes in ~(k). However, they lag for initial
values ~{0), and disturbances affecting the output variable z(k) lead
to errors in the observer states.

8. 7 State Controllers with Observers

For the state controller described in sections 8.1 to 8.5 it was assumed
that the state variables of the process can be measured exactly and com-
pletely. However, this is not the case for most processes, so that in-
stead of the actual process state variables ~(k) (c.f. Eq. (8.1-33)),
state variables reconstructed by the observer have to be used by the
control law. Hence:

~(k) =- .!5. ~(k). (8. 7-1)

The resulting block diagram is shown in Figure 8.7.1.

,---------------
1
~(k) ~(k+1)
x(OJ
-
------, PROCESS

I
I
I
I
I
----------------------~
r-- - - - -- - - --------- ~~lkl - -:;-- l
1 llxlklrr=====J I<~========O I
I
I I
I
I
I I
I I
I ~ I
L ______________ _ _j
OBSERVER

STATE
CONTROLLER

Figure 8.7.1 A state controller with an observer for initial values ~(0)
164 8. State Controllers

8.7.1 An Observer for Initial Values

The complete state of the closed control system follows from Eq. (8.1-1),
Eq. (8.6-3) and Eq. (8.7-1)

~(k+1)1 ll~ (k)l


l ~(k+1) A - B K - H
- B .!S_
~ ~(k)
(8.7-2)

y(k) c ~(k). (8.7-3)

~(k) and g(k) influence each other. From Eq. (8.1-36) the eigenbehavio ur
of the process and the state feedback without observer is described by:

~(k+1) = [~- B .!S_] ~(k)

and from Eq. (8.6-5) the eigenbehavio ur of the observer by:

~(k+1) -i(k+1) =~(k+1) =[~-_!!~]~(k).

For comparison with these equations, Eq. (8.7-2) is transformed by

l ~(k+1)1 ollx(k+1)1 (8.7-4)


~(k+1) -_! ~(k+1)

giving:

T-1
=[i -i] T

r~(k+1)J
~(k+1)
[: -
B K

A - H ~
B
.!S_ 1 r~ (k)J
~(k)
(8. 7-5)

A*

~ (k)]
y(k) = [~ Q] [ ~{k) (8.7-6)

The eigenbehavio ur of this system depends on the characterist ic equa-


tion

det [z I - ~*] det (z I -A+ B .!S_] det [z I -A+ H ~] = 0.


(8.7-7)

Therefore, the poles of the control system with state controller and
observer are the poles of the control system with no observer together
with the poles of the observer. The poles of the control and the poles
of the observer can be determined independentl y, as they do not influ-
8.7 State Controllers with Observers 165

ence each other. This is the result of the so-called separation theorem.
However it should be noted that, of course, the time behaviour of ~(k)

is influenced by the observer poles as can be seen from Eq. (8.7-5).


An observer introduces additional poles and therefore, additional lags
into the control system. If an identity observer is used, Eq. (8.7-7)
shows that the control loop with a process of order m has 2 m poles
and consequently is of order 2 m.

The lagging influence of an observer can be clearly shown if a state


controller for deadbeat behaviour, section 8.5, is combined with an
observer with deadbeat behaviour, Eq. (8.6-14). Here the characteristic
equation becomes:

m m 2m
det [ z ! - ~*] z z z (8.7-8)

Hence the steady state after a non-zero initial value is reached only
after 2 m sampling steps, and not in m steps as with the deadbeat con-
troller given by section 8.5. In this case, the simple deadbeat con-
troller discussed in chapter 6 is faster than the state controller with
observer. Section 8.7.2 and section 8.8 show how the observer lags can
be partially overcome.

8.7.2 An Observer for External Disturbances

Section 8.2 describes how constant external disturbances can be gene-


rated from initial states using extended state models. In order to con-
trol constant disturbances the manipulated variable vector ~(k) given
by Eq. (8.2-12) has to be generated by the state vector ~(k) through a
state controller, and by the state vectors y(k) - 1(k) using a propor-
tional feedforward control. However, as these state variables are not
measurable in general, they have to be determined by an observer. Here
it will be assumed, as in the preceding section, that the input variab-
les ~(k) and the output variables ~(k) can be measured without errors.
The overall system described by Eq. (8.2-14) and Eq. (8.2-15) with the
abbreviations (8.2-13) uses an extended state vector ~~(k) containing
all state variables of the process and disturbance models. The obser-
ver for this state vector is

i*(k+1) = ~*i*(k) + ~*~(k) + ~*[~(k) - f*~*(k) J. (8.7-9)

For processes with m state variables and r outputs, the observer feed-
back matrix H* has dimension (m+r)xr, and can be determined by the me-
thods given in section 8.6.
166 8. State Controllers

Using the observer, the controller equation becomes, from Eq. (8.2-16),

~(k) = ~*i*(k). (8.7-10)

Fig. 8.7.2 shows the resulting block diagram for the case of constant
changes of the command variable. Fig. 8.7.3 shows the corresponding
scheme with the abbreviations used in Eq. (8.2-13). For changes of dis-
turbances g(k) or reference variables ~(k) the unknown state variables
2*(k) are first determined by the observer such that the assumed dis-
turbance or reference variable model generates exactly g(k) or ~(k) at
the output.

Fig. 8.7.2 indicates that for the manipulated variable ~1 (k) results

~1 (k) = i<k)
(8.7-11)
i<k+1) = i<k) + ~(rxr)~~(k).

Here, H(rxr) is the corresponding part of H*. For a single output va-
riable (r=1) it follows that:
-1
z
u 1 (z) ------
1 hm+ 1 ~e(z). (8.7-12)
1-z

Therefore the observed state variable y(k) leads to a (m+1)th state


feedback and to the feedforward control part u 1 (k) which acts as summa-
tion (integration) with respect to ~e(k). By introducing the state va-
riable y(k) (or i;;(k)) an "integral action term" is generated in the ob-
server so that offsets disappear. The integration constant is the same
as the feedback constant hm+ 1 of the observer. This constant is deter-
mined automatically by the observer design and is therefore appropriate
for the state control system design method.

Now the eigenbehaviour of the hypothetical, extended process Eq. (8.2-14)


with extended observer Eq. (8.7-9) and state controller Eq. (8.7-10) is
considered. Analogously to Eq. (8.7-5) and Eq. (8.7-6) it follows that

(8.7-13)

y(k) = [C* QJ [~:~::].


The characteristic equation is therefore

det [z I - A* - ~*~*] det [z I - A* + ~*f*] o. (8.7-14)


8.7 State Controller s with Observers 167

REFERENCE VARIABLE MODEL


---~
r-------------------
1 ytol ~(ol 1

:Y(k+1) Ylkl V(k+1) ~(k) :


- I

I
I
I A I
I -
L_ _ _ _ _ _ _ - - - - - - - - - - - - - - - _ _ j
PROZESS
~---------------~
1 ~lol 1 w(~
I
1 ~(k) y(k) +

I
I e!kl
I
L - - - - - - - - ________ _j

,----
I !:! lk l
--------~--------,

+ K € Ik l I
I + - - I
I
L------- -----------~
OPTIMAL STATE CONTROLLER
FOR CONSTANT REFERENCE VARIABLE CHANGES

Figure 8.7.2 State controller with observer (drawn for constant refe-
rence variable changes). Compare with Fig. 8.2.1.
168 8. State Controllers

,-------------- -----~

1 PROCESS
I n(k) I w(k)
I - I
~(k) + I + ~(k)

g_ (k)

OBSERVER
L _ - - - - - - - - - - - - - - - - _ _ _ _ _ _j

Figure 8.7.3 State controller with observer for constant reference va-
riables ~(k) and disturbances ~(k)
8.7 State Controllers with Observers 169

The controller poles and the observer poles appear separately in this
hypothetical system. The real behaviour is obtained by coupling the
process of Eq. (8.6-1) with the extended observer Eq. (8.7-9), and the
controller of Eq. (8.7-10) and Eq. (8.2-4)

X (k+1)
f ~*(k+1l
1 [A
= -B*f
B K*
(~*+ B*K*- B*f*l
J [ x (k) J + r -0
~* (k) B*
l (w(k)- n(k)).
- -
(8.7-15)
Hence, after z-transformation

[z I - ~J ~(z) = ~ ~*~~(z) (8.7-16)

[z I - (~*+ ~*~*- B*f*) J~*(z) - B*f*~(z) + g*(~(z)-g(z))


(8.7-17)
and after elimination of ~*(z) we have

[ z _! - A + ~ !*[z I - (~*+ ~*~* - B*f*l f 1g*f* J ~(z)


= ~ ~*[z I- (~*+ ~*~*- B*f*)- 1 g*(~(z) - g(z)). (8.7-18)

With y(z) =C ~(z) the reference or the disturbance behaviour can be


calculated. In both cases the poles are given by

det [z I - A+~ ~*[z _!- (~*+ ~*~*- B*f*l ]- 1 B*f*] = 0. (8.7-19)

Controller and observer poles no longer appear separately. The dynamics


of the observer are part of the reference or the disturbance behaviour
of the overall system.

It should be mentioned that a state controller with observer can be de-


signed so that the dynamics of the observer do not influence the refe-
rence behaviour. Here the reference variable ~(k) is introduced only
after the observer and the state controller using a feedforward control
of ~(k) [2.19]. However, we can no longer make a direct comparison of
the control variables and the reference variables, the parameters of
the feedforward control element depend on the process parameters, and
offsets can arise if the process parameters are not exactly known or if
they are changing. The design described above does not have these dis-
advantages as the control deviations ~(k) are formed before the obser-
ver and the state controller using a direct comparison of the reference
and control variables. Therefore, no offset can occur for step changes
in the external disturbances because of the poles at z = 1. The resul-
ting delays of the observer can be partially overcome as shown in the
following.
170 8. State Controllers

In Figure 8.7.4 the behaviour of the controlled and the manipulated va-
riable for the process III and the state controller designed for exter-
nal disturbances is shown for a step change in the disturbance n(k). No
offset in the controlled variable arises. The manipulated variable, how-
ever, is changed only after one sampling interval. This delay occurs as
all changes 6e(k) of the control deviation have to pass one element z- 1
in the observer before a change of the manipulated variable can happen
(see Figure 8.7.2).

.l
]'
>ist

0
0
~o~oaoao~
-r------~0~o~b~0

10 20 k
u

-2.0

Figure 8.7.4 Behaviour of the controlled variable and the manipulated


variable for process III with a state controller for ex-
ternal disturbances. Step change of disturbance n(k) .[8.5).

Therefore the observer, unlike the parameter optimized and compensating


controller, causes the manipulated variable to be undesirably delayed.
The initial delay, however, can be avoided. Then, for the observer of
Fig. 8.7.2 all state variables are reconstructed, although one state
variable can be measured directly if it is identical to the output va-
riable y(k). This is the case for a state representation in observer
canonical form (see Fig. 3.6.3). Instead of the delayed part of the
control

k x (kl
mm
8.7 State Controllers with Observers 171

one uses the undelayed signal

u~(k) = km y(k) (8.7-20)

(see Figures 8.7.5 and 8.7.6). An undelayed output variable y(k) can
also be included by using a reduced-order observer (see section 8.8).

Example 8 .. 7.1

As an example, the design of a state controller described in section


8.2 with an observer for external disturbances described in section
8.7.2 is now considered for process III (c.f. section 5.4.1). The state
representation of test process III is chosen in observable canonical
form. As the process has a deadtime d = 1, it follows from Eq. (3.2-40)
and Eq. (3.2-41) that

x, (k+1) 0 0 -a3 b3 x, (k) 0

x 2 (k+1) 0 -a2 b2 x 2 (k) 0


+ u(k)
x 3 (k+1) 0 -a, b1 x 3 (k) 0

x 4 (k+1) 0 0 0 0 x 4 (k)

or
~d(k+1) = ~d ~(k) + ~ u(k)

y(k) = [0 0 1 OJ x 1 (k)

x 2 (k)
x 3 (k)

x 4 (k)

or
T
y(k) = £d ~d (k).

A block diagram is shown in Fig. 8.7.5. The observer for step changes
of external variables w(k) or n(k) has its parameters given by Eq.
(8.7-9) and Eq. (8.2-13)

0 0 -a3 b3 0 0
1 0 -a2 b2 0 0
A* 0 -a, b1 0 b* 0
0 0 0 0 1
---------·--
I
0 0 0 0 I
1 0
I
c*T [0 0 0 1
OJ
1

h*T [h* h* h* h:lh;J.


1 2 3 I
172 8. State Controllers

u(k)
v(k)
,--------------,
I _, x~,(k)
PROCESS III

I
.---------t~--'--1~ z I
1 I n(k)
I I
I y(k)
I
I
I
I I
I I
L - - - - - - - - - - - - ___ _j

~----- ----------------- --l


I Ay(k) +

I
I
I
I
I
I
I

I
I
I
L ____ _ _ _ _ _ _ _j
OBSERVER

,- I
1 I
I I
I I
u(k) I I
I
L---------------------~
STATE CONTROLLER

Figure 8.7.5 Block diagram of process III with state controller and
observer for external disturbances, with bypass of the
initial delay.
8.7 State Controllers with Observers 173

The calculation of the feedback constants ~* of the observer is perform-


ed by minimizing the quadratic performance criterion of Eq. (8.6-15)
for the transposed observer, Eq. (8.6-9) and Eq. (8.6-10), by using the
recursive solution of the matrix Riccati equation (8.6-16). If the
weighting coefficients are chosen to be

0 0 0 0
0 0 0 0
Q 0 0 0 0 rB 5
-B
0 0 0 1 0
0 0 0 0 25

then
h*T = [0.061 -0.418 0.984 1. 217 1.217]

results. When designing the state controller of Eq. (8.2-17) t (8.2-11)


or (8.1-33) via a recursive solution of the matrix Riccati equation,
Eq. (8.1-30) and Eq. (8.1-31), the state variable weighting 9 of Eq.
(8.9-4) is chosen such that only the controlled variable y(k) is given
unit weight, i.e.

For a weighting of r 0.043 on the manipulated variable the state con-


troller becomes

~*T = [4.828 5.029 4.475 0.532 1.000)

and for r = 0.18

k*T = [2.526 2.445 2.097 0.263 1 .000).

Fig. 8.7.6 shows the time responses of the resulting control and the
manipulated variables.

The algorithms required for one sample of this process of total order
m+d = 4 are:

- observer output error:

lle(k-1) = e(k-1)- ~ 3 (k-1)


174 8. State Controllers

Yist Yist
1,0 0-------------- 1,0 0--------------

0+--"-<0,..,..,.~..,..,____,_
10 k

0+------+::--- 0~---~----
10 k 10 k
u

-2,0

-3.0 -3.0

r= 0.043 r= 0.18
-4,0 -4,0

sc ( 1) SC(2)

Figure 8.7.6 Time response of the controlled variable and the manipula-
ted variable for process III with a state controller for
external disturbances and a modified observer with 'bypass'
of the initial delay. Step change disturbance n(k). [8.5].

- state estimates:

i1 (k} - a
3 3
x x
(k-1) + b 3 4 <k-1 1 + h 1*lle(k-1)

x2 <k> x1 (k-1) - ax2 3 x


(k-1) + b 2 4 <k-1 1 + h 2 *lie (k-1)

x3 (k) x x
2 <k-1) - a 1i 3 (k-1) + b 1 4 <k-1) + h 3*lle(k-1)
x4 (k) xs(k-1) + u(k-1) + h 4*lie (k-1)

xs(k) xs (k-1) + hs*lle(k-1)


8.8 A State Observer of Reduced Order 175

- manipulated variable:

- without 'bypass' of the observer delay z- 1 for y(k)


u(k) = k1*x1(k) + k2*x2(k) + k3*x3(k) + k4*x4(k) + k5*x5(k)
- with 'bypass' of the observer delay z- 1 for y(k)
u(k) k1*x1(k) + k2*x2(k) + k3*y(k) + k4*~4(k) + k5*x5(k).

The following calculations are required for each sample


15 multiplications
16 summations.

Here ks* = 1 is taken into account. For realization in a digital com-


puter 8 shif~ operations of the variables must be added.
0

8.8 A State Observer of Reduced Order


In the identity observer of section 8.6 all state variables ~(k) are
reconstructed. However, if some state variables are directly measurable
they need not be calculated. For example, in an mth order process with
one input and one output one state variable can directly be calculated
from the measurable output y(k), so that only (m-1) state variables
have to be determined by the observer. An observer whose order is lower
than the order of the process model is called an observer of reduaed
order (see [8.13], [8.15)). The following derives a reduced observer
using [8.15] and [2.19]. The process is assumed to be

~{k+1) ~ ~(k) + ~ !!(k) (8.8-1)


y(k) f ~(k). (8.8-2)

The dimensions of the vectors are

~(k) (mx1)
!! (k) (px1)
y(k) (rx1).

For r independent measurable output variables y(k), r state variables


can be calculated directly. Therefore, the state vector ~(k) is parti-
tioned into a directly calculable part ~b(k) and an observable part
~a (k):

r::~::~~J = [!~~ :~:J [::~:~] +[!;I !!(k)


(8.8-3)
176 8. State Controllers

y(k) = [f1 f2J [~a (k)]


(8.8-4)
~b (k) •

The directly calculable state vector ~b(k) is replaced by y(k). Then,


a state vector v is obtained by the linear transformation:

(8.8-5)

It follows from Eq. (8.8-4) that ~ 21 = £1 and ~ 22 = £2 • As ~a(k) re-


mains unchanged, ~ 11 = !, and is independent of ~b(k), ~ 12 0. There-
fore, the transformation matrix is

~ ~1 = [ :J
and the transformed process is
(8.8-6)

~(k+1) = ~t~(k) + ~t~(k) (8.8-7)

y(k) = ft~(k). (8. 8-8)

Hence it follows from Eq. (3.2-32) that

~t T A T-1

!l.t T B
C T- 1 [Q !]·
ft

If Eq. (8.8-7) is partitioned as in Eq. (8.8-3) we obtain

~a (k+1) ~t11 ~a(k) + ~t12 y(k) + !l.t1 ~(k) (8.8-10)


y(k+1) ~t21 ~a(k) + ~t22 y(k) + ~t2 ~(k) · (8.8-11)

In Eq. (8.8-10) an identity observer of order (m-r) is now used

~a(k+ 1 ) = ~t11 ~a(k) + ~t12 y(k) + ~t1 ~(k) + ~ ~t(k) (8.8-12)

(c.f. Eq. (8.6-3)). With the identity observer of complete order m the
output error given by Eq. (8.6-2) is used for the error between the
observer and the process. However, as the reduced order observer does
not explicitly calculate y(k), and as y(k) contains no information con-
cerning ~a{k), the observer error ~t(k) must be redefined. Here, Eq.
(8.8-11) can be used because it yields an equation error ~t{k) if ~a(k)
is not yet adapted to the measurable variables y(k), y(k+1) and ~(k)
8.8 A State Observer of Reduced Order 177

~t (k) y(k+ 1 ) - ~t21 ~a(k) - ~t22 y(k) - ~t2 ~(k) • (8. 8-13)

Q<k+1)

From Eq. (8 .8-12) and Eq. (8.8-13) the observer becomes

x
-a <k+1 l
= ~t11 i_a(k) + ~t12 y(k) + ~t1 ~(k)

+ !!ly (k+1) - ~t22 y(k) - ~t2 ~(k) x


- ~t21 -a <k>J.(8.8-14l

Its block diagram is shown in Fig. 8.8.1. In Eq. (8.8-14), y(k+1) is


unknown at time k. Fig. 8.8.1 shows that, with respect to the output
x (k), nothing changes if
-a
the signal path

x (z).. = -H z- 1 y(z) z
-a

is replaced by

~a(z) =!! y(z).

However, we must introduce new observer state variables

.Q_(k) B.a (k) - !! y(k). (8.8-15)

From Fig. 8.8.2 the observer of reduced order is

~(k+ 1 ) = ~t11 ~(k) + [~t12 + ~t11 !!] y(k) + ~t1 ~(k)


+ !![-~t21 !! y(k) - ~t22 y(k) - ~t2 ~(k) - ~t21 ~(k) J
or
.Q. (k+1) [~t11 -!! ~t21J ~(k)
+ [~t12 -!! ~t22 + ~t11 H -!! ~t21 !!J y(k)
+ [~t1-!! ~t2J ~(k). (8.8-16)

The state variables to be observed are obtained from

(8.8-17)

and finally the overall state vector is

v{k) - ~a(k)] -_ [.!. !!] [.Q_(k)] • {8. 8-18)


.Q.!. y{k)
A _ [

- y{k)

Considering the state variable error of the reduced observer

X
-a
{k+1) =X {k+1) - X {k+1)
-a -a
{8.8-19)
178 8. State Controllers

u (k)

Figure 8.8.1 Block diagram of a reduced-order observer given by


Eq. (8. 8-14)

y(k)
-
1 At12
-
kI '===
,rr
~~t2l== IAt 221
+ -
K
I
-H kI
~
II
+
-lr -

1 ~t 211
u(k) + f-!_(k+1) jl!k)
+
I 8 I I Iz-1 I r H l
~ ,;>j_t1 r -v
1- I """1-
+ + +

IA t11
1-
r + 8a!k)
~

Figure 8.8.2 Modified block diagram of a reduced-order observer given


by Eq. ( 8 • 8-1 5)
8.9 Choice of Weighting Matrices and Sample Time 179

it follows from Eq. (8.8-10) to (8.8-13) that

(8.8-20)

Compared with the identity observer, in this homogeneous error diffe-


rence equation one takes the transformed part system matrix ~t 11 which
belongs to the state vector ~a' rather than the system matrix~, and
instead of the output matrix f one takes the transformed part system
matrix ~t 21 , yielding the relationship between ~a(k) and y(k+1) given
by Eq. (8.8-11).

The characteristic equation of the reduced observer is

det [z .! - ~t 11 + !! ~t 21 ] = (z-z 1 ) (z-z 2 ) •.• (z-zm-r) 0.

(8.8-21)

The observer poles can be determined using section 8.6.

The advantages of a reduced-order observer compared with the identity


observer of section 8.6 are in its lower order (reduced by the number
r of the directly measurable output variables) and in the use of cur-
rent output variables y(k) with no delay, hence avoiding the delays des-
cribed in section 8.7. These advantages, however, are offset by the in-
creased computations required. Moreover, an additional equation, Eq.
(8.8-17), arises in the calculation of the state variables to be ob-
served. In digital computer realization, a reduced-order observer is
usually preferred if relatively many state variables are directly mea-
surable. In all other cases, e.g. for processes with only one measura-
ble input and output, the identity observer modified according to sec-
tion 8.7 is better as the design is simpler and more transparent and
the potential saving of operational calculations is comparatively small.

8.9 Choice of Weighting Matrices and Sample Time

If the state controller is not designed for finite settling time (dead-
beat behaviour), then comparatively many free parameters have to be
suitably. chosen in designing state controllers, compared with other
structure optimal controllers. For a design with no performance crite-
rion, either the coefficients of the characteristic equation (section
8.3) or the eigenvalues (section 8.4) have to be chosen. The quadratic
180 8. State Controllers

optimal state controller presumes the choice of weighting matrices 2


for the state variables and R for the manipulated variables. The free
parameters in the observer design have also to be chosen; these para-
meters again are either coefficients of the characteristic equation or
weighting matrices 2b and ~b of a quadratic performance criterion (sec-
tion 8.6). In addition, the parameters of the assumed external distur-
bance (section 8. 2) and the sampling time. also influence the design, as
for other controllers. These relatively many free parameters for the
design of state controllers mean on the one hand that there is an espe-
cially great ability for the adaptation to the process to be controlled;
on the other hand there exists always a certain arbitraryness in the
selection if there are too many parameters. Therefore, the design of
state controllers is rarely performed in one step, but rather is per-
formed iteratively, using evaluations of the resulting control behavi-
our as described in chapter 4. By specialization, however, the number
of free design parameters can be reduced.

8.9.1 Weighting Matrices for State Controllers and Observers

In state controller design using the performance criterion of Eq.


(8.1-2), the manipulated variables can be weighted separately, so that
R can be taken to be the diaginal matrix:

r1 0 0
0 r2
R (8.9-1)

0 r
p

To give a positive definite ~' the elements ri must be positive for all
i = 1,2, •.• ,p. In special cases where~ can be positive semi-definite
certain ri can be zero (see section 8.1).

Individual state variables can also be weighted in general by a diago-


nal matrix 2·

~has
J.
to be positive definite, so that qi > 0 (see section 8.1).
(8.9-2)
8.9 Choice of Weighting Matrices and Sample Time 181

If only the output variables y(k) have to be weighted by a diagonal ma-


trix ~ (as for parameter-optimized controllers using the quadratic per-

formance criterion Eq.(5.2-6))with Eq. (8.1-2) it results:


T T
~ (k) g ~(k) = y (k) ~ y(k)

and as with Eq. ( 8. 1-3)

T
y (k) ~y(k)

it follows that

(8.9-3)

Hence, for a single input/single output process with L

R = r
T
(8.9-4)
~ c

Note that in state controller design, the manipulated variable u 2 (k) is


weighted by r, unlike Eq. (5.2-6) where 6u 2 (k) = [u(k)-u(oo) ] 2 is weight-
ed. With proportional acting processes, however, a state controller has
u(oo) = 0 giving u{k) = 6u(k) (because of the assumed initial value dis-
turbance ~(0))), so that there is no difference in principle.

In observer design using the quadratic performance criterion Eq. (8.6-15)


for the transposed system Eq. (8.6-9) and Eq. (8.6-10), the weighting
matrices gb and Bb can be assumed as for state controllers. Generally,
however, one would design a faster observer in comparison with the pro-
cess, i.e. the elements of Bb are chosen smaller than those of gb.

8.9.2 Choice of the Sample Time

In choosing an appropriate sample time T0 there seems to exist a further


possibility compared with other controllers by taking into account the
analytical relations between the optimal control performance and the
process parameters. From Eq. (8.1-32) we have

where ~O = ~ is a stationary solution of the matrix Riccati equation


Eq. (8.1-31). An analytical solution giving the cost function in terms
of the sample time, however, is complicated [8.16]. It can be shown
(for small sample times T0 ) that the costs Iopt(T 0 ) increase monotoni-
cally with increasing T0 if the process is controllable. This is the
182 8. State Controllers

case for processes with real poles, but not for processes with conju-
gate complex poles if the sampling is close to the half period of the
natural frequencies [8.16). Generally, the smallest costs are attained
for T0 = 0, i.e. for the continuous state controller and for very small
sample times the control performance differs only little from the con-
tinuous case. Only for larger sample times does the control performance
deteriorate significantly.

Current experience is that the sample time for state controllers can be
chosen using the rules given in sections 5.5 and 7.3.

With state controllers, as with deadbeat controllers, there is a rela-


tionship between the required manipulated variable changes and the sam-
ple time, if a disturbance has to be controlled completely in a definite
time period. If the restricted range of the manipulated variable is gi-
ven, the required sample time can be determined [2.19).
9. Controllers for Processes with Large
Deadtime

The controller designs of the preceding chapters automatically took the


process dead time into account. This was straightforward because dead
time can be simply included in process models using discrete-time sig-
nals - one of their advantages compared with models with continuous
signals. Therefore, controllers for processes with dead times can be
designed directly using the methods previously considered.

Processes with small dead time compared with other process dynamics
have already been discussed in some examples. A small dead time can re-
place several small time constants or can describe a real transport de-
lay. If, however, the dead time is large compared with other process
dynamics some particular cases arise which are considered in this chap-
ter. Large dead times are exclusively pure transport delays. Therefore
one has to distinguish between pure deadtime processes and those which
have additional dynamics.

9.1 Models for Processes with Deadtime

A pure dead time of duration Tt dT0 can be described by the trans-


fer function

G (z) = ~ = bz-d d 1 '2' •.. (9. 1-1)


P u(z)
or the difference equation

y(k) = bu(k-d). (9 .1-2)

Here Tt is an integer multiple of the sample time T0 .

Proaesses with additional proaess dynamias have the transfer function


-1 -m
b 1z + ..• +bmz -d
~ (9.1-3)
u(z) -1 -m z
1+a 1 z + ... +amz
184 9. 1 Models f.or Processes with Deadtime

(or the corresponding difference equation- see Eq. (3.4-13)). Eq. (9.1-1)
follows either by the replacement of d by d' = d-1, b 1 = b and b 2 .' .•. ,
bm 0 and a 1 , ••. ,am = 0
-1 -d I -d
b 1z z = bz (9 .1-4)
-1
or simply by taking d = 0 in z-d m d in B(z ) : bm b and
b 1 , ••. ,bm_ 1 = 0 and a 1 , .•• ,am = 0
-m -d
GP(z) = bmz = bz • (9 .1-5)

When considering the state representation of single input/single output


processes there are several ways to add a dead time (see section 3.2.3).

- Dead time at the input


~(k+1) ~ ~(k) + ~ u(k-d)
(9 .1-6)
y(k) ~T x(k)

Dead time included in the system matrix A (c.f. Eq. (3.6-41))


~(k+1) ~ ~(k) + b u(k)
T (9.1-7)
y(k) ~ ~(k)

- Dead time at the output


~(k+1) ~ ~(k) + ~ u(k)
(9 .1-8)
y(k) ~T ~(k-d) or y(k+d) = ~T ~(k).

In all cases A can have different canonical forms (see section 3.6).
For Eq. (9.1-6) and Eq. (9.1-8) ~has dimension mxm, but for Eq. (9.1-7)
the dimension is (m+d)x(m+d). If the dead time is included in the sys-
tem matrix ~, d more state variables must be taken into account. Though
the input/output behaviour of all processes is t~e same, for state con-
troller design the various cases must be distinguished as they lead to
different controllers. Inclusion of the dead time at the input or at the
output depends on the technological structure of the process and in ge-
neral can be easily determined. For a pure dead time, including the
dead time within the system matrix in controllable canonical form one
obtains Eq. (3.6-36) with a state vector ~(k) of dimension d. In con-
trast, for Eq. (9.1-6) and Eq. (9.1-8) ~=a= 0 results and d has to
be replaced by d' = d-1. In this case one can no longer use a state
representation.

Note that as well as dead time at the input or the output dead time
can also arise between the state variables. In the continuous case,
9.2 Deterministic Controllers for Deadtime Processes 185

vector difference differential equations of the form

~(t) ~1 ~(t) + ~2 ~(t-TtA) + ~ ~(t)


y(t) £ ~(t)

result. For discrete-time signals, however, these dead time systems can
be reduced to Eq. (9.1-7) by extending the state vector and the system
matrix.

9.2 Deterministic Controllers for Deadtime Processes

There is a substantial literature discussing the design of controllers


for dead time processes with continuous signals; see e.g. [9.1] to [9.7]
and [5.14]. As well as parameter optimized controllers with proportio-
nal and integral behaviour, the predictor controller proposed by Reswick
[9.1] has been studied in detail. In this a model of the dead time pro-
cess is used in the feedback of a controller, and then a smallest sett-
ling time can be obtained. The disadvantages of this predictor control-
ler and its modifications (see [5.14]) have been its relatively high
technological cost and high sensitivity to the difference between the
assumed and the real dead time. The general conclusion recommended the
use of proportional integral controllers which approximate the behavi-
our of a predictor controller. Digital computers have overcome the dis-
advantage of high cost. Therefore the control of processes with (large)
dead time but discrete-time signals is again discussed below.

9.2.1 Processes with Large Deadtime and Additional Dynamics

The parameter-optimized controllers of chapter 5 in the form of the


structure adapted deadbeat controller of chapter 7, and the state con-
troller of chapter 8, can be used for controlling processes with large
dead time. The structure of the parameter-optimized controllers iPC can
be the same, except that the controllerparameters may change conside-
rably. The deadbeat controllers DB(v) and DB(v+1) have already been
derived for processes with dead time. In the case of state controllers
the addition of the dead time in the state model plays a role. Some
additions are made in this section to the controller design considered
earlier.
186 9. Controllers for Processes with Large Deadtime

Predictor Controller (PREC}


First the predictor controller [9.1] which was specially designed for
dead-time processes is considered for discrete-time signals. In the
original derivation a transfer element GER(z} was placed in parallel
with the process GP(z} such that the overall transfer function is equal
to the process gain KP. The parallel transfer element GER(z} is changed
to an internal feedback of the controller GR(z} [5.14]. When GR + oo Eq.
(6-4} produces a cancellation controller with closed loop transfer func-
tion

-d
z (9.2-1}

The closed loop transfer function is then equal to the process transfer
function with unity gain - a reasonable requirement for dead time pro-
cesses. The prediator aontro~~er then becomes, using Eq. (6-4} and Eq.
(9.2-1}:
A(z - 1 }
-1 -1 -d
KPA(z }-B(z }z

(9.2-2}

The characteristic equation follows from Eq. (9.2-1}:

d m -1 d m m-1 .2-3}
z_ z A(z } = z [z +a 1 z + ••• +am_ 1 z+am] = 0. (9

The characteristic equations of the process and the closed loop are
identical. Therefore, the predictor controller can only be applied to
asymptotically stable processes. In order to decrease the large sensi-
tivity of the predictor controller of Reswick (see section 9.2.2} to
changes in the dead time, Smith introduced a modification [9.2], [9.3),
[9.4), [5.14) so that the closed loop behaviour

Gw ( z) = K~ GP ( z) • G' ( z) (9.2-4)

has an additional delay G' (z). The modified prediator aontro~~er is


then
G' (z)
(9 .2-5)

A first-order lag can be chosen for G' (z).


9.2 Deterministic Controllers for Deadtime Processes 187

State Controller (SC)


If the dead time d cannot be included in the system matrix ~ (see Eq.
(9.1-7)), but can only represent delayed inputs u(k-d) or delayed state
variables ~(k-d), as in Eq. (9.1-6) and Eq. (9.1-8), the advantage of
state controllers in the feedback of all state variables cannot be a-
chieved. In designing state controllers for processes with dead time
the dead time should be included in the system matrix ~ if the state
variables can be measured directly. However, for large dead times the
order (m+d)x(m+d) of the matrix~ also becomes large. An advantage is
that the design of the state controller does not change. As can be seen
from Eq. (3.6-39) and Eq. (3.6-40) only ~, Q and £T change compared with
processes with time lags.

For structure optimized input/output controllers for processes with


dead time the order of the numerator of the transfer function depends
only on the process order m, and is equal to m for the controllers DB(v)
and PREC and (m-1) for the minimum variance controller MV3-d (see chap-
ter 14). The dead time influences only the numerator order and is equal
to (m+d) or (m+d-1) (see Table 9. 2. 1) :

Table 9.2.1 Non-zero parameters of deadbeat controllers, predictor con-


trollers and m~n~mum variance controllers (chapter 1 4) for
processes of order m ~ 1 and dead time d.

qo q1 qm-1 ~ Po p1 pl+d Pm+d-1 Pm+d

DB(V) X X X X X X X X

PREC X X X X X X X X X

MV3-d X X X X X X X

9.2.2 Pure Deadtime Processes

Input/output controller (deadbeat-, predictor- and PI-controller)


The structure optimal input/output aontroZZer for pure dead-time pro-
cesses
-d
GP (z) = ~
u(z) = b z (9.2-6)

is given by the corresponding controller equations for time lag pro-


cesses of order m and dead time d using Eq. (9 .1-4) or Eq. (9 .1-5).
Both cancellation controllers - the deadbeat aontroZZer DB(v) and the
prediator aontroZZer PREC - give the same transfer function
188 9. Controllers for Processes with Large Deadtime

G (z) = 1 ~d (9.2-7)
R b 1-z
or the difference equation

u(k) = u(k-d) + q 0 e(k) (9.2-8)

with q 0 = 1/b. The current manipulated variable u(k) is calculated from


the manipulated variable u(k-d), delayed by the dead time, and the pre-
sent control error e(k). The transient response of the cancellation
controller given by Eq. (9.2-7) is shown in Fig. 9.2.1. As already re-
marked at the beginning of this chapter, the dead-time controller can
be approximated by a PI-controller as shown by Fig. 9.2.1.

0 3 6 9 12 15 k

Figure 9.2.1 Transient response of the dead-time cancellation control-


ler u(k) = u(k-d) + q 0 e(k) ford = 3.
Dashed line: approximation by a PI-controller from Eq.
(9.2-9).

Then one obtains the control algorithm

u(k) = u(k-1) + qoe(k) + q;e(k-1)

with parameters
qo 1 (9.2-9)
q' 2 = 2b
0
1 1 1 (d-2)
q'
1 qO[d - 2J - 2b - d -

or the characteristic values, as in section 5.2.1

gain factor

d2 integration factor.
9.2 Deterministic Controllers for Deadtime Processes 189

Now we consider the characteristic equation of the resulting feedback


loop for both exact and inexact dead times in order to discuss the sen-
sitivity to the dead time. For the cancellation controller and exactly
chosen dead time we have

0. (9.2-10)

The characteristic equation of the closed loop is the same as that of


the process. The sensitivity of the dead time controller for inexactly
assumed dead time can be seen from the characteristic equation. If the
controller is designed for a dead time d and is applied to a process
with dead time d+1 (i.e. the assumed dead time is too small) the cha-
racteristic equation becomes

zd+ 1 - z + 1 = 0. (9.2-11)

For d~1 the roots are on or outside the unit circle, giving rise to in-
stability (c. f. Table 9.2.2). If the process has a dead time d-1 we have

zd + z - 1 = 0. (9.2-12)

In this case instability occurs for d~2 (Table 9.2.2). Table 9.2.3
shows the largest magnitudes of the unstable roots ford= 1,2,5,10
and 20; for very large dead time the feedback loop with the cancella-
tion controller is so sensitive to changes of the process dead time by
one sampling unit that instability is induced. Therefore in these cases
this controller can only be applied if the dead time is known exactly.
If a PI-controller (2PC-2) is used the characteristic equation becomes

(9.2-13)

with parameters given by Eq. (9.2-9)

d+1 d d-2
2z - 2z + z - ~ = 0. (9.2-14)

If the process dead time changes from d to d+1, then

2 zd+2 _ 2 zd+1 _ d~2 = O. (9.2-15)

For a change from d to d-1 we have

d d-1 d-2
2z - 2z + z - ~ = 0. (9.2-16)

Tables 9.2.4 and 9.2.5 show the magnitudes of the resulting roots. If
the case d=1 is excluded, no instability occurs. Therefore the feedback
loop with the PI-controller is less sensitive to changes in process
Table 9.2.2 Magnitudes lz. I of the roots of the characteristic equations (9.2-10), (9.2-11), (9.2-12) \D
for dead-time~processes with the controller GR(z) = 1/(1-z-d) 0

Process d = 1 d = 2 d = 5
z-d 0 0 0 0 0 0 0 0
z-(d+1) 1.0 1.0 1.325 0.869 0.869 1.126 1.126 1.050 1.050 0.846
z-(d-1) 0.5 1.62 0.618 1.000 0.755
hill. ~

\D

(')
0
::s
r1'
11
0
1-'
1-'
Table 9.2.4 Magnitudes lzil of the roots of the characteristic equations (9.2-14), (9.2-15), (9.2-16) CD
11
for dead-time processes with the PI-controller of Eq. (9.2-9) Ill
HI
0
11
Process d = 1 d = 2 d = 5
'tl
11
z-d 0.707 o. 707 0.07 0.886 0.886 0.829 0.829 0.701 0
0
- CD
z-(d+1) 0.856 0.856 Ill
1.065 1 .065 0.441 0.941 0.941 0.565 0.923 0.923 0.858 0.858 Ill
CD
z-(d-1) Ill
0.333 0.500 0 0.796 0.796 0.789 o. 789 0.760
- ....r1'~
:Y
t<
PI
11
o.Q
CD
0
CD
PI
p.
r1'
.....
~
9.2 Deterministic Controllers for Deadtime Processes 1 91

Table 9.2.3 Largest magnitudes lz.J. Imax of the roots of the characteris-
tic equation for dead-time processes with controller
-d
GR(z) = 1/(1-z )

Process d=1 2 5 10 20
-d 0 0 0
z 0 0
z- (d+1) 1 .o 1 • 1 2 6 1.068 1 .034
1. 320
-(d-1)
-- --- ---
2 1. 151
1. 61 8 1 .076 1 .036
0.5
- - - - - - --- ---

Table 9.2.5 Largest magnitudes lz.J. I max of the roots of the characteris-
tic equation for dead-time processes with the PI-controller
of Eq. ( 9. 2-9)

Process d=1 2 5 10 20
-d 0.866 0.938 0.970
z 0.707 0. 707
z- (d+1) 1 .065 0.941 0.923 0. 951 0.974
2 -(d-1) 0.333 0.500 0.796 0.923 0.967

dead time. Only for a PI-controller designed for d = 1 does instability


arise when a process with d = 2 is connected. Furthermore, it can be
observed that the largest magnitudes of the roots of the characteristic
equation increase if the dead time of the process is assumed too small.
As then the distance from the stability limit is smaller, it is better·
to choose the dead time too large than too small if a PI-controller
has to be designed. Section 14.3 considers controllers for pure dead
time and stochastic disturbances designed according to the minimum va-
riance principle.

State Controller
If the dead time as in Eqs. (9.1-7) and (3.6-41) is included in the
system matrix, and assuming that all state variables are directly mea-
surable, one obtains for the pure dead-time process with a state con-
troller (see Eq. (8.3-8)) the characteristic equation

T + k1zd-1 + zd
det [z ! - ~ + ~ ~ ] kd + kd~1z +
(z-z 1 ) (z-z 2 ) (z-zd) = 0. (9.2-17)

If the characteristic equation has to be the same as for the input/out-


put cancellation controllers, i.e. zd = 0, then all proportional feed-
back terms have to be zero, i.e. ki 0, i = 1 ... d. The reason is that
192 9. Controllers for Processes with Large Deadtime

an open loop dead-time process with reducing state feedback has the
smallest settling time for initial values x(O), corresponding to dead-
beat behaviour (~zd = O). If the state fee~back is not to reduce, the
poles zi in Eq. (9.2-17) must be nonzero. As the state variables intro-
duced in the process model Eq. (3.6-41) cannot be directly measured in
general, the state variables have to be observed or estimated. Then
the question arises as to whether the state controllers with the state
observers of sections 8.6 and 8.7 or state estimators of sections 22.3,
15.2 and 15.3 have advantages over the input/output controllers dis-
cussed above. This is considered in the following sections using the
results of digital computer simulations.

9.3 Comparison of the Control Performance and the


Sensitivity of Different Controllers for Deadtime
Processes
To compare the control performance and the sensitivity to inexact dead
time of various control algorithms and processes with large dead time,
the control behaviour was sinulated with a process computer and program
package CADCA, described in chapter 29 [30.1]. Two processes have been
investigated, a pure dead-time process

G (z- 1 ) = ~ = z-d with d = 10 (9.3-1)


P u(z)

and the low-pass process III (see Eq. (5.4-4) and Appendix) with dead
time d = 10

-1 ~ z
-d (9.3-2)
GP(z ) u(z)

The resulting closed loop behaviour for step changes of the set point
is shown in Fig. 9.3.1 and Fig. 9.3.2. The root-mean-squared (or rms)
control error

se -
-
v - 1 M
l: e 2 (k)
M+1 k=O w
(9. 3-3)

and the rms changes of the manipulated variable

su y;1
-l: M [u(k)
M+1 k=O
- U (co) ] 2 (9.3-4)

are shown in Fig. 9.3.3 for M = 100 and for the dead time dE chosen
for the design (which is exact for dE = d = 10, too small for dE = 8
and 9, or too large for dE= 11 and 12). Table 9.3.1 shows the resul-
9 .3 Compar i son of the Control Perfo rman c e 193

y y
..
1.0 ..: :·..·..:.-. ·... - --·- 1,0
....
.· .-- ------

0 0
20 l.O 60 80 k 20 l.O 60 80 k

u 3 PC-3 2 PC-2
u
1,0 1,0

0 0
20 l.O 60 80 k 20 l.O 60 80 k

y y

1,0 ------ 1,0 ----- -

0 0
20 l.O 60 80 k 20 l.O 60 80 k
u DB(v) ,PREC u DB ( '1!+ 1)

1,0 -------- 1,0 -- - - -

tt
0 20 l.O 60 . 80k
20 l.O 60 80k

0 20 l.O 60 80k

u sc

1,0 ------

0
20 l.O 60 80 k

Figure 9.3.1 Control variable y(k ) and mani pulated va r iable u(k) for
the pure dead-time process Gp(z) = z-d with d = 10 and
a step change in the set p oint
194 9. Controllers for Processes with Large De a dt ime

y
1.0

o~+-~~~~~~~--
20
----

40 60 80 k
tr---
u
20 40 60 00 k
u 3 PC-2
3 PC-3
2.0
3,0

2,0

0
20 40 60 80 k
1,0

0
20 40 60 80 k ,~:L
- -
- - ·- - - 20 40 60 80 k

{:1 : u
DB (v +1)

"t:==
I I I
I
' '
20 40 60 80 k 2,0

1P~ 0
20 40 60 80 k
0 '
20 40 60 80 k

~l_,_ _ _ ._ -~~- -_--.....-- r.:l I I I


' I ..k
20 40 60 80 k 20 40 60 80
u
sc

~·:+-1~· - <Pf-R~E-C+1-
1,0

--+o--,-, ....,--+--,--
1 1
0
20 40 60 80 k 20 40 60 80 k

Figure 9 .3 .2 Control variable y(k) and manipulated variable u(k) for


the process III with de ad time d=10, Gp(z)=B(z-1)z-d/A(z-1),
for a ste p change in th e set point
9.3 Comparison o f the Control Performanc e 195

0,2

0.1 0,1

10 ,, 12
10 11 12

Su
\I
\I
\
\ 1.0 0 2 PC-2
\
I I
•o 2 PC - 1
3 PC-3
• 3- PC-2

l i "'

DB(~I
08(~+11
\ .f o PREC
\ c sc
'· I
\ I
0.5
I
0,5

1I I ~
l \I :
unstable

0.,__1 1,)
0...
"'0--
--
....a"
_.o
{ 2 PC- 1
2 PC-2
3 PC-2
3 PC-3

0
c, ,"'-0....
--10-
_,~/
~

6 9 11 12 dE

a) b)

Figure 9 . 3.3 The rms control error Se and the change of the manipula-
ted v ariable Su as functions of the dead time dE used
for the design -d
a) pure dead-time process z b) process III with dead
with d = 10 time d = ;10
196 9. Controllers for Processes with Large Deadtime

Table 9.3.1 Controller parameters for the investigated processes with


large dead time

-d B (z - 1 ) -d
G = z G = z
H Ill p p A(z - 1 )
Q) H
rl Q)
rl +.1
0 Q)
d = 10 (Proc. III with d = 10)
H
+.1
s
Ill
s:: H
0 Ill 3PC-3 2PC-2 3PC-3 3PC-2 2PC-2
0 0.
(r=O) (r=O) (r=O) (r=O)

qo 0.6423 0.5198 3.4742 2.0000 0. 6279


q1 -0.6961 -0.4394 -6.2365 -3.3057 -0.5714
q2 0.1372 0 2.8483 1.3749 0
K 0.5052 0.5198 0.6258 0.6251 0.6280
CD 0.2715 0 4.5515 2.1993 0
CI 0.1651 0.1547 0.1373 0. 1106 0.092

DB (v+1) DB(v) DB (v+1) DB(v) PREC

qo 1. 25 1 3.8810 9.5238 1
q1 -0.25 0 -0.1747 -14.2762 -1.4990
q2 0 0 -5.7265 6. 7048 0. 7040
q3 0 0 3.5845 - 0.9524 -0.1000
q4 0 0 -0.5643 0 0

Po 1 1 1 1 1 .0000
p1 0 0 0 0 -1.4990
p2 0 0 0 0 0.7040
p3 0 0 0 0 -0.1000
p4 0 0 0 0 0

pd 0 -1.25 0 0 0
pd+1 0 0 -0.2523 -0.6190 0.0650
pd+2 0 0 -0.5531 -0.4571 0.0480
pd+3 0 0 -0.2398 0.0762 -0.0451
pd+4 0 0 0.0451 0

sc (r = 1 )
.~)
sc (r = 1) {,)

k1 0 0.0680
k2 0 0.0473
k3 0 0.0327
k4 0 0.0807
k5 0 0.0691
k6 0 0.0551 .~)
k7 0 0.0420 rb = 5;
k8 0 0.0311
0 0.0226 Qb c.f.
k9 Example
k10 0 0.0161
8. 7 .1.
k11 1. 0 0.0114
k12 - 0.0080
k13 - 0.0056
k14 - 1.0
9.3 Comparison of the Control Performance 197

ting controller parameters for dE d 10. The results can be summa-


rized as follows:

a) Pure dead-time process


For the pure dead-time process the controller 2PC-2 with PI-behaviour
shows - within the class of parameter-optimized control algorithms -
a somewhat better control performance than the controller 3PC-3 with
PID-behaviour, as better damped control variable and manipulated vari-
able can be observed. The gain in both cases is about 0.5. A weighting
of the manipulated variable with r > 0 has only an insignificant influ-
ence on the resulting control behaviour. The sensitivity of these para-
meter-optimized controllers to errors in the assumed dead time is smal-
ler than that of all the other controllers. The best possible behaviour
of the control variable is produced by the deadbeat controller DB(v)
or the identical predictor controller PREC. The modified deadbeat con-
troller DB(v+1) reaches the new steady state one sampling unit later.
Neither the deadbeat controllers nor the predictor controller, however,
can be recommended as instability occurs if the dead time used for the
design differs from the real dead time. Well-damped control behaviour
is produced by a state controller with observer. Here u(O) = 0, as in
the optimization of the quadratic performance criterion Eq. (8.1-2)
the state feedback kd and also all ki for i = 1 to d-1 become zero.
Only kd+ 1 ' the feedback of the state variable y(k) = xd+ 1 (k) of the
extended observer becomes kd+ 1 = 1 (compare Fig. 8.7.2, Fig. 8.7.5 and
Example 8.7.1). This state controller is independent on the choice of
the weighting r of the manipulated variable. The sensitivity arising
from a wrong dead time is, however, for jfidj = JdE-dJ = 1 greater than
for parameter-optimized controllers. For j6dj > 1 instability has been
observed. Therefore for a pure dead time process with a relatively well
known dead time (j6dj ~ 1), a state controller with modified observer
can be recommended, but for an inexactly known or changing dead time
J6dJ > 1, a parameter-optimized controller with PI-behaviour is to be
prefered.

b) Low pass process III with large dead time


The parameter-optimized control algorithms with PI-behaviour (2PC-2,
2PC-1) designed with r = 0 leads to relatively undamped control vari-
ables and manipulated variables. The control variable settles faster
using the controller 3PC-3 with PID-behaviour. However, large changes
of the manipulated variable are required. A controller 3PC-2 with q 0 =2
leads to the best behaviour among the group of parameter-optimized
controllers, with good damping of the controlled and manipulated vari-
198 9. Controllers for Processes with Large Deadtime

ables. There is little sensitivity of all parameter-optimized control-


lers to inexactly chosen dead time. The controlled variable of the
deadbeat controller reaches the new steady state faster than all other
controllers. However, the changes of the manipulated variable are ex-
cessive using DB(v) and also large with DB(v+1). The sensitivity to in-
exactly chosen dead time is greatest for the deadbeat controllers. In
the example, dead time errors of ~dE = ± 1 can be permitted, especially
for DB(v+1). However, larger dead time errors result in poor control
behaviour. The predictor controller models the behaviour of the process
itself. The manipulated variable immediately reaches the new steady
state. The state controller with an observer designed for r = 1 results
in a very well-damped control variable compared to the predictor con-
troller. The manipulated variable u(O) is still small because of the
small value of the state feedback k 3 , as shown in Table 9.3.1 and Fig.
8.7.5. The manipulated variable has its largest value fork= 1 and
then approaches with good damping the new steady state value. Both the
predictor controller and the state controller with observer have about
the same small sensitivity.

The best control performance for this low pass process with large time
delay can therefore be achieved with the state controller, the predic-
tor controller or the parameter-optimized controller 3PC-2 (or 3PC-3
with r ~ 1). The predictor controller leads to the smallest, the 3PC-2
to the largest and the state controller to average changes in the mani-
pulated variable. A comparison of the control performance shows that
the open loop transient response to set point changes can hardly be
changed compared with the transient response of the process itself if
very large variations of the input have to be avoided. For larger chan-
ges of the manipulated variable, as for deadbeat controllers, one can
reach a smaller settling time. However, this leads to a higher sensiti-
vity to dead time. Therefore deadbeat controllers cannot be recommended
in general for processes with large dead time. As the predictor control-
ler can be applied only to asymptotically stable processes, state con-
trollers with observer and parameter optimized controllers with PID- or
PI-behaviour are preferred for low-pass processes with large dead time.
10. Control of Variable Processes with
Constant Controllers

The preceding dontroller design methods assumed that the process model
is exactly known. However, this is never the case in practice. Both in
theoretical modelling and in experimental identification one must al-
ways take into account both the small and often the large differences
between the derived process model and the real process behaviour. If,
for simplicity, i t is assumed that the structure and the order of the
process model are chosen exactly then these differencies are manifested
as parameter errors. Moreover, during most cases of normal operation,
changes of process behaviour arise for example through changes of the
working point (the load) or changes of the energy-, mass- or momentum-
storages or flows. When designing .controllers we must therefore assume:
- the assumed process model is inexact~

- the process behaviour changes with time during operation.


This chapter will briefly consider how changes of the process influence
the closed-loop behaviour. This includes both inexact process models
and changing process behaviour. Here parameter changes with reference
to a nominal parameter vector ~n are assumed in the controller design.
Then the closed-loop behaviour for parameter vectors near the nominal
parameter vector is of interest if a aonstant aontroZZer is assumed.
It is further assumed that the order of the process model does not
change and that the parameters change slowly compared with the closed
loop dynamics. This last assumption means that the process can be re-
garded as quasi-time-invariant. If the parameter changes are small,
then sensitivity methods can be used [10.1] to [10.7]. If the parame-
ter sensitivity is known, then for controller design both good control
performance and small parameter sensitivity can be required. This will
be considered in section 10.1. However, for larger parameter changes
this "sensitivity design" is unsuitable. Instead one can design con-
stant controllers which are optimal on average for process models with
different parameter vectors. This approach is more general than the
sensitivity method. Then, as well as considering larger changes in the
parameters, a single controller is designed for two or more working
points and not for only one as is the case for sensitivity design ha-
ving a small sensitivity for (small) parameter changes. This, however,
200 10. Control of Variable Processes with Constant Controllers

can only be discussed very briefly in this book, as in section 10.2.


This problem has been treated in [8.8] for state controllers with con-
tinuous signals.

10.1 On the Sensitivity of Closed-loop Systems

Compared with feedforward control systems, feedback control systems


have the ability not only to decrease the influence of disturbances on
the output variable but also to decrease the influence of process para-
meter changes on the output. In order to demonstrate this well-known
property [10.1], a feedback control and a feedforward control as in
Figures 6.1 and 6.2 are treated. It is assumed that the process has
transfer function GP(z), the controller GR(z) and the feedforward ele-
ment G8 (z). Both systems are designed for the nominal process parame-
ter vector ~n' so that the same input signal w(k) generates the same
output signal y(k). The process GP(z) is assumed to be asymptotically
stable so that after the decay of the free response both processes are
in the same steady state before w(k) changes. The input/output behavi-
our of the closed loop is for the nominal working point

GR(z)GP(~n'z)
~ (10.1-1)
w(z)
1+GR(z)GP(~n'z)

The feedforward control with the same input/output behaviour has the
transfer function

G(8 )_u(z) (10.1-2)


S ..::._n,z - w (z)

The process parameter vector now changes by an infinitesimal value d~.

For the control loop it follows by differentiation of the process pa-


rameter vector

a Gw (8-n , z)
1 (10.1-3)

Accordingly one obtains for the feedforward control

(10.1-4)
10.1 On the Sensitivity of Closed-loop Systems 201

As for both systems we have:

Cly (z) ClGw(~n'z)


w(z) (10.1-5)
ae ae

it follows that:

Cly(z)
-----
I =
Cly(z)
R(0 ,z) -----
-n
I (10.1-6)
ae- R
ae- S

with

(10.1-7)

as the dynamic control factor. Cly/Cl~ is called the parameter sensitivi-


ty of the output variable y. As can be seen from Eq. (10.1-6), there-
lative parameter sensitivities of the feedback control and the feedfor-
ward control depend on the frequency w of the signal w(k). If IR(z) 1<1
the feedback control has a smaller parameter sensitivity than the feed-
forward control, but if IR(z) I > 1 the opposite is the case. In general,
however, feedback control systems are designed so that in the signifi-
cant frequency range (O~w~wmax) the magnitude of the dynamic control
factor IR(z) I< 1 is less than one to achieve good control performance.
Therefore in most cases the parameter sensitivity of feedback control
systems is smaller than that of feedforward controllers. The parameter
sensitivity increases with the exciting frequency, and is therefore
smallest at w = 0, i.e. in the steady state.

The same equation gives both the ratio of the parameter sensitivity of
the feedback and the feedforward control, and the ratio of the influ-
ences of the disturbance n(k) on the output variable y(k)

y(z)
--
I = R(z) - -
y(z) I (10.1-8)
n(z) R n(z) S

From Eq. (10.1-3) and Eq. (10.1-1) it further follows for the feedback
control that

dGw(~n'z)
(10.1-9)
Gw(~n'z)

with the sensitivity function S(~n'z) of the feedback control

s (~n'z) = R(_§n,z) = -------=---------- (10.1-10)


1+GR ( z) GP (~n, z)
202 10. Control of Variable Processes with Constant Controllers

This sensitivity function expresses how relative changes of the input/


output behaviour of a closed loop depend on changes of the process
transfer function. Since this ratio is the same as the ratio of the pa-
rameter sensitivity of feedback and feedforward control the remarks gi-
ven above can also be transferred to this case. The sensitivity function
can also be used for non-parametric models. Small sensitivity of the
closed-loop behaviour after set point changes can be obtained by choo-
sing a small dynamic control factor IR(G ,z) I in the significant fre-
-n
quency range 0 s w s wmax for the exciting external signals n(z) or
yw(z) = GR(z)GP(z)w(z). Note that the parameter sensitivity of the out-
put variable and the sensitivity function represent time functions
ay(k)ja~ or s(k) after back transformation in the original region.
For a process in state representation

(10.1-11)

with state controller

T
~R(k) = -~ ~(k) (10.1-12)

or closed-loop transfer function


UR (z) T 1
GR(z)Gp(z) = =~ [zl- AJ- b (10.1-13)
up(z)

the dynamic control factor becomes, now defined as R' (z)


with up = uR,

up(z)
R' (z) = (10.1-14)

In [10.8], page 132, it is shown that Eq. (10.1-6) gives the parameter
sensitivity of the open loop output variable y(k) = cTx(k) of the state
controller, but with R' (z) instead of R(z). Optimal state controllers
for continuous signals always have smaller parameter sensitivities for
all frequencies than feedforward controllers, [10.2], [8.4] page 314,
and [10.8] page 126 .• State controllers with observers and state con-
trollers for discrete time signals do not obey this rule ([8.4 page 419
and 520).

The sensitivity function S(~n'z) of Eq. (10.1-10) expresses the influ-


ence of relative changes of the process transfer function. Absolute
changes of the closed loop behaviour for set point changes follow from
Eq • ( 10 • 1 - 9 ) and Gw
10.1 On the Sensitivit:r of Closed-loop Sys.tems 203

(10.1-15)

2
The influence of process changes I~Gpl on I~Gwl is amplified by IRI IGRI.
Compare the corresponding relation for the signals

ly(z) I = IR(z) lln(z) I (10.1-16)

which can be used to evaluate the control performance. The changes


I~Gwl influence ly(z) I linearly:

(10.1-17)

The amplification of the I~Gpl by IRI 2 1GRI means that changes of I~Gpl
have a great influence in frequency ranges II and III of the dynamic
control factor, as shown in Fig. 11.4.1. For very low frequencies and
controllers with integral action IRI 2 1GRI - IRI. This means an influ-
ence such as that for the control performance given by Eq. (10.1-16).
An insensitive control can be obtained in general by making IR(z) I as
small as possible, particularly for the higher frequencies of range I
and for the ranges II and III if disturbances arise in these ranges.

From Figures 11.4.2 and Table 11.4.2 it follows that for feedback con-
trols insensitive to low-frequency disturbances the weight of the mani-
pulated variable r must be small, leading to a strong control action.
Disturbance signal components n(k) in the vicinity of the resonant fre-
quency, however, require a decrease of the resonance peak and therefore
a smaller r, or a weaker control action. This case again shows that
steps towards an insensitive control depend on the disturbance signal
spectrum. If IR(z) 12 is considered, Figures 11.4.2 and 11.4.3 show that
for different controllers high sensitivity to process changes arises
with the following controllers: Range I: 2PC-2. Range II: 2PC-2, DB(v)
and SC. Small sensitivities are obtained for range I: SC, and for range
II: DB(v+1). Note, however, that parameter optimized and deadbeat con-
trollers have been designed for step changes of the set point, i.e. for
a small excitation in ranges II and III. For step changes of w(k) these
results essentially agree with the sensitivity investigations of sec-
tion 11. 3, c) .

Until now, only some sensitivity measures have been considered. Other
common parameter sensitivity measures given a nominal parameter vector
8 are
-n
204 10. Control of Variable Processes with Constant Controllers

sensitivity of a state variable <J


ax (10.1-18)
(trajectories) -x ae

sensitivity of the performance ar (10.1-19)


criterion QI a~
aA..
sensitivity of an eigenvalue QA. = ae ~ . (10.1-20)

The sensitivity of the output variables follows from the state variable
sensitivity
a T ax
(10.1-21)
<J
-y ae [.£ ~J = ae c.

The parameter sensitivity can be taken into account in the controller


design by forming positive semi-definite functions f(_q_) ~ 0 at the no-
minal point and adding into a performance criterion In so that

I + f(cr) (10.1-22)
n -
has to be minimized. If the parameter sensitivity of the control per-
formance criterion

f (Q) KT<J = T Clin (10.1-23)


- -I K ae

is used, in which ~ are weighting factors, the criterion


T din
In + £ ae (10.1-24)

with respect to the unknown controller parameters must be optimized in-


stead of In. If an insensitive feedback control for variable working
points, such as for load, power or throughput changes ~M, has to be de-
signed then based on~= !(M) the derivation a~/aM must be calculated
and

(10.1-25)

optimized. Then one obtains relationships as in Fig. 10.1 if only the


controller gain factor K is considered. The curves show:
Figure 10.1 a)
-Minimum of In furnishes K1 with control performance I 1 •
- Minimum of Icr furnishes K2 with control performance I 2 •
- K2 < K1 •
- More insensitive control leads to inferior control performance,
I2 > I1.
10.1 On the Sensitivity of Closed-loop Systems 205

Figure 10.1 b):


- Design disregarding the sensitivity leads to a better control
performance for Mk < M < Mg.
- Only outside the region ~ < M < M does insensitivity design
lead to a better control performaHce.

a) b)

Figure 10. 1: To the design of insensitive control


a) Control performance I against controller gain K at the
nominal working point M
n
b) Control performance I against working point M.

This example shows that the design of an insensitive control is only


of advantage if the sensitivity ar n /3M of the feedback control is high
and therefore the region Mk - Mg is relatively small. For optimizing
I 0 numerical parameter optimization methods have to be used in general.
Each of the process parameters taken into account and each optimizing
step requires the solution of a difference equation. The computational
burden therefore becomes large even for loworder processes.
206 10. Control of Variable Processes with Constant Controllers

10.2 Control of Processes with Large Parameter Changes

Sensitivity methods require that the sensitivity within the considered


parameter intervals changes only little. Therefore, only the influence
of relatively small parameter changes can be taken into account. Fre-
quently the problem consists of designing a constant controller for
processes with large parameter changes, and practice shows that this
can be successful. The problem of feedback control of processes with
large and slowly varying parameter changes with constant controllers
has been investigated in [8.8]. Here the parameter region is discreti-
sized so that, for example, M processes

~i (k+1) ~i ~i(k) + B ~i(k)


} i = 1 ,2, ••• ,M (10.2-1)
yi(k) f.i ~i (k)
are obtained. For these M different processes one fixed state control-
ler
~(k) = -
-K x.(k)
-~
(10.2-2)

is to be designed so that this controller is optimal on average. For


this one can use an overall performance criterion

0 s e: 0 s 1 ( 10.2-3)
~

where Ii is the local quadratic performance criterion

(10.2-4)

e:i weights the criteria at the individual operating points. With con-
tinuous signals it was proved that there are constant controllers ~
giving stable behaviour at all N operating points. A representative
process with parameter matrices ~' ~ and f is defined for which a con-
troller is calculated to be optimal on average according to Eq. (10.2-3).
In this the solution of M Matrix-Riccati-Equations is required.
In recent time the problem considered here is to design "robust" con-
trollers.
11. Comparison of Different Controllers
for Deterministic Disturbances

At the end of part B the various design methods and the resulting con-
trollers or control algorithms for linear processes with and without
dead time are compared. Section 11.1 considers the controller struc-
tures and in particular the resulting poles and zeros of the closed
loop. Then the control performance for two test processes is compared
quantitatively for different controllers in sections 11.2 and 11.3. The
dynamic control factors for different controllers are compared in sec-
tion 11.4. Finally section 11.5 draws conclusions as to the application
of the various control algorithms.

11.1 Comparison of Controller Structures:


Poles and Zeros
The general linear controller with transfer function

G (z) = u(z) (11.1-1)


R e(z)

and the special cases of the parameter-optimized controllers of low or-


der, the cancellation controllers and the deadbeat and predictor con-
trollers can be summarized as input/output eontroZZers in contrast with
state controllers. Together with the process

-d -d
z z (11.1-2)

and the controller Eq. (11.1-1) the closed-loop transfer function for
setpoint changes

Q(z- 1 )B(z- 1 )z-d


~
w(z) P ( z - 1 ) A ( z - 1 ) + Q ( z - 1 ) B ( z - 1 ) z -d

(11.1-3)
208 11. Different Controllers for Deterministic Disturbances

and the closed-loop transfer functions (c.f. Figure 5.2.1)

G (z) ~
n n(z) -1 -1 -1 -1 -d
P(z )A(z )+Q(z )B(z )z

(11.1-4)
P(z- 1 )B(z- 1 )z-d

(11.1-5)
are obtained. In general form these transfer functions can be written as
as + .•. +
(11.1-6)
+ .•. +

In this * means w, n or u. The order ~ is

t =max [m+~; m+d+v]. (11.1-7)

We now consider the orders v and ~ of the individual controllers and


the poles and zeros of the transfer functions of the closed loop. There-
fore the polynomials of G*(z) must be written with positive exponents
0 1 2
z , z , z , .... Based on the controller

Q (z) (11.1-8)
P (z)

and the process


B 0 (z)
GP(z) = d (11.1-9)
A 0 (z)z

(see Eq. (6-9)) results in the general transfer function

'&- (z)
(11 .1-10)
G* (z) = ._A.(z)

with the closed-loop characteristic equation

J\-(z)

(11.1-11)

In this z . are the poles. The zeros of G*(z)


cu.
(11.1-12)

follow from
11.1 Comparison of Controller Structures: Poles and Zeros 209

13w (z) Q(z)B 0 (z) 0

13n (z) P(z)A0 (z)zd 0 } (11.1-131

'Eu (z) P (z) B 0 (z) 0.

They depend on the injection points of the external signals.

The following sections consider the existence and placement of poles


and zeros for the closed loops and for different controllers. This is
also done for the state controtter. As the general linear controller
has the most freedom in the placement of poles, at least compared with
other input/output controllers considered here, this controller is trea-
ted first.

11.1.1 A General Linear Controller for Specified Poles

If the poles zai in

A-(z) = (z-za 1 ) •.• (z-zaR.) = 0 (11.1-14)

or the resulting characteristic equation

(11.1-15)

are specified, the controller parameters can be determined by comparing


the coefficients in

-1 -].! -1 -m
(1 + p 1 z + ... + p z ) (1 + a 1 z + ... + a z )
J.l m
+ (qo + q1z
-1
+ •.. + qvz
-v ) (b z -1 + ••. + bmz -m )z -d o.
1

(11.1-16)

To avoid steady state offsets, Gw(1) has to be set to one. From Eq.
(11.1-3) it follows that P(1)A(1) = 0, and this is generally fulfilled
for
]J
E p. = -1. (11.1-17)
1
i=1

There are (R-+1) equations giving unique determination of the (]J+V+1)


unknown controller parameters. Hence

]J + v + 1 = R, + 1. (11.1-18)

In Eq. (11. 1-7) two cases must be distinguished:


210 11. Different Controllers for Deterministic Disturbances

a) \1 ;;,_ v + d -+ Q, = m + 11·
Eq. (11.1-18) gives v = m. Hence \1 ;;,_ m + d.

b) \1 ,
v + d -+ Q, = m + d + v.
Eq. (11.1-18) gives \1 m + d. Hence v ;;,_ m.

I f the smallest possible order numbers are chosen

v = m and \1 = m + d (11.1-19)

then in all cases it is possible to determine uniquely the controller


parameters. Eqs. (11.1-16) and (11.1-17) lead to the system of equa-
tions

a1
0

0
0 lo

: :
I .
1d
0 p1 a 1 -a 1

a
m
a, 0 0

0
,---
10

I b1 o
---
0

I b1 a -a
0 a Pm+d m m
m I
a1 I b m 0 qo am+1
I b1 q1
I
I

l·~- ~ ---,- am I 0
... 1 I o
I
----
bm

0 qm
a2m+d
-1

m + d m + 1

R ~R 5;!:_ (11.1-20)

and the unknown controller parameters are obtained from

-1 (11.1-21)
~R = R a

i f det B + 0.

As already remarked in chapter 4 there is more freedom to place the


poles within the stability region. Therefore moderate trial and error
should lead to suitable time responses of the controlled and manipula-
ted variables. Note that for a given characteristic equation or for gi-
ven poles, the zeros T:k<z) = 0 of the transfer functions Eqs. (11 .1-3)
to (11.1-5) are also determined if the system of equations Eq. (11.1-20)
is unique, i.e. Eq. (11.1-18) is valid. If, however, v > m then the
zeros of Gw(z), and if 11 > m+d, the zeros of Gn(z) and Gu(z) can be in-
11.1 Comparison of Controller Structures: Poles and Zeros 2 11

fluenced as well. For v = m and ~ = m+d the zeros of the processes ap-
pear in Gw(z) and Gu(z) and the process poles appear in the zeros of
Gn(z). This means that the process itself dictates some of the zeros
of the closed loop transfer functions.

11.1 .2 Low Order Parameter-optimized Controllers

For low order parameter-optimized controllers, for example the 3PC-3


controller with PID-behaviour
-1 -2
qo + q1z + q2z
1 - z- 1

one must remember that, in contrast with the general parameter-optimiz-


ed controller of Eq. (11.1-1), there are~= m+d+2 poles and that be-
cause there are only 3 free controller parameters the coefficients of
the characteristic equation for process orders m > 1-d are not inde-
pendent. Furthermore, the zeros of Gn(z) and Gu(z) are dictated by the
process and by the controller pole at z = 1, as in Eq. (11.1-13). Only
some zeros of Gw(z) can be influenced by the controller parameters.

11 .1.3 General Cancellation Controller

Chapter 6 shows that cancellation controllers with a given closed-loop


transfer function for set point changes Gw(z) Eq. (6-4) lead to the
characteristic equation described by

A d d d J Gw ( z) = 0
J-r( z) = A0 ( z) z B ( z) + [A ( z) z B0 ( z) - A0 ( z) z B ( z)

(11.1-22)

For an approximate agreement of the process and its model we have


Gw (z) =13 w0 (zl&w0 (z)

(11.1-23)

Therefore general cancellation controllers can only be applied to pro-


cesses with poles and zeros inside the unit circle. For certain closed-
loop responses these restrictions can be relaxed as with the deadbeat
and the predictor controllers.
212 11. Different Controllers for Deterministic Disturbances

11.1.4 Deadbeat Controller

For the deadbeat controller DB(v) we have


-1
-1 q 0 A(z )
G (z) = Q(z )
R P(z-1) -1 -d
1 - q 0 B (z )z

(see Eq. (7.1-22)); after expansion with z(m+d) in the nominator and
the denominator

G (z) = Q(z) (11.1-24)


R P(z) z(m+d) - q B(z)
0

Here A(z)zd and B(z) are polynomials of the process model. The charac-
teristic equation becomes

~(z) = z(m+d)A0 (z)zd- q 0 A0 (z)zdB(z) + q 0 A(z)zdB 0 (z) 0.

(11.1-25)

If the process and the process model approximately agree so that


A(z)zd ~ A0 (z)zd and B(z) ~ B0 (z), we have

~. (m+d) d
~(z) ~ z A0 (z)z =. 0. (11.1-26)

For the zeros we have

'J3w (z) q 0 A(z)z d B0 (z) 0

'an (z)
[z(m+d) - q B(z)] A (z)z d
0 0 0 1(11.1-27)
[z(m+d) - q B (z) J B (z) 0
13u (z) 0 0

and, using Eq. (11.1-26), the transfer functions are

q 0 B0 (z)A(z)z d q 0 B0 (z)A (z)


Gw(z) d
z (m+d) A0 ( z ) z z(m+d)Ao(z)

[z (m+d) - q B(z) J A (z)z d [z(m+d) - q B (z) J P(z)


0 0 0
Gn(z) z(m+d)
(m+d) A ( ) d z(m+d)
z 0 z z

[z (m+d) - q 0 B (z) ] B0 (z) P (z)


z (m+d) GP (z). (11.1-28)
z (m+d) A0 ( z ) zd

If there is an exact agreement of process model and process, i.e.


A(z) = A0 (z) and B(z) = B0 (z), the polynomial A0 (z) in Gw(z) cancels
so that
(11.1-29)
11.1 Comparison of Controller Structures: Poles and Zeros 213

and

(11 .1-30)

There is deadbeat behaviour only for an exact agreement of process mo-


del and process. If there is no agreement the free oscillations decay
besides to z(m+d) delayed by A0 (z)zd, as shown by Eq. (11.1-26). There-
fore deadbeat controllers may only be applied to processes with poles
sufficiently inside the unit circle in the z-plane, i.e. for well damped
asymptotically stable processes. The zeros of the transfer functions of
the closed loop are mainly determined by the zeros of the process. As
can be seen from Eq. (11.1-25) the differencies ~B(z) = B(z) - B0 (z)
between the numerator polynomials of the process and the process model
influence the characteristic equation as follows

(11.1-31)

with A0 (z) = A(z). Small changes ~B(z) do not seriously affect stabili-
ty. The zeros of the process can therefore be placed outside the unit
circle; they are not cancelled by the deadbeat controller.

11.1.5 Predictor Controller

The predictor controller is given by Eq. (9.2-2)

Q (z -1) A(z - 1 )
GR (z) -1 - B(z -1 ) z -d
P(z- 1 ) KPA(z )
or
Q(z) A(z)z d
GR(z) P(z) (11.1-32)
d
KPA(z)z - B(z)

and leads to the characteristic equation of Eq. (11.1-11),

d d d d
Jl
~(z) = KPA(z)z A0 (z)z - A0 (z)B(z)z + A(z)B 0 (z)z = 0. (11.1-33)

If the process and process model approximately agree then

(11.1-34)

The transfer functions become


214 11. Different Controllers for Deterministic Disturbances

d
KPA 0 (z)z

d
[KPA(z)z - B(z) J P(z)
d
(11.1-35)
KPA(z)z

d d
For Gw(z) or Gn(z) the poles A(z)z or A0 (z)z are always cancelled by
the corresponding zeros. Closed loops with a predictor controller are
only stable for asymptotically stable processes, as Eq. (11.1-34) shows.
The process zeros can therefore lie outside of unit circle. The zeros
of the closed loop are only for the closed loop response to setpoint
changes dictated by the process zeros. If the poles of the process are
sufficiently within the unit circle small differencies 6B(z) = B(z) -
B0 (z) do not influence the stability, Eq. (11.1-33).

11.1.6 State Controller

For a simple state-control system with one controlled, one manipulated


variable, controller u(k) = -~T~(k) and with no external disturbances
we have
T
~(k+1) = [~- ~ ~ J ~(k)
(11.1-36)
T
y(k) = .£ ~(k).

This control is now influenced by an external disturbance signal v(k)

-x(k+1) = [A - b kTJ x(k) + f v(k).


- -- - - (11.1-37)

If f =~ the disturbance is at the process input. By appropriate choice


of f each state variable can be disturbed. The transfer function of
the process alone is, from Eq. (3.2-50),
m
b + ••• + b 1 z
GP(z) =~
u(z)
T
.£ [z I - A]- 1 b = m B(z)
A(z) •
(11.1-38)
a + ••• + a 1 z m
m

Therefore for the state control system it follows that

T b kT]- 1 f
Gv(z) = ~
v(z) .£ [z I - A + - - -
adj [z I - A + b kTJ _ 13(z)
T f - A-(z). (11.1-39)
c
det [z I - A + b ~TJ
11.1 Comparison of Controller Structures: Poles and Zeros 215

Eq. (8.3-8) gives the characteristic equation

J\-(z) (am+ km) + (am-1 + km-1)z + •.. + zm


am+ am_ 1 z + ..• + zm = 0. (11 .1-40)

By suitable choice of ki' arbitrary ai for arbitrary ai can be genera-


ted. Unstable processes can be stabilized. If the state control system
is disturbed at the process input, i = ~' the zeros of Gv(z)

13< Z) = ~T adj [ z ..!_ - f:. + ~ ~T J ~ (11.1-41)

do not change compared with the process (see Eqs. (11.1-38) and (11.1-39)).
This holds because the parameters of the denominator polynomials13(z)
B(z) are contained either in ~T or in~' depending on the assumed ca-
nonical state representation. If, however, the disturbance influences
one state variable the zeros of the transfer function Gv(z) are also
influenced by the state controller.

Example:

l
The process order m is 2, and controllable canonical form is chosen.

j
Then
z + (a 1 +k 1 )
13(z) = [b 2 b 1 J 1 f.
-(a2+k2) z -

With!=~= [~] it follows that

1j(z) = b 1 z + b 2 = B(z)

and therefore, taking!=[~], the process numerator polynomial is

The choice of the poles of the control law also determines the zeros
in the last case.
0

The poles of state-control systems with observers were considered in


section 8.7. The observer adds further poles and zeros to the control
system (see Eqs. (8.7-7), (8.7-18) and (8.7-19)). Only if the external
disturbances are exactly measurable and can be directly connected to
the observer, and if observer and process are in direct agreement, then
the observer does not furnish additional poles for then ~~(k) = Q,
(Figure 8.7.1). If in this case the disturbance arises at the process
input the zeros do not change either, and so 13< z) = B ( z) •
Q(z-1)/P(z-1) 1'0
Table 11.1.1 Structural properties of different deterministic controllers GR(z)
A-(z): Process poles near or outside the unit circle "'
B-(z): Process zeros near or outside the unit circle
n : no y: yes
x) for exact agreement of process and process model

abbrev. orders of zeros 13* (z) risk of


controller controllers char .eq. instability at
Q (z -1) P(z- 1 ) A<z)=O x) A- (z)
-
B (z)
Gw(z) Gn (z) Gu (z)
general ....0
linear LC m m+d 2m+d QB PAzd PB n n en
en
controller CD
Cj
Ul CD
;:l
H
Q) param.opt. rt
rl
3PC-3 2 1 m+d+2 QB PAz~ PB n n
low order ()
rl (PID)
0 controller 0
;:l
H
+l rt
Cj
s:: 0
0 general I-'
u ~m+1 ~m+d+1 <:2m+d QB PAzd PB y y I-'
cancell. cc CD
+l Cj
;:l controller
0.. en
+l
;:l en
0 deadbeat p PB y n 0
DB(v) m m+d m+d qoB Cj
'- controller
+l 0
;:l
0.. CD
s:: rt
·.-i predictor m+d or B p PB y n CD
PREC m m+d 2 (m+d) Cj
controller s....
;:l
state contr.
....
Ul sc en
H with no - - B n n ....rt
Q) w.n.o. - - m+d
observer ()
rl
rl
0 ....0
Q) H
+l+l state contr. sc fixed by process, n n en
cO s:: - - 2 (m+d) controller and rt
+l 0 with observ. W.O. s::Cj
Ul u observer tr
--------- PJ
;:l
()
CD
en
11.2 Characteristic Values for Performance Comparison 217

In Table 11.1.1 the most important structural properties of various con-


trollers are summarized for the process B(z- 1 )z-d/A(z- 1 ). The input/
output controllers have orders v ~ m and ~ ~ m+d, if they are matched
to the process in a structurally optimal way. The order of the charac-
teristic equation and therefore the number of the poles is different;
the smallest number is (m+d) for the exactly matched deadbeat control-
ler. The zeros of the processes appear in all cases as zeros of Gw(z)
and Gu(z) as well. Furthermore the controller poles P(z) = 0 become the
zeros of Gn(z) and Gu(z). For linear processes the general linear con-
troller and the parameter-optimized controller of low order can be ap-
plied in general. The deadbeat controller and the predictor controller
may only be applied to processes with poles within the unit circle, and
the general cancellation controller only for processes with both poles
and zeros within the unit circle. For state controllers with no obser-
ver the controller vector kT has at least an order of (m+d) . The order
of the characteristic equation is also (m+d) and is therefore smaller
than that of input/output controllers with the exception of the deadbeat
controller. This advantage, however, disappears if an observer has to
be used. The state controllers are applicable to a very wide class of
processes.

11.2 Characteristic Values for Performance Comparison

The last section summarizes the structural differences of the various


controllers; this section compares the most important controllers with
respect to the control performance. The word 'performance' is taken to
mean the following properties: The actual control performance and the
required effort of the manipulated variable, the sensitivity to an in-
exactly known process model, the computational effort between sampling
times and in the numerical part of the synthesis. Since a quantitative
comparison is impossible without specifying particular systems, the two
test processes described in section 5.4.1 and in the Appendix are used:

Process II : Second order, nonminimum phase behaviour, T0 = 2 sec.


Process III: Third order with dead time, low pass behaviour,
T 0 = 4 sec.

The properties of following control algorithms are compared:

A. Parameter-optimized control algorithms of low order


2PC-2, PI-behaviour with no prescribed manipulated variable }
(5.2-19)
2PC-1, PI-behaviour with prescribed manipulated variable
218 11. Different Controllers for Deterministic Disturbances

3PC-3, PID-behaviour with no prescribed manipulated variable


3PC-2, PID-behaviour with prescribed manipulated variable }<5.2-10)

B. Control algorithms for finite settling time (deadbeat)


DB(v), v-th order, with no prescribed manipulated variable (7. 1-21)
DB(v+1), (v+1)-th order, with prescribed manipul. variable (7.2-11)

C. State-control algorithms with observer for external disturbances


sc-1, small weight ron the manipulated variable 1 (8.7-9,10)
SC-2, larger weight r on the manipulated variable Fig. 8.7.5

The control algorithms are investigated for the single input/single out-
put control systems of Figure 5.2.1. The comparison is performed parti-
cularly with regard to the computer aided design of algorithms by the
process computer itself [8.5]. As process computers often have to per-
form other tasks the computational time for the synthesis should be
small. Furthermore the required storage should not be too large consi-
dering the capacity of smaller process computers and micro-computers.
A further criterion is the computational time of the algorithms between
two samples. Not only the computational burden of the synthesis but al-
so that required during operation have to be considered in connection
with characteristic values of the control problem such as, for example,
the control performance, required manipulation power, required manipu-
lation range, sensitivity to inexact process models and to parameter
changes of the process.

For comparing the control performance, the following characteristic va-


lues are used:

a) Root of the mean squared control error

se 63 (11.2-1)

b) Root of the mean squared change of the manipulated variable (manipu-


lation effort)
M
su Vu2(k) =V-1- L tm 2 (k)
M+1 k=O
(11.2-2)

where l'lu(k) u(k) - U (co) •

c) Value of the quadratic performance criterion of Eq. (5.2-6) for


r = 0.1 and 0.25.
11.3 Comparison of the Performance of the Control Algorithms 219

d) Overshoot
Ym = Ymax(k) - w(k) (11.2-3)

e) Control settling time k 3 for le(kll:;; 0.03Iw(oo) I


or le(kll $ 0.03!v(oo) I

f) Manipulated variable u(O) for a step change w(k)

g) Sensitivity with respect to an inexact process model

10 1 = 0 6 lao (11.2-4)
y g
This value is described at the end of section 11.3.

For judging the computational effort between two samples the following
measures will be used:

h) Number of additions and subtractions: £add


Number of multiplications and divisions: £mult
Number of operations: £E = £add + £mult•

11.3 Comparison of the Performance of the Control


Algorithms
The control algorithms considered in this chapter have been designed
for a step change of the setpoint w(k). This also corresponds to a step
change of the disturbance n(k) at the output of the process. The resul-
ting frequency spectrum of this input contains high frequency components
compared with the dynamics of the process. The control behaviour will
also be described for a step change of the disturbance v(k) at the pro-
cess input. The weighting r of the manipulated variable has to be dis-
cussed separately. For the parameter optimized control algorithms 2PC-2
and 3PC-3, r = 0 was set in the quadratic criterion Eq. (5.2-6) to pro-
vide for relatively large manipulated variables. For control algorithms
2PC-1 and 3PC-2 the weighting r = 0 was also assumed. Here the first
manipulated variable u{O) was chosen such that u(1) ~ u(O) (c.f. Eq.
(5.2-31)). This gives relatively small values of the manipulated vari-
able. For the design of state controllers the weighting matrix g was
chosen to give criterion Eq. (5.2-6). Furthermore, ~ = r was chosen to
give the same manipulated variable u(O) as for control algorithms 3PC-3
and 3PC-2, so that a direct comparison is possible. The manipulated
220 11. Different Controllers for Deterministic Disturbances

variable u(O) for the deadbeat control algorithm DB(v+1) was set to
give u(O) = u(1) in order to minimize the manipulated variable changes.
The characteristic values of all algorithms are summarized in Table.
11.3.1.

The time response of the controlled and the manipulated variables is


shown in Figures 11.3.1 and 11.3.2 for the three main important control
algorithms and with processes II and III for step changes of the set-
point. Figure 11.3.3 shows a graphical representation of the characte-
ristic values given in section 11.2 for the processes II (C) and III
(o) and stepwise setpoint changes w(k) (lefthand side) or stepwise dis-
turbance changes v(k) (righthand side). These figures show several pro-
perties of the single control algorithms (for these processes under
consideration) •

a) Behaviour for setpoint changes w(k)

For step changes of the setpoint (the case for which the design has
been made) the most important results are summarized below:

Process III (lowpass-behaviour)

3PC-3 (PID-behaviour)
Choosing r = 0 results in a large u(O) and a relatively weakly damped
behaviour. The rms control error Se is relatively large. The overshoot
ym and the settling time k 3 have average values.

3PC-2 (PID-behaviour, with a prescribed manipulated variable)


Prescribing the manipulated variable u(O) to be a relatively small va-
lue leads compared with 3PC-3 to much more damped behaviour, somewhat
larger Se for much smaller Su' smaller ym and somewhat smaller k 3 .

2PC-2 (PI-behaviour)
In comparison to 3PC-3 this controller gives a somewhat larger Se to-
gether with smaller Su' much smaller u(O), larger ym and larger k 3 ,
somewhat smaller computational effort tE.

2PC-1 (PI-behaviour, with prescribed manipulated variable)


Compared with 2PC-2 u(O) was chosen larger, resulting in a somewhat lar-
ger se and larger su' larger ym and larger k 3 . This shows inferior per-
formance compared with 2PC-2.
11.3 Comparison of the Performance of the Control Algorithms 221

Table 11.3.1 Parameters of the investigated control algorithms

Ul
,._, P R 0 C E s s II P R 0 c E S S III
Q)
.j.J
Q)
I
s
Ill
C 0 NT R 0 L A L G0 R I T H M
,._,
Ill
Q., 2PC-1 2PC-2 3PC-2 3PC-3 2PC-1 2PC-2 3PC-2 3PC-3

qo 2.00 1. 364 2.00 3.485 2.00 1. 615 2.00 4.562

q1 -1.886 -1.229 -2.596 -5.433 -1.802 -1.405 -2.400 -7.200

q2 0 0 0. 753 2.150 0 0 0. 649 3.033

K 2.00 1. 364 1. 24 7 1. 335 2.00 1. 615 1. 351 1. 534

CD 0.0 0.0 0.604 1. 610 0.0 0.0 0. 480 1. 977

CI 0.057 0.099 0.126 0. 151 0.099 0.129 0.184 0.257

DB (v+1) DB(v) DB (v+1) DB(v)

qo 5.840 14.084 3.810 9.523

q1 -0.078 -20.070 -0.001 -14.285

q2 -8.851 6.985 -5.884 6.714

q3 4.089 - 3.647 - 0.952

q4 - - 0. 571 -
p1 -0.595 - 1. 436 0 0

p2 0. 169 2.436 0.247 0.619

p3 1.426 - 0.554 0.457

p4 - - 0.244 - 0.076

P5 - - -0.046 -

sc ( 1) sc (2) sc (1) sc (2)

k1 4. 157 2.398 4.828 2.526

k2 3. 441 1. 983 5.029 2.445

k3 1.0 1.0 4.475 2.097

k4 - - 0.532 0.263

k5 - - 1. 532 1 . 263
222 11. Diffe rent Contro llers for Determi n ist i c Disturba nces

PROCESS I I
y
.
y
1.0 ---- ............................ lO -------.................................

.
5,?l
01:
!b
10 20 k 0}· 10 20 k

3 PC-3 3PC-2

tO .. . ::
10 20 k 10 20 k

....,,......................
y y
1.0 ____
10 ----;
........................ .

.. 10 20 k
0
10 20 k

~b
sc (2)
50~ sc ( 1)

~-=--= 10 20 k 10 20 k
y
y
10 ............................. 1.0 __.........................
0· 0·
10 20 k 10 20 k

1.0 1.0

14.0 14.0
u u
DB( v ) DB ( v+ 1)
10.0 10.0

5.0 5.0

tO 10 .
0 0
10 20 k 10 20 k
·2.5 -2.5

F igure 1 1. 3 .1 Ti me respon ses of t h e con trolle d a n d manipul a ted var iabl e


for different control algorithms and process II (nonmini-
mum p hase behaviour)
11.3 Comp arison of the Performa nce of th e Control Algo rithms 2 23

PROCESS III

y y

.
1.0 ----~~·;;•'~···· · ··· ·· ••••• 1.0 --- .·~!....._ ................
0 •
.
01 10 20 k I 10 20 k

~0~ 3 PC -2

7ak
3 PC-3

~-- ~
0 10 20 k 0 10
'
20 k

y y
1.0 .
---·~! ... ,............... _ lO - - - . . . . . .. . . . . .. . . . . . . .. . . . . . &

o-r· o-r ~ 10 20 k

5Dl
k

sob
10 20
sc ( 1) SC(2)

1~:-- ~:= ' .k


0 10 20 k 0 10 20

y y
1.0 __ ........................
, 1.0 ___ .. .......................
o.,.,___,_____
0· ·· - -. . - - - - --
10 20 k 10 20 k

10.0 10.0
u DB( v ) u DB ( v +1 )

5.0 5.0

10 10 . fL-- -
0-H1f--_,_ __ -
_-- __
- 0--k
O-H+- -1-0- -2 10 20 k

-50 -5.0

Figure 11.3.2 Time r esponses of the controlled and manipulated v a riables


of v a rious control algorithms for proce s s III (low-pass
pro ce s s )
224 11. Different Controllers for Deterministic Disturbances

w-__r- Se v:__r-
Se
0.4 0.4
• •. • • • •I • •
• .. • • • .
0.3 0.3

•.
I I I
I
I

•...
I I
0.2 0.2
0. 1 0.1
I
2.0
Su
1.5
• 2.0
Su
1.5

••
I

1.0 1.0


I
05 0.5


••
Seu
I
r= 0.1 co Seu r = 0.1 co
r=025 •• r =0.25 ••

••
0.6 0.6

t •
I I
0.4 0.4


•l
I

0.2 t t f t
14
12
•I
~(OM
8 • PROCESS II c
6


I
4 PROCESS III 0
2

Figure 11.3.3 a) Characteristic values of the control behaviour of


different control algorithms
11 . 3 Comparison of the Performance of the Control Algorithms 2 25

W :~ V: _r-
Ym fws Ym
0.6
•• 0.6
•... • 1:
I
$I

•• "' ~

•I •*
0.5 0.5 I
~

0.4 0.4
Q3 + 0.3
I ...
0.2
0.1
•• I 0.2
0.1

60
k3
• 60
k3 •
... •...
40 40

• •... t •... •
I .

• •...
30 30
I
20 20 I

i
I
10 10

2PC-1 2.P C- 1 3 I'C- 2 DB( v +1 SC( 1 l


2PC-2 2PC-2 3PC-3 DB( v ) SC(2)
IAoo
16
• ...
••
I I
12
8 ~NDER
T
4 PROCESS II 0

l..u.r PROCESS III


...I ...I
0

.
20
16
12
8
...
4

2PC-2 3PC- 3

Figure 11.3.3 b) Characteristic values of the control behavi our of


different control algorithms
226 11. Different Controllers for Deterministic Disturbances

r =0.1 [)O
• •
• •I
2.5
510
eu r= 0.25 • • I e2
2.0

••
0.8

•* * •• •• •
I
•• ..• • • •I •...
• • *:t
4I
t
0.6 1.5


I I

•• •
I

t I I
I ~-
0.4 _._.::: 1.0

0.2 0.5

2PC-1 3PC - 2 DB ( v + 1) sc ( 1)
lr 2PC-2 3PC-3 DB ( v ) SC ( 2 )
34
32
28
24
20
16 ...
12 • +
8
4

3.0
e,
2.5
+
20

*: •
1.5
I
1.0 t
0.5

2PC-1 3PC-2 DB ( v +1 ) SC ( 1 )
2PC- 2 3PC - 3 DB ( v ) SC ( 2 )

Figure 11.3.3 c) Characteristic values of the control behaviour of


different control algorithms
11.3 Comparison of the Performance of the Control Algorithms 227

SC-1 (state controller for r = 0.043)


As u(O) is about the same as for 3PC-3 one can immediately compare with
that control algorithm. From Fig. 11.3.2 it results in a better damped
behaviour. Se and Su are a little bit smaller, ym is smaller and k 3 is
much smaller. The computational effort tz, however, is six times larger.

SC-2 (state controller for r = 0.18)


u(O) is the same as for 3PC-2. Compared with that controller a more
damped behaviour results. Furthermore Se and Su are about the same,
whereas ym and k 3 are essentially smaller. Compared with SC-1 Se is lar-
ger, Su is smaller, ym is smaller and k 3 is about the same.

DB(v) (deadbeat controller)


The steady state for k = 4 is reached by taking into account a very lar-
ge u(O) and large changes 6u(k). Compared with all other control algo-
rithms it gives smallest Se for largest Su, largest u(O), relatively
small Ym and smallest k 3 . The computational effort between samples is
about twice that for 3PC-3.

DB(v+1) (deadbeat controller, with precribed manipulated variable)


By increasing the settling time by one sampling unit to k = 5 one can
decrease the first manipulated variable u(O) and the following 6u(k)
considerably compared with DB(v). Se becomes somewhat larger for much
smaller Su; ym and k 3 become larger. The computational effort tz increa-
ses because of the increase in order by one. Compared with 3PC-3 it
gives about the same Se and larger Su, but smaller u(O), smaller ym,
much smaller k 3 and three times of the computational effort tz.

A general evaluation of the control behaviour of all control algorithms


is possible by using the quadratic performance criterion Seu· This per-
formance criterion expresses both the behaviour of the controlled and
the manipulated variable, weighted by r. For a smaller weighting of the
manipulated variable r = 0.1 the best results can be obtained with 3PC-2,
SC-2 and SC-1 and for larger weighting r = 0.25 with 3PC-2, SC-2 and
2PC-2. The parameter-optimized control algorithms 3PC-3 and 3PC-2 differ
only little from the state control algorithms SC-1 and SC-2.

Process II (nonminimum phase behaviour)


From Figure 11.3.1 it can be seen that the deadbeat algorithm DB{v) is
unsuitable here, as the manipulated variable changes excessively. An
increase of the manipulated effort Su does not lead as it did for pro-
228 11. Different Controllers for Deterministic Disturbances

cess III (and for all control algorithms) to a decrease of the mean
squared control error Se. Too much manipulated effort or too large u(O)
lead to an inferior control performance. Smallest values of Se for still
small Su' i.e. relatively good Seu' result for r = 0.1 and 0.25 from
controllers 2PC-2, 3PC-2, 3PC-3, SC-1 and SC-2. Both a small undershoot
and a small overshoot can be obtained using 3PC-2.

b) Behaviour for the disturbance v(k) at the process input

Step changes of the disturbance variable v(k) lead to approximately the


same characteristics with processes I and II.

Better behaviour: DB(v), DB(v+1), SC-1, 3PC-3


(i.e. the controllers with the highest gain)
Worse behaviour: 2PC-2, 2PC-1, 3PC-2

The differencies of the characteristic values are, however, smaller


than for the setpoint changes as the step changes of the disturbance
v(k) excite the higher frequencies of the controlled variable less than
the disturbance w(k) for the design case. To evaluate the behaviour for
both disturbances, w(k) and v(k), the following measure has been calcu-
lated:

5 eu = (Seu)w + (Seu)v.

A good control behaviour on average can be obtained using

Process III: r = 0.1 : SC-1, SC-2, 3PC-2, 3PC-3;


r = 0.25: SC-2, 3PC-2, SC-1, 2PC-2.
Process I I r = 0.1 : SC-1, 3PC-3, 3PC-2, SC-2;
r = 0.25: SC-1, SC-2, 3PC-2, 3PC-3, 2PC-2.

c) Sensitivity to inexact process models

In most cases the process model is just an approximation to the real


process, so control algorithms cannot be judged without considering
their sensitivity to errors in the process model. In both theoretical-
ly and experimentally (identified) obtained process models, errors in
individual parameters rarely occur independently, so that the sensiti-
vity to single parameters can only lead to incomplete conclusions.

In the following, the sensitivity of the considered control algorithms


is treated for inexactly identified process models. The processes II
and III were therefore identified several times with four different
on-line parameter estimation methods for two different disturbance/
11.3 Comparison of the Performance of the Control Algorithms 229

signal ratios n = 0.1 and 0.2 and for three different identification
times [3.13]. The synthesis of the control algorithms was then perform-
ed with these identified models, and the resulting control variable
y(k) using the inexact process models and the resulting controlled va-
riable y(k) using the exact process model (the real process) were cal-
culated. Then the error caused by the inexactly identified process mo-
del is

b.y(k) y(k) - y(k). (11.3-1)

Hence the rms control error

N 2 N 2 ] 1/2
[ L b.y (k) / L y 0 (k) (11.3-2)
k=O k=O

can be determined. y 0 (k) is the controlled variable for the exact mo-
del with its matched control algorithm. The error of the controlled va-
riable oy is considered as a function of the error of the impulse res-
ponse o of the process model

12
,; [ Ag 2 (k) I g 2 (k) ] ' (11.3-3)

b.g(k) g(k) g(k). ( 11.3-4)


identified exact

To reduce the influence of statistical fluctuations, the standard devi-


ations a 0 and a 0 of these errors were calculated in each case for
five iden~ificati6n runs. For the control algorithm 3PC-3 it was shown
in [8.5] that for both investigated processes for 0 ~ ao ~ 0.2 there
is approximately linear dependence a 0 = f(a 0 ) . This al~o occurred for
all other control algorithms. A direcl relatignship between the single
parameter errors of the models could not be seen. The errors of the
weighting function in which the input/output behaviour of the process
is expressed can therefore be used to show a relationship between the
inexact process model and differencies in the control performance of
the loop. Then values of the 'model sensitivity of the loop'

E
1
= a8 I a8 (11.3-5)
y g
can be determined. The smaller E 1 the smaller the influence of the in-
exact model on the behaviour of the closed loop. Figure 11.3.3 c) shows
that the sensitivity of process II is generally larger than for process
III. The smallest sensitivity results for both processes with control
algorithms 3PC-2 and SC-2, the largest sensitivity being with 2PC-1.
A large sensitivity is shown by the deadbeat controller DB(v) for pro-
230 11. Different Controllers for Deterministic Disturbances

cess II. This can be explained by the nonminimum phase behaviour of


this process. For process III, however, DB(v) has about the same sensi-
tivity as for 3PC-3. The deadbeat controller DB(v+1) is for both pro-
cesses less sensitive than DB(v).

d) Computational effort between samples

To evaluate the computational effort during the operation phase of a


control algorithm the following values are used:
£E = £add + £mult: number of the calculations
£add: number of the additions and subtractions
£mult: number of the multiplications and divisions

Table 11.3.2 shows that the parameter-optimized control algorithms have


the smallest, the state control algorithms the highest and the deadbeat
control algorithms an average computational effort between two samples.

e) Synthesis effort required by the control algorithms

Synthesis effort depends on the storage and the computation time requi-
red for the design of the control algorithms. Both depend on the soft-
ware system (including mathematical routines) of the digital computer
used. The values given in Table 11.3.2 are for a process computer Hew-
lett-Packard HP 2100 A with 24K core memory, an external disk storage
and hardware floating point arithmetic. The synthesis computation time
is particularly small with deadbeat controllers, medium for state con-
trollers and greatest for parameter optimized controllers. Note that
the parameter optimization used the Hooke-Jeeves method, which requi-
res relatively little storage; the stopping rule was \~q\ = 0.01. The
storage required for synthesis is like that of the synthesis computa-
tion time - smallest for the deadbeat controller, medium for the state
controller and greatest for the parameter optimized controller.

Table 11.3.2 Computational effort between samples and the computational


effort for synthesizing various control algorithms, pro-
cess III and process computer HP 2100 A

Control algorithm 3PC-2 3PC-3 sc DB(v) DB (v+1)

Computation time 6 6 34 14 18
between samples £E
Synthesis compu- 20 ... 30 40 ... 60 1 0,004 0,004
tation time [s]
Synthesis storage 1881 1881 342
1996 342
[words]
11.3 Comparison of the Performance of the Control Algorithms 231

f) Relationship between control performance and manipulation effort

Figure 11.3.4 shows the control performance Seas a function of there-


quired manipulation effort Su for different control algorithms in the
design case with step changes of the setpoint. If the first-order con-
trol algorithms 2PC-1 and 2PC-2 are excluded then a direct relationship
between Se and Su can be observed for the other control algorithms of
second and higher orders. For process III an increase of Su leads to a
decrease of se in the following line

3PC-2 1st group


SC-2
} u(O) = 2.0

SC-1

}
2nd group
3PC-3
u(O) = 3.81 ••• 4.56
DB(v+1)
3rd group
DB(v)
} u(O) = 9.52.

Se
2 PC-1
•<>

2 PC-2

0.3 <>
'•lloc___ __ -•---- --- .,, _,A
PROCESS II 3 PC-2
3 PC-3

0

. . .........
<>• A
DB(v)
0.2 'C-0..6.
--------A DB (v+1) ...
PROCESS III
sc ( 1) c
0.1 SC(2) •
0
0 0.5 1.0 1.5 2.0 Su
Figure 11.3.4 Relationship between the control performanceS and the
manipulation effort S for the investigated co~trol al-
gorithms and processe~ II and III

Therefore groups can be associated with particular values of the ini-


tial manipulated variable u(O). It can further be seen that, starting
with the first group, small improvements of the control performance Se
are always obtained by increasing the manipulation effort Su. For pro-
cess II there is a small improvement of Se given by
232 11. Different Controllers for Deterministic Disturbances

3PC-2 1st group


SC-2 } u(O) = 2.0

}
3PC-3 2nd group
SC-1 u(O) = 3.44 3.49.

With an increasing manipulation effort Su, however, the control perfor-


mance becomes worse, first for DB(v+1) with u(O) = 5.84 and then for
DB(v) with u(O) = 14.09. It can further be seen that for the same Su
the first-order control algorithms 2PC-1 and 2PC-2 lead to an inferior
control performance Se for both processes compared with control algo-
rithms of second and higher order. For the same Su the control perfor-
mance for process II is worse than for process III. Figure 11.3.4 there-
fore shows that for control algorithms of second and higher order and
for other control algorithms there is a relationship between the rea-
chable control performance Se and the required manipulation effort Su.
However, this is true only for the design case in which there is a step
change in w(k) [8.5].

11.4 Comparison of the Dynamic Control Factor

Section 11.3 compared the closed-loop performance of different control


algorithms for step changes in the set point w(k) and in the process
input v(k). For stochastic disturbances n(k), chapter 13 shows the corr-
esponding simulation results using parameter optimized controllers. A
further comparison of various control algorithms for stochastic distur-
bances and step disturbances with regard to the application in adaptive
control algorithms is made in [2.22], section 26.2.

To evaluate the control performance for different input signal spectra


the dynamic control factor [5.14]

R ( z) (11.4-1)

is useful, as the closed loop response to an input signal is

y (z)
(11.4-2)
y(z) R(z)n(z)

and a disturbance v(k) passed through an arbitrary filter GPv(z)


n (z) /v(z) gives
11.4 Comparison of the Dynamic Control Factor 233

y ( z) = R ( z) GPv ( z) v ( z) = R ( z) n ( z) • (11.4-3)

Eq. (11.4-3) includes Eq. (11.4-2) with v(z) = n(z), GPv(z) = 1 or v(z)=
w(z) and GPv{z) = GR(z)GP(z). For deterministic disturbances the ampli-
tude density spectra are

ln(z) I = IGPv(z) I lv(z) I (11.4-4)

T iw
with z = e 0 and 0 ~ w ~ ws' where ws is the Shannon frequency ws=~/T 0
(see section 27.1). The amplitude density spectrum of the controlled va-
riable is therefore

I Y ( z) I = I R (z) I In ( z) I = I R (z) I I GPv ( z) I I v (z) I • (11.4-5)

For stationary stochastic disturbances with power density

(11.4-6)
T=-oo

where

R (T) = E{[n(k) - n)[n(k+T) - n)}


nn

is the autocovariance function

2 (11.4-7)
S (z) = IGp v (zll S (z)
nn vv

the power densitiy of y(k) becomes

Syy(z) = IR(z) 1 2 Snn{z) = IR(z) 12 IGPv(z) 1 2 Svv(z). (11.4-8)

The magnitude of the dynamic control factor IR(z) I or its squared value
IR(z) 1 2 indicate how much the amplitude or power spectra are reduced by
the control loop. Therefore in the following the dependence of IR(z) I
on the frequency w in the range 0 ~ w ~ ws is shown for different con-
trollers. The effect of different weighting of the manipulated variable
is also shown.

The dynamic control factor R(z)=y(z)/n(z) can be simply derived, also for
state controllers with observers,in the following way: A low-pass pro-
cess with several small time constants

~ (11.4-9)
u(s) (1+4.2s) (1+1s) (1+0.9s) (1+0.6s) (1+0.55s) 2

was simulated on an analog computer and identified by a process compu-


ter after perturbation by a pseudorandom binary signal. The method of
234 11. Different Controllers for Deterministic Disturbances

"correlation and least squares parameter estimation" and an order-search


program [3.13], [29.1], [29.2], [29.3] led to the transfer function, for
a sampling time of T0 = 2 sec, of

Yi& 0.0600z- 1 + 0.1617z- 2 + 0.0328z- 3


(11.4-10)
u(z) 1 - 0.9470z- 1 + 0.2164z- 2 - 0.0005z 3 •

Based on this model various control algorithms were then designed with
the aid of the same process computer for step changes in the set point
(see chapter 29). IR(z) I was then determined experimentally through mea-
surement of the frequency response of the closed loop which consisted
of the analog computer and the process computer, leading to the results
described below. The dynamic control factor can, as is well-known, be
divided into three main regions [5.14] (c.f. Fig. 11.4.1):

Region I 0 ~ w < wi ~ 0 ~ IRI < 1 (low frequencies)


Disturbances n(k) are reduced.

Region II WI ~ w < wii ~ 1 < I Rl (medium frequencies)


Resonance effect. Disturbances n(k) are amplified.

Region III: wii ~ w < ws ~ IRI ~ 1 (high frequencies)


Disturbances n(k) are unaffected.

IRI

Ws w
f-- I --+-- IT--t--- ill -i

Figure 11.4.1 Frequency regions of the dynamic control factor IR(z) I.


ws = n/T 0 . wres resonance frequency.

The effectiveness of the closed loop is therefore restricted to region


I. Invariably, parameter changes of a controller are such that a de-
crease of IRI in one region is followed by an increase in another re-
gion [5.14]. The graph of the magnitude of the dynamic control factor
11.4 Comparison of the Dynamic Control Factor 235

is shown for different controllers in Fig. 11.4.2. The controller para-


meters are summarized in Table 11.4.1 and the effect of a higher weight-
ing on the manipulated variable is shown in Table 11.4.2. It can be seen
that IRI increases in region I, and therefore disturbances at low fre-
quencies are less damped and the control performance becomes worse. The
same happens in the ascending region at low frequencies in region II.
However, in the descending part in region II, beyond the resonance peak
(w > wres) IRI decreases for all controllers and the control performance
is correspondingly improved. There are unsignificantly small changes in
region III. For all controllers it can be concluded that a higher weight
on the manipulated variable or a smaller u(O) decreases the resonance
peak and moves it to a lower frequency. To appreciate the variation in
the dynamic control factor for a state controller the reader is referred
to Eqs. (10.1-11) - (10.1-14) and the corresponding remarks and referen-
ces.

This discussion again shows that evaluation of control behaviour de-


pends significantly on the frequency spectrum of the exciting signals,
especially from Eq. (11.4-4). Only if very low frequency signals act on
the closed loop can a small value of r or a large value of u(O) be cho-
sen. Components near the resonance frequency require a large r or a
small u(O). If medium or high frequency signals are acting which are not
specially filtered (see section 27.1), the deadbeat controller DB(v)
should not be applied (Fig. 11.4.2 c)). For the other controllers r can
be chosen to be larger or u(O) smaller.

Fig. 11.4.3 shows the dynamic control factor for the different controll-
ers. The weight on the manipulated variable was chosen such that after
a step change in the set point the manipulated variable u(O) is about
the same, i.e. u(O) ~ 1.93 ... 2.41. IR(z) I does not differ very much
for 3PC-3, DB(v+1) and SC. Only 2PC-2 shows a significantly higher re-
sonance peak at lower frequencies. SC is best in region I, DB(v+1) in
region II, and in region III SC is best again.

The dynamic control factor is not only useful for evaluating control
performance as a function of the disturbance signal spectrum. Eq.
(10.1-10) shows that the dynamic control factor is identical to the
sensitivity function S(~n'z) of the closed loop which determines the
effect of changes in the process behaviour. Small IR(z) I not only means
a good control performance but also a small sensitivity (see chapter 10).
236 11. Different Controllers for Deterministic Disturbances

Ws

1,0 1,5 W( 1/sl

a) Parameter-optimized b) Parameter-optimized
controller 2PC-2 (PI) c o ntro lle r 3PC- 3 (PID)

IRI
20

d) St ate c o n troller
with o bserver

Figure 11.4.2 Graph of the magnitude of the dynamic control factor for
different controllers and different weightings on the
manipulated variable or different u(O).
11.4 Comparison of the Dynamic Control Factor 237

IRI
2 PC-2

0 0.5

Figure 11.4.3 Magnitude of the dynamic control factor for four


different controllers
2PC-2: u(O) 1.93
3PC-3: u(O) 2.41
DB(v+1): u(O) 2.23
SC: u(O) 2.38
238 11. Different Controllers for Deterministic Disturbances

Table 11.4.1 Controller parameters for different dynamic control factors

2PC-2 3PC-3
Controller
parameter r=O r=0.1 r=O r=0.1

q 0 =u(O) 1 . 9 3 36 1 .5781 3. 6072 2.4141


-1.5586 -1.2266 -4.8633 -2.9219
q1
q2 - - 1.9219 1.0000
K 1.9336 1. 5781 1.6957 1.4141

CD - - 1. 14 75 0. 7072

CI 0.1939 0.2225 0.3992 o. 3481

o. 35 0. 33 0.55 0.60
wres

Controller
sc
Controller DB(v) DB (v+1)
parameter parameter r=0.03 r=0.05

q 0 =u(Ol 3. 9292 2.2323 k1 2.6989 2.3466

q1 -3.7210 -0.4171 k2 3.1270 2.5798

q2 0. 8502 -1.1240 k3 2.3777 1.9358

q3 -0.0020 0. 3660 k4 1 .0000 1.0000

q4 - -0.0009 u(O) 2.3777 1. 9358


Po 1 .0000 1 .0000 0.50
(!) 0.57
res
p1 -0.2359 -0.1340
p2 -0.6353 -0.4628
p3 -0.1288 -0.3475
p4 - -0.0556

(!) 0. 73 0.58
res

Table 11.4.2 Change of IR(z) I for different weights on the manipulated


variable

I R (z) \ becomes

con- change region I region II region III


troller at design os:w<wr w s:w<w wress:w<wii wii,;ws:ws
I res
2PC-2 r=O _,. 0.1 greater greater smaller -
3PC-3 r=O -+ 0. 1 greater greater/smaller smaller -
DB v -+ v+1 greater greater smaller -
sc r-0.03 greater greater smaller -
-+0.05
11.5 Conclusion for the Application of the Control Algorithms 239

11.5 Conclusion for the Application of the Control


Algorithms
The most important properties of the studied control algorithms are
summarized in Table 11.5.1 for the given test processes with proportion-
al action, low pass behaviour and nonminimum phase behaviour.

Table 11.5.1 Evaluation of the most important properties of the control


algorithms
1: "good" "small" 2: "medium" "medium"
3: "bad" "large"

Control Sensitivity Computational Synthesis


Control effort between effort
behaviour to inexact
algorithm
process mod. samples
process process
III II III II
2PC-1 2 2 2 3 1 3
PI 2 2 2 3
2PC-2 2 1
3PC-3 1 1 2 2 1 3
PID 3PC-2 1 1 1 1 1 3
Dead- DB(V) 3 3 2 3 2 1
beat DB(v+1) 2 3 2 3 2 1
State- SC-1 1 1 2 2 2 2
contr. SC-2 1 1 1 1 2 2

Parameter-optimized control algorithms

The three-parameter-control algorithms with PID-behaviour are better


than the two-parameter-control algorithms with PI-behaviour, as they
give better control for smaller manipulation effort, quicker settling
with smaller overshoot and smaller sensitivity to inexact process mo-
dels. Parameter optimized control algorithms of low order are charac-
terized by an especially small computing time between samples, but the
synthesis effort is relatively large in the numerical parameter opti-
mization. However, a design with smaller computational effort is des-
cribed in section 25.2.3. Unlike all other control algorithms, simple
tuning rules can be applied to low-order parameter-optimized control
algorithms. Parameter-optimized control algorithms are therefore re-
commended for the following cases:
- Many control loops,
- Controller synthesis performed only once or rarely,
- Feedforward adaptation of controller parameters as a
function of the operating point.
These controllers are therefore suitable for a wide range of processes.
240 11. Different Controllers for Deterministic Disturbances

State-control algorithms

The attainable performance of state-control algorithms differs little


from that of parameter-optimized algorithms for the considered test
processes. For the same initial manipulated variable u(O), state-con-
trol algorithms result in somewhat more damping of the controlled va-
riable and in a smaller settling time. The computational effort between
samples is higher for processes beginning with second order, but com-
puter aided design is simpler. Therefore state-control algorithms are
preferred in the following cases:
- Few control loops,
- Controller synthesis performed only once or rarely,
- Unstable processes which need a feedback of many
state variables for stabilization.

Deadbeat-control algorithms

Because of large changes in the manipulated variable, the deadbeat-con-


trol algorithms of order v = m+d cannot be recommended for small sample
times. If the sample time is large enough, however, deadbeat-control
algorithms of order v+1 are better, as they result in smaller changes
of the manipulated variable. The settling time and overshoot are small-
er for deadbeat-control algorithms than for the parameter-optimized
algorithms. The computation time between samples is about three times
larger for the fourth-order process. A main advantage of the deadbeab-
control algorithms is the computational simplicity of their synthesis.
Therefore the following applications can be recommended:
- Asymptotically stable processes,
- Controller synthesis to be repeated many times (adaptive control) .
These statements are valid for the test processes investigated here,
but they can be generalized for similar linear processes without ex-
cessive error.

Chapter 9 has already briefly discussed proportional action processes


with large deadtime, and further results comparing various algorithms
for stochastic disturbances are given in chapter 13 and 25. It is diffi-
cult to choose a control algorithm for a multivariable process, as mul-
tivariable processes can differ considerably. The advantages and dis-
advantages of parameter-optimized-and state-control algorithms must be
investigated in each special case (see part E) •
C Control Systems for Stochastic
Disturbances
12. Stochastic Control Systems

12.1 Preliminary Remarks


The controllers treated in the preceding chapters were designed for de-
terministic disturbances, that means for signals which are exactly
known a priori and can be described analytically. Real disturbances,
however, are mostly stochastic signals which cannot be exactly describ-
ed nor predicted. The deterministic signals used for the design of con-
trol systems are often 'proxies' of real signals. These proxies have
simple shapes to reduce the design complexity and to allow for easy
interpretation of the control system output. The resulting control
systems are then optimal only for the chosen proxy signal and the appli-
ed criterion. For all other signals the control system is sub-optimal;
however, this is not very important in most cases. If the demands on
the control performance increase, the controllers must be matched not
only to the dynamic behaviour of the processes but also to the distur-
bances. To this the theory of stochastic signals has much to contribute.

In section 12.2 the recursive mathematical modeZs of stochastic signaZs,


required by the following chapters, are briefly treated. Then three im-
portant controllers for stochastic disturbances are considered. All the
parameter-optimizedcontroZZers of chapter 5 can also be matched to sto-
chastic disturbances as shown in chapter 13. Then chapter 14 gives a
detailed treatment of various minimum variance controZZers which result
from the minimization of a quadratic performance criterion and which
are matched with an optimal structure both to the process to be con-
trolled and to the stochastic disturbances. Finally the state controZZ-
er, which also has an optimal structure, and which requires a state
variabZe fiZter for estimation of the stochastic state variables is
treated in chapter 15.

The theory of stochastic control systems is quite recent, and so far


the following books have been published on discrete time stochastic
control systems: [12.1 ], [12.2], [12.3], [12.4], [12.5], [8.3].
242 12. Stochastic Control Systems

12.2 Mathematical Models of Stochastic Signal Processes

This section presents some equations describing signal processes which


are required in the design of stochastic controllers and state variable
filters, However, a detailed introduction and derivation cannot be gi-
ven here, so the reader is referred to special publications, for example
on continuous stochastic signals [12.6], [12.7], [12.8], and discrete-
time stochastic signals [12.9], [12.10], [12.4], [3.13].

12.2.1 Basic Terms

We consider the discrete-time stochastic signal process

{x (k)}; k = 1,2, ••• ,N.

The statistic properties of stochastic signals are described by their


amplitude probability density and by all joint probability density
functions. If these probability densities are functions of time, the
stochastic signal is nonstationary. If the probability densities and
the joint probability densities are independent of a time shift the
signal is called stationary (in the narrow sense) • Stationary signals
are called ergodic if their ensemble-average can be replaced by their
time-average. A stationary ergodic signal can be described by its ex-
pectation (linear average value)

N
X E{x(k)} limN l:: x(k) (12.2-1)
N->-oo k=1

and by its autocorrelation function

1 N
E{x(k)x(k+-r)} limN l:: x(k)x(k+-r). (12.2-2)
N-+oo k=1

The autocorrelation function describes the intrinsic relationships of


a random signal. From the definition of the autocorrelation function
it is seen that the d.c. value of the signal influences its value. If
only deviations from the average are considered, one obtains the auto-
covariance function

E{[x(k)-xJ[x(k+-rl-xJ} (12.2-3)
12.2 Mathematical Models of Stochastic Signal Processes 243

For T 0 the variance of the signal is obtained as

N
2
a
X
lim z [x(k)-x] 2 . (12.2-4)
N->-oo N k=1

If the signal has a Gaussian amplitude distribution, it is completelv


described by its expectation and the autocovariance function. A stocha-
stic signal is stationary in the wide sense if x and Rxx (T) are inde-
pendent of time.

The relationship between different stationary stochastic signals x(k)


and y(k) can be described by the crosscorrelation function

N
cjJ (T) E{x(k)y(k+T)} lim Z x(k)y(k+T) (12.2-5)
xy N->oo N k=1

or by the erosscovariance function

R (T)
xy
= cov[x,y,t] E{x(kl-xJ[y(k+T)-yJ} = wxy (t)-x v.- (12.2-6)

Two different stochastic signals are called uncorrelated if

cov[x,y,t J = R
xy
(T) = 0. (12.2-7)

They are orthogonal if additionally x y o, which means that

<!J (T) = 0. ( 12 .2-8)


xy

For white noise, a current signal value is statistically independent of


all past values. It has no intrinsic relationship, and in the case of
Gaussian amplitude distribution it is completely described by the ave-
rage x and covariance function

cov[x,T] = a 2 6(t) (12.2-9)


X

where 6(T) is the Kronecker delta function

for T 0
6 (T) -- { 01
for It! + 0
(12 .2-10)

and a 2 is the variance of x(t). Hitherto only scalar stochastic signals


X
were considered. A vector stochastic signal of order n

T (12.2-11)
{~ (k)} = [x 1 (k) x 2 (k) ... xn (k) ]

contains n scalar signals. If they are stationary, their average is


(12.2-12)
244 12. Stochastic Control Systems

The relationship between two (scalar) components is described by the


aovarianae matrix

R (T) R (T) R (T)


x1x1 x1x2 x1xn
R (T) R (T) R (T)
x2x1 x2x2 x2xn (12.2-13)

R (T) R (T) R (T)


xnx1 xnx2 X X
n n

On the diagonal are the n autocovariance functions of the individual


scalar signals, and all other elements are crosscovariance functions.
Note that the covariance matrix is symmetric for T = 0.

Example 12.2.1: x 1 (k) and x 2 (k) are two different white random signals.
Then their covariance matrix is

cov[~,T=OJ

cov[~, TfO J = 0.
0
Covariance or correlation functions are nonparametric models of sto-
chastic signals; the next two sections describe parametric models of
stochastic signal processes.

12.2.2 Markov Signal Processes

A stochastic signal process is called a first-order Markov signaZ pro-


aess (Markov process) if its conditional probability density function
satisfies:

p[x(k) lx(k-1), x(k-2), ... , x(O) J = p[x(k) lx(k-1) J. (12.2-14)

The conditional probability for the event of value x(k) depends only
on the last value x(k-1) and not on any other past value. Therefore
a future value will only be influenced by the current value. This defi-
nition of a Markov signal process corresponds to a first-order scalar
difference equation

x(k+1) = a x(k) + f v(k) (12.2-15)


12.2 Mathematical Models of Stochastic Signal Processes 245

for which the future value x(k+1) depends only on the currsnt values of
both x(k) and v(k). If v(k) is a statistically independent signal (white
noise) then this difference equation generates a Markov process. How-
ever, if the scalar difference equation has an order greater than one,
for example satisfying

x(k+1) = a 1 x(k) + a 2x(k-1) + f v(k) (12.2-16)

then one can transform the process equation by replacing

x(k) x 1 (k)

x(k+1) x 1 (k+1) = x 2 (k) (12.2-17)

into a first-order vector difference equation

[
x 1 (k+1)] [0
[x 1(k)l +[OJ
f
v(k) (12.2-18)
x 2 (k+1) = a1 x 2 (k)

which becomes in general

~(k+1) =~ ~(k) + ! v(k). (12.2-19)

Here A and f are assumed to be constant. Then each element of ~(k+1)

depends only on the state ~(k) and on v(k), i.e. only on current va-
lues. ~(k+1) is then a first-order veetor Markov signal proeess. Sto-
chastic signals which depend on finite past values can always be des-
cribed by vector Markov processes by transforming into a first-order
vector difference equation. Therefore a wide class of stochastic sig-
nals can be represented by vector Markov signal processes in a parame-
tric model, as shown in Figure 12.2.1. If the parameters of~ and!
are constant and v = 0, then the signal is stationary. Nonstationary
Markov signals result from ~(k), !(k) or v(k) which vary.

v (k)

Figure 12.2.1 Model of a vector Markov signal x(k).


v(k): statistic independent random variable.
246 12. Stochastic Control Systems

The covariance matrix ~(k+1) of the signal ~(k+1)

cov[~(k+1) ,T=OJ E{[~(k+1) - ~(k+1) J[~(k+1+T) - x(k+1+T) JT}


~(k+1) (12.2-20)

is derived for a Markov signal with constant parameters

~(k+1) = ~ ~(k) + F ~(k). ( 12. 2-21)

The following values are known

E{~(k)} = ~
for T 0
cov[~ (k) , T J ={ ~ for T +0 (12.2-22)
E{~(O)} = ~(0)

- -
cov[x(O) ,T=OJ = X(O)
E{[~(k)-~J[~(k)-~J
= } = 0.
T

Taking the expectation of Eq. (12.2-21) gives

~(k+1) =A ~(k) + F v. (12.2-23)

Subtracting from Eq. (12.2-21) and Eq. (12.2-23) yields

~(k+1) - ~(k+1) = ~[~(k) - ~(k) J + E:.[~(k) - ~J. (12.2-24)

Eq. (12.2-24) is now multiplied with its transpose from the right and
the expectation is taken. Then the covariance matrix obeys

~(k+1) = ~ ~(k) ~T + F V FT. (12.2-25)

If the eigenvalues of the characteristic equation

det[z I - ~J = 0

are within the unit circle of the z-plane, and if ~ and F are constant
matrices, then for k+oo a stationary signal process with covariance ma-
trix X is obtained which can be recursively calculated using Eq. (12.2-~)

giving

(12.2-26)

In the following, it will be required also that the expectation of a


quadratic term of the form ~T(k)g ~(k), where ~(k) is a Markov process
with covariance matrix ~, and both X and g are nonnegative definite
matrices. Using
12.2 Mathematical Models of Stochastic Signal Processes 247

T T
~ g ~ = tr[g ~ ~ J (12.2-27)

where the trace operator tr produces the sum of the diagonal elements
it follows, for ~(k) =Q
E{~T (k)_Q ~(k)} E{tr[g ~(k)~T(k) J}
tr[g ~]. (12.2-28)

If ~(k) = ~ f 0 accordingly

-T
~ g ~ + tr [g ~ J . (12.2-29)

12.2.3 Scalar Stochastic Difference Equations

A stochastic difference equation with constant parameters is

n(k) + c 1 n(k-1) + ... + cmn(k-m)

= d 0 v(k) + d 1 v(k-1) + ... + dmv(k-m). (12.2-30)

Here n(k) is the output of a 'virtual' filter with z-transfer function

n (z)
(12.2-31)
v (z)

and v(k) is a white noise with expectation v = 0 and variance o 2 1.


v
Stochastic difference equations represent a stochastic signal n(k) as
a function of a statistically independent signal v(k). The scalar sto-
chastic difference equation (12.2-30) results from the vector difference
equation (12.2-19) by

A
0
-c
m-1

~T (k) = [x 1 (k) x 2 (k) ... xm (k)]


T
n(k) = £ ~(k) + d 0 v(k)
(12.2-32)

One distinguishes the autoregressive process


do
n (z) = _1 v (z) (12.2-33)
c (z )
248 12. Stochastic Control Systems

the moving average process


-1 (12.2-34)
n(z) = D(z )v(z)

and the mixed autoregressive-moving average process of equation (12.2-31).


If the roots of C(z- 1 ) lie within the unit circle of the z-plane these
processes are stationary, but if roots are allowed to lie on the unit
circle, for example

n(z) (12.2-35)
P = 1 ,21 • • •
v(z)

then nonstationary processes can also be described. For further details


see for example [12.4], [12.9], [12.10].
13. Parameter-optimized Controllers
for Stochastic Disturbances

The parameter-optimized control algorithms given in chapter 5 can be


modified to include stochastic disturbance signals n(k) by using the
quadratic performance criterion

M
l: [e 2 (k) + rt~u 2 (k) ] ( 13-1)
k=O

if the disturbance signals are known. When using a process computer,


the stochastic noise can first be stored and then used in the optimi-
zation of controller parameters. If the disturbance is stationary, and
if it has been measured and stored for a sufficiently long time, it can
then be assumed that the designed controller is optimal also for future
noise and a mathematical noise model is not necessary for parameter
optimization.

In the following some simulation results are presented which show how
the optimized controller parameters change compared with parameters
obtained for step changes of the disturbances and for test processes
II and III. A three-parameter-control-algorithm
-1 -2
qo+q1z +q2z
( 13-2)
1-z- 1

is used as in Eq. (5.2-10). A stochastic disturbance v(k), as in Fig.


5.2.1, acts on the input of the process, and is considered to be a nor-
mally distributed discrete-time white noise with

E{v(k)} = 0 ( 1 3-3)

and standard deviation

0.1. ( 13-4)

Then we have n(z) = GP(z)v(z). For this disturbance the controller pa-
rameters were determined by minimization of the control performance
criterion Eq. (13-1) forM= 240, r = 0 and using the Fletcher-Powell
method. Table 13.1 gives the resulting controller parameters, the qua-
N
Table 13.1 Controller parameters, control performance and manipulation effort for stochastic Ul
0
disturbances v(k)
Process II Process III

S + Min S + Min S -+ Min I S e' _r ->- Min w


T = 4 secJ e,stoch e,~ e,stoch ' PC-2
T 0 =4sec j 3 PC 3 1 3 PC-2 1 3 PC-3 3
l-- 'tj
O 3 PC-3 3 PC-2 3 PC-3 I 3 PC-2 p;
>1
p;
0.477 1.750 2.332 1.750 3.966 1 2. 5oo
qo qo
~
q1 -0.512 -3.010 -3.076 -2.039 q1 -7.171 j-7.160 1-3.320 rT
CD
0.014 1. 105 0. 591 3. 1 81 1 1 .o97 >1
q2 1 . 239 q2 I 3.030 I
0
K 0.463 0.511 1.227 1 . 1 59 K 0.785 1. 045 1 1.519 I 1 • 403 'd
I rT
1 . 39 2 1 . 994 0.783 f-'·
CD 0.031 2.425 o. 901 0. 511 CD 4.051 sf-'·
-0.045 -0.041 0.095 0.261 -0.030 -0.026 0.275 0. 1 98 N
CI CI CD
0.0346 0.037 0.0435 0.0411 0.0216 0.0213 0.0245 0.0249 0..
se se ()
su 0.0207 0.0604 0.0786 0.0595 su 0.0572 0.0361 0.0673 0.0438 0
::J
K o. 903 0. 966 1 . 13 1.08 K 0. 70 0.71 0.82 0.83 rT
>1
0
f-'
f-'
CD
->- Min ->- Min >1
8 e,stoch _,. Min s e, ...r- 5 e,stoch ->- Min s e,...r- en
T 0 =8sec T0 =8sec
H1
3 PC-3 3 PC-2 3 PC-3 3 PC2 3 PC-3 3 PC-2 3 PC-3 3 PC-2 0
>1
(ll
qo 0.913 1.500 1.999 1. 500 qo 1 . 494 2.000 2.437 2.000 rT
0
(l
q1 -1.488 -2.154 -2.079 -1.338 q1 -2.565 -3.370 -2.995 -2.280 ::>
p;
q2 0.557 0. 652 o. 748 0.364 q2 1 .052 1 . 394 1 . 1 58 0. 784 en
rT
K 0.356 0.848 1 . 2 51 1 . 13 6 K 0.442 0.606 1 . 2 79 1. 216 f-'·
(l

CD 1 . 564 0. 770 0. 597 0. 321 CD 2. 37 8 2.300 0.905 0.645 t:l


f-'·
CI -0.051 -0.002 0.534 0.464 CI -0.044 0.040 0. 469 0.414 en
rT
0.0423 0.0452 0.0528 0.0519 0.0356 0.037 0.0432 0.0431 c
se se >1
t1
0.0387 0.0658 0.0858 0.0663 0.0485 0.0677 0.0807 0.0672 p;
::J
su su
(l
K 0.90 I o. 95 1 . 11 1 . 09 K 0.85 0.88 1 .02 1 .03 CD
--- -- -- en
'
13. Parameter-optimized Controllers for Stochastic Disturbances 251

dratic average value of the control error Se (control performance), the


quadratic average value of the deviation of the manipulated variable Su
(manipulation effort) , and the stochastic control factor

K = lly 2 (k) with controller (13-5)


yz(k} without contr.
for two different sample times. These are shown in the columns headed
by 'Se,stoch +Min'. The same characteristic values were also calcula-
ted for the controller parameters which were optimized for step changes
in the reference variable. They can be found in Table 13.1 in the co-
lumn headed 'S ~+Min' Considering first the controller parameters
e,~

for the control algorithm 3PC-3 optimized for step changes, the para-
meters q 0 and K for stochastic disturbances decrease for both processes
and cD increases, with exception of process II, T0 = 4 sec. The inte-
gration factor ci tends towards zero in all cases, as there is no con-
stant disturbance, meaning that E{v(k)} = 0. The controller action in
most cases becomes weaker, as the manipulation effort Su decreases.
Therefore the control performance is improved as shown by the values
of the stochastic control factor K. The inferior control performance
and the increased manipulation effort of the controllers optimized to
step changes indicates that the stochastic disturbances excite the re-
sonance range of the control loop. As the stochastic disturbance n(k)
has a relatively large spectral density for higher frequencies, the K-
values of the stochastic optimized control loops are only slightly be-
low one. The improvement in the effective value of the output due to
the controller is therefore small as compared with the process without
control; this is especially true for process II. For the smaller sample
time T0 = 4 sec, much better control performance is produced for pro-
cess III than with T0 = 8 sec. For process II the control performance
in both cases is about the same. For the controller 3PC-2 with a given
initial input u(O) = q 0 and where two parameters q 1 and q 2 are to be
optimized, only one value q 0 was given. For process II q 0 was chosen
too large. For process II the control performance is therefore worse
than that of the 3PC-3 controller. In the case of process III for both
sample times T0 = 4 sec and T0 = 8 sec, changes of q 0 compared with
3PC-3 have little effect on performance.

These simulation results show that the assumed 3-parameter-controller,


having a PID-like behaviour for step disturbances, tends to a propor-
tional differential (PD-)action for stationary stochastic disturbances
with E{n(k)} = 0. As there is no constant disturbance, the parameter-
252 13. Parameter-optimized Controllers for Stochastic Disturbances

optimized controller does not have integral action. If in Eq. (5.2-18)


ci = 0, then the pole at z = 1 is cancelled and we obtain a PD-contro-
ller with transfer function

(13-6)

and a control algorithm

(13-7)

If the disturbance signal n(k) is also stationary and E{n(k)} = 0, then


the parameter-optimized controller of Eq. (13-6) can be assumed. As
in practice this is not true, at least a weak integral action is re-
commended in general, and therefore the assumed 3-parameter-controller
of Eq. (13-2) or Eq. (5.2-10) should be used. For this controller one
calculates K and cD using parameter optimization, and then one takes
a small value of the integration factor ci > 0 so that drift components
of the disturbance signal can also be controlled and offsets can be a-
voided.
14. Minimum Variance Controllers
for Stochastic Disturbances

In the design of minimum variance controllers the variance of the con-


trolled variable

var[y(k) J= E{y 2 (k)}

is minimized. This criterion was used in [12.4] by assuming a noise


filter given by Eq. (12.2-31) but with C(z- 1 ) = A(z- 1 ). The manipulated
variable u(k) was not weighted, so that in many cases excessive input
changes are produced. A weighting ron the input was proposed in [14.1],
so that the criterion

is minimized. The noise n(k) can be modelled using a nonparametric mo-


del (impulse response) or a parametric model as in Eq. (12.2-31). As a
result of the additional weighting of the input, the variance of the
controlled variable is no longer minimal; instead the variance of a
combination of the controlled variable and the manipulated variable are
minimized. Therefore a generalized minimum variance controller is pro-
duced.

The following sections derive the generalized minimum variance controll-


er for processes with and without deadtime; the original minimum vari-
ance controller is then a special case for r = 0. For the noise filters
are assumed parametric models as they are particularly suitable for
realizing self-adaptive control algorithms on the basis of parameter
estimation methods.

14.1 Generalized Minimum Variance Controllers


for Processes without Deadtime

It is assumed that the process to be controlled is described by the


transfer function
254 14. Minimum Variance Controllers for Stochastic Disturbances

~ (14.1-1)
u (z)

and by the noise filter


-1 -m
\[1+d 1 z + ... +dmz J
G (z) = n(z) (14.1-2)
-1 m
Pv v(z) 1+c 1 z + ... +cmz

Here v(k) is a statistically independent signal


1 for T = 0
{
E{v(k)v(k+T)} 0 forT f 0 (14.1-3)

E{v(k)} = v= 0

(see Figure 14.1.1).

- - - - I PROCESS
A.v 1 D (z-1)
-
i/
Ju,
I
C (z-1 ) I
I n
I
u I I
e Q (i1) I 8 (z- 1 } y
~ I
p ( z-1) I
A ( z- 1) ~
~

- I
I
I
L _ _ _ _ _ _ _ _j
CONTROLLER

Figure 14.1.1 Control with minimum variance controllers of processes


without deadtime

Now w(k) = 0, =
-y(k) is assumed. The problem is now to de-
i.e. e(k)
sign a controller which minimizes the criterion

(14.1-4)

The controller must generate an input u(k) such that the errors in-
duced by the noise process {v(k)} are minimized according to Eq.
(14.1-4). In the performance function I, y(k+1) is taken and not y(k),
as u(k) can only influence the controlled variable at time (k+1) be-
cause of the assumption b 0 = 0. Therefore y(k+1) must be predicted on
the basis of known signal values y(k), y(k-1), .•. and u(k), u(k-1),
Using Eq. (14.1-1) and Eq. (14.1-2) a prediction of y(k+1) is

z y ( z) (14.1-5)

and
14.1 Minimum Variance Controllers for Processes without Deadtime 255

-1 -1
A(z )C(z )z y(z)

(14.1-6)
or
-1 -m -1 -m
(1+a 1 z + ... +amz ) (1+c 1 z + ... +cmz )z y(z)
-1 -m -1 -m
(b 1 z + •.• +bmz ) (1+c 1 z + ... +cmz )z u(z)
-1 -m -1 -m (14.1-7)
+ A(1+a 1 z + ... +amz ) (1+d 1 z + ..• +dmz )z v(z).

After multiplying and transforming back into the time domain we obtain

b 1 u(k) + (b 2 +b 1 c 1 )u(k-1) + + bmcmu (k-2m+1)

+ A[v(k+1) + (a 1+d 1 )v(k) + + amdmv(k-2m+1) J. (14.1-8)

Therefore the performance criterion of Eq. (14.1-4) becomes

I(k+1) = E{:-(a 1 +c 1 )y(k) - ... - amcmy(k-2m+1)

+ b 1u(k) + (b 2 +b 1 c 1 )u(k-1) + ... + bmcmu(k-2m+1)

+ A[(a 1 +d 1 )v(k) + •.• + amdmv(k-2m+1)]

+ A v(k+1l] 2 + ru 2 (k)} (14.1-9)

At time instant k, all signal values are known with the exception of
u(k) and v(k+1). Therefore the expectation of v(k+1) only must be ta-
ken. As in addition v(k+1) is independent of all other signal values

I(k+1) = [-<a 1 +c 1 )y(k)-


+ (b 2 +b 1 c 1 )u(k-1) + + bmcmu (k-2m+1)

+ A[(a 1 +d 1 )v(k) + ... + a d v (k-2m+1)


m m j
2 ]1
+ A2 E{v 2 (k+1)}

+ 2A[-<a 1 +c 1 )y(k) - + bmcmu (k-2m+1)

+ A[ (a 1 +d 1 ) v ( k) + • • . + ad v(k-2m+1)]1E{v(k+1)}
m m ~
+ ru 2 (k). (14.1-10)

Therefore the condition for optimal u(k) becomes

di (k+1)
()u(k) 2 [- (a 1 +c 1 ) y (k) - ... - amcmy (k-2m+1)

+ b 1u(k) + (b 2 +b 1 c 1 )u(k-1) + ... + bmcmu(k-2m+1)

+ A[(a 1 +d 1 )v(k) + ..• + amdmv(k-2m+1) J] b1

+ 2ru(k) = o. (14.1-11)
256 14. Minimum Variance Controllers for Stochastic Disturbances

In this equation the term in brackets before b 1 can be replaced using


Eq. (14.1-8) giving

[zy(z) - A.zv(z) Jb 1 + ru(z) 0. (14.1-12)

Applying Eq. (14.1-5)


C( -1) B(z- 1 )C(z- 1 )
A.zv(z) z zy(z) - zu(z)
D(z- 1 ) A(z 1 )D(z- 1 )

one finally obtains the generalized minimum variance controller

-1 -1 -1
u (z) A (z ) [D ( z ) -c (z ) ]z
GRMV1 (z)
y (z) -1 -1 r -1 -1 •
zB(z )C(z )+~A(z D(z )
1
(Abbreviation: MV1) (14.1-13)

This controller contains the process model with polynomials A(z- 1 ) and
B(z- 1 ) and the noise model with polynomials C(z- 1 ) and D(z- 1 ). With
r = 0, the simple form of the minimum variance controller is produced

-1 -1 -1
A(z ) [D ( z ) -c (z ) ]z
-1 -1
zB(z )C(z )

_ zA(z -1 ) [ D(z -1 ) _ 1 ] .
-1 -1
(14.1-14)
zB ( z ) C ( z )
(Abbreviation: MV2)

If C(z- 1 ) = A(z- 1 ), as assumed in [12.4], it gives

-1 -1
_ [D(z )-A(z ) ]z
-1 r 1
(14.1-15)
zB(z )+~D(z )
1
(Abbreviation: MV3)

and for r = 0
-1 -1
G (z) = _ [D(z )-A(z ) ]z (14.1-16)
RMV4 zB(z-1)

(Abbreviation: MV4)

These controller equations have the following properties:


14.1 Minimum Variance Controllers for Processes without Deadtime 257

a) Controller order

numerator denominator
MV1 2m-1 2m
MV2 2m-1 2m-1
MV3 m-1 m
MV4 m-1 m-1

Because of the high order of MV1 and MV2, one should assume C(z- 1 )
A(z- 1 ) for modelling the noise and then prefer MV3 or MV4.

b) Cancellation of poles and zeros

Taking into consideration the discussion in chapter 6 on the approxi-


mate cancellation of poles and zeros of controllers and process, the
f~llowing can be stated:

MV1: The poles of the process (A(z- 1 ) = 0) are cancelled. Therefore the
controller should not be applied to processes whose poles are near
the unit circle or to unstable processes.

MV2: The poles and zeros of the process (A(z- 1 ) = 0 and B(z- 1 ) = 0)
are cancelled. Therefore the controller should not be used with
processes as for MV1 nor processes with nonminimum phase behavi-
our.

MV3: In general no restriction.

-1
MV4: The zeros of the process (B(z ) 0) are cancelled. Therefore
this controller should not be used with processes with nonmini-
mum phase behaviour.

The most generally applicable controller is therefore MV3.

c) Stability

It is assumed that the conditions listed under b) are satisfied. Then


the characteristic equation of the closed loop

with minimal variance controllers MV1 and MV2 becomes

[: A(z)+zB(z) ]D(z) o. (14.1-17)


1
258 14. Minimum Variance Controllers for Stochastic Disturbances

It follows that for closed-loop stability

MV1 and MV3 (r f 0) :


The zeros of the noise filter D(z) 0 must lie within
the unit circle of the z-plane.
- The zeros of
[~ A(z} + zB(z) J = 0
1
must lie within the unit circle. The larger the weight r
on the process input, the nearer are these zeros to the
zeros A(z) = 0, i.e. to the process poles.

MV2 and MV4 (r = 0) :


- The characteristic equation of the closed loop becomes,
for r = 0
zB ( z) D ( z) = 0.
- Therefore the zeros of the process B(z) = 0 and of the
noise filter D(z) = 0 must lie within the unit circle.
- The poles of the noise filter C(z) = 0 (for MV2) do not
influence the characteristic equation. Therefore they
can lie anywhere.

d) Dynamic control factor

The dynamic control factor of the closed loop means for the controller
MV1
zB(z- 1 )C(z- 1 )~A(z- 1 )D(z- 1 )
R(z) ~
r -1 -1 -1
n(z) [b A(z )+zB(z ) ]D(z )
1
(14.1-18)
For r = 0, i.e. for the controller MV2,

R(z) (14.1-19)
GP (z)
v
Therefore the dynamic control factor for r 0 is the inverse of the
noise filter. It follows that

lliL 1. ( 14 .1-20)
A.v(z)

Hence the closed-loop is forced to behave as the reciprocal of the


noise filter. Poles and zeros of the processes do not appear in Eq.
(14.1-19) because they are cancelled by the controller. With increa-
sing weight r on the process input, however, the poles of the process
increasingly influence the closed-loop behaviour, as can be seen from
Eq. (14.1-18).
14.1 Minimum Variance Controllers for Processes without Deadtime 259

e) Control variable y(k)

For the disturbance transfer behaviour of the closed-loop using con-


troller MV1 it is
~A(z- 1 )o(z- 1 )+zB(z- 1 )C(z- 1 )
r -1 -1 -1
A.v(z) [bA(z ) +zB (z ) ]C (z )
1

bA(z
r -1
)[D(z
-1
)-C(z
-1
)]
1 + ~1----~---------------­ (14.1-21)
[:A(z-1)+zB(z-1) JC(z-1).
1

The controlled variable y(k) for r +0 is a mixed autoregressive moving-


average process with order 2m for MV1 and order m for MV3. For r ~ 0,
i.e. for MV2 and MV4, it is y(z) ~ A.v(z), i.e. the controlled variable
becomes a statistically independent, i.e. white noise, process with
variance a 2 = >.. 2 a 2 • The smaller the weight ron the process input, the
y v
smaller is the variance of the control variable y(k), and the control
variable converges to a white noise signal A.v(k). The smallest variance
which can be attained by a minimum variance controller is therefore

min var[y(k) J = >.. 2 • (14.1-22)

f) Special case

If D(z- 1 ) = C(z- 1 ), all minimal variance controllers are identically


zero. If a statistically independent noise n(k) = A.v(k) acts directly
on the controlled variable, minimum variance controllers cannot de-
crease the variance of the controlled variable; only for coloured noise
n(k) can the variance of the controlled variable be reduced. The more
'colourful' the noise, i.e. the greater differences in [D(z- 1 )-C(z- 1 ) J
the larger is the effect of the minimum variance controller.

g) Behaviour of minimum variance controllers for constant disturbances


E{v(k)} f 0

From Eq. (14.1-13) it follows that the static behaviour of MV1 satis-
fies
A(1) [D(1)-C(1) J Lai [ Ldi -Lei J
GRMV1 (1) (14.1-23)
r
B ( 1) C ( 1) +:A ( 1) D ( 1) LbiLci-h,""LaiLdi
1
m
Here L is read as L • If the process GP(z) has a proportional action
behaviour, i.e. La~=~~
0 and Lb.
J.
f 0, then the controller MV1 in gene-
ral has a proportional action static behaviour. For constant distur-
260 14. Minimum Variance Controllers for Stochastic Disturbances

bances, therefore, offsets occur. This is also the case for the mini-
mum variance controllers MV2, MV3 and MV4. To avoid offsets with mini-
mum variance controllers some modifications must be made, and these are
discussed in chapter 14.3.

Typical properties of the minimum variance controllers are summarized


in Table 14.1.1. The best overall properties are shown by controller MV3.

Table 14.1.1 Different properties of minimum variance controllers


(A- = 0 means: zeros of A on or outside the unit circle,
c.f. chapter 6)

Con- Danger of Offset dis-


trol- GR Instability appears .for
instability
ler for w=n=u =1 w=n=1
for v
MV1 - zA[D-C] A- = 0 D- = 0 - -
r
zBC+bAD
1

zA(D-C) A- = 0 D - = 0 c (1) =0
c (1)=0
B-
MV2 r=O -
zBC = 0 B = 0

- = 0
MV3 C=A _z[D-AJ
- D - A(1)=0
zB+:D
1
-
B- = 0 -
C=A _z[D-A] D = 0 A. ( 1) =0
MV4 r=O zB B- = 0

Hence for practical realization of minimum variance controllers,


C(z- 1 ) = A(z- 1 ) should be assumed. In deriving minimum variance con-
trollers we assumed b 0 = 0. If b 0 + O, one needs only replace b 1 by b 0
and write

h) Choice of the weighting factor r

The influence of the weighting factor r on the manipulated variable


can be estimated by looking at the first input u(O) in the closed-loop
after a reference variable step w(k) = 1 (k), see Eq. (5.2-30). Then
one obtains u(O) = q 0 w(O) = q 0 • Therefore q 0 is a measure for the size
of the process input. For the controller MV1 (process without deadtime)
it follows, if the algorithm is written in the form of a general linear
14.2 Minimum Variance Controllers for Processes with Deadtime 261

controller as Eq. (11.1-1)

d1 - c,
q = (14.1-24)
0 b +..£..
1 b1

and for MV3


d1 - a,
qo (14.1-25)
b1 +..£..
b1

Hence, there is approximately a hyperbolic relationship between q 0 and


r/b 1 for r/b 1 >> b 1 • r = o leads to MV2 or MV4 with q 0 = (d 1- c 1 )/b 1 or
q 0 = (d 1- a 1 )/b 1 • A reduction of this q 0 by one half is obtained by
choosing
(14.1-26)

b 1 can be estimated from the process transient response as for a pro-


cess input step u 0 the relationship b 1 = y(1)/u 0 holds. For a process
with deadtime one obtains as well for MV1-d as for MV3-d(see section
14.2)
(14.1-27)

14.2 Generalized Minimum Variance Controllers


for Processes with Deadtime

The process to be controlled may be described by the transfer function


with deadtime

ml z
-d
z
-d (14 .2-1)
u(z)

as shown in Fig. 14.2.1. The disturbance filter is as assumed in Eq.


(14.1-2) and Eq. (14.1-3) describes the disturbance signal v(k). As
the input u(k) for processes with deadtime d can influence the con-
trolled variable y(k+d+1) at the earliest, the performance criterion

I(k+1) = E{y 2 (k+d+1) + ru 2 (k)} (14.2-2)

is used. Corresponding to Eq. (14.1-5), for the prediction of y(k+d+1)


results
262 14. Minimum Variance Controllers for Stochastic Disturbances

__
AV ---1 G ( l D(z1J
1--------, ~ Pv z =C (z-1)
,__ __.
I
I
I
_j
n
w y

Figure 14.2.1 Control with a minimum variance controller for processes


with deadtime

1 1 (d+1)v(z).
z = B(z- ) zu(z) + A D(z- )
z (d+1) y () 2 (14.2-3)
A(z- 1 ) C(z- 1 )

As at the time k for which u(k) must be calculated the disturbance sig-
nals v(k+1), •.• , v(k+d+1) are unknown, this part of the disturbance
filter is separated as follows

z(d+ 1 )y(z) = B(z- 1 ) zu(z) + A[F(z- 1 )z(d+ 1 )+ L(z=~)]v(z). (14.2-4)


A(z- 1 ) C(z )

As can also be seen from Fig. 14.2.1, the disturbance filter is sepa~
rated into a part F(z- 1 ) which describes the parts of n(k) which cannot
be controlled by u(k), and a part z-( 1 +d)L(z- 1 )/C(z- 1 ) describing the
part of n(k) in y(k) which can be influenced by u(k). The corresponding
polynomials are

-1 -d
F (z - 1 ) + f 1z + + fdz (14.2-5)
-1 -(m-1)
L (z - 1 ) 10 + 1 1 z + + lm-1 z . (14.2-6)

Their parameters are obtained by equating coefficients in the identity:

(14.2-7)

Example 14.2.1

For m = 3 and d 1 it follows from Eq. (14.2-7)

f1 d1 - c,
10 d 2 - c 2 - c 1f 1
14.2 Minimum Variance Controllers for Processes with Deadtime 263

11 d3 - c3 - c2f1

12 - c3f1

and for m 3 and d 2

f1 d1 - c1

f2 d2 - c2 - c1f1

10 d - c - c1f2 + c2f1
3 3
11 - c2f2 - c3f1

12 - c3f2.

The coefficients for m = 2 are obtained by c 3 d3 0 and for m

Eq. (14.2-4) now leads to

B(z- 1 )C(z- 1 )zu(z)


+ AF(z- 1 )A(z- 1 )C(z- 1 )z(d+ 1 )v(z)

+ AL(z- 1 )A(z- 1 )v(z). (14.2-8)

After multiplying and transforming back into the time-domain, one ob-
tains from Eq. (14.1-7) to Eq. (14.1-10) I(k+1) and from ai(k+1)/au(k)=O
as in Eq. ( 1 4 . 1-1 2)

[z(d+ 1 )y(z)-AF(z- 1 Jz(d+ 1 )v(z)]b 1 + ru(z) = 0. (14.2-9)

Substituting from Eq. (14.2-3) one finally obtains

-1
AV(Z) = ~_l y(z)
D (z - 1 )

one finally obtains

u(z) A ( z - 1 ) ( D ( z - 1 ) - F ( z - 1 ) C ( z - 1 ) Jz ( d+ 1)
GRMV1d (z) = - - -1 -1 -1 r -1 -1
y (z) zB(z )C(z )F(z )+~A(z )D(z )
1
-1 -1
A(z )L(z ) • (14.2-10)
zB(z 1 JC(z 1.)F(z 1 )+: A(z- 1 )D(z- 1
1
(In short: MV1-d)
264 14. Minimum Variance Controllers for Stochastic Disturbances

For r = 0

GRMV2d(z) (14.2-11)

(In short: MV2-d))

With C(z- 1 ) = A(z- 1 ) and r +0 it follows that

GRMV3d(z) (14.2-12)

(In short: MV3-d)

and with r = 0

GRMV4d (z) -1 -1 . (14.2-13)


zB(z )F(z )

(In short: MV4-d)

The properties of these minimum variance controllers with d +0 can be


summarized as follows:

a) Controller order

- MV1-d and MV2-d: Numerator: 2m-1


Denominator: 2m+d-1 (d ~ 1)
- MV3-d and MV4-d: Numerator: m-1
Denominator: m+d-1 (d ~ 1)

b) Cancellation of poles and zeros

As for controllers without deadtime.

c) Stability

The characteristic equations for MV1-d and ~fV3-d are

[brA(z) + zB(z) ]D(z) = 0 (14.2-14)


1
and for MV2-d and MV4-d

zB(z)D(z) = 0. (14.2-15)

They are identical with the characteristic equations for the mini-
mum variance controllers without deadtime, and therefore one reaches
the same conclusions concerning stability.
14.3 Minimum Variance Controllers without Offset 265

d) Dynamic control factor

For MV1-d one obtains


zB(z- 1 )C(z- 1 )F(z- 1 )~A(z- 1 )o(z- 1 )
R(z) ~ (14.2-16)
n(z) [: A(z-1)+zB(z-1) ]D(z-1)
1
With r = 0 it follows that for controller MV2-d
C(z- 1 )F(z- 1 ) F(z -1) = 1 - z -(d+1)L(z- 11 ) •
R(z) (14.2-17)
D (z - 1 ) D (z- )

Again in the dynamic control factor the reciprocal disturbance filter


arises, but it is now multiplied by F(z- 1 ) which takes into account
disturbances v(k+1) ••• v(k+d+1) which cannot be controlled by u(k).

e) Controlled variable

For r 0, we have for controllers MV2-d and MV4-d

A;~~~ R(z)GPv(z) * = F(z- 1 ). (14.2-18)

y(z) is therefore the moving average process

y(k) = [v(k) + f 1v(k-1) + ..• + fdv(k-d)]A (14.2-19)

and the variance of y(k) is


2 2 2 2
var[y(k) J= E{y (k)} = [1 + f 1 + ••• + fd]A . (14.2-20)

The larger the deadtime the larger is the variance of the controlled
variable.

14.3 Minimum Variance Controllers without Offset

To avoid offsets of the controlled variable for constant external dis-


turbances or constant reference value changes, the controller should
satisfy, c.f. chapter 4,

lim GR (z) (14.3-1)


z+l
As this is not true for the derived minimum variance controllers in the
case of proportional acting processes, the controllers must be suitab-
ly modified and three methods are described in the next sections.
266 14. Minimum Variance Controllers for Stochastic Disturbances

14.3.1 Additional Integral Acting Term

The simplest modification is to add an additional pole at z = 1 to the


minimum variance controller transfer function. Rather more freedom in
weighting the integral term is obtained by multiplying the minimum va-
riance controller

u 1 (z}
(14.3-2}
YTZ>
by the proportional integral action term
-1
G (} u(z} = 1 + _SL = 1-(1-a}z (14.3-3}
PI z = u 1 (z} z- 1 1-z- 1

This results in an additional difference equation

u(k} - u(k-1} = u 1 (k} - (1-a}u 1 (k-1} (14.3-4}

with the special cases

a = 0: u(k} u 1 (k} (only P-action; no I-action}


a= 1: u(k}-u(k-1} U 1 (k} (equal weighting of the P- and
I-term}

For a +0 then

lim GR(z} = lim G (z}G (z} = oo


z+1 z+1 MV I

is fulfilled if for controllers MV1 and MV2 D(1} + C(1}, and for MV3
and MV4 D(1} + A(1}. If these conditions are not satisfied, additional
poles at z can be assumed

MV2: C 1 (z} (z-1}C(z}


MV3 and MV4: A (z} 1 (z-1}A(z}

Only for MV1 is there no other possibility. The insertion of integra-


tions has the advantage of removing offsets. However, this is accom-
panied by an increase of the variance of y(k} for higher frequency dis-
turbance signals v(k}, c.f. section 14.5. Through a suitable choice of
a both effects can be weighted against each other.

14.3.2 Minimization of the Control Error

The minimum variance controllers of section 14.2 were derived for a


vanishing reference variable w(k} = 0 and therefore for y(k} = - e(k).
Now the performance criterion is modified into
14.4 Variance Controllers for Processes with Pure Deadtime 267

I(k+1) = E{[y(k+d+1) - w(k) ] 2 + r[u(k) - u (k) ] 2 } (14.3-5)


w

so that the variances around the non-zero operating point [w(k); uw(k) J
are minimized with

A( 1) 1
uw(k) = B1TT w(k) = KP w(k) (14.3-6)

the value of u(k) for y(k) = w(k) 1 the zero-offset case. A derivation
corresponding to section 14.2 then leads to the modified minimum vari-
ance controller [14.2]

-1 -1 -1
u (z)
L(z l[D(z )-C(z )]z
-1 -1 -1 r -1 -1 Y ( z)
zB(z )C(z )F(z ) + ~A(z )D(z )
1

GRMV1-d(z)

(1 + ...£. ;.)w(z).
b1 p

(14.3-7)
This controller removes offsets arising from variations in the reference
variable w(k). Another very simple possibility in the connection with
closed loop parameter estimation is shown in section25.3.

14.4 Minimum Variance Controllers for Processes with


Pure Deadtime
The minimum variance controllers of section 14.2 1 being structurally
optimal for the process B(z- 1 lz-d/A(z- 1 ) and the stochastic disturbance
filter D(z- 1 )/C(z- 1 ) 1 were derived for timelag processes with deadtime.
As can be seen from Eqs. (14.2-18) - (14.2-20) 1 the controlled variab-
les of the controllers MV2-d and MV3-d are a moving average signal pro-
cess of order d whose variance increases strongly with deadtime d.

As in section 9.2.2 we consider minimum variance controllers for pure


deadtime processes. Based on

-1 -(d-1)
B(z )z (14.4-1)
-1
b 1z and the deadtime d-1 as in section 14.2 1 the follow-
ing controllers can be derived (c.f. Eq. (9.1-4):
268 14. Minimum Variance Controllers for Stochastic Disturbances

a) Disturbance filter: GPv

(14.4-2)

-1
G = _ L(z ) (14.4-3)
RMV2d b1C(z-1)F(z-1)

with, from Eq. (14.2-5),


- (d-1)
+ fd-1z (14.4-4)

and from Eq. (14.2-7) one now has

-1 -1 -1 -d -1 (14.4-5)
D(z )=F(z )C(z )+z L(z ).

If the order of the polynomial C(z- 1 ) is m ~ 1 or of D(z- 1 ) m ~ d, then


there exist nonzero controllers.

-1 -1
D (z ) -+ C (z )

GRMV3d (z) (14.4-6)

L (z - 1 )
GRMV4d(z) (14.4-7)
1
b 1F(z )

From Eq. (14.2-7) it follows that L(z- 1 ) = 0, and therefore no control-


ler exists if the order m of D(z- 1 ) ism~ d- 1. This again illustra-
tes the principle used to derive the minimum variance controller - to
predict the controlled variable y(k+d+1) based on known values of
u(k-1), u(k-2), ... and v(k-1), v(k-2), ... and use the predicted va-
lue to compute the input u(k). Here the component of the disturbance
signal

yv(k+d+1) = [v(k+d)+f 1v(k+d-1)+ ... +fd_ 1v(k+1) ]A (14.4-8)

cannot be considered nor controlled (see Eq. (14.2-4) and Eq. (14.2-19)).
If now the order of D(z- 1 ) ism= d-1 then

yv(k+d+1) = [v(k+d)+d 1v(k+d-1)+ •.. +dd_ 1v(k+1) ]A. (14.4-9)

Then D(z- 1 ) = F(z- 1 ), and the disturbance signal consists of the uncon-
trollable part so that the minimum variance controller cannot lead to
14.5 Simulation Results with Minimum Variance Controllers 269

any improvement over the open-loop and is therefore null. Only if m ~ d


can the minimum variance controller generate a smaller variance of y(k)
than the open-loop.

Hence minimum variance controllers lead only to a better control perfor-


mance compared with the uncontrolled pure deadtime process if the dis-
turbance signal n(k) acting on y(k) is an autoregressive moving average
(coloured noise) process or a moving average process of order m ~ d.

14.5 Simulation Results with Minimum Variance


Controllers
The behaviour of control loops with minimum variance controllers is now
shown using an example. The minimum variance controllers MV3 and MV4
were simulated for a second-order test process using a digital computer.

Process VII (second-order low-pass process)

A(z- 1 ) 1- 1.036z- 1 + 0.263z- 2

0.1387z- 1 + 0.0889z- 2 } ( 14.5-1)


+ 0.5z- 1 + 0.25z- 2

The polynomials A and B are obtained from the transfer function

1
G(s) = (1+7.5s) (1+5s)

with a sample time T0 = 4 sec.

For the minimum variance controller MV4, Eq. (14.1-15), the quadratic
mean values of the disturbance signal n(k), the control variable y(k)
and the process input u(k) were determined by simulation for weighting
factors on the process input of r = 0 ••• 0.5 and weighting factors on
the integral part of~= 0 ••• 0.8, applying Eq. (14.1-25). Then the cha-
racteristic value (the stochastic control factor)

K = (14.5-2)

was determined and shown as a function of

In Figure 14.5.1 N 150 samples were used. Figure 14.5.1 now shows:
270 14. Minimum Variance Controllers for Stochastic Disturbances

The best control performance (smallest K) is obtained using


r = 0 and a = 0, i·. e. for controller MV4.
- The rather small weighting factors of r = 0.01 or 0.02 reduce
the effective value of the manipulated variable compared with
r = 0 by 48 % or 60 % at the cost of a relatively small increase
in the effective value of the controlled variable by 12 % or
17 % (numbers given for a = 0). Only for r ~ 0.03 does K become
significantly worse.
- Small values of the integral part a ~ 0.2 increase the effec-
tive value of the controlled variable by r ~ 3 •.• 18% according
to r. For about a > 0.3 the control performance, however, be-
comes significantly worse.

K=~~
1.0
.,.
',0.
~i".
.
o.1-\\J\_
'\
1\.
o.1\ /""'
o.os·'\. 1 "-.
0.5
' ............/ '------- 0 L.
r=o.oi'-.....•~:2
O.Q1 ~
r
-oicx=O
r- I
MVL.

0 0.5 1.0 1.5

Figure 14.5.1 Stochastic control factor K as a function of the manipu-


lating effort Su for process VII with the minimum vari-
ance controller MV3 for different weighting factors r on
the process input and a on the integral part.
14.5 Simulation Results with Minimum Variance Controllers 271

Figure 14.5.2 a) shows a section of the disturbance signal n(k) for


A= 0.1, the resulting control variables and manipulated variable with
the minimum variance controller MV4 (r 0)

-1
G (z) = _ 11.0743- 0.0981z
RMV 4 1 + 0.6410z- 1

and with the controller MV3 for r = 0.02

-1
G (z) = _ 5.4296 - 0.0481z
RMV 3 1 + 0.569z- 1 + 0.1274z- 2

For MV4 it can be seen that the standard deviation of the controlled
variable y is significantly smaller than that of n(k); the peaks of
n(k) are especially reduced. However, this behaviour can only be ob-
tained using large changes in u(k). A weighting factor of r = 0.02 in
the controller MV3 leads to significantly smaller amplitudes and some-
what larger peak values of the controlled variable.

Figure 14.5.2 b) shows the responses of the controlled variable to a


step change in the reference value. The controller MV4 produces large
input changes which are only weakly damped; the controlled variable
also shows oscillating behaviour and an offset. By using the deadbeat
controller DB(v) the maximal input value would be u(O) 1/(b 1 +b 2 )=4.4;
the minimum variance controller MV4, however, leads to values which
are more than double this size. In addition, the offset means that the
resulting closed loop behaviour is unsatisfactory for deterministic
step changes of w(k). The time response of u(k) obtained using con-
troller MV3 and r = 0.02 is much better. However, the input u(O) is
still as high as for DB(v) and the offset is larger than for MV4.

For a = 0.2, Fig. 14.5.2 c), the offset vanishes. The time response of
the manipulated and the controlled variable is more damped compared with
Fig. 14.5.2 b). The transient responses of the various controllers are
shown in Figure 14.5.3.

The simulation results with process III (third order with deadtime)
show that with increasing process order it becomes more and more diffi-
cult to obtain a satisfactory response to a step change in the refe-
rence variable
272 14. Minimum Variance Controllers for Stochastic Disturbances

n
04
w w
0.3
0.2 1 .................... 1 ....................

0.1
0~~~~~~-+~--~+---~ 0 t---+...--..::.k-
50 10 20 10 20
-0.4
y
-0.3
-0.2

................
1.5 MV 4 lr=OI 1.5 •

1.0 \/

0
10 20 10 20

u u

10 10
MV4ir=OI

0
20

-1.0 -5 5

-1.5
-2.0 -10 10
y

04 MV 3 lr• 0.02) MV 3 lr:0.021

0.3
02
0.1
............. , z'-.;;·"··············-
10 20 10 20
-0.1
-0.2 u u

10 10

MV 3 (r= 0.02)

10 20

a) Stochastic disturbance b) Step change in c) Step change in


n(k) the reference the reference
variable w (k). variable w (k),
a. = 0 a. = 0.2

Figure 14.5.2 Signals for process VII with minimum variance controllers
MV4 and MV3
14.5 Simulation Results with Minimum Variance Controllers 273

u u

10 10
MV4 (r=O) MV4 (r=O)

5 5

0 10 20 k 0 10 20 k

u u
10 10

MV 3 (r=0.02)

5 5

0 10 20 k 0 10 20 k
a)cx=O b) 0<=0.2

Figure 14.5.3 Transient responses of the controllers MV3 and MV4 and
process VII for different r and a
15. State Controllers for Stochastic
Disturbances

15.1 Optimal State Controllers for White Noise

The process model assumed in chapter 8 for the derivation of the state
controller for deterministic initial values is now excited by a vector
stochastic noise signal ~(k)

~(k+1) =~ ~(k) + ~ ~(k) + ~ ~(k). (15.1-1)

The components of ~(k) are assumed to be normally distributed, statis-


tically independent signal processes with expectation

E{~(k)} = 0 (15.1-2)

and covariance matrix

(15.1-3)

c
where
for i j
0 .. =
+j
~J
for i

is the Kronecker Delta-function. ~(k) is assumed to be statistically


independent of ~(k). The initial value ~(0) is also a normal stochas-
tic process with

E{~(O)} =0
(15.1-4)
cov[~(O) J
The matrices V and ~O are positive semidefinite.

We now look for a controller which generates a process input ~(k),


based on completely measurable state variables ~(k), such that the
control system approaches the final state ~(N) ~ Q and the quadratic
performance criterion
15.1 Optimal State Controllers for White Noise 275

N-1
E{I} = E{~T(N)~ ~(N) + E (~T(k)~ ~(k)+~T(k)~ ~(k)J} (15.1-5)
k=O
becomes a minimum. Here ~ is assumed to be positive semidefinite and
symmetric, and ~ to be positive definite and symmetric. As the state
variables and input variables are stochastic, the value of the perfor-
mance criterion I is also a stochastic variable. Therefore the expec-
tation of I is to be minimized, Eq. (15.1-5). As in section 8.1 the
output variable y(k) is not used. Section 15.3 considers the case of
nonmeasurable state variables ~(k) and the use of measurable but dis-
turbed output variables. The literature on stochastic state controllers
started at about 1961, and an extensive treatment can be found in
[12.2], [12.3], [12.4], [12.5], [8.3].

The Bellman optimality principle, described in section 8.1, can be


used to calculate the optimal input ~(k), giving:
N-1
min E{I} =min E{xT(N)Q x(N) + E [~T(k)~ ~(k)+~T(k)~ ~(k) J}
~(k) - - - k=O
k=O, 1 ,2, ••• ,N-1
N-1
min E { min E{~T (N) ~ ~ (N) + E [~T (k) ~ ~ (k) +}!T (k) ~ }! (k)]} }.
k=O
(15.1-6}
If I possesses a unique minimum, it is given by([12.4J p.260)

min E{I} = E{min I}. (15.1-7}

Optimization and expectation operations can therefore be commuted. It


is therefore plausible that one obtains the same equations for stochas-
tic state controllers as in the deterministic case. This is [12.4]

~(N-j) = - !N-j ~(N-j} (15.1-8}

together with Eqs. (8.1-30) and (8.1-31}. For N + oo the stationary so-
lution becomes

}!,(k) =- ! ~(k} (15.1-9}

i.e. a linear time-invariant state controller. This controller can be


interpreted as follows. From Eq. (15.1-1}, }!(k} can only decrease
~(k+1}. Since ~(k+1}, as well as }!(k), depend only on ~(k) and ~(k},

but not on ~(k-1), ~(k-2), .•• and ~(k-1}, ~(k-2), ••• and furthermore
~(k) is statistically independent of ~(k-1), ~(k-2), ••• the control
276 15. State Controllers for Stochastic Disturbances

law for large N can be restricted to ~(k) = !(~(k)) (c.f. Eq. (15.1-9)).
For small N both the stochastic initial value ~(0) and the stochastic
disturbances y(k) have to be controlled, and the resulting optimal con-
troller is Eq. (15.1-8). As the optimal control of a deterministic ini-
tial value ~(O) leads to the same controller, Eq. (8.1-33) is an opti-
mal state controller for both deterministic and stochastic disturbances
if the same weighting matrices are assumed in the respective performance
criteria.

We now consider the covariance matrix !(k+1) of the state variables in


closed loop for the stationary case. From Eq. (15.1-1) and Eq. (15.1-9)
it follows that

~(k+1l = [~ - ~ ~J ~<kl + K y(kl (15.1-10)

and according to Eq. (12.2-25)

! (k+1) (15.1-11)

and for k + oo the covariance matrix becomes

(15.1-12)

The value of the performance criterion which can be attained with the
optimal state controller, can be determined as follows. If Eq. (15.1-1)
instead of Eq. (8.1-7) is introduced into Eq. (8.1-6), the calculations
of that section follow until Eq. (8.1-19) giving

T T T
E{IN-1 ,N} E{~ (N-1) ~N-1,N~(N-1)+y (N-1lK 2 K y(N-1)}
E{~T (N-1) T T
E{IN-1} ~N- 1 ~(N-1)+y (N-1lK2Ky(N-1)}
or
T T T
E{~ (N-2)~N-2 ~(N-2) + y (N-2)K 2 K y(N-2)
+ yT(N-1)KT2 K y(N-1)}
and therefore finally, if y(k) is stationary, for N steps

(15.1-13)

In the steady state ~ = ~' and instead of the single initial state
~(0) the disturbance signals K y(k) can be taken. Then

Y = lim~ E{I 0 } lim~ E{N yT(k)KT~ K y(k)+N yT(k)KT2 K y(k)}


N+oo N+oo

(15.1-14)
using Eq. (12.2-28).
15.2 Optimal State Controllers with State Estimation 277

15.2 Optimal State Controllers with State Estimation


for White Noise
In section 15.1 it was assumed that the state variables ~(k) can be
measurable exactly, but in practice this is generally untrue and the
state variables must be determined on the basis of measured variables.
We now consider the process

~(k+1) = ~ ~(k) + ~ ~(k) + F ~(k) (15.2-1)

with measurable outputs

y(k) = f ~(k) + g(k) (15.2-2)

or

y(k+1) = f ~(k+1) + g(k+1).

Here it is assumed that the output disturbance signal satisfies

E{g(k)} = 0

cov[_n(k); T=i-jJ = E{n(i)nT(j)} =No ..


- - - ~J

i.e. white noise. In section 15.4 it will be shown that the unknown
state variables can be recursively estimated by a state variable filter
(Kalman filter) which measures y(k) and ~(k) and applies the algorithm

~(k+1) ~ ~(kl + ~ ~(kl + I<k+1l[y(k+1l-f ~ ~(kl-f ~ ~<k> J.


(15.2-4)
Here I(k+1), the correction matrix, follows from Eqs. (15.4-24) and
(15.4-26 ). Fork+ oo this matrix converges to a constant I· For state
estimation ~' y and K have to be known. Replacing the state variables
~(k) in the control law Eq. (15.1-9) by their estimates

~(k) = - .!5. ~(k) (15.2-5)

then one again obtains an optimal control system which minimizes the
performance criterion Eq. (15.1-5) [12.4]. For the overall system we
then have

[~(k+1 '1 [~ ~ ~ c
!
A-B K-r
- B
w(k)J
~(k+1)
----- ~ ~(k)

+ [~ f K
0]
I
[v(k) ]
g(k+1l
(15.2-6)

Introducing an estimation error, as in section 8.7,


278 15. State Controllers for Stochastic Disturbances

~(k) = ~(k) - ~(k) (15.2-7)

and transforming Eq. (15.2-6) by the linear transformation of Eq. (8.7-4)


into the equation system

[~(k+1)I
~(k+1) "[:- B K

A -
B K
.I. f ~
I [x(k)l
x<k>

+ [ [~ - 0] [ v(k) J (15.2-8)
r fJ~-.I. ,!!(k+1)

This equation system is identical to the equation system Eq. (8.7-5)


with exception of the last noise term. Instead of the observer feedback
~~(k) = ~ f ~(k) here ~~(k+1) = .I. f ~ ~(k), the state filter feedback,
influences the modes, as the state filter, unlike the observer of sec-
tion 8.6, uses a prediction ~ ~(k) to correct the state estimate. The
poles of the control system with state controller and filter follow
from Eq. (15.2-8)

(15.2-9)

They consist, in factored form, of the m poles of the control system


without state filter, Eq. (15.1-10), and of them poles of the state
filter. Therefore the poles of the control and the poles of the state
filter do not influence each other and can be independently determined.
Stochastic state controllers also satisfy the separation theorem. The
design of the state filter is independent of the weighting matrices Q
and ~ of the quadratic performance criterion which determine the linear
controller as well as the process parameter matrices ~ and ~- The de-
sign of the controller is also independent of the covariance matrices
y and ~ of the disturbance signals and independent of the disturbance
matrix F. The only common parameters are~ and~-

As the state controller is the same for both optimally estimated state
variables and exactly measurable state variables, one can speak of a
'certainty equivalence principle'. This means that the state controller
can be designed for exactly known states, and then the state variables
can be replaced by their estimates using a filter which is also design-
ed on the basis of a quadratic error criterion and which has minimal
variances. Compared with the directly measurable state variables the
control performance deteriorates (Eq. (15.1-14)), because of the time
delayed estimation of the states and their error variance [12.4].
15.3 Optimal State Controllers for External Disturbances 279

Note that the certainty equivalence principle is valid only if the con-
troller has no dual property, that means it controls just the current
state and the manipulated variable is simply computed so that future
state estimates are uninfluenced in any definite way [15.1]. A general
discussion of the separation and certainty equivalence principles is
in chapter 25.

15.3 Optimal State Controllers with State Estimation


for External Disturbances

In the design of the stochastic state controller of Eq. (15.2-5) a white


vector noise signal ~ (k) was assumed to influence. the state vector
~(k+1). As the output signal with ~(k) 0 satisfies

y(k) =f ~(k)

the internal disturbance ~(k) generates an output

Yv(k) = f ~v(k) (15.3-1)

with
(15.3-2)

The disturbance component yv(k) can also be generated by an external


disturbance signal 1(k) with the disturbance signal model

~s: <k> = f .!l <k> (15.3-3)

.!J.(k+1) = ~ .!J.(k) + ~ 1(k) (15.3-4)

see Figure 15.3.1. The covariance matrix of ~(k) is

cov[>
..2
(k) ,T=i-j J = :::
-
6 ...
~J
(15.3-5)

The generation of this disturbance signal model for external distur-


bances corresponds to the discussion of sections 8.2 and 8.7.2. If the
assumptions on 1(k) correspond to the assumption on ~(k), a filter es-
timates the state variables .!J.(k) of the disturbance signal model based
on measurements of ~s(k) or y(k), so that the components of the dis-
turbance signal 1(k) are optimally controlled as the signals ~(k) using
the state controller of Eq. (15.2-5).
280 15. State Controllers for Stochastic Disturbances

Figure 15.3.1 Stochastical ly disturbed process with a disturbance


model for external disturbances ~~(k)

Now will be discussed which disturbance signal filter

j = 1,2, •.. ,r (15.3-6)

can be realized with the disturbance model of Eqs. (15.3-3) and (15.3-4
Here we consider a process with one input and one output. Then, from
Eq. (3.2-50) for a disturbance signal n~j = n~, we have

~T[zl-~J-1r ~(zl = ~Tadj[zi-AJr ~(z). (15.3-7)


det[z!-~J

If F is now a diagonal matrix, it follows that

(15.3-8)

and, depending on the choice of fi' one obtains for each ~i (z) a dis-
turbance signal filter
Di ( z)
Gp.,-· (z) = -- i 1,2, ... ,m (15.3-9)
"1 A(z)

with

A ( z) (15.3-10)

!? (z) [Dm (z) .. . Di (z) ... o 1 (z) J


T
-cTadj[zi-A]F
---
. (15.3-11)
15.3 Optimal State Controllers for External Disturbances 281

It should be noted that the process satisfies

£Tadj[z.!_-~]£
G (z) = lJ!l
u(z)
= B(z)
A(z)
(15.3-12)
P

The denominators of GP(z) and GP~(z) are therefore identical, and the
polynomials Di(z) and B(z) contain common factors. The following exam-
ple will show possible forms of Di(z) for two canonical state repre-
sentations (c.f. section 3.6).

Example 15.3.1

Consider a second order process with transfer function


-1 -2
b 1z +b 2 z b 1 z+b 2
B(z)
A (z) 1+a 1 z
-1 +a z -2
2 z 2+a 1 z+a 2

a) Controllable canonical form

A(z)

B(z)
:][ ~]
With F as a diagonal matrix, one obtains

:,J
= [f 2 b 2 z + f 2 a 1 b 2 + a 2b 1].
f 1b 1 z + f 1b 2

Therefore with white disturbances ~i (k) as input, disturbance polyno-


mials

can be generated which satisfy the following conditions on their para-


meters:
282 15. State Controllers for Stochastic Disturbances

- ~;, (k) +0; ~; 2 <k> 0: f1 +0; f2 0

d11 f1b1

d21 f1b2

- ~;, (k) 0; 1;2 (k)


+0: f1 0; f2 +0
d12 f2b2

d22 f2a1b2 + a2b1.

d 1 i and d 2 i cannot be arbitrarily chosen because they are dependent on


each other, so that choice of one parameter fixes the other.

b) Observable canonical form

~ = [0
1
-a2]
-a 1
b
[~]
A(z)

B(z) [0 1 J !'z:al) -:2 ] r::J


[1 z] r::]
QT (z) [1

Hence only the following disturbance polynomials can be realized

o 1 (z) d21z f 1 z for I; 1 (k) +0; ~; 2 <kl 0

o 2 (z) d12 f2 for ~;, (k) 0; 1;2(k)


+o.
Here also d1i and d2i cannot be freely chosen; one of the two parame-
ters is always zero.
0

This example shows that with the assumption of white vector disturbance
signals ~(k) or ~(k) with independent disturbance signal components,
the parameters of the corresponding disturbance signal polynomials can-
not possess arbitrary values. This position changes, however, if the
disturbance signal components are equal:

I; 1 (k) = 1; 2 (k) = ••• = E;m (k) = I; (k) • (15.3-13)


15.3 Optimal State Controllers for External Disturbances 283

Then ~ changes into a vector

fT = [fro •.• f 2 f 1 ] ( 15.3-14)


and in example 15.3.1 we have

where
a) Controllable canonical form

b) Observable canonical form

The parameters of D can then be chosen independently. The assumption of


Eq. (15.3-13) means that all elements are equal in the covariance ma-
trix of the disturbance

ll (15. 3-15)

This covariance matrix is, however, positive semidefinite for cr~+ 0,


so that the assumptions of Eq. (15.1-1) are not violated, nor is the
assumption of Eq. (15.4-4) used in deriving the Kalman filter violated
by Eq. ( 15. 3-1 5) •

Until now F was assumed to be diagonal. If all elements are non-zero,


That means in the case of example 15.3.1

(15.3-16)

then arbitrary parameters d 1 , d 2 , ••• can be realized.

From this discussion, as in the discussions of section 8.2 and 8.7.2,


it follows that the state controller of Eq. (15.2-5) also becomes op-
timal for external correlated disturbances ~~(k) as a consequence of
the assumption of a white vector disturbance process v(k) where n (k)
- -~
is generated through the disturbance filter of Eqs. (15.3-3) and
(15.3-4) from ~(k). By the choice of the elements of f, disturbance
filters can be given in the form
284 15. State Controllers for Stochastic Disturbances

~ _ D(z) (15.3-17)
I; (z) - A(z)
with
(15.3-18)

where e.g.

£_(k) = [1 .•. 1 1 JTI; (k) (15.3-19)


or
~(k) = [1 •.. 1 1 ]Tv(k)
T
and F
-
= -f = [fm .•. f 2 f 1] . (15.3-20)

The parameters of ! and ~ or y and therefore the parameters of the dis-


turbance polynomial D(z) affect only the design of'the state filter but
not the design of the state controller. Therefore in the state filter
we must set either

v = - (15.3-21)

from Eq. (15.3-15) and F =! from Eq. (15.3-20), or all elements of r


must be properly chosen so that the stochastic correlated disturbance
signal nl;(k) can be optimally controlled.

15.4 State Estimation (Kalman Filter)

It is assumed that a stationary stochastic vector signal can be des-


cribed by the Markov process

~<k+1> =~ ~<k> + r ~<k> ( 1 5. 4-1)

with measurement equation

y(k) = £ ~(k) + g(k) or y(k+1) = £ ~(k+1) + g(k+1) (15.4-2)

see Figure 15.4.1 and section 12.2.1. This process will be extended
subsequently to include a measurable input ~(k). Here the following
symbols are used:
~(k) (mx1) state vector
~(k) (px1) vector input noise, statistically independent with
covariance matrix V
y(k) (rx1) measurable output vector
15.4 State Estimation (Kalman Filter) 285

x(O) n(k+1)

Figure 15.4.1 Stochastic vector Markov process y(k+1) (~ = Q) or a


dynamic process with measurable input u(k), output y(k+1)
and noise ~ (k) -

_!!(k) (rx1) vector output noise, statistically independent


with covariance matrix N
A (mxrn) system matrix
F (rnxp) input matrix
c (rxm) output matrix

~' ~ and C are assumed to be time invariant. The objective is to esti-


mate the state vector ~(k) based on measurements of the outputs y(k)
which are contaminated by white noise _!!(k). The following are assumed
known a priori

~' C and F
(15.4-3)

cov[~(k) ,T=i-j] v 0 l.J


.. (15.4-4)

E{_!!(k)} = Q

cov[_!!(k) ,T=i-jJ E{_!! (i) _!!T (j)} N ..


ol.J (15.4-5)

where
1 for i = j
{
0 for i f j

is the Kronecker delta-function. ~(k) and _!!(k) are statistically inde-


pendent. As the state estimates are time varying in most applications
one is interested in recursive estimation, in which the states ~(k)

are calculated after the measurement of y(k).

The derivation of the estimation algorithms can be based on several


estimation methods, for example
286 15. State Controllers for Stochastic Disturbances

the principle of orthogonality between estimation error


and measurement E{~ yT} = Q [15.4], [15.6], [15.2]
the recursive least squares method [15.7]

the minimum variance estimation [15.8]

the maximum likelihood method [15.8]

the Bayes method [3.12]

The following derivation uses minimum variance recursive estimation.


This method is appropriate as an introduction because it is transparent
and leads directly to a nice interpretation of the result.

15.4.1 Weighted Averaging of Two Vector Measurements

The resulting Kalman filter estimation algorithms form a weighted mean


of two independent vector estimates. Therefore it is assumed that ~ 1
and ~ 2 are two statistically independent estimates of an m-dimensional
vector ~· The weighted mean of these two estimates is

X (15.4-6)

where K' is an (rnxrn) weighting matrix which is to be chosen such that


the variance of the estimate x
becomes a minimum. Subsequently a dyna-
mic system is considered where ~ 1 is a state vector of dimension m and
instead of ~ 2 a measurable output vector y 2 with dimension p < m is
used. Therefore

(15.4-7)

is asserted. Then Eq. (15.4-6) becomes

K'

~1 + ~ [y 2 -cx 1 ]

[!-~ f]~1 + ~ Y2· (15.4-8)

The (rnxm) covariance matrix of ~1 is

T
.Q = E{ (~ 1 -E{~ 1 }) (~ 1 -E{~ 1 }) } (15.5-9)

and the (pxp) covariance matrix of y2 is

y (15.4-10)
15.4 State Estimation (Kalman Filter) 287

For the covariance matrix of the estimate~ Eq. {15.4-8) gives


A A A A T
p E{[~-E{~})[~-E{~}J}

E{({!-~ £)~1 + ~ y2- {!-~ £)E{~1}- ~ E{y2}J


T
[{!-~ £)~1 + ~ Y2- {!-~ £)E{~1}- ~ E{y2})}

(!- ~ £)2(!- ~ £)T + ~! ~T (15.4-11)

as ~ 1 and y 2 are statistically independent. Now a value of ~ is saught


which minimizes the variance of the estimation errors (the diagonal ele-
ments of~). To find this minimum without differentiation. Eq. {15.4-11)
is modified. By multiplying out it follows that

p =2 + (~ £)2(~ £)T- {~ £)2- 2{~ £)T + K Y KT

2 + ~(£ g £T+!)~T _ ~(£ 2 ) _ (£ 2 )T~T {15.4-12)

with g = 2T, because 2 is symmetric. Eq. {15.4-12) can be formed into


a complete square in~· Two new matrices Rand S are introduced [15.9]

(15.4-13)

If now we set

S ST =£ 2 £T + y (15.4-14)

S RT £ 2 {15.4-15)

then
{~ £ _ ~) (~ £ _ ~)T ~{£ 2 £T + !l~T _ ~{£ Ql _ (£ Q)T~T + ~ ~T.
(15.4-16)

This equation agrees with (~-2) except for g gT; see Eq. (15.4-12).
R RT can be obtained from Eq. {15.4-14) and Eq. {15.4-15) as follows

STS RT £T£ 2
RT (£T£)-1£T£ 2

R 2 £T£(£T£)-1
R RT 2 £T£(£T£)-1 (£T£)-1£T £ Q

w
STW S = (£T£) (£T£)-1(£T£)-1(£T£) I
S STW S ST =S I ST =S ST
288 15. State Controllers for Stochastic Disturbances

and with Eq. (15.4-14)

R RT =9 fT(f 9fT+ !l- 1£ 9· (15.4-17)

With Eqs. (15.4-16) and (15.4-17), Eq. (15.4-12) can be written as

(15.4-18)

T
In this equation only the term (~ ~ - ~) (~ ~ - ~) depends on K. The
diagonal elements of this term consist only of squares of elements of
(~ ~ - ~) and therefore can have only positive or zero values. The dia-

gonal elements of P become minimal only if (~ ~ ~) 0 or, using Eq.


(15.4-14) and Eq. (15.4-15)

K S R

K R ST(~ ~T)-1

K (15.4-19)
A
The minimum variance of the estimation error of x is then, using Eq.
(15.4-12)

( 15. 4-20)

and the estimate with minimum variance is

(15.4-8)

with K from Eq. (15.4-19).

15.4.2 Recursive Estimation of Vector States

The recursive weighted averaging of two vector variables described in


the last section is applied to the estimation of the state variable
~(k+1) of the Markov process Eq. (15.4-1) and Eq. (15.4-2). In Eq.
(15.4-8) the following are introduced:

~(k+1), the prediction of x(k+1) based on


~1
the last estimate ~(k)
y 2 = y(k+1), the new measurement

with the prediction

~(k+1) A ~(k) + F y(k). (15.4-21)


15.4 State Estimation (Kalman Filter) 289

Here the expectation ~(k) is used as the exact value ~(k) is unknown.
The recursive estimation algorithm is then

~(k+1) = ~(k+1) + !5_(k+1)[y(k+1) - _g_ ~(k+1) ]. (15.4-22)

To calculate the correction matrix !5_(k+1) the covariance matrices


Q(k+1) of ~(k+1) and~ of y(k+1) have to be known as in Eq. (15.4-19).
The covariance of the estimation error of ~(k) is

-P(k) = E{(x(k)-E{x(k)})(x(k)-E{ x(k)})T}.


- - - - (15.4-23)

Then from Eq. (15.4-21) with E{~(k+1)} = E{~(k+1)} we have

g(k+1) E{(~(k+1)-E{~(k+1J})(~(k+1)-E{~(k+1)})T}
~ ~(k) ~T + ~ y ~T (15.4-24)
A

as ~(k) and ~(k) are uncorrelated. Furthermore Eq. (15.4-2) gives

~(k+1) E{ (y (k+1) -E{y (k+1)}) ( y (k+1) -E{y (k+1)}) T}

E{~(k+1)~T(k+1)} = ~· (15.4-25)

Hence the correction matrix becomes from Eq. (15.4-19)

(15.4-26)

and with Eq. (15.4-20) the covariance matrix of the estimation error
of ~(k+1) becomes

~(k+1) = g(k+1) - !5_(k+1) _g_ g(k+1). (15.4-27)

If the prediction ~(k+1) given by Eq. (15.4-21) is inserted in Eq.


(15.4-22) and ~(k) = 0 is assumed, one obtains the recursive estimation
algorithm of the Kalman filter

~(k+1) ~ ~(k) + !5. (k+1) [y (k+1) - C A ~(k) J (15.4-28)


new
estimate
predicted
estimate,
correction
+ matrix [ oew
estimate
p<ed,oted
measurement,
J
based on old based on old
estimate estimate

Some additional remarks are appropriate:

Starting values
To start the filter algorithm, assumptions on the initial state have
to be made. If it is unknown

~(0) ~(0)
290 15. State Controllers for Stochastic Disturbances

is taken. The initial value of the covariance matrix ~(0) must also be
assumed. For properly chosen ~(0) and ~(0) their influence vanishes
quickly with time k, so that precise knowledge is unnecessary. See the
example 22.2.1 in [2.22].

The correction matrix


As the correction matrix is independent of the measurements it can be
calculated initially. After inserting Eq. (15.4-27) in Eq. (15.4-26) it
follows that

~(k+1) = ~(k+1) fT ~- 1 . (15.4-29)

For a stationary process ~(k+1) reaches a constant value fork~ oo. P


and 2 then become constant covariance matrices. They can be calculated
from the equation system

.Q-1 + CTN-1C (15.4-30)


A P AT + F V FT (15.4-31)

which follows from [3.12].

Block diagram
In Figure 15.4.1 the estimation algorithm is shown for y(k) = 0. The
Kalman filter contains the homogenous part of the process. The measured
y(k+1) and its model predicted value y(k+1) are compared and an error

~(k+1) y(k+1) - y(k+1) = y(k+1) - f ~(k+1)

y(k+1) - f ~ ~(k) (15.4-32)

is formed. This error causes a correction ~(k+1) to the predicted


~ (k+l).

Dynamical processes with a measurable input ~(k)

If a measurable stochastic or deterministic input u(k) acts on the pro-


cess via an input matrix B the prediction becomes

~(k+1) = ~ ~(k) + ~ ~(k) + F y(k) (15.4-33)

and after inserting into Eq. (15.4-22)

~(k+1) ~ ~(k) + ~ ~(k) +I v(k) + ~(k+1)[y(k+1)-f ~ ~(k)


- f ~ ~(kl - f I y(kl J. (15.4-34)
(J1
PROCESS
~
r- ------l {J)

1 rt
I PJ
rt
I (I)
I
I n(k+1) t<:l
[Jl

I I rt
f-'·
I
I ~~~+1) y(k+1) ~
I rt
I f-'·
0
I measured I ::s
output
I I :>::
PJ
I I f-'
!3
L _______ _______ _ _ _ ______ j PJ
::s
I - - - - - - - - - - - - -xTk+D -;ew - - - +- ~tk:1l ____ l '"'.1
f-'·
f-'
1 ~ .~ estimate rt
~ output error I (I)
I H
(redidual,
I innovation) I
I I
I
predicted
8lk+1) predicted new output
estimate ytk ... 1)

~~k~)correction of the predicted

I new estimate
I
L __ __....___ __ --------- ----'
KALMAN FILTER

Figure 15.4.2 Markov process with Kalman filter for the estimation of ~(k+1)
N
1.0
292 15. State Controllers for Stochastic Disturbances

Orthogonalities
In the original work of Kalman [15.4] the recursive state estimator
was derived by applying the orthogonality condition between the esti-
mates and the measurements

(15.4-35)

However, other orthogonalities exist for minimum variance estimates:

(15.3-36)

or with i<i) = f ~(i)

(15.3-37)

From these orthogonalities it follows that the error signal (residual,


innovation)

~<k+1> = x<k+1> - f ~ ~<k>

is statistically independent

(15.4-38)

Extensions to the original Kalman filter


Since 1960 many publications appeared which can be considered as exten-
ding the original Kalman filter. Among the extensions are

input noise ~(k) and measurement noise g(k) are correlated


input noise ~(k) is correlated (coloured)
measurement noise g(k) is correlated
influence of wrong initial values, covariance matrices and
process parameters on the convergence (divergence)
simultaneous estimation of unknown covariance matrices
nonlinear filter problem
- simultaneous state and parameter estimation
- nonlinear processes

These problems are treated for example in [15.2], [15.3], [15.9],


[15.10], [15.11]. A simulation example of a Kalman filter is given in
[2.22].
D Interconnected Control Systems

Up to now, when considering the design of controllers or control algo-


rithms it was assumed, with the exception of state controllers, that
only the control variable y determines the process input u. This leads
to single control loops. However, in chapter 4 it was mentioned that
by connecting additional measurable variables to the single loop - for
example auxiliary variables or disturbances - improved control behavi-
our is possible. These additions to the single loop lead to interaonnea-
ted aontro~ systems. Surveys of common interconnected control systems
using analogue control techniques are given for example in [5.14],
[16.2], [16.3]. The most important basic schemes use cascade control,
auxiliary control variable feedback or feedforward control.

In aasaade aontro~ and auxi~iary aontro~ variab~e feedbaak additional


(control) variables of the process, measurable on the signal path from
the manipulated variable to the controlled variable, are fed back to
the manipulated variable. The cascade control scheme uses an inner con-
trol loop and therefore involves a second controller. In the case of
the auxiliary variable, the differentiated auxiliary variable (conti-
nuous-time) is usually added to the input or the output of the con-
troller. Then, instead of a controller only a differentiating element
is necessary, which possibly needs no power amplification. When reali-
zing control schemes in digital computers the hardware cost is a small
fraction of the total, so we concentrate here on the cascade control
scheme. This also allows for a more systematic single loop design, so
only cascade control systems (chapter 16) and no other auxiliary vari-
able feedback scheme is considered. Also of significance is feedfor-
ward aontrot (chapter 17). In this case measurable external disturban-
ces of the process are added to the feedback loop.
16. Cascade Control Systems

The design of an optimal state controller involves the feedback of all


the state variables of the process. If only some state variables can
be measured, but for example only one state variable between the pro-
cess input and output, then improvements can be obtained for single
loop systems using for example parameter optimized controllers, by assu-
ming this state variable to be an auxiliary control variable y 2 which
is fed back to the manipulated variable via an auxiZiary aontroZZer, as
shown in Figure 16.1. Then the process part GPu 2 and the auxiliary con-
troller GR 2 form an auxiliary control loop whose reference value is the
output of the main aontroZZer GR 1 .

mam auxiliary
(major) (minor!

Figure 16.1 Block diagram of a cascade control system

The main controller forms the control error as for the single loop by
subtracting the (main} control variable y 1 from the reference value w1 .
The controlled plant of the main controller is then the inner control
loop and the process part GPu 1 . The auxiliary control loop is therefore
connected in cascade with the main controller. A cascade control sys-
tem provides a better control performance than the single loop because
of following reasons:
1} Disturbances which act on the process part GPu 2 ' that means in
the input region of the process, are already controlled by the
16. Cascade Control Systems 295

auxiliary control loop before they influence the controlled vari-


able y 1 •
2) Because of the auxiliary feedback system, the effect of parameter
changes in the input process part GPu 2 is attenuated (reduction
of parameter sensitivity by feedback, chapter 10). For the initial
design of controller GR 1 , only parameter changes in the output
process part GPu 1 need to be considered and the small changes in
the auxiliary control loop behaviour can be incorporated later.
3) The behaviour of the control variable y 1 becomes quicker (less
inert) if the auxiliary control loop leads to faster modes than
those of the process part GPu 2 .

The overall transfer function of a cascade control system can be deter-


mined as follows. For the reference value of the auxiliary loop as in-
put one has

y2(z) GR2(z)GPu2(z)
Gw2 (z) ( 16-1 )
w2 (z) 1+GR2(z)GPu2(z)

and for the behaviour of its manipulated variable

u (z) GR2 (z)


( 16-2)
w2 (z) 1+GR2(z)GPu2(z)

With

it follows for the behaviour of the plant of the main controller GR 1


that

( 16-3)

In addition to the plant GPu(z) of the single loop the plant of the
main controller of the cascade control system now includes a factor
which acts as an acceleration term. Therefore a 'quicker' plant results.
For the closed loop behaviour of a cascade control system it finally
results

w 1 ( z) 1+GR 1 ( z) GPu ( z)

GR1 (z)GR2(z)GPu(z)
( 16-4)
296 16. Cascade Control Systems

The design of cascade control systems depends significantly on the lo-


cation of the disturbances, so that each cascade control system should
be individually treated. A simple example shows the behaviour of such
a cascade control system.

Example 16. 1

The process under consideration has two process parts with the s-trans-
fer function

GPu2(s) (1+7 .Ss)

GPu 1 (s) (1+10s) (1+5s) ·

For a sample time T0 = 4 sec, the z-transfer functions are as in the


examples in section 3.7.2,

-1
0.4134z 0.4134
GPu2 (z) -1
1-0.5866z z-0.5866

-1 -2
0.1087z +0.0729z
GPu1 (z) -1 -2
1-1.1197z +0.3012z

-1 -2 -3
0.0186z +0.0486z +0.0078z
-1 +0.958z -2 -0.1767z -3
1-1.7063z

0.0186 (z+0.1718) (z+2.4411)


(z-0.5866) (z-0.6705) (z-0.4493)

Initially a P-controller is assumed as auxiliary controller

so that the closed loop behaviour of the auxiliary loop is

To obtain an asymptotically stable auxiliary loop its pole must lie


within the unit circle of the z-plane, giving

Therefore the gain of the P-controller satisfies


16. Cascade Control Systems 297

(Note that a proportional controller acting on a first order process


is not structurally stable in discrete time, unlike the continuous time
case.) If positive q 02 are chosen then with q 02 = 0.7 or q 02 = 1.3

0.2894 0.5374
Gw2(z) = z-0.2972 or z-0.0492"

The pole moves toward the origin with increasing q 02 reaching the ori-
gin for q 02 = 1.42. This shows that the settling time of the auxiliary
control loop becomes smaller than that of the process part GPu 2 • The
resulting closed loop behaviour of the cascade control system compared
with that of the single loop becomes better only for q 02 > 1.3. If q 02
is chosen too small then the behaviour of the cascade control system
becomes too slow because of a smaller loop gain compared with that of
the optimized main controller. Notice that the parameters of the main
controller were changed when the gain of the auxiliary control loop
changed. The gain of the auxiliary loop varies for 0 < q 02 ~ 1.3 by
0 < Gw 2 (1) ~ 0.54. It makes more sense to use a PI-controller as auxi-
liary controller

so that Gw 2 (1) = 1. The closed loop transfer function of the auxiliary


loop then becomes

With controller parameters q 02 = 2.0 and q 12 = -1.4 we have

0.8268(z-0.7000)
Gw2 (z) = (z-0. 7493) (z-0.0105) ·

One pole and one zero approximately cancel and the second pole is near
the origin. The settling time of y 2 (k) becomes smaller than that of the
process part GPu 2 (z), Figure 16.2a). The overall transfer function of
the plant of the main controller is given by Eq. (16-3)

0.0372 (z- (0.6433+0.0528i)) (z- (0.6433-0.0528i))


GPu (z) = (z-0. 7493) (z-0.0105)

(z+0.1718) (z+2.4411)
(z-0.5866) (z 0.6705) (z-0.4493) •
N
\D
co
w
..............
0 5 10 15 k
u

w2
1 ............... . 4

3
0 5 10 15 k
u u.w2 2
2

....................,._

Or---;---~---4--
0 5 10 15 k 0 5 10 15 k 5 10 15
y, ~
>'2 f G'pu
1 1+ • ••
0
0'1
1 t •
\ ••••••••••••
0 Glp • ~"I I • • •
&&•······
• 0 • 0
~ ()
'o 0 0
o 0
\G Pu PJ
Ul
()
• o·~ GPu PJ
0
A.
(1)
0~
10 15 k 5 10 15 k 5 10 15 k ()
5 0
::l
cT
11
a) Auxiliary control variable y 2 b) Control variable y 1 c) Control variable y 1 with 0
I-'
without main controller main and auxiliary controller
Ul
'<:
Figure 16.2 Transients of a control loop with and without cascade auxiliary controller. Ul
cT
The main controller has PID-, the auxiliary controller PI-behaviour. (1)

o o o without auxiliary controller • • • with auxiliary controller !3


Ul
16. Cascade Control Systems 299

The auxiliary control loop introduces the poles of Gw 2 and a conjugate


complex zero pair in addition to the poles and zeros of the process
GPu" Figure 16.2b) shows that therefore the plant of the main controll-
er becomes quicker. Finally a quicker but well damped overall behaviour
is obtained which, of course, requires large process input changes,
Figure 16.2c).

Table 16.2.1 shows that the parameters of the main controller are
changed by adding the auxiliary controller as follows: K larger, cD
smaller, ci larger.

Table 16.2.1 Optimized controller parameters of the main controller.


Design criterion Eq. (5.2-6) r = 0.1

3-PC3 r=0.1 qo q1 q2 K CD CI

without auxiliary 2.895 -4.012 1. 407 1. 488 0.9456 0. 1950


control loop

with auxiliary 2.6723 -3.3452 1 .036 1.6363 0.6330 0.2219


control loop

The control algorithms which have to be programmed for a PI-controlle r


as auxiliary controller and a PID-controll er as main controller are:

e 1 (k) w1 (k) - y 1 (k) (16-5)

w2 (k) w2 (k-1) + q01e1 (k) + q11e1 (k-1) + q21e1 (k-2) (16-6)

e 2 (k) w2 (k) - y2(kl (16-7)

u(k) u(k-1) + q02e2(k) + q12e2 (k-1). ( 16-8)

The relatively small additional cost for cascade control instead of a


single loop consists in measuring the variable y 2 (k) and in the algo-
rithms of Eqs. (16-7) and (16-8). All the treated controllers for pro-
cesses with one input and one output are suitable as auxiliary con-
trollers and main controllers. Therefore many combinations are possible.
A comprehensive investigation of cascade control with P- and PI-eon-
trollers for continuous time signals is described in [16.2], where it
is shown that as an auxiliary controller a P-controller and as a main
controller a PI-controlle r should be used. Furthermore, for disturban-
ces at the input, the auxiliary variable should be near the disturbance
location and for equally distributed disturbances along the process the
process part GPu 2 should have about half the order of the overall pro-
cess.
300 16. Cascade Control Systems

In discrete time the gain of the auxiliary aontroller must be reduced


because of the smaller stability region (see example 16.1). The auxi-
liary control loop therefore becomes slower and its offset is larger.
In addition the parameter changes of the process part GPu 2 have more
influence on the parameter tuning of the main controller. By adding an
I-term one obtains always Gw 2 (1) = 1 as the gain of the auxiliary loop
independently of any parameter change of the process part GPu 2 . Then
if the resulting PI-controller is tuned so that it is far enough away
from the stability limit, larger parameter changes of the first pro-
cess part need not be considered when designing the main controller,
provided that the dynamics of the auxiliary control loop are much quick-
er than those of the second process part.

As main aontrollers for example parameter optimized controllers, dead-


beat controllers or minimum variance controllers are suitable. For
their design the process plus the already tuned P- or PI-auxiliary
controllers can be considered together as one plant given by Eq. (16-3).
Using a state controller one should consider the auxiliary variable y 2
to be a directly measurable state variable and employ a reduced order
observer (see section 8.8) by inserting the directly measurable state
variable in place of an observed state variable given by a full order
observer (see section 8.7.2).

For two measurable auxiliary control variables a double-cascade control


system with two auxiliary controllers can be designed [16.1). If all
state variables are measurable then the multi-cascade control system
has the same structure as a state controller. From the theory of opti-
mal state control it is known that the single auxiliary controllers
have P-behaviour, chapter 8. Cascade control systems with P-controll-
ers can therefore be considered as first steps towards optimal state
control.

A practical advantage of the separation into an auxiliary controller


and main controller is in the resulting stepwise parameter adjustment
one after another. This is true both for applying tuning rules and for
computer aided design. For cascade control systems, first estimates
can be simply obtained of the parameters q 02 of the auxiliary controll-
er and q 01 of the main controller by prescribing the manipulated vari-
able u(O) in the case of a step in the reference value w 1 (o). Eqs.
(5.2-31) and (5.2-32) give
16. Cascade Control Systems 301

u(O) = q 02 w2 (o)
w2 (0) = q01w1 (0)

and therefore

( 16-9)

This relation can in particular be used to choose q 01 of a parameter


optimized main controller if the initial manipulated variable u(O) must
be adjusted to the manipulation range and if the parameter q 02 of the
auxiliary controller is already fixed.

Cascade control systems can often be applied. If in the input area of a


process an auxiliary control variable is measurable for control sys-
tems with higher performance requirements cascade control systems should
always be applied. They can be especially recommended in cases where a
valve manipulates flow. The gain of the valve is non-linear, as it de-
pends among others on the pressure drop across the valve which can chan-
ge siginificantly during operation. An auxiliary control loop with PI-
controller can compensate for these gain changes completely. Cascade
control systems should be applied more frequently in digital control
systems compared with analogue control systems, as the additional cost
of the auxiliary controller is small.
17. Feedforward Control

If an external disturbance v of a process can be measured before it


acts on the output variable y then the control performance with respect
to this disturbance can often be improved by feedforward control, as
shown in Figure 17.0.1. Here immediately after a change in the distur-
bance v the process input u is manipulated by a feedforward control G5
which does not wait as with feedback control until the disturbance
has effected the control variable y. Significant improvement in control
performance, however, can only be obtained for a restricted manipula-
tion range if the process behaviour GPu is not slow compared with the
disturbance behaviour GPv"

process
~---------1

v I
I
I
ly

feedforward
controller

Figure 17.0.1 Feedforward control of a single input/single output


process

When designing a control system one should always try to control the
effects of measurable disturbances using feedforward, leaving the in-
completely feedforward-controlled parts and the effect of unmeasurable
disturbances on the controlled variable to feedback control.

As feedforward does not influence the stability of a control loop in


the case of linear processes, feedforward control systems can be added
after the feedback controllers are tuned. In this chapter the following
design methods of feedforward control systems are treated:
17. Feedforward Control 303

If an element G5 can be realized such that the disturbance behaviour


GPv can be exactly compensated by G5 GPu' then after a change in the
(deterministic or stochastic) disturbance variable v there is no change
in the control variable y. This is ideal feedforward control: its rea-
lizability and other cancellation feedforward control systems are con-
sidered in section 17.1. Section 17.2 describes parameter-optimized
feedforward control systems, where the structure of the feedforward
element is fixed a pri.ori, and which are sui table for many more process-
es. Right from the onset one restricts the problem to nonideal feedfor-
ward control. Parameter-optimized feedforward control systems can be
designed for both deterministic and stochastic disturbances. State con-
trollers for external disturbances already contain ideal feedforward
control for part of the disturbance model. Feedforward control systems
for directly measurable state variable disturbances satisfy the state
control concept for external disturbances in the form of state variable
feedforward control, section 17.3. Finally, corresponding to minimum
variance control, minimum variance feedforward control for stochastic
disturbances can also be designed, section 17.4.

In the following it is assumed that mathematical models of the process-


es both for the process behaviour

~ z -d -1 -m
z
-d ( 17 .0-1)
u(z) 1+a 1 z + •.• +amz

and for the disturbance behaviour


d 0 +d 1 z- 1+ ... +d z-q
n(z)
(17.0-2)
v(z) 1+c 1 z- 1 + ..• + cqz-q

are known. For state feedback control the state model

~(k+1) A ~(k) + B ~(k) ( 17 .0-3)

y(k) £ ~(k) ( 17 .0-4)

is assumed to be known.
304 17. Feedforward Control

17.1 Cancellation FeedfoiWard Control

For ideal feedforward control we have

(17.1-1)

and therefore

o us (z) A(z- 1 )D(z- 1 )


G (z) = --
s v(z) B(z- 1 )z-dC(z- 1 )

f z-d+f z-( 1+d)+ +f z-(m+q+d)


d 1+d · · • m+q+d

(17.1-2)

Therefore the process behaviour must be completely cancelled by the


feedforward control element [17.1]. The feedforward element, however,
becomes simpler if the disturbance filter satisfies C(z- 1 ) A(z - 1 ).
Then
D(z- 1 )
Gso(z) = (17.1-3)
B(z- 1 )z-d

and only the numerator of the process transfer behaviour has to be can-
celled.

If these feedforward controls can be realized and are stable, the in-
fluence of the disturbance v(k) on the output y(k) is completely eli-
minated. One condition for the realizability of Eq. (17.1-2) is that
if the element h 0 is present an element f 0 must also be present, and
if h 1 is present f 1 must also be present, etc. This means that for the
assumed process model structure of Eqs. (17.0-1) and (17.0-2) d 0
and d 0 = 0 must always be fulfilled. Therefore one can assume d = 0
from the beginning if GPv(z) has no jumping property and does not al-
ways contain a deadtime d' ~ d. Then only the part B/A is cancelled.

To obtain stable feedforward control the roots zj of the denominator


F(z) must satisfy lz.l < 1, that means the zeros of B(z) and C(z) must
J
lie within the unit circle. Therefore ideal feedforward control is im-
possible for processes with deadtime and with a jumping property, or
for processes with zeros of the process or of the disturbance behavi-
17.1 Cancellation Feedforward Control 305

our on or outside the unit circle in the z-plane (nonminimum phase be-
haviour).

Example 17.1.1

As examples the feedforward control of three test processes I, II and


III with distinct process behaviour, but with identical disturbance be-
haviour are considered (see Tables 17.1.1, 17.1.2 and the appendix).

Table 17.1.1 Parameters of GPu(z) (process behaviour)

T0 [s] a1 a2 a3 b1 b2 b3 d

process I 2 -1.5 0.7 1.0 0.5

process II 2 -1.425 0.496 -0.102 0.173

process III 4 -1.5 0.705 -0.100 0.065 0.048 -0.008 1

Table 17.1.2 Parameters of GPv(z) (disturbance behaviour)

T 0 [s] c1 c2 d1 d2
for processes
2 -1 . 02 7 0.264 0.144 0.093
I and II

for process
4 -0.527 0.070 0. 385 0.158
III

Process I (second order with nonminimum phase behaviour; model of an


oscillator)
From Eq. (17.1-2) it follows that
-1 -2 -3
d 1 +(a 1 d 1 +d 2 )z +(a 1 d 2 +a 2 d 1 )z +a 2 d 2 z
Gso(z) = -1 -2 -3
b 1 +(b 1 c 1 +b 2 lz +(b 1 c 2 +b 2 c 1 )z +b 2 c 2 z

0.144-0.123z- 1 -0.039z- 2 +0.065z- 3

1.0-0.527z- 1 -0.237z- 2 +0.132z- 3

( z+O. 646) ( z- (0. 750+0. 369i)) (z- (0. 750-0. 369i))

(z+O. 494) (z- (0. 510+0.082i) (z- (0. 510-0.082i))

This transfer function can be realized and is stable. Figure 17.1.1


shows the behaviour of the manipulated variable for a step change in
the disturbance v(k). No change in tmoutput variable arises, as there
is complete compensation.
306 17. Feedforward Control

u{k)
0,15
- 0,133 ------.-.-.-. .................. .--
••

0,1 • •


- 0,05

0+++4~~++4-~--------~----
0 10 20 k

Figure 17.1.1 Time behaviour of the manipulated variable u(k) for


process I

Process II (second order with nonminimum phase behaviour)


For this process one obtains a real pole at z = 1.695, as the zero is
outside the unit circle for GPu(z), indicating unstable behaviour of
the feedforward element. Ideal feedforward control is therefore im-
possible.

Process III (third order with deadtime; model of a low pass process)
The feedforward element given by Eq. (16.1-2) is unrealizable for pro-
cess III because d + 0. If the deadtime din Eq. (17.1-2) is omitted,
that means only the deadtime-free term B/A is compensated, then

(z-0.675) (z-0.560) (z-0.264) 0.385(z+0.441)


Gso(z) =
0.065 (z+O. 879) (z-0.140) z 2 -0.527z+0.07

5.923-6.448z- 1 +0.526z- 2 +1.121z- 3 -0.243z- 4

1+0.212z- 1 -0.443z- 2 +0.117z- 3 -0.009z- 4

This feedforward element is realizable. However, the cancellation in


this case involves a large input amplitude which can be seen from the
alternative equation for the feedforward element

u 8 (0) = (0.385/0.065)v(O) = 5.923v(O).

D
17.2 Parameter-optimized Feedforward Control 307

These examples show that ideal feedforward control is often unrealizab-


le or leads to excessive manipulated variables. In this case one can,
R
as in chapter 6, add an additional realizability term G8 (z)

G8 (z) = G8
o (z) G R (z) (17.1-4)
8

which leads to transient deviations of the output variable y. When de-


signing cancellation controllers one can suitably prescribe the overall
behaviour

and hence calculate the feedforward control

Although this design is computationally simple, because of the arbitra-


riness in the prescription of Gv(z), the cancellation of poles and ze-
ros, and the untreated intersample behaviour, this procedure is not re-
commended just as with cancellation controllers. Therefore other design
procedures are considered.

17.2 Parameter-optimized Feedforward Control


When designing parameter-optimized feedforward control, one assumes a
fixed (realizable) structure, as in the design of parameter-optimized
controllers, i.e. the structure and order of the feedforward algorithm
are given and the free parameters are adjusted by parameter optimization
[17.1]. Here feedforward control structures

u 8 (z)
(17.2-1)
v(z)

are assumed. Because the structure need not be correct, one does not
obtain in general an ideal feedforward control, and transient devia-
tions may occur.

17.2.1 Parameter-optimized Feedforward Controls without a Prescribed


Initial Manipulated Variable

The unknown parameters of the algorithm


T I
.'! = [ho h1 ••. h1 I f 1 f2 ••• f 1 J (17.2-2)
I
308 17. Feedforward Control

are minimized by a cost function, e.g.


M
2
82
eu
l: [y2(k) + rl.lu (k) J (17.2-3)
k=O

hence

dS 2
eu 0. ( 17. 2-4)
da

Here the disturbance signal can be deterministic or stochastic. For the


change in the manipulated variable one must set

~u(k) = u(k) - u (17.2-5)

with
u = u(oo) final value for e.g. step disturbances

u = E{u(k)} expected value for stochastic disturbances

In many cases one obtains satisfactory feedforward control performance


for 1 $ 2. As the gain of the feedforward element satisfies

GPv(1) h 0 +h 1 +h 2
---- = Ks = (17.2-6)
GPu(1) 1+f 1 +f 2

then for 1 = 2 four parameters and for 1 1 only two parameters have
to be determined through optimization.

17.2.2 Parameter-optimi zed Feedforward Control with Prescribed Initial


Manipulated Variable

Now the response of u(k) to a step change of the disturbance variable


v(k) = 1 (k) is considered. For 1 = 2, Eq. (17.2-1) leads to the diffe-
rence equation

(17.2-7)

For v(k) = 1 (k) we have

u(O) ho
u ( 1) (1-f 1 )u(O) + h1
u (2) -f 1 u(1) + (1-f 2 )u(O) + h1 + h2 (17.2-8)

u(k) -f 1 u(k-1) - f 2 u(k-2) + u(O) + h1 + h2.


17.2 Parameter-optimized Feedforward Control 309

The initial manipulated variable u(O) equals h 0 or h 0 v(O). Therefore h 0


can be fixed simply by a suitable choice of u(O), so that a definite
manipulating range can be easily considered. By means of the given u(O)
the number of optimized parameters for 1 = 2 is reduced to three para-
meters and for 1 = 1 to one parameter. For 1 = 1 the equations, together
with Eqs. (17.2-8) and (17.2-6), become

ho u(O) (17.2-9)

u(1) - Ks
f1 (17.2-10)
u(O) - Ks
h
1
= u ( 1) - u(O) (1-f 1 ). (17.2-11)

Here u(1) can now be chosen as the single independent variable in the
parameter optimization, and its value is such that

dV (17.2-12)
0.
d(u(1)]

From stability considerations it follows that

(17.2-13)

and therefore from Eq. (17.2-10)

u(1) < u(O) (17.2-14)

and from Eq. (17.2-11)

(17.2-15)

Hence for 1 = 1, the design of the feedforward element with a prescrib-


ed initial manipulated variable u(O) leads to the optimization of a
single parameter, taking into account the restriction of Eq. (17.2-14).
The computational effort for parameter optimization in this case is par-
ticularly small. Improved gradient methods, described in section 5.2,
are recommended as suitable optimization methods when using a digital
computer. The truncation criterion must not be selected too small.

A parameter-optimized feedforward control of first order (1 = 1) with


a prescribed initial manipulated variable is now described for the
examples of processes II and III subjected to a step change in the dis-
turbance v (k) .
310 17. Feedforward Control

Examples 17.2.1

Process II

Figure 17.2.1 shows V[u(1) J and Figure 17.2.2 the corresponding time
responses of the manipulated and the controlled variable for u(O) = 1.5.
Because of the nonminimum phase behaviour, the initial deviation is in-
creased by feedforward control, the deviation for k ~ 3, however, is
improved.

Process III

Figure 17.2.3 shows V[u(1) J for different u(O). The minima are relative-
ly flat for large u(O). Figure 17.2.4 shows that the feedforward cont-
rol improves the behaviour fork ~ 2.
D

The above method for designing parameter-optimized feedforward control


systems is suitable for linear stable processes with either minimum or
nonminimum phase behaviour. The computational effort required for syn-
thesis is, however, larger than that for feedforward cancellation con-
trol. The parameter estimation for first or second order elements is
in general a simple computer aided design problem.

100
2:: y2 !kl
k=O
2,0

1,0

O+------------r~r-~.-
1,0 1,5 u(1)

Figure 17.2.1 Loss function V[u(1) J for process II


17.2 Parameter-optimized Feedforward Control 311

y(k)
1,0 - - - - - -o o o-o-o o-o C>--Q o-o -o- ~ o-o o - - - -
0~
0 ~
without
0

feedforward control
0

0,5
••

0 •
• •



0

••

u(k)
- 1,5

- 1,0
•• •• .....
---------~~J·~··········~
...

0,_~~~++~~----------~---------.-
0 10 20 k

Figure 17.2.2 Transient responses of the manipulated variable u(k} and


the output variable y(k) for the process II for f 1 =-0.8;
h 0 =1.5; h 1 =-1.3.
312 17. Feedforward Control

1,5 uiOl = 2,0

Figure 17.2.3
0~-------r~-----+-L------~------ Loss function V[u(1) J
0,5 1,0 1,5 20 ulll for process III

ylk)
1,0 - - - -0 -o-o-o o-o-o-o-o-o--o-o-o-o-o-o-- ~-
0

0 ~Without feedforward control


with
0,5

."/
Figure 17.2.4
• Transient responses of the
manipulated variable of

•••or.,~~~----
O~r+4-r+~.~4-~-.~.~ u(k) and output variable
•... 20 k y(k) for process III for
f 1=o.3; h 0 =3.o; h 1=-2.3.

ulk I

-3,0

-2,0


-1,0 •
-~ ..................................................... -

0+4~~~~~,_-----------r-----
0 10 20 k
17.4 Minimum Variance Feedforward Control 313

17.3 State Variable Feedfotward Control


It is assumed that measurable disturbances ~(k) influence the state ~a­

riables ~(k+1) as follows

~(k+1) A ~(k) + B ~(k) + ~v(k)

~v(k) F ~(k) } (17.3-1)


y(k) £ ~(k).

If the state variables ~(k) are directly measurable, the state variable
deviations ~v(k) are acquired by the state controller of Eq. (8.1-33)

~(k) = -! ~(k)

one sample interval later, so that for state control additional feedfor-
ward control is unnecessary. With indirectly measurable state variables,
the measurable disturbances ~(k) can be added to the observer. For ob-
servers as in Fig. 8.7.1 or Fig. 8.7.2 the feedforward control algo-
rithm is

!:_~(k) ori(k+1) !:. ~(k). (17.3-2)

17.4 Minimum Variance Feedfotward Control


Corresponding to minimum variance controllers, feedforward control with
minimum variance of the output variable y(k) can be designed for mea-
surable stochastic disturbances v(k). Here, as in the derivation of
the minimum variance controller in chapter 14 for processes without
deadtime, the quadratic cost function

I(k+1) = E{y 2 (k+1) + r u 2 (k)} (17.4-1)

is minimized. One notices that the manipulated variable u(k) can at the
earliest influence the output variable y(k+1), as b 0 = 0. The deriva-
tion is the same as for minimum variance control, giving Eqs. (14.1-5)
to (14.1-12). The only difference is that v(k) is measurable, and as
result instead of a control u(z)/y(z) = ... , a feedforward control
u(z)/v(z) = ... is of primary interest.

Eq. ( 1 4. 1-1 2) implies

z y(z) - A z v(z) + ~ u(z) 0. (17.4-2)


b1
314 17. Feedforward Control

In this case for z y(z) Eq. (14.1-5) is introduced, and for feedforward
control it follows that

u(z)
GSMV1 (z) v(z) -1 -1 r -1 -1 (17.4-3)
zB(z )C(z l+t-A(z )C(z )
1
This will be abbreviated as SMV1.

If r = O, then

-1 -1 -1
G (z) = _ hA(z )[D(z )-C(z )] (17 .4-4)
SMV2 zB(z-1)C(z-1)

If C(z- 1 ) = A(z- 1 ) then it follows from Eq. (17.4-3) that

-1 -1
_ h[D(z )-A(z )]
GSMV3 (z)
zB(z- 1 )+: 1
A(z- 1 )
(17.4-5)

and for r = 0

_ J.z[D(z -1 )-A(z -1 ) ]
GSMV4(z) (17.4-6)
-1
zB(z )

The feedforward control elements SMV2 and SMV4 are the same as the mi-
nimum variance controllers MV2 and MV4 with the exception of the factor
A. As the discussion of the properties of these feedforward controllers
is analogous to that for the minimum variance controllers in chapter 14,
in the following only the most important points are summarized.

Since the minimum variance feedforward controller cancels the poles


and zeros of the process, as in the minimum variance feedback controll-
er, there exists the danger of instability in the cases given by Table
14.1.1. For SMV1 and SMV2 the roots of C(z- 1 ) = 0 must lie within the
unit circle so that no instability occurs.

The feedforward control SMV1 affects the output variable in the follow-
ing way:

~
v(z)

AD (z -1 ) [ 1 + zB (z -1 )
-1 r -1
[ C (z -1 ) _ 1
-1
J] •
(17.4-7)
C(z- 1 ) zB(z )~A(z ) D(z )

C(z- 1 ) = A(z- 1 ) has to be set only for SMV3. When r + oo, Gv(z) +
AD(z- 1 )/C(z- 1 ), and the feedforward control is then meaningless. For
17.4 Minimum Variance Feedforward Control 315

r 0 i.e. SMV2 or SMV4, one obtains, however,

(17.4-8)

This means that the effect of the feedforward control is to produce


2
white noise y(z) = AV(z) with variance A at the output. For processes
with deadtime d the derivation of the minimum variance controller is
identical with Eqs. (14.2-2) to (14.2-9). In Eq. (14.2-9) one has only
to introduce Eq. (14.2-4) to obtain the general feedforward element

u(z) AA ( z - 1 ) L ( z - 1 )
8 SMV1d(z) v(z) -1 -1 r -1 -1 (17.4-9)
zB(z )C(z l+tA(z )C(z )
1
or with r = 0
AA ( z - 1 ) L ( z - 1 )
8 SMV2d(z) -1 -1 . (17.4-10)
zB(z )C(z )

8 SMV3d(z) -1 r -1 (17.4-11)
zB(z l+tA(z )
1
or for r 0
AL(z- 1 )
8 SMV4d(z) -1 . (17.4-12)
zB(z )

The resulting output variable is, for feedforward controllers SMV2d


and SMV4d

(17.4-13)

Therefore, as with minimum variance feedback control, a moving average


process of order d given by Eq. (14.2-19) is generated. With increasing
deadtime the variance of the output variable increases rapidly, as in
Eq. (14.2-20). The feedforward controller G8 MV 4 (z) was first proposed
by [25.9].
E Multivariable Control Systems

18. Structures of Multivariable Processes


Part E considers some design methods for linear discrete-time multiva-
riable processes. As shown in Figure 18.0.1 the inputs ui and outputs
y. of multivariable processes influence each other, resulting in mutual
J
interactions of the direct signal paths u 1 -y 1 , u 2 -y 2 , etc. The internal
structure of multivariable processes has a significant effect on the
design of multivariable control systems. This structure can be obtained
by theoretical modelling if there is sufficient knowledge of the pro-
cess. The structures of technical processes are very different such
that they cannot be described in terms of only a few standardized struc-
tures. However, the real structure can often be transformed into a ca-
nonical model structure using similiarity transformations or simply
block diagram conversion rules. The following sections consider special
structures of multivariable processes based on the transfer function
representation, matrix polynomial representation and state representa-
tion. These structures are the basis for the designs of multivariable
controllers presented in the following chapters.

___........____
-
' ' -----
"""~-- -..._ ...- ---/

""/
v'-
/
/
><._-- ..--.._ -

/
.
r--
-----

-----
/Y~
/// ',"--
l..v? .....::_,
Up Yr

Figure 18.0.1 Multivariable process


18.1 Structural Properties of Transfer Function Representations 317

18.1 Structural Properties of Transfer Function


Representations
18.1.1 Canonical Structures

As an example the block diagram of the evaporator and superheater of a


steam generator with natural circulation is considered as shown in Fig.
18.1.1. The controlled variables of this two-variable process are the

pressure
fuel valve ~Mr b.Ms,vi evaporator ~Pdr gauge
U2 Msm Pdr y2

Gw G13 G14 G,s


~Ms

disturbance superheater
fi Iter ~tvls
,----------,
~s I I
I
I
G,6 G7
~qG I
qG I
I
I
injection Gs G6 I temperature
valve I gauge
u, t.~si -~~so Y,
I
G, G2 G3 I G4
I I
L_. _ _ _ _ _ _ _ _ j

Figure 18.1.1 Block diagram of the evaporator and superheater of ana-


tural circulation steam generator [18.5], [18.6].

steam pressure y 2 in the drum and steam temperature y 1 at the super-


heater outlet. The manipulated variables are the fuel flow u 2 and spray
water flow u 1 • Based on this block diagram the following continuous-
time transfer functions can be derived.

y1 (s)
Superheater: G11 (s)
u 1 (s)

y2(s)
Evaporator
u 2 (s)
318 18. Structures of Multivariable Processes

Coupling
superheater-evaporator: G12 (s)

Coupling y 1 (s)
evaporator-superheater: G21 (s) = -- = G 10 (s) G5 (s) G6 (s) G4 (s)
u 2 (s)

G11 and G22 are called the 'main transfer elements' and G12 and G21 the
'coupling transfer elements'. Assuming that the input and output signals
are sampled synchroneously with the same sample time T 0 , the transfer
functions between the samplers are then combined before applying the
z-transformation, as in appendix giving
y 1 ( z)
G11 (z) = = G1G2G3G4 (z)
u1 ( z)

y2(z)
u 2 (z)
(18.1-1)
y2 (z)

u 1 ( z)

y 1 ( z)

u 2 (z)

This example shows that there are common transfer function elements in
this input/output representation. The transfer functions can be summa-
rized in a transfer matrix Q(z)

Y 1 (z)l G11 (z)G21 (z)l u 1 (z)l


[ [ [
y2(z) G12(z)G22(z) u 2 ( z)

X (z) Q(z) ~(z). (18.1-2)

In this example the number of inputs and outputs are equal, leading to
a quadratic transfer matrix. If the number of inputs and outputs are
different, the transfer function becomes rectangular. It should be no-
ted that the transfer function elements describe only the controllable
and observable part of the process. The non-controllable and non-obser-
vable process parts cannot be represented by transfer functions, as
known.

The most important canonical structures used to describe the multivari-


able process' input/output behaviour are shown in Fig. 18.1.2 [18.1].
18.1 Structural Properties of Transfer Function Representations 319

a) P-canonical structure b) v-canonical structure

Figure 18.1.2 Canonical structures of multivariable processes shown for


a twovariable process

In the case of the P-canonical structure each input acts on each out-
put, and the summation points are at the outputs; P-canonical multiva-
riable processes are described by Eq. (18.1-2). Changes in one trans-
fer element influence only the corresponding output, and the number of
inputs and outputs can be different. The characteristic of the V-cano-
nical structure is that each input acts directly only on one correspon-
ding output and each output acts on the other inputs; this structure
is defined only for the same number of inputs and outputs. Changes in
one transfer element influence the signals of all other elements. For
a twovariable process with v-canonical structure we obtain the follow-
ing equation

[:;]"[:11 G:J {[:; l + [:, G~1j[:;J}


and in generalized form

y_ QH {~ + QK ~}. ( 18 .1-3)

QH is a diagonal matrix which contains the main elements. In QK the


coupling elements are summarized; its diagonal elements are zero. As an
explicit representation y_ = f(~) we obtain

y_ = ( I - QH2K J-1 QH~' (18.1-4)

The transfer matrix of a v-canonical process is therefore

G ( I - 2H§K J-1 QH. (18.1-5)


320 18. Structures of Multivariable Processes

It exists if det[!-QHQK] + 0. Using Eq. (18.1-5) a V-canonical struc-


ture can be converted to a P-canonical structure. Conversely s square
P-canonical structure can be converted into a V-canonical structure as
follows. Here Eq. (18.1-2) must be written in a form which corresponds
to Eq. (18.1-3) [18.4], by splitting up Q(z) into a matrix QH containing
only the diagonal elements, and into a matrix QN which contains the re-
maining elements, yielding

If G is non-singular, we have

u = Q-1 Y.
so that

( 18. 1-6)

Comparing with Eq. (18.1-3) we have

(18.1-7)

Both canonical forms can therefore be converted to each other, but rea-
lizability must be considered. For a twovariable process the calcula-
tion of the transfer function elements is for example given in [18.2].

If the behaviour of multivariable processes has to be identified on the


basis of nonparametric models, as for example using nonparametric fre~

quency responses or impulse responses, then one obtains only the trans-
fer behaviour in a P-canonical structure. If other internal structures
are considered, proper parametric models and parameter estimation me-
thods must be used.

The overall structure describes only the signal flow paths. The actual
behaviour of multivariable processes is determined by the transfer
functions of the main and coupling elements including both their signs
and mutual position. One distinguishes between symmetricaZ muZtivaria-
bZe processes, where

Gii (z) Gjj(z) i 1I 2I • • •

}
Gij (z) Gji (z) j 1I 2I • • •

and non-symmetricaZ muttivariabte processes, where

Gii (z) +Gjj (z)


Gij (z) +Gji (z).
18.1 Structural Properties of Transfer Function Representations 321

With regard to the settling times of the decoupled main control loops
stow process etements Gii can be coupled with fast process etements Gij"
With lumped parameter processes signals can only appear at the input or
output of energy, mass or momentum storages. The main and coupling ele-
ments often contain the same storage components, so that a main trans-
fer element and a coupling transfer element possess some common trans-
fer function terms. Hence Gii ~ Gij or Gii ~ Gji can often be observed.

18.1.2 The Characteristic Equation and Coupling Factor

To describe some further structure-conditioned properties of multiva-


riable processes, we use as a simple example a twovariable process with

l [:;:::]
a P-structure of Eq. (18.1-2) connected with a twovariable controller

[ u1 (z)l - R11 (z)


[ 0
0
u 2 (z) -R22 (z)

~(z) _!3:(z) y(z) (18.1-8)

which consists of only two main controllers. The sample time is assumed
to be equal and sampling to be synchroneous for all signals. Further-
more w1 w2 = 0. Then we have

[! - §_(z)_!3:(z) J _x(z) = Q ( 18. 1-9)

",,.,, l [y'] [:l


or

[1+G11R11
G12R11 1+G22R22 Y2

After multiplying we obtain

(1+G 11 R11 Jy 1 + G21R22Y2 0

G12R11Y1 + (l+G22R22)y2 0.

If the first equation is solved for y 2 and introduced into the second
equation, one obtains

Therefore the characteristic equation of the twovariable control system


becomes

[1+G 11 (z)R 11 (z) J(1+G 22 (z)R 22 (z) J - G12 (z)R 11 (z)G 21 (z)R 22 (z) = 0.

(18.1-10)
322 18. Structures of Multivariable Processes

For the characteristic equation Eq. (18.1-9) shows that

det [! - Q(z)~(z) J= 0. (18.1-11)

The expressions 1+G 11 R11 and 1+G 22 R22 are the characteristic polynomials
of the uncoupled single control loops arising from the main transfer
elements and in the main controllers. The term -G 12 R11 G21 R22 expresses
the influence of the coupling between both single control loops by the
coupling elements G12 and G21 on the eigenbehaviour. This term describes
the affect on the characteristic equations of the single loops induced
by the coupling elements. If G12 = 0 and/or G21 = 0 the coefficients of
the single control loops are unchanged.

We now consider another representation of the characteristic equation


of the twovariable control system. For this Eq. (18.1-10) is written in
the form

0.

The transfer functions with the reference variables as inputs are in-
troduced
Gii (z) Rii (z)
i 1 '2 (18.1-12)
1+Gii (z)Rii (z)

so that

(18.1-13)

The term (1-KGw 1Gw 2 ) = 0 contains additional eigenvalues arising from


the influence of the couplings, where
G12(z)G21(z)
K (z) = (18.1-14)
G11 (z)G22 (z)

is the dynamic coupling factor. Eq. (18.1-13) shows that the eigenva-
lues of a multivariable system in P-structure consist of the eigenva-
lues of the single main control loops and additional eigenvalues cau-
sed by the couplings G12 and G21 . Again, if G12 = 0 and/or G21 = 0 the
eigenvalues of the twovariable control system are identical to those of
the single uncoupled loops. From Eq. (18.1-10) it follows after divi-
sion by (1+G 22 R22 l that

1 + G11 R11 (1-KGw 2 ) = 0 (18.1-15a)

or after division by (1+G 11 R11 l


18.1 Structural Properties of Transfer Function Representations 323

( 18 .1-15b)

Under the influence of the coupled control loop the controlled "process"
of the main controller changes as follows:

G11 + G11{l-KGw2)

G22 + G22{l-KGw1)

(see Fig. 18.1.3a). A second transfer path GiiKGwj appears in parallel


to the controlled main process element Gii"

Now the change in the gain of the controlled "processes" caused by the
coupled neighbouring control loop is considered. For the controller
Rii(z) the process gain is Gii(1) in the case of the open loop j and
G .. {1)[1-K 0 G .{1) J in the case of the closed neighbouring loop. The
1.1. WJ
factor [1-K 0 G . (1) J = 8 .. describes the change of the gain through the
WJ 1.1.
coupled neighbouring loop. Ko is called the static coupling factor
G12{ 1 )G21 ( 1 ) K12K21
{18.1-16)
K{1) = G11{1)G22(1) K11K22.

This coupling factor exists for transfer elements with proportional be-
haviour, or integral behaviour if there are two integral elements Gii{z)
and G .. {z). In Fig. 18.1.4 the factor 8 .. is shown as a function of K0 •
l.J 1.1.
For an open neighbouring loop j, 8ii = 1 is valid. If the neighbouring
loop is closed, following cases can be distinguished [18.7]:

1) Ko < 0 negative coupling + > 1

2) Ko > 0 positive coupling

1
a) 2: K0 > o
G . {1)
WJ
1
b) Ko > G . { 1) + 8 ..
1.1.
< o.
WJ
Therefore a twovariable process can be divided into negative and posi-
tive coupled processes. In case 1), the gain of the controlled "process"
increases by closing the neighbouring loop, so that the controller gain
must be reduced in general. In case 2a), the gain of the controlled
"process" decreases and the controller gain can be increased. Finally,
in case 2b) the gain of the controlled "process" changes such that the
sign of the controller Rii must be changed. Near 8ii ~ 0 the control
of the variable yi is not possible in practice.
324 18. Structures of Multivariable Processes

+
I
I
I
a) i
i
L - - - - - - - - - - - - - - - - - - - - - _j

b)

Figure 18.1.3 Resulting controlled "process" for the controller R 11

Ejj=(1-X 0 Gwj(1 ))

0 I
I
negative coupled positive coupled

without I with
Sign ·change

Figure 18.1.4 Dependence of the factor Eii on the static coupling fac-
tor Ko for twovariable control systems with P-canonical
structure
18.1 Structural Properties of Transfer Function Representations 325

As the coupling factor K(z) depends only on the transfer functions of


the processes including their signs, the positive or negative couplings
are properties of the twovariable process. The path paralleling the main
element G11 , see Fig. 18.1.3 b), generates an extra signal which is
lagged by the coupling elements. If these coupling elements are very
slow, then the coupled loop has only a weak effect. For coupling ele-
ments G12 and G21 which are not too slow compared with G11 , a fast cou-
pled loop 2 has a stronger effect on y 1 than a slow one.

18.1.3 The Influence of External Signals

The dynamic response of multivariable processes to external disturban-


ces and reference values depends on where these signals enter and whe-
ther they change one after another or simultaneously. The following ca-
ses can be distinguished, using the example of a twovariable process,
as in Fig. 19.0.1.:

a) The disturbance v acts on both loops

Then one has n 1 Gv 1 v and n 2 = Gv 2 v. This is the case for example


for changes in the operating point or load, which results mostly in
simultaneous changes of energy, mass flows or driving forces. Gv 1
and Gv 2 can have either the same or different signs.

b) The disturbances n 1 and n 2 are independent

Both disturbances can either change simultaneously, as for example


for statistically independent noise. They can, however, also appear
sequentially, as for occasional deterministic disturbances.

c) Reference variables

The reference variables w1 and w2 can be changed simultaneously,


f w2 (k). They can, of course, also be changed
w1 (k) = w2 (k) or w1 (k)
independently.

In the exampZe of the steam generator of Fig. 18.1.1 these cases corres-
pond to the following disturbances:

a) - changes in steam flow following load changes


changes in calorific value of the fuel (coal)
- contamination of the evaporator heating surface
326 18. Structures of Multivariable Processes

b) n 1 : - contamination of the superheater surface


- change in the steam input temperature of the
final superheater caused by disturbances of
the spraywater flow or temperature
n 2 : - changes in feedwater flow

c) In the case of load changes the reference variables w1 and w2 can be


changed simultaneously, particularly in gliding pressure operation,
but single changes can also occur.
The most frequent disturbances for this example act simultaneously
on both loops. These disturbances tend to have the largest amplitude.

18.1.4 Mutual Action of the Main Controllers

Depending on the external excitation and the transfer functions of the


main and coupling elements the main controllers may mainZy reinforee or
mainZy eounteraet eaeh other [18.7]. With a step disturbance, v acts
simultaneously on both loops, Fig. 19.0.1, such that Gv 1 and Gv 2 have
the same sign and that all main and coupling elements have low pass be-
haviour and a P-structure; Table 18.1.1 shows 4 corresponding groups
of sign combinations, derived from inspection of signal changes in the
block diagram of the initial control variable response, where the lar-
gest deviations occur in general. The separation of the groups depends
on the signs of the quotients

Their product yields the static coupling factor K0 • Therefore for posi-
tive coupling KO > 0 the groups

I) R11 reinforces R22 , R22 reinforces R11


II) R11 counteracts R22 , R22 counteracts R11

and for negative coupling K0 < 0

III) R11 reinforces R22 , R22 counteracts R11


IV) R11 counteracts R22 , R22 reinforces R 11

can be distinguished. I f Gv 1 and Gv 2 have different signs, in Table


18.1.1 the sign combinations of groups I and II or groups III and IV
must be changed. The disturbance transfer function
G ..
G - ____1!. G .G .
vi Gjj WJ VJ
G . v
vyl. 1+G .. R .. (1-KG .)
1.1. 1.1. WJ
18.1 Structural Properties of Transfer Function Representations 327

Table 18.1.1 Mutual effect of the main controllers as a function of the


sign of the main and coupling elements for a step distur-
bance v, simultaneously acting on both loops. G 1 and G 2
have the same sign. From [18.7]. v v

sign of mutual action of


coupling group
K11 K22 K21 K12 main controllers

+ + + + I)
K21
+ - - + reinforcing
K22
> 0

- + + - K12
> 0
- - - - K11
positive
Ko > 0 + + - - II)
K21
+ - + - counteracting K22
< 0

- + - + K12
< 0
- - + + K11

+ + - + III)
K21
+ - + + R11 reinforces R22 -- < 0
K22
- + - - R22 counteracts R11 K12
> 0
negative - - + - K11
K0 < o
+ + + - IV)
K21
+ - - - R11 counteracts R22 > 0
K22
- + + + R22 reinforces R11 K12
< 0
- - - + K11

shows that the response of the controlled variable is identical for the
different sign combinations within one group. If only one disturbance
n 1 acts on the output y 1 (and n 2 = 0), then the action of the neighbou-
ring controller R22 is given in Table 18.1.2. The controller R22 coun-
teracts the controller R11 for positive coupling and reinforces it for
negative coupling.
328 18. Structures of Multivariable Processes

Table 18.1.2 Effect of the main controller R22 on the main controller R11
for one disturbance n1 on the controlled variable y 1 . Sign
combinations and groups as in Table 18.1.1.

coupling effect of R22 on R11 group

positive counteracting I
Ko > 0 counteracting II

negative reinforcing III


Ko < 0 reinforcing IV

After comparing all cases


- G
v1
and G
v2
have same sign
- G
v1
and G
v2
have different sign
- G
v1
= 0; Gv2
+ 0 or G
v2
= 0; G
v1 +0
it follows that there is no sign combination which leads to only rein-
forcing or only counteracting behaviour in all cases. This means that
the mutual effect of the main controllers of a twovariable process al-
ways depends on the particular external excitation. Each multivariable
control system must be individually treated in this context.

As an example again the steam generator in Fig. 18.1.1 is considered.


The disturbance elements have the same sign for a steam change, so that
Table 18.1.1 is valid. An inspection of signs gives the combination
-+++ and we have therefore group IV. The superheater and evaporator are
negatively coupled and K0 = -0.1145. The steam pressure controller R22
reinforces the steam temperature controller R11 , c.f. [18.5]. However
R11 counteracts R22 only unsignificantly, as the coupling elenent G8
in Fig. 18.1.1 has relatively low gain. Also the calorific value distur-
bances act on both outputs with the same sign, so that the same group
is involved.
18.2 Structural Properties of the State Representation 329

18.1.5 The Matrix Polynomial Representation

An alternative to the transfer function representation of linear multi-


variable systems is the matrix polynomial representation [18.10]
-1 -1
~(z )y(z) = .!?_(z )~(z) (18.1-17)

with
+A z-m
~0 + -m (18.1-18)
-m
+ -m
B z .

If ~(z- 1 ) is a diagonal poylnomial matrix one obtains for a process


with two inputs and two outputs

-1
A 1~ ( z )
[
0
A22(z
-1
)
J
(18.1-19)

This corresponds to a P-canonical structure with common denominator


polynomials of G11 (z) and G21 (z) or G22 (z) and G12 (z) -compare with
Eq. (18.1-2). More general structures arise if off-diagonal polynomi-
als are introduced into ~(z- 1 ).

18.2 Structural Properties of the State Representation

Extending the state representation Eq. (3.6-16) 1 Eq. (3.6-17) of linear


single-input/single-output processes to linear multivariable processes
with p inputs ~(k) and r outputs y(k) 1 the following equations are ob-
tained:

~ (k+1) A ~(k) + B ~(k) (18.2-1)


330 18. Structures of Multivariable Processes

.Y (k) C ~(k) + D ~(k). (18.2-2)

Here

~(k) is an (mx1) state vector


~(k) is a (px1) control vector
.Y (k) is an (rx1) output vector
A is an ( rnxm) systems matrix
B is an (mxp) control matrix
c is an (rxm) output (measurement) matrix
D is an (rxp) input-output matrix.

The state representation of multivariable systems has several advanta-


ges over the transfer matrix notation. For example, arbitrary internal
structures with a minimal number of parameters and noncontrollable or
nonobservable process parts can also be described. Furthermore, on
switching from single-input/single-output processes to multivariable
processes only parameter matrices ~, f and Q have to be written instead
T
of parameter vectors ~ and Q and the parameter d. Therefore the analy-
sis and design of controllers for single-input/single-output processes
can easily be extended to multi-input/multi-output processes. However,
a larger number of canonical structures exists for multivariable pro-
cesses in state form. The discovery of an appropriate state structure
can be an extensive task.

To set a first view of the forms of the matrices ~, ~ and f and the cor-
responding structures of the block diagram, we consider three examples
of a twovariable process as in section 18.1.

a) A twovariable process with direct couplings between the state


variables of the main transfer elements

Fig. 18.2.1 shows two main transfer elements for which the state vari-
ables are directly coupled by the matrices ~~ 2 and ~; 1 . This means phy-
sically that all storages and state variables are parts of the main
transfer elements. The coupling elements have no independent storage

W"
or state variable. The state representation is

[~,, (k+1)] f~'-~-~~


(k+1)
~22 ~12 I ~22
(k)l •
~22(k)
[£,
Q £~,]
[u, (k)l
u 2 (k)
(18.2-3)

[ y1 (k)l
[£!' £i,] [x, (k) l (18.2-4)
y2(k) ~22 (k) .
18.2 Structural Properties of the State Representation 331

,..------, ~11 (k +1)


!==::::::>Q=~>II z-1

Figure 18.2.1 Twovariable process with direct couplings between the


state variables of the main elements

The matrices ~ 11 and ~ 22 of the main transfer elements become diagonal


blocks and the coupling matrices ~~ 2 and ~; 1 nondiagonal blocks of the
overall system matrix ~· The main transfer elements can be put into one
of the canonical forms of Table 3.6.1. The coupling matrices then con-
tain only coupling parameters in a corresponding form and zeros.

b) A twovariable process with a P-canonical structure

Analogously to Fig. 18.1.2 a) a twovariable process with P-canonical


structure is shown in Fig. 18.2.2. Different storages and state variab-
les are assumed for both the main elements and the coupling elements,
with no direct couplings between them. The state representation then
becomes

~11(k+1) ~11 0 0 0 ~11 (k) !?.11 0

~12(k+1) 0 ~12 0 0 ~12 (k) !?.12 0


+ [u 1 (k)] (18.2-5)
~21 (k+1) 0 0 ~21 0 ~21 (k) 0 !?.21 u 2 (k)

~22 (k+1) 0 0 0 ~22 ~22(k) 0 !?.22


332 18. Structures of Multivariable Processes

l
T
~11(k)j
0 ~12(k)
~21 (18.2-6)
T ~21(k)
~12 0
~22(k)

In this case all matrices of the main and coupling elements occur in A
as diagonal blocks.

c) A twovariable process with a V-canonical structure

A twovariable process in a V-canonical structure as in Fig. 18.2.3 with


different storages and state variables of the various transfer elements
leads to

~11(k+1) 0 ~11 (k) e.11


~12(k+1) ~12(k) 0
+
~21 (k+1) ~21 (k) 0

~22(k+1) 0 ~22(k)

(18.2-7)

~11(k)j
0

0
0

0
l ~12(k)
~21 (k)
~22 (k) •
(18.2-8)

In addition to the matrices of the main and coupling transfer elements


in the block diagonal 4 coupling matrices appear for this V-canonical
structure as for the direct coupling, Eq. (18.2-3). The matrices B and
C are also similar.

Theoretical modeling of real processes shows that multivariable proces-


ses rarely show these simplified structures. In general mixtures of
different special structures can be observed.
18.2 Structural Properties of the State Representation 333

Figure 18.2.2 Twovariable process with a P-canonical structure

Figure 18.2.3 Twovariable process with a V-canonical structure


334 18. Structures of Multivariable Processes

If the state representation is directly obtained from the transfer


functions of the elements of Figure 18.1.2 some multiple state variab-
les are introduced if the elements have common states, as in Eq.
(18.1-1) for example. Then the parameter matrices have unnecessary pa-
rameters. However, if the state representation is derived taking into
account common states so that they do not appear in double or multiple
form, a state representation with a minimal number of states is obtain-
ed. This is called a minimaZ reaZization. A minimal realization is both
controllable and observable. Nonminimal state representations are there-
fore either incompletely controllable and/or incompletely observable.
Methods for generating minimal realizations are given for example in
[18.9], [18.3].

After discussion of some structural properties of multivariable pro-


cesses, some design methods for multivariable control systems are given
in the following chapters.
19. Parameter-optimized Multivariable
Control Systems

Parameter-optimized multivariable controllers are characterized by a


given controller structure and by the choice of free parameters using
optimization criteria or tuning rules. Unlike single variable control
systems, the structure of a multivariable controller consists not only
of the order of the different control algorithms but also of the mutu-
al arrangement of the coupling elements, as in chapter 18. Correspon-
ding to the main and coupling transfer elements of multivariable pro-
cesses, one distinguishes main and coupling controllers (cross con-
trollers) . The main controllers Rii are directly dedicated to the main
elements Gii of the process and serve to control the variables yi close
to the reference variables wi, see Figure 19.0.1 a). The coupling con-
trollers R .. couple the single loops on the controller side, Figure
~]
19.0.1 b) to d). They can be designed to decouple the loops completely
or partially or to reinforce the coupling. This depends on the process,
the acting disturbance and command signals and on the requirements on
the control performance.

The coupling controllers can be structured in P-canonical form, before,


parallel or behind the main controllers; corresponding arrangements are
also possible in V-canonical form. When realizing with analogue devi-
ces, the arrangement of the coupling controllers depends on the posi-
tion of the controller's power amplifier. However, when implementing
control algorithms in process computers, all the structures of Figure
19.0.1 can be used. In the following, two-variable processes are con-
sidered because of the corresponding simplification and practical re-
levance. These considerations can be extended easily to include more
than two control variables.
336 19. Parameter-optimized Multivariable Control Systems

a)

b)

c)

d)

Figure 19.0.1 Different structures of two-variable controllers


a) main controllers
b) coupling controllers behind of main controllers
c) coupling controllers parallel to main controllers
d) coupling controllers before of main controllers
19.1 Parameter Optimization of Main Controllers 337

19.1 Parameter Optimization of Main Controllers


without Coupling Controllers

Chapter 18 has already shown that there are many structures and combi-
nations of process elements and signs for twovariable processes. There-
fore general investigations on twovariable processes are known only for
certain selected structures and transfer functions. The control behavi-
our and the controller parameter settings are described in [19.1],
[19.2], [19.3], [19.4], [19.5] and [18.7] for special P-canonical pro-
cesses with continuous-time signals. Based on these publications, some
results which have general validity and are also suitable for discrete-
time signals, are summarized below.

For twovariable processes with a P-canonical structure, synchroneous


sampling and equal sample times for all signals, the following proper-
ties of the process are important for control (see section 18.1):

a) Stability, modes

o transfer functions of the main elements G11 , G22 and coupling


elements G12 , G21 :
- symmetrical processes
G11 G22

G12 G21
- asymmetric processes
G11 + G22
G12 + G21
o coupling factor
G12 (z)G21 (z)
-dynamic K(z)
G11 (z)G22 (z)

K12K21
- static
K11K22
negative coupling

positive coupling

b) Control behaviour, controller parameters


in addition to a):

o influence of disturbances, see Fig. 19.0.1:


- disturbance v acts simultaneously on both loops
(e.g. change of operating point or load)
338 19. Parameter-optimized Multivariable Control Systems

n1 = Gv1v and n2 = Gv2v


G and G have the same sign
v1 v2
G and G have different signs
v1 v2
- disturbances n 1 and n 2 are independent
n 1 and n 2 act simultaneously
n 1 and n 2 act one after another (deterministic)

o change of reference variables w1 and w2 :


J w1 (k) w2 (k)
simultaneously lw 1 (k) +
w2 (k)

- one after another

o mutual action of the main controllers:


- R 11 and R22 reinforce each other
- R11 and R22 counteract each other
- R11 reinforces R22 , R22 counteracts R 11
- R 11 counteracts R22 , R22 reinforces R 11 •

In the case of sampled signals the samp~e time To may be the same in
both main loops or different. Synchroneous and nonsynchroneous sampling
can also be distinguished.

The next section describes the stability regions and the choice of con-
troller parameters for P-canonical twovariable processes. The results
have been obtained mainly for continuous signals, but they can be qua-
litatively applied for relatively small sample times to the case of
discrete-time signals.

19.1.1 Stability Regions

Good insight into the stability properties of twovariable control sys-


tems is obtained by assuming the main controllers to have proportional
action and by considering the stability limits as functions of both
gains KR 11 and KR 22 •

For a symmetriaa~ twovariable process with P-canonical structure, con-


tinuous-time signals and transfer functions
K ..
ij = 11, 22, 12, 21 (19.1-1)
Gij(s) = (1+;;)3

the stability limits are shown in Figure 19.1.1 and 19.1.2 for positive
and negative values of the coupling factor [19.1]
19.1 Parameter Optimization of Main Controllers 339

2
KR11 "
KR11k '"
t 1.5
unstable

0.5

----
0 0.5 1.5 . 2
KR22
KR22k
Figure 19.1.1 Stability regions of a symmetrical twovariable control
system with negative coupling and P-controller s [19.1]

unst able

0.5

-
0 0.5

Figure 19.1.2 Stability regions of a symmetrical twovariable control


system with positive coupli ng and P-controller s [19.1]
340 19. Parameter-optimized Multivariable Control Systems

K =
0

The controller gains KRii are related to the critical gains KRiiK on
the stability limit of the noncoupled loops, i.e. KO = 0. Therefore the
stability limit is a square with KRii/KRiiK = 1 for the noncoupled
loops. In the case of increasing magnitude of the negative coupling
K0 < 1 an increasing region develops in the middle part and also the
peaks at both ends increase, Figure 19.1.1. For an increasing magnitude
of positive coupling K0 > 1 the stability region decreases, Figure 19.1.2,
until a triangle remains for K0 = 1. If K0 > 1 the twovariable system
becomes monotonically structurally unstable for main controllers with
integral action, as is seen from Figure 18.1.3 a). Then Gw 1 (0) = 1 and
Gw 2 (o) = 1 and with K0 = 1 a positive feedback results. If Ko > 1 the
sign of one controller must be changed, or other couplings of manipula-
ted and controlled variables must be taken. Figures 19.1.1 and 19.1.2
show that the stability regions decrease with increasing magnitude of
the coupling factor, if the peaks for negative coupling are neglected,
which are not relevant in practice.

Figure 19.1.3 shows- for the case of negative coupling- the change
of the stability regions through adding to the P-controller an integral
term (+ PI-controller) and a differentiating term (+ PID-controller) .
In the first case the stability region decreases, in the second case it
increases.

The stability limits so far have been represented for continuous-time


signals. If sampled-data controllers are used the stability limits
differ little for small sample times T 0 /T 95 ~o.01. However, the stabili-

ty regions decrease considerably for larger sample times, as can be


seen from Figure 19.1.4. In [19.1] the stability limits have also been
given for asymmetrical processes. The time constants, Eq. (19.1-1),
have been changed so that the time periods T pl. of the uncoupled loops
with P-controllers satisfy Tp 2 /Tp 1 > 1 at the stability limits. Figure
19.1.5 shows the resulting typical forms of stability region. Based on
these investigations and those in [18.7] twovariable control systems
with P-canonical structure and lowpass behaviour show following pro-
perties:

a) For negative coupling, stability regions with peaks arise. Two


peaks appear for approximately symmetric processes. Otherwise there
is only one peak.
19.1 Parameter Optimization of Main Controllers 341

------,
I
I
I
I
PID I

2
-~
K'R22k
Figure 19.1.3 Stability regions of a symmetrical twovariable system with
negative coupling Ko=-1 for continuous-time P-, PI- and
PID-controllers [19.1]
PI-controller : TI = TP
Pro-controller: T 1 = TP T 0 = 0.2 TQ
Tp: time period of one oscillation Yor KRii = KRiiK
(critical gain on the stability limit), see figure in
Table 5.6.1

2 KR22
KR22k
Figure 19.1.4 Stability regions tor the same twovariab~e system as Fig.
19.1.3. However discrete-time P-controllers with different
sample time T0 .
342 19. Parameter-op timized Multivariable Control Systems

increasing asymmetry

/
'Ko
~ t? I
I
I

/
/
/
/I
increasing
positive
coupling

b_ // b/// b
I /

/ ..----v
0 - /
/
- ----- ---- - < l - - Qncouple d

increasing

~
negative
coupling

-1
tL
tl_
L
- 00

-
2 00

!P2
T p1

Figure 19.1.5 Typical stability regions for twovariable control systems


with P-controllers ;
T .: period of the uncoupled loops at the stability limit
pl. [19.1].
19.1 Parameter Optimization of Main Controllers 343

b) For positive coupling, large extensions of the stability region a-


rise with increasing asymmetry.

c) With increasing asymmetry, i.e. faster loop 1, the stability limit


approaches the upper side of the square of the uncoupled loops. This
means that the stability of the faster loop is influenced less by
the slower loop.

The knowledge of these stability regions is a good basis for developp-


ing tuning rules for twovariable controllers.

19.1.2 Optimization of the Controller Parameters and Tuning Rules for


Twovariable Controllers

Parameter-optimized main controllers


-1 -v
q0i+q1iz + ... +qviz
(19.1-2)
1-z- 1

mostly with v = 1 or 2, can be designed by numerical optimization of


performance indexes, pole assignment or by tuning rules. Contrary to
single variable control, the controlled variables can be weighted diffe-
rently, for example by using a performance criterion

p M 2 2
l: a. l: [e. (k) + rit.ui (k) ]. (19.1-3)
i=1 ~ k=O ~

Here, the a.i are weighting factors for the main loops, with Za.i 1.
If these have a unique minimum

0 (19.1-4)
dq

leads ~ the optimal controller parameters

T
51 = ( qo 1 ' q 11 ' · • · ' qv 1 ; · · · ; qop' q 1 p' · • · ' qvp J• (19.1-5)

However, the required computational effort increases considerably with


the number p of controlled variables. Good starting values of the con-
troller parameters can lead to quicker convergence. The results depend
very much on the signals acting on the system, as in section 19.1.
344 19. Parameter-optimized Multivariable Control Systems

Tuning rules for parameter-optimized main controllers have been deve-


loped for the main controllers of twovariable systems. They depend on
the Ziegler-Nichols rules and have been obtained for continuous-time
signals [ 19. 1 J, [ 19.2 J, [ 19. 3 J, [ 19.4 J, [ 19.5 J and [ 18.7 J. An addition-
al requirement in practice is that one loop remains stable if the other
is opened. Therefore the gains must always satisfy KRii/KRiiK < 1 and
can only lie within the hatched areas in Figure 19.1.6.

A A
KR11k

~11k 12 c'

@ KR22k

KR11k KR11k

Figure 19.1.6 Allowable regions of controller gains for twovariable


systems.
Negative coupling: a) symmetrical b) asymmetrical
Positive coupling: c) symmetrical d) asymmetrical

Based on the stability regions, the following controller parameter tu-


ning rules can be derived. The cases a) to d) refer to Figure 19.1.6.

1. Determination of the stability limits


1.1 Both main controllers are switched toP-controllers.
1.2 Set KR 22 = 0 and increase KR 11 until the stability limit KR 11 K
is reached + point A.
1.3 Set KR 11 0 and search for KR 22 K +point B.
1.4 Set KR 11 KR 11 K and increase KR 22 until a new oscillation with
constant amplitude is obtained + point C for a) and b).
1.5 If r.o intermediate stability occurs, KR 22 is increased for
KR 11 = KR 11 K/2 +point C' in case c) and d).
1.6 In case a) and b) item 1.4 is repeated for KR 22 KR 22 K and
19.2 Decoupling by Coupling Controllers (Non-interaction) 345

changing KRll +point D for a).

Now a rough picture of the stability region is known and also which
case a) to d) is appropriate.

2. Choice of the gain KRii(P) for P-controllers

a) I f the control performance of y 1 is more important:


KR11 = 0.5 KR11K KR22 = 0 · 5 KR22C
If the control performance of y 2 is more important:
KR22 = 0.5 KR22K KR11 = 0.5 KR11D
b) The parameters are generally chosen within the broader branch of
the stability region:

KR11 0 • 5 KR11K KR22 0.5 KR22C

c) KR11 0.25 KR11K KR22 0.5 KR22C'

d) KR11 0.5 KR11K KR22 0.5 KR22K

3. Choice of the parameters for PI-controllers


Gain: as for P-controller
Integration time:
a) + b): Tiii (0. 8 1.2)TpC or Tiii 0. 85 TpiiK

c) + d): Tiii (0. 3 0.8)TpC or Tiii 0.85 TpiiK

TpC or TpiiK are the time periods of the oscillations at the stabi-
lity points c or A for i = 1 orB for i = 2.

4. Choice of the r:arameters for PID-controllers


KRii (PID) 1 . 25 KRii (P)

Tiii (PID) o. 5 Tiii (PI)

TDii 0.25 Tiii (PID)

These tuning rules can only give rough values; in many cases correc-
tions are required. Though the rules have been given for controllers
for continuous-time signals, they can be used in the same way for dis-
crete-time controllers. The principle of keeping a suitable distance
to the stability limit, remains unchanged.

The dynamia response of different twovariable control systems with P-


canonical structure has been considered in [18.7]. In the case of si-
multaneous disturbances on both controlled variables, the coupling
factor K0 , positive or negative, has no major influence on overshoot,
346 19. Parameter-optimized Multivariable Control Systems

damping, etc. The control behaviour depends much more on the mutual
effect of the main controllers (groups I to IV in Table 18.1.1). If
the system is symmetric, the control becomes worse in the sequence
group I + III ~ IV + II, and if it is asymmetric in the sequence group
III + I + IV + II. The best control resulted for negative coupling if
R 11 reinforces R22 and R22 counteracts R11 , and for positive coupling
if both controllers reinforce each other. In both cases the main con-
troZZer of the sZower Zoop is reinforced. The poorest control is for
negative coupled processes, where R11 counteracts R22 and R22 reinfor-
ces R11 , and especially for positive coupling with counteracting cont-
rollers. In these cases the main controZZer of the sZower Zoop is coun-
teracted. This example also shows that the faster loop is influenced
less by the slower loop. It is the effect of the faster loop on the
lower loop which plays significant role.

A comparison of the control performance of the coupled twovariable sys-


tem with the uncoupled loops gives further insight [18.7]. Only small
differences occur for symmetrical processes. For asymmetrical process-
es it is shown that the control performance of the slower loop is im-
proved by the coupling, if its controller or both controllers are re-
inforced. The loops should then not be decoupled. The control perfor-
mance becomes worse if both controllers counteract, or if the controll-
er of the slower loop ~s counteracted. Only then should one decouple,
i.e. especially for positively coupled processes with counteracting
controllers.

19.2 Decoupling by Coupling Controllers


(Non-interaction)
If the coupled control system has a poor behaviour or if the process
requires decoupled behaviour, decoupling controllers can be designed
in addition to the main controllers. Decoupling is generally only
possible for definite signals. A multivariable control system as in
Figure 19.2.1 is considered, with dim y =dim~= dim~· External sig-
nals v and w result in

(19.2-1)

G G
-v -w
whereas for missing external signals the modes are described by
19.2 Decoupling by Coupling Controllers (Non-interaction) 347

n
y

Figure 19.2.1 Multivariable control system

L! + QpBJ y = Q. (19.2-2)

Three types of non-interaction can be distinguished [18.2], [19.6).


a) Non-interaction for reference signals
The reference variable wi influences only the controlled variable yi
but not the other y .• Then
J
-1
Qw = L! + QpB] QpB = !!w (19.2-3)

must be a diagonal matrix.

b) Non-interaction for disturbance signals


A disturbance vi influences only yi, but not the other yj. Then

(19.2-4)

must be diagonal.

c) Non-interaction of modes
The modes of the single loops do not influence each other if the
system has no external disturbance. Then the elements of y are de-
coupled and Eq. (19.2-2) leads to the open loop matrix
(19.2-5)

which must be diagonal. A system which has independent modes is al-


so non-interacting for reference inputs.

The diagonal matrices can be freely chosen within smme limits. The
transfer function can be given for example in the same way as for un-
coupled loops. Then the coupling controllers Rij can be calculated and
checked for realizability. As a decoupled system for disturbances is
difficult to design and is often unrealizable [18.2] in the following
only independence of modes which also leads to the non-interaction for
reference variables, is briefly considered.
348 19. Parameter-optimi zed Multivariable Control Systems

Eq. (19.2-5) gives

adj ~P
R .Qo· (19.2-6)
det ~P

The choice of the elements of _Q0 and the structure of ~ is arbitrary


if the reailizability conditions are satisfied and the process inputs
are of acceptable size. Some cases are briefly discussed.

a) P-structure process and P-like structure controller


The process transfer matrix is, see Eq. (18.1-2)

:::]
G = [G11
-P
G12
and the controller matrix is

R = [ R11 R21]·
R12 R22

The controller becomes due to Eq. (19.2-6)

(19.2-7)

If D describes the response of the uncoupled loops, n 11


n 22 = G22 R2 , then

G22G11R1 -G21G22R2J·
R [ (19.2-8)
-G12G11R1 G11G22R2

If realizability problems occur D must be changed.

b) P-structure Erocess with V-structure controllers


The decoupling scheme of Figure 19.2.2 gives with

R
-H
= [ R11
0 ~K
= [ :12
R~1l
R:J
the overall controller

(19.2-9)

Decoupling of modes for reference signals is attained, see Eq.


19.2 Decoupling by Coupling Controllers (Non-interaction) 349

Figure 19.2.2 Non-interaction of a P-canonical process by V-canonical


decoupling after the main controllers

Figure 19.2.3 Non-interaction of a V-canonical process by P-canonical


decoupling after the main controllers
350 19. Parameter-optimized Multivariable Control Systems

(19.2-6) 1 if

(19.2-10)

is satisfied. Hence for a twovariable system with Dii G .. R.


~~ ~

(19.2-11)

(19.2-12)

The decoupling is very simple. The main controllers do not require


any additional term and the coupling controllers are independent of
the main controllers. R 12 and R21 are not realizable if the order
of the process main elements is higher than the order of the coup-
ling elements or if they have zeros outside of the unit circle of
the z-plane. An inspection of the block diagram shows that the equa-
tions of the coupling controllers correspond to ideal feedforward
controllers.

c) V-structure process with P-structure controller


Decoupling according to Figure 19.2.3 again leads to simple rela-
tionships

(19.2-13)

(19.2-14)

No realizability problem occurs.

19.3 Parameter Optimization of the Main and Coupling


Controller
Section 19.1 showed that the couplings in a twovariable process may
deteriorate or improve the control compared with uncoupled processes.
Coupling controllers should therefore decouple in the first case and
reinforce the couplings in the latter case. This has been considered
in [18.7]. As coupling controllers P-controllers may often be suffi-
cient

The gains can be found by numerical parameter optimization. Simulation


studies have shown that for low pass processes in P-structure, coup-
ling controllers show no improvement for symmetrical processes.
19.3 Parameter Optimization of the Main and Coupling Controller 351

For asymmetrical processes improvements are possible. The coupling con-


trollers should reinforce the coupling, if the main controllers rein-
force each other and should decouple if the main controllers counter-
act.
20. Multivariable Matrix Polynomial Control
Systems

Based on the matrix polynomial representation of multivariable process-


es described in section 18.1.5 the design principles of some single in-
put/single output controllers can be transferred to the multivariable
case with equal numbers of process inputs and outputs.

20.1 The General Matrix Polynomial Controller

The basic matrix polynomial controller is

(20.1-1}

l
with polynomial matrices
-1 -1
f(z } = R0 + f 1z +
(20.1-2)
Q(z-1} = 9o + 21Z-1 +

The manipulated variables can be calculated from

(20.1-3}

if f(z- 1 } is nonsingular. Substituting into the process equation


-1
~(z )y(z) = !(z -1 )z -d~(z} (20.1-4)

leads to the closed loop system

y(z) = [~(z-1)+!(z-1)E_-1 (z-1)Q(z-1)z-df1


•!(Z-1)E_-1(z-1)Q(z-1)Z-d!(Z). (20.1-5)

Comparison with Eq. (11.1-3) indicates the formal correspondence with


the single input/single output case.
20.2 The Matrix Polynomial Deadbeat Controller 353

20.2 The Matrix Polynomial Deadbeat Controller

It is assumed that all polynomials of the process model have order m


and that all inputs are delayed by the same deadtime d. A deadbeat con-
troller then results by requiring there to be a finite settling time of
m+d for the process outputs and of m for the process inputs if step
changes of the reference variables ~(k) are assumed. For the SISO case
this gave the closed loop responses, c.f. section 7.1,

Y..{&
w(z)

u(z)
w(z)

and the deadbeat controller

A direct analogy leads to the design equation for the multivariable


deadbeat controller (MDB1) [20.1]

(20.2-1)

This controller results in the finite settling time responses

,!!(Z) ~- 1 ( 1 )~(z - 1 )~(z) (20.2-2)

y(z) ~-1 ( z -1 ) ~ ( z -1 ) z -d~-1 ( 1 ) ~ ( z -1 ) ~ ( z) = ~ ( z -1 ) ~ ( z) (20.2-3)

if ~(z- 1 ) has a finite order of m+d. The controller equation can also
be written as

(20.2-4)

To decrease the amplitudes of the process inputs the settling times


can be increased. If the settling time is increased by one unit to m+1
and m+d+1 the SISO deadbeat controller becomes, c.f. Eq. (7.2-14),

-1 -1
q 0 [1-z /a]A(z )
u(z)
e(z) = 1-q0 (1-z
-1
/a)B(z
-1
)z
-d

with 1/a = 1-1/q0 B(1). q 0 can be arbitrarily chosen in the range

1/ ( 1-a 1 ) B ( 1) :;; q 0 :;; 1/B ( 1)

so that u(1) :;; u(o).


354 20. Multivariable Matrix Polynomial Control Systems

The smallest process inputs are obtained for

q 0 = 1/(1-a 1 )B(1)

which means that

1/a. = a1•
The multivariable analogy (MDB2) is

(20.2-5)

with

(20.2-6)

9Q can arbitrarily be chosen in the range

( 1) [ J-1 d = _B -1 ( 1 )
2omin = ~ -1 !-~1 an Qomax (20.2-7)

satisfying ~(1) = ~(0) for 90min· For the smallest process inputs,
~(1) = ~(O), this requires that

Q
-0
= B- 1 [I-A
- - -1
f 1 (20.2-'8)

yielding

H ~1· (20.2-9)

20.3 Matrix Polynomial Minimum Variance Controllers

A stochastic matrix polynomial model

-1 -1 -d -1
~(z >x<zl = ~(z )z ~(z) + Q(z )!(z) (20. 3-1)

is assumed, with

(20.3-2)

A generalized minimum variance controller is obtained by minimizing


the criterion [20.1]

I (k+d+1) E{[y(k+d+1)-~(k) JT[y(k+d+1)-~(k) J


-t· [~(k)-~w(k) JT ~[~(k)-~w(k) j} (20.3-3)

with R RT positive semidefinite. u (k) is the offset steady-state


-w
20.3 Matrix Polynomial Minimum Variance Controllers 355

value of u(k)

u (k) = B- 1 (1)A(1)w(k).
-
(20.3-4)
-w - -
Corresponding to Eq. (14.2-4), the process and signal model is split
up into

z (d+1) Y. (z)

(20.3-5)
where the new matrix polynomials are defined by

(20.3-6)

(20.3-7)

Their parameters are determined by

(20. 3-8)

Eq. (20.3-5) is now transformed into the time domain and analogously
to Eq. (14.1-7) to Eq. (14.1-10) I(k+d+1) is obtained. Then di(k+d+1)/
a~(k) = 0 is computed, resulting in

~ T[
-1 (z -1 )[~(z -1 )z~(z)+~(z -1 )~(z) J- ~(z) ] + ~[~(z)-~w(z) J = 0
1 ~
(20.3-9)

where v(z) can be replaced (reconstructe d) by

-1 -1 -1 -1 -d (20. 3-10)
~(z)=Q (z l[~(z )y_(z)-~(z )z ~(z)].

After introducing Eq. (20.3-4) the generalized matrix polynomial mini-


mum variance controller (MMV1) is found to be

~(z) [f.(z-1)Q-1 (z-1)~(z-1)z+(~~)-1~J-1·


T -1 -1 i1 -1 -1 -1 -1 -1 -1
{[_!+(~ 1 l ~ ~ (1)~(1j~(z)-~ (z )!!_(z )Q (z )~(z )y_(z)}.

(20.3-11)

If ~ = Q is set, the minimum variance controller (MMV2) results from


Eq. (20.3-9) and Eq. (20.3-1), [20.2],
-1
.!:!. (z) = ~-1 (z -1) : (d+1) r~(z -1) [~(z) -y (z) ]+[Q(z -1) -~(z -1) ]~(z)J
1-z
(20.3-12)
where ~(z) must be reconstructed from Eq. (20.3-10}. This controller
yields for the closed-loop system

_l(Z) = f_(z- 1 )~(z) + z-(d+1)~(z). (20.3-13)


Examples are given in section 25.8.
21. Multivariable State Control Systems

The state controller for multivariable processes was designed in chap-


ter 8. Therefore only a few additional comments are made in this chap-
ter. The process equation considered in the deterministic case is

~ (k+1) ~ ~(k) + B ~(k) (21-1)

y(k) = c ~(k) (21-2)

with m state variables, p process inputs and r process outputs. The op-
timal steady-state controller is then

~(k) = - ~ ~(k) (21-3)

and possesses pxm coefficients if each state variable acts on each pro-
cess input.

21.1 Multivariable Pole Assignment State Controllers

For the closed-loop system

~(k+1) = [~ - ~ ~) ~(k) K ~(kl (21.1-1)

the characteristic equation is

det[~ ! - ~ + ~ ~J det[~ ! - E_]


(21 .1-2)

c.f. section 8.3. The pxm controller coefficients, however, cannot be


determined uniquely by assigning the m coefficients ai of the charac-
teristic equation. Therefore additional requirements can be taken in-
to account. As shown in [2.22] specializations of the structures of K
or F ease the calculation of the coefficients ai. For example, only
the state variables of the main transfer elements can be fed back or,
additionally, the cross interactions can be taken into account by
21.3 Multivariable Decoupling State Controllers 357

"feedforward" actions. For such simplified state controller structures


a unique determination of controller coefficients based on given coeffi-
cients ai of the characteristic equation is possible. For a further dis-
cussion of pole assignment controllers see for example [2.19].

21.2 Multivariable Matrix Riccati State Controllers

This controller has been derived in section 8.1. The minimization of


the quadratic performance criterion yields the steady-state solution of
the matrix-Riccati Eq. (8.1-31)

(21.2-1)

21.3 Multivariable Decoupling State Controllers

A multivariable state-control system for which the outputs y do not in-


fluence one other

y(k+1) =!::. y(k) !::. f ~(k) (21.3-1)

11 • • • 1 r o

~non-interacting system, c.f. section 19.2),can be obtained by compa-


ring with [21.1]

y(k+1) = c ~(k+1) f[~ - B .!5_]~(k) (21.3-2)

resulting in

(21.3-3)

where the parameters Ai determine the eigenvalues of the system of Eq.


(21.1-2).
358 21. Multivariable State Control Systems

21.4 Multivariable Minimum Variance State Controllers

In section 15.3 an optimal state controller for stochastic disturbances


was discussed which minimizes the performance criterion Eq. (15.1-5) and
uses a state variable estimator. The derivation of this state controll-
er was performed according to the state controller for deterministic
disturbances in chapter 8. In this section another approach is presen-
ted which is based on the minimum variance principle shown in chapter
14, which uses a prediction of the noise and which is especially suita-
ble for multivariable adaptive control. To derive stochastic minimum
variance state controllers the innovations state space model (as suita-
ble for identification methods)

~(k+1) ~ ~(k) + ~ ~(k) + F y(k) (21.4-1)

y(k) = £ ~(k) + y(k) (21.4-2)

is used, where v(k) is a zero-mean Gaussian white noise. The quadratic


criterion

I(k+1) = E{~T(k+1)Q ~(k+1) + ~T(k)~ ~(k)} (21.4-3)

where 2 is positive definite and ~ positive semidefinite is to be mini-


mized. This criterion is the same as Eq. (8.1-8) the only difference
being that the process is disturbed by the noise v(k). Therefore the
results of Eq. (8.1-9) to Eq. (8.1-10) can be directly used to write

(21.4-4)

and the generaLized muLtivariabLe minimum variance state controLLer


(MSMV1) becomes

(21.4-5)

The noise can be reconstructed by

Q<k) = y(k) - £ ~(k) (21 .4-6)

where ~(k) is predicted using Eq. (21.4-1). If the deadtime is not in-
cluded in the system matrix~, the controller equations are [20.1]

(21.4-7)

with the prediction


21.4 Multivariable Minimum Variance State Controllers 359

d-1 .
~(k+d) E{x(k+d) jk-1} = ~d~(k) + E Ad- 1 - 1B ~(k-d+i). (21 .4-8)
i=O
Another version which corresponds to the minimum variance controllers
discussed in chapters 14 and 20.3 is obtained by using the criterion

I(k+d+1) = E{yT(k+d+1)£ y(k+d+1) + ~T(k)~ ~(k)}. (21 .4-9)

Here the variances of the outputs rather than all the state variables
are weighted. Introducing 9 = fT£ fin Eq. (21.4-3) yields

~(k) -(~TfT£ f B + ~)-1~TfT£ f[~ ~(k+d) + AdF v(k)J (21. 4-10)

(MSMV2).

For R =0 finally the muZtivariabZe minimum variance state aontroZZer


(MSMV3) becomes

-u(k) = -(C B)- 1C[A x(k+d) + AdF v(k) J.


-- --- --- (21.4-11)

The controller equations show that they consist of adeterministic feed-


back law ~x(k) and a stochastic feedforward law ~v(k)

~(k) = ~x(k) + ~v(k) = ~x ~(k+d) + ~v ~(k). (21.4-12)

The deterministic feedback controller in the generalized minimum vari-


ance controller is the matrix Riccati controller if Pin Eq. (21.2-1)
or Eq. (8.1-34) is replaced by 2· And the deterministic controller in
the minimum variance controller Eq. (21.4-11) is a decoupling state
controller, Eq. (21.3-3), if~= Q [20.1].
F Adaptive Control Systems Based on
Process Identification

22. Adaptive Control Systems -


A Short Review

There are many more ways in which adaptive controllers or adaptive con-
trol algorithms can be realized with digital computers, i.e. process
computers and microcomputers, than with analog techniques. The great
progress in the production of cheap digital processors has enabled the
implementation of complex control algorithms which would otherwise ei-
ther not be realized at all or only at unjustifiable expense using a-
nalog techniques. In addition there are many advantages in having pro-
cess models and controllers in discrete-time form compared with conti-
nuous-time form, especially in theoretical development and in computa-
tional effort. Furthermore, the progress in the field of process iden-
tification and in the design of control algorithms since about 1965
was necessary so that adaptive control algorithms could be developed to
meet practical requirements. For these reasons interest in adaptive con-
trol has increased considerably during the last ten years. Many early
papers on adaptive control were published in 1958 to 1968; most of them
were based on analog signals and have been realized by analog compu-
ters. Surveys of these early adaptive control systems are given for ex-
ample in papers [22.1] to [22.5] and in books [22.6] to [22.10). How-
ever, because of the expense of practical realization and particularly
because of the lack of universal applicability, the interest in adap-
tive control subsequently declined.

This section gives a short review and introduction to the most impor-
tant basic structures of adaptive control systems. A comparison of con-
tributions on adaptive control shows that there are many different de-
finitions of the term 'adaptive'. In [22.15] some of these definitions
are summarized and new ones are proposed. The following description
considers adaptive control schemes in an input/output framework. Then
space is devoted to different adaptive control principles and algorithms.
For simplicity only single input/single output processes are treated.
22. Adaptive Control Systems - A Short Review 361

Unlike fixed control systems, adaptive control systems adapt (adjust)


their behaviour to the changing properties of controlled processes and
their signals. Two basic schemes of controller adaptation can be dis-
tinguished, as shown in Figure 22.1.

w y

a) b)

Figure 22.1 Basic schemes of controller adaptation


a) Feedforward adaptation b) Feedback adaptation
(open loop adaptation) (closed loop adaptation)
A: Adaptation algo- A: Adaptation algorithm
rithm (feedforward) (feedback)

If the process behaviour changes can be observed from measurable sig-


nals (mass flow, speed, etc.) and if it is known in advance how the
controller has to be adjusted depending on these signals, feedforward
adaptation (open loop adaptation) can be applied, Fig. 22.1 a). Here
there is no feedback from the internal closed loop signals to the con-
troller.

If the process behaviour changes cannot be observed directly feedback


adaptation (closed loop adaptation) must be applied, Fig. 22.1 b). As
a first step information on the process behaviour (structure, parame-
ters) is gained by measuring the process input and output signals.
This can be performed for example by process identification (measuring
of u and y) or by determination of the closed loop performance (measu-
ring of ew and u) . Based on this information the controller can be cal-
culated and adapted. This results in a second feedback, leading to a
closed loop action with the signal flow path: control loop signals -
adaptation algorithm - controller - control loop signals.

The following treats only feedback adaptation, as feedforward adapta-


tion (if applicable) is usually straightforward.

Adaptive controllers with feedback can be divided into two main groups.
Self-optimizing adaptive controllers try to attain an optimal control
performance, subject to the design criterion of the controller and to
362 22. Adaptive Control Systems - A Short Review

the obtainable information on the process and its signals, Fig. 22.2 a).

w y w y

a) b)

Figure 22.2 Basic schemes of adaptive controllers with feedback adapta-


tion. a) Self-optimizing adaptive controller (SOAC).
b) Model reference adaptive controller (MRAC).

Here three stages can be distinguished:

1. identification of the process or the closed loop

2. contro~ler calculation

3. adjustment of the controller (or modification [22.16])

Recent surveys are in [22.14], [25.12], [25.6]. The other group compri-
ses model reference adaptive control~ers, Fig. 22.2 b), which try to
obtain a closed loop response close to that of a given reference model
for a given input signal. This requires a measurable external signal
(e.g. the reference value for servo-systems), and the system then adapts
only if this given signal changes. In this case also three stages can
be distinguished:

1. comparison of closed loop and reference model

2. controller calculation

3. adjustment of the controller

If a fixed reference model is used, the closed loop approaches the a


priori given reference model response and not necessarily an 'optimal'
response. Surveys are given for example in [22.12], [22.13], [22.17].
The advantages of model reference adaptive controllers are in their
quick adaptation for a defined input and in the simple treatment of
stability using nonlinear stability theory. However, they do not adapt
if the measured process input does not change. Self-optimizing adaptive
controllers have the advantage that they adapt to any and in particular
to unmeasurable disturbances.
22. Adaptive Control Systems -A Short Review 363

In the next chapter self-optimizing adaptive aontrollers are considered


which are based on proaess identifiaation. Therefore on-line process
and signal identification is treated in chapter 23 and process identi-
fication in closed loop in chapter 24. Then a special class of self-
optimizing adaptive controllers - the so called parameter adaptive con-
trollers - is described in chapter 25.
23. On-line Identification of Dynamical
Processes and Stochastic Signals

Identification is the experimental determination of the dynamical be-


haviour of processes and their signals. Measured signals are used to
determine the system behaviour within a class of mathematical models.
The error between the real process or signal and its mathematical mo-
del has to be as small as possible [3.12], [3.13]. On-line identifica-
tion means the identification with computers in on-line operation with
the process. If the measured signals are first stored in a block or
arrays this is called batch processing. However, if the signals are pro-
cessed after each sample instant this is called real time processing.

For adaptive control systems on-line identification in real time -


real time identification - is of primary interest. Furthermore parame-
tric process and signal models are preferred for controller design.
They involve a finite number of parameters and allow the application
of advanced controller design procedures with relatively little compu-
tational effort for a wide class of processes.

For real time identification recursive parameter estimation methods


have been developed for linear time invariant and time variant process-
es, for some classes of nonlinear processes and for stationary and some
classes of nonstationary signals. This chapter reviews some important
methods of recursive parameter estimation. For a more extensive study,
particularly for their derivation and their convergence conditions, the
reader is referred to the literature, for example [3.12], [3.13], and
to the cited references.
23.1 Process and Signal Models 365

23.1 Process and Signal Models

It is assumed that a stable process is time invariant and linearizable


so that it can be described by a linear difference equation

(23.1-1)
where

u(k) U(k) - u00


}(23.1-2)
y(k) Y(k) - Yoo

are the deviations of the absolute signals U(k) and Y(k) from the d.c.
('direct current' or steady-state) values u 00 and Y00 . d = 0,1,2, ...
is the discrete deadtime. From Eq. (23.1-1) the z-transfer function be-
comes
yu(z) -d -d
z z (23.1-3)
u(z)

The measured output y(k) is assumed to be contaminated by disturbances


n(k), Figure 23.1.1,

y(k) = yu(k) + n(k). (23.1-4)

B (z- 1 ) Yu Y
A(z- 1)
Figure 23.1.1 Process and noise model

The disturbance signal n(k) is assumed to be described as an autore-


gressive ~oving ~verage signal process (ARMA), c.f. section 12.2.3,

n(k) + c 1 n(k-1) + .•• + cpn(k-p) = v(k) + d 1 v(k-1) + .•. + dpv(k-p)

(23.1-5)

where v(k) is a nonmeasurable, normally distributed, statistically in-


dependent noise (discrete white noise) with
366 23. On-line Identification of Dynamical Processes

E{v(k)} = 0

cov[v(k) ,T] E{v(k) v(k+T)}

where cr 2 is the variance and o(T) is the Kronecker delta function. The
v
z-transfer function of the noise filter is

1+d 1 z- 1+ ••• +d z-p


n{z)
v(z) -1 -p (23 .1-6)
1+c 1 z + ••• +cpz

Eq. (23.1-3) and (23.1-6) yield the combined process and noise model

y(z) (23 .1-7)

The objective of parameter estimation is to estimate the process para-


-1 -1
meters in the polynomials A(z ) and B(z ) and the noise parameters
-1
in C(z ) and D(z- 1 ), based on measured signals u(k) and y(k). It is
assumed that the model orders m and p are known a priori. If this is
not the case they can be determined by order search methods [3.13).
The noise n(k) is assumed to be stationary, i.e. the roots of the poly-
nomial C(z- 1 ) lie within the unit circle in the z-plane. The parameter
estimation methods described in the following differ, for example, in
the assumptions on the structure of the noise filter which must be made
for convergent parameter estimates. As well as the general model Eq.
(23.1-7) one distinguishes in particular two specialized models, the
"ML-model" also called the "ARMAX-model" (ARMA model, Eq. (23.1-5),
with an exogenous variable [23.14))

y(z) (23.1-8)

and the "LS-model"

y(z) (23.1-9)
23.2 The Recursive Least Squares Method (RLS) 367

23.2 The Recursive Least Squares Method (RLS)

23.2.1 Dynamical Processes

Considering measured signals y(k) and u(k) up to time point k and the
process parameter estimates up to time point (k-1), one obtains from
Eq • ( 2 3 . 1-1 )

y(k) +a., (k-1)y(k-1) + •.. + am (k-m)y(k-m)


- 6 1 (k-1)u(k-d-1) - - bm (k-1)u(k-d-m) e(k) (23 .2-1)

where the equation error (residual) e(k) replaces "0" in Eq. (23.1-1).
This error arises from the noise contaminated outputs y(k) and from the
erroneous parameter estimates. In this equation the following term can
be interpreted as a one-step ahead prediction y(kik-1) of y(k) at time
(k-1)

y. <k 1 k-1 > - a1 (k-1)y(k-1> - -am (k-1)y(k-m)


+ B1 (k-1)u(k-d-1) + .•. + bm (k-1)u(k-d-m)
!J!.T(k)Q_(k-1) (23.2-2)

with the data vector

!!!.T (k) = [-y(k-1) - y(k-m) u(k-d-1) .•. u(k-d-m)J (23.2-3)

and the parameter vector

(23.2-4)

For the equation error this gives

e(k) y(k) (23.2-5)

equation new one step ahead


error measurement prediction by the model

Now inputs and outputs are measured fork= 1,2, ... ,m+d+N. Then N+1 e-
quations of the form

y(k) = !l!_T(k)~(k-1) + e(k)

can be represented as a vector equation

y(m+d+N) = ~(m+d+N)~(m+d+N-1) + ~(m+d+N) (23.2-6)

with
368 23. On-line Identification of Dynamical Processes

T
y (m+d+N) [y(m+d) y(m+d+1) ... y(m+d+N) (23 .2-7)

!<m+d+N)

u(0)1
~
-y (m+d-1) -y (m+d-2) -y(d) u(m-1) u(m-2)
-y(m+d) -y (m+d-1) -y(1+d) u(m) u(m-1) u (1)

-y(~+d+N-1) -y(m+d+N-2) -y(N+d) u(m+N-1) u(m+N-2) u(N)

(23 .2-8)

~T(m+d+N) = [e(m+d) e(m+d+1) ••• e(m+d+N) ]. (23 .2-9)

Minimization of the loss function


m+d+N
T
V = ~ (m+d+N) ~(m+d+N) L e 2 (k) (23.2-10)
k=m+d

and therefore

dV
d0 I0=0
A
0 (23.2-11)

results with the assumption N ~ 2m and the abbreviation


T -1
~(m+d+N) = [! (m+d+N) !(m+d+N)] (23.2-12)

in the estimation [3.12], [3.13],

~(m+d+N-1) = ~(m+d+N) !T(m+d+N) y(m+d+N). (23.2-13)

This is nonreeursive parameter estimation as the parameter estimates


are obtained only after measuring and storing of all signal values.
Writing the nonrecursive estimation equations for Q(k+1) and Q(k) and
subtracting one from the other results in the reeursive parameter es-
timation algorithm

~(k+1) Q(k) + .Y(k) [y (k+1) ]!_T (k+1)Q(k)]

new old + correcting new one step ahead]


estimate estimate vector [ measurement prediction of
the new mea-
surement •

(23 .2-14)
The correcting vector is given by

.Y(k) = ~(k+1)]!_(k+1) 1 ~(k)]!_(k+1) (23.2-15)


]!_T(k+1)~(k)]!_(k+1)+1
23.2 The Recursive Least Squares Method (RLS) 369

and

(23.2-16)

To start the recursive algorithm one sets

~(0) = Q and £(0) a! (23 2-17)


0

with a large [3.13]. The expectation of the matrix Pis proportional


to the covariance matrix of the parameter estimates

E{£(k+1)} = ~ cov[6~(k) J (23.2-18)


oe
2 T
with oe = E{~ ~} (23.2-19)

and the parameter error 6~(k) = -


Hence the recursive algo-
~(k) ~-

rithm produces the variances of the parameter estimates (diagonal ele-


ments of covariance matrix). Eq. (23.2-14) can also be written as

Q(k+1) = Q(k) + y(k) e(k+1). (23.2-20)

Convergence Conditions
The general requirements for the performance of parameter estimation
methods are that the parameter estimates are unbiased

E{~(N)} = ~ N finite (23.2-21)

and consistent in mean square

lim E{~(N)} = ~ (23.2-22)


N+oo

0. (23 2-23)
0

For the method of least squares, applied to a stable difference equa-


tion which is linear in the parameters, the following conditions must
therefore be satisfied [3.13], [3.12].

a) The process order m and the dead time d are known.

b) The input signal u(k) U(k) - u00 must be exactly measurable and
0 oo must be known.

c) The input signal u(k) must be persistently exciting at least of or-


der m. This means that the matrix
H = {hij = ¢~u(i-j)} i,j = 1,2, ... ,m
N-1
must be positive definite (det ~ > 0), and u00 lim L U(k) and
k+oo k=O
370 23. On-line Identification of Dynamical Processes

N-1
• (t) = lim E u(k)u(k+t) exists, see [23.15], [23.16].
uu N+oo k=O

d) The output signal y(k) = Y(k)-Y 00 may be disturbed by a stationary


noise n(k). Y00 has to be known and has to correspond with u00 accor-
ding to the static process behaviour.

e) The equation error e(k) must be uncorrelated with the elements of


the data vector ~T(k). This means that e(k) must be uncorrelated.

f) E{e(k)} = 0.

The convergence of the recursive algorithm also depends on the choice


of the starting values ~(0) and ~(0) [3.13].

The requirement of an uncorrelated error signal e(k) considerably re-


stricts the applicability of the least squares method for strongly
noisy processes. Unbiased parameter estimation supposes a noise filter

G (z) = n(z) (23.2-24


v v(z)

which rarely exists. Therefore other methods must be used in general


for strongly disturbed processes. However, despite of this fact, the
recursive least squares method is well suited for parameter adaptive
control, as will be shown later.

D.C. Value Estimation


As for the process parameter estimation the variations of u(k) and y(k)
of the measured signals U(k) and Y(k) have to be used, the d.a. values
u00 and Y00 either have also to be estimated or have to be removed.
Following methods are available:

D.c. method 1: Differencing


The easiest way to obtain the variations without knowing the d.c. va-
lues is just to take the differences

U(k)- U(k-1) u (k) - u (k-1) t.u(k)


}(23.2-25
Y(k) - Y(k-1) y(k)- y(k-1) t.y (k).

Instead of u(z) and y(z) the signals t.u(z) = u(z)[1-z- 1 J and t.y(z)[1-z~ 1 ]
then are used for the parameter estimation. As this special high-pass
filtering is applied to both, the process input and output, the process
parameters can be estimated in the same way as in the case of measuring
u(k) and y(k). In the parameter estimation algorithms u(k) and y(k)
23.2 The Recursive Least Squares Method (RLS) 371

have to be replaced by ~u(k) and ~y(k). However, if the d.c. values


should be known other methods have to be used.

D.c. method 2: Averaging


The d.c. values can be estimated simply by averaging
M
1 L Y(k). (23.2-26)
M k=1

Its recursive version is

(23.2-27)

For slowly time varying d.c. values recursive averaging with exponen-
tial forgetting leads to

(23.2-28)

with A< 1. The same can be applied for u00 • The variations u(k) and
y(k} can be determined by Eq. (23.1-2}.

D.c. method 3: Estimation of a constant


The estimation of the d.c. values u 00 and Y00 can also be included into
the parameter estimation.

Inserting Eq. (23.1-2} into Eq. (23.1-1} results in

Y(k} -a 1 Y(k-1}- ••• -amY(k-m}+b 1U(k-d-1}+ •.• +bmU(k-d-m}+C

(23.2-29}
with
(23.2-30}

Extending the parameter vector i by C and the data vector ~T(k) by 1,


the measured Y(k} and U(k} can be directly used for the estimation and
C can be estimated too. Then for one given d.c. value the other can be
calculated, using Eq. (23.2-30}. For closed loop identification it is
convenient to use

Yoo = W(k} (23.2-31}

and to calculate u00 •


372 23. On-line Identification of Dynamical Processes

Example 23.1 Parameter estimation with the recursive least squares


method for a first order model.

The process model is

y(k) + a 1 y(k-1) = b 1 u(k-1)+v(k).

For the parameter estimation is used

y(k) = ~T(k)~(k)+e(k)

with
T
~ (k) = [-y(k-1)u(k-1) J
A A A T
~(k) = [a 1 (k)b 1 (k) J .

The estimation algorithms can be programmed in this way (as suitable


for adaptive control) :

a) New measurements y(k) and u(k) are taken at time k.

b) e(k) = y(k)- [-y(k-1)u(k-1)] r~1(k-l)]


b1 (k-1)
c) The new parameter estimates are

r b1(k)
~1(k)] = r~1(k-1)]
b1(k-1)
+ [y,(k-1)]
y2(k-1)
e(k)

from g)

d) Measurements y(k) and u(k) are inserted in


~T(k+1) = [-y(k)u(k)]

-y (k)]
e) E_(k)~(k+1) r
p11 (k) p12 (k)]
p21 (k) p22 (k)
r u(k)
from
h) = r-p,, (k) y (k) +p12 (k) u (k)l
-p21 (k) y (k) +p22 (k) u (k)

f) .!!?(k+1)E_(k)~(k+1) [-y(k)u(k)J i1 j
'-----v----'
from e)

h) E_(k+1) * [E_(k)-y(k)~T(k+1)E_(k) J

*[ E_ (k) -y (k) [P (k) 1jJ (k+1) JT]


'-----v----'
from c)
1
[P(k)-y(k)iTJ
I - - -
23.2 The Recursive Least Squares Method (RLS) 373

*l
p11(k)-y1i1 p12(k)-y1~2l
= p21 (k)-y2i1 p22 (k) -y 2 1 2

i) Replace (k+1) by k and start again with a).

To start the recursive algorithms at time k 0 one uses

Q(O) = f~l and £(0) = [~ ~1


where a is a great number.
D

23.2.2 Stochastic Signals

The method of recursive least squares can also be used for the parame-
ter estimation of stochastic signal models. A stationary autoregressive
moving average process (ARMA)

y(k) + c 1 y(k-1) + + c p n (k-p)


v(k) + d 1 v(k-1) + + d v(k-p) (23.2-32)
p

is considered, where compared with Eq. (23.1-5) the nonmeasurable n(k)


has been replaced by the measurable y(k). According to Eqs. (23.2-1)
to (23.2-5) it is written

y(k) (23. 2-33)

where

]!_T(k) = (-y(k-1) ... -y(k-p) v(k-1) ... v(k-p)] (23.2-34)


T
.§!. = [c 1 ... cp d 1 ... dp]. (23.2-35)

If v(k-1) , ... ,v(k-p) were known, the RLS method could be used as Eqs.
(23.2-14) to (23.2-17), as v(k) in Eq. (23.2-33) can be interpreted as
equation error, which is statistically independent by definition.

Now the time after the measurement of y(k) is considered. Here y(k-1),
... ,y(k-p) are known. Assuming that the estimates ~(k-1) , ... ,;(k-p)
and ~(k-1) are known, the most recent input signal ~(k) can be estima-
ted via Eq. (23.2-33), (23.1], [23.2]

=
~ ~T ~
v(k) y(k) - ]!_ (k) ~(k-1) (23.2-36)

with
~T (k)
~

= [ -y (k-1) ... -y (k-p) v (k-1) ;(k-p)J. (23.2-37)


374 23. On-line Identification of Dynamical Processes

Then also

iT (k+1) [-y(k) •.. -y(k-p+1) ;(k) ; (k-p+1) (23.2-38)

is determined, such that the recursive algorithms Eq. (23.2-14) to Eq.


(23.2-16) can be used to estimate §<k+1) if there ~T(k+1) is replaced
by ~T(k+1). Then ;(k+1) and §<k+2) are estimated, etc. For starting
the algorithm

;(0) = y(O); ~(O) = Q; ~(0) =a!

can be used. As v(k) is statistically independent, v(k) and ~T(k) are


uncorrelated and unbiased which results in estimates consistent in mean
square. As the model Eq. (23.2-32) must be stable, the roots of C(z)
o and D(z) = 0 should lie within the unit circle of the z-plane. The
variance of v(k) can be estimated by [3.13]

~2(k) (23 .2-39)

or by the resultant recursive algorithm

(23. 2-40)

23.3 The Recursive Extended Least Squares Method


(RELS)
If instead of the LS-model

e:(z) (23. 3-1)

with an uncorrelated error signal e:(z) the ML-model

D(z -1 )e(z) (23.3-2)

with a correlated signal e:(z) = D(z- 1 )e(z) is used, the recursive me-
thods for dynamical processes and for stochastic signals can be com-
bined to form an extended least squares method [23.3], [23.2]. Based
on
AT A
y(k) = ~ (k)g(k-1) + e(k) (23.3-3)

the following extended vectors are introduced


AT
~ (k) = [-y(k-1) ••• -y(k-m) u(k-d-1) .•• u(k-d-m)
~(k-1) ••• ~(k-p) J (23.3-4)
23.4 The Recursive Instrumental Variables Method (RIV) 375

(23.3-5)

and the parameters are estimated using

~(k+1) = ~(k) + y(k) [y(k+1) - iT(k+1)~(k)J (23.3-6)

and equations corresponding to Eqs. (23.2-14) to (23.2-16). The signal


values ~(k) = e(k) in iT(k+1) are calculated recursively with Eq.
(23.2-36). Therefore the roots of D(z) =0 must lie within the unit
circle of the z-plane. The parameter estimates are unbiased and consis-
tent in mean square if the convergence conditions of the least squares
method, section 23.2, are transferred to the model Eq. (23.3-3). That
means that the model Eq. (23.3-3) has to be valid.

23.4 The Recursive Instrumental Variables Method


(RIV)

For convergence of the least squares method the error signal e(k) must
be uncorrelated with the elements of ~T(k). The instrumental variables
method bypasses this condition by replacing the data vector ~T(k) by an
instrumental vector ~T(k) whose elements are uncorrelated with e(k).
This can be obtained if the instrumental variables are correlated as
T
strongly as possible with the undisturbed components of ~ (k). There-
fore an instrumental variables vector

~T (k) = [ -h (k-1) ... -h (k-m) u (k-d-1) u(k-d-m)] (23.4-1)

is introduced where the instrumental variables


T A
(23. 4-2)
A
h(k) y (k) = w (k) 0 (k)
u - -aux

are taken fr0m the undisturbed output of an auxiliary model with para-
meters G
-aux
(k). The resulting recursive estimation algorithms have the
same structure as for RLS, [23.5], [23.6], c.f. Table 23.7.1. To have
the instrumental variables h(k) less correlated with e(k), the parame-
ter variations of the auxiliary model are delayed by a discrete first
order low-pass filter with dead time [23.6]

§ (k) = (1-S)G (k-1) + S~(k-a) o.o1::;;S::;;0.1. (23.4-3)


-aux -aux

During the starting phase this RIV is sensitive to inapropriately cho-


s.
A

sen initial values of ~(0), ~(0) and It is therefore recommended


that this method is started with RLS [23.11 ].
376 23. On-line Identification of Dynamical Processes

The method of instrumental variables results in unbiased and consistent


parameter estimates, if

a) E{n(k)} = 0 and E{u(k)} const


or
E{n(k)} = const and E{u(k)} 0

b) E{u(k-T)n(k)} 0 for ITI ., 0

c) u(k) = U(k) - u 00 must be known

d) Y00 must not be known if E{u(k)} = o.


An important advantage of the RIV method is that no special assumptions
about the noise filter have to be made to obtain unbiased parameter
estimates. The polynomials C(z- 1 ) and D(z- 1 ) therefore can be indepen-
dent of the process polynomials B(z- 1 ) and A(z- 1 ). The RIV method
yields only the process parameters ai and bi. In the case the parame-
ters ci and di of the noise model are required they can be estimated
by RLS (section 23.2.2) using the noise signal estimate

~(k) y(k) - yu (k) y(k) - h(k). (23. 4-4)

23.5 The Recursive Maximum Likelihood Method


(RML)
To apply maximum likelihood parameter estimation to dynamical process-
es the ML-model of Eq. (23.1-8) or Eq. (23.3-2) is suitable

-1 (23.5-1)
A(z )y(z)

The abbreviations

T • . . -y (k-m) u (k-d-1) ... u (k-d-m)


!1!_ (k) = [ -y (k-1)
v(k-1) ... v(k-m)] (23.5-2)

(23.5-3)

give the model equation

T (23. 5-4)
y(k) = !1!_ (k) ~ + v(k).

As with the RELS method v(k) is interpreted as an equation error e(k)


and in !l!_T(k) the unmeasurable v(k-1) , ... ,v(k-m) are replaced by their
estimates, Eq. (23.2-32). This gives
23.5 The Recursive Maximum Likelihood Method (RML) 377

y(k) = iT(k) ~(k-1) + e(k) (23.5-5)

with iT(k) as defined in Eq. (23.3-4). From the derivation of the non-
recursive maximum likelihood parameter estimation it follows [3.12],
[3.13], that the loss function for a normatty distributed error signat
is the same as for the least squares method
1 N 2
V(~) L e (k) (23.5-6)
2 k=1

and has to be minimized with respect to the parameters ai' bi and di.
As this loss function is linear in the parameters ai and bi but nonli-
near in the parameters di' the minimization must be performed itera-
tively, for example by using gradient search algorithms. Therefore the
full maximum likelihood method can only be applied nonrecursively. How-
ever, after simplifying the gradient algorithm it becomes possible to
obtain a recursive algorithm [23.7], [23.8].

First the basic equations of the nonrecursive version are considered.


The loss function can be approximated by the Taylor series

(23.5-7)

with ~ 8 (~) the vector of first derivatives and ~ 88 (~) the matrix of

second derivatives. Through minimization of the loss function, which


is truncated after the second term, the Newton-Raphson algorithm is
obtained
-1
A

~ (k+1l
A

= ~ (kl
A

- ~ee <~ (kl ,k+1 l ~e <~ (k) ,k+1


A

>. (23. 5-8)

To obtain a recursive algorithm for parameter estimation a recursive


equation for the loss function must be used
A 1 2 A
v<.§.,k+1l V(~ 1 k) + 2 e (~,k+1). (23.5-9)

This leads to

Cle(G,k+1)
~e <~,k+1 > ~e<~,kl + e(~,k+1l 38 (23.5-10)

~A
~ee (~,k) +
[ae(G~-k
a8
+1l]T ae(Q~~+1) uv

2 A

+ e (~,k+1) () e (~,k+1) (23.5-11)


()~
378 23. On-line Identification of Dynamical Processes

in which the indicated terms can be assumed to be zero [23.8]. Based


on Eqs. {23.5-8) to {23.5-11) the recursive estimation algorithm RML
results
A

.@_ {k+ 1) Q{k) + y{k)e{k+1) {23. 5-12)

with
~ {k)~ {k+1)
y{k) = ~{k+1)~{k+1) = ---=-------- {23.5-13)
1 + ~T{k+1)~{k)~{k+1)
-1 A
~{k) Yee<Q<k-1),k) {23.5-14)

~ {k+1) [!- y{k)~T{k+1)] ~{k) {23.5-15)

~ {k+1) [ - ae{G{k) ,k+1)] {23.5-16)


aQ
e {k+1) y{k+1) - iT{k+1)~{k) {23.5-17)

v<k+n e {k+1). {23.5-18)

For Eq. {23.5-5) it is used therefore

AT
.!J!. {k+1) = [ -y {k) ••• -y {k-m+1) u {k-d) ••. u {k-d-m+1)

e {k) ••• e {k-m+1) ] • {23. 5-19)

Eq. {23.5-15) follows from Eq. {23.5-11) by applying a matrix inversion


lemma [23.8], [3.13]. The elements of the vector

~T {k+ 1 ) = _ [ ae {k+1) ae{k+1) ae{k+1) ae {k+1)


aa1 aam ab 1 abm

ae {k+1) ae{k+1)] {23.5-20)


ad 1 adm

can now~be determined with e{k) v{k) and Eq. {23.5-1)

ae{z)
z - --- - y 1 {z) z- {i-1)
aai

ae{z) - {i-1) -d
z---=- u 1 {z)z z {23.5-21)
abi

z ae{z) - e {z)z-{i-1)
--acr;-
1

i 1, ... ,m.

They can be interpreted as filtered signals

T
~ {k+1) = [ -y I {k) -y 1 {k-m+1) u 1 {k-d) u 1 {k-d-m+1)
e 1 {k) e 1 {k-m+1) ] {23.5-22)
23. On-line Identification of Dynamical ProcesseE 379

using the recursive equations

y I (k) y(k) - a1Y 1 <k-1) - ... - dmy


A
1 (k-m)

U I (k-d) u (k-d) - d1 U I (k-d-1) - ... - dmu 1 (k-d-m) } (23.5-23)

e 1 (k) e(k) - a e
1
1 (k-1) - ... - a me I (k-m) •

For di the current estimates ai(k) can be taken.

Because of the simplifications made in the derivation, the recursive


version of the maximum likelihood method is only an approximation to
the nonrecursive version. To start the algorithms one takes

~ (0) = Q !:_(0) = ai se_(O) o. (23. 5-24)

In comparison to the RELS method the RML method differs in using se_(k+1)
instead of ~(k+1) in the correction vector X(k), c.f. Table 23.7.1. Ne-
cessary conditions for unbiased and consistent estimates are:

a) u(k) = U(k) - u00 must be exactly known.

b) Y00 must be exactly known and must correspond to u00 •

c) The noise filter must be of the form D(z- 1 )/A(z- 1 ), such that
e(k) is uncorrelated.

d) The roots of D(z) = 0 must lie within the unit circle of the z-plane,
so that Eq. (23.5-23) is stable.
380 23. On-line Identification of Dynamical Processes

23.6 The Stochastic Approximation Method (STA)

Stochastic approximation methods are basically recursive and are cha-


racterized by computational simplicity. The minimum of a loss function
is searched by gradient methods which analog to deterministic equations
are applied to stochastic equations. Several versions of stochastic
approximation have been proposed [3.12], [3.13].

As a representative, an algorithm is considered which results from the


minimization of the loss function v = ~ e 2 (k)

~(k+1) ~(k) + p(k+1)!:i!.(k+1)[y(k+1)- !:i!.T(k+1)§_(k)). (23.6-1)

e (k+1)

This algorithm is identical to the RLS algorithm Eq. (23.2-14) with


the exception of the factor p(k+1). This factor is not determined by
the method but can be freely chosen to influence the convergence of
the algorithm. Most frequently one uses

p (k+1) = a/ ( 1+k) (23.6-2)

with a properly chosen a. However, this simple algorithm gives unbiased


results only for statistically independent error signals e(k). In addi-
tion the variance of the parameter estimates are large for small mea-
suring periods [3.13].
23.7 A Unified Recursive Parameter Estimation Algorithm 381

23.7 A Unified Recursive Parameter Estimation


Algorithm
The recursive parameter estimation algorithms RLS, RELS, RIV, RML and
STA can be represented uniquely by

.2 (k+1) .§_(k) + y(k)e(k+1) (23.7-1)

y(k) ]J (k+1) ~ (k)~ (k+1) (23.7-2)

~ (k+1) y(k+1) - ~T(k+1)_2(k). (23.7-3)


T
g,
A
They differ only in the parameter vector the data vector~ (k+1) and
in the correcting vector y(k). These quantities are summarized in Table
23.7.1.

Up to now it was assumed that the process parameters to be estimated


are constant and therefore the measured signals u(k) and y(k) and the
equation error e(k) are weighted equally over the measuring time k = 0,
••• ,N. If the recursive estimation algorithms are to be able to follow
slowly time varying proaess parameters, more recent measurements must
be weighted more strongly than old measurements. Therefore the estima-
tion algorithms should have a fading memory. This can be incorporated
in the least squares method by time dependent weighting of the squared
errors (the method of weighted least squares [3.13))
(m+d+N)
V l: w(k)e 2 (k). (23.7-4)
k=(m+d)

By choice of

w(k) = A (m+d+N)-k = AN'-k with 0 < A < 1 (23.7-5)

the errors e(k) are weighted as shown in Table 23.7.2 for N' = 50. The
weighting then increases exponentially to 1 for N'. The recursive esti-
mation algorithms given in Table 23.7.1 are modified as follows:

Table 23.7.2 Weighting factors due to Eq. (23.7-5) for N' 50

k 1 10 20 30 40 47 48 49 50

A = 0.99 0.61 0.67 0.73 0.82 0.90 0.97 0.98 0.99 1

A = 0.95 0.08 0.13 0.21 0.35 0.60 0.85 0.90 0.95 1


w
00
N

Table 23.7.1 Unified recursive parameter estimation algorithms for b 0 = 0 and d = 0


A A A
~(k+1) = ~(k)+y(k)e(k+1); y(k) = ~(k+1)~(k)~(k+1); e(k+1) = y(k+1l-iT (k+1)~(k).

unbiased and
method §_ iT (k+1) ~ (k+1) ~(k+1) ~(k+1) consistent for
noise filter
N
" [ -y (k) ••• -y (k-m+1) 1 1 w
RLS ':'-1 [.!-!:. (k) iT (k+1 l ]~ (k) ~(k+1)
.
~
u(k) ••• u(k-m+1)] 1+iT(k+1)~(k)l(k+1) A(z- 1 ) 0
::l
am I
1 [ -h (k) ••• -h (k-m+1) D(z-1) 1-'
RIV as RLS [,!-!:.(k)~T(k+1) ]~(k) .....
1+iT(k+1)~(k)~(k+1) u(k) ••• u(k-m+1)] ::l
~1
. C(z- 1 ) ro
H
a 1 p,
STA sm as RLS 1 p (k+1).! = k+1 .! i(k+1) ro
A(z- 1 ) ::l
rt
" .....
[ -y (k) ••• -y (k-m+1) HI
':'-1 D(z- 1 ) .....
RELS
. u (k) ••• u (k-m+1) as RLS as RLS .! (k+1) 0
" A(z- 1 ) PI
am rt
e (k) ••• e (k-m+1)] .....
0
::l
~1 [ -y I (k). • • -y 1 (k-mt-1) D (z - 1 )
. 1 [,!-y(k)~T(k+1) )~(k) 0
u 1 (k) ••• U1 (k-m+1) HI
RML as RELS A(z- 1 )
s 1~T(k+1)~(k)~(k+1) 0
m e 1 (k) ••• e 1 (k-m+1) ]
" '<
::l
~1 PI
. s.....
"am 0
PI
1-'
'"tl
11
0
0
ro
Ill
Ill
ro
Ill
23.7 A Unified Recursive Parameter Estimation Algorithm 383

The 1 in the denominator of ~(k+1) is replaced by A. For RIV


therefore
~(k+1) = 1 (23.7-6)
A+ ~T(k+1)~(k)~(k+1)

- ~(k+1) is multiplied by 1/A

~(k+1) [!- y(k)~T(k+1) J~(k) f· (23.7-7)

When choosing the weighting factor A one has to compromise between


greater elimination of the noise or better tracking of time varying
process parameters. It is recommended that A is chosen within the range
0.90 < A < 0.995. As the RML and RELS methods exhibit slow convergence
during the Starting phase due tO the UnknOWn e(k) = V(k) 1 COnVergenCe
can be improved if the initial error signals are weighted less and the
subsequent error signals are increasingly weighted up to 1. This can
be achieved with a time varying A(k) as in [23.13]

(23.7-8)

with Ao < 1 and A(O) < 1. For Ao 0.95 and A(O) 0.95 one obtains
for example

A(5) = 0.9632 A(10) = 0.9715 A(20) 0.9829.

In the limit, lim A(k+1) = 1.


k+oo

The weightings given by Eq. (23.7-8) and Eq. (23.7-5) can be combined
in the algorithm

(23.7-9)

There is a small weighting in the starting phase, depending on AO and


A(O), and for large k an exponential forgetting given by Eq. (23.7-5)
is obtained:

lim A(k+1) A.
k+oo

The recursive parameter estimation algorithms have been compared with


respect to the performance of the estimates, the reliability of the
convergence and the computational effort by simulations [23.9], [3.13],
[23.10], [23.13], by practical tests [23.11], [23.12] and theoretical-
ly [23.13). The theoretical treatment is difficult, as the recursive
estimation algorithms are nonlinear and time variant. In [23.17] it
has been shown that the asymptotic behaviour of the recursive estima-
tes may be approximately described by a time variant first order diffe-
384 23. On-line Identification of Dynamical Processes

renee equation which becomes a time invariant ordinary differential


equation after the introduction of a new time scale. The results of
the comparisons of the recursive parameter estimation algorithms can
be summarized as follows:

RLS: Applicable for small noise/signal ratios, otherwise gives biased


estimates. Reliable convergence. Relatively small computational effort.

RELS: Applicable for larger noise/signal ratios, if the noise model


D/A fits. Slow convergence in the starting phase. Convergence not al-
ways reliable (c.f. RML). Noise parameters D are estimated. They show
a slower convergence than for B and A. Somewhat larger computational
effort than RLS.

RIV: Good performance of parameter estimates. To accelerate the initial


convergence, starting with RLS is recommended. Larger computational
effort than RLS.

ru1L: High performance of parameter estimates, if the noise model D/A


fits. Slow convergence in the starting phase. More reliable convergence
than RELS. Noise parameters D are estimated, but show slow convergence.
Larger computational effort than RLS, RELS and RIV.

STA: Acceptable performance only for very large identification times.


Convergence depends on a. Small computational effort.

For small identification times and larger noise/signal ratios all me-
thods (except STA) lead to parameter estimates of about the same qua-
lity. Then in general RLS is preferred because of its simplicity and
its reliable convergence. The superior performance of the RIV and RML
methods is only evident for a larger identification time.
23.8 Modifications to Recursive Parameter Estimation Algorithms 385

23.8 Numerical Modifications to Recursive Parameter


Estimation Algorithms

Sometimes parameter estimation involves the solution of an ill-condi-


tioned equation system. One way to obtain improved parameter estimates
in such cases is to write the error covariance matrix W in the loss
function of the weighted least squares method in a square root form

v {23.8-1)

and to use for the errors the form

Y. - 'I' e [_! y_J [-eJ


~ -
= Q ~- (23.8-2)

The main idea is now to transform the matrix D into an upper triangular
matrix Qt

{23. 8-3)

by using an orthogonal transformation matrix T which does not change-


the loss function

(23.8-4)

because TTT = I. This leads finally to the parameter estimates


-1
{23. 8-5)
A

6 = f.eef.eR
-1
where f.ee and f.eR have upper triangular form so that a simple recursive
calculation is possible. For more details of this discrete square root
filtering method see [23.20), [23.19], [23.18]. As there is no initial
value to be selected, this method has a quick convergence in both non-
recursive and recursive versions. A modified square root filtering al-
gorithm which is similar to RLS, is described in [23.20). Other nume-
rical modifications to recursive estimation have the goal of reducing
the number of calculations after each sample. The resulting methods
are called 'fast' algorithms and are based on certain invariance pro-
perties of matrices due to shifted time arguments [23.21].

A comparison of these numerical modified recursive parameter estimation


algorithms with respect to computation time, storage requirements, per-
formance and convergence properties is given in [23.22]. The compari-
son is based on six different simulated processes {test processes I,
II, III, IV given in the Appendix and two others) and programming in
386 23. On-line Identification of Dynamical Processes

FORTRAN on a 16 bits process computer. The main results are given in


Table 23.8.1. The computation time of the fast algorithm compared with

Table 23.8.1 Comparison of numerical modifications to the recursive


least squares algorithm

recursive number of storage re- variance of


convergence
algorithm multiplications quirements parameter
[words] estimates
reliable,
RLS 2n 2 +5n 354 medium
medium

sensitive to
(p 2 +7p+2)n
FRLS 1136 starting values large
2
+2p 3 +3p sometimes slow

unreliable,
STA 2n 1 34 high
too slow

2(n+1) 2 +.!2(n+1) reliable,


DSF 2 2 385 small
very good
+ ~(n+1)

2n2+.Jl.n reliable,
MDSF 371 small
2 2 very good

RLS : recursive least squares n: number of parameter


FRLS: fast recursive least squares estimates
STA stochastic approximation p: number of exchanged
elements of ~T (RLS: p=2)
DSF discrete square root filtering
MDSF: modified square root filtering

the usual RLS is only smaller for n > 10 parameters (order m > 5) . How-
ever, the convergence is sensitive to the starting values and the pro-
gram storage is much higher than for the other methods. Very good pa-
rameter estimates have been obtained with the square root filter algo-
rithms for larger measurement times. The discrete square root filter
algorithms show, however, relatively high oscillations in the parame-
ters in the starting phase, which is not good for parameter adaptive
control. The best overall properties are possessed by the modified
square root filter algorithm. But for typical processes the advantages
are small in comparison to the RLS. If, however, numerical problems a-
rise with RLS and exact parameter estimates are required one should try
MDSF. Hence, for many applications the simple recursive least squares
method or its extensions (RELS, ~L) are preferable.
24. Identification in Closed Loop

If the design of self-optimizing adaptive control systems is based on


identified process models, process identification has to be performed
in closed loop. There are also other applications in dynamic processes
which must be identified in closed loop. It must first be established
whether methods developed for open loop identification can also be ap-
plied to the closed loop, taking into account the various convergence
conditions. The problem is quite obvious if correlation analysis is
considered. For convergence of the cross correlation function between
the input u(k) and the output y(k), the input u(k) must be uncorrela-
ted with the noise n(k) contaminating the output y(k). As feedback ge-
nerates such a correlation, correlation techniques cannot be applied
directly to closed loop identification. In the case of parameter esti-
mation the situation changes, as the error signal e(k) only must be un-
correlated with the elements of the data vector ~T(k). This opens pos-
sibilities for closed-loop parameter estimation, as will be shown in
this chapter.

Sections 24.1 and 24.2 discuss conditions for closed-loop parameter


estimation without and with external perturbation signals. Then appro-
priate methods for closed loop parameter estimation can be chosen,
section 24.3. To treat parameter estimation in closed loop systematic-
ally, the following cases can be distinguished, c.f. Figure 24.1.1 and
Figure 24.2.1:

Case a: Indirect process identification. A model of the closed loop


is identified. If the controller is known the process model
is calculated based on the closed loop model.

Case b: Direct process identification. The process model is directly


identified, i.e. not by using a closed-loop model as an inter-
mediate result. The controller need not be known.

Case c: Only the output y(k) is measured.

Case d: Only the input u(k) and output y(k) are measured.

Case e: No external perturbation is applied.

Case f: An external perturbation u 8 (k) is applied (non measurable or


388 24. Identification in Closed Loop

measurable) but not directly used in the identification algo-


rithms.

Case g: The external measurable perturbation us(k) is used in the iden-


tification algorithm.

As shown in the next sections, the following combinations of cases are


possible:

a+ c + e and b + d + e + section 24.1


a + g and b + d + f + sections 24.2 and 24.3.3.

Unless otherwise stated, it is assumed in this chapter that the pro-


cesses are linear and the controllers are linear, time-invariant and
noise-free.

24.1 Parameter Estimation without Perturbations

Figure 24.1.1 shows a linear, time-invariant process with the z-trans-


fer function

Yu(z) -d -d
z z (24.1-1)
u(z)

and noise filter


n(z) (24.1-2)
v(z) -1
1+a 1 z + ... +am z
a

PROCESS
r-------~

v I D ( z- 1 ) I
f--- I
I A ( z- 1 )
I
I nl
w
---
ew Q ( z-1)

p ( z-1)
u I
I
I
B(i1)
--Z
A!i 1J Yu
L _______ _j
I
-d
I
y

Figure 24.1.1 Scheme of the process to be identified in closed loop


with no external perturbation signal
24.1 Parameter Estimation without Perturbations 389

which is to be identified in closed loop. The assumption that C(z- 1 ) =


A(z- 1 ) in the noise filter considerably simplifies parameter estimation
without perturbation. The controller transfer function is
-1 -1 -v
u(z) Q(z ) qo+q1z + ... +qvz
(24.1-3)
etZT
w 1+p 1 z
-1
+ ... +pJlz
-)1

The signals are

y (z) = yu(z) + n ( z)

ew(z) = w (z) - y ( z) .

In general w(z) = 0 is assumed, i.e. ew (z) = -y (z). v(k) is assumed to


be a nonmeasurable statistically independent noise with E{v(k)} = 0 and
2
variance av.

24.1.1 Indirect Process Identification (Case a+ c +e)

The closed loop response with the noise as input is

Y.EL
v (z)
1+GR(z)Gp(z)

D(z- 1 )P(z- 1 )

-1 -r
1+B 1 z + .. . +Brz
(24.1-4)
-1 -~
1 +a 1 z + ... +a 9- z

Therefore the controlled variable y(k) is an autoregressive/moving-ave-·


rage stochastic process (ARMA) , generated by v (k) and ti1e closed loop
as a noise filter. The orders are

~ max[ma + jl, mb + v + d]
}(24.1-5)
:t" == md + Jl•

I f only the output y(k) is known, the parameters of the ARMA

(24.1-6)

can be estimated using the methods given in chapter 23, if the poles
of ~(z) = 0 lie within the unit circle of the z-plane and if the poly-
nomials D ( z - 1 ) and )t( z - 1 ) have no common root.
390 24. Identification in Closed Loop

The next step is to determine the unknown process parameters

AT "' "' "' "' A


8 = [a 1 ••• ama b1 b~ d1 dma.J {24.1-7)

for given a.
l.
and s..
l.
In order to calculate these parameters uniquely,
certain identifiability conditions must be satisfied.

Parameter Identifiability Conditions

A process {in closed loop) is called parameter-identifiabLe if the pa~

rameter estimates are consistent when using an appropriate parameter


estimation method. Then

lim E{~ {N)} {24.1-8)


N

holds, with ~ as the true parameter vector and N as the measuring


time. Now conditions for parameter identifiability are given, if onLy
the output y{k) is measured.

Identifiability Condition

In a concise notation the process equation in z-domain is

Ay = Bu + Dv.

Inserting the controller equation leads to

Ay = B{- ~)y + Dv.

This equation is extended by an arbitrary polynomial S{z- 1 )

{A+S)y B{- ~)y + Sy + Dv

{A+S)y {B-S~) {- ~y) + Dv

{A+S)Qy {BQ-SP) {- ~y) + DQv

A* B* u D*
which leads to

A*y = B*u + D*v. {24.1-9)

This shows that the process B/A and the noise model D/A can be repla-
ced by

B* BQ-SP D* DQ
{24.1-10)
and
A* AQ+SQ A* AQ+SQ
-1
without changing the signals u{k) and y{k) for a given v{k). As S{z )
is arbitrary the orders of A and B cannot be uniquely determined based
24.1 Parameter Estimation without Perturbations 391

on measurements of u(k) and y(k). Therefore the orders of the process


and noise models must be exactly known [24.1].

Identifiability Condition 2

Eq. (24.1-4) shows that the rna + ~ unknown process parameters ai and
b.1 have to be determined by the ~ parameters a..
1
If the polynomials D
and Jt have no common zero, a unique determination of the process para-
meters requires t ~ rna + ~ or

max [rna + ~' ~ + v + d] ~ rna + ~


) (24.1-111
max [~ - ~' v + d - rna] ~ 0.

Hence the controller orders have to be

+ v ~ rn
a
- d
or ) (24.1-121
+

If the process dead time is d 0, the orders of the controller poly-


nomials must satisfy either v ~ rna or ~ ~ ~· If d > o, either v~rna-d

or ~ ~ ~ must be satisfied. Here the deadtirne d can exist in the pro-


cess or can be generated in the controller, see Eq. (24.1-4). (This
means that the identifiability condition 2 can also be satisfied by
using a controller for example with d = rna and v = 0 and ~ = 0.) The
parameters di' Eq. (24.1-2) can be calculated uniquely from the Si'
Eq. (24.1-4), if r ~ rnd' i.e.

~ ~ o, (24.1-13)

which means any controller. If~(z- 1 ) and D(z- 1 ) have p common poles,
they cannot be identified, but only t-p parameters ~i and r-p parame-
ters Si. The identifiability condition 2 for the process parameters ai
and b.1 then becomes

(24.1-14)

Note that only the common zeros of A- and D are of interest, and not
those of .A- and 13, as 13 = DP and P is known. Therefore the number of
common zeros in the numerator and denominator of

(24.1-15)

is significant. If the controller order is not large enough, parameter


estimation in closed loop can be performed with two different control-
392 24. Identification in Closed Loop

ler parameter sets [24.2], [24.3]. One then obtains additional equa-
tions for determining the parameters. Some examples may discuss the
identifiability condition 2.

Example 24.1.1

The parameters of the first order process (rna = ~ m 1)

y(k) + a y(k-1) = b u(k-1) + v(k) + d v(k-1)

are to be estimated in closed loop. Various controllers are considered.

a) 1 P-controller: u(k) = -q0 y(k) (v = 0; ~ = O)


Eq. (24.1-4) leads to the ARMA process

y(k) + (a+bq 0 )y(k-1) = v(k) + dv(k-1)


or
y(k) + ay(k-1) = v(k) + Bv(k-1).
Comparison of the coefficients gives

d.
A

No unique solution for a and b can be obtained, as


i = a 0 + ~a and b = b 0 - ~a
qo
satisfy these equations for any ~a. The parameters a and b are there-
fore not identifiable. Eq. (24.1-12) requires v ~ 1 or~~ 1.

b) 1 PD-controller: u(k) = -q0 y(k) - q 1 y(k-1) (v 1; ~ 0)


The ARMA process now becomes of second order

y(k) + (a+bq 0 )y(k-1) + bq 1 y(k-2) v(k) + dv(k-1)

y(k) + a 1 y(k-1) + a 2 y(k-2) v(k) + Bv(k-1).

Comparison of coefficients leads to

The process parameters are now identifiable.

c) 2 P-controller: u(k) = -q01 y(k); u(k) = -q02 y(k)


Due to a) two equations with coefficients
a 11 = i
+ bq 01 and 12 =a+ bq 02 &
are obtained. Hence

a = [ &,, - :~~ &12 ] / [1 - :~~]


G Ux 12 - aJ.
q02
The process parameters are identifiable if q 01 + q 02 . D
24.1 Parameter Estimation without Perturbations 393

Generally the process parameter vector 8 is obtained by the ARMA para-


&
meters 1 , ... ,&£
via the comparison of coefficients in Eq. (24.1-4) by
considering the identifiability conditions 1 and 2. If d = 0 and rna=~
and therefore £ = 2m, the orders of the controller polynomials are v=m,
and~ s m to satisfy Eq. (24.1-12), it follows with p 0 = 1 that

a1 + b1q0 a1 - p1
a1p1 + a2 + b1q1 + b2q0 a2 - p2

a1pj-1 + a2pj-2 + ... + a mpj-m + b1qj-1 + . .. + b m·J-m


(l. = a. - pj
J

(24.1-16)
or in matrix form

0 0 qo 0 0 a1 a1-p1

p1 0 q1 qo 0 a2 a2-p2
p1 q1

p~ qo a a~-p~
m
0 p~ p1 qm q1 b1 a~+1
0 0 0 qm b2 a~+2
I
P~ I
0 0 0 I 0 0 qm b a 2m
m

s 8 a*. (24.1-17)

As the matrix S is quadratic the process parameter vector is obtained


by
S -1 a * . (24.1-18)

Again it can be seen that the matrix S must have the rank r ~ 2m for a
unique solution of Eq. (24.1-17), i.e. v ~ m or~~ m. If v > m or ~>m
the overdetermined equation system Eq. (24.1-17) can be solved by using
the pseudo inverse

(24.1-19)

However, as discussed in section 24.3, the process parameters converge


very slowly with indirect process identification. The advantage of this
method is that the closed loop identifiability conditions can be deri-
ved straight forwardly.
394 24. Identification in Closed Loop

24.1.2 Direct Process Identification (Case b + d +e)

For indirect process identification it is assumed that the output sig-


nal y(k) is measured and that the controller is known. Then the input
signal u(k) is also known using the controller equation. Therefore an
additional measurement of u(k) does not provide further information.
However, if u(k) is used for process identification the process can be
identified directly without deconvolution of the closed loop equation.
Furthermore, knowledge of the controller is unnecessary. If for closed
loop identification of the process GP(z) nonpararnetric identification
methods, such as the correlation method, were applied directly to the
measured signals u(k) and y(k), because of the relationship

u ( z) -GR ( z) GPv ( z)
(24.1-20)
v ( z)
1 +GR ( z) Gp ( z)

and
GPv (z)
~ (24.1-21)
v (z)
1 +GR ( z) Gp ( z)

a process with transfer function

~ y (z) /v (z) 1 (24.1-22)


u ( z) u(z)/v(z) - GR (z)

would have been obtained, i.e. the negative inverse controller trans-
fer function. The reason is that the undisturbed process output yu(k)=
y(k)-n(k) is not used. If yu(k) were known, the process

Yu(z) = y(z)-n(z) y(z)/v(z)-n(z)/v(z) = G (z) (24.1-23)


u(z) u(z) u(z)/v(z) P

could be identified. This shows that for direct closed loop identifi-
cation the knowledge of the noise filter n(z)/v(z) is required. There-
fore the process and noise model

(24.1-24)

is used.

The basic model for indirect process identification is the ARMA, c.f.
Eq • ( 2 4• 1 - 4 )

-1 -1 -1 -d -1
[A(z )P(z )+B(z )z Q(z )]y(z)
A A
24.1 Parameter Estimation without Perturbations 395

Replacing the controller equation

(24 .1-26)

results in

and after cancellation of the polynomial P(z- 1 ) one obtains the equa-
tion of the process model as in open loop, Eq. (24.1-23). The diffe-
rence from the open loop case is, however, that u(z) or P(z- 1 )u(z) de-
pend on y(z) or Q(z- 1 )y(z), Eq. (24.1-26), and cannot be freely chosen.

The identifiability conditions for direct process identification in


closed loop are now discussed in two ways. First, the conditions for
a unique minimum of the loss function

N
v l: e 2 (k) (24.1-28)
k=1

are considered. The process model is

(24.1-29)

In the closed loop u(z) is given by Eq. (24.1-26). Hence

(24. 1-30)

A unique minimum of the loss function V with regard to the unknown pro-
cess parameters requires a unique dependence of the process parameters
in

-d Q AP + Bz-dQ __ ~
p-J
A A
[A + B z = (24.1-31)
D DP 13
on the error signal e. This term is identical to the right-hand side
of Eq. (24.1-4), for which the parameters of A, Band 6 can be unique-
ly determined based on the transfer function y(z)/v(z), provided that
the identifiability conditions 1 and 2 are satisfied. Therefore, in
the case of convergence with e(z) = v(z) the same identifiability con-
ditions must be valid for direct closed loop identification. Note that
the error signal e(k) is determined by the same equation for both the
indirect and the direct process identification - compare Eq. (24.1-4)
A
and Eq. (24.1-30, 31). In the case of convergence this gives A = A,
B = B and D
= D and therefore in both cases e(k) = v(k).
396 24. Identification in Closed Loop

A second way of deriving the identifiability condition 2 results from


the consideration of the basic equations of some nonrecursive parameter
estimation methods. For the least squares method Eq. (23.2-2) gives

T
y(k) = .'1!_ (k)~ = [-y(k-1) ... -y(k-ma) u(k-d-1) ..• u(k-d-~) J~.

(24.1-32)

.'1!_T(k) is one row of the matrix! of the equation system (23.2-6). Be-
cause of the feedback, Eq. (24.1-26), there is a relationship between
the elements of .'1!_T(k)

u(k-d-1) = -p 1u(k-d-2)- ... -p~u(k-~-d-1)

-q 0 y(k-d-1)- ... -qvy(k-v-d-1). (24 .1-33)

T
u (k-d-1) is therefore linearly dependent on the other elenents of .'1!_ (k)
if~$ mb-1 and v $ ma-d-1. Only if~~~ or v ~ma-d does this linear

dependence vanish. This holds also for the actual equation system Eq.
(23.2-6) for the LS method. This shows that linearly dependent equa-
tions are obtained if the identifiability condition 2 is not satisfied.

Now it remains to consider if the same identification methods can be


used for direct parameter estimation in closed loop as for open loop.
For both the least squares and the maximum likelihood method the equa-
tion error or one step ahead prediction error is
T
= y(k) - y(k[k-1) = y(k) -
A A

e(k) .'1!_ (k)~(k) (24.1-34)

c.f. Eq. (23.2-5) and Eq. (23.5-5). The convergence condition is that
e(k) is statistically independent of the elements of .'1!_T(k). For the
LS method this gives

T
.'1!. (k) = [-y(k-1) u (k-d-1) ... J
and for the RML method

T
.':1!. (k) = [-y(k-1) ... u(k-d-1) •.. v(k-1) •.. ].

In the case of convergence e(k) = v(k) can be assumed. As v(k) influ-


ences only y(k), y(k+1), ... , and these signal values do not appear in
.':i!.T(k), e(k) is certainly independent of the elements of .'1!_T(k). This is
also true with a feedback on u(k) via the controller. The error e(k)
T
is independent of the elements of .'1!_ (k) also in closed loop. Therefore
parameter estimation methods which are based on the one step ahead pre-
diction error can be applied in closed loop as in open loop, provided
that the identifiability conditions are satisfied. The application of
24.2 Parameter Estimation with Perturbations 397

other parameter estimation methods is discussed in section 24.3. An ex-


tensive treatment of closed loop identification is given in [24.2],
[24.4]. Nonlinear and time variant controllers are also considered there.

The most important results for closed loop identification without exter-
nal perturbation but assuming a linear, time invariant, noisefree con-
troller can be summarized as follows:

1. For parameter estimation in closed loop (indirectly or directly),


identifiability conditions 1 and 2 must be satisfied.

2. For direct parameter estimation in closed loop, methods using the


prediction error can be used as in open loop. The controller need
not be known.

3. If the controller does not satisfy the identifiability condition 2


as it has too low an order, identifiability can be obtained by
a) switching between two controllers with different parameters
[24.4], [24.5],
b) introduction of dead time d ~ ma-v+p in the feedback,
c) use of nonlinear or time varying controllers.

24.2 Parameter Estimation with Perturbations

Now an external perturbation us(k) is assumed to act on the closed loop


as shown in Fig. 24.2.1. The process input then becomes

(24.2-1)

with
-1
u (z) = - Q(z ) y(z). (24.2-2)
R P(z-1)

The additional signal us(k) can be generated by a filtered signal s(k)

Gs(z) s(z). (24.2-3)

-1 -1
If Gs(z) = GR(z) = Q(z )/P(z ) then s(k) = w(k) is the reference va-
lue. There are several ways to generate the perturbation us(k). It is
only important that this perturbation is an external signal which is
uncorrelated with the process noise v(k).
398 24. Identification in Closed Loop

,-------,
PROCESS

s V I 0 (z-1) I
Gs(:z1) - I
f-----
I
I A (z- 1)
us I nl
w ew Q (z_, J UR u I 8 (i 1J -d I y
I
- p (z-1 J A(z-1) z
I i
L _______ _j

Figure 24.2.1 Scheme of the process to be identified in closed loo~


with an external perturbation s

As indirect process identification generally has no advantage, only di-


rect process identification is considered here. The output of the clo-
sed loop is

y(z) (24.2-4)

resulting in

-d -d
[AP + Bz Q]y(z) = DP v(z) + Bz P us(z).

Inserting Eq. (24. 2-1) and Eq. (24. 2-2) gives

(24.2-5)
1
and after cancellation of the polynomial P(z- ) one obtains the process
equation

(24 .2-6)

Unlike Eq. (24.1-27), u is generated not only from the controller based
on y, but from Eq. (24.2-1) also by the perturbation us(k). Therefore
the feedback, c. f. Eq. (24.1-33), is

u (k-d-1) -p 1u(k-d-2)- ••. -p~u(k-~-d-1)

-q0 y(k-d-1)- .•• -qvy(k-v-d-1)

+u (k-d-1)+p 1 u (k-d-2)- •.• -p ~ u s (k-~-d-1).


s s

If u (k) f 0, u(k-1) is not linearly dependent on the elements of the


s T
data vector~ (k), Eq. (24.1-32), for arbitrary controller orders ~ and
24.3 Methods for Closed Loop Parameter Estimation 399

v. The process described by Eq. (24.2-6) is therefore directly identi-


fiable if the external perturbation is persistently exciting of suffi-
cient order. Identifiability condition 2 is not valid in this case.
But identifiability condition 1 has still to be satisfied. Note that
the perturbation need not be measurable, and that the results are va-
lid for any noise filter D/C. The same prediction error parameter esti-
mation methods as used in open loop identification can be applied.

24.3 Methods for Oosed Loop Parameter Estimation

In this section the on-line parameter estimation methods described in


chapter 23 are considered for closed loop application.

24.3.1 Indirect Process Identification without Perturbation

The ARMA parameters ai and Si' Eq. (24.1-4), can be estimated by the
RLS method for stochastic signals, section 23.2.1, or by the method of
recursive correlation and least squares (RCOR-LS), [24.3]. However,
the parameter estimates converge very slowly with indirect process i-
dentification because the number of parameters (~ ~ rna+~ and r = md+~)

is large, and the input signal v(k) is unknown. Therefore in general


direct process identification is preferred.

24.3. 2 Direct Process Identification w·i thout Perturbation

As already discussed in section 24.1 the prediction error methods as


RLS, RELS and RML can be applied. By measuring u(k) and y(k) they pro-
vide unbiased and consistent estimates, provided that the noise filter
is 1/A for RLS and D/A for RELS and RML. To obtain unbiased estimates
with RIV, the instrumental variables vector ~T(k), Eq. (23.4-1), must
be uncorrelated with the error signal e(k) and therefore also uncorre-
lated with with the noise n(k) [3.13]. However, the input signals
u(k-<) are correlated with n(k) for T ~ 0 because of the feedback. If
the instrumental variables are chosen as in Eq. (23.4-1) the RIV me-
thod gives biased estimates in closed loop. The correlation between
u(k-T) and e(k) vanishes for T ~ 1 only if e(k) is uncorrelated, i.e.
if the noise filter is of the form 1/A, as for the LS method, c.f.
[3.13], p. 66/67.
400 24. Identification in Closed Loop

24.3.3 Direct Process Identification with Perturbation

If only u(k) and y(k) are used for parameter estimation, and not the
perturbation, the RLS, RELS and RML methods are suitable. A measurable
perturbation can be introduced into the instrumental variables vector
of the RIV method. Then this method can also be applied.

The application of the correlation and least squares (RCOR-LS) method


[3.13] in closed loop is considered in [24.3) for all three cases of
this section.

If the identifiability condition 2 is not satisfied by a given control-


ler, closed loop parameter estimation can be performed by switching
between two different controllers, c.f. Example 24.1.1 c). It has been
shown in [24.5] that the variance of the parameter estimates can be
minimized by choosing the switching period to be about (5 •.. 10) T 0 •
25. Parameter-adaptive Controllers

This chapter treats parameter-adaptive control algorithms based on re-


cursive parameter estimation methods together with suitable control al-
gorithms. From the definitions given in chapter 22 they fall into the
class of self-optimizing adaptive controllers. Recursive parameter esti-
mation methods and their application in the closed loop case were dis-
cussed in chapter 23 and 24. This chapter therefore is devoted mainly
to the discussion of appropriate combinations of parameter estimation
methods and control algorithms. Some principles of self-optimizing adap-
tive controllers are considered in section 25.1, followed by a summary
of suitable control algorithms in section 25.2. Various parameter adap-
tive controllers are considered in section 25.3 and compared in 25.4.
Examples of applications are given in section 25.5. Finally multivari-
able parameter-adaptive controllers are discussed in section 25.6.

25.1 Introduction

This section briefly introduces the principles of self-optimizing adap-


tive aontroZZers based on parameter estimation. It is assumed that the
process is linear and has either constant or time varying parameters.
The class of adaptive controllers that is considered can be classified
in terms of:
- the process model
- the parameter estimation and state estimation
- the prior information about the process
- the criterion for controller design
- the control algorithm.

Some important cases are discussed in the se~uel.


402 25. Parameter-adaptive Controllers

'J
~
---
Jj
controller parameter/
parameter state f--
calculation estimation

~
-
w u y
controller process
-

Figure 25.1.1 Structure of self-optimizing adaptive controllers based


on process iaentification

a) Process models

Chapter 3 included a classification of process models. Here only para-


metric process mode~s are of interest:
- Input/output mode~s in the form of stochastic difference equa-
tions or z-transfer functions

-1 -1 -d -1
A(z )y(z) - B(z )z u(z) = D(z )v(z) (25.1-1)

(maximum likelihood model + ML model)

-1 -1 -d
A(z )y(z) - B(z )z u(z) = v(z) (26.1-2)

(least squares model + LS model)

v(k) is a statistically independent stochastic signal with


E{v(k)} = 0 and variance o~. The parameters ~T = [a 1 ••. am;
b 1 ... bm; d 1 ... dmJ are assumed to be either constant or slowly
time varying.

- State mode~s in form of stochastic vector difference equations

~(k+1) ~(~)~(k) + ~(~)~(k) + F ~(k)


}(25.1-3)
y (k) f(~)~(k) + _!!(k)

In general ~(k) and _!!(k) are statistically independent vector


signal processes, c.f. chapter 15. The parameters ~ are consta~t,
slowly time varying or modelled by the stochastic process

Q(k+1) =1 ~(k) + 1(k) (25.1-4)

with 1(k) as a statistically independent stochastic process.


25.1 Introduction 403

The above models are written for stochastical disturbances. Ordinary


difference or vector difference equations are involved if the distur-
bances vi and ni are either deterministical signals or null.

b) Parameter estimation ·and state estimation

Suitable parameter estimation methods for the closed loop case are con-
sidered in chapter 23 and 24. For state estimation and state observa-
tion see section 8.6 and 15.4.

c) Information about the process

The information ~ about the process that is acquired by parameter and


state estimation forms the basis of controller design and can contain
the following components:
o Parameter estimation:
- process parameter estimates

(25.1-5)

- process parameter estimates and their uncertainty


(25. 1-6)

o State estimation:
- state estimates

J-21 = [~ (k+1) J (25.1-7)

- state estimates and their uncertainty

~ 22 = [~(k+1) ,t:.i(k+1) ]T (25.1-8)

o Signal estimation:
If a noise model is included in the parameter estimation the non-
measurable vi(k) or ni(k) can be estimated

~3 = [~i(k)] or [ni(k)]. (25.1-9)

In addition, future outputs y(k+j), j ~ 1, can be predicted based


on known inputs ui(k-1), 1 ~ O, and noise estimates ~i(k-1), 1~0.

Dependent on the information used, various types of adaptive cohtrol1er


can be distinguished. The following principles are presented for the
case of stoahastia aontrol systems. The definitions given in the lite-
rature are not unique- see [25.1], [25.2] and [22.14].
404 25. Parameter-adaptive Controllers

Control using the separation principle


A stochastic controller follows the separation principle if parameter
or state estimation is performed separately from the calculation of
the controller parameters, c.f. chapter 15 and [22.14]. The process
input is then

u(k) = f 5 [y(k),y(k-1), .•• ,u(k-1),u(k-2), ••. ,~(k),~(k)].

Control using the certainty equivalence principle


A stochastic controller obeys the certainty equivalence principle if
it is designed assuming exactly known parameters and state variables

u(k) = f 0 [y(k),y(k-1), •.. ,u(k-1),u(k-2), ... ,~,~(k)J

and if the parameters~ and state variables ~ 0 (k) are then replaced
by their estimates

u(k) = fG[y(k) ,y(k-1) , ... ,u(k-1) ,u(k-2) , ... ,~(k) ,~(k) ].

The certainty equivalence principle is therefore a special case of the


separation principle. The certainty equivalence principle is theoreti-
cally justified if the process parameters are known, the state variab-
les of a linear process are to be estimated with white noises ~(k) and
g(k), and a quadratic performance criterion is used [25.2]. For unknown
stochastic parameters the certainty equivalence principle is valid the-
oretically only if the parameters are statistically independent [25.3],
[25.4], [22.14]. For parameter-adaptive stochastic control the certain-
ty equivalence principle is not satisfied in general. However, it is
frequently used as an ad-hoc design principle [22.14].

Based on these principles two different types of adaptive controller


emerge [22.14]:

Certainty equivalent controllers


A controller which is designed by making use of the certainty equiva-
lence principle is called a 'certainty equivalent controller'. It is
then assumed - for the purpose of controller parameter calculation -
that the parameter or state estimates are identical with the actual
parameters or states. The information measures ~11 or J21 are used.
The controller does not take into account any uncertainty of the esti-
mates.

Cautious controllers
A controller which employs the separation principle in the design and
uses the parameter and state estimates together with their uncertain-
25.1 Introduction 405

ties is called a 'cautious controller'. Here the information measures


~12 or 122 are used. Because of the uncertainty of the estimates the
controller applies a cautious action on the process.

d) The criterion for controller design

o Duat adaptive aontrotters


The performance of self-optimizing adaptive controllers depends
mainly on the performance of process identification and on the
control algorithm applied. The process input must be determined
such that two objectives are simultaneously achieved:
- good compensation of current disturbances
- good future process identification.
This leads to the duat aontrotter of [25.5]. Both requirements may
be contradictory. If, for example, the process parameter estimates
are wrong, the controller should act cautiously, i.e. small chan-
ges in u(k), but to improve the parameter estimates large changes
in u(k) are required. Dual controllers therefore have to find an
appropriate compromise between the objectives. Hence the control-
ler design criterion has to take into account both the current
control signal and the future information '!.
o Nonduat adaptive aontrotters
Nondual controllers use only present and past signals and the
current information ~concerning the process. The controller de-
sign criteria most frequently used for nondual controllers have
been discussed in previous chapters. These were mostly quadratic
criteria or special criteria such as the principles of deadbeat
control, pole-zero cancellation or pole-assignment.

e) Control algorithms

The actual design of a control algorithm is of course performed before


implementation in a digital computer. Then it remains to calculate con-
troller parameters as functions of process parameters. Control algo-
rithms for adaptive control should satisfy:

(1) Closed loop identifiability condition 2


(2) Small computing and storage requirements for controller para-
meter calculation
(3) Applicability to many classes of process and signal.

The next section discusses which of the control algorithms meet these
re~irements for parameter-adaptive control. Within the class of self-
406 25. Parameter-adaptive Controllers

optimizing adaptive controllers based on process identification, non-


duaZ methods based on the aertainty equivaZenae prinaipZe and reaursive
parameter estimation have shown themselves to be successful both in
theory and practice. The resulting methods will be called parameter
adaptive aontroZ aZgorithms; they are also called seZf-tuning reguZa-
tors, e.g. [26.8], [26.13]. One could imagine there to be a distinction
between 'self-tuning' and 'adaptive', as the former appears to imply
constant process parameters. However, there is no sharp boundary bet-
ween the cases when considering their applicability, so the distinc-
tion is of secondary importance.

25.2 Suitable Control Algorithms

For minimal computational effort in calculating the controller para-


meters, the following control algorithms are preferred for parameter
adaptive control:

- deadbeat controller DB ( v) , DB ( v+ 1 )
- minimum variance controller MV3, MV4
- parameter-optimized controller iPC-j

More computation is required for:

- general linear controller with pole assignment LC-PA


- state controller sc
These control algorithms are now considered with regard to the identi-
fiability condition 2, Eq. (24.1-14), and the computational effort in-
volved. For control algorithms which theoretically cancel the process
poJes or zeros, one must distinguish if the controller is adjusted
exactly or not to the process. Modifications to achieve rapid control-
ler parameter calculation are also given.

25.2.1 Deadbeat Control Algorithms

The z-transfer function of DB(v) is

(25.2-1)

Its orders are v =rna and~ ~+d. Eq. (24.1-15) is for the case of
inexactly adjusted controller parameters
25.2 Suitable Control Algorithms 407

(25.2-2)

As no pole and zero cancel (p = 0), identifiability condition 2 is


satisfied. Eq. (25.2-2) changes with A= A and B= B to

D(z- 1 )
(25 .2-3)
A(z - 1 )

for exactly tuned controller parameters. In this case also no cancella-


tion arises and the process remains identifiable. The same result is
obtained for the deadbeat controller DB(v+1) of increased order where
v = ma+1 and~= ~+d+1.

25.2.2 Minimum Variance Controllers

Because of the assumed noise filter D/A, Eq. (24.1-2), only MV3 and
MV4 are of interest. The z-transfer function of MV3-d is, Eq. (14.2-12),

A -1 -dA -1 r A -1 (25.2-4)
zB(z )z F(z )~D(z )
1
with

(25.2-5)

The coefficients fi and li are given by the recursion formula [25.16]

fo
i-1
E fpai-p: i 1, ••. ,d
p=O
d
li di+d+1 - E a i+d+1-p f p'· i O, .•. ,m-1.
p=O

(ai = di =0 for i ~ m+1)

The orders are

v = max[md,ma] -

~ max[~ -1,md]
}d 0

v = max[m -d-1 m -1]


d ' a
} d ~ 1
~ max[~+d-1,md]

For d = 0 it follows from identifiability condition 2 that

(25.2-6)
408 25. Parameter-adaptive Controllers

and for d ~ 1

(25.2-7)

If the controller is not exactly tuned

D (25.2-8)

In general no common roots appear, p = o, and the identifiability con-


dition is satisfied for md ~~or md ~ ma+1 ford= 0 or d ~ 1. How-
ever, if the controller becomes exactly tuned to the process, A= A,
B= B, D = D

(25.2-9)

and p = md common roots appear, which means that the process is no lon-
ger identifiable ford= 0. If d ~ 1, Eq. (25.2-7) leads to

(25.2-10)

resulting in the requirement that

(25.2-11)

which is satisfied only for relatively large dead time. With r = 0 the
MV4-d controller arises, Eq. (14.2-13),

(25.2-12)

Identifiability for d = 0 in the untuned case is obtained with md~ma+1


and for d ~ 1 for any order. When tuning is exact p = md common roots

appear for r = 0, Eq. (25.2-9). Then the process is not identifiable


for d = 0. For d ~ 1

(25.2-13)

must again be satisfied. If the MV-controllers are multiplied by a


proportional and integral term, Eq. (14.3-3), the orders of v and~·

are increased by one and the process becomes identifiable for the same
conditions as for inexactly tuned MV3-d and MV4-d.
25.2 Suitable Control Algorithms 409

25.2.3 Parameter-optimized Controllers

A controller with the z-transfer function


-1 -v
u (z) qo+q1z + ... +qvz
GR(z) = e-{Z") (25.2-14)
w 1-z- 1
satisfies the identifiability condition Eq. (24.1-12) if its order is
related to the process by v ~ma-d and no common roots appear in Eq.
(24.1-15). Hence PID-controllers are suitable for processes where rna$
2+d. If the process possesses no deadtime the allowable maximum process
order is two. For higher process orders one can increase the sample
time so that the higher order parameters become close to zero and a mo-
del of order 2 results, see Table 3.7.1. Small time lags can often be
approximated by a dead time d. Therefore certain types of process may
be described by models or order m = 2 and a dead timed= 0,1,2, ...
There are several ways in which PID-controllers for parameter-adaptive
controllers can be designed.

a) Pole-assignment design

The transfer function for reference variable input is treated - see


Eq. (11.1-3). Form= 2 and d = 0,1,2, ... the characteristic equation
is
P ( z_, ) A ( z _, ) + Q ( z _, ) B ( z _, ) z -d 0

or

(1-z -1 )(1+a 1 z -1 +a 2 z -2 )+(q 0 +q 1 z -1 +q 2 z -2 )(b 1 z -1 +b 2 z -2 )z -d

-1 -2 -(4+d)
= 1 + a 1z + a 2z + ... + a 4 +dz = 0 (25.2-15)

The coefficients are:

d = 0:

a1 -1+a 1 +q 0 b 1

a2 a2-a1+q0b2+q1b1

a3 -a2 +q1b2+q2b1

a4 q2b2

d = 1:

a1 a 1-1

a2 a2-a1+qOb1

a3 -a2+qob2+q1b1
410 25. Parameter-adaptive Controllers

As 4 equations result to determine 3 parameters, only, 3 coefficients


have to be given, which can be obtained by pole-assignment, c.f. Eqs.
( 11. 1-14) , ( 11. 1-15) • The controller parameter is then simply obtained
by

d = 0:

d = 1:

The controller parameter q 0 can also be chosen as

q = 1
0 (1-a 1 ) (b 1 +b 2 )

c.f. Eq. (7.2-13), and only q 1 and q 2 need be calculated. The design
depends, of course, on the proper placement of selected poles.

b) Design by approximation of the deadbeat controller

As the extended order deadbeat controller DB(v+1) is easy to design,


and gives/good control in many cases with the specification of q 0 =
1/(1+a 1 )Ebi, a rapid design of a PID-controller

u (z) qo +q1z -1 +q2z -2


GR (z) = ew (z) (25.2-16)
1-z- 1

may be obtained by an approximation of the transient functions of the


controller DB(v+1), Eq. (7.2-11) and Eq. (7.2-14)
* _ u * (z) * * -1 * - (m+1)
_ qo+q1z + ••• +qm+1z
GR(z) - e<ZT- * -(d+ 1 ) * -(m+d+ 1 ) (25.2-17)
w 1 -p1+dz ••• -pm+d+1z

for k 0 and k + ~, c.f. Figures 5.2.2 and 7.2.2. The step response
25.2 Suitable Control Algorithms 411

of DB(v+1) is obtained from

m+1 * * m+1 *
u * (k) E p.+du (k-d-j)+ E qJ.e(k-j) (25. 2-18)
j=1 J j=O

fork= 0,1,2, .. assuming u * (k) 0 and ew(k) 0 for k < 0. Together


with Eq. (7.2-13) it follows that

*
u(O) = qo = (1-a )Eb •
1 (25.2-19)
1 i

The behaviour for large k is obtained by recursively solving of Eq.


(25.2-18). The increase o per sample time T0

o = lim [u * (k)-u * (k-1) J (25.2-20)


k->-oo

l
follows by

o "" u * (M) - *
u .(M-1)
or (25.2-21)
1 * (M)-u * (M-~)]
o ,.. ~[u

with M sufficiently large, for example M 4(m+d). Alternatively one


can calculate for k > m+d

o ,.. ~u
* (k) = u * (k)-u* (k-1) (25.2-22)

and stop if

~u
* (k)-~u
* (k-1) < EU
* (k) (25.2-23)

with for example E = 0.02. The increase 6 can also be calculated expli-
citely using

lim [u(k)-u(k-1) J (25.2-24)


k->-oo

with i = 1,2, ... ,m which follows from Eq. (25.2-17). The parameters of
the PID-controller 3PC-2 are obtained as follows, c.f. Figure 5.2.2:

*
qo (25.2-25)

u*(M)-Mo

q 0 -[u*(M)-Mo] (25 .2-26)

c) qo+q1+q2 = o

q1 = o-qo-q2. (25 .2-27)


412 25. Parameter-adaptive Controllers

The design effort for this PID-controller is relatively small. However,


this design should be restricted to processes with either no dead time
or only a small dead timed [25.16].

25.2.4 General Linear Controller with Pole-assignment

Section 11.1.1 shows that the controller parameters can be determined


uniquely from Eq. (11.1-21) for given poles if the controller polyno-
mials have orders v =rna and~= ~+d. If no roots cancel in Eq. (24.1-15)
the identifiability condition 2 is satisfied for d ~ 0. A drawback is
the relatively large computational effort.

25.2.5 State Controller

The derivation of the identifiability conditions in chapter 24 is ba-


sed on input/output controllers. The results can therefore be only
transferred to state controllers with state observers or state esti-
mators if the control algorithms can be put into an input/output repre-
sentation, c.f. section 8.7. The characteristic equation than has or-
der 1 ~2m, Eq. (8.7-19). Therefore the identifiability condition 2 is
-1
satisfied if no common roots appear in D(z ) = 0. The state controller
can be designed either by pole assignment or by recursive solution of
the matrix Riccati equation, for which relatively few recursion steps
may suffice, c.f. section 8.1.

Table 25.2.1 summarizes some properties of the different control algo-


rithms. Table 25.2.2 shows the computing and storage requirements.

Table 25.2.2 Computational effort and storage requirement for diffe-


rent control algorithms [25.16]

control controller parameter control algorithm


algorithm calculation
FORTRAN storage algebraic shift
statements [bytes] operations operations

DB(V) 12 171 4m+2 2m+d

DB (v+1) 19 328 4m+6 2m+d+2

MV4 32 508 4m+2d+2 2m+d+2

MV3 39 612 max[4m; 4m+2d-2] max[2m-1;2m+2d-2]

3PC-2 34 631 8 3 (Eq. 25.2-25/27

LCPA 78 1187 4m+2d-2 2m+d


IV
Table 25.2.1 Properties of different control algorithms with respect to application for parameter- lJ1
adaptive control IV

til
~
danger of evaluation for parameter- 1-'·
closed loop computational effort rt
control identifiability instability adaptive control Ill
tT
algorithm conditon 2 parameter for *) 1-'
operation (1)
satisfied calculation
()

suitable for asymptotically 0


deadbeat contr. very - ::I
yes medium A (z) = 0 stable processes rt
DB (v) , DB (v+1) small '1
0
1-'
min. variance - suitable for stochastic
d;;,md+1 small medium D (z) = 0 disturbances !!:'
contr. MV3-d 1-'
\.Q
0
min. variance - suitable for processes with '1
d;;,md+1 small medium B (z) = 0 1-'·
contr. MV4-d zeros inside the unit circle rt
-
D (z) = 0 ::r
and stochastic disturbances s
Ill

parameter- suitable,
small dependent on proper design
optimized v;;,m -d medium -
a (v=2)
contr. i-PC-j

linear contr. v=m suitable,


a large medium - if pole placement is no
LC-PA
].l=~+d problem

state contr. medium/ suitable,


yes large - if computational expense is
with observer large
no problem

*) compare Table 11.1.1 and Table 14.1.1

....
w
414 25. Parameter-adaptive Controllers

25.3 Appropriate Combinations of Parameter


Estimation and Control Algorithms (single input,
single output)
Parameter adaptive aontroZ aZgorithms can be designed on the basis of
the parameter estimation methods described in chapter 23 and chapter
24 and the control algorithms discussed in section 25.2. Necessary con-
ditions for convergence are that the parameter estimation algorithms
are suitable for process identification in closed loop and that the
control algorithms satisfy the closed loop identifiability condition.
Section 25.1 shows the many possibilities for combining identification
and control methods, making use for example of the separation principle,
leading to adaptive certainty equivalent or cautious controllers, or
the duality principle, resulting in adaptive dual controllers.

25.3.1 Certainty Equivalent Parameter-adaptive Controllers

This section considers parameter-adaptive control algorithms based on


the aertainty equivaZenae prinaipZe and which do not require external
perturbations for convergence. The discussion in the previous chapters
has shown that, as well as a suitable combination of estimation and
control algorithms, additional methods for d.c. value estimation of
signals and for compensation of offsets must be included. Therefore
certainty equivalent parameter-adaptive controllers consist (at this
stage) of suitable combinations of
(1) recursive parameter estimation algorithms + ~(k)
(2) d.c. value estimation + u00 (k), Y00 (k)
( 3) control algorithms+ u(k+1) = f[y(k);w(k);~(k) J
( 4) compensation for offsets, if lim e (k) f 0.
k+co W
The last chapter showed that the prediction error methods
- recursive least squares (RLS)
- recursive extended least squares (RELS)
- recursive maximum likelihood (RML)

are especially suitable for reaursive parameter estimation in closed


loop. They may be combined with aontroZ aZgorithms, for example
- a deadbeat controller DB(v), DB(v+1)
- a minimum variance controller MV3, MV4
- a parameter-optimized controller iPC-j
- a general linear controller with
pole assignment LCPA
- a state controller sc
25.3 Appropriate Combinations of Parameter Estimation 415

The methods discussed in section 23.2 can be used for treatment of the
d.c. values u00 and Y00 of the signals. Assuming that only stochastic
disturbances with E{v(k)} = 0 act on the loop, the d.c. values can be
estimated by simple averaging (method 2 in section 23.2), before the
adaptive control starts. Then both minimum variance controllers and
state controllers can be applied without additonal methods for remo-
ving of offsets, as no offset occurs. However, if the disturbances have
a non-zero mean (as in most cases) and there are also changes in the
reference variable w(k), the d.c. value must be taken into account and
the compensation for offsets must be considered for controllers with-
out integral action such as minimum variance controllers and state con-
trollers. The simplest way to reduce the d.c. value problem is to use
first order differences 6u(k) and 6y(k) in the parameter estimation
(method 1 in section 23.2). Offsets can then be avoided by adding a
pole at z 1 = to the estimated process model by multiplication with
S/(z-1) and by designing the controller for this extended model. How-
ever, this still leads to offsets for constant disturbances at the
process input and does not give the best control performance. Another
possibility is to replace y(k) by (y(k)-w(k)J and u(k) by 6u(k)=u(k)-
u(k-1) in both the parameter estimation and the control algorithm, as
proposed in [25.9]. This leads, however, to unnecessary changes of the
parameter estimates after setpoint changes and therefore to a negative
influence during a transient. Relatively good results have been ob-
tained by the estimation of a constant (method 3 in section 23.2).
Using Y00 = W(k) the d.c. value u00 can be easily calculated such that
offsets do not appear [25.15]. Then controllers without integral ac-
tion can be used directly.

The selection of suitable combinations of methods (1), (2), (3) and


(4) depends mainly on the following properties of the resulting para-
meter adaptive control algorithms:

a) stability of the parameter adaptive system


b) convergence and control behaviour near the
steady-state (asymptotic behaviour)
c) convergence and control behaviour during the
transient phase (short term behaviour)
d) computational effort
416 25. Parameter-adaptive Controllers

a) Stability

As the resulting parameter adaptive control systems are nonlinear,


time-varying and stochastic i t is very difficult to obtain general sta-
bility conditions. However, the following two conditions for stability
are obvious:

Stability condition 1 (necessary)


The closed loop is stable with the exactly tuned fixed controller.

Stability condition 2 (sufficient)


All parameter estimates required for designing the control algorithms
converge to their true values.

To satisfy the first condition, the pole zero cancellation problems,


which depend on the structure of the process and the controller, have
of course to be taken into account. Table 25.2.1 summarizes which pro-
cesses and controllers should not be combined to avoid instability.

With respect to the second condition the following combinations are


possible. As RLS gives only the process model B/A the controller de-
sign is in the first instance restricted to deterministic controllers
such as DB, iPC-j, LCPA and SC. RELS and RML provide both the process
model B/A and the noise model D/A, so that stochastic controllers such
as MV can be designed. However, RELS and RML can also be combined with.
the deterministic controllers, and RLS can be combined with the sto-
chastic controllers if D(z- 1 ) = 1 is specified, and the closed loop is
then stable. Therefore, based on the viewpoint of the process and noise
model structure, all combinations can be considered, making use of
appropriate modifications.

The required convergence to the true parameters presumes that


- the closed loop identifiability conditions are satisfied, and
- the parameter estimation yields consistent parameter estimates.
To discuss the closed loop identifiability conditions two types of
external signals are considered.

With stationary stochastic disturbances v(k) it is assumed that the


process and the noise filter are described by

y(z) (25.3-1)
25.3 Appropriate Combinations of Parameter Estimation 417

that the orders m and d are exactly known, that the forgetting factor
is A = 1 and that the reference variable w(k) = 0. For the assumed sto-
chastic disturbances the minimum variance controllers MV3 and MV4 are
suitable. During the transient phase, where the controllers are not ex-
actly tuned to the process and are assumed to be piecewise constant,
the identifiability condition 2

(24.1-14)

is satisfied for rna ~=rod= m if d ~ 0 for MV3 and d ~ 1 for MV4,


as p = 0, c. f. Eq. (25.2-6), (25.2-7). If the controllers can be assu-
med to be time varying any controller satisfies the closed loop iden-
tifiability (chapter 24). Therefore in both cases, the parameter esti-
mates converge to their true values as k increases

A(z- 1 ) + A(z- 1 )

B(z- 1 ) + B(z- 1 )

D(z- 1 ) -1
+ D(z ) .

However, if the control algorithms become exactly tuned, p = rod common


roots appear in Eq. (24.1-15) or Eq. (25.2-9), and the identifiability
condition Eq. (24.1-14) becomes violated, which means that no unique
solution is possible in parameter estimation (if the parameter estima-
tion were to startnow).Despite this, the parameter estimates give, for
k +

A(z- 1 ) A(z- 1 )

B(z- 1 ) B(z- 1 )

D(z- 1 ) D(z- 1 )

as this is the common solution of the parameter estimation during the


transient phase 0 < k < oo and the exactly tuned phase for k = oo. There-
fore the parameter adaptive control algorithms based on combinations
of RELS or RML and MV3 (d ~ 0) and MV4 (d ~ 1) may converge to the
exact controllers if the widened identifiability condition

(25.3-2)

is satisfied [25.16]. (The identifiability condition Eq. (24.1-14) is


only valid for fixed controllers.) An example of this convergence pro-
perty is shown in Figure 25.3.1. The parameter estimates in the closed
loop with the exactly tuned and fixed control algorithm do not converge
whereas good convergence is obtained by the parameter adaptive algo-
rithm.
418 25. Parameter-adaptive Controllers

0.6

0.2

1500 2000 k
-0.2 .r---~-..r--- .......

-0.6
a,

-1.

1500 2000 k

Figure 25.3.1 RML parameter estimates of a first order process (a 1 =


-0.8; b1 = 0.2; d1 = 0.5) operating in closed loop with
a) fixed, exactly tuned control algorithm MV3 (r = 0.05)
b) parameter adaptive controller RML/MV3 (A 0 = 0.99;
A(O) = 0.95; r = 0.05)

In [25.16] it is shown that for the combinations of RELS or RML with


MV3 convergence to the exact controller is also achieved for d = 0. If
the RLS parameter estimation method is applied to processes described
by Eq. (25.3-1), biased process parameters are obtained and no conver-
gence to the exactly tuned controller MV3 or others (DB) can be expec-
ted. However, the combination of RLS and MV4 converges to the exact
controller MV4 [25.8], which indicates that the second stability con-
dition is only sufficient.

Now changes in the referenae variable w(k) are considered and the
noise is assumed to be zero, v(k) = 0. The process is described by
-1
B(z ) -d ( ) (25.3-3)
y(z) -1 z u z .
A(z )

Further assumptions are that the orders m and d are exactly known, the
forgetting factor is A = 1 and the reference variable w(k) is (deter-
25.3 Appropriate Combinations of Parameter Estimation 419

ministic or stochastic) persistently exciting of order n ~ m. As the


reference variable w(k) can be interpreted as an external perturbation
which acts on the process outside of the measurement of u(k) and y(k),
the identifiability condition 2 is invalid in this case, c.f. section
24.2. The parameter estimates then converge to their true values and
for k -+ oo

is achieved, applying proper parameter estimation methods, for example


RELS, RML or RLS. Hence combinations with any controller designed using
-1 -1
A(z } and B(z ) converge to the exactly tuned controllers. In the
case of RELS and RML the estimates of D(z- 1 } result in arbitrary values
because of missing excitation of the noise filter. The minimum variance
control algorithms should therefore only be applied with a fixed poly-
nomial D(z- 1 ). For D(z- 1 ) = 1 RLS should be used instead of RELS or
RML.

If both changes of the reference variable w(k) and stochastic distur-


bances v(k) act on the parameter adaptive control system, combinations
of appropriate parameter estimation methods, for example RELS and RML,
with any controller converge to the exact controllers, as closed loop
identifiability is satisfied and there is no bias in the parameter es-
timates. To obtain consistent parameter estimates proper parameter es-
timation methods must be used and the convergence conditions stated in
chapter 23 must be met, which includes that the process input should
be persistently exciting of enough order.

This discussion of stability conditions has shown that the following


combinations of parameter estimation methods and control algorithms
can be recommended:

Stochastic disturbances v(k):


RELS; RML/MV3; MV4
RLS/MV4

Stochastic or deterministic reference variable w(k):


RLS/DB; iPC-j; LCPA; SC; MV
(MV3 or MV4 with D(z - 1 ) 1)

Extensive simulation and experience with real processes have shown that
parameter adaptive control algorithms are stable if the discussed con-
420 25. Parameter-adaptive Controllers

ditions are satisfied. This can be explained heuristically. If the pro-


cess model is wrong such that the closed loop poles move towards the
stability boundary, the process input amplitudes become larger. With
the assumption that the process input changes are persistently exci-
ting of sufficient order, c.f. chapter 23.2, and large enough compared
with the acting noise, the identified model improves. Then the control-
ler also improves and gives better closed loop behaviour. The assump-
tion of persistently exciting input signals of order m is satisfied if
the autocorrelation functions for example are related by ~uu(O)>~uu(1)

> ••• >~uu(m) or if m harmonics are involved. Even if the process signal
has the persistently exciting property only for a short period, the im-
provement in the process model may be sufficient. The results based on
simulation and experience, are not so comprehensive that a general sta-
bility proof is possible. Therefore new conditions for global stabili-
ty of the parameter adaptive control systems could contribute a lot.
[25.12] gives a review of this stability problem. Based on convergence
analysis of recursive parameter estimation methods some general condi-
tions for the combination of RLS, RELS, RML with MV controllers for
stochastic disturbances are given - see the next section. A further
reference is [25.20].

b) Convergence and control behaviour near the steady-state

Convergence to the steady-state is related to asymptotic stability. If


the parameter adaptive controller converges to the fixed controller
the control behaviour near the steady-state is also known approximate-
ly. Asymptotic behaviour near the steady-state could only be analysed
until now for special combinations, .c.f. section 25.3.2.

c) Control performance and convergence during the transient phase

Initial convergence depends on the parameter estimation algorithm, the


control algorithm, the initial values for the recursive parameter es-
timation and the acting external signals. At present, simulations are
the only economical way to obtain some results, see section 25.4.

d) Computational effort

Computational storage requirements depend mainly on those of the para-


meter estimation algorithm, c.f. Table 23.8.1, and the control algo-
rithm design, Table 25.2.2. The execution time using typical 16 bits
process computers and programming in FORTRAN is about 10 to 200 msec,
25.3 Appropriate Combinations of Parameter Estimation 421

and for an 8 bits microcomputer and assembler programming about 500 m-


sec.

In the next section some selected parameter-adaptive control algorithms


are considered in more detail.

25.3.2 Stochastic Parameter-adaptive Controllers

RLS/MV4
This was one of the first proposals [25.7], [25.8], [25.9]. D(z- 1 )=1
is assumed for the process model. Hence

(25.3-4)

and the corresponding minimum variance controller MV4-d given by Eq.


(14.2-13) or Eq. (25.2-12) is

G (z) = u(z) (25.3-5)


R y(z) A -1 A -1 o
zB(z )F(z )

The parameters of L(z- 1 ) and F(z- 1 ) follow from Eqs. (14,2-5) to


(14.2-7) by comparing the coefficients of

(25.3-6)

as in Example 14.2.1, but with d 1 , ••• ,dm = 0. Ford~ 1 all process


parameters are identifiable, as follows from Eq. (25.2-13) with md=O.
This can also be seen from the following considerations. According to
Eqs. (24.1-5), (24.1-14) and p =rod' section 25.2.2,

j = 1-p (25.3-7)

and for rod = 0

j = rna + ~ + d - 1 (25. 3-8)

parameters can be estimated. Applying RLS all rna+~ parameters can on-
ly be estimated ford~ 1. If d = 0 one parameter must be assumed known.

To reduce the computational effort of Eq. (25.3-6) parameter estima-


tion is performed for a modified model [25.8], [25.9]. Eq. (25.3-4) is
multiplied by F(z- 1 )

-d
FAy - BFz u = Fv (25.3-9)
422 25. Parameter-adaptive Controllers

and Eq. (25.3-6) is inserted giving

(25. 3-10)
With Eq. (25.3-5) one obtains

y(z) = -Q(z- 1 )z-(d+ 1 )y(z) + P(z- 1 )z-(d+ 1 )u(z) + F(z- 1 )v(z).

(25.3-11)
This modified model contains the controller parameters qi and pi which
can be directly estimated by applying RLS. For this Eq. (25.3-11) is
written as a difference equation with v

y(k) = -q 0 y(k-d-1)- ... -qvy(k-d-ma)

+p 0 [u(k-d-1)+ ... +p~u(k-~) J


+E: (k-d-1). (25. 3-12)

s (z) = F(z- 1 )v(z) is a moving average process of order d. Eq. (25.3-12)


contains rna parameters qi and ~+d parameters pi' so that ma+~+d pa-
rameters must be estimated. Instead of the rna+~ parameters of the mo-
del of Eq. (25.3-4) d more parameters must now be estimated. As only
ma+~+d-1 parameters can be estimated, one parameter must always be
assumed known, if the modified model of Eq. (25.3-11) is used. For
example p 0 = b 1 is assumed known. Then it follows that
T
y(k) = ~ (k-d)~ + p 0 u(k-d-1) + s(k-d-1) (25. 3-13)

with

-8 = [ qo • · • qv P 1 • • • P~ J (25.3-14)
T
~ (k-d) = [-y(k-d-1) ... -y(k-d-ma) p 0 u(k-d-2) .•. p 0 u(k-~) J.
(25. 3-15)
The RLS method is applied to Eq. (25.3-13) giving

T
~ (k+1) J.
A A

~(k) + y(k)[y(k+1)-p 0 u(k-d)-~ (k-d+1)~(k) (25.3-16)

The parameter estimates are inserted into the control algorithm Eq.
(25.3-5) and the new process input is calculated by

u(k+1) = [q0 y(k+1)+ ... +qvy(k-ma+2)]


Po

- p 1u(k)- ... -p~u(k-~-d+2). (25. 3-17)


25.3 Appropriate Combinations of Parameter Estimation 423

Some special properties of this 'selftuning algorithm' are:

a) The application of the least squares method to the modified model


Eq. (25.3-11) leads to unbiased estimates.

b) The parameter-adaptive algorithm which is designed for D(z- 1 ) 1,


when applied to a process with
-1 -rod
D(z- 1 ) = 1 + d 1 z + ••• + d z
rod

leads to biased parameter estimates. It has been shown in [25.8],


however, that the controller Eq. (25.3-17) converges to the optimal
minimum variance controller, if the parameter estimates converge.
Then the output signal y(k) is a moving average of order d, c.f.
Eq. (14.2-19).

c) Exact knowledge of the assumed parameter p 0 b 1 is unnecessary


[25.8], [25.11].

d) The parameter-adaptive controller RLS/MV4 may only be applied to


processes with zeros within the unit circle, c.f. Table 25.2.1.

e) Offsets can be avoided by inserting a pole z 1 = 1 into the control-


ler. This is achieved by first adding this pole to the process and
in Eq. (25.3-13) to Eq. (25.3-16) the following are replaced

y(k) by ew(k) y(k) - w(k)


u (k) by liU (k) u(k) - u(k-1)

The control algorithm then becomes


1 A A
llu(k+1) = --[q0 e (k+1)+ ••• +q e (k-m +2)]
Po w v w a
- p1 llu(k)- ••• -p~llu(k-~-d+2) (25. 3-18)

and the process input is calculated by

u(k+1) = u(k) + llu(k+1) (25.3-19)

c.f. [25.9]. Convergence properties of this selftuning algorithm


are given in [25.10]. An advantage for theoretical analysis is that
the results of convergence analysis for recursive parameter estima-
tion can be directly applied to the modified model, Eq. (25.3-11),
[23.17]. In [25.12] it is shown for a first order process that this
RLS/MV4 is globally asymptotically stable for 0 < b 1 /p 0 < 2 and in
424 25. Parameter-adaptive Controllers

[25.17] for the case of d = 0 that H(z) = 1/D(z)-1/2 must be positive


real. This means that H(z) is a transfer function which can be rea-
lized with purely passive elements: H(z) is stable and Re{H(z)} > 0
iwT 0
for z e and -~ < wT 0 < ~.

This implies ID(eiwTo)-1 I < 1, i.e. that the error involved in assu-
ming D(z- 1 ) = 1 should have a frequency response lying within the
unit circle and hence does not magnify any frequency.

RLS/MV3, RML/MV3, RELS/MV3


As the process input is unweighted with the minimum variance controller
MV4, the process input signals can oscillate with excessive amplitude
with many processes. As shown in chapter 14 the extended minimum vari-
ance controller MV3 allows one to adjust the manipulating effort to
the process and can also be used with processes with zeros outside the
unit circle. The combination RLS/MV3 was proposed in [25.13] and ana-
lysed in [25.18], [25.19]. RML/MV3 and RELS/MV3 have been tested in
[25.15].

Example 25.3.1: Equations for programming a stochastic parameter-adap-


tive controller

If the parameter-adaptive controller is programmed in a modular way by


separating parameter estimation and controller design, a typical exam-
ple is:

1. Removal of d.c. values by differencing (Eq. (23.2-25))

llu(k) U(k)- U(k-1)


lly(k) Y(k) - Y(k-1)
llu(k) u(k); lly(k) = y(k)

2. Parameter estimation (RLS, RELS, RML)


c.f. Table 23.7.1.
T ~
a) e(k) = y(k) - ~ (k)~(k-1)

b) ~(k) = ~(k-1) + y(k-1)e(k)


c) Inserting y(k) and u(k-d) into ~T(k+1)
d) y(k) = ~(k+1)~(k)~(k+1)

e) ~(k+1) = [l-y(k)~T(k+1) ]~(k)t

3. Controller parameter calculation (MV3)


The parameters of the extended minimum variance controller follow
from Eq. (25.2-4) and Eq. (25.2-5), c.f. Example 14.2.1.
25.3 Appropriate Combinations of Parameter Estimation 425

4. Calculation of the new manipulated variable


New controlled variable: y {k+1)
- - a)
b) New control deviation ew{k+1) = W(k+1) - y {k+1)
c) New d.c. value of y Yoo<k+1l W(k+1)
d) New d.c. value of u uoo<k+1l A(1J/B(1JY 00 (k+1l

+ .~ ai]/~~
1.=1 1.=1 r
bi koo<k+1)

--e) New manipulated variable: U(k+1) + u(k-1) + p 1u(k-2)

+ Pm+d- 1u(k-m-d+1)
- q 0 ew(k) - q 1ew(k-1) -
- qm_ 1ew(k-m+1)

5. Cycle
a) Replace y(k+1) by y(k) and u{k+1) by u(k).
b) Step to 1.

Notice that the old parameters ~{k) are used to calculate the process
input u(k+1) in order to save computing time between 4. a) and e).

25.3.2 Deterministic Parameter-adaptive Controllers

Deterministic controllers are frequently designed for step changes of


the reference variable - this is obvious for servo control systems -
but it is also used for regulators because the resulting control beha-
viour can be easily tested. Moreover, reference variable steps can
accelerate the convergence during the initial adaptation phase.

RLS-DB
A particularly simple parameter-adaptive controller is obtained by com-
bining RLS and DB(v) or, better, DB(v+1). The design effort for DB is
very small and no offset problem accurs. The adaptive algorithm is ge-
nerated by combining the parameter estimation algorithms given in Table
23.1 and the controller parameter calculations stated in chapter 8.

Other appropriate deterministic parameter-adaptive controllers with


more computational effort are RLS/3PC-3, RLS/SC and lU.S/LCPA.
426 25. Parameter-adaptive Controllers

25.4 Comparison by Simulation of Different


Parameter-adaptive Controllers

To compare the behaviour of various parameter-adaptive control algo-


rithms two parameter estimation methods were combined with six control
algorithms according to Table 25.4.1. They were programmed on a process
computer HP21MX in FORTRAN [25.15], [25.16] and tested in closed loop
with processes simulated on an analog computer. The a priori knowledge
for the parameter-adaptive control algorithms consists in the order rn
and the time delay d of the process model and the sample time T 0 . De-
pending on the control algorithm a weighting factor r or r' must be
chosen (MV3, DB(v+1), 3PC-3) or the poles must be placed (LC-PA) or
nothing more need be specified (MV4, DB(v+1)). Furthermore, the for-
getting factor A must be chosen.

Table 25.4.1 The investigated parameter-adaptive controllers

~trol algorithm
stochastic deterministic
p
estimation ~ MV4 MV3 DB(v) DB (v+1) 3PC-3 LCPA
1) 1)
RLS x3) x3) X X X X

RML x3 J x3) x2) x2) x2) x2)

1): D(z- 1 ) = 1 for controller design.


2): D(z- 1 ) not used for controller design.
3): For compensation of offsets d.c. value method 3, section 23.2,
applied.

First a second order process with continuous transfer function

1
(25.4-1)
GP(s) = (1+3.75s) (1+2.5s)

was used with T 0 = 2 sec (test process VII, see Appendix). The z-trans-
fer function with zero-order hold is
0.1387z- 1 +0.0889z- 2
Y.J.& (25. 4-2)
u(z) 1-1.036z- 1 +0.2636z_ 2 .

Only a second order model was chosen in order to obtain good parameter
estimates so enabling direct comparison with the exact parameters.
(This is difficult for higher order processes, even if the input/out-
25.4 Simulation of Different Parameter-adaptive Controllers 427

put behaviour fits well [23.9]. The d.c. values are zero.

Stochastic disturbances

A correlated reproducible noise was generated using the noise filter

n(z) -1 -2
+0.05z +0.8000z
v(z) -1 -2
1-1.036z +0.2636z

Some examples using two fixed and two parameter-adaptive control algo-
rithms are shown in Figure 25.4.1. The fixed controllers were designed
with the exact process parameters. For the parameter-adaptive algo-
rithms the forgetting factor was chosen to :\ = 0.98. This example shows
that:

- good control is achieved by all controllers


- after about 20 samples the stochastic parameter-adaptive control
algorithm RML/WJ3 shows about the same control performance as the
fixed HV3;
- after about 15 samples the variations of the output signal y(k)
are even smaller with RHL/DB(\!) than with the fixed DB(\!).

The process parameter estimates ai(k) and bi(k) are represented in


Figure 25.4.2. After initial variations they converge approximately to
the true values. The initial variations of the resulting gain
K = L:b.l I ( 1 + L:;;,.l ) are larger and in the case of RML/DB ( \!) they last
longer. But despite this the control performance is good.

A survey of the behaviour of all 12 parameter-adaptive control algo-


rithms shows [25.16]:

The control performance after about 20 samples

- is almost the same as the fixed, exactly tuned stochastic con-


trollers. This holds especially for RLS/MV4, RML/MV4, Rrffi/MV3;

- is smaller than with the fixed, exactly adjusted deterministic


controllers (except RLS/DB(\!+1). This indicates that these con-
trollers (which are not designed for stochastic disturbances) are
better tuned to the stochastic signals by the adaptation.

The convergence of the noise filter parameters di is much slower than


for the process parameters ;;,i and bi. Quickest overall adaptation was
achieved with RLS. Simulations have shown that the control performance
could not be improved by additional external perturbations.
428 25. Parameter-adaptive Controllers

rdkl

a)

y (k)

y (k) u (k)
u (k)

b)

Umax for k ~ 10
y (k)
u (k)

c)
100k

y(k)
u(k)

d)

y (k) Umax for k! 10

u (k) - _f

e)

Figure 25.4.1 Input signal u(k) and output signal y(k) for stochastic
disturbances. ( y(k) is drawn stepwise).
a) no control y(k) = n(k)
b) fixed controller MV3 (r=0 . 01)
c) adaptive controller RML/MV3 (r=0.01)
d) fixed controller DB( v )
e) adaptive controller RML/DB( v )
a,
a2
a, IV
Ul
02 a2
- - -- a2 .1:>

100 Ul
- ------- -------a.J _____ _ a2
~"''
100 k
- -a,-- ------ ----- a, I-'
Ill
rt
-- --- ~----------~,--------- -- a,
r ------~~~~~ "''0::l
0
Hl
~,
b2 0
6, r., Hl
"''
Hl
b2 o.qh~=--=-= =-:-:: -- - _=:::_ _- --= :::::.:::.:: ~~ (1)
t1
(1)
62 100 k ::l
rt
b, b,
L_ '0
Ill
62 100 'b2 t1
K Ill
~
rt
(1)
t1
I
K Ill
p.
Ill
'0
rt
"'<:'
(1)

(")
100 k 0
::l
rt
t1
0
I-'
I-'
(1)
K t1
{Jl

100

Figure 25.4.2 Parameter estimates ~. (k) and b. (k) and gain factor K(k) in the case of stoehastie
disturbanees ~ ~ .1:>
a) RML/MV3 (r=0.01) b) RML/DB(v) IV
\0
430 25. Parameter-adaptive Controllers

Deterministic reference value changes


Figures 25.4.3 and 25.4.4 show:
- after closing the loop, process input changes are generated by
the adaptive controller, leading to an approximate model, so that
the first step change gives acceptable control;
- after the second step the parameters differ only slightly from
their exact values. The gain factor also converges quickly to the
exact value;
- the control behaviour of the adaptive and exactly tuned fixed
controllers is approximately the same after the second step.

Different processes
Various parameter-adaptive control algorithms were applied to differ-
ent types of stable and unstable, proportional and integral action,
minimum phase and nonminimum phase processes, as given in Table 25.4.2.
Figure 25.4.5 presents the results for step changes in the reference
value with the best algorithms in each case. For all proportional ac-
tion and stable processes RLS/DB show quick adaptation to exact tuning.
With RLS/MV3 a stable closed loop can be achieved for the integral ac-
ting and the unstable processes.

These simulations may give a first insight into how the parameter-adap-
tive control algorithms work in conjunction with different signals and
different processes. They have shown that convergence to the true pa-
rameters is not a necessary condition for stable adaptive control.

Based on these and other simulations and applications to real process-


es, the applicability of the discussed parameter-adaptive algorithms
is summarized in Table 25.4.3. Instead of RML, RELS can also be tried
and is computationally simpler.
25.4 Simulation of Different Parameter-adaptive Controllers 4 31

100 k

y (K)
u (K)
w(K) v U(K)

I/ Y(K)
1
~ [J
ll [']_
100 k

Y(K) I
U(K)
W(K)

100 k

Y(K)
U(K)
W(K)

100 k

y (K)
u (K)
w(K) !--U(K)

ILY(K)
1
ru
"
I" "
100 k

Figure 25.4.3 Input signal U(k) and output signal Y(k) for
a) step changes in the reference value W(k)
b) fixed controller MV3 (r=0 . 025). D(z-1)=1.
c) adaptive controller RLS/MV3 (r=0.025)
d) fixed controller DB( v )
e) adaptive controller RLS/DB(v)
w
"'"
IV

a, a,
a? a,
_/~
-- --- - - - --- - - - -- _,..:.02-- ~0 2
100 k ~
a, 100
·<;;, a,

i),
6,

b, IV
l11
0.1

100 k '0
AI
I"!
AI
s
Ill
R R rt
Ill
I"!
I
AI
p.
AI
"0
rt
100 k
H
-+L '_,
100 k
1-'·
<
Ill
()
0
::s
rt
I"!
0
Figure 25.4.4 Parameter estimates ai(k) and bi(k) and gain factor K(k) with (deterministic) reference 1-'
1-'
value step changes Ill
a) RLS/MV3 (r=0.025) b) RLS/DB(v) I"!
(IJ
N
U1

ol:>

Table 25.4.2 Different simulated processes {fJ


f-'·

sample
~
f-'
process s-transfer function z-transfer function characterization Ill
time T0 rt
f-'·
0
1 0.1387z- 1 +0.0889z- 2 low-pass behaviour ::s
1 G1 ( s) 2 .o G1 (z) . -1 -2
1-1.036z +0.2636z 0
(1+2.5s) (1+3.75s) Hl

1 -2 3 low-pass behav.Lour; 1::1


0.0186z +0.0486z +0.0078z f-'·
1 2.0 G2 (z) one zero outside Hl
2 G2 (s) Hl
(1+2.5s) (1+3. 75s) (1+5s) 1-1.7063z- 1+0.958z- 2 -o.1767z 3 unit circle of the ro
z-plane li
ro
::s
rt
-1 -2 -3 damped oscillatory
1+2s 0.1098z +0.0792z -0.0229z behaviour 'CJ
3 G3 (s) 3.0 G3 (z) Ill
1-1.654z- 1 +1.022z- 2 -0.2019z- 3 li
( 1+3s) (25s 2 +5s+1) Ill
sCD
-3 process with rt
-9s 3.0 ro
4 G4 ( s) =G 3 ( s) • e G4 ( z) =G 3 ( z) · z time delay li
I
Ill
-1 -2 p,
-0.102z +0.173z Ill
1-4s 2.0 G5 (z) all-pass behaviour '"d
5 G5 (s) -1 -2 rt
(1+4s) (1+10s) 1-1.425z +0.496z f-'·
~
1 0.0088z- 1 +0.0086z- 2 integral behaviour n
6 G6 (s) o. 3 G6 (z) -1 -2 0
s(1+5s) 1-1.9418z +0.9418z ::s
rt
li
-1 -2 -3 oscillating unstable 0
f-'
s+0.03 0.1964z +0.0001z -0.1892z behaviour (model of f-'
7 G7 (s) 0. 5 G7 (z) -1 -2 -3 ro
( 1+ 2 s) ( s 2 -0. 3 5 s+O. 1 5) 1-2.930z +2.866z -0.9277z a helicopter) li
rn
-1 -G
1 G (z)=-0.0132z -0.0139z unstable behaviour
8 G8 (s) 0.5
(1+5s) (1-2s) 8 1-2.1889z- 1+1.1618z- 2

ol:>
w
w
434 25. Parameter-adaptive Controllers

process transient parameter-adapt i v e cont rol


function
ylkl ylkl
ulkl
wlkl

RLS/081
CD
ru
10 u so 100 k

ylkl ylkl
ulkl
wlk l

RLS/08 2
1 ~

~
10 k so 100 k

J
.-I
0
1-<

b
+.1
1::
y(kl 0
u(kl u
wlkl
Q)
RLS/081 :>
® 1
·.-l
+.1
0..
rrJ

"'
IS k 7S
rrJ
I
1-<
Q)

b
+.1
Q)

~
1-<
RLS/08 1 rrJ
1
0..

15 k 15 "'.::
rrJ
.::
0
·.-l
+.1
u
.::
ylkl ylkl ::lN
ulkl 44•
w [k ) +.1 • ""
>::lfl
Q)N
·.-l
Ul Q)
>:.-I
rrJ.Q
1-< rrJ
E-tE-t

® RLS IDB 2

~~
10
wur so 100 k
25.4 Simulation of Different Parameter-adaptive Controllers 435

RLS/ MV 3
®
165

ylk l
ulkl
wlkl

y(k) ylkl
u lkl
-Mkl

® RLS/MV3

165

Fig. 25.4.5 continuation

Table 25.4.3 Applicability of parameter-adaptive controllers as a


function of the type of process and signals

type of processes type of disturbances

asymptotic integ ra l zeros outside stochastic determinist.


stable behaviour unit circle n(k) w(k)

RLS/DB1
(2)
X - X - X

RLS/MV3 X X X X X

RLS/MV4 X - - X -
RML/MV3 X X - X X

RML/MV4 X - - X -
436 25. Parameter-adaptive Controllers

25.5 Choice of A Priori Factors

To start the parameter-adaptive control algorithms the following fac-


tors must be specified initially:

T0 sample time

m process model order

a process model dead time

A forgetting factor

r process input weighting factor

When applying parameter-optimized controllers it is seen that the


control is not very sensitive to the choice of the sample time T0 • For
proportional action processes good control can be obtained with Pin-
controllers within the range

(25.5-1)

where T 95 is the 95 % settling time of the process step response, c.f.


chapter 5. However, suitable sample times for deadbeat and minimum va-
riance controllers must be chosen more carefully. In particular, too
small sample intervals must be avoided in order to prevent excessive
process input changes. Simulations with a process of order m0 = 3 have
shown that the adaptive control was insensitive to wrong model orders
within the range

(25.5-2)

This indicates that the order needs not be exactly known. However, the
adaptive control algorithms are sensitive to the choice of the dead
time d. If d is unknown or changes with time, the control can become
either poor or unstable, but this can be overcome by including dead
time estimation [25.21).

The choice of the forgetting faetor A in the parameter estimation de-


pends on the speed of the process parameter changes, the model order
and the type of disturbances. For constant or very slowly time varying
processes A = 0.99 is recommended. For time varying processes and sto-
chastic disturbances 0.95 ~ A ~ 0.99 and with stepwise reference vari-
able changes 0.85 ~ A ~ 0.90 have been shown to be acceptable, where
the smaller values are valid for lower model orders (m = 1,2) and the
larger values for higher orders.
25.5 Choice of A Priori Factors 437

The influence of the weighting factor r on the manipulated variable can


be estimated by looking at the first input u(O) after a setpoint step
w(k) = w0 .1 (k). For the closed loop then gives u(O) = q 0 w0 . Therefore
q 0 is a measure for the process input. For the DB(v+1) controller there
is a linear relationship between q 0 and r'

(O:>r':>1) (25.5-3)

where qOmin = 1/(1-a 1 )Lbi, qomax= 1/Lbi, c.f. sections 7.2 and 20.2. In
the case of MV3 q 0 depends hyperbolically on r/b 1 for r/b 1 > b 1 , Eqs.
(14.1-25), (14.1-27). With q 0 = qomax = 1 0 /b 1 for r = 0 one obtains,
Eq • ( 1 4• 1 - 2 7 ) I

2
r" . r" > 1 (25. 5-4)
1+r/b 1

Therefore the weighting factor r must be related to b~. To obtain r" =


0.5 or 0.25 one has to select r/b~ = 1 or 3.
438 25. Parameter-adaptive Controllers

25.6 Examples of Applications

Various case studies have already shown the applicability of parameter


adaptive control algorithms to industrial and pilot processes. A summa-
ry of the application of the 'self-tuning regulators' RLS/MV4 to the
control of the moisture content of a paper machine and the input of an
ore crusher is given in [25.12]. The same type has also been applied
successfully to autopilots of tankers [25.22] and the titan dioxide
content in a kiln [25.23]. The application of RLS/MV3 microprocessor
based selftuners to the control of the room temperature, pH-value and
temperature of a batch chemical reactor is reported in [25.24]. RLS/MV3
and RLS/DB algorithms have been programmed on a microcomputer and appli-
ed to an airheater [25.25] and [25.26]. The application of RLS/MV4 and
RLS/SC to a pH-neutralisation process is described in [25.27].

In the following, two examples are considered of the parameter-adaptive


control of an airheater and a pH-process. For both processes the sta-
tic and dynamic behaviour depends on the operating point because of
nonlinearities.

25.6.1 Adaptive Control of an Airheater

Figure 25.6.1 is the schematic of an air-conditioning plant. The gain


K₁₁ of the airheater plant, with the position u₁ of the split range valve
for the warm water flow as input and the air temperature ϑ as output,
is shown in Figure 25.6.2. It changes considerably (about 1:10) with
the position u₁ and the air flow. Moreover, the dynamic behaviour changes
considerably for these operating points and with the direction of
the process input (nonlinear behaviour). A RLS/DB(ν+1) algorithm was
used, implemented in an 8 bit microcomputer controller [25.25], [25.26].
Figure 25.6.3 shows that the adaptive controller stabilizes the loop
at the setpoint after about 10 samples. The following step changes of
the setpoint and of the air flow indicate good control. The process input
u₁ varies between 1 and 7 V, i.e. almost through the whole range
of the valve. The different amplitudes of the process inputs after setpoint
changes of the same size (2 K) show the adaptation of the controller
parameters. A fixed analog PI-controller, tuned at w = 21 °C
and M = 400 m³/h, gives acceptable behaviour only at that operating
point. If the setpoint is increased the response becomes too slow, and
if the air flow decreases to 300 m³/h the control becomes unstable
[25.26].

Figure 25.6.1 Schematic of an air-conditioning plant (fan, air heater with heat exchanger, air humidifier with water sprayer, and the digital parameter-adaptive controller).
ϑ: air temperature; φ: relative humidity; M: air flow rate; u₁: position of the inlet water valve; u₂: position of the spray water valve

Figure 25.6.2 Gain K₁₁ = Δϑ(∞)/ΔU₁(∞) as a function of the valve position u₁ for u₂ = −1 V (curves for air flows M = 300 m³/h and M = 550 m³/h)

Figure 25.6.3 Adaptive control of the air temperature ϑ for constant spray water flow, changing reference values w(k) and changing air flow M. Adaptive controller: RLS/DB(ν+1), m = 3; d = 0; λ = 0.9; T₀ = 70 sec

25.6.2 Adaptive Control of a pH-Process

Figure 25.6.4 shows the scheme of a pilot pH-process where hydrochloric


acid and sodium hydroxide base are mixed with neutral water. The mani-
pulated variable is the base flow (changed by the stroke of the piston
pump), the controlled variable the pH-value, measured with a glass el-
ectrode in the output tube of a stirred tank. As is well-known, the pH-
process has a static nonlinear characteristic given by the titration
curve. The resulting gain K as a function of the base flow is shown in
Figure 25.6.5. It varies by about 1:6. Fig.25.6.6 a) shows the adaptive
control with RLS/DB(v+1). At t = 0 min the loop is closed with Q(O)=Q.
After about 2 minutes the control system is stabilized and the setpoint
pH = 7 is reached. The response to the setpoint changes in different
directions shows relatively short settling time and only small or no
overshoot. The response to a disturbance in the acid flow at t = 14 min
also indicates good control. The estimates of the gain during that run
vary between 0.023 at t = 5 min and 0.144 at t = 10 min, i.e. 1:6. The
next figure, Figure 25.6.6 b), shows the adaptive control with RLS/MV3.
Before starting the control, a brief open loop identification is ini-
tiated applying a few step changes (PRBS) during the first 2 minutes.
In this way a better starting model for the adaptive controller is ob-
tained, giving well defined initial changes of the process input. After
closing the adaptive loop, the pH-value is stabilized at pH = 7. Step
changes of the setpoint, the acid flow and the neutral water flow show
good control. More details are given in [25.28]. A comparison with a
fixed analog PI-controller demonstrates the considerably improved control
achieved by the adaptive controllers. Both parameter-adaptive controllers
RLS/DB(ν+1) and RLS/MV3 are suitable for this pH control.

These examples show that parameter-adaptive controllers can be used to


tune the controller parameters for a given operating point and to adapt
to the changing behaviour of nonlinear processes (shown by setpoint
changes) and time varying behaviour (shown by external disturbances
like mass flow changes). A comparison with fixed controllers has shown
that the control performance is significantly better with the adaptive
controllers and that much time can be saved by selftuning instead of
manually tuning by use of tuning rules.

Figure 25.6.4 Schematic of the pH-process

Figure 25.6.5 Gain K = ΔpH(∞)/ΔM_b as a function of the base flow u [%] (K in pH/%)


~ [pH)1 N
9 -+~=-~G~~--~~ lJ1
17 --= I "'-
: (\ 0 - l,_,...,..
r ..,...._:--l.../ ~ t r . "'M
.. v .. 5 10 15 20 25 (\-.-, 3~ j t[min) >:
6 Pi
~ s
'CI
f-'
ro
U[ l/h) ~ (I}
24
0
H1

:t-
1:~ ......._,______-~ ~ 'CI
'CI
5 10 1S 20 25 30 t(min) f-'
1-'·
AM., [ \) 0
Pi
rt
a) 5 1~ I 15 20 25 30 t(min] 1-'·
0
_ 1~ 1 :::l
(I}

~ [pHB) f
7~-vP~P~-~J~~
~---~~--~],
S ~~-~~~~~~~~~~
=• """20 ~~~--~
30"""" 10 15 25 - t[min]

0
""'::L~~n~~----J~
. 5 10 15 20 25 30 t[min]
llMa [11 ]1
10
0 I
_ 10 S 10 15 20
I 25 30 t ( min]
-2q .._I_ _ __ _

[\]f
b) _ 1~ 5 10 15 I 20 2S 30 t[min]

Figure 25.6.6 Adaptive control of pH. Ma : acid flow; Mb: base flow; w neutral water flow M:
a) RLS/DB( v +1), r'=0.5, T 0 =15sec, m=3, d=2, A=0.88, ~(0)=500!.
...w...
b) RLS/MV3, r' '=0.15, T0 =15sec, m=3, d=2, A=0.88, ~(0)=500!.

25.7 Parameter-adaptive Feedforward Control

The same principle as for parameter-adaptive feedback control can be


applied to (certainty equivalent) feedforward control, Figure 25.7.1.

Figure 25.7.1 Block diagram for parameter-adaptive feedforward control

It is assumed that the process can be described by

   y(z) = G_P(z)u(z) + G_v(z)v(z)    (25.7-1)

where v(z) is a measurable disturbance signal and

   G_P(z) = y(z)/u(z) = [b₁z⁻¹ + ... + b_mp z^(-m_p)] / [1 + a₁z⁻¹ + ... + a_mp z^(-m_p)] · z^(-d_p)    (25.7-2)

   G_v(z) = y(z)/v(z) = D(z⁻¹)/C(z⁻¹) · z^(-d_v) = [d₁z⁻¹ + ... + d_mv z^(-m_v)] / [1 + c₁z⁻¹ + ... + c_mv z^(-m_v)] · z^(-d_v).    (25.7-3)
The feedforward controller has the transfer function

(25.7-4)

Eq. (25.7-1) is given the structure

   A(z⁻¹)y(z) = B(z⁻¹)z^(-d_p) u(z) + D(z⁻¹)z^(-d_v) v(z)    (25.7-5)

with

   A(z⁻¹) = 1 + α₁z⁻¹ + ... + α_n z⁻ⁿ
   B(z⁻¹) = β₁z⁻¹ + ... + β_n z⁻ⁿ          (25.7-6)
   D(z⁻¹) = δ₁z⁻¹ + ... + δ_n z⁻ⁿ

A(z⁻¹) is the common denominator of G_P(z) and G_v(z), and B(z⁻¹) and
D(z⁻¹) are the corresponding extended numerators. As all signals of Eq.
(25.7-5) are measurable, the parameters αᵢ, βᵢ and δᵢ can be estimated
by recursive least squares (RLS), see section 23.2, using

   θᵀ = [α₁ ... α_n  β₁ ... β_n  δ₁ ... δ_n]    (25.7-7)

   ψᵀ(k) = [−y(k−1) ... −y(k−n)  u(k−d_p−1) ... u(k−d_p−n)  v(k−d_v−1) ... v(k−d_v−n)]    (25.7-8)

Also in this case an identifiability condition must be satisfied as
u(k) and v(k) are correlated. Here the second way of deriving the identifiability
condition 2 in section 24.1 can be used, see Eq. (24.1-32, 33).
The feedforward control algorithm is

   u(k−d_p−1) = −r₁u(k−d_p−2) − ... − r_μ u(k−d_p−μ−1) + s₀v(k−d_p−1) + ... + s_ν v(k−d_p−ν−1)    (25.7-9)

and the elements in Eq. (25.7-8) then become linearly independent only if

   max[μ; ν + (d_p − d_v)] ≥ n   for d_p − d_v ≥ 0
   max[μ + (d_v − d_p); ν] ≥ n   for d_p − d_v ≤ 0.    (25.7-10)

Based on the model Eq. (25.7-5) and the parameter estimates θ̂, feedforward
control algorithms can be designed using pole-zero cancellation
(section 17.1), minimum variance (section 17.4), the deadbeat
principle [25.29] or by parameter optimization (section 17.2). The resulting
adaptive algorithms are described in [25.29]. They show rapid
adaptation, and an example is shown in Figure 25.7.2. The combination
of RLS with MV4 was proposed in [25.9].
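
As an illustration of the estimation step, the following minimal Python sketch (not taken from the book; the second order example model, the dead times d_p = d_v = 0, the noise level and all numerical values are assumed purely for illustration) applies ordinary recursive least squares to simulated data with the structure of Eq. (25.7-5):

import numpy as np

# Assumed second-order example of Eq. (25.7-5):
# y(k) = -a1*y(k-1) - a2*y(k-2) + b1*u(k-1) + b2*u(k-2) + d1*v(k-1) + d2*v(k-2) + e(k)
rng = np.random.default_rng(0)
a = [-1.2, 0.5]; b = [0.3, 0.2]; d = [0.4, 0.1]       # assumed true parameters
N = 2000
u = rng.standard_normal(N)                            # persistently exciting input
v = rng.standard_normal(N)                            # measurable disturbance
y = np.zeros(N)
for k in range(2, N):
    y[k] = (-a[0]*y[k-1] - a[1]*y[k-2] + b[0]*u[k-1] + b[1]*u[k-2]
            + d[0]*v[k-1] + d[1]*v[k-2] + 0.05*rng.standard_normal())

theta = np.zeros(6)                                   # estimates of [a1 a2 b1 b2 d1 d2]
P = 1000.0*np.eye(6)                                  # large initial covariance
for k in range(2, N):
    psi = np.array([-y[k-1], -y[k-2], u[k-1], u[k-2], v[k-1], v[k-2]])
    gamma = P @ psi / (1.0 + psi @ P @ psi)           # correction vector
    theta = theta + gamma*(y[k] - psi @ theta)        # parameter update
    P = P - np.outer(gamma, psi @ P)                  # covariance update
print(np.round(theta, 3))                             # approaches the true values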

For many industrial processes the control performance can be conside-


rably improved by feedforward control. The feedforward controllers usually
have proportional, proportional-derivative or derivative-lag behaviour.
As their parameters often cannot be tuned accurately by hand, the
parameter-adaptive algorithms above can be used for selftuning or for
continuously adaptive feedforward control.

Figure 25.7.2 Parameter-adaptive feedforward control for a low-pass process with order m = 3 and a disturbance filter with m = 2
a) no feedforward control, v(k): steps
b) parameter-adaptive feedforward control with RLS/DB

25.8 Parameter-adaptive Multivariable Controllers

Simulations and practical results with parameter-adaptive controllers


for single input/single output processes have shown quick convergence,
giving several practical advantages compared with fixed controllers of
PID-type. These advantages become increasingly important as the process
becomes more complex. Therefore it is obvious that the principle of pa-
rameter-adaptive control will be extended to multivariable systems,
taking into account all the interactions of the process. Extensions to
the RLS/MV4 selftuning regulator to multivariable systems have been
made in [25.30], [25.31] and [25.32] using matrix polynomial models. A
review of a variety of appropriate combinations is given in [25.33].
There it is shown that for both parameter estimation and control algo-
rithm design the following linear multivariable process models with p
inputs and r outputs are suitable:

- p-canonical model

   y_i(z) = Σ_{j=1}^{p} B_ij(z⁻¹) z^(-d_ij) u_j(z) + Σ_{j=1}^{r} D_ij(z⁻¹) v_j(z),   i = 1, ..., r    (25.8-1)

- matrix polynomial model (Eq. (20.3-1))

   A(z⁻¹)y(z) = B(z⁻¹)z^(-d) u(z) + D(z⁻¹)v(z)    (25.8-2)

- innovation state model (Eq. (21.4-1,2))

   x(k+1) = A x(k) + B u(k) + G ν(k)
   y(k)   = C x(k) + ν(k)    (25.8-3)

This model can be transformed to the observable canonic form (section 3.2)

   x_t(k+1) = A_t x_t(k) + B_t u(k) + G_t ν(k)
   y(k)     = C_t x_t(k) + ν(k)    (25.8-4)

using the transformation

   x_t(k) = T x(k).    (25.8-5)

A minimal input/output model follows by defining

   y*(k) = [y(k) y(k+1) ... y(k+n−1)]ᵀ    (25.8-6)

and analogously to Eq. (3.6-62) by writing

   y*(k) = C* x(k) + B* u*(k) + ν*(k) + G* ν*(k).    (25.8-7)

The matrix C* contains the transformation matrix T to the observa-


ble canonic model which is equal to the unit-matrix if the process
Eq. (25.8-3) is written in observable canonic form, Eq. (25.8-4).
In this form the states can be directly calculated without using
a state observer. For the detailed derivation of this model in the
deterministic case see [2.19] and in the stochastic case see [25.34],
[25.33].

The parameters of these models can be estimated via

   y_i(k) = ψᵢᵀ(k) θᵢ + e_i(k),   i = 1, ..., r    (25.8-9)

by the RLS or RELS method, applying the basic algorithm

   θ̂ᵢ(k+1) = θ̂ᵢ(k) + γᵢ(k+1) e_i(k+1).    (25.8-10)

To design multivariable parameter-adaptive controllers these parameter


estimation algorithms can be combined with the multivariable control
algorithms as described in chapters 20 and 21 [25.33]:

Matrix polynomial controllers

-matrix polynomial deadbeat controllers (MDB1, MDB2)


-matrix polynomial minimum variance controllers (MMV1, MMV2)

Multivariable state controllers

- the multivariable pole assignment state controller (HSPA)


- the multivariable matrix Riccati state controller (MSR)
- the multivariable decoupling state controller (MSD)
- multivariable minimum variance state controllers (MSMV1,MSMV2,MSMV3)

Simulations of the parameter-adaptive control of a twovariable process


shown in Figure 25.8.1 are presented in Figure 25.8.2 for step changes
in the reference values and in Figure 25.8.3 for stochastic disturban-
ces ni(k) of the outputs. In both cases the parameter-adaptive control-
lers are tuned after about 20 to 30 samples and the expected control
behaviour is achieved.

Figure 25.8.1 Two-variable process used as a test process (coupled second order transfer function elements between the inputs u₁, u₂ and the outputs y₁, y₂)



Figure 25.8.2 Two-variable parameter-adaptive control of the test process of Fig. 25.8.1 for step changes of w₁(k) and w₂(k). T₀ = 4 sec, RLS/MDB, m₁ = 3, m₂ = 5. Restricted input signals −2 ≤ uᵢ(k) ≤ 2 for 0 ≤ k ≤ 20

Figure 25.8.3 Two-variable parameter-adaptive control of the test process of Fig. 25.8.1 for stochastic disturbances. RELS/MMV1, T₀ = 4 sec, with matrices 0.005·I and I, m₁ = 3, m₂ = 5. Restricted input signals −5 ≤ uᵢ(k) ≤ 5 for 0 ≤ k ≤ 20

Multivariable parameter-adaptive controllers have been applied to the


air-conditioning plant [25.35] whose scheme is shown in Figure 25.6.1.
The air flow is M = 400 m³/h, the sample time T₀ = 40 sec, the forgetting
factor λ = 0.95. The control algorithms have been implemented in
a process computer HP21MX-E. The computations after each sample require
less than 1 second. Figure 25.8.4 shows the results with a multivariable
parameter-adaptive deadbeat controller and a multivariable parameter-adaptive
state controller. During the starting phase from k = 0 to 10
the process inputs are restricted to 2 V ≤ u₁ ≤ 4 V and 0 V ≤ u₂ ≤ 2 V. In
both cases the controllers require about 20-25 samples to stabilize
the process and to yield relatively good control performance for changes
in the reference values of air temperature and relative humidity.
The different responses indicate the varying behaviour of the process.
During the experiment the gains change as follows: airheater 1:2, humidifier
1:3, interactions 1:4 and 1:1.5.

In conclusion, it can be stated that both simulation and application


to a real process have demonstrated that multivariable parameter-adap-
tive controllers also show rapid convergence and good control.

Concluding remarks on parameter-adaptive control

The results of this chapter have demonstrated that suitably chosen parameter-adaptive
control algorithms are asymptotically stable and converge
rather quickly if the following conditions are satisfied:

(a) The linear process model and the noise model approximately correspond
to the real process.

(b) The process and noise parameters are constant.

In practice the process parameters are usually time varying. The parameter-adaptive
control algorithms can then track the process if a forgetting
memory (λ < 1) is used in parameter estimation and the process
parameters change slowly compared with the process dynamics. However,
to avoid the process model 'falling asleep', it is required that

(c) a persistently exciting external signal acts on the closed loop.

Condition (a) is satisfied if the process is linearizable within the


control range and if the structural parameters of the process model,
the order m, the dead time d and the sample time T0 are properly cho-
sen, c.f. section 25.4.
Figure 25.8.4 Two-variable parameter-adaptive control of an air-conditioning plant
a) adaptive deadbeat controller RLS/MDB2, m₁ = m₂ = 3
b) adaptive state controller RLS/MSR, m₁ = 3, m₂ = 2

For time varying processes the forgetting factor λ must also be chosen
properly. Simulations and practical experience have shown that, with the
exception of the dead time d, the values chosen for the design parameters
m, T₀, λ and the process input weighting factor r are not critical
within certain ranges. They may be found empirically. However, research
is going on to find them automatically.
For this task a third feedback level is added to the adaptive control
system, Figure 25.8.5. This level coordinates the adaptation by itself
adapting the design parameters. An example of the adaptation of dead
time is given in [25.20]. With this coordination level the number of
design parameters can be reduced to one or two, for example a damping
coefficient, a settling time, a variance ratio, a bandwidth.

In addition the third feedback level may also supervise the functions
of the adaptive loop, particularly if the stability and convergence
conditions are violated by the operating conditions, for example if the
process parameters change too quickly or no external signal excites the
parameter estimation. In the last case, if no external signal is exciting
the parameter estimation with a forgetting memory, it can happen
that the model dies out ('falls asleep') with time until the control
algorithm is changed such that the loop becomes unstable. A process input
is generated which looks like a burst, the parameter estimation is
restarted again (i.e. 'awakes') and the adaptive loop becomes stable,
until the next burst, etc. There are several ways to overcome this,
using functions in the third feedback level. For example the forgetting
factor λ is changed to λ = 1 or the process model is frozen if the control
deviation or the process input is within a certain limit, for example
|e_w(k)| = |w(k) − y(k)| < ε_w. Simulation and experience with real processes
have also shown that the parameter-adaptive control algorithms
can also be applied to processes with large stepwise process parameter
changes [25.16].
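
A schematic Python sketch of such a supervisory function follows (the function name, the thresholds ε_w, ε_u and the interface are assumed for illustration only; they are not taken from the book):

def supervise(lam_nominal, e_w, u_change, eps_w=0.01, eps_u=0.01):
    """Third-level supervision sketch: decide the forgetting factor to use
    and whether the parameter estimation should be frozen in this step."""
    excited = abs(e_w) >= eps_w or abs(u_change) >= eps_u
    if excited:
        return lam_nominal, False      # normal adaptation with forgetting
    else:
        return 1.0, True               # no excitation: stop forgetting / freeze model

Such a function would be called once per sample with the current control deviation and input change before the parameter estimation step.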

The parameter adaptive control algorithms with linear process models


can also be applied to rather nonlinear processes, for which constant
linear controllers either cannot be used or give poor performance. If
the parameter estimation and the controller design is extended to non-
linear process models, particularly those which are linear in the para-
meters, parameter-adaptive control algorithms can also be applied to
strongly nonlinear processes.

The results in this chapter have shown that parameter-adaptive control


algorithms can be applied for

- selftuning control during commissioning
  - at one operating point and then with fixed parameters
  - at different operating points (load) and then with feedforward adapting parameters;
- adaptive control of slowly time-varying processes.

Figure 25.8.5 Parameter-adaptive control with a third feedback level
(level 1: control loop of controller and process; level 2: adaptation by parameter estimation and controller parameter calculation; level 3: coordination)


G Digital Control with Process
Computers and Microcomputers

As well as choosing appropriate control algorithms and their tuning to
the process, several other aspects must be considered in order to obtain
good control with digital computers. Amplitude quantization in the
A/D converter, in the central processor unit and in the D/A converter
is discussed in chapter 26 with regard to the resulting effects and required
word length. Another requirement is suitable filtering of disturbances
which cannot be reduced by the control algorithms. Therefore
the filtering of high and medium frequency signals with analog and digital
filters is considered in chapter 27. The combination of control
algorithms and various actuators is treated in chapter 28. The linearization
of constant speed actuators and the problem of windup are both
considered there. Chapter 29 deals with the computer aided design of
control algorithms. In the last chapter case studies of identification
and digital control are demonstrated for a heat exchanger, a rotary
dryer and a steam generator.
26. The Influence of Amplitude Quantization on Digital Control
In the previous chapters the treatment of digital control systems was
based on sampled, i.e. discrete-time signals only. Any amplitude quan-
tization was assumed to be so fine that the amplitudes could be consi-
dered as quasi continuous. This assumption is justified for large sig-
nal changes in current process computers. However, for small signal
changes and for digital controllers with small word lengths the resul-
ting effects have to be considered and compared with the continuous
case.

26.1 Reasons for Quantization Effects

Quantization of amplitudes arises at several places in process compu-


ters and digital controllers. If the sensors provide digital signals
this implies quantization. A second occurs in the central processor
unit (CPU), and a third is in the digital or analog output device. With
analog sensors and transducers quantization also arises at three pla-
ces. This case is treated below.

26.1.1 Analog Input

In the analog input device typically standard voltages (0 ... 10 V or
currents 0 ... 20 mA or 4 ... 20 mA) are sampled by the analog/digital
converter (ADC) and digitized. The signal value is generally represented
in fixed point form. The quantization unit Δ (= resolution) is then
given by the word length WL (with no sign bit). The decimal numerical
range NR of the word with length WL [bits] is for one polarity

   NR = 2^WL − 1.    (26.1-1)

Hence, the quantization unit becomes

   Δ = 1/NR.    (26.1-2)

The quantization units of an ADC with WL = 7 ... 15 bits are shown in


Table 26.1.1

Table 26.1.1 Quantization units as functions of the word length (no sign bit)

  word length [bits]          7         8         10        12        15
  numerical range NR          127       255       1023      4095      32767
  quantization unit Δ         0.00787   0.00392   0.00098   0.00024   0.00003
  quantization unit Δ [%]     0.787     0.392     0.098     0.024     0.003

Two examples are given as illustration: If the largest numerical value is the voltage
10 V = 10000 mV, for word lengths of 7 ... 15 bits the smallest representable
unit is Δ = 78.7 ... 0.305 mV. If a temperature of 100 °C is considered,
this gives Δ = 0.787 ... 0.003 °C.
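
Both the table entries and the two examples follow directly from Eqs. (26.1-1) and (26.1-2); a short numerical check in Python (word lengths as in Table 26.1.1):

for WL in (7, 8, 10, 12, 15):
    NR = 2**WL - 1                    # numerical range, Eq. (26.1-1)
    delta = 1.0 / NR                  # quantization unit, Eq. (26.1-2)
    print(f"WL={WL:2d}  NR={NR:5d}  delta={delta:.5f} ({100*delta:.3f} %)"
          f"  10 V -> {10000*delta:.3f} mV   100 degC -> {100*delta:.3f} degC")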

Analog/digital converters count the integer multiples L of the quantization
unit Δ which correspond to the analog voltage y

   y_Q = LΔ,   L = 0, 1, 2, ..., NR.    (26.1-3)

The remainder δy < Δ is either rounded up or down to the next integer,
i.e. to L, or simply truncated. Both cases give

   y_Q = y − δy    (26.1-4)

with quantization error δy

- for rounding:    −Δ/2 ≤ δy < Δ/2    (26.1-5)

- for truncation:  0 ≤ δy < Δ.    (26.1-6)

Amplitude quantization therefore introduces a first nonlinearity, see


Figure 26.1.1.

26.1.2 The Central Processor Unit

The ADC discretized signal (y_Q)_AD is transferred to the CPU and is there
represented mainly using a larger word length - the word length WL_N of
the number representation. For a linear control algorithm the following
computations are made:

- calculation of the control deviation

   e_Q(k) = w_Q(k) − y_Q(k)    (26.1-7)
Figure 26.1.1 Simplified block diagram of the nonlinearities in a digital closed loop caused by amplitude quantization (rounding in the A/D converter, in the central processing unit and in the analog output)

- calculation of the manipulated variable

   u_Q(k) = −p_1Q u_Q(k−1) − ... − p_μQ u_Q(k−μ) + q_0Q e_Q(k) + ... + q_νQ e_Q(k−ν).    (26.1-8)
In the CPU new quantization errors are added because of the limited
word length WL_CPU, in the:

- reference variable w_Q(k)
- manipulated variables u_Q(k−i), i = 1, 2, ...
- parameters p_iQ, q_iQ
- products p_iQ u_Q(k−i), q_iQ e_Q(k−i), i = 0, 1, 2, ...
- sum of the products u_Q(k).

For fixed point representation the quantization units shown for the
ADC hold if 8 bits or 16 bits word length CPUs are used. The quantiza-
tion can be decreased by the use of double length working.

In the case of floating point representation for process computers with
16 bit word length, two or more words are often used. The floating
point number

   L = M·2^E    (26.1-9)

for example can be represented using two words of 16 bits each, with
7 bits for the exponent E (point after the lowest digit) and 23 bits
for the mantissa M (point after the largest digit), within a numerical
range of

   −0.8388608·2⁻¹²⁸ ≤ L ≤ 0.8388607·2¹²⁷
   −0.24651902·10⁻³⁹ ≤ L ≤ 0.14272476·10³⁹.

Therefore the smallest representable unit is negligible for digital control.
If fixed point representation with a small word length is used,
quantization errors can arise from the products, which introduce
nonlinearities, Figure 26.1.1.

The quantization of the reference variable and of the controller parameters
causes only deviations from their nominal values and does not introduce
nonlinearities into the loop.

26.1.3 Analog Output

With analog controlled actuators the quantized manipulated variable
u_Q(k) is transferred to a digital/analog converter (DAC) followed by a
holding element. The quantization interval of the DAC depends on its
word length. As shown in Figure 26.1.1, the DAC introduces a further
nonlinear multiple point characteristic.

The above discussions have shown the various places where nonlinearities
arise. As it is already hard to treat theoretically the effect of a
single nonlinearity on the dynamic and static behaviour of a control loop,
the combined effects of all the quantizations are difficult to analyze. The
known publications assume either statistically uniformly distributed
quantization errors or a maximal possible quantization error (worst
case) [26.1] to [26.6], [2.17]. The method of describing functions
[5.14], [2.19] and the direct method of Ljapunov [5.17] can be used to
analyze stability. Simulation is probably the only feasible way, for
example [26.3], to investigate several quantizations and nontrivial
processes and control algorithms.

The following sections consider the effects of quantization using simple
examples. The principal causes can be summarized as follows:

- quantization of variables (rounding of the controlled or manipulated variables in the ADC, DAC or CPU)
- quantization of coefficients (rounding of the controller parameters)
- quantization of intermediate results in the control algorithm (rounding of products, Eq. (26.1-8)).

In digital control systems the effect of these quantizations is of interest
when considering the behaviour of the closed loop, which is assumed
to be asymptotically stable without these nonlinearities. Here
the following effects are to be observed:

a) The control loop remains approximately asymptotically stable, as
the quantization effects are negligible. After an initial change
the control deviation becomes

   lim e(k) ≈ 0.
   k→∞

b) The control loop does not return to the zero steady state position
as offsets occur

   lim e(k) ≠ 0.
   k→∞

c) An additional stochastic signal - the quantization noise or rounding
noise - is generated if the loop is persistently excited.

d) A limit cycle with period M arises

   lim e(k) = lim e(k+M) ≠ 0.
   k→∞

26.2 Various Quantization Effects

26.2.1 Quantization Effects of Variables

One multiple point characteristic with quantization unit Δ for the ADC
is assumed within the loop, as drawn in Figure 26.1.1. The possible
quantization errors δ are then given by Eq. (26.1-5) and Eq. (26.1-6)
for rounding and truncation.

Quantization noise

If a variable changes stochastically such that different quantization
levels are crossed, it can be assumed that the quantization errors δ(k)
are statistically independent. As the δ(k) can attain all values within
their definition interval, Eq. (26.1-5) and Eq. (26.1-6), a uniform
distribution can be assumed, Figure 26.2.1.

Figure 26.2.1 Probability density of the quantization error δ for a) rounding (uniform over −Δ/2 ≤ δ < Δ/2) and b) truncation (uniform over 0 ≤ δ < Δ)

The digitized signal y_Q then consists of the analog signal value y and a
superimposed noise value δ. Eq. (26.1-4) gives

   y_Q(k) = y(k) − δ(k).    (26.2-1)

The expectation of the quantization noise then becomes

- rounding:    E{δ(k)} = ∫ p(δ) δ dδ = 0    (26.2-2)

- truncation:  E{δ(k)} = Δ/2    (26.2-3)

and the variance in both cases is

   σ_δ² = Δ²/12.    (26.2-4)

If this white quantization noise is generated in the ADC it acts as a


white noise n(k) on the controlled variable, and its variance cannot
be decreased by any control. This leads to undesirable changes of the
manipulated variable which can be larger than one quantization unit of
the DAC, as shown by the next example.

Example 26.2.1: Effect of the ADC quantization error on the manipulated


variable

The process is assumed to have low-pass behaviour. Parameter-optimized
controllers then tend to have PD-behaviour, chapter 13. With e(k) = −y(k)
the control algorithm becomes

   u(k) = q₀e(k) + q₁e(k−1).

The superimposed quantization noise becomes

   u_Q(k) = −q₀δ(k) − q₁δ(k−1).

If u_Q(k) is filtered by the low-pass process such that the resulting
output component y_Q(k) ≈ 0, the variance of the moving average signal
process u_Q(k) is

   σ²_uQ ≈ [q₀² + q₁²] σ_δ².

With the controller parameters q₀ = 3 and q₁ = −1.5 the standard deviation
is, using Eq. (26.2-4),

   σ_uQ ≈ 3.35 σ_δ = (3.35/√12) Δ = 0.97 Δ.

The quantization noise in the ADC therefore generates about a 3-times
larger standard deviation in the manipulated variable.
□
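
The result of the example can be verified numerically; the following Python sketch (quantization unit Δ = 1, controller parameters as in the example, random seed assumed) draws independent, uniformly distributed rounding errors and evaluates the superimposed noise on the manipulated variable:

import numpy as np

rng = np.random.default_rng(1)
delta = 1.0
q0, q1 = 3.0, -1.5                                            # controller parameters of the example
d = rng.uniform(-delta/2, delta/2, size=200000)               # ADC rounding errors
u_q = -q0*d[1:] - q1*d[:-1]                                   # superimposed noise on u
print(np.std(d))                                              # ~ delta/sqrt(12) = 0.289
print(np.std(u_q))                                            # ~ 0.97*delta, about 3 times larger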

Because of the nonzero expectation of the quantization error, rounding


must be preferred to truncation.

Offsets and limit cycles


Two examples are used to illustrate the quantization effects if a de-
terministic signal acts on the loop.

Example 26.2.2: Offsets due to quantization in the ADC

A first order process

   y(k+1) = −a₁y(k) + b₁u(k)   with a₁ = −0.5867 and b₁ = 0.4133

and a P-controller are assumed, c.f. Example 16.1. The control deviation
is

   e(k) = w(k) − y_Q(k).

In the ADC the measured analog signal y(k) is rounded to the second
place after the decimal point, resulting in y_Q(k). The response of the
signals without and with rounding to a reference value step w(k) = 1(k),
with the initial conditions y(k) = 0 and u(k) = 0 for k < 0 and the gain
q₀ = 1.3, is shown in Table 26.2.1.

Table 26.2.1 Effect of rounding in the ADC. q₀ = 1.3.

            without rounding        with rounding
   k        u(k)      y(k)          u(k)      y(k)      y_Q(k)
   0        1.3000    0             1.3000    0         0
   1        0.6015    0.5373        0.5980    0.5373    0.54
   2        0.5670    0.5638        0.5720    0.5640    0.56
   3        0.5653    0.5651        0.5720    0.5649    0.56
   4        0.5652    0.5652        0.5720    0.5649    0.56
   5                                0.5652    0.5649    0.56

The quantization unit is Δ = 0.01. The rounded controlled variable
stops at y_Q = 0.56. This results in an offset of Δy = 0.003 which is
negligible. □

Example 26.2.3: Limit cycle due to quantization in the ADC

In the loop of Example 26.2.2 the controller gain is increased to q₀ =
2.0. The same assumptions on rounding are made. Table 26.2.2 shows the
effects. A constant amplitude oscillation with period M = 3 arises,
i.e. a limit cycle with amplitude |Δy| ≈ 0.003, which is very small.
The amplitude of the manipulated variable becomes |Δu| = 0.01 which is
the same size as the quantization unit of the controlled variable.

Table 26.2.2 Effect of rounding in the ADC. q₀ = 2.0.

            without rounding        with rounding
   k        u(k)      y(k)          u(k)      y(k)      y_Q(k)
   0        2.0000    0             2.0000    0         0
   1        0.3468    0.8266        0.3400    0.8266    0.83
   2        0.7434    0.6283        0.7400    0.6254    0.63
   3        0.6482    0.6759        0.6600    0.6727    0.67
   4        0.6711    0.6644        0.6600    0.6675    0.67
   5        0.6656    0.6672        0.6800    0.6644    0.66
   6        0.6669    0.6665        0.6600    0.6708    0.67
   7                  0.6667        0.6600    0.6663    0.67
   8                                0.6600    0.6664    0.67
   9                                0.6800    0.6637    0.66
  10                                0.6600    0.6705    0.67
  11                                0.6600    0.6661    0.67
  12                                0.6800    0.6636    0.66
  13                                0.6600    0.6703    0.67
  14                                0.6600    0.6661    0.67
  15                                0.6800    0.6636    0.66
□
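
Both tables can be reproduced, apart from details of the rounding convention, by a direct simulation of the loop; a minimal Python sketch (the controlled variable is rounded to two decimals in the ADC):

def simulate(q0, steps=16, rounding=True):
    a1, b1, w = -0.5867, 0.4133, 1.0
    y = 0.0
    rows = []
    for k in range(steps):
        y_q = round(y, 2) if rounding else y    # ADC: rounding to two decimals
        u = q0*(w - y_q)                        # P-controller
        rows.append((k, u, y, y_q))
        y = -a1*y + b1*u                        # first order process
    return rows

for k, u, y, y_q in simulate(q0=2.0):
    print(f"{k:2d}  u={u:.4f}  y={y:.4f}  y_Q={y_q:.2f}")

With q0 = 2.0 the printed values follow the 'with rounding' columns of Table 26.2.2 and show the period-3 limit cycle; with q0 = 1.3 the loop settles as in Table 26.2.1.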

These examples have shown that the resulting amplitudes of quantization
noise, offsets or limit cycles in the controlled variable are of the
order of one quantization unit of the ADC or less. Limit cycles arise particularly with
strongly acting control algorithms. They can disappear if the controller
gain is reduced. The simplest investigation of these quantization
effects is obtained by simulation. This is true particularly if quantizations
occur at more than one location.

Example 26.2.4: Simulation of ADC and DAC quantization effects

A third order process, the test process VI (see Appendix), was simulated
together with a P-controller having q₀ = 4. The controlled variable
was quantized (ADC) with quantization unit Δ_y = 0.1 and the manipulated
variable with Δ_u = 0.3 (DAC). In Figure 26.2.2 the response is shown
to the initial condition y(0) = 2.2 and the reference variable w₀ = 3.5.
A limit cycle occurs with amplitudes |Δy| ≈ Δ_y and |Δu| ≈ 3Δ_u.

Figure 26.2.2 Response of the controlled variable y and the manipulated variable u for quantization in the ADC and DAC with quantization units Δ_y = 0.1 and Δ_u = 0.3. Third order process (test process VI). P-controller: q₀ = 4, T₀ = 4 sec

□

The describing function or the direct method of Ljapunov can be used in
stability investigations for the detection of limit cycles. To determine
the describing function of one multiple point characteristic, for
example two three-point characteristics can be connected in parallel
to obtain a five-point characteristic, etc. [5.14, chapter 52]. A limit
cycle results if there is an intersection of the negative inverse locus
−1/G(iω) of the remaining linear loop with the describing function.

To apply Ljapunov's method it is assumed that the linear open loop of
Figure 26.1.1 with the transfer function y(z)/(y_Q(z))_AD can be described
by

   x(k+1) = A x(k) + b y_Q(k)
   y(k)   = cᵀ x(k).    (26.2-5)

As with Eq. (26.2-1) only the solution for the superimposed quantization
error δy as input is considered and the stability of

   x(k+1) = A x(k) + b δy(k)    (26.2-6)

is analyzed. Further details are given in [5.17, chapter 12]. After defining
a Ljapunov function

   V(k) = xᵀ(k) P x(k),   with AᵀP A − P = −I,

maximal possible errors Δy in the output can be obtained depending on
the quantization error Δ.

26.2.2 Quantization Effects of Coefficients

The influence of rounding effects in the controller parameters can usu-


ally be neglected, even in the case of fixed point representation.
This becomes obvious if the quantization errors of these coefficients
are compared with the process model parameter errors which influence
the controller design.

26.2.3 Quantization Effects of Intermediate Results

Offsets and limit cycles


Intermediate results within control algorithms are products of coeffi-
cients and variables, as in Eq. (26.1-8). Both the factors and the
products are rounded in general. The resulting product error can be
estimated as follows: Let the product be qe. Then

   q = QΔ + δ_q ;   e = EΔ + δ_e    (26.2-7)

   qe = QEΔ² + QΔδ_e + EΔδ_q + δ_q δ_e.    (26.2-8)

If the rounding errors δ_q and δ_e are statistically independent and
have variance σ_δ² = Δ²/12, the product error obtained by rounding the
factors is

   σ₁² = Δ²(Q² + E²) σ_δ².    (26.2-9)

To this one must add the rounding error of the product

   δ_QE = QEΔ² − (QE)_Q Δ    (26.2-10)

with variance σ₂² = σ_δ². Hence, for statistically independent δ_q, δ_e and
δ_QE, the variance of the overall error becomes

   σ²_δqe = σ₁² + σ₂² ≈ [1 + Δ²(Q² + E²)] σ_δ² ≈ [1 + q² + e²] σ_δ².    (26.2-11)

This shows that with increasing values of the factors q and e, the fac-
tor rounding mainly determines the overall error.
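
The approximation of Eq. (26.2-11) can be checked by a simple Monte Carlo experiment; in the following Python sketch the factor values q and e and the grid Δ are assumed for illustration only:

import numpy as np

rng = np.random.default_rng(2)
delta = 0.01
q, e = 1.8, 0.7                                     # assumed factor values
n = 500000
dq = rng.uniform(-delta/2, delta/2, n)              # rounding errors of the factors
de = rng.uniform(-delta/2, delta/2, n)
qQ, eQ = q - dq, e - de                             # rounded (stored) factors
prod = np.round(qQ*eQ/delta)*delta                  # product, rounded to the grid
err = q*e - prod                                    # overall product error
sigma_d2 = delta**2/12
print(err.var(), (1 + q**2 + e**2)*sigma_d2)        # both values are of similar size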

The quantization effects differ according to whether the rounding
is performed for each product or for their sum. If each product is
rounded, the resulting error in the manipulated variable for the control
algorithm Eq. (26.1-8) with quantization errors δ_pui and δ_qei of the
products p_i u(k−i) and q_i e(k−i) becomes

   δu(k) = Σ_{i=1}^{μ} δ_pui + Σ_{i=0}^{ν} δ_qei.    (26.2-12)

The estimation of the resulting error depends largely on the assumptions
about the mutual dependence of the quantization errors. For stochastic
signals they can be assumed to be statistically independent. Then
the variance of δu is

   σ²_δu = Σ_{i=1}^{μ} σ²_δpui + Σ_{i=0}^{ν} σ²_δqei.    (26.2-13)

This increases with the number of products. A statistical analysis for
control loops with quantization in the ADC and rounding of products
has been made in [26.3]. The resulting error standard deviations in
the output signal decrease by a factor of 3 per 1 bit increase of the
word length. They also depend on the programming; see also [20.1],
[20.2]. A simple example shows the effect for a deterministic input
signal.

Example 26.2.5: Limit cycle due to quantization of the products in
the control algorithm

The same control loop is assumed as in Example 26.2.2. The factors and
the product in the control algorithm are rounded to the second decimal
place such that the quantization unit is Δ = 0.01. The results are
shown in Table 26.2.3. A limit cycle with period M = 3 arises as with
quantization in the ADC. The amplitude is also about the same: |Δy| ≈
0.0034 and |Δu| = 0.01, although there is only one product.

Table 26.2.3 Effect of rounding the product in the control algorithm. q₀ = 2.0.

            without rounding        with rounding
   k        u(k)      y(k)          u_Q(k)    y(k)
   0        2.0000    0             2.00      0
   1        0.3468    0.8266        0.34      0.8266
   2        0.7434    0.6283        0.74      0.6255
   3        0.6482    0.6759        0.66      0.6728
   4        0.6711    0.6644        0.66      0.6675
   5        0.6656    0.6672        0.68      0.6644
   6        0.6669    0.6665        0.66      0.6708
   7                  0.6667        0.66      0.6664
   8                                0.68      0.6637
   9                                0.66      0.6705
  10                                0.66      0.6661
  11                                0.68      0.6636
  12                                0.66      0.6704
  13                                0.66      0.6661
  14                                0.68      0.6636
  15                                0.66      0.6704

Dead band
If the parameters of feedforward control algorithms or digital filters
lie within certain ranges, offsets in the output variable can arise, by
product rounding, which are multiples of the quantization units of the
products.

Example 26.2.6

Consider a first order feedforward control algorithm

   u(k+1) = −a₁u(k) + b₁v(k)

with a₁ = −0.9 and b₁ = 0.1. As the gain is K = b₁/(1+a₁) = 1, in the
ideal case the final value u(∞) = 1 is attained for v(k) = 1. With rounding
of the products to the second decimal place (i.e. Δ = 0.01) one obtains
for various initial values u(0) the final values of u(k) given in Table 26.2.4.

Depending on the initial values, the following final values are attained:

   u(0) ≤ 0.9640:  lim u(k) = 0.96   (k → ∞)
   u(0) ≥ 1.0450:  lim u(k) = 1.05   (k → ∞).

Table 26.2.4 Effects of rounding the product of a feedforward control algorithm for v(k) = 1 and various initial values u(0).

   k   u(k)     u_Q(k)      k   u(k)     u_Q(k)      k   u(k)     u_Q(k)
   0   0.9000   0.90        0   1.1000   1.10        0   0.9800   0.98
   1   0.9100   0.91        1   1.0900   1.09        1   0.9820   0.98
   2   0.9180   0.92        2   1.0810   1.08        2   0.9820   0.98
   3   0.9280   0.93        3   1.0720   1.07
   4   0.9370   0.94        4   1.0630   1.06
   5   0.9460   0.95        5   1.0540   1.05
   6   0.9550   0.96        6   1.0450   1.05
   7   0.9640   0.96        7   1.0450   1.05
   8   0.9640   0.96

For k ≥ 1 all initial values 0.9639 ≤ u(0) ≤ 1.0449 give a nearby rounded
steady state value within the range 0.97 ≤ u_Q ≤ 1.04. The region
0.96 ≤ u_Q ≤ 1.05 is called a dead band [26.6] which lies around the
steady state value for a constant process input. If, starting with
u(0) = 0.96, the input v(k) = 0 is applied, the signal u(k) approaches
u(k) = 0.05 for k ≥ 24. The dead band always lies around the new steady
state.

If the same difference equation is used as a process, a P-controller
with q₀ = 2.0 is used as feedback and the products are rounded as in
this example with Δ = 0.01, a limit cycle arises with period M = 3 and
|Δu| = 0.005 and |Δy| ≈ 0.0005. Because of the feedback there is no
large dead band extending over several quantization units.
□

In [2.22, chapter 27], it is shown how the dead band can be calculated
for a first order difference equation.
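
The dead band of Example 26.2.6 is easy to reproduce; the following Python sketch keeps the signal on the 0.01 grid and rounds half-up (the exact rounding convention used for the printed table may differ in detail):

def final_value(u0, steps=40):
    # work in integer hundredths so that the rounding grid (0.01) is exact
    u = round(u0*100)
    for _ in range(steps):
        u = (9*u + 5)//10 + 10        # round(0.9*u) + round(0.1*1), half-up rounding
    return u/100

for u0 in (0.90, 1.10, 0.98):
    print(u0, "->", final_value(u0))  # 0.96, 1.05, 0.98: inside the dead band 0.96 ... 1.05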

Based on these examples, the following conclusions can be drawn as to
how undesired quantization effects in digital control loops can be
avoided:

1) The word lengths of the ADC and DAC and the numerical range of the
CPU must be sufficiently large and coordinated.

2) The word lengths or dynamic range at all quantization locations must
be utilized as much as possible, by appropriate scaling of the variables.

3) To avoid excessive quantization errors of factors and products, the
CPU word length for fixed point calculations must be significantly
larger than that of the ADC (for example double word).

4) If limit cycles arise for a given digital controller, the controller
parameters should be modified to give a weaker controller action
(detuned).

5) For feedforward control algorithms and digital filters one must
take care of the dead band effect around the steady state.

The word length of the ADC should be chosen such that its quantization
error is smaller than the static and dynamic errors of the sensors. A
word length of 10 bits (resolution 0.1 %) is usually sufficient. The
word length of the DAC must be coordinated with that of the ADC. For
digital control it can be chosen such that one quantization unit of the
manipulated variable results, after transfer through the process, in
about one quantization unit of the ADC.
27. Filtering of Disturbances

Some control systems and many measurement techniques require the deter-
mination of signals which are contaminated by noise. Suitable filter-
ing methods then have to separate the signal from the noise. It is as-
sumed that a signal s(k) is contaminated additively by n(k) and only
y(k) = s(k)+n(k) is measurable. If the frequency spectra of the signal
and the noise lie in different ranges, they can be separated by suita-
ble bandpass filters, Figure 27.0.1. This is treated in this chapter
for some important cases in the control field. However, if the spectra
of the signal and the noise have overlapping frequency ranges, estima-
tion methods have to be used to determine the signal. In this case it
is not possible to determine the signal without error. The influence
of the noise can only be minimized. The Wiener filter was developed
first in 1940 for continuous-time signals; the method of least squares
estimation was used. However, there are considerable realizability problems.
A considerable extension of filter design is the Kalman filter,
published in 1960. This filter does not use a nonparametric signal model
but a parametric model instead. It was first derived for discrete-time
signals in state space form. With the aid of the method of least
squares a state estimate x̂(k) of the signal model is computed, which
allows the calculation of the signal s(k). This problem was discussed
in section 15.4.

Figure 27.0.1 Separation of a signal s and a noise n by a bandpass filter. y: measured signal; s_F: filtered signal

In section 27.1 the noise sources and the noise spectra which usually
contaminate control systems are considered. Various filters are then
briefly described: analog filters in section 27.2 and digital filters
in section 27.3.

27.1 Noise Sources in Control Systems and Noise Spectra

The graph of the dynamic control factor |R(z)|, Figure 11.4.1, shows
that high frequency disturbances n(k) with frequencies in range III,
ω > ω_II, for which |R(z)| ≈ 1, cannot be influenced by the control system.
These disturbances (noise) cause undesired actuator changes and should
therefore be eliminated by suitable filters. First the sources of this
noise are discussed. High frequency noise in general consists of the
following components:

a) high frequency disturbances of the process;

b) high frequency measurement noise (for example turbulent flow,


vibrations, instrument noise);

c) electrical interference on the transmission of the measured signal.

The components a) mostly cannot be changed. Occasionally components b)


can be reduced. In the case of amplitude modulated d.c. current trans-
mission the noise components c) emerge because of galvanic, inductive
and capacitive coupling to other electric sources and consists of high
and low frequency components. The high frequency noise does not gene-
rally significantly influence the function of analog control devices,
because of their natural low-pass behaviour. However, in the case of
digital signal processing the noise is sampled and transmitted. There-
fore it must be reduced at source and filtered at the digital computer
input. Noise reduction can be performed for example by proper instal-
lation with sufficient spacing between cables, twisting as protection
against inductive coupling, proper earthing of the computer, different
potential of measurement cable and analog input [28.1]. Despite these
techniques there generally is some residual high frequency noise which
must be processed by analog and digital filters. Therefore its fre-
quency spectrum is considered. The continuous measurement signal is
described by

y(t) = s(t) + n(t) (27.1-1)

with s(t) the undisturbed signal and n(t) the noise. y(t) is sampled

with sample time T₀ or at a sampling frequency ω₀ = 2π/T₀. By this sampling
the Fourier transform of a deterministic signal becomes periodic
in ω₀

   y*(iω) = (1/T₀) Σ_ν y(i(ω + νω₀)),   ν = 0, ±1, ±2, ...    (27.1-2)

c.f. section 3.2. The power density spectrum of a stochastic signal is
also periodic:

   S*_yy(ω) = Σ_ν S_yy(ω + νω₀),   ν = 0, ±1, ±2, ...    (27.1-3)

As well as the basic spectrum (ν = 0), side spectra (side bands) at
distance ω₀ appear for ν = ±1, ±2, ... These are shown in Figure 27.1.1
a) for the signal s(t) and in b) for the noise n(t) for ν = +1.

Figure 27.1.1 Power density spectra S(ω) for the signal s(k), the noise n(k) and their low-pass filtering
a) signal   b) noise   c) low-pass filter   d) filtered signals
ω₀: sampling frequency; ω_S = ω₀/2: Shannon frequency

If ω_max is the maximum frequency of the signal which is of interest for
control then, if ω_max > ω₀/2, the basic and side spectra overlap. The
continuous signal cannot then be reconstructed without error using ideal
bandpass filters. To reconstruct a limited frequency spectrum ω ≤
ω_max from the sampled signal, Shannon's sampling theorem states that
ω_max ≤ ω₀/2 = ω_S must be satisfied, c.f. section 3.2. Hence,

   ω_max ≤ ω_S = ω₀/2 = π/T₀.    (27.1-4)

If high frequency noise n(t) with S_nn(ω) ≠ 0 for ω > ω_S is contained
in the measured signal y(t), side spectra S_nn(ω + νω₀) are generated
which are superimposed on the basic spectrum S_nn(ω), forming S*_nn(ω),
see Figure 27.1.1 b). High frequency noise with (angular) frequency
ω_S < ω₁ < ω₀ generates after sampling at ω₀ a low frequency component
with frequency

   ω₂ = ω₀ − ω₁    (27.1-5)

with the same amplitude. To illustrate this so-called aliasing effect,
Figure 27.1.2 shows a sinusoidal oscillation with period T_p = 12T₀/10.5

Figure 27.1.2 The aliasing effect: generation of a low frequency signal n(k) with frequency ω₂ by sampling of a high frequency signal n(t) with frequency ω₁ > ω₀/2 with sampling frequency ω₀ > ω₁

and therefore ω₁ = 2π/T_p = 14π/8T₀, with sampling frequency ω₀ = 2π/T₀ =
16π/8T₀. This results in a low frequency component with the same amplitude
and with frequency ω₂ = ω₀ − ω₁ = 2π/8T₀ [27.2]. Noise components
with ω₁ ≈ ω₀ therefore generate very low frequency noise ω₂. This is
the reason why high frequency noise with significant spectral densities
for ω > ω_S = π/T₀ has to be filtered before the signal is sampled. This is
shown in Figure 27.1.1 c) and d). Analog filters are effective for this
purpose.
purpose.

27.2 Analog Filtering

Using analog techniques, broad band noise with ω > ω_S = π/T₀ can be filtered.
For filtering of noise before sampling, low-pass filters must
be used which have sufficient attenuation at ω = ω_S = ω₀/2 of about
1/10 ... 1/100 or −20 ... −40 dB, depending on the noise amplitudes.
To design the frequency responses of low-pass filters there are the
following possibilities [27.4].

Simple low-pass filters are obtained by connecting first order lags in
series

   G_F(iω) = 1/(1 + iωT)ⁿ,   n = 1, 2, 3, ...    (27.2-1)

with normalized frequency Ω_e = ω/ω_e = ωT. ω_e = 1/T is called the corner
or break-point frequency (Bode plot). In filtering, the normalized
frequency is usually related to the limiting frequency ω_g for which
the amplitudes are decreased to −3 dB, i.e. to 0.708. Then

   Ω_g = ω/ω_g.    (27.2-2)

In this representation the time constant T changes with the order n.

Higher order low-pass filters with n ≥ 2 can be designed differently.
Here compromises must be made with regard to a flat pass-band, a
sharp cut-off and a small overshoot of the resulting step response,
c.f. Figure 27.2.1. Such special low-pass filters are for example
Butterworth, Bessel and Tschebyscheff filters. They have the transfer
function

   G_F(iΩ_g) = 1 / [1 + a₁(iΩ_g) + a₂(iΩ_g)² + ... + a_n(iΩ_g)ⁿ]    (27.2-3)

with corresponding magnitude |G_F(iΩ_g)| (27.2-4) and phase shift φ(Ω_g) (27.2-5), c.f. [5.14, p. 86].



Figure 27.2.1 Frequency response magnitudes of various low-pass filters with order n = 4 [27.4]
1 Simple low-pass filter, Eq. (27.2-1)
2 Butterworth low-pass filter
3 Bessel low-pass filter
4 Tschebyscheff low-pass filter (≈1.5 dB pass-band oscillations)

Butterworth filters are characterized by the amplitudes

   |G_F(iΩ_g)| = 1 / √(1 + Ω_g^(2n)).    (27.2-6)

They have a flat pass-band and a rapid transition to the asymptote
|G_F| = 1/Ω_gⁿ. However, the step response shows an overshoot, e.g. for
n = 4.
Bessel filters have a phase shift proportional to the frequency

   φ(Ω_g) = −cΩ_g.    (27.2-7)

The time delay caused by the phase shift is then Δt = −φ/ω = −φ/(Ω_g ω_g) =
c/ω_g and is therefore independent of the frequency. This results in a
step response with little overshoot. The amplitude does not descend as
quickly to the asymptote 1/Ω_gⁿ as for the Butterworth filter.

The frequency response of Tschebyscheff filters,

   |G_F(iΩ_g)| = 1 / √(1 + ε² S_n²(Ω_g)),    (27.2-8)

contains Tschebyscheff polynomials (with q = Ω_g)

   S₁(q) = q;   S₂(q) = 2q² − 1;   S₃(q) = 4q³ − 3q;   etc.

These filters have particularly rapid transitions to the stop-band
asymptotes, which is paid for by oscillations in the amplitude response
for ω < ω_g and by large overshoots of the step response. ε determines the
pass-band oscillations.

Unlike simple low-pass filters, these special filters have conjugate
complex poles. They can be built with active elements, especially operational
amplifiers together with RC-networks [27.4], [27.5]. Simple
low-pass filters with passive elements are cheap for the high frequency
range of f_g > 5 Hz or ω_g > 31.4 1/sec. If the filter should have
20 dB damping at ω_S = ω₀/2 = π/T₀ and an order n = 2, the limiting frequency
is ω_g ≈ 0.3 ω_S. Therefore passive RC-filters can be used for
sample times T₀ < 0.15/f_g = 0.03 sec. For lower frequency noise 0.1 Hz
< f_g < 5 Hz or 0.6 1/sec < ω_g < 31.4 1/sec active low-pass filters are
appropriate. This corresponds to sample times of 1.5 sec > T₀ > 0.03 sec.
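
The rule of thumb ω_g ≈ 0.3 ω_S for 20 dB damping with n = 2 can be checked numerically, assuming for this check the Butterworth amplitude |G_F| = 1/√(1 + (ω/ω_g)^(2n)); a Python sketch:

import math

def butterworth_attenuation_db(w_ratio, n):
    """Attenuation of an n-th order Butterworth low-pass at w = w_ratio * w_g."""
    return 20*math.log10(math.sqrt(1 + w_ratio**(2*n)))

# about 20 dB damping at the Shannon frequency when w_g ~ 0.3 w_S and n = 2:
print(butterworth_attenuation_db(1/0.3, 2))   # ~ 21 dB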

27.3 Digital Filtering

As analog filters for frequencies of f_g < 0.1 Hz become expensive, such
low frequency noise should be filtered by digital methods. This section
first considers digital low-pass filters. Then digital high-pass filters
and some special digital filtering algorithms are reviewed.

It is assumed that the sampled signal s(k) is contaminated by noise
n(k), so that

   y(k) = s(k) + n(k)

is measurable. If the spectra of s(k) and n(k) lie in different frequency
ranges, the signal s(k) can be separated by a bandpass filter
which generates s_F(k) at its output, c.f. Figure 27.0.1. Linear filters
are described by difference equations of the form

   s_F(k) + a₁s_F(k−1) + ... + a_m s_F(k−m) = b₀y(k) + b₁y(k−1) + ... + b_m y(k−m)    (27.3-1)

or by the z-transfer function

   G_F(z) = s_F(z)/y(z) = B(z⁻¹)/A(z⁻¹).    (27.3-2)
Some simple discrete-time filters are considered now.
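
A direct implementation of the general difference equation (27.3-1) may serve as a reference for the filters that follow; a minimal Python sketch (the example coefficients are assumed and correspond to a first order low-pass with unity gain, cf. Eq. (27.3-6) below):

def digital_filter(y, a, b):
    """Apply the linear filter of Eq. (27.3-1):
    sF(k) + a1*sF(k-1) + ... + am*sF(k-m) = b0*y(k) + ... + bm*y(k-m)."""
    m = len(a)
    s_f = [0.0]*len(y)
    for k in range(len(y)):
        acc = sum(b[i]*y[k-i] for i in range(len(b)) if k-i >= 0)
        acc -= sum(a[i-1]*s_f[k-i] for i in range(1, m+1) if k-i >= 0)
        s_f[k] = acc
    return s_f

# step response of a first order low-pass with a1 = -0.8, b0' = 1 + a1 = 0.2:
print(digital_filter([1.0]*10, a=[-0.8], b=[0.2])[:5])   # tends towards 1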



27.3.1 Low-pass Filters

The z-transfer function of a first order low-pass filter with s-transfer
function

   G_F(s) = s_F(s)/y(s) = 1/(1 + Ts)    (27.3-3)

follows, with no holding element, from the z-transformation table (see
the Appendix) as

   G_F1(z) = s_F(z)/y(z) = b₀/(1 + a₁z⁻¹)    (27.3-4)

and with a zero-order hold, due to Eq. (3.4-10), as

   G_F2(z) = s_F(z)/y(z) = b₁z⁻¹/(1 + a₁z⁻¹).    (27.3-5)

The parameters are

   a₁ = −e^(−T₀/T);   b₀ = 1/T;   b₁ = 1 + a₁

and the gains become

   G_F1(1) = 1/[T(1 + a₁)]   and   G_F2(1) = 1.

As G_F2(z)/G_F1(z) = z⁻¹b₁/b₀, the filter G_F2(z) gives, compared with
G_F1(z), the filtered signal with a dead time d = 1 but with unity
gain. Therefore G_F1(z) is preferred in general. So that G_F1(1) = 1 is
obtained, b₀ must be replaced by b₀' = 1 + a₁:

   G_F1(z) = b₀'/(1 + a₁z⁻¹).    (27.3-6)

The frequency response of the first order low-pass filter, using z =
e^(T₀iω), is

   G_F1(iω) = b₀'/(1 + a₁e^(−iωT₀)) = b₀'[(1 + a₁cos ωT₀) + i a₁sin ωT₀] / [(1 + a₁cos ωT₀)² + (a₁sin ωT₀)²].    (27.3-7)

This gives the amplitudes

   |G_F1(iω)| = b₀' / √[(1 + a₁cos ωT₀)² + (a₁sin ωT₀)²]    (27.3-8)

Figure 27.3.1 Amplitudes of first order low-pass filters (solid line: discrete filter; dashed line: continuous filter)

with |G_F1| = 1 for ωT₀ = 0, 2π, 4π, ... In Figure 27.3.1 the amplitudes of
a discrete-time and a continuous-time filter are shown for T₀/T = 4/7.5.
There is good agreement in the low frequency range. At ωT₀ = 1 the
difference is about 4 %. Unlike the continuous filter, the discrete
filter shows a first minimum of the amplitudes at the Shannon frequency
ωT₀ = π with the magnitude

   |G_F1| = b₀'/(1 − a₁) = (1 + a₁)/(1 − a₁).    (27.3-9)

This is followed by a maximum at ωT₀ = 2π with |G_F1| = 1, a minimum at
ωT₀ = 3π, a maximum at ωT₀ = 4π, etc. The discrete filter cannot therefore
effectively filter signals with frequencies higher than the Shannon
frequency. For frequencies ω > π/T₀ continuous filters must be used.
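
The coefficients and the amplitude behaviour of the unity-gain first order low-pass can be computed directly; a Python sketch using T₀/T = 4/7.5 as in Figure 27.3.1 (the equation references follow the numbering above):

import math

def first_order_lowpass(T, T0):
    """Coefficients of the discrete first order low-pass with unity gain, Eq. (27.3-6)."""
    a1 = -math.exp(-T0/T)
    b0 = 1.0 + a1
    return a1, b0

def magnitude(a1, b0, wT0):
    re = 1.0 + a1*math.cos(wT0)
    im = a1*math.sin(wT0)
    return b0/math.hypot(re, im)       # |G_F1(iw)|, Eq. (27.3-8)

a1, b0 = first_order_lowpass(T=7.5, T0=4.0)          # T0/T = 4/7.5 as in Figure 27.3.1
print(magnitude(a1, b0, wT0=math.pi))                 # minimum at the Shannon frequency
print((1 + a1)/(1 - a1))                              # the same value from Eq. (27.3-9)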

A second order low-pass filter with

   G_F(s) = 1/(1 + Ts)²    (27.3-10)

follows from the z-transform table (without hold) as

   G_F(z) = b₁'z⁻¹/(1 + a₁z⁻¹ + a₂z⁻²)    (27.3-11)

with the coefficients

   a₁ = −2e^(−T₀/T);   a₂ = e^(−2T₀/T)

and b₁' taken from the z-transform table.

For noise filtering in control systems at frequencies f_g < 0.1 Hz digital
low-pass filters should be applied. They can filter the noise in
the range ω_g < ω < ω_S. Noise with ω > ω_S must be reduced with analog
filters. The design of the digital filter, of course, depends much
on the further use of the signals. In the case of noise prefiltering
for digital control, the location of the Shannon frequency
ω_S = π/T₀ within the graph of the dynamic control factor |R(z)|, section
11.4, is crucial, c.f. Figure 27.3.2. If ω_S lies within range III,
Figure 27.3.2 Location of the Shannon frequency ω_S = π/T₀ within the dynamic control factor |R(z)|
a) ω_S in range III (small sample time)   b) ω_S in range II (large sample time)
for which a reduction of the noise components is not possible by feedback,
a discrete low-pass filter of first or second order (or an analog
filter) with limiting frequency ω_g ≈ ω_II can be used. The controller
parameters need not be changed in this case. If ω_S lies close to ω_II
or within range II, an effective low-pass filter becomes a significant
part of the process to be controlled, implying that the controller
must be detuned. The graph of the dynamic control factor then changes,
leading possibly to a loss in control performance in ranges I and
II. Any improvement that can be obtained by the low-pass filter depends
on the noise spectrum and must be analyzed in each case. The case of
Figure 27.3.2 a) arises if the sample time T₀ is relatively small and
the case of Figure 27.3.2 b) if T₀ is relatively large.

27.3.2 High-pass Filters

The z-transfer function of the continuous first order high-pass filter

   G_F(s) = T₂s/(1 + T₁s)    (27.3-12)

with zero-order hold is

   G_F(z) = (b₀ + b₁z⁻¹)/(1 + a₁z⁻¹) = b₀(1 − z⁻¹)/(1 + a₁z⁻¹)    (27.3-13)

with parameters a₁ = −e^(−T₀/T₁), b₀ = T₂/T₁ and b₁ = −b₀. In this case a
hold is required, as z-transfer functions do not exist for differentially
acting elements (a low-pass must follow the sampler). The first order
high-pass filter has a zero at z = 1. The transmission range follows
from the corner frequency

   ω_e = 1/T₁.    (27.3-14)

In the high frequency range |G_F(iω)| = 0 for ωT₀ = νπ, with ν = 2, 4, ...
For low frequencies the behaviour is the same as that of the continuous filter.

A special case of the first order high-pass filter arises with a₁ = 0
and b₀ = 1

   G_F(z) = 1 − z⁻¹    (27.3-15)

with the difference equation

   s_F(k) = y(k) − y(k−1).    (27.3-16)

Only the differences of two successive signal values need be taken.



As well as the above simple low order filters, many other more complex
discrete-time filters can be designed. The reader is referred for exam-
ple to [27.1], [27.2], [2.20].

27.3.3 Special Filters

This subsection considers discrete-time filters for special tasks, such


as for recursive averaging and for filtering of bursts.

Recursive averaging
For some tasks only the current average value of the signals is of in-
terest, i.e. the very low frequency component. An example is the d.c.
value estimation in recursive parameter estimation. The following algo-
rithms can be applied.

a) Averaging with infinite memory

It is assumed that a constant value s is superimposed on the noise n(k)
with E{n(k)} = 0, and the measured signal is given by

y(k) = s + n(k).                                                       (27.3-17)

The least squares method with the loss function

V = Σ_{k=1}^{N} e²(k) = Σ_{k=1}^{N} [y(k) - ŝ]²                        (27.3-18)

yields with dV/dŝ = 0 the well-known estimate

ŝ(N) = (1/N) Σ_{k=1}^{N} y(k).                                         (27.3-19)

The corresponding recursive estimate results from subtraction of ŝ(N-1)

ŝ(k) = ŝ(k-1) + (1/k) [y(k) - ŝ(k-1)].                                 (27.3-20)

This algorithm is suitable for a constant s. With increasing k the
errors e(k), and therefore the new measurements, are weighted less and
less. However, if s(k) is slowly time-variant and the current average
is to be estimated, other algorithms should be used.

b) Averaging with a constant correcting factor

If the correcting factor is frozen by setting k = k1, the new measure-
ments y(k) always give equally weighted contributions

ŝ(k) = ((k1-1)/k1) ŝ(k-1) + (1/k1) y(k)
     = ŝ(k-1) + (1/k1) [y(k) - ŝ(k-1)].                                (27.3-21)

The z-transfer function of this algorithm is

G(z) = ŝ(z)/y(z) = b0 / (1 + a1 z^-1)                                  (27.3-22)

with a1 = -(k1-1)/k1 and b0 = 1/k1. Hence, this algorithm is the same
as the discrete first order low-pass filter, Eq. (27.3-6).

c) Averaging with limited memory


Only the N past measurements are averaged with equal weight

ŝ(k) = (1/N) Σ_{i=k-N+1}^{k} y(i).                                     (27.3-23)

Subtraction of ŝ(k-1) gives recursive averaging with limited memory

ŝ(k) = ŝ(k-1) + (1/N) [y(k) - y(k-N)]                                  (27.3-24)

with the z-transfer function

G(z) = ŝ(z)/y(z) = (1/N) (1 - z^-N) / (1 - z^-1).                      (27.3-25)

d) Averaging with fading memory


The method of weighted least squares is used, c.f. section 23.7, by
exponential weighting of past measurements

V = Σ_{k=1}^{N} λ^(N-k) e²(k),    |λ| < 1.                             (27.3-26)

The older the measurement, the smaller its weight. dV/dŝ = 0 yields for
large N

ŝ(N) = (1-λ) Σ_{k=1}^{N} y(k) λ^(N-k)                                  (27.3-27)

using the approximation

1 + λ + λ² + ... + λ^(N-1) ≈ [1-λ]^-1.

Subtraction of ŝ(N-1) gives a recursive average with fading memory

ŝ(k) = ŝ(k-1) + (1-λ) y(k).                                            (27.3-28)

This algorithm has the z-transfer function

G(z) = ŝ(z)/y(z) = (1-λ) / (1 - z^-1).                                 (27.3-29)

Note that the pole of Eq. (27.3-22) is close to one, and the poles of
Eqs. (27.3-25) and (27.3-28) are z1 = 1.
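The four recursive averaging algorithms can be summarized in a few lines of Python (an illustrative sketch, not part of the original text); each update follows Eqs. (27.3-20), (27.3-21), (27.3-24) and (27.3-28) respectively.

def average_infinite_memory(s_prev, y_k, k):
    # Eq. (27.3-20): s(k) = s(k-1) + (1/k)*[y(k) - s(k-1)]
    return s_prev + (y_k - s_prev) / k

def average_constant_factor(s_prev, y_k, k1):
    # Eq. (27.3-21): s(k) = s(k-1) + (1/k1)*[y(k) - s(k-1)]
    return s_prev + (y_k - s_prev) / k1

def average_limited_memory(s_prev, y_k, y_k_minus_N, N):
    # Eq. (27.3-24): s(k) = s(k-1) + (1/N)*[y(k) - y(k-N)]
    return s_prev + (y_k - y_k_minus_N) / N

def average_fading_memory(s_prev, y_k, lam):
    # Eq. (27.3-28): s(k) = s(k-1) + (1 - lambda)*y(k)
    return s_prev + (1.0 - lam) * y_k

# Example: running the constant-factor average over a measurement sequence
measurements = [1.0, 1.2, 0.9, 1.1, 1.0]
s = 0.0
for y in measurements:
    s = average_constant_factor(s, y, k1=20)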

As these averaging algorithms track low frequency components of s(k)


and eliminate high frequency components n(k), they can be considered
as special low-pass filters. Their frequency responses are shown in
Figure 27.3.3.

Figure 27.3.3 Magnitudes of the frequency response of various recur-
              sive algorithms for averaging of slowly time varying
              signals
              1: frozen correcting factor, k1 = 20
              2: limited memory, N = 20
              3: fading memory, λ = 0.95

The recursive algorithm with a constant correcting factor has the same
frequency response as a discrete low-pass filter with T0/T = ln(1/(-a1)).
Noise with ωT0 > π cannot be filtered and therefore increases the va-
riance of the average estimate. The frequency response of the recursive
algorithm with limited memory becomes zero at ωT0 = νπ/N with ν = 2,4,6,...
Noise with these frequencies is eliminated completely, as with the in-
tegrating A/D converter. The amplitudes have a maximum at ωT0 = νπ/N
with ν = 1,3,5,... Therefore noise with ωT0 > 2π/N cannot be effective-
ly filtered. The magnitude of the frequency response of the algorithm
with a fading memory is |G(iω)| ≈ (1-λ)/(T0 ω) for low frequencies. It
behaves like a continuous integral acting element with integration time
T = T0/(1-λ). Because of the pole at z1 = 1 it satisfies |G(iω)| → ∞
for ωT0 = νπ, with ν = 2,4,6,... Near the Shannon frequency ωT0 = π
the magnitude behaves as that of the discrete low-pass algorithm. There-
fore averaging with a fading memory can only be recommended if no noise
appears for ωT0 > π, or if it is used in conjunction with analog filters.

Filtering of outliers

Sometimes measured values appear which are totally wrong and lie far
away from the normal values. These outliers can arise because of dis-
turbances of the sensor or of the transmission line. As they do not
correspond to a real control deviation they should be ignored by the
controller. A few methods are considered to filter these types of dis-
turbance. It is assumed that the normal signal y(k) consists of the
signal s(k) and the noise n(k), and that outliers in y(k) must be eli-
minated. The following methods can be used:

a) - estimation of the mean value ȳ = E{y(k)}
   - estimation of the variance σy² = E{[y(k) - ȳ]²}

b) - estimation of the signal ŝ(k)
     (signal parameter estimation as in section 23.2.2,
     then Kalman filtering as in section 15.4)
   - estimation of the variance σs² = E{[s(k) - s̄]²}

c) - estimation of a parametric signal model as in section 23.2.2
   - prediction ŷ(k|k-1)
   - estimation of the variance σy².

Here only the simplest method is briefly described.

Estimation of the mean value can be performed by recursive averaging

ȳ(k+1) = ȳ(k) + 1/(k+1) [y(k+1) - ȳ(k)].                               (27.3-30)

For slowly time varying signals an averaging with a constant correc-
tion factor is recommended

ȳ(k+1) = ȳ(k) + K [y(k+1) - ȳ(k)]                                      (27.3-31)

with K = 1/(1+k1), Eq. (27.3-21). The variance, with

Δy(k+1) = y(k+1) - ȳ(k+1),

in the recursive estimator becomes

σ̂y²(k+1) = σ̂y²(k) + K(k) [(y(k+1) - ȳ(k+1))² - σ̂y²(k)]                 (27.3-32)

with K(k) = 1/k or, better, K = const. To detect outliers knowledge of
the probability density p(y) is required. Then it can be assumed that
measured signals with

|Δy(k+1)| = |y(k+1) - ȳ(k+1)| > K σ̂y(k+1)                              (27.3-33)

are outliers, with for example K ≈ 3 for a normal distribution. The
value y(k+1) is then replaced by y(k+1) = ȳ(k+1) and used for control.
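A compact sketch of this simplest outlier-filtering method (Python; illustrative only, not part of the original text). The recursive mean and variance updates follow Eqs. (27.3-31) and (27.3-32) with a constant correcting factor; the numerical values of K and the threshold factor are assumptions chosen for the example.

def filter_outlier(y_new, y_bar, var, K=0.05, K_sigma=3.0):
    # Recursive mean, Eq. (27.3-31)
    y_bar_new = y_bar + K * (y_new - y_bar)
    # Recursive variance, Eq. (27.3-32)
    var_new = var + K * ((y_new - y_bar_new) ** 2 - var)
    # Outlier test, Eq. (27.3-33): replace the measurement by the estimate
    if abs(y_new - y_bar_new) > K_sigma * var_new ** 0.5:
        y_used = y_bar_new
    else:
        y_used = y_new
    return y_used, y_bar_new, var_new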
28. Combining Control Algorithms and
Actuators

This section deals with the connection of control algorithms with va-
rious types of actuator. Therefore the way to control the actuators
and the dynamic response of the actuators are considered initially.

Actuator control

At the digital computer output the required manipulated variable or its
change is briefly present as a digitized value. Analog controlled ac-
tuators (for example pneumatic, hydraulic or electrical actuators)
require a D/A-converter (DAC) with intermediate storage and a holding
element which maintains the value of the manipulated variable over one
sampling interval. The desired analog actuator position UR or its
change uR is then transmitted as a d.c. voltage 0 ... 10 V or as an
impressed d.c. current 0 ... 20 mA to the actuator, where it is trans-
formed and amplified to a proper pneumatic, hydraulic or electrical
signal. Depending on whether there is one DAC or several, the configu-
rations of Figure 28.1 a) and b) have to be distinguished. For directly
controlled digital actuators, e.g. an electrical stepping motor, no DAC
is required, but only digital addressing and data latching by selector
switches and intermediate storages, Figure 28.1 c).

If the position U(k) (0 ... 100 %) is transmitted, the DAC requires only
one sign but a relatively large word length (8 ... 12 bits). If the change
u(k) is transmitted the DAC must have both signs, but a smaller word
length (6 ... 8 bits) is sufficient.

Response of actuators

Table 28.1 summarizes some properties of frequently used actuators.


Because of the great variety available, only a selection of types can
be considered here.

With respect to the dynamic response the following grouping can be made:

Figure 28.1 Various configurations to control the actuators
            a) analog controlled actuator: one DAC and several analog
               holding elements
            b) analog controlled actuator: several DACs
            c) digitally controlled actuators

Group I:   Proportional actuators
           - proportional behaviour with lags of first or higher order
           - pneumatic actuators; hydraulic actuators with mechanical
             feedback

Group II:  Integral actuators with varying speed
           - linear integral behaviour
           - hydraulic actuators without feedback; electrical actuators
             with speed controlled d.c. motors

Group III: Integral actuators with constant speed
           - nonlinear integral behaviour
           - electrical actuators with a.c. motors and three-way
             switches

Group IV:  Actuators with quantization
           - integral or proportional behaviour
           - electrical stepping motors

Actuator feedforward and feedback control

Various actuator control schemes are used to adjust the actuator posi-
tion change uA(k) to the manipulated variable uR(k) required by the
control algorithm, Figure 28.2:

a) Position feedforward control


The output uR(k) of the DAC directly controls the actuator.

b) Analog position feedback control


uR(k) acts as reference value on an analog position controller
(positioner).

c) Digital position feedback control


The position uA(k) of the actuator is fed back to the CPU and the
position deviation
ue(k) = u(k) - uA(k)

is formed. Mostly a P-controller is sufficient.

d) Position feedback to the control algorithm


The position change uA(k) of the actuator is fed back to the CPU. The
control algorithm (for the process) calculates the present manipulated
variable u(k) by using the past effective position changes uA(k-1),
uA(k-2), ..., cf. Eq. (28-1) below.
Table 28.1 Properties of frequently used actuators

type of          construction      input signal        D/A-conver-     analog            behaviour        group   power        rising time
actuator                           energy              sion by         transmitter                                [mkp]        [sec]
------------------------------------------------------------------------------------------------------------------------------------------
pneumatical      membrane with     air pressure        D/A-converter   electr./pneum.    proportional     I       0.01...200   1...10
                 spring            0.2...1.0 kp/cm2                    transmitter       with time lag
hydraulical      piston without    oil pressure        D/A-converter   electr./hydraul.  integral         II      10...75000   1...10
                 mech. feedback                                        transmitter
hydraulical      piston with       oil pressure        D/A-converter   electr./hydraul.  proportional     I       10...75000   1...10
                 mech. feedback                                        transmitter       with time lag
d.c. shunt       -                 d.c. voltage        D/A-converter   amplifier         integral,        II      1...400      0.01...60
electro motor                                                                            variable speed
a.c. two phase   -                 a.c. voltage        control unit    3-point relais    integral,        III     1...400      1...60
electro motor                                                                            constant speed
step motor       -                 voltage pulses      actuator con-   -                 step-wise        IV      -            0.02...60
                                                       trol device                       proportional
u(k) = -p1 uA(k-1) - p2 uA(k-2) - ... + q0 e(k) + q1 e(k-1) + ...      (28-1)

Scheme a) is the simplest, but gives no feedback from the actuator res-
ponse. Schemes b), c) and d) require position feedback. b) and c) have
the known advantages of a positioner which acts such that the required
position is really attained. c) requires in general a smaller sample
time in comparison to that of the process, which is an additional bur-
den on the CPU. Scheme d) avoids the use of a special position control
algorithm. The calculation of u(k) is based on the real positions of
the actuator. This is an advantage with integral acting control algo-
rithms if the lower or the upper position constraint is reached. Then
no wind-up of the manipulated variable occurs.

Now some details on the control of the actuator are considered.

Proportional actuators
For the proportional actuators of group I the change of the manipulated
variable calculated by the control algorithm can be used directly to
control the actuator, as in Figure 28.2 a). In the case of actuator
position feedback control the schemes Figure 28.2 b) and d) are appli-
cable. Figure 28.3 indicates the symbols used.

Integral actuators with varying speed


Scheme Figure 28.2 a) can be applied for integral actuators of group
II if the integral behaviour is included in the process model used in
the design. Schemes b) and c) give proportional behaviour of the posi-
tion loop, so that u(k) or u'(k) can be applied as with proportional
actuators. Direct feedforward control as in a) can also be achieved by
programming the control algorithm to give Δu(k) = u(k) - u(k-1). In the
case of a PID-algorithm this becomes, for example, the velocity form

Δu(k) = q0 e(k) + q1 e(k-1) + q2 e(k-2).

The integral actuator with transfer function G(s) = 1/(Ts) results in
the z-transfer function with zero order hold

GA(z) = T0 z^-1 / [T (1 - z^-1)].

Figure 28.2 Various possibilities for actuator control
            (shown for an analog controlled actuator)
            a) feedforward position control
            b) analog feedback position control
            c) digital feedback position control
            d) position feedback into the control algorithm
Figure 28.3 Characteristic of a proportional acting actuator
            UR: controller DAC output; uR: change of controller output;
            UA: actuator position;    uA: change of actuator position

The control algorithm and actuator then result in the PID-transfer
function with unity dead time

GR(z) GA(z) = (q0 + q1 z^-1 + q2 z^-2) T0 z^-1 / [T (1 - z^-1)].       (28-2)

The actuator then becomes part of the controller. Its integration time
T must be taken into account when determining the controller parameters.
(Note for the mathematical treatment that no sampler follows the actuator.)
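A small Python sketch (illustrative only, not part of the original text) of this arrangement: the control algorithm outputs the increments, and the integrating actuator performs the summation. Variable names are chosen freely.

def pid_velocity_increment(e0, e1, e2, q0, q1, q2):
    # Velocity form of the PID algorithm: du(k) = q0*e(k) + q1*e(k-1) + q2*e(k-2)
    return q0 * e0 + q1 * e1 + q2 * e2

def integral_actuator_step(U_prev, u_prev, T0, T):
    # Discrete model of the integrating actuator with zero-order hold:
    # U(k) = U(k-1) + (T0/T)*u(k-1), cf. GA(z) = T0 z^-1 / [T (1 - z^-1)]
    return U_prev + (T0 / T) * u_prev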

Integral actuators with constant speed


Actuators with integral behaviour and constant speed (group III) must
be switched by a three-way switch to give right-hand rotation, left-
hand rotation or stop. The first possibility of feedforward position
control consists in connecting u(k) directly to the three-way switch.
The actuator then moves to the right-hand or the left-hand side during
the sample interval if |u(k)| > uRt, where uRt is the dead band, or
stops if |u(k)| < uRt. The actuator speed must then be relatively small
to avoid too large a change. This may result in a poor control perfor-
mance. To attain the same position changes u(k) in a shorter time,

the actuator speed must be increased and the switch durations TA < T0
must be calculated and stored. To save computing time in the CPU this
is often performed in special actuator control devices [1.11]. This
actuator control device can also be interpreted as a special D/A-con-
verter outputting rectangular pulses with quantized duration TA. Figure
28.4 shows a simplified block diagram of the transfer behaviour of the
actuator-control device and the actuator.

Figure 28.4 Simplified block diagram of integral acting actuators
            with constant speed (actuator control device, three-way
            switch, integrator and position constraint)

Integral actuators with constant speed are described by a three-way
switch and a linear integrator, Table 28.1. If TS is the settling time
of the actuator, i.e. the running time for the whole position range

ΔUAmax = UAmax - UAmin,                                                (28-3)

it follows for the position speed, with uR = +uR0, uR = -uR0 or uR = 0
depending on the three-way switch position, c.f. Figure 28.4, that

dUA(t)/dt = (ΔUAmax / TS) uR(t) / |uR0|.                               (28-4)

Hence the position change per switch duration TA is

ΔUA(TA) = UA(TA) - UA(0) = ∫_0^TA (dUA(t)/dt) dt
        = ΔUAmax (TA/TS) (uR0/|uR0|).                                  (28-5)

The three-way switch introduces a nonlinear behaviour. If the dead band
-uRt ≤ uR ≤ uRt of the switch is large enough, no limit cycle appears
from this nonlinearity and a stable steady state can be attained, c.f.
[5.14, chapter 52]. To generate the position changes u(k) calculated
by the control algorithm, the actuator control device has to produce
pulses with amplitudes uR0, 0, -uR0 and the switch duration TA(k), i.e.
pulse modulated ternary signals, see Figure 28.4. This introduces a
further nonlinearity. The smallest realizable switch duration TA0 de-
termines the quantization unit ΔA of the actuator position

ΔA = ΔUAmax TA0 / TS.                                                  (28-6)

It is recommended to choose this as the quantization unit of a corres-
ponding DAC for position changes, ΔA = ΔDA, i.e. about 6 ... 8 bit. The
smallest switch duration must be large enough such that the motor does
actuate. The required switch duration TA(k) follows for the required
position change from one sample point to the next,

Δu(k) = u(k) - u(k-1) = j·ΔA,    j = 1, 2, 3, ...,

from Eq. (28-5) as

TA(k) = (|uR0| / uR0) (TS / ΔUAmax) Δu(k),                             (28-7)

which is for example transmitted as a pulse number j to the actuator
control device. The largest position change per sample time T0 is,
from Eq. (28-5),

ΔU'Amax = ΔUAmax T0 / TS.

Therefore position changes

ΔA ≤ |Δu(k)| ≤ ΔU'Amax                                                 (28-8)

with quantization unit ΔA can be realized within one sample time. They
result in the ramps shown in Figure 28.4.
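The calculation of the switch duration and the pulse number transmitted to the actuator control device can be sketched as follows (Python, illustrative only, not from the original text); the function name, arguments and the numerical values in the usage example are assumptions.

def actuator_pulse_command(du, T_S, dU_Amax, T_A0):
    # du: required position change u(k) - u(k-1), same units as dU_Amax
    # T_S: settling time for the whole range dU_Amax; T_A0: smallest switch duration
    dA = dU_Amax * T_A0 / T_S                       # quantization unit, Eq. (28-6)
    j = round(abs(du) / dA)                         # pulse number, du = j*dA
    T_A = j * T_A0                                  # resulting switch duration, cf. Eq. (28-7)
    direction = 1 if du > 0 else (-1 if du < 0 else 0)   # three-way switch position
    return j, T_A, direction

# Example: full range 100 %, settling time 60 s, smallest pulse 0.5 s
j, T_A, direction = actuator_pulse_command(du=2.0, T_S=60.0, dU_Amax=100.0, T_A0=0.5)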

As these actuators with constant speed introduce nonlinearities into


the loop, the next section briefly discusses when the behaviour can be
linearized.

a) Method of 'small time constants'

The rampwise step responses of the actuator with three-way switch and
control device can be described approximately by first-order time lags
with the amplitude dependent time constant

Tsm ≈ TA(k)/2 = TS Δu(k) / (2 ΔUAmax).                                 (28-9)
If these time constants are negligible compared with the process time
constants, a proportional action element without lag can be assumed and
therefore a linearized actuator. Process model simplification by the
neglect of small time constants was investigated in [3.4], [3.5].
For the case of continuous-time PID-controllers, small time constants
Tsm can be neglected for processes with equal time constants T of order
n = 2, 4, 6 or 8, assuming an error of at most about 20 % of the qua-
dratic performance index Eq. (4-1) for r = 0, if

Tsm/TΣ ≤ 0.015, 0.045, 0.083 or 0.13                                   (28-10)

where TΣ = nT is the sum of time constants, c.f. section 3.7.3. Eqs.
(28-9) and (28-10) give position changes for which the actuator can be
linearized

(28-11)

b) Method of 'amplitude density'

Another possibility is to estimate negligibly small actuator action
times from the ratio of the amplitude densities of a ramp and a step
function

κ = sin(ωTA/2) / (ωTA/2)                                               (28-12)

with TA the switch duration of the ramp function, c.f. [3.11]. If dif-
ferences of 5 ... 20 %, i.e. κ = 0.95 ... 0.80, are allowed for the maximum
frequency ωmax of interest, it follows that

ωmax TA ≤ 1.1 ... 2.26.                                                (28-13)

In general ωmax = ωS = π/T0, c.f. chapter 27. Hence

TA/T0 ≤ 0.35 ... 0.72                                                  (28-14)

or, with Eq. (28-7),

|Δu(k)| / ΔUAmax ≤ (0.35 ... 0.72) T0/TS.                              (28-15)

This leads to a rule of thumb:

Actuators with constant speed can be linearized if the maximum switch
duration TA is about half of the sample time T0.

Note for the application of this rule that the sample time has to be
chosen suitably such that ωmax = ωS.
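A small check of this rule of thumb (Python, illustrative only, not from the original text): for a required position change, the switch duration from Eq. (28-7) is compared with half the sample time. The function name and arguments are assumptions.

def linearizable(du, T_S, dU_Amax, T0):
    # Switch duration needed for the required change, magnitude of Eq. (28-7)
    T_A = T_S * abs(du) / dU_Amax
    # Rule of thumb: the constant-speed actuator behaves approximately linearly
    # if T_A does not exceed about half the sample time T0
    return T_A <= 0.5 * T0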

An example in [2.22], Table 29.2, shows the application of these two


methods to an injection valve of a steam superheater and the steam flow
valve of a heat exchanger. The linearizable position ranges vary between
12 and 50 %.

These considerations have been in reference to feedforward controlled


actuators. However, analog or digital feedback control or a position
feedback to the control algorithm can also be applied to integral ac-
tuators with constant speed.

Proportional actuators with quantization (stepping motors)

Actuators with quantization (group IV) such as electrical stepping mo-
tors are well suited for process computers or digital controllers.
They have a behaviour which is proportional to the pulse count and
need no DAC. An amplifier transforms the low energy pulses of the di-
gital computer output to higher energy pulses which excite the stator
windings stepwise. The rotation angle per step varies from 1 degree
to 240 degrees. The smaller the step angle, the more coils are required
and the smaller the torque. Both single steps and stepping frequencies
up to the kHz-range are attainable. For low frequencies the stepping
motor can be stopped within one step. However, this is not possible at
the higher frequencies for which the motor has to be regarded as a syn-
chronous machine, because of the mechanical inertia. If exact posi-
tioning is required, as for example in the case of feedforward actua-
tor control, the moment of inertia of the driven valve or other load
and the step frequency must be kept small. By using digital feedback,
positioning can be accelerated [2.18].

Avoidance of wind-up

It is well-known that controllers with integral action continue to in-


tegrate the control deviations if one signal in the loop reaches satu-
ration (for example the mechanical restriction of an actuator) . If the
sign of the control deviation changes it needs a relatively long time
to restore the integrator during which the loop remains at saturation.

To avoid this 'wind-up' the following actions can be taken. If the ac-
tuator reaches a constraint at uAmax or uAmin, these true positions
must be used in the control algorithm instead of the calculated u(k-1),
u(k-2), ... This can be triggered by end position switches, by position
feedback or, in the case of a unique relationship between the computer
outputs and the actuator, by position counting software. Another
possibility is the feedback of the real actuator position into the con-
trol algorithm, as described by Eq. (28-1).
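A sketch of this anti-windup measure (Python, illustrative only, not from the original text): the positions stored for the control algorithm of Eq. (28-1) are clipped to the true actuator constraints, so that the algorithm always works with the positions actually attained. Function and variable names are assumptions.

def constrain_position(u_calc, U_min, U_max):
    # Replace the calculated position by the true (saturated) actuator position
    return min(max(u_calc, U_min), U_max)

def control_with_position_feedback(e_hist, uA_hist, q, p):
    # One step of Eq. (28-1) using past *effective* actuator positions:
    # e_hist = [e(k), e(k-1), ...], uA_hist = [uA(k-1), uA(k-2), ...]
    # q = [q0, q1, ...], p = [p1, p2, ...]
    u = 0.0
    for i, qi in enumerate(q):
        u += qi * e_hist[i]        # + q0 e(k) + q1 e(k-1) + ...
    for i, pi in enumerate(p):
        u -= pi * uA_hist[i]       # - p1 uA(k-1) - p2 uA(k-2) - ...
    return u

# At saturation, store the constrained position instead of the calculated one:
# uA_hist.insert(0, constrain_position(u, U_min, U_max))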
29. Computer Aided Control Algorithm
Design

Conventionally analog controllers and digital control algorithms of


PID-type are designed and tuned by trial and error supported by rules
of thumb and sometimes by simulation studies. For processes with large
settling time or for multivariable processes with strong interactions
this procedure is generally quite time consuming and does not result
in the best possible control. Better control in a shorter time can be
achieved by the computer aided design of digital control systems. Based
on the design methods for feedback and feedforward control algorithms
treated in this book, programs can be developed which provide for in-
teractive computer aided design. A necessary pre-condition is, however,
the knowledge of suitable mathematical process models and possibly of
signal models. Process models may be obtained either by theoretical
modeling or by process identification, as in section 3.7.4. Theoreti-
cal modeling must be used if the process is not available, for example
before its construction. However, there are some natural limitations
on accuracy of theoretical modeling. There are, for example, the limi-
ted accuracy of available process data and parameters, the simplifying
assumptions made during the model derivation, or imprecisely known ac-
tuator, valve or sensor models. Particularly in the field of industri-
al processes (chemical, energy and basic industries) some physical or
chemical laws are either unknown or are difficult to formulate with a
reasonable number of equations. Therefore process models can often be
obtained much more rapidly and with greater accuracy by measuring the
dynamics of an existing process, i.e. by applying identification me-
thods. This can be performed off-line or, if a computer is already
connected to the process, on-line. As parametric process models are
very suitable for the design of parametric control algorithms the iden-
tification methods described in chapter 23 may be used. There are pro-
gram packages for process identification which contain perturbation
signal generation, process signal filtering, parameter estimation me-
thods, model order search and model verification [3.13], [29.1], [29.2],
[29.3], [23.16], [29.4].

Based on the process models, the computer aided control algorithm de-
sign may be organized as follows, if a computer in on-line operation
is used:

1. Assumption of a control scheme
   - single loop, cascaded loops, multiple loops
   - feedforward paths
2. Transfer of process and signal models to the controller design program
3. Design of various control algorithms
4. Simulation of the control system behaviour
5. Modification of the control algorithms and the final selection
6. Control algorithm implementation in the computer
7. Setting of operation conditions
   - restrictions on the manipulated variables
   - reference variables
8. Closed loop operation and supervision of the resulting control
   performance

For off-line computer aided design, items 1. to 5. are appropriate.
This design approach has the following advantages:

o Automation of the design and the start-up of digital control.

o Simulation of the control system with various control schemes and
  control algorithms without disturbing the process. Modification is
  simple.

o Saving of implementation and start-up time, especially for processes
  with large settling time, complicated behaviour or strong interactions.

o Improvement of the control performance by better tuned simple algo-
  rithms or more sophisticated control algorithms.

o Determination of the dependence of controller parameters on the
  operating point. Therefore feedforward control algorithms may be
  quickly designed.

A pre-condition for the practical success of the described design pro-


cedure is that the process behaviour either does not change or changes
only by a small amount between process identification and the final
control. For slowly time-varying processes, self-tuning or self-adap-
tive control algorithms can be used, c.f. chapter 25.

As an example, Figure 29.1 shows the structure of the program package
CADCA-SISO (Computer Aided Design of Control Algorithms for Single In-
put/Single Output processes), [29.5], [29.6], [29.7]. The development
of this package has indicated that a hierarchical, modular and trans-
parent program structure with an easily understood user dialogue (user
interface) has many advantages. Figure 29.1 presents four hierarchical
levels and three design phases. Figure 29.2 shows the resulting control
behaviour for an analog simulated third order process including an
identification run with the program package OLID-SISO [29.4]. The re-
sulting control behaviour agrees precisely with the theoretically ex-
pected behaviour. The same program package was used to obtain the re-
sults in section 11.4.
Figure 29.1 Organization of the program package CADCA. The package is
            structured in four hierarchical levels (task levels, tasks,
            algorithms, basic functions) and three phases (controller
            design phase, controller implementation phase, direct digi-
            tal control phase). Tasks: controller design, simulation of
            process model and control algorithms, controller implemen-
            tation, direct digital control, monitoring of closed loop
            operation. Algorithms: parameter-optimized, deadbeat and
            state control algorithms; transfer of signal and control
            algorithms; acquisition of characteristic values, con-
            straints, setpoint/process input adjustment; monitoring of
            control performance, alarms. Basic functions: parameter
            optimization, mathematical routines, closed loop simulation,
            data acquisition and storage, actuator positioning, data
            filtering, process output commands, displaying, utility
            functions.
Figure 29.2 Closed loop behaviour for an analog simulated process and
            CADCA designed control algorithms (SC: state controller
            with observer; 3PC-2: PID-controller with given q0; DB(ν):
            deadbeat controller; DB(ν+1): deadbeat controller with in-
            creased order). Process: GP(s) = 1.2/[(1+4.2s)(1+1s)(1+0.9s)],
            T0 = 2 sec.
            a) process input and output during on-line identification
            b) process input and output for 4 control algorithms with
               a step change of the reference value
30. Case Studies of Identification and Digital
Control

In this last chapter the application of various methods of identifica-


tion and digital control to industrial processes is discussed. The pre-
ceding chapters have shown that there are two main ways of combining
methods of process identification with methods of control design:

a) Identification and computer aided control algorithm design (→ ID-CAD)

b) Self-optimizing adaptive (self-tuning) control algorithms (→ SOAC)

In the first case the process model is identified once only and a con-
stant (fixed) control algorithm is designed (on-line or off-line), c.f.
chapter 29. For the second case the process model is identified sequen-
tially and the control algorithm is designed after each identification
(on-line), c.f. chapter 25. Sections 30.1 and 30.2 demonstrate the ap-
plication of ID-CAD to a heat exchanger and a rotary dryer, and sec-
tion 30.3 shows the application of ID-CAD and SOAC to a simulated
steam-generator.

30.1 Digital Control of a Heat Exchanger

Figure 30.1.1 is the schematic of a steam heated heat exchanger which


consists of 14 tubes with inner diameter di = 25 mm and length l = 2.5
m. The process input is the position U of a pneumatically driven steam
valve, and the process output Y is the water temperature as measured
with thermocouples. For on-line identification with a process computer
HP2100A a PRBS was generated and after an identification time of about
10 minutes, Figure 30.1.2, the following model was estimated using
RCOR-LS (recursive correlation and least squares parameter estimation):

y(z)/u(z) = [(-0.0274 z^-1 - 0.0692 z^-2 - 0.0218 z^-3) /
             (1 - 1.2329 z^-1 + 0.0478 z^-2 - 0.01276 z^-3)] z^-1

The sample time was chosen to be T0 = 3 sec. With a settling time of
T95 ≈ 60 sec this gives T95/T0 = 20. The model order search program re-
sulted in m = 3 and d = 1. For more details see [29.1], [29.2].

In Figure 30.1.3 the closed loop response to steps in the reference


value is shown for various control algorithms designed with CADCA-SISO.
Because of the nonlinear behaviour of the steam valve and the heat ex-
changer, the closed loop response depends on the direction of the step
change. However, satisfactory agreement (on average) between the CADCA
simulated and the real response is obtained. The various control algo-
rithms show typical responses as discussed in chapter 11.

Figure 30.1.1 Steam heated heat exchanger. Input: position U of the
              pneumatic steam valve. Output: temperature Y of the water.
              Water mass flow ≈ 3100 kg/h, steam mass flow ≈ 30 kg/h,
              P01 ≈ 2.5 bar.

Figure 30.1.2 Process input and output signal during on-line identifi-
              cation. PRBS: period N = 31, clock time λ = 1, sample
              time T0 = 3 sec.
Figure 30.1.3 a) Closed loop response with CADCA designed control al-
              gorithms based on an identified process model: parameter-
              optimized controller 3PC-3 (PID) and state controller
              with observer SC. Reference variable steps in both direc-
              tions. Measured response: solid line; simulated response
              (during the design phase): dotted line.
Controller parameters (Figure 30.1.3 a):

         r      k1        k2        k3        k4       k5
SC       0.04   -3.9622   -3.4217   -2.7874   0.4472   1.3372

         r      q0        q1        q2        p0       p1
3PC-3    0.01   -1.7125   2.3578    -0.7781   1.0000   -1.0000
Figure 30.1.3 b) Closed loop response with CADCA designed control al-
              gorithms based on an identified process model: deadbeat
              controller DB(ν) and deadbeat controller of increased
              order DB(ν+1). Reference variable steps in both direc-
              tions. Measured response: solid line; simulated response
              (during the design phase): dotted line.

Controller parameters (Figure 30.1.3 b):

          q0       q1       q2       q3       q4      q5      p0      p1      p2       p3       p4       p5
DB(ν)     -8.4448  10.4119  -4.0384  1.0775   0.0000  -       1.0000  0.0000  -0.2317  -0.5841  -0.1842  -
DB(ν+1)   -3.8612  0.1770   3.8048   -1.6993  0.5848  0.0000  1.0000  0.0000  -0.1059  -0.3928  -0.4013  -0.100

30.2 Digital Control of a Rotary Dryer

In sugar production sugar beet cosettes or pulp is a by-product which


is used as cattle fodder. This pulp has to be dried thermally in a ro-
tary dryer. Properly dried pulp should contain about 10 % moisture or
90 % dry substance. Below 86 % dangerous internal heating occurs du-
ring storage and the nutrients decompose. With the dry substance ex-
ceeding 90 % the overdried pulp becomes too brittle and the payback
falls because of higher fuel consumption and loss in weight. The goal
is therefore to keep the dry substance within a tolerance range of
about ± 1 %.

Figure 30.2.1 shows the schematic of a rotary dryer. The oven is hea-
ted by oil. Flue gases from a steam boiler are mixed with the combus-
tion gases to cool parts of the oven. An exhaust fan sucks the gases
through the dryer. The wet pulp (pressed pulp with about 75-85 % mois-
ture content) is fed in by a screw conveyor with variable speed. The
drum is fitted inside with cross-shaped shelves so as to distribute
the pulp within the drum. Because of the rotation of the drum (about
1.5 rpm) the pulp drops from one shelf to another. At the end of the
drum another screw conveyor transports the dried pulp to an elevator.
The heat transmission is performed mainly by convection. Three sections
of drying can be distinguished. In the first section the evaporation
of water takes place at the surface of the pulp, in the second section
the evaporation zone changes to the inner parts of the cosettes and in
the third section the vapour pressure within the cosettes becomes less
than the saturated vapour pressure because of the pulp's hygroscopic
properties.

The control of the drying process is difficult because of its nonmini-


mum phase behaviour with dead times of several minutes, large settling
time (about 1 hour), large variations of the water content of the wet
pulp and unmeasurable changes of the drying properties of the pulp.
Because of these reasons the rotary dryers are mostly controlled manu-
ally, partly supported by analog control of some temperatures. How-
ever, the control performance achieved is unsatisfactory with toleran-
ces of more than ± 2.5 %, c.f. Figure 30.2.7 a). Figure 30.2.2 shows
a block diagram of the plant. The main controlled variable is the dry
substance of the dried pulp. The gas temperatures at the oven outlet,
in the middle of the drum and in the dryer exhaust can be used as auxi-

liary controlled variables. The main manipulated variable is the fuel


flow. The speed of the wet pulp screw conveyor can be used as an auxi-
liary manipulated variable. The water content of the pressed pulp is
the main disturbance variable.

The goal was to improve the control performance using a computer. Be-
cause of the complicated dynamical behaviour and the large settling
time, computer aided design of the control system was preferred. The
required mathematical models of the plant cannot be obtained by theo-
retical model building as the knowledge of the physical laws descri-
bing the heat and mass transfer and the pulp motion is insufficient.
Therefore a better way is process identification. Because of the strong
disturbances step response measurements give hardly any information on
the process dynamics. Hence parameter estimation using PRBS as input
was applied [30.1], [30.2]. Special digital data processing equipment
based on a microcomputer was used, generating the PRBS, sampling and
digitizing the data for storage on a paper tape. The evaluation of the
data was performed off-line using the parameter estimation method RCOR-
LS of the program package OLID-SISO. The initial identification experi-
ments have shown that the following values are suitable:

sample time        T0 = 3 min
clock interval     λ = 4
amplitude fuel     ΔMF = 0.25 t/h
amplitude speed    Δn = 1 rpm

The required identification times varied between 112 T0 and 335 T0,
which is 5.6 to 16.8 hours. Figure 30.2.3 shows one example of an
identification experiment. The step responses of the identified models
are presented in Figure 30.2.4. The settling time is shortest for the
oven outlet temperature and increases considerably for the gas tempera-
tures in the middle and at the end of the dryer. The dry substance has,
with fuel flow as input, a dead time of 6 min, an allpass behaviour
with undershoot which lasts for about 30 min, and a 95 % settling time
of about 2.5 h. This behaviour is one of the main reasons for the con-
trol problem. With the screw conveyor speed as input the dry substance
shows a dead time of 18 min. The estimated model orders and the dead
times are given in Figure 30.2.4.

Based on the identified process models various control systems were


designed using the program package CADCA [30.1], [30.2]. The manipula-
ted variable is the fuel flow and the main controlled variable the dry
substance. If only the dry substance is fed back control is poor;
Figure 30.2.1 Schematic of the rotary dryer (Sueddeutsche Zucker AG,
              Werk Plattling).
              Measured signals: mass flow of the fuel oil MF, revolu-
              tions of the screw conveyor n, water content of the
              pressed pulp wPS, gas temperature at the oven outlet ϑ0,
              gas temperature in the middle of the drum ϑM, gas tempe-
              rature at the exhaust ϑA, dry substance percentage ψTS.
              Drum diameter DD = 4.6 m, drum length 21.0 m, wet pulp
              mass flow MPS,max ≈ 80 t/h, flue gas mass flow
              MKG,max ≈ 8000 Nm³/h, oil mass flow MF,max ≈ 4.0 t/h,
              temperatures ϑ0 ≈ 1050 °C, ϑM ≈ 140-210 °C, ϑA ≈ 110-155 °C.

Figure 30.2.2 Block diagram of the rotary dryer

Figure 30.2.3 Data records of an identification experiment with fuel
              flow changes

Figure 30.2.4 Step responses of the identified models. n = 13 rpm,
              T0 = 3 min.
              a) change of the fuel flow ΔMF = 1 t/h
              b) change of the screw conveyor speed Δn = 1 rpm

feedback of the gas temperatures ϑM and ϑA improves it considerably.


Figure 30.2.5 a) shows the simulated responses to step changes of the
screw conveyor's speed for a double cascade control system with 3 PID-
control algorithms and a state controller with observer. The better con-
trol (better damped and with fewer oscillations) is obtained with state
control. Control can be improved considerably using a second order
feedforward control algorithm GF 1 measuring the speed n and manipula-
ting the fuel flow, Figure 30.2.5 b). Because of practical reasons the
cascaded control system was finally implemented on a SIEMENS 310 K pro-
cess computer (easy transfer to other dryers, transparency for the ope-
rator, computer manufacturer's program package SIMAT C). A block dia-
gram of the implemented control system is shown in Figure 30.2.6 which
also shows the positional control algorithms for the actuators, a feed-
forward control GF 2 for the case where a reliable water content mea-
surement of the wet pulp is possible, a feedforward controller GF5 to
change the speed of the screw conveyor such that the total water mass
flow is kept constant and a feedforward controller GF 7 of differential
type to reduce the nonminimum phase behaviour of the dry substance by
changing the boiler flue gases such that the gas flow through the dry-
er is initially kept constant after a fuel flow change. Figure 30.2.7a)
shows signal records for manual analog control (original status) and
Figure 30.2.7 b) for digital cascaded control with feedforward control
GF1. Although the pulp mass flow MPS is fairly constant the dry sub-
stance oscillates within a tolerance of about ± 2.5 % for manual/ana-
log control. With digital control the tolerance is reduced to about
± 1 % for larger pulp mass flow disturbances than in Figure 30.2.7 a),
or ± 0.5 % for periods with fairly constant pulp mass flow. This shows
a significant improvement in performance using the digital control.

A report of the practical experience with the digital control of three


rotary dryers with one process computer has shown that the fuel saving
because of better control was about 2.5 %, which is about 329 tons of
oil annually [30.3]. This rotary dryer is a typical example of a pro-
cess with complicated internal behaviour and large settling time for
which manual tuning of controller parameters did not result in satis-
factory control. The process identification and computer aided design
of various control systems led to a good insight into the process' be-
haviour and allowed the simulation and comparison of various control
systems. As the rotary dryer generally operates at full load, fixed
control algorithms are suitable and an adaptive algorithm is not re-
quired.
Figure 30.2.5 Simulated control behaviour of the rotary dryer for step
              changes of the screw conveyor speed of Δn = 1 rpm, mea-
              suring the dry substance and the flue gas temperatures
              ϑM and ϑA. T0 = 3 min.
              a) without feedforward control
              b) with feedforward control GF1

Figure 30.2.6 Block diagram of the cascaded control system implemented
              on a process computer

Figure 30.2.7 Signal records of the rotary dryer. Signals are defined
              in Figure 30.2.1; MM: molasses mass flow.
              a) manual control
              b) digital control with cascaded control system and
                 feedforward controller GF1

30.3 Digital Control of a Steam Generator

To study various methods of identification and control, models of the


plants for the steam pressure and steam temperature control of an oil
fired natural circulating drum steam generator are used. The block dia-
gram of this part of the steam generator has been shown in Fig. 18.1.1.
The transfer functions of the blocks were obtained by theoretically
modeling the superheater and the evaporator of an existing steam gene-
rator [18.5] and [18.6] and are given in the Appendix. They compare fa-
vorably with the measured behaviour. The superheater must be treated
as a distributed parameter process. After linearization for small sig-
nals the transcendental transfer functions can be approximated by ra-
tional transfer functions with a small dead time. Both simplifications
induce a negligibly small error. The two input/two output process was
simulated on an analog computer which was connected to a process com-
puter HP21MX. For the examples shown no noise is added to the models
in order to simplify comparisons. As oil fired steam generators gene-
rally have small process noise, the influence of noise on the main re-
sults of this study is relatively small.

As the two-variable process possesses a strong interaction G5 from the
firing to the superheater, the control algorithms must be designed
using multivariable methods. Three ways of identification and digital
control are studied [30.4]:

(1) Two input/two output identification and c.a.d. of parameter-opti-
    mized controllers (→ ID-CAD-TITO)

(2) Alternating single input/single output adaptive control (→ SOAC-SISO)

(3) Two input/two output adaptive control (→ SOAC-TITO)

In all cases program packages based on FORTRAN were used, and these
involved between 6-16 K words of core memory and 25-60 K words of disk
memory.

30.3.1 Two Input/Two Output Identification and C.A.D. of Parameter-optimized Controllers
       (Program packages OLID-MIMO, CADCA-MIMO, CAFCA-SISO)

a) Two main feedback controllers

The two variable system is simultaneously excited by a PRBS and a pseu-


do random quarternary signal (both uncorrelated), Figure 30.3.1 a). Ap-
plying identification method COR-LS leads to a TITO z-transfer func-
tion model after about 130 min. Based on this model two parameter-opti-
mized main controllers for the steam temperature (PID) and the steam
pressure (PI) are designed by numerical parameter optimization (design
time 5 to 10 min). Figures 30.3.1 b) and c) show the responses for a
step change of the reference variables w1 (k) and w2 (k). Because of the
negligibly small interaction between the injection water (u 1 ) and the
steam pressure (y 2 ), the steam temperature control has only a very
small effect on the steam pressure control, Figure 30.3.1 b). However,
strong interaction between the fuel flow (u 2 ) and the steam temperature
(y 1 ) causes a dominant influence of the steam pressure control on the
steam temperature control, Figure 30.3.1 c). Figure 30.3.1 d) repre-
sents the responses to a steam flow disturbance v(k). For a steam flow
decrease the steam temperature results first in positive and then, be-
cause of the fuel flow decrease, in negative changes. This 'undershoot'
of the steam temperature in the 'wrong direction' has a major influence
on the steam temperature control. Its suppression is one key for impro-
ving steam temperature control [18.5].

b) Feedforward controllers

The addition of a proportional feedforward controller u 2 (k) = cv 2 v(k)


(steam flow to fuel flow) results in a considerable improvement in
pressure control and steam temperature control, Figure 30.3.1 e). A
comparison of Figure 30.3.1 d) and Figure 30.3.1 e) shows that for
steam pressure control the maximum offset is improved from 3.3 bar to
0.9 bar and the settling time from about 100 min to 25 min and for
steam temperature control the maximum offset is improved from +2.7 K
to -2.4 K and the settling time from 50 min to 10 min. This is an exam-
ple of the advantage of a well adjusted feedforward controller. The
total time for the on-line design of the two main controllers with 130
min for identification, 30 min for control algorithm design and selec-
tion, and 70 min for two set point response tests, is about 230 min.

30.3.2 Alternating Single Input/Single Output Selftuning Control


(Program Package ADREG)

In this section it will be shown how SISO parameter-adaptive control


algorithms may be used for selftuning of two main controllers of a two
input/two output control system. The adaptive control of the superhea-
ter is started by 30 min open loop identification using a PRBS as exci-
tation, Figure 30.3.2 a). This is done to obtain a rough starting pro-
cess model and to avoid an undefined initial adaptation phase after
switching on the adaptive control algorithm RLS-DB2. Two setpoint step
responses show good control. This controller then is kept constant (no
further adaptation) . Then the steam evaporator is identified in open
loop for 15 min and adaptively controlled with RLS-MV3, Figure 30.3.2b).
Figure 30.3.2 c) shows the responses to a setpoint step w2 (k) with both
controllers fixed and Figure 30.3.2 d) the responses to a disturbance
step v(k). The resulting control performance is similar to that obtain-
ed with the parameter-optimized controllers, as a comparison with Fig.
30.3.1 c) and d) shows. The required total time period is about 130min.
A simultaneous application of both adaptive controllers does not give
convergence as the 'processes' of each single controller vary too much
with time.

30.3.3 Two Input/Two Output Adaptive Control (Program Package MACS)

Figure 30.3.3 shows the results obtained with a multivariable adaptive


controller developed in [25.33]. The controller consists of a combina-
tion of the recursive least squares method (multivariable model) with
a quadratic optimal state controller (RLS-IS1/SC). In Figure 30.3.3 a)
two different PRBS signals are first applied to both process inputs to
obtain a starting model by open loop identification for the adaptive
controller which is switched on after 35 min, leading immediately to
a steady state without offset. The resulting control for two simulta-
neous setpoint steps is very good, Figure 30.3.3 d). The required time
period is about 90 min. An alternative is just to change both setpoints
by a step at t = 0 and immediately to start the adaptive algorithm with
constrained manipulated variables within -10 % ≤ u ≤ +10 %. Figure
30.3.3 b) shows that the control system is stabilized after only 20
min. The setpoint steps at t = 32 min show very good control. The time
taken is about 70 min. Figure 30.3.3 c) shows the responses of the two
input/two output parameter-adaptive control system to a
step change of the steam pressure setpoint. A comparison with the re-

sults for the parameter-optimized controllers, Figure 30.3.1 c), indi-


cates a significant improvement especially in steam temperature con-
trol. The maximum steam temperature offset is reduced from 4.2 K to
1.3 K, the settling time from 50 min to 25 min and the pressure shows
no overshoot. This example indicates that the complete state feedback
controller results in considerably better control than that of parame-
ter-optimized main controllers. The time taken for multivariable para-
meter-adaptive control is about 80 min, including two separate setpoint
responses. Hence, this multivariable adaptive control gives the best
control in the shortest adaptation time.

Figure 30.3.1 Two input/two output identification and c.a.d. of con-
              trol algorithms:
              a) Identification. Input u1: PRBS, N = 127, λ = 1.
                 Input u2: PRQS, derived from the PRBS of u1.
                 Method: COR-LS (multivariable).
Figure 30.3.1 (continued)
              b) Responses to setpoint steps w1(k) of the steam tempe-
                 rature. Two main feedback controllers. Steam tempera-
                 ture controller: PID, r = 0. Steam pressure control-
                 ler: PI, r = 0.
              c) Responses to setpoint steps w2(k) of the steam pres-
                 sure. Two main feedback controllers (controllers as
                 in b)).
              d) Responses to a disturbance step v(k) (steam flow).
                 Two main feedback controllers as in b).
              e) Responses to a disturbance step v(k) (steam flow).
                 Two main feedback controllers as in b) and one propor-
                 tional feedforward controller from steam flow v to
                 fuel flow u2.
Figure 30.3.2 Alternating selftuning control of a two input/two output
              process:
              a) Short open loop identification (PRBS) and adaptive
                 control with RLS-DB2 (r' = 1) of the steam temperature.
                 No steam pressure control.
              b) Short open loop identification (arbitrary ternary sig-
                 nal) and adaptive control with RLS-MV3 (r = 0.0096)
                 for the steam pressure. DB2 controller of the steam
                 temperature fixed.
              c) Responses to a setpoint step w2(k). Both controllers
                 fixed.
              d) Responses to a disturbance step v(k) (steam flow).
                 Both controllers fixed.
Figure 30.3.3 Two input/two output adaptive control with RLS-IS1/SC,
              a combination of the recursive least squares method with
              a state controller:
              a) Open loop identification and subsequent adaptive
                 control.
              b) Adaptive control with setpoint steps w1(k) and w2(k)
                 at t = 0 and t = 32 min.
              c) Adaptive control for a setpoint step w2(k) at t = 0
                 after the adaptation phase.
              d) Adaptive control for setpoint steps w1(k) and w2(k)
                 at t = 0 after the adaptation phase.
526 30. Case Studies of Identification and Digital Control

The methods discussed, for the superheater alone (SISO) as well as for the two-variable process, and their implementation time requirements are summarized in Table 30.3.1.

Table 30.3.1 Implementation time estimates required for different methods of identification and feedback control

Procedure                                  Identification  Controller    Adaptation   Setpoint tests   Total
                                           time [min]      design [min]  time [min]   w1 / w2 [min]    time [min]

SISO (superheater)
1. Identification and c.a.d.
   of control algorithms                   60              15            -            20 / -           95
2. Adaptive control                        15              -             25           20 / -           60

TITO (superheater and evaporator)
1. Identification and c.a.d.
   of control algorithms                   130             30            -            20 / 50          230
2. Alternating SISO selftuning control     15              -             25           20 / -           }
                                           15              -             35           -  / 40          } 150
3. TITO adaptive control                   -               -             30           15 / 35          80

The numbers given may also be valid if small or medium-sized process disturbances act on the control variables. By using parameter-adaptive control algorithms far less time is required.

An evaluation of the results of this study, and also of the other case studies of this chapter, is given in section 30.4. Specific results for steam generator control are:

If the steam temperature control is considered as a single-variable control, properly tuned PID and state controllers result in approximately the same control performance. Taking the strong interactions into account, state feedback control gives significantly better results than two main feedback PID and PI controllers. A feedforward controller from steam flow to fuel flow considerably improves the control.

30.4 Conclusions

The case studies discussed in this chapter allow the following results to be summarized for the different methods of identification and control.

Identification and computer aided design of control algorithms (ID-CAD)

Advantages:
- any control algorithm can be designed and evaluated
- different control schemes can be designed and compared by simulation
- off-line or on-line identification methods can be used
- general open loop model obtained

Disadvantages:
- process should be time invariant during identification and design
time
- requires more time than adaptive algorithms

Therefore this method should be used for the basic design of the control scheme and for the design of fixed control or feedforward adaptive control if the process is time invariant.

Parameter-adaptive control algorithms (SOAC)

Advantages:
- requires less time than identification and c.a.d.
- can be used for slowly time varying processes

Disadvantages:
- control algorithms may have to satisfy closed loop identifiability conditions
- the computational effort for identification and controller design must be kept small (real-time operation between samples)
- special closed loop model obtained

Therefore parameter-adaptive control algorithms should be used if the process is slowly time varying and the available design time is small. These algorithms may be used for the tuning of fixed control algorithms or for self-adaptive control.
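To make the last point concrete, the following sketch outlines the basic structure of such a parameter-adaptive controller operating on the certainty equivalence principle: recursive least squares estimation runs at every sample, and a deadbeat controller is redesigned from the current parameter estimates. This is an illustrative Python/NumPy sketch, not one of the program packages used in the case studies; the "unknown" plant (test process VII of the Appendix), the noise level, the forgetting factor and the length of the initial open loop excitation phase are assumptions chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Unknown" plant: test process VII (T0 = 4 sec),
#   y(k) + a1 y(k-1) + a2 y(k-2) = b1 u(k-1) + b2 u(k-2) + n(k)
a1, a2 = -1.036, 0.2636
b1, b2 = 0.1387, 0.0889

theta = np.zeros(4)            # RLS estimate of [a1, a2, b1, b2]
P = 1e3 * np.eye(4)            # covariance of the estimate
lam = 0.98                     # forgetting factor (slowly time varying processes)

y1 = y2 = u1 = u2 = e1 = e2 = 0.0
w = 1.0                        # setpoint

for k in range(100):
    # process simulation (not known to the controller)
    y0 = -a1 * y1 - a2 * y2 + b1 * u1 + b2 * u2 + 0.01 * rng.standard_normal()

    # recursive least squares with forgetting factor
    psi = np.array([-y1, -y2, u1, u2])          # data (regressor) vector
    eps = y0 - psi @ theta                      # equation error
    gamma = P @ psi / (lam + psi @ P @ psi)     # correction vector
    theta = theta + gamma * eps
    P = (P - np.outer(gamma, psi @ P)) / lam

    e0 = w - y0                                 # control deviation
    if k < 20:
        # short open loop identification phase with a random binary test signal
        u0 = 1.0 if rng.random() < 0.5 else -1.0
    else:
        # certainty-equivalence deadbeat design from the current estimates:
        # u(k) = q0[e(k) + a1 e(k-1) + a2 e(k-2)] + q0[b1 u(k-1) + b2 u(k-2)],
        # with q0 = 1/(b1 + b2)
        a1h, a2h, b1h, b2h = theta
        q0 = 1.0 / (b1h + b2h)
        u0 = q0 * (e0 + a1h * e1 + a2h * e2) + q0 * (b1h * u1 + b2h * u2)

    y2, y1 = y1, y0                             # shift registers
    u2, u1 = u1, u0
    e2, e1 = e1, e0

print("estimated parameters:", np.round(theta, 4))
```

The structure mirrors the procedure of the case studies: a short open loop identification phase provides first estimates, after which estimation and controller design continue in closed loop at every sample.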
Appendix

Test Processes for Simulation

Various 'test processes' have been used in this book to simulate the typical dynamical behaviour of processes in order to test control systems with various control algorithms, identification and parameter estimation methods and adaptive control algorithms. These test processes are models of processes with various pole-zero configurations and dead times, and were chosen with regard to several viewpoints. The discrete-time transfer functions G(z) were determined by z-transformation from the continuous-time transfer function G(s) with a zero-order hold, cf. Eq. (3.4-10), if not otherwise indicated.

Process I: second order, oscillating behaviour

    G_I(z) = \frac{b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}

    a_1 = -1.5;  a_2 = 0.75;  T_0 = 2 sec.

[Step response plot y(k), k = 0 ... 20: oscillatory transient, final value about 7.5.]

There is no corresponding transfer function G(s) in this case. This test process was proposed in [23.15].
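The step responses shown for these test processes can be reproduced by iterating the difference equation implied by G(z). The following sketch (Python, not part of the original text) does this for a general numerator and denominator; since only a_1, a_2 and T_0 of process I are given above, the b-coefficients used in the example call are purely illustrative placeholders.

```python
def simulate_step(a, b, n_steps):
    """Unit step response of G(z) = (b1 z^-1 + ... + bm z^-m)/(1 + a1 z^-1 + ... + am z^-m).

    a = [a1, ..., am], b = [b1, ..., bm]; the difference equation
    y(k) = -a1 y(k-1) - ... - am y(k-m) + b1 u(k-1) + ... + bm u(k-m)
    is iterated for u(k) = 1, k >= 0.
    """
    m = len(a)
    y_hist = [0.0] * m                     # y(k-1), ..., y(k-m)
    u_hist = [0.0] * m                     # u(k-1), ..., u(k-m)
    y = []
    for _ in range(n_steps):
        yk = sum(-ai * yi for ai, yi in zip(a, y_hist)) \
           + sum(bi * ui for bi, ui in zip(b, u_hist))
        y.append(yk)
        y_hist = [yk] + y_hist[:-1]
        u_hist = [1.0] + u_hist[:-1]       # unit step applied from k = 0 onwards
    return y

# Process I with the a-parameters above; the b-values here are assumed only for illustration.
print([round(v, 2) for v in simulate_step(a=[-1.5, 0.75], b=[1.0, 0.5], n_steps=20)])
```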

Process II: second order, nonminimum phase behaviour

    G_II(z) = \frac{b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}

    T_1 = 4 sec;  T_2 = 10 sec.

[Step response plot y(k) for T_0 = 2 sec, k = 0 ... 20: nonminimum phase transient, final value 1.0.]

Parameters for T_0 = 1; 4; 8; 16 sec, see Table 5.4.1.


Process III: third order with dead time, low-pass behaviour

    G_III(s) = \frac{K (1 + T_4 s)}{(1 + T_1 s)(1 + T_2 s)(1 + T_3 s)} e^{-T_t s}

    K = 1;  T_1 = 10 sec;  T_2 = 7 sec;  T_3 = 3 sec;  T_4 = 2 sec;  T_t = 4 sec.

    G_III(z) = \frac{b_1 z^{-1} + b_2 z^{-2} + b_3 z^{-3}}{1 + a_1 z^{-1} + a_2 z^{-2} + a_3 z^{-3}} z^{-d}

[Step response plot y(k) for T_0 = 4 sec, k = 0 ... 20: low-pass transient with dead time, final value 1.0.]

Parameters for T_0 = 1; 4; 8; 16 sec, see Table 5.4.2.

Process I, II and III: cf. [23.9], [3.13].
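The discrete-time parameters referred to in Tables 5.4.1 and 5.4.2 follow from the z-transformation with a zero-order hold mentioned at the beginning of this appendix. The same conversion can be reproduced with standard tools; the sketch below uses scipy.signal.cont2discrete (an assumed tool choice, not the program used for the book) for process III without its dead time, which for T_0 = 4 sec simply contributes an additional factor z^-1 (d = T_t/T_0 = 1).

```python
import numpy as np
from scipy.signal import cont2discrete

# Process III without the dead time: K (1 + T4 s) / ((1 + T1 s)(1 + T2 s)(1 + T3 s))
K, T1, T2, T3, T4 = 1.0, 10.0, 7.0, 3.0, 2.0
T0 = 4.0                                   # sample time

num = K * np.array([T4, 1.0])
den = np.polymul(np.polymul([T1, 1.0], [T2, 1.0]), [T3, 1.0])

num_d, den_d, _ = cont2discrete((num, den), T0, method='zoh')
print("b:", np.round(num_d.ravel(), 4))    # numerator coefficients of G(z)
print("a:", np.round(den_d, 4))            # denominator coefficients of G(z)
```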

Process IV: fifth order, low-pass behaviour.
Model of a steam superheater [18.5], [18.6].

    G_IV(s) = \frac{-(1 + 13.81 s)^2 (1 + 18.4 s)}{(1 + 59 s)^5}   [K/%]

[Step response plot y(k), k = 0 ... 40: low-pass transient, final value -1.0 K/%.]

T_0 = 20 sec:
    a_1 = -3.562473     b_1 = -1.73·10^-3
    a_2 =  5.076484     b_2 = -1.831·10^-3
    a_3 = -3.616967     b_3 =  2.143·10^-3
    a_4 =  1.288535     b_4 = -5.95·10^-4
    a_5 = -0.183615     b_5 =  4.9·10^-5

T_0 = 40 sec:
    a_1 = -2.538242     b_1 = -9.725·10^-3
    a_2 =  2.577069     b_2 = -2.1679·10^-2
    a_3 = -1.308245     b_3 =  2.18·10^-3
    a_4 =  0.332064     b_4 =  3.28·10^-4
    a_5 = -0.033714     b_5 = -3.6·10^-5

Process V: twovariable process 'evaporator and superheater of a drum steam generator' due to Figure 18.1.1 [18.5], [18.6].

    G_{11}(s) = G_IV(s)   [K/%]   (see process IV)

    G_{21}(s) = \frac{1.771}{(1 + 153.5 s)(1 + 24 s)(1 + 15 s)}   [K/%]

    G_{12}(s) = \frac{0.96}{695 s (1 + 15 s)}   [bar/%]

    G_{22}(s) = \frac{0.0605}{695 s}   [bar/%]

T_0 = 20 sec:

    G_{11}(z): see process IV

    G_{21}(z) = \frac{2.476·10^{-2} z^{-1} + 5.744·10^{-2} z^{-2} + 7.859·10^{-3} z^{-3}}{1 - 1.576 z^{-1} + 0.7274 z^{-2} - 0.1006 z^{-3}}

    G_{12}(z) = \frac{0.01237 z^{-1} + 0.00798 z^{-2}}{1 - 1.264 z^{-1} + 0.264 z^{-2}}

    G_{22}(z) = \frac{0.001741 z^{-1}}{1 - z^{-1}}

Process VI: third order, low-pass behaviour

    G_VI(s) = \frac{K}{(1 + T_1 s)(1 + T_2 s)(1 + T_3 s)}

    K = 1;  T_1 = 10 sec;  T_2 = 7.5 sec;  T_3 = 5 sec.

T_0 = 4 sec:
    a_1 = -1.7063;  a_2 = 0.9580;  a_3 = -0.1767
    b_1 =  0.0186;  b_2 = 0.0486;  b_3 =  0.0078

[Step response plot y(k), k = 0 ... 20: low-pass transient, final value 1.0.]

For T_0 = 2; 6; 8; 10; 12 sec see Table 3.7.1.

Process VII: second order, low-pass behaviour

    G_VII(s) = \frac{K}{(1 + T_2 s)(1 + T_3 s)}

    K = 1;  T_2 = 7.5 sec;  T_3 = 5 sec.

    G_VII(z) = \frac{b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}

T_0 = 4 sec:
    a_1 = -1.036;   a_2 = 0.2636
    b_1 =  0.1387;  b_2 = 0.0889

[Step response plot y(k), k = 0 ... 20: low-pass transient, final value 1.0.]
Process VIII: second order, low-pass behaviour

    G_VIII(s) = \frac{K}{(1 + T_1 s)(1 + T_2 s)}

    K = 1;  T_1 = 10 sec;  T_2 = 5 sec.

    G_VIII(z) = \frac{b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}

T_0 = 4 sec:
    a_1 = -1.1197;  a_2 = 0.3012
    b_1 =  0.1087;  b_2 = 0.0729

For T_0 = 1; 2; 6; 8; 12 sec see Table 3.7.2.
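For the simple lag processes VI to VIII the denominator parameters can be checked directly, because the z-transformation with a zero-order hold maps each continuous pole -1/T_i to a discrete pole z_i = e^{-T_0/T_i}. A quick check of the T_0 = 4 sec values of process VIII (illustrative Python, not from the book):

```python
import numpy as np

T0, T1, T2 = 4.0, 10.0, 5.0
z1, z2 = np.exp(-T0 / T1), np.exp(-T0 / T2)   # discrete poles of the ZOH-discretized process
a1, a2 = -(z1 + z2), z1 * z2                  # 1 + a1 z^-1 + a2 z^-2 = (1 - z1 z^-1)(1 - z2 z^-1)
print(round(a1, 4), round(a2, 4))             # -1.1196  0.3012, matching the table within rounding
```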


On the Differentiation of Vectors and Matrices

Let the vector x be a function of the parameters a_1, a_2, ..., a_n. The partial differential of this vector with respect to the single parameters is sought. Here a partial differential operator is defined as the column vector

\frac{\partial}{\partial a} = \begin{bmatrix} \frac{\partial}{\partial a_1} \\ \vdots \\ \frac{\partial}{\partial a_n} \end{bmatrix}

As this operator is defined as a column vector, it cannot be applied to the column vector x but only to its transpose x^T. This results in

\frac{\partial x^T}{\partial a} =
\begin{bmatrix}
\frac{\partial x_1}{\partial a_1} & \frac{\partial x_2}{\partial a_1} & \cdots & \frac{\partial x_p}{\partial a_1} \\
\vdots & & & \vdots \\
\frac{\partial x_1}{\partial a_n} & \frac{\partial x_2}{\partial a_n} & \cdots & \frac{\partial x_p}{\partial a_n}
\end{bmatrix}

If x is the inner product of two other vectors

x = v^T w = [v_1 \ \cdots \ v_p] \begin{bmatrix} w_1 \\ \vdots \\ w_p \end{bmatrix} = v_1 w_1 + \cdots + v_p w_p ,

i.e. a scalar, this gives

\frac{\partial}{\partial a}[v^T w] = \frac{\partial v^T}{\partial a} w + \frac{\partial w^T}{\partial a} v
= \begin{bmatrix}
\frac{\partial v_1}{\partial a_1} w_1 + \cdots + \frac{\partial v_p}{\partial a_1} w_p \\
\vdots \\
\frac{\partial v_1}{\partial a_n} w_1 + \cdots + \frac{\partial v_p}{\partial a_n} w_p
\end{bmatrix}
+ \begin{bmatrix}
\frac{\partial w_1}{\partial a_1} v_1 + \cdots + \frac{\partial w_p}{\partial a_1} v_p \\
\vdots \\
\frac{\partial w_1}{\partial a_n} v_1 + \cdots + \frac{\partial w_p}{\partial a_n} v_p
\end{bmatrix}

If the elements of the vector v do not depend on the parameters a_i and w = a, then

\frac{\partial}{\partial a}[v^T a] = v .

If, on the other hand, the elements of w do not depend on the parameters a_i and v = a,

\frac{\partial}{\partial a}[a^T w] = w .

The above pair of equations is also valid for matrices V and W instead of the vectors v and w, e.g.

\frac{\partial}{\partial a}[a^T W] = W .

Let A be a quadratic matrix; then

\frac{\partial}{\partial x}[x^T A y] = A y

\frac{\partial}{\partial y}[x^T A y] = A^T x

\frac{\partial}{\partial x}[x^T A x] = 2 A x , \quad A symmetrical.
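These differentiation rules are used repeatedly in the derivation of the least squares estimators. They can also be checked numerically with finite differences; the following sketch (illustrative, not part of the original text) verifies the two results that are needed most often.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)   # symmetric quadratic matrix
v = rng.standard_normal(n)
x = rng.standard_normal(n)

def grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x); d[i] = h
        g[i] = (f(x + d) - f(x - d)) / (2 * h)
    return g

print(np.allclose(grad(lambda x: v @ x, x), v))                         # d/dx [v^T x] = v
print(np.allclose(grad(lambda x: x @ A @ x, x), 2 * A @ x, atol=1e-5))  # d/dx [x^T A x] = 2 A x
```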

Table of z-Transforms
The following table contains some frequently used time functions x(t), their Laplace transforms x(s) and z-transforms x(z). The sample time is T_0. More functions can be found in [2.15], [2.19], [2.21], [2.11], [2.13], [2.14].

x(t)                   x(s)                        x(z)

1(t)                   1/s                         z/(z - 1)

t                      1/s^2                       T_0 z/(z - 1)^2

t^2                    2/s^3                       T_0^2 z(z + 1)/(z - 1)^3

e^{-at}                1/(s + a)                   z/(z - e^{-aT_0})

t e^{-at}              1/(s + a)^2                 T_0 z e^{-aT_0}/(z - e^{-aT_0})^2

t^2 e^{-at}            2/(s + a)^3                 T_0^2 z e^{-aT_0}(z + e^{-aT_0})/(z - e^{-aT_0})^3

1 - e^{-at}            a/(s(s + a))                (1 - e^{-aT_0}) z/((z - 1)(z - e^{-aT_0}))

e^{-at} - e^{-bt}      (b - a)/((s + a)(s + b))    (e^{-aT_0} - e^{-bT_0}) z/((z - e^{-aT_0})(z - e^{-bT_0}))

sin ω_1 t              ω_1/(s^2 + ω_1^2)           z sin ω_1 T_0/(z^2 - 2z cos ω_1 T_0 + 1)

cos ω_1 t              s/(s^2 + ω_1^2)             z(z - cos ω_1 T_0)/(z^2 - 2z cos ω_1 T_0 + 1)

e^{-at} sin ω_1 t      ω_1/((s + a)^2 + ω_1^2)     z e^{-aT_0} sin ω_1 T_0/(z^2 - 2z e^{-aT_0} cos ω_1 T_0 + e^{-2aT_0})

e^{-at} cos ω_1 t      (s + a)/((s + a)^2 + ω_1^2) z(z - e^{-aT_0} cos ω_1 T_0)/(z^2 - 2z e^{-aT_0} cos ω_1 T_0 + e^{-2aT_0})
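Each table entry can be checked numerically by comparing a partial sum of the defining series x(z) = Σ x(kT_0) z^{-k} with the closed-form expression at a test point z for which the series converges. The sketch below (illustrative Python, not part of the original table) does this for the sampled exponential e^{-at}.

```python
import numpy as np

a, T0 = 0.5, 2.0
z = 1.3 + 0.4j                                 # test point with |z| > e^{-a T0}, so the series converges

k = np.arange(400, dtype=float)
series = np.sum(np.exp(-a * k * T0) * z**(-k))  # partial sum of sum_k x(k T0) z^-k
closed_form = z / (z - np.exp(-a * T0))         # table entry for e^{-at}

print(abs(series - closed_form) < 1e-9)         # True
```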
Literature
[1.1] Thompson, A.: Operating experience with direct digital control.
IFAC/IFIP Conference on Application of Digital Computers for
Process control, Stockholm 1964, New York: Pergamon Press.

[1.2] Giusti, A.L., Otto, R.E. and Williams, T.J.: Direct digital
computer control. Control Engineering 9 (1962), 104-108.

[1.3] Evans, c.s. and Gossling, T.H.: Digital computer control of a


chemical plant. 2.IFAC-Congress, Basel 1963.

[1.4] Ankel, Th.: ProzeBrechner in der Verfahrenstechnik, gegenwarti-


ger Stand der Anwendungen. Regelungstechnik 16 (1968), 386-394.

[1.5] Ernst, D.: Digital control in power systems. 4th IFAC/IFIP Symp. on Digital Computer Applications to Process Control, Zürich (1974). Lecture Notes 'Control Theory' 93/94. Berlin: Springer (1974).

[1.6] Amrehn, H.: Digital computer applications in chemical and oil industries. 4th IFAC/IFIP Symposium on Digital Computer Applications to Process Control, Zürich (1974). Lecture Notes 'Control Theory' 93/94. Berlin: Springer (1974).

[1.7] Savas, E.S.: Computer control of industrial processes. London: McGraw-Hill (1965).

[1.8] Miller, W.E. (Ed.): Digital computer applications to process


control. New York: Plenum Press (1965).

[1.9] Lee, T.H., Adams, G.E. and Gaines, W.M.: Computer process control: modeling and optimization. New York: Wiley (1968).

[1.10] Schone, A.: ProzeBrechensysteme der Verfahrensindustrie. MUn-


chen: Carl Hanser (1969).

[1.11] Anke, K., Kaltenecker, H. and Oetker, R.: ProzeBrechner. Wir-


kungsweise und Einsatz. MUnchen: R. Oldenbourg (1971).

[1.12] Smith, C.L.: Digital computer process control. Scranton: In-


text Educ. Publish. (1972).

[1.13] Harrison, T.J. (Ed.): Handbook of industrial control computers.


New York: Wiley-Interscience (1972).

[1.14] Syrbe, M.: Messen, Steuern, Regeln mit ProzeBrechnern. Frank-


furt: Akad. Verlagsgesellschaft (1972).

[2.1] Oldenbourg, R.C. and Sartorius, H.: Dynamik selbsttatiger Rege-


lungen. MUnchen: R. Oldenbourg (1944 and 1951).

[2.2] Zypkin, J.S.: Differenzengleichungen der Impuls- und Regeltech-


nik. Berlin: VEB-Verlag Technik (1956).

[2.3] Jury, E.I.: Sampled-data control systems. New York: John Wiley
(1958).

[2.4] Ragazzini, J.R. and Franklin, G.F.: Sampled-data control sys-


tems. New York: Me Graw Hill (1958).

[2.5] Smith, O.J.M.: Feedback control systems. New York: Me Graw


Hill (1958).

[2.6] Zypkin, J.S.: Theorie der Impulssysteme. Moskau: Staatl. Verlag


fur physikalisch-mathematische Literatur (1958).

[2.7] Tou, J.T.: Digital and sampled-data control systems. New York:
Me Graw Hill (1959).

[2.8] Tschauner, J.: Einflihrung in die Theorie der Abtastsysteme.


Mlinchen: Oldenbourg (1960).

[2.9] Monroe, A.J.: Digital processes for sampled-data systems. New


York: J. Wiley (1962).

[2.10] Kuo, B.C.: Analysis and synthesis of sampled-data control


systems. Prentice-Hall (1963).

[2.11] Jury, E.I.: Theory and application of the z-transform method.


New York: J. Wiley (1964).

[2.12] Zypkin, J.S.: Sampling systems theory. New York: Pergamon Press (1964).

[2.13] Freeman, H.: Discrete-time systems. New York: J. Wiley (1965).

[2.14] Lindorff, D.P.: Theory of sampled-data control systems. New


York: John Wiley (1965).

[2.15] Strejc, V.: Synthese von Regelungssystemen mit ProzeBrechnern.


Berlin: Akademie-Verlag (1967).

[2.16] Zypkin, J.S.: Theorie der linearen Impulssysteme. Mlinchen:


R. Oldenbourg (1967).

[2.17] Kuo, B.C.: Discrete-data control systems. Englewood-Cliffs,N.J.:


Prentice Hall (1970).

[2.18] Cadzow, J.A. and Martens, H.R.: Discrete-time and computer con-
trol systems. Englewood-Cliffs, N.J.: Prentice Hall (1970).

[2.19] Ackermann, J.: Abtast-Regelung. Berlin: Springer (1972).

[2.20] Leonhard, W.: Diskrete Regelsysteme. Mannheim: Bibliographi-


sches Institut (1972).

[2.21] Föllinger, O.: Lineare Abtastsysteme. München: R. Oldenbourg (1974).

[2.22] Isermann, R.: Digitale Regelsysteme (in German, 1.Editio~).


Berlin: Springer (1977).

[3.1] Kurzweil, F.: The control of multivariable processes in the


presence of pure transport delays. IEEE Trans. on Aut. Control
(1963), 27-34.

[3.2] Koepcke, R.W.: On the control of linear systems with pure time
delays. Trans. ASME (1965), 74-80.

[3.3] Tustin, A.: Automatic and manual control. London: Butterworth (1952).

[3.4] Isermann, R.: Theoretische Analyse der Dynarnik industrieller


Prozesse. Mannheim: Bibliographisches Institut (1971) Nr. 764/
764a.

[3.5] Isermann, R.: Results on the simplification of dynamic process


models. Int. J. Control (1973), 149-159.

[3.6] Campbell, D.P.: Process dynamics. New York: John Wiley (1958).

[3.7] Profos, P.: Die Regelung von Dampfanlagen. Berlin: Springer (1962).

[3.8] Gould, L.A.: Chemical process and control. Massachusets:


Addison-Wesley (1969).

[3.9] MacFarlane, A.G.J.: Dynamical system models. London: G.G.


Harrap (1970).

[3.10] Gilles, E.D.: Systeme mit verteilten Pararnetern. Mlinchen: R.


Oldenbourg (1973).

[3.11] Isermann, R.: Experimentelle Analyse der Dynamik von Regel-


systemen. Mannheim: Bibliographisches Institut (1971) Nr. 515/
515a.

[3.12] Eykhoff, P.: System identification. London: John Wiley (1974).

[3.13] Isermann, R.: ProzeBidentifikation. Berlin: Springer (1974).

[3.14] Wilson, R.G., Fisher, D.G. and Seborg, D.E.: Model reduction
for discrete-time dynamic systems. Int. J. Control (1972),
549-558.

[3.15] Gwinner, K.: Modellbildung technischer Prozesse unter besonderer Berücksichtigung der Modellvereinfachung. PDV-Entwicklungsnotiz PDV-E 51 (1975). Karlsruhe: Ges. für Kernforschung.

[5.1] Bernard, J.W. and Cashen, J.F.: Direct digital control. Instr.
and Contr. Systems 38 (1965) Nr. 9, 151-158.

[5.2] Cox, J.B., Williams, L.J., Banks, R.S. and Kirk, G.J.: A prac-
tical spectrum of DDC chemical process control algorithms.
!SA-Journal 13 (1966) Nr. 10, 65-72.

[5.3] Davies, W.T.D.: Control algorithms for DDC. Instrument Prac-


tice 21 (1967) Nr. 1, 70-77.

[5.4] Lauber, R.: Einsatz von Digitalrechnern in Regelungssystemen.


ETZ-A 88 (1967), 159-164.

[5.5] Amrehn, H.: Direkte digitale Regelung. Regelungstechnische


Praxis 10 (1968), 24-31, 55-57.

[5.6] Hoffmann, M. and Hofmann, H.: Einfuhrung in die Optimierung.


Weinheim: Verlag Chemie (1973).

[5.7] Isermann, R., Bux, D., Blessing, P. and Kneppo, P.: Regel-
und Steueralgorithmen fur die digitale Regelung mit ProzeB-
rechnern - Synthese, Simulation, Vergleich -. PDV-Bericht Nr. 54
KFK-PDV, Karlsruhe: Gesellschaft fur Kernforschung (1975).

[5.8] Rovira, A.A., Murrill, P.W. and Smith, C.L.: Modified PI algo-
rithm for digital control. Instruments and Control Systems,
Aug. (1970), 101-102.

[5.9] Isermann, R., Bamberger, W., Baur, u., Kneppo, P. and Siebert,
H.: Comparison and evaluation of six on-line identification
methods with three simulated processes. IFAC-Symposium on Iden-
tification, Den Haag 1973, und IFAC-Automatica 10 (1974), 81-103.

[5.10] Lee, W.T.: Plant regulation by on-line digital computers.


S.I.T. Symposium on Direct Digital Control,

[5.11] Goff, K.W.: Dynamics in direct digital control, I and II.


ISA-Journal, Nov. 1966, 45-49, Dec. 1966, 44-54.

[5.12] Beck, M.S. and Wainwright, N.: Direct digital control of che-
mical processes. Control (1968) Part 5, 53-56.

[5.13] Bakke, R.M.: Theoretical aspects of direct digital control.


ISA-Trans. 8 (1969), 235-250.

[5.14] Oppelt, W.: Kleines Handbuch technischer Regelvorgange. Wein-


heim: Verlag Chemie (1960).

[5.15] Lopez, A.M., Murrill, P.W. and Smith, C.L.: Tuning PI- and
PID-digital controllers. Instrum. and Control Systems 42
(1969), 89-95.

[5.16] Takahashi, Y., Chan, c.s. and Auslander, D.M.: Parameterein-


stellung bei linearen DDC-Algorithmen. Regelungstechnik und
ProzeBdatenverarbeitung 19 (1971), 237-244.

[5.17] Takahashi, Y., Rabins, M. and Auslander, D.: Control and


dynamic systems. Reading: Addison-Wesley (1969).

[5.18] Schwarz, Th.: Einstellregeln fUr diskrete parameteroptimierte


Regelalgorithmen. Universitat Stuttgart: Studienarbeit Nr. 72/
74, Abteilung fUr Regelungstechnik und ProzeBdynamik (IVD),
( 1975).

[5.19] Tolle, H.: Optimierungsverfahren. Berlin: Springer (1971).

[5.20] Wilde, D.J.: Optimum seeking methods. Englewood Cliffs:


Prentice-Hall (1964).

[5.21] Hofer, E. and Lunderstadt, R.: Numerische Methoden der Optimie-


rung. MUnchen: R. Oldenbourg (1975).

[6.1] Bergen, A.R. and Ragazzini, J.R.: Sampled-data processing tech-


niques for feedback control systems. AlEE Trans. 73 (1954), 236.

[6.2] Strejc, V.: Synthese von Regelkreisen mit ProzeBrechnern. Mes-


sen, Steuern, Regeln (1967) Nr. 6, 201-207.

[6.3] Dahlin, E.B.: Designing and tuning digital controllers. Instr.


and Control Systems 41 (1968) Nr. 6, 77-83 und Nr. 7, 87-92.

[7.1] Jury, E.I. und Schroeder, W.: Discrete compensation of sampled


data and continuous control systems. Trans. AlEE, Vol. 75 (1956)
Pt. II.

[7.2] Kalman, R.E.: Discussion remark to a paper by Bergen, A.R. and


Ragazzini, J.R. Trans. AlEE (1954), 236-247.

[8.1] Bellman, R.E.: Dynamic programming. Princeton: Princeton Uni-


versity Press (1957).

[8.2] Kalman, R. and Koepcke, R.V.: Optimal synthesis of linear


sampling control systems using generalized performance indexes.
Trans. ASME (1958) Nr. 80, 1820-1826.

[8.3] Athans, M. and Falb, P.L.: Optimal control. New York: Me Graw
Hill (1966).

[8.4] Kwakernaak. H. and Sivan, R.: Linear optimal control systems.


New York: Wiley-Interscience (1972).

[8.5] Kneppo, P.: Vergleich von linearen Regelalgorithmen fUr Pro-


zeBrechner. Diss. Univ. Stuttgart. Karlsruhe: Gesellschaft
fUr Kernforschung, PDV-Bericht KFK-PDV 96 (1976).

[8.6] Johnson, C.D.: Accommodation of external disturbances in linear regulators and servomechanical problems. IEEE Trans. Aut. Contr. AC-16 (1971) Nr. 6.

[8.7] Kreindler, E.: On servo problems reducible to regulator pro-


blems. IEEE Trans. Aut. Contr., AC-14 (1969) Nr. 4.

[8.8] Bux, D.: Anwendung und Entwurf konstanter, linearer Zustands-


regler bei linearen Systernen mit langsarn veranderlichen Para-
rnetern. Diss. Univ. Stuttgart. Fortschritt-Bericht VDI-Zschr.
Reihe 8 (1975) Nr. 21.

[8.9] Rosenbrock, H.H.: Distinctive problems of process control.


Chern. Eng. Progress 58 (1962), 43-50.

[8.10] Porter, B. and Crossley, T.R.: Modal control. London: Taylor


and Francis (1972).

[8.11] Gould, L.A.: Chemical process control. Reading: Addison-Wesley


( 1969) .

[8.12] Follinger, 0.: Einflihrung in die modale Regelung. Regelungs-


technik 23 (1975), 1-10.

[8.13] Luenberger, D.G.: Observing the state of a linear system. IEEE


Trans. on Military Electronics (1964) Nr. 8, 74-80.

[8.14] Luenberger, D.G.: Observers for rnultivariable systems. IEEE


Trans. AC (1966) Nr. 11, 190-197.

[8.15] Luenberger, D.G.: An introduction to observers. IEEE Trans.AC


16 (1971) Nr. 6, 596-602.

[8.16] Levis, A.H., Athans, M. and Schlueter, R.A.: On the behavior of optimal linear sampled data regulators. Atlanta: Preprints Joint Automatic Control Conference (1970), 695-669.

[9.1] Reswick, J.B.: Disturbance response feedback. A new control


concept. Trans. ASME 78 (1956), 153.

[9.2] Smith, O.J.M.: Closer control of loops with dead time. Chern.
Engng. Progr. 53 (1957) Nr. 5, 217-219.

[9.3] Smith, O.J.M.: Feedback control systems. New York: McGraw-Hill (1958).

[9.4] Smith, O.J.M.: A controller to overcome dead time. !SA-Journal


6 (1958) Nr. 2, 28-33.

[9.5] Giloi, W.: Zur Theorie und Verwirklichung einer Regelung fur
Laufzeitstrecken nach dern Prinzip der erganzenden Rlickflihrung.
Diss. Univ. Stuttgart (1959).

[9.6] Schmidt, G.: Vergleich verschiedener Totzeitregelsysterne. Mes-


sen, Steuern, Regeln 10 (1967), 71-75.

[9.7] Frank, P.M.: Vollstandige Vorhersage irn stetigen Regelkreis


mit Totzeit. Regelungstechnik 16 (1968), 111-116 and 214-218.

[10.1] Horowitz, I.M.: Synthesis of feedback systems. New York: Aca-


demic Press (1963).

[10.2] Kreindler, E.: Closed-loop sensitivity reduction of linear op-


timal control systems. IEEE Trans. AC-13 (1968), 254-262.

[10.3] Perkins, W.R. and Cruz, J.B.: Engineering of dynamic systems.


New York: John Wiley (1969).

[10.4] 2nd IFAC-Symposiurn on System Sensitivity and Adaptivity, Du-


brovnik (1968). Preprints Yugoslav Committee for Electronics
and Automation (ETAN), Belgrad/Jugoslavia.

[10.5] 3rd IFAC-Symposiurn on Sensitivity, Adaptivity and Optimality,


Ischia (1973). Proceedings Instrument Soc. of America (ISA),
Pittsburgh.

[10.6] Tomovic, R. and Vucobratovic, M.: General sensitivity theory.


New York: Elsevier (1972).

[10.7] Frank, P.M.: Empfindlichkeitsanalyse dynamischer Systeme.


Mlinchen: R. Oldenbourg (1976).

[10.8] Anderson, B.D.O. and Moore, J.B.: Linear optimal control. En-
glewood Cliffs: Prentice-Hall (1971).

[12.1] Aoki, M.: Optimization of stochastic systems. New York: Aca-


demic Press (1967).

[12.2] Meditch, J.S.: Stochastic optimal linear estimation and control.


New York: Me Graw Hill (1969).

[12.3] Bryson, A.E. and Ho Y.C.: Applied optimal control. Watham:


Ginn and Company (1969).

[12.4] Åström, K.J.: Introduction to stochastic control theory. New York: Academic Press (1970).

[12.5] Kushner, H.J.: Introduction to stochastic control. New York:


Holt, Rinehart and Winston (1971).

[12.6] Schlitt, H. and Dittrich, F.: Statistische Methoden der Rege-


lungstechnik. Mannheim: Bibliographisches Institut (1972),
Nr. 526.

[12.7] Davenport, W. and Root, W.: An introduction to the theory of


random signals and noise. New York: Me Graw Hill (1958).

[12.8] Bendat, J.S. and Piersol, A.G.: Random data: analysis and mea-
surement procedures. New York: Wiley Interscience (1971).

[12.9] Box, G.E.P. and Jenkins, G.M.: Time series analysis, fore-
casting and control. San Francisco: Holden Day (1970).

[12.10] Jazwinski, A.H.: Stochastic processes and filtering theory.


New York: Academic Press (1970)

[14.1] Clarke, M.A. and Hastings-James, R.: Design of digital con-


trollers for randomly disturbed systems. Proc. IEE, Vol.118
(1971) Nr. 10, 1503-1506.

[14.2] Schumann, R.: Various multivariable computer control algorithms


for parameter-adaptive control systems. IFAC Symposium on Com-
puter Aided Design of Control Systems, Zurich (1979). Oxford:
Pergamon Press.

[15.1] Bar-Shalom, Y. and Tse, E.: Dual effect, certainty equivalence and separation in stochastic control. IEEE Trans. Auto. Contr. AC-19 (1974), 494-500.

[15.2] Sage, A.P. and Melsa, J.L.: Estimation theory with applications
to communications and control. New York: Me Graw Hill (1971).

[15.3] Nahi, N.E.: Estimation theory and applications. New York: John Wiley (1969).

[15.4] Kalman, R.E.: A new approach to linear filtering and prediction


problems. Trans. ASME, Series D, 82 (1960), 35-45.

[15.5] Kalman, R.E. and Bucy, R.S.: New results in linear filtering
and prediction theory. Trans. ASME, Series D. 83 (1961), 95-108.

[15.6] Deutsch, R.: Estimation theory. Englewood Cliffs: Prentice Hall


( 1965).

[15.7] Bryson, A.E. and Ho, Y.C.: Applied optimal control. Watham: Ginn
(Blaisdell) (1969).

[15.8] Jazwinski, A.H.: Stochastic processes and filtering theory.


New York: Academic Press (1970).

[15.9] Theory and Applications of Kalman Filtering. AGARDograph Nr. 139 (1970). Zentralstelle für Luftfahrtdokumentation, München. ESRO/ELDO Space 114, Neuilly-sur-Seine/France.

[15.10] Brammer, K. and Siffling, G.: Kalman-Bucy-Filter. Mlinchen:


R. Oldenbourg (1975).

[16.1] Benennungen flir Steuer- und Regelschaltungen. VDI/VDE-Richt-


linie 3526, VDI/VDE-Handbuch Regelungstechnik. Berlin: Beuth-
Vertrieb (1972).

[16.2] PreBler, G.: Regelungstechnik. Mannheim: Bibliographisches In-


stitut (1967).

[16.3] Leonhard, W.: Einflihrung in die Regelungstechnik. Braunschweig:


Vieweg (1972).

[17.1] Isermann, R. and Bauer, H.: Entwurf von Steueralgorithrnen flir


ProzeBrechner. ETZ-A 36 (1975), 242-245.

[18.1] Mesarovic, M.D.: The control of multivariable systems. New


York: John Wiley (1960).

[18.2] Schwarz, H.: Mehrfach-Regelungen. 1. Band. Berlin: Springer


(1967).

[18.3] Schwarz, H.: Mehrfach-Regelungen. 2. Band. Berlin: Springer


( 1971) .

[18.4] Thoma, M.: Theorie linearer Regelsysteme. Braunschweig: Vieweg


( 197 3) .

[18.5] Isermann, R.: Die Berechnung des dynamischen Verhaltens der


Dampftemperatur-Regelung von Dampferzeugern. Regelungstechnik
14 (1966), 469-475 and 519-522.

[18.6] Isermann, R., Baur, U. and Blessing, P.: Test case C for the
comparison of different identification methods. Boston: Proc.
of the 6. IFAC-Congress (1975).

[18.7] Schramm, H.: Beitrage zur Regelung von ZweigroBensystemen am


Beispiel eines Verdampfers. Fortschr.-Bericht VDI-Z, Reihe 8
(1976) Nr. 24.

[18.8] Freund, E.: Zeitvariable MehrgroBensysteme. Lecture Notes in


Operations Research and Mathematical Systems. Berlin: Springer
( 1971) .

[18.9] Sinha, N.K.: Minimal realization of transfer functions matri-


ces - a comparative study of different methods. Int. J. Con-
trol 22 (1975), 627-639.

[18.10] Kucera, V.: Discrete linear control. Prague: Academia (1979).

[19.1] Kraemer, W.: Grenzen und Moglichkeiten nicht entkoppelter, li-


nearer Zweifachregelungen. Fortschritt-Bericht VDI-Z, Reihe 8
( 1968) Nr. 10.

[19.2] Muckli, W.: Analyse und Optimierung nicht entkoppelter zwei-


fachregelkreise. Aachen: Diss. TH (1968).

[19.3] Muckli, W. and Kraemer, W.: Reglereinstellung an nicht entkop-


pelten ZweigroBensystemen. Regelungstechnik 20 (1972), 155-163.

[19.4] Zietz, H.: Stabilitatsbetrachtungen und Reglerentwurf bei nicht


entkoppelten ZweigroBenregelungen. Messen, Steuern, Regeln 16
(1973) ' 84-88.

[19.5] Niederlinski, A.: A heuristic approach to the design of linear


multivariable interacting control systems. Automatica 7 (1971),
691-701.
[19.6] Engel, W.: Grundlegende Untersuchungen tiber die Entkopplung von
Mehrfachregelkreisen. Regelungstechnik 14 (1966), 562-568.

[20.1] Schumann, R.: Various multivariable computer control algorithms for parameter-adaptive control systems. IFAC Symposium on Computer Aided Design of Control Systems, Zürich (1979). Oxford: Pergamon Press.

[20.2] Keviczky, L. and Hetthessy, J.: Self-tuning minimum variance


control of MIMO discrete systems. Autom. Control Theory and
Applic., Vol. 5 (1977).

[21.1] Falb, P.L. and Wolovich, W.A.: Decoupling in the design and
synthesis of multivariable control systems. IEEE Trans. AC 12
(1967)' 651-659.

[21.2] Gilbert, E.G.: The decoupling of multivariable systems by state


feedback. SIAM Control, Vol. 7 (1969) Nr. 1, 50-63.

[21.3] Schwarz, H.: Optimale Regelung linearer Systeme. Mannheim:


Bibliograph. Institut (1976).

[22.1] Aseltine, J.A., Mancini, A.R. and Sature, C.W.: A survey of


adaptive control systems. IRE Trans. Aut. Contr. 6 (1958), 102.

[22.2] Stromer, R.R.: Adaptive and self-optimizing control systems - a bibliography. IRE Trans. Aut. Contr. 4 (1959), 65.

[22.3] Truxal, J.: Adaptive control- a survey. Proceed. 2nd IFAC-


Congress, Basel (1963).

[22.4] Donaldsson, D.P. and Kishi, F.H.: Review of adaptive control


systems theories and techniques. Modern Control Systems Theory,
edited by C.T. Leondes. New York: Me Graw Hill (1965).

[22.5] Tsypkin, Y.Z.: Adaptation, training and self organization in


automatic systems. Automation Remote Control 27 (1971), 16.

[22.6] Mishkin, E. and Braun, L.: Adaptive control systems. New York:
Me Graw Hill (1961).

[22.7] Eveleigh, V.W.: Adaptive control and optimization technique.


New York: Me Graw Hill (1967).

[22.8] Mendel, J.M. and Fu, K.S.: Adaptive, learning and pattern re-
cognition systems. New York: Academic Press (1970).

[22.9] Tsypkin, Y.Z.: Adaptation and learning in automatic systems.


New York: Academic Press (1971).

[22.10] Weber, W.: Adaptive Regelungssysteme. Band I und II. Mlinchen:


R. Oldenbourg (1971).

[22.11] Maslov, E.P. and Osovskii, L.M.: Adaptive control systems with
models. Automation and Remote Control 27 (1966), 1116.

[22.12] Landau, I.D.: A survey of model reference adaptive techniques.


Proc. 3rd IFAC-Symposium on Sensitivity, Adaptivity and Opti-
mality, Ischia (1973). Pittsburgh: Instrument Soc. of America
( 1973).

[22.13] Lindorff, D.P. and Carroll, R.L.: Survey of adaptive control


using Ljapunov design. Int. J. Control 18 (1973), 897.

[22.14] Wittenrnark, B.: Stochastic adaptive control methods: A survey.


Int. J. Control 21 (1975), 705-730.

[22.15] Saridis, G.N., Mendel, J.M. and Nicolic, Z.J.: Report on de-
finitions of self-organizing control processes and learning
systems. IEEE Control System Soc. Newsletters (1973) Nr. 48,
8-13.
[22.16] Gibson, J.: Nonlinear automatic control. New York: McGraw-Hill (1962).

[22.17] Landau, I.D.: A survey of model reference adaptive techniques


- theory and applications. Automatica 10 (1974), 353-379.

[23.1] Panuska, V.: A stochastic approximation method for identifi-


cation of linear systems using adaptive filtering. Proc. JACC
( 1968).

[23.2] Panuska, V.: An adaptive recursive least squares identifica-


tion algorithm. Proc. IEEE Symp. on Adaptive Processes, Deci-
sion and Control (1969).

[23.3] Young, P.C.: The use of linear regression and related proce-
dures for the identification of dynamic processes. Proc. 7th
IEEE Symp. on Adaptive Processes. New York: IEEE (1968).

[23.4] Young, P.C., Shellswell, S.H. and Neethling, e.G.: A recursive


approach to time-series analysis. Report CUED/B-Control/TR16,
University of Cambridge (1971).

[23.5] Wong, K.Y. and Polak, E.: Identification of linear discrete


time systems using the instrumental variable method. IEEE
Trans. Aut. Contr. AC-12 (1967), 707-718.

[23.6] Young, P.C.: An instrumental variable method for real-time identification of a noisy process. IFAC-Automatica 6 (1970), 271-287.
[23.7] Fuhrt, B.P. and Carapic, M.: On-line maximum likelihood algo-
rithm for the identification of dynamic systems. 4th IFAC-
Symposium on Identification, Tbilisi (1976).

[23.8] Soderstrom, T.: An on-line algorithm for approximate maximum


likelihood identification of linear dynamic systems. Report
7308. Dept. of Automatic Control, Lund Inst. of Technology
( 197 3) .

[23.9] Isermann, R., Baur, U., Bamberger, W., Kneppo, P. and Siebert,
H.: Comparison of six on-line identification and parameter
estimation methods. IFAC-Automatica 10 (1974), 81-103.

[23.10] Saridis, G.N.: Comparison of six on-line identification algo-


rithms. IFAC-Automatica 10 (1974), 69-79.

[23.11] Baur, U.: On-line Parameterschatzverfahren zur Identifikation


linearer dynamischer Prozesse mit ProzeBrechnern. Diss. Univ.
Stuttgart. Karlsruhe: Gesellschaft f. Kernforschung, Bericht
KFK-PDV 65 (1976).

[23.12] Baur, U. and Isermann, R.: On-line identification of a heat


exchanger with a process computer. IFAC-Automatica 13 (1977).

[23.13] Soderstrom, T., Ljung, L. and Gustavsson, I.: A comparative


study of recursive identification methods. Dept. of Automat.
Control, Lund Inst. of Technology, Report 7427 (1974).

[23.14] Hannan, E.J.: Multiple time series. New York: J. Wiley (1970).

[23.15] Åström, K.J. and Bohlin, T.: Numerical identification of linear system dynamics from normal operating records. IFAC-Symposium on Theory of Self Adaptive Control Systems, Teddington. New York: Plenum Press (1966).

[23.16] Isermann, R.: Practical aspects of process identification.


IFAC-Automatica 16 (1980).

[23.17] Ljung, L.: Analysis of recursive stochastic algorithms. IEEE


Trans. AC-22 (1977), 551-575.

[23.18] Kaminski, P.G., Bryson, A.E. and Schmidt, S.F.: Discrete


square root filtering. A survey of current techniques. IEEE
Trans. AC-16 (1971), 727-735.

[23.19] Peterka, V.: A square root filter for real time multivariate
regression. Kybernetika 11 (1975), 53-67.

[23.20] Strejc, V.: Least squares parameter estimation. Automatica 16


( 1980).

[23.21] Ljung, L., Morf, M. and Falconer, D.: Fast calculation of gain
matrices for recursive estimation schemes. Int. J. Control 27
(1978), 1-19.
[23.22] Mancher, H.: Vergleich verschiedener Rekursionsalgorithmen flir
die Methode der kleinsten Quadrate. Technische Hochschule
Darmstadt: Diploma thesis,Institut flir Regelungstechnik (1980).

[24.1] Bohlin, T.: On the problem of ambiguities in maximum likeli-


hood identification. Automatica 7 (1971), 199-210.

[24.2] Gustavsson, I., Ljung, L. and Soderstrom, T.: Identification


of linear, multivariable process dynamics using closed loop
experiments. Report 7401, Dept. of Automat. Control, Lund
Inst. of Technology (1974).

[24.3] Kurz, H. and Isermann, R.: Methods for on-line process iden-
tification in closed loop. 6th IFAC-Congress, Boston (1975).

[24.4] Gustavsson, I., Ljung, L. and Soderstrom, T.: Identification


of process in closed loop - identification and accuracy as-
pects. Preprints 4th IFAC-Symposium on Identification, Tbili-
si (1976) and Report 7602, Dept. of Automat. Control, Lund
Inst. of Technology (1976).

[24.5] Kurz, H.: Recursive process identification in closed loop with


switching regulators. Proc. 4th IFAC-Symp. on Identification.
Amsterdam: North Holland (1977).

[25.1] Patchell, J.W. and Jacobs, O.L.R.: Separability, neutrality and


certainty equivalence, Int . .J. Control 13 (1971), 337-342.

[25.2] Bar-Shalom, Y. and Tse, E.: Dual effect, certainty equivalence


and separation in stochastic control. IEEE Trans. AC-19 (1974),
494-500.
[25.3] Tou, J.T.: Optimum design of digital control systems. New York:
Academic Press (1963).

[25.4] Gunckel, T.L. and Franklin, G.F.: A general solution for linear
sampled-data control. J. bas. Engng. 85 (1963), 197.

[25.5] Feldbaurn, A.A.: Optimal control systems. New York: Academic


Press ( 1965).

[25.6] Kurz, H., Isermann, R. and Schumann, R.: Development, comparison and application of various parameter-adaptive digital control algorithms. 7th IFAC-Congress, Helsinki (1978).

[25.7] Peterka, V.: Adaptive digital regulation of noisy systems.


2nd IFAC-Symposium on Identification, Prag (1970).

[25.8] Åström, K.J. and Wittenmark, B.: On self tuning regulators. IFAC-Automatica 9 (1973), 185-199.

[25.9] Wittenmark, B.: A self tuning regulator. Report 7311, Dept.


of Automat. Control, Lund Inst. of Technology (1973).

[25.10] Ljung, L. and Wittenmark, B.: Asymptotic properties of self


tuning regulators. Report 7404. Dept. of Automat. Control.
Lund Inst. of Technology (1974).

[25.11] Borrisson, U.: Self tuning regulators- industrial application


and multivariable theory. Report 7513, Dept. of Automat. Con-
trol, Lund Inst. of Technology (1975).

[25.12] Åström, K.J., Borrisson, U., Ljung, L. and Wittenmark, B.: Theory and applications of adaptive regulators based on recursive parameter estimation. Paper 50.1, 6th IFAC-Congress, Boston (1976) and Automatica 13 (1977), 457-476.

[25.13] Clarke, D.W. and Gawthrop, B.A.: Self tuning controller. Proc. IEE 122 (1975), 929-934.

[25.14] Kurz, H. and Isermann, R.: Feedback control algorithms for


parameter adaptive control - comparison and identifiability
aspects. San Francisco: Joint Automatic Control Conference
( 1977) .

[25.15] Kurz, H., Isermann, R. and Schumann, R.: Experimental compari-


son and application of various parameter-adaptive control
algorithms. Automatica 16 ( 1980), 117-133.

[25.16] Kurz, H.: Digitale adaptive Regelung auf der Grundlage rekur-
siver Parameterschatzung. Dissertation Technische Hochschule
Darmstadt. Karlsruhe: Ges. f. Kernforschung, Bericht KFK-PDV
188 ( 19 80) .

[25.17] Ljung, L.: On positive real transfer functions and the conver-
gence of some recursions. IEEE Trans. AC-22 (1977), 539.

[25.18] Gawthrop, P.J.: Some interpretations of the self tuning con-


troller. Proc. IEE 124 (1977), 889-894.

[25.19] Clarke, D.W. and Gawthrop, P.J.: Self tuning control. Proc.
IEE 126 ( 1979), 633-640.

[25.20] Egardt, B.: Stability of adaptive controllers. Lecture Notes


in Control and Information Sciences. Berlin: Springer (1979).

[25.21] Kurz, H.: Digital parameter adaptive control of processes with


unknown constant or timevarying dead time. 5th IFAC Symposium
on Identification and System Parameter Estimation Darmstadt
( 1979).

[25.22] Källström, C.G., Åström, K.J., Thorell, N.E., Eriksson, J. and Sten, L.: Adaptive autopilots for large tankers. 7th IFAC-Congress, Helsinki (1978).

[25.23] Dumont, G.A. and Belanger, R.R.: Self tuning control of a titanium dioxide kiln. IEEE Trans. AC-23 (1978), 532-538.

[25.24] Clarke, D.W. and Gawthrop, P.J.: Implementation and applica-


tion of microprocessor based self tuners. 5th IFAC Symposium
on Identification and System Parameter Estimation Darmstadt
( 1979) .

[25.25] Bergmann, S., Radke, F. and Isermann, R.: Ein universeller


digitaler Regler mit Mikrorechner. Regelungstechnische Praxis
20 (1978), 289-294 and 322-325.

[25.26] Bergmann, S. and Schumann, R.: Digitale adaptive Regelung einer Lüftungsanlage. Regelungstechnische Praxis 22 (1980), 280-286.
[25.27] Buchholt, F. and Kümmel, M.: Self tuning control of a pH-neutralization process. Automatica 15 (1979), 665-671.

[25.28] Bergmann, S. and Lachmann, K.-H.: Digital parameter adaptive


control of a pH process. San Francisco: Joint Automatic Con-
trol Conference (1980).

[25.29] Schumann, R. and Christ, H.: Adaptive feedforward controllers for measurable disturbances. Denver: Joint Automatic Control Conference (1979).

[25.30] Peterka, V. and Åström, K.J.: Control of multivariable systems with unknown but constant parameters. 3rd IFAC Symposium on Identification and System Parameter Estimation, The Hague (1973).

[25.31] Keviczky, L. and Hetthessy, J.: Self tuning minimum variance


control of MIMO discrete time systems. Automatic Control
Theory and Applic. 5 (1977).

[25.32] Borisson, V.: Self tuning regulators for a class of multi-


variable systems. 4th IFAC Symposium on Identification and
System Parameter Estimation Tbilisi (1976).

[25.33] Schumann, R.: Identification and adaptive control of multi-


variable stochastic linear systems. 5th IFAC Symposium on
Identification and System Parameter Estimation Darmstadt (1979).

[25.34] Blessing, P.: Identification of the input-output and noise-


dynamics of linear multivariable systems. 5th IFAC Symposium
on Identification and System Parameter Estimation Darmstadt
( 1979) .

[25.35] Schumann, R.: Digital parameter-adaptive control of an air


conditioning plant. 6th IFAC/IFIP Conference on Digital
Computer Applications to Process Control Dusseldorf (1980).

[26.1] Bertram, J.E.: The effect of quantization in sampled feedback


systems. AIEE on Applic. and Industry, Vol. 77 (1958), 177-182.

[26.2] Zypkin, Y.Z.: An estimate of the influence of amplitude quan-


tization on process in digital control systems. Automata i
Telemekh. 21 ( 1960) , 195.

[26.3] Knowles, J.B. and Edwards, R.: Effect of a finite word length
computer in a sampled-data-feedback system. Proc. IEE, Vol.
112 (1965), 1197-1207 and 2376-2384.

[26.4] Biondi, E., Debenedetti, A. and Rotolni, P.: Error determina-


tion in quantized sampled-data-systems. 3rd IFAC-Congress
London (1966).

[26.5] Koivo, A.J.: Quantization error and design of digital control


systems. Trans. IEEE Aut. Contr. AC-14 (1969), 55-58.

[26.6] Scheel, K.H.: Der EinfluB des Rundungsfehlers beim Einsatz des
ProzeBrechners. Regelungstechnik 19 (1971), 326, 329-331 and
389-392.
[26.7] Blackman, R.B.: Linear data-smoothing and prediction in theory and practice. Reading, Mass.: Addison-Wesley (1965).

[27.1] Gold, B. and Rader, Ch.M.: Digital processing of signals.


New York: Me Graw Hill (1969).

[27.2] Schussler, H.W.: Digitale Systeme zur Signalverarbeitung.


Berlin: Springer (1973).

[27.3] Lauber, R.: ProzeBautomatisierung I. Berlin: Springer (1976).

[27.4] Goff, K.: A systematic approach to DDC design. ISA-J. (1966).

[27.5] Welfonder, E.: Vergleich analoger und digitaler Filterung beim


Einsatz von ProzeBrechnern. Regelungstechnik (1975).

[27.6] Schenk, Ch. and Tietze, U.: Aktive Filter. Elektronik 19 (1970).

[27.7] Berthold, W. and Leffler, U.: Aktive Hoch- und TiefpaBfilter


mit handelsublichen Bauelementen. Elektronik 25 (1976).

[28.1] Fellinger, 0.: Nichtlineare Regelungen. Munchen: Oldenbourg,


Band I (1969), Band II (1970), Band III (1970).

[28.2] Leonhard, W.: Einfuhrung in die Regelungstechnik. Nichtlineare


Regelvorgange. Braunschweig: Vieweg (1970).

[28.3] Glattfelder, A.H.: Regelungssysteme mit Begrenzungen. Munchen:


Oldenbourg (1974).

[28.4] PreBler, G.: Regelungstechnik. Mannheim: Bibliographisches In-


stitut (1967).

[29.1] Baur, u. and Isermann, R.: On-line identification of a heat


exchanger- a case study. IFAC-Automatica 13 (1977).

[29.2] Baur, U.: On-line-Pararneterschatzverfahren zur Identifikation


linearer dynamischer Prozesse mit ProzeBrechnern. Karlsruhe:
Gesellschaft f. Kernforschung, Bericht KFK-PDV 65 (1976).

[29.3] Blessing, P. and Baur, U.: On-line-Identifikation von Ein- und Zweigrößenprozessen mit den Programmpaketen OLID. Düsseldorf: VDI-Bericht Nr. 276 'Prozeßmodelle 1977'.

[29.4] Mann, W.: 'OLID-SISO'. Ein Programm zur On-line-Identifikation dynamischer Prozesse mit Prozessrechnern - Benutzeranleitung. Karlsruhe: Gesellschaft f. Kernforschung, Bericht E-PDV 114 (1978).

[29.5] Dymschiz, E. and Isermann, R.: Computer aided design of control


algorithms based on identified process models. 5th IFAC/IFIP-
Conference on Digital Computer Applications to Process Control
Den Haag (1977).

[29.6] Dymschiz, E.: Rechnergestützter Entwurf von Regelungen mit Prozeßrechnern und dem Programmpaket CADCA. Düsseldorf: VDI-Bericht Nr. 276 'Prozeßmodelle 1977'.

[29.7] Dymschiz, E.: 'CADCA-SISO'. Ein Programmpaket zum rechnergestützten Entwurf von Regelalgorithmen. Karlsruhe: Gesellschaft f. Kernforschung, PDV-Entwicklungsnotiz (1980).

[29.8] Dymschiz, E.: A process computer program package for inter-


active computer aided design of multivariable control systems.
2nd IFAC/IFIP Symposium on Software for Computer Control Prague
( 1979).

[30.1] Mann, W.: Identifikation und digitale Regelung eines Trommeltrockners. Dissertation Technische Hochschule Darmstadt. Karlsruhe: Gesellschaft f. Kernforschung, PDV-Bericht (1980).

[30.2] Mann, w.: Digital control of a rotary drier in the sugar indu-
stry. 6th IFAC/IFIP Conference on Digital Computer Applications
Dusseldorf (1980).

[30.3] Mosel, P., Feuerstein, E., Peters, P. and Scholze, G.: Führung einer Trommeltrockneranlage für Preßschnitzel mit einem Prozeßrechner. Zuckerind. 105 (1980), 554-561.

[30.4] Isermann, R.: Digital control methods for power station plants
based on identified process models. IFAC Symposium on Automatic
Control in Power Generation, Distribution and Protection
Pretoria (1980).
List of Abbreviations and Symbols

This list defines commonly occurring abbreviations and symbols.

Symbols

a, b    parameters of the difference equations of the process
c, d    parameters of the difference equations of stochastic signals
d       dead time d = T_t/T_0 = 1, 2, ...
e       control deviation e = w - y (also e_w = w - y); or equation error for parameter estimation; or the number e = 2.71828...
f       frequency, f = 1/T_p (T_p period), or parameter
g       impulse response (weighting function)
h       parameter
i       integer; or index; or i^2 = -1
k       discrete time unit k = t/T_0 = 0, 1, 2, ...
l       integer; or parameter
m       order of the polynomials A( ), B( ), C( ), D( )
n       disturbance signal (noise)
p       parameters of the difference equation of the controller, or integer
p( )    probability distribution
q       parameters of the difference equation of the controller
r       weighting factor of the manipulated variable; or integer
s       variable of the Laplace transform s = δ + iω; or signal
t       continuous time
u       input signal of the process, manipulated variable u(k) = U(k) - U_∞
v       nonmeasurable, virtual disturbance signal
w       reference value, command variable, setpoint w(k) = W(k) - W_∞
x       state variable
y       output signal of the process, controlled variable y(k) = Y(k) - Y_∞
z       variable of the z-transformation z = e^(T_0 s)
a, b    parameters of the differential equations of the process

A(z)    denominator polynomial of the z-transfer function of the process model
B(z)    numerator polynomial of the z-transfer function of the process model
C(z)    denominator polynomial of the z-transfer function of the noise model
D(z)    numerator polynomial of the z-transfer function of the noise model
G(z)    z-transfer function
H( )    transfer function of a holding element
I       control performance criterion
K       gain
L       word length
M       integer
N       integer or discrete measuring time
P( )    denominator polynomial of the z-transfer function of the controller
Q( )    numerator polynomial of the z-transfer function of the controller
R( )    dynamical control factor
S       power density or sum criterion
T       time constant
T_95    settling time of a step response until 95 % of final value
T_0     sample time
T_t     dead time
U       process input (absolute value)
V       loss function
W       reference variable (absolute value)
Y       process output variable (absolute value)

b       control vector
c       output vector
k       parameter vector of the state controller
n       noise vector (r×1)
u       input vector (p×1)
v       noise vector (p×1)
w       reference variable vector (r×1)
x       state variable vector (m×1)
y       output vector (r×1)

A       system matrix (m×m)
B       input matrix (m×p)
C       output matrix, observation matrix (r×m)
D       input-output matrix (r×p), or diagonal matrix
F       noise matrix; or F = A - B K
G       matrix of transfer functions
I       unity matrix
K       parameter matrix of the state controller
Q       weighting matrix of the state variables (m×m)
R       weighting matrix of the inputs (p×p); or controller matrix

𝒜(z)    denominator polynomial of the z-transfer function, closed loop
ℬ(z)    numerator polynomial of the z-transfer function, closed loop
ℐ       information
ℒ       Laplace transform
𝒵       z-transform
        correspondence symbol: G(s) ↔ G(z)

α       coefficient
β       coefficient
γ       coefficient; or state variable of the reference variable model
δ       deviation, or error
ε       coefficient
ζ       state variable of the noise model
η       state variable of the noise model; or noise/signal ratio
κ       coupling factor; or stochastic control factor
λ       standard deviation of the noise v(k)
μ       order of P(z)
ν       order of Q(z); or state variable of the reference variable model
π       3.14159...
σ       standard deviation, σ² variance, or related Laplace variable
τ       time shift
ω       angular frequency ω = 2π/T_p (T_p period)

Δ       deviation; or change; or quantization unit
Θ       parameter
Π       product
Σ       sum
Ω       related angular frequency

ẋ       = dx/dt
x_0     exact quantity
x̂       estimated or observed variable
Δx      = x̂ - x_0 estimation error
x̄       average
x_∞     value in steady state

Mathematical abbreviations
E{ }    expectation of a stochastic variable
var[ ]  variance
cov[ ]  covariance
dim     dimension, number of elements
tr      trace of a matrix: sum of diagonal elements
adj     adjoint
det     determinant

Indices
P       process
Pu      process with input u
Pv      process with input v
R or C  feedback controller, feedback control algorithm, regulator
S or C  feedforward controller, feedforward control algorithm
0       exact value
∞       steady state, d.c. value

Abbreviations for controllers or control algorithms {C)

i-PC-j parameter optimized controller with i parameters and j parameters


to be optimized
DB Deadbeat-controller
LC-PA linear controller with pole assignment
PREC predictor controller
MV minimum variance controller
SC state controller (usually with an observer)

Abbreviations for parameter estimation methods

COR-LS correlation analysis with LS parameter estimation


IV instrumental variables
LS least squares
ML maximum-likelihood
STA stochastic approximation

The letter R means recursive algorithm, i.e. RIV, RLS, RML

Other abbreviations

ADC analog-digital converter


CPU central processing unit
DAC digital-analog converter
PRBS    pseudo-random binary signal

M...    multivariable ...


Subject Index

Abbreviations Aliasing effect, 475


list, 552 Amplitude quantization, 457
Actuators, 13, 488 Analog/Digital (A/D) conversion,
adaptation of control algorithms, 10, 457
control, 488 Analog/Digital (A/D) converter
feedforward (open loop) control (ADC), 10
properties, 491 integrating, 486
response, 489 quantization, 457
Actuator speed Autocorrelation, 242
constant, 490, 494 Autocovariance, 242
variable, 490, 492 Automation, 2
Adaptive control systems, 360 Autoregressive moving average
applications, 438, 521 process (ARMA), 365
definition, 361 Autoregressive process (AR), 247
feedback, 361 Auxiliary control variables, 294
feedforward, 361, 444 Averaging of mean values
multivariable, 447 constant value, 483
nondual, 405 fading memory, 484
self-optimizing, 361, 401 frozen correcting factor, 482
Adaptive controllers infinite memory, 483
cautious, 404 limited memory, 484
convergence, 420 recursive, 371, 483
dual, 405 timevariant variables, 371
feedback, 361 Averaging of vector measurements,
feedforward controlled, 361 286
multivariable, 447
nondual, 405 Batch processes, 53
parameter adaptive, 401,414 Bellman's optimality principle,
reference model, 362 135
self-optimizing, 13, 401 Bessel filter, 477
stability, 416 Bias
Airheater unbiased, 365
adaptive control, 438 Butterworth filter, 477

Cancellation controller, 117, 211 cesses, 192


Cancellation feedforward control, dynamic control factor, 232
11 71 304 performance with different
Cancellation of poles and zeros, controllers, 217
120 poles and zeros, 207
Canonical structures Computational effort
multivariable processes, 317 algorithm synthesis, 230, 412
P-canonical, 319 between samplings, 230, 412
state representation, 43 Computer
transfer functions, 317 micro, 3
v-canonical, 317 process, 3
Cascade control, 294 Computer aided design (CAD),
see also rotary dryer 13, 500
Case studies, 505 Consistent, 365
heat exchanger, 505 Control
identification and digital digital, 9
control, 505 processes with large parameter
parameter adaptive control, 521 changes, 206
rotary dryer, 510 robust, 206
steam generator, 519 sampled-data, 9
Cautious controller, 404 supervisory, 3
Central processing unit (CPU), variable processes, 199
10, 458 Control algorithms, 11
Centralization, 3 see also controllers
Certainty equivalence controller, computer aided design, 500
404, 414 design, 69
Certainty equivalence principle low order, 76
adaptive control, 404 properties, 239, 413
state controller, 278 Control deviation (error), 72
Characteristic equation, 150, 210 Control factor
closed loop, 208 dynamic, 232
given, 150, 210 stochastic, 251
Markov signal, 246 Control performance
multivariable process, 321 criteria, 70, 78
state control, 143 quadratic, 78, 135
state representation, 49 Control system structure, 12, 316
Comparison Control systems
adaptive controllers, 426 adaptive, 360, 401
controller structure, 207 analog, 2
controllers, 207 auxiliary variables, 294
controllers for deadtime pro- cascade, 294

computer aided design, 500 matrix polynomial, 352, 448


design, 11, 501 minimum variance, 253, 407
deterministic, 67 modified PID-controllers, 85
digital, 4, 11 parameter-optimized 74, 211,
feedback, 2, 12, 69 239, 249, 335, 409
feedforward, 2, 12, 69, 302 P-, PD-, PI-, PID-, 85
interconnected, 293 PID-, 74
modal, 152 predictor, 121, 213
multilevel, 1, 2 processes with large deadtime,
multivariable, 316 183
optimally structured, 70 self tuning, 406, 423
parameter-optimized, 70, 74 state, 134, 214, 240, 274, 412
reference, 67 structure, 67
sampled-data, 10, 11 Convergence
servo, 67 parameter-adaptive control, 420
singlevariable, 69 parameter estimation, 369
state variable, 69 Convolution sum, 27
stochastic, 241, 403 Coordination, 2
structure, 67 parameter-adaptive control, 454
structure optimal, 70 Correction matrix, 290
terminal, 67 Correction vector, 368, 381
tracking, 67 Correlation function, 242
Control variable, 67 auto correlation, 242
Controllability, 49 cross correlation, 243
Controller Coupling controller, 335
adaptive, 360 Coupling elements, 318
analog, 2, 11 Coupling factor, 321
auxiliary, 294 dynamic, 322
cancellation, 117, 211 negative, 323
comparison, 207 positive, 323
computer aided design, 500 static, 323
coupling, 335 Covariance function, 242
deadbeat, 122, 212, 240,406 auto covariance, 242
digital, 3, 456 cross covariance, 243
feedback, 12 Covariance matrix, 244
feedforward, 12 Cross correlation, 243
finite settling time, Cross covariance, 243
see deadbeat Cumulative integration,
general linear, 209, 412 see wind-up
input-output, 207
main, 294, 362, 335

D.C.value (direct current), 365, 415 Disturbances


estimation, 370 deterministic, 67
DOC (direct digital control), 3 initial value, 135
see digital control systems stochastic, 241
Dead band, 469
Deadbeat controller Eigen values, 246
increased order, 127 state control, 153
matrix polynomial, 353 Eigen vectors, 154
normal order, 122, 212, 406 Equation error, 367
state representation, 152, 157 Estimation
Deadtime, 37, 183 recursive, 285, 367
controller, 183, 267 state, 284
state representation, 43 vector states, 288
Decentralization , 3 Expectation value, 242
Decoupling, 346
Definite matrix Feedback
positive, 135 control, 69
semi- , 1 3 5 Feedforward control, 69, 302
Delta function dead band, 469
see impulse ideal cancellation, 117, 304
Kronecker-, 274 minimum variance, 313
Describing function, 466 parameter-optimi zed, 307
Design state variable, 313
computer-aided, 13, 500 Filter
control systems, 11 analog, 476
Difference equations, 14 band-pass, 472
stochastic, 247 Bessel, 477
Differential equation, 30 Butterworth, 477
Differentiation digital, 478
vectors and matrices, 532 high-pass 482
Digital/Analog (D/A) converter Kalman, 284, 289, 472
(DAC), 10, 457, 488 Kalman, 284, 289, 472
Digital computer, 9, 456 special, 483
Digital control systems, 4 Tschebyscheff, 477
applications, 505 Filtering
design, 11 aliasing effect, 475
with computers, 9, 456 analog, 476
Discrete-time digital, 478
models, 54 disturbance signals, 472
signals, 14 noise, 13
systems, 14 noise sources, 473

outliers, bursts, 486 Laplace transformation, 20


Fixed point representation, 460 Lead coefficient, 30
Forgetting factor, 381, 436 Least squares method
extended, 374
Gain factor non-recursive, 367
PID-Controller, 80 recursive, 286, 368
process, 31 Limit cycles, 464
Linearisation
Heat exchanger actuators, 496
identification and digital Ljapunov's method, 466
control, 505 Loss function, 368
High-pass filter, 482 Low-pass filter
Holding element (hold), 10, 23 analog, 476
Hurwitz criterion, 37 digital, 479

Identifiability conditions Main controller, 294, 326, 335


closed loop, 390, 395 Main transfer elements, 318
Identification, 65, 364 Management, 2
applications, case studies, 505 Manipulating variable (or process
closed loop, 387 input) , 72
direct, 387, 394, 398 Manipulation effort, 231
indirect, 387, 339 Manipulation range
methods, 364, 396, 399 see position range
on-line, 364 Markov signal process, 244, 284
real-time, 364 Matrix
Identification and digital control controllability, 50
case studies, 505 observability, 51
Impulse, 1 8 Riccati equation, 141
Impulse response, 32, 49 system's, 40
Initial value, 34, 135 transition, 42
Innovation state model, 447 Matrix polynomial controllers,
Instrumental variables method, 375 3521 448
Integration Maximum-likeliho od method, 376
cumulative, see wind-up Memory (estimation algorithms)
rectangular, 58 forgetting, 484
trapezoidal, 58 infinite, 483
Integration coefficient, 80 limited, 484
Micro(process)co mputer, 3, 9,
Kalman filter, 284, 289, 472 438, 456, 505
Kronecker delta function, 274 Microprocessors, 3, 9
Minimal prototype response, 121
Minimal realization, 334 Newton-Raphson algorithm, 79


Minimum-variance controller, Noise
253, 407 filtering, 13, 472
generalized, 253, 261 model, 365
matrix polynomial, 354 quantization, 462
state controller, 358 sources, 473
with deadtime, 261 white, 243
without deadtime, 253 Non-interaction
without offset, 265 multivariable control systems,
Minimum-variance feedforward con- 347
trol, 313 Nonlinearities
Modal state control, 152 actuators, 494
Model order through digitalization, 457
reduction, 64 Numerical range
Modelling, 65 process computer, 460
Models
mathematical, 51 Observability, 50
process, 51 Observer, 134, 159
signal, 242 external disturbances, 165
Monitoring, 2 identity, 175
Moving average process (MA), 248 initial values, 164
Multilevel control, 1 reduced order, 175
Multivariable control Offset, 73, 415, 461
decoupling, 346, 357 Optimality principle, 135
deadbeat, 353 Optimization
matrix polynomial control, dynamic, 135
346, 357 on-line, 2
minimum variance, 354 parameter, 343
parameter-adaptive, 447 steady-state, 2
parameter-optimized, 335 Orthogonalities
stability regions, 338 state estimation, 292
state control, 356 Outliers, 486
twovariable control, 336 Overshoot, 219
Multivariable processes, 316
canonical model structures, 316 Parameter-adaptive controllers,
matrix polynomial representation 401, 414
329 application examples, 438
state representation, 329 choice of a priori factors, 438
structures, 316 comparison, 426
transfer function representation, deterministic, 425
317 feedforward, 444
multivariable, 447 state controller, 151, 356


stochastic, 421 Pole excess, 119
Parameter estimation, 364 Poles, 33
correlation and least squares, closed loop, 207
505 Position algorithm, 74
fast algorithms, 385 Position control, 490
instrumental variables method, Position range, 495
375 Power density spectra, 474
maximum likelihood method, 376 Prediction - one step ahead, 286
nonrecursive, 368 Predictor controller, 121, 186, 213
numerical modifications, 385 Process
objective, 366 basic types, 52
recursive, 364, 381 batch, 53
square root filtering, 385 classification, 52
stochastic approximation, 380 continuous, 52
stochastic signals, 373 dead times, 37
timevarying process parameters, piece good, 53
381 signal (see signal processes)
unified recursive algorithm, 381 Process computer, 3, 10, 505
Parameter identifiability, 390 Process identification, 65, 364
Parameter optimization, 77, 350 Process modelling, 65
Parameter optimization method, 79 Process models, 66
Parameter-optimized controller, gaining with process computer,
249, 335, 409 66
Parameter sensitivity, 201 identification, 365
P-canonical structure, 317 non-parametric, 65
P-controller, 83 parametric, 66
PD-controller (proportional plus simplification, 60
derivative) 83, 252 Processor
Persistently exciting signal, 369 central, 10
Perturbation, 388, 397, 400, 427 micro, 3
pH-process Program packages
adaptive control, 441 computer-aided design of
PI-controller (proportional plus control algorithms, 500
integral), 82 control algorithms, 500
PID-controller (proportional plus development of adaptive con-
derivative plus integral) trollers, 521
74, 80, 249, 494 on-line identification, 500
Pole assignment (placement), 210 Pulse transfer function, 28
parameter-optimized controller
409
Quantization, 10 function, 201, 235


amplitude, 457 inexact process model, 228
coefficients, 461, 467 Separation principle
dead band, 469 adaptive control, 404
effects, 457, 462 state controller, 278
error, 458 Separation theorem
limit cycles, 461, 467 state controller and observer,
noise, 462 278
offsets, 461, 467 Settling time, 219
unit, 457 actuator, 495
variables, 461 finite, 122
step-response, 95
Realizability, 32 Shannon's theorem, 13, 481
Reference model, 362 Signal models
Reference variable, 67 stochastic signals, 242
Regulator, 67 Signal process, 242
Ripple, 121 autoregressive, 247
Rotary dryer Markov, 244
identification and digital moving average, 248
control, 510 nonstationary, 242, 248
Rounding, 465 parametric model, 244
stationary, 242
Sample (sampling) time, 10 vectorial, 245
choice Signals
deadbeat controller, 131 continuous, 14
parameter-optimized controllers, discontinuous, 9
103, 105 discrete, 9
PID-controllers, 103, 105 discrete-time, 9, 14
state controller, 179 discrete-time, 9, 14
Sampled-data controller, 10 non-stationary, 242
Sampled-data control systems, orthogonal, 243
9, 11, 14 stationary, 242
Sampler, 10, 14 stochastic, 241, 242
Sampling theorem, 20 uncorrelated, 243
Selftuning control algorithms, vectorial, 243
406, 423 Simplification
Sensitivity, 199 process models, 60
changes in deadtime, 192 Simulations
changes in process parameters, cascade control systems, 294
206, 228 comparison of different con-
control system, 200 trollers, 217
control systems for deadtime with observers, 163


processes, 185 with state estimation, 277
digital control of a rotary State estimation, 284, 288, 472
dryer, 510 State observer, 134, 159
digital control of a steam Stationary signal, 242
generator, 519 Steam generator
feedforward control systems, 307 block diagram, 317
minimum variance control systems, identification and digital
269 control, 519
parameter-adaptive control sys- models, 529
tems, 426 parameter adaptive control, 521
parameter-optimized control Stepping motor, 498
systems, 87, 249 Stochastic
quantization effects, 465 difference equation, 247
Square root filtering, 385 signals, 240
Stability, 33 Stochastic approximation method,
condition, 37 380
criterion, 37 Structure
Stability regions control system, 12
twovariable control, 338 hierarchical, 1
Stabilization, 136 Supervisory control, 3
State controller, 134, 214 Symbols
deadbeat, 157 list, 552
dead time, 191
decoupling, 357
finite settling time, 157 Test processes, 88, 528
given characteristic equation, 150 Time constants
matrix Riccati, 357 neglection of small, 60
minimum variance, 358 Trace operator, 247
modal, 152 Transfer function
multivariable, 134, 356 continuous signals, 30
optimal for external disturbances, discrete-time, 29
145, 279 Transformation
optimal for initial values, 135, bilinear, 37
142 Laplace, 20
optimal for white noise linear, 43
pole placement, 151, 356 z- , 24
singlevariable, 135 Transition matrix, 42
stochastic disturbances, 274 Tschebyscheff filter, 477
white noise, 274 Tuning rules, 13, 70, 77, 109
twovariable processes, 343
Variance, 243 Wind-up, 498


V-canonical structure, 317 Word length (bit), 457
Vector
control, 40 Zeros, 33
output, 40 closed loop, 207
Vector difference equation, 39, 43 z-transfer function
Vector differential equation, 42 definition, 28
Vector stochastic signal, 243 determination, 54
Velocity algorithm, 75 process model, 54
properties, 34
Weighting factor realizability, 32
manipulating variable, 105 representation, 28
Weighting function simplification, 60
see impulse function z-transform
Weighting matrix definition, 24
manipulated variables, 179 inverse, 26
White noise, 243 table, 534
Wiener filter, 472 theorems, 26
