Digital Control and State Variable Methods: Conventional and Intelligent Control Systems
THIRD EDITION
M. Gopal
ISBN: 9780070668805, 0070668809
Year: 2009
About the Author
M GOPAL
Professor
Department of Electrical Engineering
Indian Institute of Technology
Delhi
ISBN-13: 978-0-07-066880-5
ISBN-10: 0-07-066880-9
Information contained in this work has been obtained by Tata McGraw-Hill, from sources believed to be reliable.
However, neither Tata McGraw-Hill nor its authors guarantee the accuracy or completeness of any information
published herein, and neither Tata McGraw-Hill nor its authors shall be responsible for any errors, omissions, or
damages arising out of use of this information. This work is published with the understanding that Tata McGraw-Hill
and its authors are supplying information but are not attempting to render professional services. If such services are
required, the assistance of an appropriate professional should be sought.
Typeset at Script Makers, 18, DDA Market, A-1B Block, Paschim Vihar,
New Delhi 110063 and printed at Unique Color Carton, 15/72, Punjabi Bagh, New Delhi-110 026.
Dedicated
with all my love to my
son Ashwani
and
daughter Anshu
Contents
Preface XI
Part I Digital Control: Principles and Design in Transform-Domain 1
1. Introduction 3
1.1 Control System Terminology 3
1.2 Computer-Based Control: History and Trends 9
1.3 Control Theory: History and Trends 12
1.4 An Overview of the Classical Approach to Analog Controller Design 14
1.5 Scope and Organization of the Book 19
2. Signal Processing in Digital Control 21
2.1 Why Use Digital Control 21
2.2 Configuration of the Basic Digital Control Scheme 23
2.3 Principles of Signal Conversion 24
2.4 Basic Discrete-Time Signals 30
2.5 Time-Domain Models for Discrete-Time Systems 32
2.6 The z-Transform 41
2.7 Transfer Function Models 52
2.8 Frequency Response 58
2.9 Stability on the z-Plane and the Jury Stability Criterion 61
2.10 Sample-and-Hold Systems 69
2.11 Sampled Spectra and Aliasing 72
2.12 Reconstruction of Analog Signals 77
2.13 Practical Aspects of the Choice of Sampling Rate 80
2.14 Principles of Discretization 82
Review Examples 100
Problems 107
3. Models of Digital Control Devices and Systems 113
3.1 Introduction 113
3.2 z-Domain Description of Sampled Continuous-Time Plants 115
3.3 z-Domain Description of Systems with Dead-Time 122
3.4 Implementation of Digital Controllers 126
3.5 Tunable PID Controllers 133
3.6 Digital Temperature Control System 146
3.7 Digital Position Control System 151
3.8 Stepping Motors and their Control 157
3.9 Programmable Logic Controllers 163
Review Examples 180
Problems 183
4. Design of Digital Control Algorithms 193
4.1 Introduction 193
4.2 z-Plane Specifications of Control System Design 196
4.3 Digital Compensator Design using Frequency Response Plots 212
4.4 Digital Compensator Design using Root Locus Plots 224
4.5 z-Plane Synthesis 237
Review Examples 242
Problems 246
Index 775
Preface
The dramatic development of computer technology has radically changed the boundaries of practical control
system design options. It is now possible to employ very complicated, high-order digital controllers, and to
carry out the extensive calculations required for their design. These advances in implementation and design
capability can be achieved at low cost due to the widespread availability of inexpensive, powerful digital
computers and related devices. There is every indication that a high rate of growth in the capability and
application of digital computers will continue far into the future.
Fortunately, control theory has also developed substantially over the past 45 years. The classical design
methods have been greatly enhanced by the availability of low-cost computers for system analysis and
simulation. The graphical tools of classical design like root locus plots, Nyquist plots, Bode plots, and
Nichols chart can now be more easily used with computer graphics. Coupled with hardware developments
such as microprocessors and electro-optic measurement schemes, the classical control theory today provides
useful design tools to practising control engineers.
The modern control theory (which can’t be termed modern any longer) refers to the state-space based
methods. Modern control methods initially enjoyed a great deal of success in academic circles, but did not
perform very well in many areas of application. Modern control provided a lot of insight into system struc-
ture and properties; nevertheless, it masked other important feedback properties that could be studied, and
manipulated, using classical control. During the past three decades, a series of methods, which are a combi-
nation of modern state-space methods and classical frequency-domain methods, have emerged. These tech-
niques are commonly known as robust control.
The rapid development in digital computers and microelectronics has brought about drastic changes in
the approach to analysis, design, and implementation of control systems. The flourishing of digital control
has just begun for most industries and there is much to be gained by exploiting the full potential of new
technology.
Implementation of nonlinear robust control schemes—model reference adaptive control, self-tuning con-
trol, variable structure sliding mode control—has now become a relatively simple task. In the process of
understanding and emulating salient features of biological control functions, a new field called ‘intelligent
control’, has emerged.
Fuzzy logic, artificial neural networks, and genetic algorithms have grown into three distinct disciplines
with the aim of designing “intelligent” systems for scientific and engineering applications. The theory of
fuzzy logic provides a mathematical morphology to emulate certain perceptual and linguistic attributes,
associated with human cognition. It aims at modelling the inexact modes of reasoning and thought processes,
that play an essential role in the remarkable human ability to make rational decisions in an environment of
uncertainty and imprecision. Artificial neural networks attempt to emulate the architecture and information
representation schemes of the human brain. Genetic algorithms provide an adaptive, robust, parallel and
randomized searching technique where a population of solutions evolves over a sequence of generations, to
a globally optimal solution.
While the development of individual tools—fuzzy logic, artificial neural networks, genetic algorithms—
was in progress, a group of researchers felt the need to integrate them, in order to combine the merits of
different biologically inspired techniques in one system. The result is the development of several hybrid
paradigms, like neuro-fuzzy, neuro-genetic, fuzzy-genetic, and neuro-fuzzy-genetic. These hybrid paradigms
are suitable for solving complex real-world problems for which only one tool may not be adequate.
We are passing through a phase of rosy outlook for control technology: there has been a tendency to
portray a neural-network and/or fuzzy-logic based controller as magical; a sort of black box that does
amazing things. Enthusiastic researchers have dreamed big dreams and made sweeping predictions on the
potential of intelligent control systems with neural processor/fuzzy logic chips in the control loop. The
growth of the new field during the past few years has been so explosive that the traditional model-based
control theory and the traditional op amp/computer-based implementations of controllers are under threat.
New design methods are required to prove themselves in actual practice, before they can displace well-
accepted techniques. Intelligent control technology is slowly gaining wider acceptance among academics
and industry. Even if there has been some over-enthusiastic description of the field, the scientific commu-
nity, and industry, are converging to the fact that there is something fundamentally significant about this
technology. Also, preparations are underway, to accord a warm welcome to integrated control technology–
integration of intelligent control theory with the traditional model-based theory, and integration of the VLSI
microprocessors with neuro-fuzzy processors to implement the controller.
This book includes a tutorial introduction to the knowledge-based tools for control system design. Rigor-
ous characterization of theoretical properties of intelligent control methodology will not be our aim in our
tutorial presentation; rather we focus on the development of systematic engineering procedures, which will
guide the design of the controller for a specific problem.
The arrangement of topics in the text is a little different from the conventional one. Typically, a book on
digital control systems starts with transform-domain design and then carries over to state space. These books
give a detailed account of state variable methods for discrete-time systems. Since the state variable methods
for discrete-time systems run quite parallel to those for continuous-time systems, full-blown repetition is not
appreciated by the readers conversant with state variable methods for continuous-time systems. And for
readers with no background of this type, a natural way of introducing state variable methods, is to give the
treatment for continuous-time systems, followed by a brief parallel presentation for discrete-time systems.
To meet this objective, we have divided the treatment of transform-domain design and state-space design
in two parts of the book. Part I deals with digital control principles and design in transform domain, assum-
ing that the reader has had an introductory course in control engineering concentrating on the basic princi-
ples of feedback control and covering various classical analog methods of control system design. The mate-
rial presented in this part of the book is closely related to the material already familiar, but towards the end
a direction to wider horizons is indicated. Part II of the book deals with state variable methods in automatic
control. State variable analysis and design methods are usually not covered in an introductory course. It is
assumed that the reader is not exposed to the so-called modern control theory. Our approach is to first
discuss state variable methods for continuous-time systems, and then give a compact presentation of the
methods for discrete-time systems using the analogy with the continuous-time systems.
For the purpose of organizing different courses for students with different backgrounds, the sequencing
of chapters, and dependence of each chapter on previous chapters, has been properly designed in the text.
The typical engineering curriculum at the second-degree level includes core courses on ‘digital control
systems’, and ‘linear system theory’. Parts I and II of the book have been designed to fully meet the require-
ments of the two courses. In Part III of the book, a reasonably detailed account of nonlinear control schemes,
both the conventional and the intelligent, is given. The requirements of elective courses on ‘nonlinear
control systems’, and ‘intelligent control’, will be partially or fully (depending on the depth of coverage of
the courses) served by Part III of the book.
The typical engineering curriculum at the first-degree level includes a core course on feedback control
systems, with one or two elective courses on the subject. The book meets the requirements of elective courses
at the first-degree level.
WEBSITE URL
The Online Learning Center for the book:
http://www.mhhe.com/gopal/dc3e
· provides the students with the source codes of the MATLAB problems given in Appendix A and
Appendix B of the book; and
· provides the faculty with the Solution Manual for the exercise problems given at the end of all chapters
in the book. This part of the site is password protected, and will be made available by the Publisher to
the faculty on request.
M Gopal
[email protected]
Part I
DIGITAL CONTROL: PRINCIPLES AND DESIGN IN
TRANSFORM-DOMAIN
Automatic control systems play a vital role in the (technological) progress of human civilization. These
control systems range from the very simple to the fairly complex in nature. Automatic washing machines,
refrigerators, and ovens are examples of some of the simpler systems used in the home. Aircraft auto-
matic pilots, robots used in manufacturing, and electric power generation and distribution systems repre-
sent complex control systems. Even such problems as inventory control, and socio-economic systems
control, may be approached from the theory of feedback control.
Our world is one of continuous-time variables. Quantities like flow, temperature, voltage,
position, and velocity are not discrete-time variables but continuous-time ones. If we look back at the
development of automatic control, we find that mass-produced analog (electronic) controllers have been
available since about the 1940s. A first-level introduction to control engineering, provided in the
companion book ‘Control Systems: Principles and Design’, deals with basics of control, and covers
sufficient material to enable us to design analog (op amp based) controllers for many simple control loops
found in industry.
From the 1980s onward, we find microprocessor digital technology beginning to take over. Today,
most complex industrial processes are under computer control. A microprocessor determines the input to
manipulate the physical system, or plant; and this requires facilities to apply this input to the physical
world. In addition, the control strategy typically relies on measured values of the plant behaviour; and this
requires a mechanism to make these measured values available to the computing resources. The plant can
be viewed as changing continuously with time. The controller, however, has a discrete clock that governs
its behaviour and so its values change only at discrete points in time. To obtain deterministic behaviour
and ensure data integrity, the sensor must include a mechanism to sample continuous data at discrete
points in time, while the actuators need to produce a continuous value between the time points from
discrete-time data.
Computer interfacing for data acquisition, consists of analog-to-digital (A/D) conversion of the input
(to controller) analog signals. Prior to the conversion, the analog signal has to be conditioned to meet the
input requirements of the A/D converter. Signal conditioning consists of amplification (for sensors
generating very low power signals), filtering (to limit the amount of noise on the signal), and isolation
(to protect the sensors from interacting with one another and/or to protect the signals from possible
damaging inputs). Conversion of a digital signal to an analog signal (D/A) at the output (of controller), is
to be carried out to send this signal to an actuator which requires an analog signal. The signal has to be
amplified by a transistor or solid state relay or power amplifier. Most manufacturers of electronic instru-
mentation devices are producing signal conditioners as modules.
The immersion of the computing power into the physical world has changed the scene of control
system design. A comprehensive theory of digital “sampled” control has been developed. This theory
requires a sophisticated use of new concepts such as z-transform. It is, however, quite straightforward to
translate analog design concepts into digital equivalents. After taking a guided tour through the analog
design concepts and op amp technology, the reader will find in Part I of this book sufficient material to
enable him/her to design digital controllers for many simple control loops, and to interface the controllers
to other sub-systems in the loop, thereby building complete feedback control systems.
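For orientation, the z-transform mentioned above (developed in Section 2.6) maps a sampled sequence x(k) into a function of the complex variable z; in its standard one-sided form,
\[
X(z) \;=\; \mathcal{Z}\{x(k)\} \;=\; \sum_{k=0}^{\infty} x(k)\,z^{-k}
\]
and it plays, for discrete-time signals, the role the Laplace transform plays for continuous-time signals.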
The broad space of digital control applications can be roughly divided into two categories: industrial
control and embedded control. Industrial control applications are those in which control is used as part of
the process of creating or producing an end product. The control system is not a part of the actual end
product itself. Examples include the manufacture of pharmaceuticals and the refining of oil. In the case of
industrial control, the control system must be robust and reliable, since the processes typically run
continuously for days, weeks or years.
Embedded control applications are those in which the control system is a component of the end prod-
uct itself. For example, Electronic Control Units (ECUs) are found in a wide variety of products including
automobiles, airplanes, and home appliances. Most of these ECUs implement different feedback con-
trol tasks. For instance, engine control, traction control, anti-lock braking, active stability control, cruise
control, and climate control. While embedded control systems must also be reliable, cost is a more sig-
nificant factor, since the components of the control system contribute to the overall cost of manufacture
of the product. In this case, much more time and effort is usually spent in the design phase of the control
system to ensure reliable performance without requiring any unnecessary excess of processing power,
memory, sensors, actuators etc., in the digital control system. Our focus in this book will be on industrial
control applications.
Perhaps more than any other factor, the development of microprocessors has been responsible for the
explosive growth of the computer industry. While early microprocessors required many additional
components in order to perform any useful task, the increasing use of large-scale integration (LSI) or
very large-scale integration (VLSI) semiconductor fabrication techniques has led to the production of
microcomputers, where all of the required circuitry is embedded on one or a small number of integrated
circuits. A further extension of the integration is the single chip microcontroller, which adds analog and
binary I/O, timers, and counters so as to be able to carry out real-time control functions with almost no
additional hardware. Examples of such microcontrollers are the Intel 8051 and 8096, and the Motorola
MC68HC11. These chips were developed largely in response to the automotive industry's desire for
computer-controlled ignition, emission control and anti-skid systems. They are now widely used in proc-
ess industries. This digital control practice, along with the theory of sampled-data systems will be dealt
with in Chapters 2–4 of the book.
1 INTRODUCTION
1.1 CONTROL SYSTEM TERMINOLOGY
A Control System is an interconnection of components to provide a desired function. The portion of the
system to be controlled is given various names: process, plant, and controlled system being perhaps the most
common. The portion of the system that does the controlling is the controller. Often, a control system
designer has little or no design freedom with the plant; it is usually fixed. The designer’s task is, therefore, to
develop a controller that will control the given plant acceptably. When measurements of the plant response
are available to the controller (which in turn generates signals affecting the plant), the configuration is a
feedback control system.
A digital control system uses digital hardware, usually in the form of a programmed digital computer, as
the heart of the controller. In contrast, the controller in an analog control system is composed of analog
hardware; an electronic controller made of resistors, capacitors, and operational amplifiers is a typical
example. Digital controllers normally have analog devices at their periphery to interface with the plant; it is
the internal working of the controller that distinguishes digital from analog control.
The signals used in the description of control systems are classified as continuous-time and discrete-time.
Continuous-time signals are defined for all time, whereas discrete-time signals are defined only at discrete
instants of time, usually evenly spaced steps. The signals for which both time and amplitude are discrete, are
called digital signals. Because of the complexity of dealing with quantized (discrete-amplitude) signals,
digital control system design proceeds as if computer-generated signals were not of discrete amplitude. If
necessary, further analysis is then done, to determine if a proposed level of quantization is acceptable.
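To make the signal classification concrete, the short sketch below (our illustration; the sine-wave signal, the 4-bit quantizer, and the variable names are assumptions, not from the text) samples a continuous-time signal every T seconds and then rounds each sample to a quantization level, turning it into a digital signal:

import math

T = 0.1          # sampling period (s)
N = 8            # number of samples to take
FS = 1.0         # full-scale amplitude of the quantizer
BITS = 4         # quantizer word length
DELTA = 2 * FS / (2 ** BITS)   # quantization step size

def x(t):
    """Continuous-time signal: a 1 Hz sine wave."""
    return math.sin(2 * math.pi * 1.0 * t)

for k in range(N):
    sample = x(k * T)                           # discrete-time (sampled) value
    quantized = round(sample / DELTA) * DELTA   # discrete-amplitude (digital) value
    print(f"k={k}  x(kT)={sample:+.4f}  quantized={quantized:+.4f}")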
Systems and system components are termed continuous-time or discrete-time according to the type of
signals they involve. They are classified as being linear if signal components in them can be superim-
posed—any linear combination of signal components, applied to a system, produces the same linear combi-
nation of corresponding output components; otherwise a system is nonlinear. A system or component is
time-invariant if its properties do not change with time—any time shift of the inputs produces an equal time
shift of every corresponding signal. If a system is not time-invariant, then it is time-varying.
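In symbols, with T{.} denoting the system operator (a restatement of the definitions above):
\[
T\{\alpha u_1(t) + \beta u_2(t)\} = \alpha\,T\{u_1(t)\} + \beta\,T\{u_2(t)\} \;\;\text{(linearity)}, \qquad
y(t) = T\{u(t)\} \;\Rightarrow\; y(t-\tau) = T\{u(t-\tau)\} \;\;\text{(time-invariance)}
\]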
A typical topology of a computer-controlled system is sketched schematically in Fig. 1.1. In most cases,
the measuring transducer (sensor) and the actuator (final control element) are analog devices, requiring,
respectively, analog-to-digital (A/D) and digital-to-analog (D/A) conversion at the computer input and
output. There are, of course, exceptions; sensors which combine the functions of the transducer and the A/D
converter, and actuators which combine the functions of the D/A converter and the final control element are
available. In most cases, however, our sensors will provide an analog voltage output, and our final control
elements will accept an analog voltage input.
Fig. 1.1 Typical topology of a computer-controlled system (A/D converter, computer, D/A converter, final control element, plant, sensor, real-time clock)
In the control scheme of Fig. 1.1, the A/D converter performs the sampling of the sensor signal (analog
feedback signal ) and produces its binary representation. The digital computer (control algorithm) generates
a digital control signal using the information on desired and actual plant behaviour. The digital control signal
is then converted to analog control signal via the D/A converter. A real-time clock synchronizes the actions
of the A/D and D/A converters, and the shift registers. The analog control signal is applied to the plant
actuator to control the plant’s behaviour.
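A minimal sketch of such a sampled control loop is given below (illustrative only; read_adc, write_dac and compute_control are hypothetical placeholders for the A/D read, the D/A write and the control algorithm, and the fixed-period loop stands in for the real-time clock):

import time

T = 0.05  # sampling period in seconds, set by the real-time clock

def read_adc():
    """Hypothetical A/D read: returns the sampled sensor (feedback) value."""
    return 0.0  # replace with actual hardware access

def write_dac(u):
    """Hypothetical D/A write: sends the control value to the actuator."""
    pass        # replace with actual hardware access

def compute_control(setpoint, measurement):
    """Control algorithm; a simple proportional law is used as an illustration."""
    kp = 2.0
    return kp * (setpoint - measurement)

setpoint = 1.0
next_tick = time.monotonic()
for _ in range(10):             # run 10 sampling instants
    y = read_adc()              # sample the analog feedback signal
    u = compute_control(setpoint, y)
    write_dac(u)                # apply control through the D/A converter
    next_tick += T
    time.sleep(max(0.0, next_tick - time.monotonic()))  # wait for the next clock tick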
The overall system in Fig. 1.1 is hybrid in nature; the signals are in the sampled form (discrete-time
signals) in the computer, and in a continuous form in the plant. Such systems have traditionally been called
sampled-data systems; we will use this term as a synonym for computer control systems/digital control
systems.
The word ‘servomechanism’ (or servosystem) is used for a command-following system wherein the con-
trolled output of the system is required to follow a given command. When the desired value of the controlled
outputs is more or less fixed, and the main problem is to reject disturbance effects, the control system is
sometimes called a regulator. The command input for a regulator becomes a constant and is called set-point,
which corresponds to the desired value of the controlled output. The set-point may however be changed in
time, from one constant value to another. In a tracking system, the controlled output is required to follow, or
track, a time-varying command input.
To make these definitions more concrete, let us consider some familiar examples of control systems.
One of the earliest applications of radar tracking was for anti-aircraft fire control, first with guns and later
with missiles. Today, many civilian applications exist as well, such as satellite-tracking radars, navigation-
aiding radars, etc.
The radar scene includes the radar itself, a target, and the transmitted waveform that travels to the target
and back. Information about the target’s spatial position is first obtained by measuring the changes in the
back-scattered waveform relative to the transmitted waveform. The time shift provides information about
the target’s range, the frequency shift provides information about the target’s radial velocity, and the
received voltage magnitude and phase provide information about the target's angle¹ [1].
In a typical radar application, it is necessary to
point the radar antenna toward the target and follow its
movements. The radar sensor detects the error be-
tween the antenna axis and the target, and directs the
antenna to follow the target. The servomechanism for
steering the antenna in response to commands from the radar sensor, is considered here. The antenna is
designed for two independent angular motions, one about the vertical axis in which the azimuth angle is
varied, and the other about the horizontal axis in which the elevation angle is varied (Fig. 1.2).
Fig. 1.2 Antenna configuration
The servomechanism for steering the antenna is described by two controlled variables—azimuth angle β and
elevation angle α. The desired values or commands are the azimuth angle βr and the elevation angle αr of the
target. The feedback control problem involves error self-nulling, under conditions of disturbances beyond
our control (such as wind power).
The control system for steering the antenna can be treated as two independent systems—the azimuth-angle
servomechanism, and the elevation-angle servomechanism. This is because the interaction effects are
usually small. The operational diagram of the azimuth-angle servomechanism is shown in Fig. 1.3.
The steering command from the radar sensor, which corresponds to target azimuth angle, is compared with
the azimuth angle of the antenna axis. The occurrence of the azimuth-angle error causes an error signal to pass
through the amplifier, which increases the angular velocity of the servomotor in a direction towards an error
reduction. In the scheme of Fig. 1.3, the measurement and processing of signals (calculation of control signal)
is digital in nature. The shaft-angle encoder combines the functions of transducer and A/D converter.
Figure 1.4 gives the functional block diagrams of the control system. A simple model of the load
(antenna) on the motor is shown in Fig. 1.4b. The moment of inertia J and the viscous friction coefficient B
are the parameters of the assumed model. Nominal load is included in the plant model for the control design.
¹ The bracketed numbers coincide with the list of references given at the end of the book.
The main disturbance inputs are the deviation of the load from the nominal estimated value as a result of
uncertainties in our estimate, effect of wind power, etc.
Fig. 1.4 Functional block diagrams of azimuthal servomechanism
In the tracking system of Fig. 1.4a, the occurrence of error causes the motor to rotate in a direction
favouring the dissolution of error. The processing of the error signal (calculation of the control signal) is based
on the proportional control logic. Note that the components of our system cannot respond instantaneously,
since any real-world system cannot go from one energy level to another in zero time. Thus, in any real-world
system, there is some kind of dynamic lagging behaviour between input and output. In the servosystem of
Fig. 1.4a, the control action, on occurrence of the deviation of the controlled output from the desired value
(the occurrence of error), will be delayed by the cumulative dynamic lags of the shaft-angle encoder, digital
computer and digital-to-analog converter, power amplifier, and the servomotor with load. Eventually, how-
ever, the trend of the controlled variable deviation from the desired value, will be reversed by the action of
the amplifier output on the rotation of the motor, returning the controlled variable towards the desired value.
Now, if a strong correction (high amplifier gain) is applied (which is desirable from the point of view of
control system performance, e.g., strong correction improves the speed of response), the controlled variable
overshoots the desired value (the ‘run-out’ of the motor towards an error with the opposite rotation), causing
a reversal in the algebraic sign of the system error. Unfortunately, because of system dynamic lags, a reversal
of correction does not occur immediately, and the amplifier output (acting on ‘old’ information) is now
actually driving the controlled variable in the direction it was already heading, instead of opposing its excur-
sions, thus leading to a larger deviation. Eventually, the reversed error does cause a reversed correction, but
the controlled variable overshoots the desired value in the opposite direction and the correction is again in
the wrong direction. The controlled variable is thus driven, alternatively, in opposite directions before it
settles on to an equilibrium condition. This oscillatory state is unacceptable as the behaviour of antenna-
steering servomechanism. The considerable amplifier gain, which is necessary if high accuracies are to be
obtained, aggravates the described unfavourable phenomenon.
The occurrence of these oscillatory effects can be controlled by the application of special compensation
feedback. When a signal proportional to the motor's angular velocity (called the rate signal) is subtracted from
the error signal (Fig. 1.4c), the braking action starts earlier, before the error reaches a zero value.
The ‘loop within a loop’ (velocity feedback system embedded within a position feedback system)
configuration utilized in this application is a classical scheme, called the minor-loop feedback scheme.
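The benefit of the rate signal can be seen in a small simulation. The sketch below is our own illustration (the values of J, B and the gains are arbitrary, not taken from the text); it integrates the motor-load model J d²θ/dt² + B dθ/dt = u with proportional control alone, and then with velocity (rate) feedback added, and reports the peak overshoot to a step command in each case:

J, B = 0.01, 0.02        # assumed moment of inertia and viscous friction
KP, KV = 2.0, 0.15       # proportional gain and rate-feedback gain (arbitrary)
DT, T_END = 0.001, 5.0   # integration step and simulation length
REF = 1.0                # step command (rad)

def simulate(kv):
    """Euler integration of J*acc + B*vel = u with u = KP*(REF - pos) - kv*vel."""
    pos, vel, peak, t = 0.0, 0.0, 0.0, 0.0
    while t < T_END:
        u = KP * (REF - pos) - kv * vel
        acc = (u - B * vel) / J
        vel += acc * DT
        pos += vel * DT
        peak = max(peak, pos)
        t += DT
    return peak

print("peak with P control only   :", round(simulate(0.0), 3))
print("peak with P + rate feedback:", round(simulate(KV), 3))

With rate feedback the effective damping term becomes B + KV, so the overshoot drops sharply for the same proportional gain, which is exactly the behaviour described in the text.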
Many industrial applications require variable speed drives. For example, variable speed drives are used for
pumping duty to vary the flow rate or the pumping pressure, rolling mills, harbour cranes, rail traction, etc.
[2–4].
The variable speed dc drive is the most versatile drive available. Silicon Controlled Rectifiers (SCRs)
are almost universally used to control the speed of dc motors, because of the considerable benefits that accrue
from the compact static controllers supplied directly from the ac mains.
Basically, all the dc systems involving SCR controllers are similar, but with different configurations
of the devices, different characteristics may be obtained from the controller. Figure 1.5 shows a dc motor
driven by a full-wave rectified supply. Armature current of the dc motor is controlled by an SCR which is, in
turn, controlled by the pulses applied by the SCR trigger control circuit. The SCR controller thus combines
the functions of a D/A converter and a final control element.
Fig. 1.5 SCR-based dc motor speed control system
Firing angle of the SCR controls the average armature current which, in turn, controls the speed of the dc
motor. The average armature current (speed) increases as the trigger circuit reduces the delay angle of firing of
the SCR, and the average armature current (speed) reduces as the delay angle of firing of the SCR is increased.
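As a rough quantitative picture, for an idealized single-phase, fully-controlled converter operating in continuous conduction (an assumed configuration used only for illustration, not a statement from the text), the average armature voltage varies with the firing (delay) angle α as
\[
V_a \;=\; \frac{2V_m}{\pi}\,\cos\alpha
\]
where Vm is the peak supply voltage; reducing α therefore raises the average armature voltage, and hence the armature current and speed, in line with the behaviour described above.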
In the regulator system of Fig. 1.5, the reference voltage which corresponds to the desired speed of the dc
motor, is compared with the output voltage of tachogenerator, corresponding to the actual speed of the
motor. The occurrence of the error in speed, causes an error signal to pass through the trigger circuit, which
controls the firing angle of the SCR in a direction towards an error reduction. When the processing of the
error signal (calculation of the control signal) is based on the proportional control logic, a steady-state error
between the actual speed and the desired speed exists. The occurrence of steady-state error can be elimi-
nated by generating the control signal with two components: one component proportional to the error signal,
and the other proportional to the integral of the error signal.
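A minimal sketch of such a two-component (proportional plus integral) control computation, executed once per sampling instant, is shown below; the gains and sampling period are illustrative choices, not values from the text:

class PIController:
    """Discrete PI law: u(k) = KP*e(k) + KI*T*(sum of errors up to instant k)."""
    def __init__(self, kp, ki, T):
        self.kp, self.ki, self.T = kp, ki, T
        self.integral = 0.0
    def update(self, error):
        self.integral += error * self.T     # rectangular-rule integration of the error
        return self.kp * error + self.ki * self.integral

# usage: call update() once per sampling instant with the current speed error
pi = PIController(kp=1.2, ki=4.0, T=0.02)
for e in [0.5, 0.4, 0.3]:
    print(round(pi.update(e), 4))

The integral term keeps growing as long as any error persists, which is what drives the steady-state error to zero.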
This example describes the hardware features of the design of a PC-based liquid-level control system. The
plant of our control system is a cylindrical tank. Liquid is pumped into the tank from the sump (Fig. 1.6). The
inflow to the tank can be controlled by adjusting valve V1. The outflow from the tank goes back into the sump.
Valve V1 of our plant is a rotary valve; a stepping motor has been used to control the valve. The stepping
motor controller card, interfaced to the PC, converts the digital control signals into a series of pulses which
are fed to the stepping motor using a driver circuit. Three signals are generated from the digital control
signal at each sampling instant, namely, number of steps, speed of rotation, and direction of rotation. The
stepping-motor driver circuit converts this information into a single pulse train, which is fed to the stepping
motor. The valve characteristics between the number of steps of the stepping motor and the outflow from the
valve, are nonlinear.
The probe used for measurement of liquid level, consists of two concentric cylinders connected to a
bridge circuit, to provide an analog voltage. The liquid partially occupies the space between the cylinders,
with air in the remaining part. This device acts like two capacitors in parallel, one with dielectric constant of
air (≈ 1) and the other with that of the liquid. Thus, the variation of the liquid level causes variation of the
electrical capacity, measured between the cylinders. The change in the capacitance causes a change in the
bridge output voltage which is fed to the PC through an amplifier circuit. The characteristics of the sensor
between the level and the voltage are approximately linear.
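To see why the probe output is approximately linear in level, treat the two concentric cylinders (radii a < b, height L) with liquid of relative permittivity εr filling them to height h as two cylindrical capacitors in parallel. Under this idealization (our illustration, not a derivation given in the text),
\[
C(h) \;=\; \frac{2\pi\varepsilon_0}{\ln(b/a)}\,\bigl[\varepsilon_r\,h + (L-h)\bigr]
\]
which increases linearly with the liquid level h, consistent with the approximately linear level-to-voltage characteristic mentioned above.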
Fig. 1.6 PC-based liquid-level control system
1.2 COMPUTER-BASED CONTROL: HISTORY AND TRENDS
Digital computers were first applied to the industrial process control in the late 1950s. The machines were
generally large-scale ‘main frames’ and were used in a so-called supervisory control mode; the individual
temperature, pressure, flow, and similar feedback loops were locally controlled by electronic or pneumatic
analog controllers. The main function of the computer was to gather information on how the overall process
was operating, feed this into a technical-economic model of the process (programmed into computer
memory), and then, periodically, send signals to the set-points of all the analog controllers, so that each
individual loop operated in such a way as to optimize the overall operation.
In 1962, a drastic departure from this approach was made by Imperial Chemical Industries in England—
a digital computer was installed, which measured 224 variables and manipulated 129 valves directly. The
name Direct Digital Control (DDC) was coined to emphasize that the computer controlled the process
directly. In DDC systems, analog controllers were no longer used. The central computer served as a single,
time-shared controller for all the individual feedback loops. Conventional control laws were still used for
each loop, but the digital versions of control laws for each loop resided in the software in the central compu-
ter. Though digital computers were very expensive, one expected DDC systems to have economic advantage
for processes with many (50 or more) loops. Unfortunately, this did not often materialize. As failures in the
central computer of a DDC system shut down the entire system, it was necessary to provide a ‘fail-safe’
back-up system, which usually turned out to be a complete system of individual loop analog controllers, thus
negating the expected hardware savings.
There was a substantial development of digital computer technology in the 1960s. By the early 1970s,
smaller, faster, more reliable, and cheaper computers became available. The term minicomputers was coined
for the new computers that emerged. DEC PDP11 is by far, the best-known example. There were, however,
many related machines from other vendors.
The minicomputer was still a fairly large system. Even as performance continued to increase and prices to
decrease, the price of a minicomputer main frame in 1975, was still about $10,000. Computer control was still
out of reach for a large number of control problems. However, with the development of microcomputer, the
price of a card computer, with the performance of a 1975 minicomputer, dropped to $500 in 1980. Another
consequence was that digital computing power in 1980 came in quanta as small as $50. This meant that
computer control could now be considered as an alternative, no matter how small the application [54-57].
Microcomputers have already made a great impact on the process control field. They are replacing analog
hardware even as single-loop controllers. Small DDC systems have been made using microcomputers.
Operator communication has vastly improved with the introduction of colour video-graphics displays.
The variety of commercially available industrial controllers ranges from single-loop controllers through
multiloop single computer systems to multiloop distributed computers. Although the range of equipment
available is large, a number of identifiable trends are apparent.
Single-loop microprocessor-based controllers, though descendants of single-loop analog controllers, have
greater degree of flexibility. Control actions which are permitted, include on/off control, proportional action,
integral action, derivative action and the lag effect. Many controllers have self-tuning option. During the self-
tune sequence, the controller introduces a number of step commands, within the tolerances allowed by the
operator, in order to characterize the system response. From this response, values for proportional gain, reset
time, and rate time are developed. This feature of online tuning in industrial controllers is interesting, and
permits the concept of the computer automatically adjusting to changing process conditions [11-12].
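One classical recipe for converting such a step test into controller settings is the Ziegler–Nichols reaction-curve method; commercial self-tuners typically use their own (often proprietary) rules, so the sketch below is only indicative. It fits an apparent dead time and time constant to a recorded step response (simulated here from an assumed first-order-plus-dead-time process) and returns PI settings:

import math

def reaction_curve_pi(t, y, step_size):
    """Estimate dead time L and time constant tau from a recorded step response,
    then return Ziegler-Nichols PI settings (proportional gain, reset time)."""
    y_final = y[-1]
    K = y_final / step_size                       # process gain
    # times at which the response reaches 28.3% and 63.2% of its final value
    t283 = next(ti for ti, yi in zip(t, y) if yi >= 0.283 * y_final)
    t632 = next(ti for ti, yi in zip(t, y) if yi >= 0.632 * y_final)
    tau = 1.5 * (t632 - t283)                     # two-point first-order fit
    L = max(t632 - tau, 1e-6)                     # apparent dead time
    kp = 0.9 * tau / (K * L)                      # Z-N PI proportional gain
    reset_time = L / 0.3                          # Z-N PI integral (reset) time
    return kp, reset_time

# demonstration on a simulated first-order-plus-dead-time process
K_true, tau_true, L_true, step = 2.0, 5.0, 1.0, 1.0
t = [0.01 * i for i in range(3000)]
y = [0.0 if ti < L_true else
     step * K_true * (1 - math.exp(-(ti - L_true) / tau_true)) for ti in t]
print(reaction_curve_pi(t, y, step))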
Multiloop single computer systems have variability in available interface and software design. Both the
single-loop and multiloop controllers may be used in stand-alone mode, or may be interfaced to a host
computer for distributed operation. The reducing costs and increasing power of computing systems have
tended to make distributed computing systems for larger installations, far more cost effective than those built
around one large computer. However, the smaller installation may be best catered for by a single multiloop
controller, or even a few single-loop devices.
Control of large and complex processes using distributed computer control systems (DCCS), is facili-
tated by adopting a multilevel or hierarchical view point of control strategy. The multilevel approach sub-
divides the system into a hierarchy of simpler control design problems. On the lowest level of control (direct
process control level), the following tasks are handled: acquisition of process data, i.e., collection of instan-
taneous values of individual process variables, and status messages of plant control facilities (valves, pumps,
motors, etc.) needed for efficient direct digital control; processing of collected data; plant hardware moni-
toring, system check and diagnosis; closed-loop control and logic control functions, based on directives
from the next ‘higher’ level.
Supervisory level copes with the problems of determination of optimal plant work conditions, and genera-
tion of relevant instructions to be transferred to the next ‘lower’ level. Adaptive control, optimal control, plant
performance monitoring, plant coordination and failure detections are the functions performed at this level.
Production scheduling and control level is responsible for production dispatching, inventory control,
production supervision, production re-scheduling, production reporting, etc.
Plant(s) management level, the ‘highest’ hierarchical level of the plant automation system, is in charge of
the wide spectrum of engineering, economic, commercial, personnel, and other functions.
It is, of course, not to be expected that in all available distributed computer control systems, all four
hierarchical levels are already implemented. For automation of small-scale plants, any DCCS having at least
two hierarchical levels, can be used. One system level can be used as a direct process control level, and the
second one as a combined plant supervisory, and production scheduling and control level. Production
planning and other enterprise-level activities, can be managed by the separate mainframe computer or the
computer centre. For instance, in a LAN (Local Area Network)-based system structure, shown in Fig. 1.7a,
the ‘higher’ automation levels are implemented by simply attaching the additional ‘higher’ level computers
to the LAN of the system [89].
Fig. 1.7 LAN-based plant automation system structure
1.3 CONTROL THEORY: HISTORY AND TRENDS
The development of control system analysis and design can be divided into three eras. In the first era,
we have the classical control theory, which deals with techniques developed during the 1940s and 1950s.
Classical control methods—Routh-Hurwitz, Root Locus, Nyquist, Bode, Nichols—have in common the use
of transfer functions in the complex frequency (Laplace variable s) domain, and the emphasis on the graphi-
cal techniques. Since computers were not available at that time, a great deal of emphasis was placed on
developing methods that were amenable to manual computation and graphics. A major limitation of the
classical control methods was the use of single-input, single-output (SISO) control configurations. Also, the
use of the transfer function and frequency domain limited one to linear time-invariant systems. Important
results of this era will be discussed in Part I of this book.
In the second era, we have modern control (which is not so modern any longer), which refers to state-
space based methods developed in the late 1950s and early 1960s. In modern control, system models are
directly written in the time domain. Analysis and design are also carried out in the time domain. It should be
noted that before Laplace transforms and transfer functions became popular in the 1920s, engineers were
studying systems in the time domain. Therefore, the resurgence of time-domain analysis was not unusual, but
it was triggered by the development of computers and advances in numerical analysis. As computers were
available, it was no longer necessary to develop analysis and design methods that were strictly manual.
Multivariable (multi-input, multi-output (MIMO)) control configurations could be analysed and designed.
An engineer could use computers to numerically solve or simulate large systems that were nonlinear and/or
time-varying. Important results of this era—Lyapunov stability criterion, pole-placement by state feedback,
state observers, optimal control—will be discussed in Part II of this book.
Modern control methods initially enjoyed a great deal of success in academic circles, but they did not
perform very well in many areas of application. Modern control provided a lot of insight into system struc-
ture and properties, but it masked other important feedback properties that could be studied and manipulated
using the classical control theory. A basic requirement in control engineering is to design control systems
that will work properly when the plant model is uncertain. This issue is tackled in the classical control theory
using gain and phase margins. Most modern control design methods, however, inherently require a precise
model of the plant. In the years since these methods were developed, there have been few significant
implementations and most of them have been in a single application area—the aerospace industry. The
classical control theory, on the other hand, is going strong. It provides an efficient framework for the design
of feedback controls in all areas of application. The classical design methods have been greatly enhanced by
the availability of low-cost computers for system analysis and simulation. The graphical tools of classical
design can now be more easily used with computer graphics for SISO as well as MIMO systems.
During the past three decades, the control theory has experienced a rapid expansion, as a result of the
challenges of the stringent requirements posed by modern systems such as: flight vehicles, weapon control
systems, robots, and chemical processes; and the availability of low-cost computing power. A body of
methods emerged during this third era of control-theory development, which tried to provide answers to the
problems of plant uncertainty. These techniques, commonly known as robust control, are a combination of
modern state-space and classical frequency-domain techniques. For a thorough understanding of these new
methods, we need to have adequate knowledge of state-space methods, in addition to the frequency-domain
methods. This has guided the preparation of this text.
Robust control system design has been dominated by linear control techniques, which rely on the key
assumption of availability of an uncertainty model. When the required operation range is large, and a reliable
uncertainty model cannot be developed, a linear controller is likely to perform very poorly. Nonlinear
controllers, on the other hand, may handle the nonlinearities in large range operation, directly. Also,
nonlinearities can be intentionally introduced into the controller part of a control system, so that the model
uncertainties can be tolerated. Advances in computer technology have made the implementation of nonlinear
control schemes—feedback linearization, variable structure sliding mode control, adaptive control,
gain-scheduling—a relatively simple task.
The third era of control-theory development has also given an alternative to model-based design meth-
ods: the knowledge-based control. In this approach, we look for a control solution that exhibits intelligent
behaviour, rather than using purely mathematical methods to keep the system under control.
Model-based control techniques have many advantages. When the underlying assumptions are satisfied,
many of these methods provide good stability, robustness to model uncertainties and disturbances, and speed of
response. However, there are many practical deficiencies of these ‘crisp’ (‘hard’ or ‘inflexible’) control algo-
rithms. It is generally difficult to accurately represent a complex process by a mathematical model. If the
process model has parameters whose values are partially known, ambiguous, or vague, crisp control algorithms
that are based on such incomplete information, will not usually give satisfactory results. The environment with
which the process interacts, may not be completely predictable and it is normally not possible for a crisp
algorithm, to accurately respond to a condition that it did not anticipate, and that it could not ‘understand’.
Intelligent control is the name introduced to describe control systems in which control strategies are
based on AI techniques. In this control approach, which is an alternative to model-based control approach,
a behavioural (and not mathematical) description of the process is used, which is based on qualitative
expressions and experience of people working with the process. Actions can be performed either as a result
of evaluating rules (reasoning), or as unconscious actions based on presented process behaviour after a
learning phase. Intelligence comes in as the capability to reason about facts and rules, and to learn about
presented behaviour. It opens up the possibility of applying the experience gathered by operators and
process engineers. Uncertainty about the knowledge can be handled, along with ignorance about the structure
of the system.
Fuzzy logic and neural networks are very good methods to model real processes which cannot be described
mathematically. Fuzzy logic deals with linguistic and imprecise rules based on an expert's knowledge. Neural
networks are applied in cases where we do not have rules, but do have data.
The main feature of fuzzy logic control is that a control engineering knowledge base (typically in terms of a
set of rules), created using expert’s knowledge of process behaviour, is available within the controller and the
control actions are generated by applying existing process conditions to the knowledge base, making use of an
inference mechanism. The knowledge base and the inference mechanism can handle noncrisp and incomplete
information, and the knowledge itself will improve and evolve through learning and past experience.
In neural network based control, the goal of the artificial neural network is to emulate the mechanism of
human brain function and reasoning, and to achieve the same intelligence level as the human brain in learn-
ing, abstraction, generalization and making decisions under uncertainty.
In the conventional design exercises, the system is modelled analytically by a set of differential equations,
and their solution tells the controller how to adjust the system’s control activities for each type of behaviour
required. In a typical intelligent control scheme, these adjustments are handled by an intelligent controller, a
logical model of thinking processes that a person might go through in the course of manipulating the system.
This shift in focus from the process to the person involved, changes the entire approach to automatic control
problems. It provides a new design paradigm such that a controller can be designed for complex, ill-defined
processes without knowing quantitative input-output relations, which are otherwise required by conven-
tional methods.
The ever-increasing demands of the complex control systems being built today, and planned for the
future, dictate the use of novel and more powerful methods in control. The potential for intelligent control
techniques in solving many of the problems involved is great, and this research area is evolving rapidly. The
emerging viewpoint is that model-based control techniques should be augmented with intelligent control
techniques in order to enhance the performance of the control systems. The developments in intelligent
control methods should be based on firm theoretical foundations (as is the case with model-based control
methods), but this is still at its early stages. Strong theoretical results guaranteeing control system properties
such as stability are still to come, although promising progress in special cases has been reported
recently. The potential of intelligent control systems clearly needs to be further explored, and both
theory and applications need to be further developed. A brief account of nonlinear control schemes, both the
conventional and the intelligent, will be given in Part III of this book.
1.4 AN OVERVIEW OF THE CLASSICAL APPROACH TO ANALOG CONTROLLER DESIGN
The tools of classical linear control system design are the Laplace transform, stability testing, root locus, and
frequency response. Laplace transformation is used to convert system descriptions in terms of integro-
differential equations to equivalent algebraic relations involving rational functions. These are conveniently
manipulated in the form of transfer functions with block diagrams and signal flow graphs [155].
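As a simple illustration of this conversion (a standard example, not one taken from the text), applying the Laplace transform with zero initial conditions to a first-order differential equation yields an algebraic relation and a rational transfer function:
\[
\dot{y}(t) + a\,y(t) = b\,u(t) \;\;\xrightarrow{\;\mathcal{L},\ y(0)=0\;}\;\; (s+a)\,Y(s) = b\,U(s) \;\;\Longrightarrow\;\; \frac{Y(s)}{U(s)} = \frac{b}{s+a}
\]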
The block diagram of Fig. 1.8 represents the basic structure of feedback control systems. Not all systems
can be forced into this format, but it serves as a reference for discussion.
In Fig. 1.8, the variable y(t) is the controlled variable of the system. The desired value of the controlled
variable is yr(t), the command input. yr(t) and y(t) have the same units. The feedback elements with transfer
function H(s) are system components that act on the controlled variable y(t) to produce the feedback signal
b(t). H(s) typically represents the sensor action to convert the controlled variable y(t) to an electrical sensor
output signal b(t).
Fig. 1.8 Basic structure of a feedback control system
The reference input elements with transfer function A(s) convert the command signal yr(t) into a form
compatible with the feedback signal b(t). The transformed command signal is the actual physical input to the
system. This actual signal input is defined as the reference input.
The comparison device (error detector) of the system compares the reference input r(t) with the feedback
signal b(t) and generates the actuating error signal ê(t). The signals r(t), b(t), and ê(t) have the same units. The
controller with transfer function D(s) acts on the actuating error signal to produce the control signal u(t).
The control signal u(t) has the knowledge about the desired control action. The power level of this signal
is relatively low. The actuator elements with transfer function GA(s), are the system components that act
on the control signal u(t) and develop enough torque, pressure, heat, etc. (manipulated variable m(t)), to
influence the controlled system. GP(s) is the transfer function of the controlled system.
The disturbance w(t) represents the undesired signals that tend to affect the controlled system. The
disturbance may be introduced into the system at more than one location.
The dashed-line portion of Fig. 1.8 shows the system error e(t) = yr(t) – y(t). Note that the actuating error
signal ê(t) and the system error e(t) are two different variables.
The basic feedback system block diagram of Fig. 1.8 is shown in abridged form in Fig. 1.9. The output
Y(s) is influenced by the control signal U(s) and the disturbance signal W(s) as per the following relation:
Y(s) = GP(s) GA(s) U(s) + GP(s) W(s) (1.1a)
= G(s) U(s) + N(s) W(s) (1.1b)
where G(s) is the transfer function from the control signal U(s) to the output Y(s), and N(s) is the transfer
function from the disturbance input W(s) to the output Y(s). Using Eqns (1.1), we can modify the block
diagram of Fig. 1.9 to the form shown in Fig. 1.10. Note that in the block diagram model of Fig. 1.10, the
plant includes the actuator elements.
From Fig. 1.10, the control signal generated by the controller is
U(s) = D(s)[A(s)Yr(s) – H(s)Y(s)] (1.2a)
= D(s)H(s)[(A(s)/H(s))Yr(s) – Y(s)] (1.2b)
Using Eqns (1.2a) and (1.2b), we can simplify Fig. 1.10 to obtain the structure shown in Fig. 1.11.
Fig. 1.15 Block diagram without reference input for system of Fig. 1.13
Fig. 1.16 Reduced block diagram model
The transfer functions given by Eqns (1.3) and (1.4) are referred to as closed-loop transfer functions. The
denominator of these transfer functions contains the term D(s)G(s)H(s), which is the product of all the
transfer functions in the feedback loop. It may be viewed as the transfer function between the variables R(s)
and B(s) if the loop is broken at the summing point. D(s)G(s)H(s) may therefore be called the open-
loop transfer function. The roots of the denominator polynomial of D(s)G(s)H(s) are the open-loop poles, and
the roots of the numerator polynomial of D(s)G(s)H(s) are the open-loop zeros.
The roots of the characteristic equation
1 + D(s)G(s)H(s) = 0 (1.6)
are the closed-loop poles of the system. The system is bounded-input, bounded-output (BIBO) stable if and
only if all the closed-loop poles lie in the left half of the complex plane. Stability may be tested by the Routh
stability criterion.
A root locus plot consists of a pole-zero plot of the open-loop transfer function of a feedback system,
upon which is superimposed the locus of the poles of the closed-loop transfer function as some parameter is
varied. Design of the controller (compensator) D(s) can be carried out using the root locus plot. One begins
with simple compensators, increasing their complexity until the performance requirements can be met. Prin-
cipal measures of transient performance are peak overshoot, settling time, and rise time. The compensator
poles, zeros, and multiplying constant are selected to give feedback-system pole locations that result in
acceptable transient response to step inputs. At the same time, the parameters are constrained so that the
resulting system has acceptable steady-state response to important inputs, such as steps and ramps.
Frequency response characterizations of systems have long been popular because of the ease and practi-
cality of steady-state sinusoidal response measurements. These methods also apply to systems in which
rational transfer function models are not adequate, such as those involving time delays. They do not require
explicit knowledge of system transfer function models; experimentally obtained open-loop sinusoidal
response data can directly be used for stability analysis and compensator design. A stability test, the Nyquist
criterion, is available. Principal measures of transient performance are gain margin, phase margin, and
bandwidth. The design of the compensator is conveniently carried out using the Bode plot and the Nichols
chart. One begins with simple compensators, increasing their complexity until the transient and steady-state
performance requirements are met.
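The sketch below reads gain and phase margins off a computed frequency response for an assumed open-loop transfer function L(s) = 10/(s(s + 1)(s + 5)); the crossover search is a crude numerical stand-in for what one would normally read from a Bode plot or Nichols chart.

```python
# Hedged sketch: gain and phase margins of an assumed open-loop transfer function.
from scipy import signal
import numpy as np

L = signal.TransferFunction([10], np.polymul([1, 1, 0], [1, 5]))   # 10/(s(s+1)(s+5))
w = np.logspace(-2, 2, 2000)
w, mag_db, phase_deg = signal.bode(L, w)

i_gc = np.argmin(np.abs(mag_db))             # gain crossover: |L(jw)| closest to 0 dB
i_pc = np.argmin(np.abs(phase_deg + 180.0))  # phase crossover: phase closest to -180 deg
pm = 180.0 + phase_deg[i_gc]                 # phase margin, degrees
gm_db = -mag_db[i_pc]                        # gain margin, dB
print(f"PM = {pm:.1f} deg at {w[i_gc]:.2f} rad/s; GM = {gm_db:.1f} dB at {w[i_pc]:.2f} rad/s")
```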
There are two approaches to carrying out the digital controller (compensator) design. The first approach
uses the methods discussed above to design an analog compensator, and then transform it into a digital one.
The second approach first transforms the analog plant into a digital plant, and then carries out the design using
digital techniques. The first approach performs discretization after design; the second approach performs
discretization before design. The classical approach to designing a digital compensator directly using an
equivalent digital plant for a given analog plant, parallels the classical approach to analog compensator
design. The concepts and tools of the classical digital design procedures will be given in Chapters 2–4. This
background will also be useful in understanding and applying the state-variable methods to follow.
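A minimal sketch of the first approach ("discretize after design"), assuming an illustrative analog lead compensator D(s) = 10(s + 1)/(s + 10) and a sampling period T = 0.1 s; the bilinear (Tustin) transformation used here is one common way of converting an analog compensator into a digital one.

```python
# Hedged sketch: convert an assumed analog compensator to a digital one
# with the bilinear (Tustin) transformation at an assumed T = 0.1 s.
from scipy import signal

T = 0.1
Ds = ([10.0, 10.0], [1.0, 10.0])                       # 10(s + 1)/(s + 10), illustrative
numz, denz, _ = signal.cont2discrete(Ds, T, method='bilinear')
print(numz.ravel(), denz)                              # coefficients of the digital D(z)
```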
This text is concerned with digital control and state variable methods for a special class of control systems,
namely time-invariant, lumped, and deterministic systems. We do not intend to solve all the problems that
can be posed under the defined category. Coverage of digital control theory and practice is modest. Various
concepts from interdisciplinary fields of computer science and computer engineering which relate to digital
control system development—number representation, logical descriptions of algorithmic processes, compu-
ter arithmetic operations, computer system hardware and software—are beyond the scope of this book. In
fact, a course on control engineering need not include these topics because specialized courses on computer-
system architecture are normally offered in undergraduate curricula of all engineering disciplines.
It is assumed that the reader has had an introductory course in control engineering concentrating on
the basic principles of feedback control and covering various classical analog methods of control system
design. The classical digital methods of design are developed in Part I of this book, paralleling and extend-
ing considerably the similar topics in analog control.
State variable analysis and design methods are usually not covered in an introductory course. It is
assumed that the reader is not exposed to the so-called modern control theory. Our approach in Part II of this
book is to first discuss state variable methods for continuous-time systems, and then give a compact presen-
tation of the methods for discrete-time systems, using the analogy with the continuous-time case.
This text also prepares a student for the study of advanced control methods. However, detailed coverage
of these methods is beyond the scope of the book; only a brief account is given in Part III.
There are eleven chapters in the book in addition to this introductory chapter. In each chapter, analysis/
design examples are interspersed to illustrate the concepts involved. At the end of each chapter, there are a
number of review examples that take the reader to a higher level of application; some of these examples also
serve the purpose of extending the text material. The same approach is followed in unsolved problems.
Answers to problems have been given to inspire confidence in the reader.
The examples we have considered in this book are generally low-order systems. Such a selection of
examples helps in conveying the fundamental concepts of feedback control without the distraction of large
amounts of computations, inherent in high-order systems. Many of the real-life design problems are more
complex than the ones discussed in this book. High-order systems are common and, in addition, several
parameters have to be varied during the design stage to investigate their effect on the system performance. Com-
puter-Aided-Design (CAD) tools are extremely useful for complex control problems. Several software
packages with computer graphics are available commercially for CAD of control systems [151–154].
Let us now go through the organization of the chapters. Chapters 2–4 deal with digital control theory and
practice. The philosophy of presentation is that the new material should be closely related to material
already familiar, and yet, by the end, a direction to wider horizons should be indicated. The approach leads
us, for example, to relate the z-transform to the Laplace transform and to describe the implications of poles
and zeros in the z-plane to those known meanings attached to poles and zeros in the s-plane. Also, in devel-
oping the design methods we relate the digital control design methods to those of continuous-time systems.
Chapter 2 introduces the sampling theorem and the phenomenon of aliasing. Methods to generate
discrete-time models which approximate continuous-time dynamics, and stability analysis of these models
are also included in this chapter.
Chapter 3 considers the hardware (analog and digital) of the control loop with emphasis on modelling.
Models of some of the widely used digital control systems are also included.
Chapter 4 establishes a toolkit of design-oriented techniques. It puts forward alternative design methods
based on root-locus and Bode plots. Design of digital controllers using z-plane synthesis is also included in
this chapter. References for the material in Chapters 2–4 are [11–12, 23–25, 30–31, 52, 80–98].
Chapters 5–8 deal with state variable methods in automatic control. Chapter 5 is on state variable analysis.
It exposes the problems of state variable representation, diagonalization, solution, controllability, and
observability. The relationship between the transfer function and state variable models is also given.
Although it is assumed that the reader has the necessary background on vector-matrix analysis, a reasonably
detailed account of vector-matrix analysis is provided in this chapter for convenient reference.
State variable analysis concepts, developed in continuous-time format in Chapter 5, are extended to
digital control systems in Chapter 6.
The techniques of achieving desired system characteristics by pole-placement using complete state
variable feedback are developed in Chapter 7. Also included is the method of using the system output to
form estimates of the states for use in state feedback. Results are given for both continuous-time and
discrete-time systems.
Lyapunov stability analysis is introduced in Chapter 8. In addition to stability analysis, Lyapunov func-
tions are useful in solving some optimization problems. We discuss in this chapter, the solution of linear
quadratic optimal control problem through Lyapunov synthesis. Results are given for both continuous-time
and discrete-time systems. References for the material in Chapters 5–8 are [27–28, 99–119, 122–124].
Describing function and phase-plane methods, which have demonstrated great utility in the analysis of
nonlinear systems, receive considerable attention in Chapter 9. Also included is stability analysis of
nonlinear systems using Lyapunov functions. References for the material in Chapter 9 are [125–129].
The concepts of feedback linearization, model reference adaptive control, system identification and self-
tuning control, and variable structure control are briefly introduced in Chapter 10. Also included in this
chapter is an introduction to an emerging nonlinear-control architecture—reinforcement learning
control. For detailed study, refer to [120–121, 125–136, 147–150].
A tutorial introduction to knowledge-based tools (neural networks, support vector machines, fuzzy logic,
genetic algorithms) for control system design is given in Chapters 11 and 12. Rigorous characterization of
theoretical properties of intelligent control methodology is not our aim in our tutorial presentation; rather,
we focus on the development of systematic engineering procedures, which will guide the design of controllers
for a specific problem. For detailed study, refer to [137–146].
Appendices A and B provide an introduction to the MATLAB environment for computer-aided control
system design.
2  Signal Processing in Digital Control
Digital control systems offer many advantages over their analog counterparts. Of course, there are possible
problems also. Let us first look at the advantages of digital control over the corresponding analog control
before we talk of the price one has to pay for the digital option.
Figure 2.2 depicts a block diagram of a digital control system showing a configuration of the basic control
scheme. The basic elements of the system are shown by the blocks.
[Fig. 2.2 Configuration of the basic digital control scheme: digital set-point and A/D output into the computer; D/A, final control element, and plant in the forward path; sensor, anti-aliasing filter, and S/H in the feedback path; a clock synchronizes the conversions; the disturbance acts on the plant, which produces the controlled output]
The analog feedback signal coming from the sensor is usually of low frequency. It may often include high-
frequency ‘noise.’ Such noise signals are too fast for the control system to correct; low-pass filtering is often
needed to allow good control performance. The anti-aliasing filter shown in Fig. 2.2 serves this purpose. In
digital systems, the phenomenon called aliasing introduces some new aspects to the noise problems. We will
study this phenomenon later in this chapter.
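As a rough sketch of the anti-aliasing idea, the code below places an assumed second-order Butterworth low-pass filter ahead of a sampler running at an assumed 100 samples/s and checks its attenuation at the Nyquist frequency; the filter order, cutoff, and rates are illustrative, not taken from the text.

```python
# Hedged sketch: a simple analog anti-aliasing filter ahead of the sampler.
# Assumed numbers: 100 samples/s (Nyquist 50 Hz), 2nd-order Butterworth, 20 Hz cutoff.
import numpy as np
from scipy import signal

fs = 100.0                    # sampling rate, samples/s (assumed)
fc = 20.0                     # filter cutoff, Hz (assumed, below fs/2 = 50 Hz)
b, a = signal.butter(2, 2 * np.pi * fc, btype='low', analog=True)

# Attenuation of the filter at the Nyquist frequency
w, h = signal.freqs(b, a, worN=[2 * np.pi * fs / 2])
print(f"gain at {fs/2:.0f} Hz: {20 * np.log10(abs(h[0])):.1f} dB")
```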
The analog signal, after anti-aliasing processing, is converted into digital form by the A/D conversion
system. The conversion system usually consists of an A/D converter preceded by a sample-and-hold (S/H)
device. The A/D converter converts a voltage (or current) amplitude at its input into a binary code represent-
ing a quantized amplitude value closest to the amplitude of the input. However, the conversion is not
instantaneous. Input signal variation during the conversion time of the A/D converter can lead to erroneous
results. For this reason, high-performance A/D conversion systems include an S/H device, which keeps the
input to the A/D converter constant during its conversion time.
The digital computer processes the sequence of numbers by means of an algorithm and produces a new
sequence of numbers. Since data conversions and computations take time, there will always be a delay when
a control law is implemented using a digital computer. The delay, which is called the computational delay,
degrades the control system performance. It should be minimized by the proper choice of hardware and by
the proper design of software for the control algorithm. Floating-point operations take considerably longer
to perform (even when carried out by an arithmetic co-processor) than fixed-point ones. We therefore
try to execute fixed-point operations whenever possible. Alternative realization schemes for a control
algorithm will be given in the next chapter.
The D/A conversion system in Fig. 2.2 converts the sequence of numbers in numerical code into a
piecewise continuous-time signal. The output of the D/A converter is fed to the plant through the actuator
(final control element) to control its dynamics.
The basic control scheme of Fig. 2.2 assumes a uniform sampling operation, i.e., only one sampling rate
exists in the system and the sampling period is constant. The real-time clock in the computer synchronizes
all the events of A/D conversion–computation–D/A conversion.
The control scheme of Fig. 2.2 shows a single feedback loop. In a control system having multiple loops,
the largest time constant involved in one loop may be quite different from that in other loops. Hence, it may
be advisable to sample slowly in a loop involving a large time constant, while in a loop involving only small
time constants, the sampling rate must be fast. Thus, a digital control system may have different sampling
periods in different feedback paths, i.e., it may have multiple-rate sampling. Although digital control
systems with multi-rate sampling are important in practical situations, we shall concentrate on single-rate
sampling. (The reader interested in multi-rate digital control systems may refer to Kuo [87].)
The overall system in Fig. 2.2 is hybrid in nature; the signals are in a sampled form (discrete-time signals/
digital signals) in the computer and in continuous-time form in the plant. Such systems have traditionally
been called sampled-data control systems. We will use this term as a synonym to computer control systems/
digital control systems.
In the present chapter, we focus our attention on the digital computer and its analog interfacing. For the time
being, we delink the digital computer from the plant. The link will be re-established in the next chapter.
Figure 2.3a shows an analog signal y(t)—it is defined at the continuum of times, and its amplitudes assume
a continuous range of values. Such a signal cannot be stored in digital computers. The signal, therefore, must
be converted to a form that will be accepted by digital computers. One very common method to do this is
to record sample values of this signal at equally spaced instants. If we sample the signal every 10 msec,
for example, we obtain the discrete-time signal sketched in Fig. 2.3b. The sampling interval of 10 msec
corresponds to a sampling rate of 100 samples/sec. The choice of the sampling rate is an important one,
since it determines how accurately the discrete-time signal can represent the original signal.
In a practical situation, the sampling rate is determined by the range of frequencies present in the original
signal. Detailed analysis of uniform sampling process, and the related problem of aliasing will appear later
in this chapter.
Notice that the time axis of the discrete-time signal in Fig. 2.3b is labelled simply ‘sample number’ and
index k has been used to denote this number (k = 0, 1, 2, ...). Corresponding to different values of sample
number k, the discrete-time signal assumes the same continuous range of values assumed by the analog
signal y(t). We can represent the sample values by a sequence of numbers ys (refer Fig. 2.3b):
ys = {1.7, 2.4, 2.8, 1.4, 0.4, ...}
[Fig. 2.3 Sampling, quantization and coding of an analog signal: (a) analog signal y(t) over 0–40 msec; (b) discrete-time signal versus sample number k; (c) quantized signal yq(k); (d) coded digital words (k = 0: 10, k = 1: 10, k = 2: 11, k = 3: 01, k = 4: 00)]
In general,
ys = {y(k)}, 0 ≤ k < ∞
where y(k) denotes the kth number in the sequence.
The sequence defined above is a one-sided sequence; ys = 0 for k < 0. In digital control applications, we
normally encounter one-sided sequences.
Although, strictly speaking, y(k) denotes the kth number in the sequence, the notation given above is often
unnecessarily cumbersome, and it is convenient and unambiguous to refer to y(k) itself as a sequence.
Throughout our discussion on digital control, we will assume uniform sampling, i.e., sample values of the
analog signal are extracted at equally spaced sampling instants. If the physical time corresponding to the
sampling interval is T seconds, then the kth sample y(k) gives the value of the discrete-time signal at t = kT
seconds. We may, therefore, use y(kT) to denote a sequence wherein the independent variable is the physical time.
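A small sketch of the sampling notation just introduced: an assumed analog waveform (not the one plotted in Fig. 2.3) is sampled every T = 10 msec, so that y(k) = y(kT) at 100 samples/sec.

```python
# Hedged sketch: uniform sampling of an assumed analog signal at T = 10 msec.
import numpy as np

T = 0.01                       # sampling interval, 10 msec (100 samples/sec)
k = np.arange(5)               # sample numbers 0, 1, 2, 3, 4

def y(t):
    # assumed analog waveform, illustrative only
    return 1.5 + 1.5 * np.sin(2 * np.pi * 12 * t)

ys = y(k * T)                  # discrete-time sequence {y(k)} = {y(kT)}
print(np.round(ys, 2))
```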
The signal of Fig. 2.3b is defined at discrete instants of time. The sample values are, however, tied to a
continuous range of numbers. Such a signal can, in principle, be stored only in an infinite-bit machine, since a
finite-bit machine can store only a finite set of numbers.
A simplified hypothetical 2-bit machine can store four numbers:

Binary number    Decimal equivalent
00               0
01               1
10               2
11               3

The signal of Fig. 2.3b can be stored in such a machine if the sample values are quantized to four quantization
levels. Figure 2.3c shows a quantized discrete-time signal for our hypothetical machine. We have assumed that
any value in the interval [0.5, 1.5) is rounded to 1, and so forth. The signals for which both time and amplitude
are discrete are called digital signals.
After sampling and quantization, the final step required in converting an analog signal to a form accept-
able to digital computers is coding (or encoding). The encoder maps each quantized sample value into a
digital word. Figure 2.3d gives the coded digital signal corresponding to the analog signal of Fig. 2.3a for our
hypothetical 2-bit machine.
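The sketch below reproduces the quantization and coding steps for the sample values ys = {1.7, 2.4, 2.8, 1.4, 0.4} quoted earlier, rounding to the four levels of the hypothetical 2-bit machine and then encoding each level as a 2-bit word, which agrees with the words listed in Fig. 2.3d.

```python
# Hedged sketch: 2-bit quantization and coding of the sample values of Fig. 2.3b,
# rounding each value to the nearest of the four levels 0, 1, 2, 3
# (e.g. any value in [0.5, 1.5) becomes 1, as described in the text).
import numpy as np

ys = np.array([1.7, 2.4, 2.8, 1.4, 0.4])          # discrete-time samples
yq = np.clip(np.round(ys), 0, 3).astype(int)      # quantized to levels 0..3
codes = [format(int(v), '02b') for v in yq]       # coded 2-bit digital words
print(yq)       # [2 2 3 1 0]
print(codes)    # ['10', '10', '11', '01', '00'], matching Fig. 2.3d
```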
The device that performs the sampling, quantization, and coding is an A/D converter. Figure 2.4 is a
block diagram representation of the operations performed by an A/D converter.
[Fig. 2.4 Operations performed by an A/D converter: continuous-time, continuous-amplitude signal → discrete-time, continuous-amplitude signal → discrete-time, discrete-amplitude signal → digital words]
It may be noted that the quantized discrete-time signal of Fig. 2.3c and the coded signal of Fig. 2.3d carry
exactly the same information. For the purpose of analytical study of digital systems, we will use the quantized
discrete-time form for digital signals.
The number of binary digits carried by a device is its word length, and this is obviously an important
characteristic related to the resolution of the device—the smallest change in the input signal that will pro-
duce a change in the output signal. The A/D converter that generates signals of Fig. 2.3 has two binary digits
and thus four quantization levels. Any change, therefore, in the input over the interval [0.5, 1.5) produces no
change in the output. With three binary digits, 2³ quantization levels can be obtained, and the resolution of
the converter would improve.
The A/D converters in common use have word lengths of 8 to 16 bits. For an A/D converter with a word
length of 8 bits, an input signal can be resolved to one part in 2⁸, or 1 in 256. If the input signal has a range
of 10 V, the resolution is 10/256, or approximately 0.04 V. Thus, the input signal must change by at least
0.04 V in order to produce a change in the output.
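The resolution arithmetic of the preceding paragraph as a one-line computation, using the 8-bit word length and 10 V range assumed in the text:

```python
# Hedged sketch of the resolution calculation: an 8-bit converter with a 10 V range
# resolves 1 part in 2**8 = 256, i.e. about 0.04 V per quantization level.
n_bits = 8
full_scale = 10.0                        # volts
resolution = full_scale / 2**n_bits      # ~0.039 V
print(f"{resolution:.3f} V per quantization level")
```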
With the availability of converters with resolution ranging from 8 to 16 bits, the quantization errors do not
pose a serious threat in the computer control of industrial processes. In our treatment of the subject, we
assume quantization errors to be zero. This is equivalent to assuming infinite-bit digital devices. Thus we
treat digital signals as if they are discrete-time signals with amplitudes assuming a continuous range of
values. In other words, we make no distinction between the words ‘discrete-time’ and ‘digital.’
A typical topology of a single-loop digital control system is shown in Fig. 2.2. It has been assumed that the
measuring transducer and the actuator (final control element) are analog devices, requiring respectively A/D
and D/A conversion at the computer input and output. The D/A conversion is a process of producing an analog
signal from a digital signal and is, in some sense, the reverse of the sampling process discussed above.
[Fig. 2.5 Operations performed by a D/A converter: digital words → decoder → discrete-time signal → zero-order hold → analog signal]
The D/A converter performs two functions: first, generation of output samples from the binary-form
digital signals produced by the machine, and second, conversion of these samples to analog form. Figure 2.5 is
a block diagram representation of the operations performed by a D/A converter. The decoder maps each
digital word into a sample value of the signal in discrete-time form. It is usually not possible to drive a
load, such as a motor, with these samples. In order to deliver sufficient energy, the sample amplitudes
might have to be so large that they would be infeasible to generate. Also, large-amplitude signals might saturate
the system being driven.
The solution to this problem is to smooth the output samples to produce a signal in analog form. The
simplest way of converting a sample sequence into a continuous-time signal is to hold the value of the
sample until the next one arrives. The net effect is to convert a sample to a pulse of duration T—the sample
period. This function of a D/A converter is referred to as a zero-order hold (ZOH) operation. The term zero-
order refers to the zero-order polynomial used to extrapolate between the sampling times (detailed discus-
sion will appear later in this chapter). Figure 2.6 shows a typical sample sequence produced by the decoder,
and the analog signal¹ resulting from the zero-order hold operation.
[Fig. 2.6 (a) Sampled sequence y(k) (b) Analog output yh(t) from ZOH]
¹ In the literature, including this book, the terms ‘continuous-time signal’ and ‘analog signal’ are frequently interchanged.
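A minimal sketch of the zero-order hold operation: each sample value of an illustrative sequence is held constant for one sampling period T, giving the piecewise-constant output of the kind shown in Fig. 2.6b.

```python
# Hedged sketch: zero-order-hold reconstruction of a sample sequence
# (the sample values and period T are illustrative, not from the text).
import numpy as np

T = 0.01                                        # sampling period (assumed)
y_samples = np.array([1.7, 2.4, 2.8, 1.4, 0.4])

subs = 10                                       # sub-intervals per period, for plotting
yh = np.repeat(y_samples, subs)                 # piecewise-constant ZOH output
t = np.arange(yh.size) * (T / subs)             # corresponding time axis
print(yh[:12])                                  # y(0) held over [0, T), then y(1)
```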
[Plot: analog output (ratio to full scale) versus digital input, in steps of 1/8 up to full scale]
[Fig. 2.7 Three-bit D/A converter: R–2R ladder network with bit inputs b0, b1, b2, reference –Vref, and an operational amplifier producing the output V0]
Similarly, if b0 = 1 and b2 = b1 = 0, then the equivalent circuit is as shown in Fig. 2.8c. The output
voltage is
    V0 = 3R(i0/8) = (1/8)Vref
In this way, we find that when the input data is b2b1b0 (where the bi's are either 0 or 1), then the output
voltage is
    V0 = (b2 2⁻¹ + b1 2⁻² + b0 2⁻³)VFS    (2.1)
where VFS = Vref = full-scale output voltage.
The circuit and the defining equation for an n-bit D/A converter easily follow from Fig. 2.7 and Eqn. (2.1),
respectively.
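Eqn (2.1) as a small computation, assuming a full-scale voltage VFS = 8 V so that the eight output levels fall at 0, 1, ..., 7 V:

```python
# Hedged sketch of Eqn (2.1): output of the 3-bit D/A converter for input word
# b2 b1 b0, with an assumed full-scale voltage V_FS = 8 V (illustrative).
def dac3(b2, b1, b0, v_fs=8.0):
    """V0 = (b2*2**-1 + b1*2**-2 + b0*2**-3) * V_FS."""
    return (b2 * 2**-1 + b1 * 2**-2 + b0 * 2**-3) * v_fs

print(dac3(1, 0, 0))   # 4.0 V  (only the MSB set)
print(dac3(0, 0, 1))   # 1.0 V  (only the LSB set: Vref/8, as in the text)
print(dac3(1, 1, 1))   # 7.0 V  (7/8 of full scale)
```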
[Fig. 2.8 Equivalent circuits of the D/A converter shown in Fig. 2.7: (a) b0 = b1 = 0, b2 = 1; (b) b0 = 0, b1 = 1, b2 = 0; (c) b0 = 1, b1 = b2 = 0]
There are a number of basic discrete-time signals which play an important role in the analysis of signals and
systems. These signals are direct counterparts of the basic continuous-time signals.² As we shall see, many of
the characteristics of basic discrete-time signals are directly analogous to the properties of basic continuous-
time signals. There are, however, several important differences in discrete time, and we will point these out
as we examine the properties of these signals.
² Chapter 2 of the companion book [155].
Unit Sample Sequence  The unit sample sequence contains only one nonzero element and is defined by
(Fig. 2.11a)
    δ(k) = 1 for k = 0, and 0 otherwise    (2.2a)
The delayed unit sample sequence, denoted by δ(k – n), has its nonzero element at sample time n (Fig. 2.11b):
    δ(k – n) = 1 for k = n, and 0 otherwise    (2.2b)
One of the important aspects of the unit sample sequence is that an arbitrary sequence can be represented
as a sum of scaled, delayed unit samples. For example, the sequence r (k) in Fig. 2.11c can be expressed as
    r(k) = r(0)δ(k) + r(1)δ(k – 1) + r(2)δ(k – 2) + ⋯
         = Σ_{n=0}^{∞} r(n)δ(k – n)    (2.3)
r(0), r(1), …, are the sample values of the sequence r(k). This representation of a discrete-time signal is
found useful in the analysis of linear systems through the principle of superposition.
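Eqn (2.3) as a small numerical check: an illustrative four-sample sequence is rebuilt exactly from scaled, delayed unit samples.

```python
# Hedged sketch of Eqn (2.3): rebuilding a sequence r(k) as a sum of scaled,
# delayed unit samples r(n)*delta(k - n); the sample values are illustrative.
import numpy as np

def delta(k):
    """Unit sample sequence of Eqn (2.2a)."""
    return 1 if k == 0 else 0

r = np.array([0.5, 1.0, 2.0, 1.5])                  # assumed sample values r(0)..r(3)
K = np.arange(r.size)
rebuilt = [sum(r[n] * delta(k - n) for n in range(r.size)) for k in K]
print(np.allclose(rebuilt, r))                      # True: the decomposition is exact
```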
[Fig. 2.11 Basic discrete-time signals: (a) unit sample δ(k); (b) delayed unit sample δ(k – n); (c) arbitrary sequence r(k); (d) unit step μ(k); (e) delayed unit step μ(k – n); (f) sinusoidal sequence]
As we will see, the unit sample sequence plays the same role for discrete-time signals and systems, that
the unit impulse function does for continuous-time signals and systems. For this reason, the unit sample
sequence is often referred to as the discrete-time impulse. It is important to note that a discrete-time impulse
does not suffer from the mathematical complexity that a continuous-time impulse suffers from. Its definition
is simple and precise.
Unit Step Sequence  The unit step sequence is defined as³ (Fig. 2.11d)
    μ(k) = 1 for k ≥ 0, and 0 otherwise    (2.4)
The delayed unit step sequence, denoted by μ(k – n), has its first nonzero element at sample time n (Fig. 2.11e):
    μ(k – n) = 1 for k ≥ n, and 0 otherwise    (2.5)
An arbitrary discrete-time signal r(k) switched on to a system at k = 0 is represented as r(k)μ(k).
Sinusoidal Sequence  A one-sided sinusoidal sequence has the general form (Fig. 2.11f)
    r(k) = A cos(Ωk + φ) μ(k)    (2.6)
The quantity Ω is called the frequency of the discrete-time sinusoid and φ is called the phase. Since k is a
dimensionless integer, the dimension of Ω must be radians (we may specify the units of Ω to be radians/
sample, and units of k to be samples).
The fact that k is always an integer in Eqn. (2.6) leads to some differences between the properties of
discrete-time and continuous-time sinusoidal signals. An important difference lies in the range of values the
frequency variable can take on. We know that for the continuous-time signal r(t) = A cos ωt = real {Ae^(jωt)},
ω can take on values in the range (–∞, ∞). In contrast, for the discrete-time sinusoid r(k) = A cos Ωk =
real {Ae^(jΩk)}, Ω can take on values in the range [–π, π].
To illustrate this property of discrete-time sinusoids, consider Ω = π + ξ, where ξ is a small number
compared with π. Since
    e^(jΩk) = e^(j(π + ξ)k) = e^(j(2π – π + ξ)k) = e^(j(–π + ξ)k)
a frequency of (π + ξ) results in a sinusoid of frequency (–π + ξ). Suppose now that Ω is increased to 2π.
Since e^(j2πk) = e^(j0), the observed frequency is 0. Thus, the observed frequency is always between –π and π, and
is obtained by adding (or subtracting) multiples of 2π to Ω until a number in that range is obtained.
The highest frequency that can be represented by a digital signal is therefore π radians/sample interval.
The implications of this property for sequences obtained by sampling sinusoids and other signals will be
discussed in Section 2.11.
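A numerical check of the frequency-folding argument above: for an arbitrary small ξ, the sequences cos((π + ξ)k) and cos((–π + ξ)k) are identical sample for sample.

```python
# Hedged sketch: a discrete-time sinusoid of frequency W = pi + xi is
# indistinguishable from one of frequency -pi + xi, so the observed frequency
# always lies in [-pi, pi]. The value of xi is chosen arbitrarily here.
import numpy as np

xi = 0.2
k = np.arange(20)
x1 = np.cos((np.pi + xi) * k)       # frequency  pi + xi
x2 = np.cos((-np.pi + xi) * k)      # frequency -pi + xi
print(np.allclose(x1, x2))          # True: the two sequences are identical
```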