Automation & Control Systems Corrige
E-mail: [email protected]
BY
Contents
UNIT I: SYSTEMS AND THEIR REPRESENTATION 4
1. GENERAL INTRODUCTION 4
1.1 Definitions 4
1.2 Introduction to control systems and System Concept 5
3. Introduction to control systems and System Concept 5
3.1 What is a system? 5
3.2 What is a control system? 6
3.3 Example of control system applications 6
3.4 Open-loop and closed loop control systems 7
3.4.1. Open loop control systems 7
3.4.2. Closed loop control systems 7
3.4.3. Comparison between open loop and closed loop control system 8
3.5 Methods of representing systems 9
4. Classification of control systems 11
5. Test waveforms used in control system 14
6. Mathematical modeling of control systems 16
6.1. Laplace Transform 17
6.2. Solving Differential Equations using Laplace transforms 22
6.3. Transfer function and impulse-response function 23
6.4 Convolution Integral 25
6.5. Application: Modeling control systems 26
6.5.1. Modeling of electrical systems 26
6.5.2. Modeling of mechanical systems 27
6.5.3. Modeling of thermal systems 33
7. Block diagrams 34
7.1. Definitions 34
7.2. Block diagram properties 35
7.3. Closed-Loop System Subjected to a Disturbance 38
8. Block Diagram Reduction 40
8.1. Block Diagram Reduction Rules 40
8.2. Example problems and solutions 43
8.3. Tutorials: 52
9. Procedures for Drawing a Block Diagram 55
10. Transfer Functions with MATLAB 58
11. Automatic Controllers 58
12. Classifications of Industrial Controllers 59
13. Signal Flow Graph Representation of Control Systems 62
13.1. Construction of a Signal Flow Graph for a System 64
13.2. Mason's Gain Formula 65
UNIT II: TIME RESPONSE 72
2.1. Introduction 72
2.2. First order systems 72
2.2.1. Unit-step response of first order system 72
2.2.2. Unit-Ramp Response of First-Order Systems 73
2.2.3. Unit-Impulse Response of First-Order Systems. 74
2.3. Second order systems 75
2.3.1. Standard form of Second-Order System 76
2.3.2. Step Response of Second-Order System 78
2.4. Definitions of Transient-Response Specifications 80
2.5. Effects of integral and derivative control 86
2.5.1. Integral Control Action 86
2.5.2. Proportional Control of Systems 87
2.5.3. Integral Control of Systems 89
2.5.4. Response to Torque Disturbances (Proportional Control) 89
2.5.5. Response to Torque Disturbances (Proportional-Plus-Integral Control) 90
2.5.6. Derivative Control Action 92
2.5.7. Proportional-Plus-Derivative Control of a System with Inertia Load 93
2.5.8. Proportional-Plus-Derivative Control of Second-Order Systems 94
2.6. Steady-state errors 95
2.6.1. Definition and Test Inputs 95
2.6.2. Sources of Steady-State Error 96
2.6.3. Steady-state errors in unity-feedback control systems 96
2.6.4. Static Position Error Constant Kp 97
2.6.5. Static Velocity Error Constant Kv 98
2.6.6. Static Acceleration Error Constant Ka 100
2.6.7. Summary 102
UNIT III: STABILITY OF CONTROL SYSTEM 108
3.1. Introduction 108
3.2. Location of roots in s-plane for stability 110
3.3. Routh-Hurwitz Criterion 111
3.3.1. Generating a Basic Routh Table 112
3.3.2. Interpreting the Basic Routh Table 114
3.3.3. Routh-Hurwitz Criterion: Special Cases 115
UNIT IV: COMPENSATORS DESIGN 123
4.1. Introduction 123
4.2. Types of compensators 123
4.3. Compensating networks 124
4.3.1. Lead compensator 124
4.3.2. Lag compensation 125
4.3.3. Lead-lag compensator 126
4.4. Cascade Compensation in Frequency Domain Using Bode Plots 128
4.4.1. Design of Lead Compensator 128
4.4.2. Design of Lag Compensator 132
4.4.3. Design of Lag-Lead Compensator 136
4.5. Comparison of Lead, Lag, and Lag–Lead Compensation 137
4.6. Designing and tuning PID controllers 140
4.6.1. Designing of PID controller 140
4.6.2. Tune a PID controller based on Ziegler-Nichols tuning rules 151
REFERENCES 154
APPENDIX 155
UNIT I: SYSTEMS AND THEIR REPRESENTATION
1. GENERAL INTRODUCTION
Automatic control has played a vital role in the advance of engineering and sciences. In addition to
its extreme importance in space vehicle systems, missile-guidance systems, robotic systems, etc.,
automatic control has become an important and integral part of modern manufacturing and
industrial processes.
1.1 Definitions
The controlled variable is the quantity or condition that is measured and controlled, while the
manipulated variable is the quantity or condition that is varied by the controller so as to affect the
value of the controlled variable. Normally, the controlled variable of the system is the output of the
system.
Control means measuring the value of the controlled variable of the system and applying the
manipulated variable to the system to correct or limit deviation of the measured value from a desired
value.
ii. Plants: A plant may be a piece of equipment, perhaps a set of machine parts functioning
together, the purpose of which is to perform a particular operation.
v. Disturbances: A disturbance is a signal that tends to adversely affect the value of the output
of that system. If the disturbance is generated within the system, it is called internal, while an
external disturbance is generated outside the system and is an input.
vi. Feedback control: Feedback control refers to an operation that, in the presence of
disturbances, tends to reduce the difference between the output of a system and some reference
input, and does so on the basis of this difference.
Example of systems:
✔ A power station
✔ A steam turbine in the power station
✔ An airplane
✔ A human being
There is interaction between the system and its surroundings. The boundary between the
system and its environment is drawn depending both on the physical entities involved and on the
purpose of the study.
Ex. Power station – Interest: Relationship between the power station and the community.
-The signals which pass from the environment to the system are termed the system inputs. The
signals passing out from the system to the environment are termed the system outputs.
-Only certain inputs will be available for adjustment (controlled inputs), whereas the others
will be disturbance inputs over which no control exists.
The controlled variable or output: the direction of the two front wheels.
The input variable (or the actuating signal): the direction of the steering wheel.
An open-loop control system utilizes an actuating device to control the process directly
without using feedback. On the basis of knowledge about the system and of past experience, a
prediction is made of what input should be applied to give the desired output, and then the input is
adjusted accordingly.
Examples:
*Programmable washing machine.
1) Such systems are inaccurate and unreliable because the accuracy of such systems is totally
dependent on the accurate recalibration of the controller.
2) Such systems give inaccurate results if there are variations in the external environment, i.e.
such systems cannot sense environmental changes.
3) Similarly, they cannot sense internal disturbances in the system arising after the controller stage.
4) To maintain the quality and accuracy, recalibration of controller is necessary from time to
time.
To overcome all the above disadvantages, generally in practice “closed loop systems” are used.
A closed-loop control system uses a measurement of the output and feeds this signal back to
compare it with the desired input (reference or command). The system output is measured and
compared to the desired value, and the system continually attempts to reduce the error between the
two.
Examples:
Feedback could increase the system gain in one frequency range but decrease it in another.
Feedback reduces error between the reference input and the system output.
Feedback can improve stability or be harmful to stability if it is not applied properly.
Feedback can reduce the effect of noise and disturbance on system performance.
Advantages
1) Accuracy of such a system is always very high because the controller modifies and manipulates
the actuating signal such that the error in the system will be zero.
2) Such a system senses environmental changes, as well as internal disturbances, and accordingly
corrects the error.
3) In such a system, there is a reduced effect of nonlinearities and distortions.
4) Bandwidth of such a system, i.e. its operating frequency range, is very high.
Disadvantages
1) Such systems are complicated and time-consuming from the design point of view and hence
costlier.
2) Due to feedback, the system tries to correct the error from time to time. A tendency to overcorrect
the error may cause oscillations without bound in the system.
Hence the system has to be designed taking into consideration the problems of instability due to feedback.
The stability problems are severe and must be taken care of while designing the system.
3.4.3. Comparison between open loop and closed loop control system
1. Open loop: Any change in the output has no effect on the input, i.e. feedback does not exist. Closed loop: Changes in the output affect the input, which is made possible by the use of feedback.
2. Open loop: Output measurement is not required for operation of the system. Closed loop: Output measurement is necessary.
3. Open loop: Feedback element is absent. Closed loop: Feedback element is present.
4. Open loop: Error detector is absent. Closed loop: Error detector is necessary.
5. Open loop: It is inaccurate and unreliable. Closed loop: Highly accurate and reliable.
6. Open loop: Highly sensitive to disturbances. Closed loop: Less sensitive to disturbances.
7. Open loop: Highly sensitive to environmental changes. Closed loop: Less sensitive to environmental changes.
8. Open loop: Bandwidth is small. Closed loop: Bandwidth is large.
9. Open loop: Simple to construct and cheap. Closed loop: Complicated to design and hence costly.
10. Open loop: Generally stable in nature. Closed loop: Stability is the major consideration while designing.
11. Open loop: Highly affected by nonlinearities. Closed loop: Reduced effect of nonlinearities.
Different pictorial and mathematical ways (models) are used to represent systems.
A useful and very frequently used representation of systems is the block diagram.
Individual blocks are used to represent functional parts of the system. The lines with arrows
indicate the signal flow paths.
Block diagram of temperature control of passenger compartment of a car.
The temperature of the passenger compartment differs considerably depending on the place where it
is measured. Instead of using multiple sensors for temperature measurement and averaging the
measured values, it is economical to install a small suction blower at the place where passengers
normally sense the temperature. The temperature of the air from the suction blower is an indication
of the passenger compartment temperature and is considered the output of the system.
The controller receives the input signal, output signal, and signals from sensors from disturbance
sources. The controller sends out an optimal control signal to the air conditioner or heater to control
the amount of cooling air or warm air so that the passenger compartment temperature is about the
desired temperature.
Block diagrams show only the interrelationships between the different parts of the system and for
analysis must be supplemented by a quantitative description in the form of appropriate
mathematical expression for each of the blocks of the diagram.
The equation relating the outputs to the inputs of the blocks will in general be differential
equations.
Example:
Mathematical Model:
For control engineering purposes, those differential equations are often written as transfer
functions, defined as the ratio of the Laplace transform of the output to that of the input when the
initial conditions are zero.
Block diagram:
Transfer function:
An alternative pictorial representation is the signal flow graph. It illustrates the passage of signals
through a system using junction points (called nodes) and arrowed line segments (called branches).
Natural control system: The systems inside a human being or a biological system are known as
natural control systems.
Man-made control systems: the various control systems that are designed and developed by man
are known as man-made control systems. An automobile system is an example of man-made
control systems.
Combinational control systems: The combination of a natural control system and a man-made
control system. Driver driving a car is an example of combinational control systems.
Time-variant and time invariant control systems: If the parameters of a control system vary with
time, the control system is termed as time-variant system (system a)).If the parameters of a control
system are not varying with time, it is termed as time-invariant control system (system b)).
A space vehicle leaving earth is an example of time varying system. The elements of an electrical
network such as resistance, inductance and capacitance are not time varying; this is an example of
the time-invariant system.
[Figure: (a) time-variant system – the parameters of the system are functions of time; (b) time-invariant system – the parameters of the system are constants, not functions of time.]
Superposition: the response produced by the simultaneous application of two different inputs is the sum of the two individual responses.
Homogeneity: multiplying the input by a constant multiplies the output by the same constant.
A Control System in which output varies linearly with the input is called a linear control system.
Fig. 2. Linear systems characteristics
When the input and the output have a nonlinear relationship, the system is said to be nonlinear.
Key point: In practice it is difficult to find a perfectly linear system. Most physical systems are
non-linear in nature. If the presence of a certain non-linearity does not affect the performance of the
system much, the non-linearity can be neglected and the system can be treated as a
linear system.
Continuous-Time and Discrete-Time Control Systems: If all the system variables of a control
system are functions of continuous time, it is termed as a continuous-time control system. If one or more system
variables of a control system are known only at discrete instants of time, it is termed as a discrete-time
control system. The speed control of a dc motor with tacho-generator feedback is an example of
continuous-time control systems. The microprocessor- or computer-based system is an example of
discrete-time control system.
Fig. 3.1. Discrete signal
Deterministic vs Stochastic Control System: If the response to input, and to external
disturbances, of a control system is predictable and repetitive, the control system is known as a
deterministic system. Any control system is called stochastic if such a response is unpredictable.
Fig. 4.1. Input signal; Fig. 4.2. Repetitive response
If a control system can be described by partial differential equations, it is known
as a distributed-parameter control system. The characteristics of a transmission line, for example, are
described by partial differential equations.
[Figure: block representation of a process, with an Input entering the Process block and an Output leaving it.]
Test input signals are used, both analytically and during testing, to verify the design. It is neither
necessarily practical nor illuminating to choose complicated input signals to analyze a system's
performance. Thus, the engineer usually selects standard test inputs.
These inputs are impulses, steps, ramps, parabolas, and sinusoids, as shown in Table 2.
An impulse is infinite at t = 0 and zero elsewhere. The area under the unit impulse is 1. An
approximation of this type of waveform is used to place initial energy into a system so that the
response due to that initial energy is only the transient response of a system. From this response the
designer can derive a mathematical model of the system.
A step input represents a constant command, such as position, velocity, or acceleration. Typically,
the step input command is of the same form as the output.
For example, if the system's output is position, the step input represents a desired position, and the
output represents the actual position. If the system's output is velocity, the step input represents a
constant desired speed, and the output represents the actual speed. The designer uses step inputs
because both the transient response and the steady-state response are clearly visible and can be
evaluated.
The ramp input represents a linearly increasing command. For example, if the system's output is
position, the input ramp represents a linearly increasing position.
If the system's output is velocity, the input ramp represents a linearly increasing velocity. The
response to an input ramp test signal yields additional information about the steady-state error.
The previous discussion can be extended to parabolic inputs, which are also used to evaluate a
system's steady-state error.
Sinusoidal inputs can also be used to test a physical system to arrive at a mathematical model.
Review questions
1. Define the following: (i) System, (ii) Plant, (iii) Controller, (iv) Input, (v) Output, (vi)
Control System, (vii) Disturbance.
2. Define the following control systems: (i) Time-invariant, (ii) Time-variant, (iii)
Continuous, (iv) Discrete, (v) Deterministic, (vi) Stochastic.
3. Define linear and non-linear control systems.
4. Define Open-loop and closed-loop control systems.
5. State the effects of feedback.
6. What do you mean by servomechanism?
7. State the applications of open-loop and closed-loop control systems.
A mathematical model of a dynamic system is defined as a set of equations that represents the
dynamics of the system accurately, or at least fairly well.
Note that a mathematical model is not unique to a given system. A system may be represented in
many different ways and, therefore, may have many mathematical models, depending on one’s
perspective.
Differential equations: The dynamics of many systems, whether they are mechanical, electrical,
thermal, and so on, may be described in terms of differential equations. Such differential equations
may be obtained by using physical laws governing a particular system—for example, Newton’s
laws for mechanical systems and Kirchhoff’s laws for electrical systems.
Transfer function: It is defined as the ratio of the Laplace transform of the output variable to the
Laplace transform of the input variable, with all zero initial conditions.
Block Diagram: It is used to represent all types of systems. It can be used, together with transfer
functions, to describe the cause and effect relationships throughout the system.
State-space representation.
The Laplace transform is one of the mathematical tools used for the solution of ordinary linear
differential equations. The Laplace transform method has the following two attractive features:
1. The homogeneous equation and the particular integral are solved in one operation.
2. The Laplace transform converts the differential equation into an algebraic equation in s.
It is possible to manipulate the algebraic equation by simple algebraic rules to obtain the solution in
the s-domain.
Given a function f(t) that satisfies the condition that ∫₀∞ |f(t)| e^(−σt) dt is finite for some finite, real σ, the Laplace transform of f(t) is defined as
F(s) = L[f(t)] = ∫₀∞ f(t) e^(−st) dt
The variable s is referred to as the Laplace operator; it is a complex variable, s = σ + jω, where σ
is the real part and ω is the imaginary part, as shown in the figure below:
3. Unit impulse function δ(t):
Therefore: L[δ(t)] = 1
4. Exponential function e^(−at), t ≥ 0:
L[e^(−at)] = 1/(s + a)
Note: Instead of computing the Laplace transform of each function, and/or memorizing complicated
Laplace transforms, use the Laplace transform table.
2. Time delay: L[f(t − T)·1(t − T)] = e^(−sT) F(s)
3. Differentiation: L[df(t)/dt] = sF(s) − f(0)
4. Integration: L[∫₀ᵗ f(τ) dτ] = F(s)/s
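Since the transform pairs and properties above are tabulated rather than derived, a quick way to double-check an entry is with a computer algebra system. The following is a minimal sketch assuming SymPy is available (SymPy is not used elsewhere in this text):

import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# Table entry: L[e^(-a t)] = 1/(s + a)
print(sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True))

# Differentiation property: L[df/dt] = s F(s) - f(0), checked here for f(t) = sin(t)
f = sp.sin(t)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
rhs = s * sp.laplace_transform(f, t, s, noconds=True) - f.subs(t, 0)
print(sp.simplify(lhs - rhs))   # 0 confirms the property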
Examples:
Partial-fraction expansion when all the poles of the transfer function are simple and real:
Note that a pole is a value of s that makes a function, such an F(s), infinite, by making the
denominator of the function to zero.
1. Find the partial-fraction expansion of the following transfer function, and its inverse Laplace.
Solution:
The inverse Laplace transform of the given transfer function can be obtained by using some
equation in Table.
Partial-fraction expansion when some poles of the transfer function are of multiple order:
2. Find the partial-fraction expansion of the following transfer function, and its inverse Laplace.
Solution:
The inverse Laplace of the given transfer function can be obtained by using equations in Laplace
transform Table:
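Because the worked examples above refer to transfer functions given only in the original figures, the sketch below uses an illustrative function of its own, F(s) = (s + 3)/[(s + 1)(s + 2)²], to show how the expansion and the inverse transform can be checked with SymPy (an assumption of this sketch, not a tool used in the text):

import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Illustrative F(s) with a simple pole at s = -1 and a double pole at s = -2
F = (s + 3) / ((s + 1) * (s + 2)**2)

print(sp.apart(F, s))                          # partial-fraction expansion
print(sp.inverse_laplace_transform(F, s, t))   # corresponding time function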
Exercises:
Find the partial-fraction expansion of the following transfer function, and its inverse Laplace.
Decomposition in partial-fraction:
The Laplace Transform can greatly simplify the solution of problems involving differential
equations.
For this, we will need two main tools: property 1 and 3 of Laplace transforms with the following
“General derivative formula”.
The procedures:
- The first step is to take the Laplace transform of both sides of the
original differential equation in t to convert it into an algebraic equation in s,
- Solve and find Y (s),
- Simplify the expression of Y (s) using the method of partial fractions,
- Recall the inverse transforms,
- Use the linearity of the inverse transform.
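As a small illustration of these steps (the equation below is an arbitrary example chosen for this sketch, and SymPy is assumed), consider y'' + 3y' + 2y = 1(t) with y(0) = y'(0) = 0; transforming both sides gives (s² + 3s + 2)Y(s) = 1/s:

import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Y(s) after taking the Laplace transform with zero initial conditions
Y = 1 / (s * (s**2 + 3*s + 2))

print(sp.apart(Y, s))   # = 1/(2*s) - 1/(s + 1) + 1/(2*(s + 2))

# Invert term by term (linearity); the result corresponds to
# y(t) = 1/2 - e^(-t) + e^(-2t)/2 for t >= 0
print(sp.inverse_laplace_transform(Y, s, t))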
Examples:
Therefore,
2. Solve
To find the inverse Laplace transform we will need first simplify the expression for Y (s) using the
partial fraction decomposition.
Finding A, B and C here is especially simple. For example, for A, multiply both sides by
s − 3 and plug s = 3 into the expressions to obtain A = 1/2. In a similar way, B = −2
and C = 5/2.
Therefore, using the linearity of the inverse Laplace transform, we will find
Definitions:
Linear Time-Variant systems: Control systems represented by a differential equation in which one or
more of the coefficients are functions of time, t.
An example of a time-varying system is a spacecraft, in which the mass of the spacecraft changes
during flight due to fuel consumption.
The coefficients m and b are constants.
Dynamic systems that are described by linear, constant-coefficient, differential equations are
called linear time-invariant (LTI) systems.
Note that throughout this course we deal with linear time-invariant (LTI) systems.
Transfer Function: The transfer function of a linear, time-invariant, differential equation system is
defined as the ratio of the Laplace transform of the output (response function) to the Laplace
transform of the input (driving function), under the assumption that all initial conditions are zero.
Transfer function: G(s) = Y(s)/U(s)
Consider the linear time-invariant system defined by the following differential equation:
Taking the Laplace transform and considering zero initial conditions we have:
2. The transfer function is a property of a system itself, independent of the magnitude and nature of
the input or driving function.
3. All initial conditions of the system are set to zero.
4. The transfer function includes the units necessary to relate the input to the output; however, it
does not provide any information concerning the physical structure of the system. The transfer
functions of many physically different systems can be identical.
5. If the transfer function of a system is known, the output or response can be studied for various
forms of inputs with a view toward understanding the nature of the system
6. If the transfer function of a system is unknown, it may be established experimentally by
introducing known inputs and studying the output of the system. Once established, a transfer
function gives a full description of the dynamic characteristics of the system, as distinct from its
physical description.
To derive the transfer function of a system, we use the following procedures:
1. Develop the differential equation for the system by using the physical laws, e.g. Newton’s
laws and Kirchhoff’s laws.
2. Take the Laplace transform of the differential equation under the zero initial conditions.
3. Take the ratio of the output Y(s) to the input U(s). This ratio is the transfer function.
Or 𝑌(𝑠) = 𝐺(𝑠)𝑋(𝑠)
Note that multiplication in the complex domain is equivalent to convolution in the time domain, so
the inverse Laplace transform of Y(s) = G(s)X(s) is given by the following convolution integral:
y(t) = ∫₀ᵗ x(τ) g(t − τ) dτ = ∫₀ᵗ g(τ) x(t − τ) dτ
where both g(t) and x(t) are zero for t < 0.
Impulse-Response Function:
Consider the output (response) of a linear time-invariant system to a unit-impulse input when the
initial conditions are zero. Since the Laplace transform of the unit-impulse is unit (X(s) = 1), the
Laplace transform of the output of the system is 𝑌(𝑠) = 𝐺(𝑠).
The inverse Laplace transform of the output 𝑌(𝑠) = 𝐺(𝑠) gives the impulse response of the
system.
The inverse Laplace transform of G(s), that is, g(t) = L⁻¹[G(s)], is called the impulse-response
function. This function g(t) is also called the weighting function of the system.
The impulse-response function g(t) is thus the response of a linear time-invariant system to a unit-
impulse input when the initial conditions are zero. The Laplace transform of this function gives the
transfer function. Therefore, the transfer function and impulse-response function of a linear, time-
invariant system contain the same information about the system dynamics.
It is hence possible to obtain complete information about the dynamic characteristics of the system
by exciting it with an impulse input and measuring the response.
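To illustrate the last point numerically, the sketch below (assuming SciPy, with an arbitrary G(s) chosen only for the example) computes the impulse-response function directly from a transfer function:

from scipy import signal

# Illustrative transfer function G(s) = 1/(s^2 + 2s + 5)
G = signal.TransferFunction([1.0], [1.0, 2.0, 5.0])

t, g = signal.impulse(G)    # impulse-response (weighting) function g(t)
print(g[:5])                # first few samples of g(t)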
A mathematical model of an electrical circuit can be obtained by applying Kirchhoff’s laws to it.
Example 1.
1) Find the transfer function of the network, Vo(s)/Vi(s).
2) Find the response vo(t) for a unit-step input, i.e. vi(t) = 1 for t ≥ 0 and vi(t) = 0 for t < 0.
Example 2.
Consider the LCR electrical network shown in the figure below.
2) Find the time response vo(t) of the above system for R = 2.5 Ω, C = 0.5 F, L = 0.5 H
and vi(t) = 2 for t ≥ 0, vi(t) = 0 for t < 0.
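Assuming the output of the LCR network is taken across the capacitor (the figure is not reproduced here), the transfer function is Vo(s)/Vi(s) = 1/(LCs² + RCs + 1); a quick numerical check of part 2, sketched with SciPy, is:

import numpy as np
from scipy import signal

R, L, C = 2.5, 0.5, 0.5                                # ohms, henries, farads
G = signal.TransferFunction([1.0], [L*C, R*C, 1.0])    # Vo(s)/Vi(s)

t = np.linspace(0, 10, 500)
t, y = signal.step(G, T=t)        # unit-step response
vo = 2.0 * y                      # the input is a step of amplitude 2, so scale by linearity

print(round(vo[-1], 3))           # settles near 2.0 V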
This section first discusses simple spring systems and simple damper systems. Then we derive
transfer-function models of various mechanical systems.
Figure 3–1
(a) System consisting of two springs in parallel;
(b) System consisting of two springs in series
EXAMPLE 3–1 Let us obtain the equivalent spring constants for the systems shown in Figures 3–
1(a) and (b), respectively.
For the springs in parallel [Figure 3–1(a)], the equivalent spring constant keq is obtained from keq x = k1 x + k2 x, so that keq = k1 + k2.
For the springs in series [Figure 3–1(b)], the force in each spring is the same. Thus the equivalent spring constant keq for this case is found from 1/keq = 1/k1 + 1/k2, that is, keq = k1 k2/(k1 + k2).
EXAMPLE 3–2
Let us obtain the equivalent viscous-friction coefficient for each of the damper systems shown in
Figures 3–2(a) and (b). An oil-filled damper is often called a dashpot. A dashpot is a device that
provides viscous friction, or damping. It consists of a piston and oil-filled cylinder. Any relative
motion between the piston rod and the cylinder is resisted by the oil because the oil must flow
around the piston (or through orifices provided in the piston) from one side of the piston to the
other. The dashpot essentially absorbs energy. This absorbed energy is dissipated as heat, and the
dashpot does not store any kinetic or potential energy.
Figure 3–2
(a) Two dampers connected in parallel;
(b) Two dampers connected in series.
EXAMPLE 3–3
Consider the spring-mass-dashpot system mounted on a massless cart as shown in Figure 3–3. Let
us obtain mathematical models of this system by assuming that the cart is standing still for t < 0 and
the spring-mass-dashpot system on the cart is also standing still for t < 0. In this system, u(t) is the
displacement of the cart and is the input to the system. At t = 0, the cart is moved at a constant speed,
or u̇ = constant. The displacement y(t) of the mass is the output. (The displacement is relative to the
ground.) In this system, m denotes the mass, b denotes the viscous-friction coefficient, and k
denotes the spring constant. We assume that the friction force of the dashpot is proportional to ẏ − u̇ and
that the spring is a linear spring; that is, the spring force is proportional to y − u.
For translational systems, Newton's second law states that m a = ΣF, where m is the mass, a is the acceleration of the mass, and ΣF is the sum of the forces acting on the mass
in the direction of the acceleration a. Applying Newton's second law to the present system and
noting that the cart is massless, we obtain
m (d²y/dt²) = −b (dy/dt − du/dt) − k (y − u)
or
m (d²y/dt²) + b (dy/dt) + k y = b (du/dt) + k u
This equation represents a mathematical model of the system considered. Taking the Laplace
transform of this last equation, assuming zero initial conditions, gives
(m s² + b s + k) Y(s) = (b s + k) U(s)
so the transfer function of the system is Y(s)/U(s) = (b s + k)/(m s² + b s + k).
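A quick simulation of this transfer function (the numerical values below are arbitrary, chosen only for this sketch, and SciPy is assumed) shows the mass eventually following the cart:

from scipy import signal

m, b, k = 1.0, 0.5, 2.0                          # illustrative parameters
G = signal.TransferFunction([b, k], [m, b, k])   # Y(s)/U(s) = (b s + k)/(m s^2 + b s + k)

t, y = signal.step(G)     # displacement of the mass for a unit-step cart displacement
print(round(y[-1], 3))    # approaches 1: at steady state the mass follows the cart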
6.5.3. Modeling of thermal systems
Example
Consider the system shown in Figure below
It is assumed that the tank is insulated to eliminate heat loss to the surrounding air.
It is also assumed that there is no heat storage in the insulation and that the liquid in the tank is
perfectly mixed so that it is at a uniform temperature. Thus, a single temperature is used to describe
the temperature of the liquid in the tank and of the outflowing liquid.
Let us define
7.Block diagrams
7.1. Definitions
A block diagram of a system is a pictorial representation of the functions performed by each
component and of the flow of signals. The block diagram gives an overview of the system.
The above figure shows the way the various items in block diagrams are represented:
Arrows: are used to represent the directions of signal flow.
A summing point: a circle with a cross is the symbol that indicates a summing operation. The plus
or minus sign at each arrowhead indicates whether that signal is to be added or subtracted (It is
where signals are algebraically added together). It is important that the quantities being added or
subtracted have the same dimensions and the same units.
The takeoff point (Branch Point) is similar to the electrical circuit takeoff point. It is a point from
which the signal from a block goes concurrently to other blocks or summing points.
The block is usually drawn with its transfer function written inside it.
U(s) is the input to the block, Y(s) is the output of the block and G(s) is the transfer function of the
block.
Series connection:
The figure above shows an example of a block diagram of a closed-loop system. The output Y(s) is
fed back to the summing point, where it is compared with the reference input R(s). The output of
the block, Y(s) in this case, is obtained by multiplying the transfer function G(s) by the input to the
block, E(s). Any linear control system may be represented by a block diagram consisting of blocks,
summing points, and branch points.
The closed-loop transfer function of a closed-loop system when the feedback transfer function is
unity is:
Y(s)/R(s) = G(s)/(1 + G(s))
When the output is fed back to the summing point for comparison with the input, it is necessary to
convert the form of the output signal to that of the input signal. For example, in a temperature
control system, the output signal is usually the controlled temperature. The output signal, which has
the dimension of temperature, must be converted to a force or position or voltage before it can be
compared with the input signal.
The role of the feedback element is to modify the output before it is compared with the input. In
most cases the feedback element is a sensor that measures the output of the plant. The output of the
sensor is compared with the system input, and the actuating error signal is generated.
The closed loop transfer function of a closed loop system when the feedback transfer function is not
unity.
The ratio of the feedback signal B(s) to the actuating error signal E(s) is called the open-loop transfer
function (OLTF). That is:
OLTF(s) = B(s)/E(s) = G(s)H(s)
The ratio of the output C(s) to the actuating error signal E(s) is called the feedforward transfer function
(FTF):
FTF(s) = C(s)/E(s) = G(s)
If the feedback transfer function H(s) is unity, then the open-loop transfer function and the
feedforward transfer function are the same.
Closed-Loop Transfer Function: the output C(s) and input R(s) are related as follows:
C(s)/R(s) = G(s)/(1 + G(s)H(s))
or
C(s) = [G(s)/(1 + G(s)H(s))] R(s)
Exercise 1. Find the closed-loop transfer function for the following block diagram:
Exercise 2. A control system has a forward path of two elements with transfer functions K
and1/(s+1) as shown. If the feedback path has a transfer function s, what is the transfer function of
the closed loop system?
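Applying the formula C(s)/R(s) = G(s)/(1 + G(s)H(s)) to Exercise 2 gives K/(s + 1 + Ks); a symbolic check of this result, sketched with SymPy, is:

import sympy as sp

s, K = sp.symbols('s K', positive=True)

G = K / (s + 1)    # forward path: K in cascade with 1/(s + 1)
H = s              # feedback path

T = sp.simplify(G / (1 + G * H))   # closed-loop transfer function
print(T)                           # K/(K*s + s + 1)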
8. Block Diagram Reduction
8.1. Block Diagram Reduction Rules
In many practical situations, the block diagram of a single-input, single-output (SISO) feedback
control system may involve several feedback loops and summing points. In principle, the block
diagram of a SISO closed-loop system, no matter how complicated it is, can be reduced to the
standard single-loop form shown in Figure 10-3. The basic approach to simplifying a block diagram
can be summarized as follows:
Example
A complicated block diagram involving many feedback loops can be simplified by a step-by-step
rearrangement. Simplification of the block diagram by rearrangements considerably reduces the
labor needed for subsequent mathematical analysis. It should be noted, however, that as the block
diagram is simplified, the transfer functions in new blocks become more complex because new
poles and new zeros are generated.
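These reduction rules also have direct software counterparts. The sketch below assumes the python-control package (not used elsewhere in this text) and illustrative transfer functions; it cascades two blocks and closes a feedback loop, exactly the operations used in the step-by-step reductions that follow:

import control

G1 = control.tf([1], [1, 1])     # illustrative block 1/(s + 1)
G2 = control.tf([2], [1, 3])     # illustrative block 2/(s + 3)
H  = control.tf([1], [1, 0])     # illustrative feedback element 1/s

Gfwd = control.series(G1, G2)    # blocks in cascade: G1*G2
T = control.feedback(Gfwd, H)    # negative feedback loop: Gfwd/(1 + Gfwd*H)
print(T)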
Consider the system shown in Figure 2–13(a). Simplify this diagram. By moving the summing
point of the negative feedback loop containing H2 outside the positive feedback loop containing
H1, we obtain Figure 2–13(b). Eliminating the positive feedback loop, we have Figure 2–13(c). The
elimination of the loop containing H2/G1 gives Figure 2–13(d). Finally, eliminating the feedback
loop results in Figure 2–13(e).
8.2. Example problems and solutions
Example 1
Solution
Example 2
Example 3
Solution. First, move the branch point of the path involving H1 outside the loop involving H2,
as shown in Figure 2–18(a).
Then eliminating two loops results in Figure 2–18(b).
Example 4
Simplify the block diagram shown in Figure 2–19. Obtain the transfer function relating C(s) and
R(s).
Solution:
The block diagram of Figure 2–19 can be modified to that shown in Figure 2–20(a). Eliminating the
minor feedforward path, we obtain Figure 2–20(b), which can be simplified to Figure 2–20(c). The
transfer function C(s)/R(s) is thus given by
Example 5
Simplify the block diagram shown in Figure 2–21. Then obtain the closed-loop transfer function
C(s)/R(s).
First move the branch point between G3 and G4 to the right-hand side of the loop containing
G3,G4, and H2. Then move the summing point between G1 and G2 to the left-hand side of the first
summing point. See Figure 2–22(a).
By simplifying each loop, the block diagram can be modified as shown in Figure 2–22(b).
Further simplification results in Figure 2–22(c), from which the closed-loop transfer function
C(s)/R(s) is obtained as
Example 6
Obtain transfer functions C(s)/R(s) and C(s)/D(s) of the system shown in Figure 2–23.
Figure 2–23 Control system with reference input and disturbance input.
Example7
Figure 2–24 shows a system with two inputs and two outputs. Derive C1(s)/R1(s), C1(s)/R2(s),
C2(s)/R1(s), and C2(s)/R2(s). (In deriving outputs for R1(s), assume that R2(s) is zero, and vice
versa.)
Figure 2–24 System with two inputs and two outputs.
Note that Equations (2–56) and (2–57) give responses C1 and C2, respectively, when both inputs R1
and R2 are present.
Notice that when R2(s)=0, the original block diagram can be simplified to those shown in Figures
2–25(a) and (b). Similarly, when R1(s)=0, the original block diagram can be simplified to those
shown in Figures 2–25(c) and (d). From these simplified block diagrams we can also obtain
C1(s)/R1(s), C2(s)/R1(s), C1(s)/R2(s), and C2(s)/R2(s), as shown to the right of each corresponding
block diagram.
8.3. Tutorials:
Solution
1- First, move the branch point of the path involving H1 outside the loop involving H2.
2. Simplify the block diagram shown below. Obtain the transfer function relating C(s) and
R(s). (To be solved by students)
3. Simplify the block diagram shown below. Then obtain the closed-loop transfer function
C(s)/R(s). (To be solved by students)
4. Obtain transfer functions C(s)/R(s) and C(s)/D(s) of the system shown below. (To be solved
by students)
To draw a block diagram for a system, first write the equations that describe the dynamic behavior of
each component. Then take the Laplace transforms of these equations, assuming zero initial conditions, and represent
each Laplace-transformed equation individually in block form. Finally, assemble the elements into
a complete block diagram. As an example, consider the RC circuit shown in Figure 2–12(a). The
equations for this circuit are
The Laplace transforms of Equations (2–4) and (2–5), with zero initial condition, become
Equation (2–6) represents a summing operation, and the corresponding diagram is shown in Figure
2–12(b). Equation (2–7) represents the block as shown in Figure 2–12(c). Assembling these two
elements, we obtain the overall block diagram for the system as shown in Figure 2–12(d).
It is important to note that blocks can be connected in series only if the output of one block is not
affected by the next following block. If there are any loading effects between the components, it is
necessary to combine these components into a single block.
Similarly, you can draw the block diagram of any electrical circuit or system just by following this
simple procedure.
1. Convert the time domain electrical circuit into an s-domain electrical circuit by applying
Laplace transform.
2. Write down the equations for the current passing through all series branch elements and
voltage across all shunt branches.
3. Draw the block diagrams for all the above equations individually.
4. Combine all these block diagrams properly in order to get the overall block diagram of the
electrical circuit (s-domain).
Example
Consider a series RLC circuit as shown in the following figure, where Vi(t) and Vo(t) are the
input and output voltages. Let i(t) be the current passing through the circuit. This circuit is in the time
domain.
By applying the Laplace transform to this circuit, we will get the circuit in the s-domain. The circuit is as
shown in the following figure.
Let us now draw the block diagrams for these two equations individually, and then combine those
block diagrams properly in order to get the overall block diagram of the series RLC circuit (s-domain).
The overall block diagram of the series of RLC Circuit (s-domain) is shown in the following figure
11. Automatic Controllers
An automatic controller compares the actual value of the plant output with the reference input
(desired value), determines the deviation, and produces a control signal that will reduce the
deviation to zero or to a small value.
The manner in which the automatic controller produces the control signal is called the control
action. The figure below is a block diagram of an industrial control system.
It consists of an automatic controller, an actuator, a plant, and a sensor (measuring element). The
controller detects the actuating error signal, which is usually at a very low power level, and
amplifies it to a sufficiently high level.
The output of an automatic controller is fed to an actuator, such as an electric motor, a hydraulic
motor, or a pneumatic motor or valve. (The actuator is a power device that produces the input to the
plant according to the control signal so that the output signal will approach the reference input
signal.)
The sensor or measuring element is a device that converts the output variable into another
suitable variable, such as a displacement, pressure, voltage, etc. that can be used to compare
the output to the reference input signal. This element is in the feedback path of the closed loop
system. The set point of the controller must be converted to a reference input with the same
unit as the feedback signal from the sensor or measuring element.
Let the output signal from the controller be u(t) and the actuating error signal be e(t). In two-position
control, the signal u(t) remains at either a maximum or a minimum value, depending on whether the
actuating error signal is positive or negative, so that
u(t) = U1 for e(t) > 0
u(t) = U2 for e(t) < 0
where U1 and U2 are constants. The minimum value U2 is usually either zero or –U1. Figures a)
and b) below show the block diagrams for two-position or on-off controllers. The range through
which the actuating error signal must move before the switching occurs is called the differential gap.
Proportional Control Action
For a controller with proportional control action, the relationship between the output of the
controller u(t) and the actuating error signal e(t) is
u(t) = Kp e(t)
or, in Laplace-transformed quantities,
U(s)/E(s) = Kp
where Kp is termed the proportional gain.
In a controller with integral control action, the value of the controller output u(t) is changed at a rate
proportional to the actuating error signal e(t). That is,
du(t)/dt = Ki e(t)
or
u(t) = Ki ∫₀ᵗ e(τ) dτ
The transfer function of the integral controller is U(s)/E(s) = Ki/s.
Proportional-Plus-Integral-Plus-Derivative Control Action.
The combination of proportional control action, integral control action, and derivative control
action is termed proportional-plus-integral-plus derivative control action. It has the advantages of
each of the three individual control actions. The equation of a controller with this combined action
is given by
u(t) = Kp e(t) + (Kp/Ti) ∫₀ᵗ e(τ) dτ + Kp Td de(t)/dt
or, in transfer-function form,
U(s)/E(s) = Kp (1 + 1/(Ti s) + Td s)
where Kp is the proportional gain, Ti is the integral time, and Td is the derivative time.
The block diagram of a proportional-plus-integral-plus-derivative controller is shown in Figure 2–
10
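To show how this control law is typically realized in software, here is a minimal, hypothetical discrete-time PID sketch; the class name, sample time, and gain values are placeholders invented for this illustration, not part of the text:

class PID:
    """Minimal textbook PID: u = Kp*(e + (1/Ti)*integral(e) + Td*de/dt)."""

    def __init__(self, Kp, Ti, Td, dt):
        self.Kp, self.Ti, self.Td, self.dt = Kp, Ti, Td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement                    # actuating error signal
        self.integral += e * self.dt                  # running integral of the error
        derivative = (e - self.prev_error) / self.dt  # rate of change of the error
        self.prev_error = e
        return self.Kp * (e + self.integral / self.Ti + self.Td * derivative)

# Example usage with placeholder tuning values
controller = PID(Kp=2.0, Ti=1.0, Td=0.1, dt=0.01)
print(controller.update(setpoint=1.0, measurement=0.2))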
A signal flow graph describes how a signal gets modified as it travels from input to output and the
overall transfer function can be obtained very easily by using Mason's gain formula. Let us now
see how a system can be represented by a signal flow graph. Before we describe a system using a
signal flow graph, let us define certain terms.
1. Signal flow graph: It is a graphical representation of the relationships between the variables of
a system.
2. Node: Every variable in a system is represented by a node. The value of the variable is equal to
the sum of the signals coming towards the node. Its value is unaffected by the signals which are
going away from the node.
3. Branch : A signal travels along a branch from one node to another node in the
direction indicated on the branch. Every branch is associated with a gain constant or transmittance.
The signal gets multiplied by this gain as it travels from one node to another.
Fig. 2.36 Example showing nodes, branches and the gains of the branches
13.1. Construction of a Signal Flow Graph for a System
A signal flow graph for a given system can be constructed by writing down the equations governing
the variables. Consider the network shown in Fig. 2.38.
Fig. 2.41 Signal flow graph for the system in Fig. 2.40
The transfer function (gain) of a given signal flow graph can be easily obtained by using Mason's
gain formula. Signal flow graphs were originated by S. J. Mason, who developed a formula to
obtain the ratio of output to input, called the gain, for a given signal flow graph.
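For reference, Mason's gain formula can be stated as follows (standard textbook notation is used here, since the defining equation is not reproduced in this copy):
T = C(s)/R(s) = (1/Δ) Σk Pk Δk
where Pk is the path gain of the k-th forward path; Δ = 1 − (sum of all individual loop gains) + (sum of the gain products of all pairs of non-touching loops) − (sum of the gain products of all triples of non-touching loops) + ...; and Δk is the value of Δ for the part of the graph that does not touch the k-th forward path.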
Fig. 2.42 Signal flow graph to illustrate Mason's gain formula
Example
Fig. 2.44 (a) Block diagram of a system
Fig. 2.44 (b) Signal flow graph of block diagram in Fig. 2.44 (a)
UNIT II: TIME RESPONSE
2.1. Introduction
The time response of a control system consists of two parts: the transient response and the steady-
state response. By transient response, we mean that which goes from the initial state to the final
state.
By steady-state response, we mean the manner in which the system output behaves as t approaches
infinity. Thus the system response c(t) may be written as c(t) = ctr(t) + css(t), where the first
term on the right-hand side of the equation is the transient response and the second term is the
steady-state response.
A simplified block diagram is shown as follows:
For a first-order system C(s)/R(s) = 1/(Ts + 1) subjected to the unit-step input R(s) = 1/s, the output is
C(s) = 1/[s(Ts + 1)], whose inverse Laplace transform is c(t) = 1 − e^(−t/T) for t ≥ 0.
This states that initially the output c(t) is zero and finally it becomes unity.
One important characteristic of such an exponential response curve c(t) is that at t = T the value of
c(t) is 0.632, i.e. the response c(t) has reached 63.2% of its total change.
This may be easily seen by substituting t = T in c(t); that is, c(T) = 1 − e^(−1) = 0.632.
Note that the smaller the time constant T, the faster the system response. Another important
characteristic of the exponential response curve is that the slope of the tangent line at t=0 is 1/T,
since
The output would reach the final value at t=T if it maintained its initial speed of response. The
slope of the response curve 𝑐(𝑡) decreases monotonically from 1/T at t=0 to zero at 𝑡 = ∞.
In one time constant, the exponential response curve has gone from 0 to 63.2% of the final value. In
two time constants, the response reaches 86.5% of the final value. At t = 3T, 4T, and 5T, the response
reaches 95%, 98.2%, and 99.3%, respectively, of the final value. Thus, for t ≥ 4T, the response
remains within 2% of the final value. The steady state is reached mathematically only after an
infinite time.
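These percentages are easy to verify numerically; the sketch below assumes SciPy and an arbitrary time constant of T = 2 s:

import numpy as np
from scipy import signal

T = 2.0                                         # illustrative time constant
G = signal.TransferFunction([1.0], [T, 1.0])    # first-order system 1/(T s + 1)

t = np.linspace(0, 5 * T, 1001)
t, c = signal.step(G, T=t)

for n in (1, 2, 3, 4, 5):                       # response at t = T, 2T, ..., 5T
    idx = np.argmin(np.abs(t - n * T))
    print(n, round(c[idx], 3))                  # approx 0.632, 0.865, 0.950, 0.982, 0.993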
The exponential response curve of the first order time response with unit step signal is shown below
As t approaches infinity, e^(−t/T) approaches zero, and thus the error signal e(t) approaches T, or
e(∞) = T.
The error in following the unit-ramp input is equal to T for sufficiently large t. The smaller the time
constant T, the smaller the steady-state error in following the ramp input.
2.3. Second order systems
In this section, we shall obtain the response of a typical second-order control system to a step input,
ramp input, and impulse input. Here we consider a servo system as an example of a second order
system as shown below.
The above servo system consists of a proportional controller and load elements (inertia and viscous-
friction elements). Suppose that we wish to control the output position c in accordance with the
input position r.
The equation for the load elements is 𝐽𝑐̈ + 𝐵𝑐̇ = 𝛵 where T is the torque produced by the
proportional controller whose gain is K. By taking Laplace transforms of both sides of this last
equation, assuming the zero initial conditions, we obtain
Simplifying the block diagram we obtain
Such a system where the closed-loop transfer function possesses two poles is called a second-order
system. (Some second-order systems may involve one or two zeros.)
The natural frequency of a second-order system is the frequency of oscillation of the system
without damping.For example, the frequency of oscillation of a series RLC circuit with the
resistance shorted would be the natural frequency.
We define the damping ratio, 𝜁, to be
If 0 < 𝜁 < 1, the closed-loop poles are complex conjugates and lie in the left-half s plane. The
system is then called under-damped, and the transient response is oscillatory.
If 𝜁 =0, the transient response does not die out, and the response is called undamped.
2.3.2. Step Response of Second-Order System
Using the standard form of the second order systems, we shall now solve for the response of the
system to a unit-step input. We shall consider three different cases: the under-damped (0 < 𝜁 < 1),
critically damped (𝜁 = 1), and over-damped(𝜁 > 1)cases.
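The three cases are easy to visualize numerically. The sketch below assumes SciPy and Matplotlib, with ωn = 1 rad/s and the damping ratios chosen arbitrarily for illustration:

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

wn = 1.0                                   # illustrative natural frequency (rad/s)
t = np.linspace(0, 15, 1001)

for zeta in (0.2, 1.0, 2.0):               # under-damped, critically damped, over-damped
    G = signal.TransferFunction([wn**2], [1.0, 2*zeta*wn, wn**2])
    _, c = signal.step(G, T=t)
    plt.plot(t, c, label="zeta = " + str(zeta))

plt.xlabel("t (s)")
plt.ylabel("c(t)")
plt.legend()
plt.show()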
The inverse Laplace transform can be obtained easily if 𝐶(𝑠)is written in the following form:
Or,
Hence
It can be seen that the frequency of transient oscillation is the damped natural frequency 𝜔𝑑 and thus
varies with the damping ratio 𝜁.
The error signal for this system is the difference between the input and output and is
This error signal exhibits a damped sinusoidal oscillation. At steady state, or at 𝑡 = ∞, no error
exists between the input and output.
If the damping ratio 𝜁 is equal to zero, the response becomes undamped and oscillations continue
indefinitely. The response 𝑐(𝑡) for the zero damping case may be obtained by substituting 𝜁 = 0,
2.4. Definitions of Transient-Response Specifications
Frequently, the performance characteristics of a control system are specified in terms of the
transient response to a unit-step input, since it is easy to generate and is sufficiently drastic. (If the
response to a step input is known, it is mathematically possible to compute the response to any
input.)
The transient response of a system to a unit-step input depends on the initial conditions.
For convenience in comparing transient responses of various systems, it is a common practice to
use the standard initial condition that the system is at rest initially with the output and all time
derivatives thereof zero. Then the response characteristics of many systems can be easily compared.
The transient response of a practical control system often exhibits damped oscillations before
reaching steady state. In specifying the transient-response characteristics of a control system to a
unit-step input, it is common to specify the following:
1. Delay time, 𝑡𝑑
2. Rise time, 𝑡𝑟
3. Peak time, 𝑡𝑝
4. Maximum overshoot, 𝑀𝑝
5. Settling time, 𝑡𝑠
These specifications are defined in what follows and are shown graphically in the figure below:
1. Delay time, 𝑡𝑑 : The delay time is the time required for the response to reach half the final
value the very first time.
2. Rise time, tr: The rise time is the time required for the response to rise from 10% to 90%,
5% to 95%, or 0% to 100% of its final value. For underdamped second-order systems, the
0% to 100% rise time is normally used. For overdamped systems, the 10% to 90% rise time
is commonly used.
tr = (π − β)/ωd, where ωd = ωn √(1 − ζ²), and the angle β is obtained from the figure below as
β = tan⁻¹(ωd/σ), with σ = ζωn.
3. Peak time, tp: The peak time is the time required for the response to reach the first peak of
the overshoot: tp = π/ωd.
4. Maximum (percent) overshoot, 𝛭𝑝 : The maximum overshoot is the maximum peak value of
the response curve measured from unity. If the final steady-state value of the response
differs from unity, then it is common to use the maximum percent overshoot.
Mp = e^(−(σ/ωd)π) × 100%, where σ = ζωn and ωd = ωn √(1 − ζ²).
The amount of the maximum (percent) overshoot directly indicates the relative stability of the
system.
5. Settling time, ts: The settling time is the time required for the response curve to reach and
stay within a range about the final value specified as an absolute percentage of the final
value (usually 2% or 5%). The settling time is related to the largest time constant of the
control system: for the 2% criterion ts ≈ 4/(ζωn), and for the 5% criterion ts ≈ 3/(ζωn).
Which percentage error criterion to use may be determined from the objectives of the system
design in question.
Example 1
Consider the system shown in figure below, where 𝜁 = 0.6 and 𝜔𝑛 = 5 𝑟𝑎𝑑/𝑠𝑒𝑐. Let us obtain
the rise time 𝑡𝑟 , peak time 𝑡𝑝 , maximum overshoot 𝛭𝑝 and settling time 𝑡𝑠 when the system is
subjected to a unit-step input.
Solution:
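Using the formulas of Section 2.4 with ζ = 0.6 and ωn = 5 rad/s gives σ = ζωn = 3 and ωd = ωn√(1 − ζ²) = 4 rad/s; the short script below (a sketch assuming NumPy) reproduces the standard results tr ≈ 0.55 s, tp ≈ 0.785 s, Mp ≈ 9.5%, and ts ≈ 1.33 s (2% criterion) or 1 s (5% criterion):

import numpy as np

zeta, wn = 0.6, 5.0
sigma = zeta * wn                       # 3.0
wd = wn * np.sqrt(1 - zeta**2)          # 4.0 rad/s

beta = np.arctan2(wd, sigma)
tr = (np.pi - beta) / wd                # rise time      ~ 0.55 s
tp = np.pi / wd                         # peak time      ~ 0.785 s
Mp = np.exp(-(sigma / wd) * np.pi)      # max overshoot  ~ 0.095 (9.5 %)
ts2 = 4.0 / sigma                       # settling time (2 % criterion) ~ 1.33 s
ts5 = 3.0 / sigma                       # settling time (5 % criterion) ~ 1.0 s

print(tr, tp, Mp, ts2, ts5)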
Example 2
When the system shown in Figure (a) is subjected to a unit-step input, the system output responds
as shown in Figure (b).
Determine the values of K and T from the response curve.
Solution
Consequently,
Example 3
Determine the values of K and k of the closed-loop system shown in Figure below so that the
maximum overshoot in unit-step response is 25% and the peak time is 2 sec.
Assume that 𝐽 = 1 𝑘𝑔𝑚2 .
Solution
2.5. Effects of integral and derivative control
2.5.1. Integral Control Action
In the proportional control of a plant whose transfer function does not possess an integrator 1/s,
there is a steady-state error, or offset, in the response to a step input. Such an offset can be
eliminated if the integral control action is included in the controller.
In the integral control of a plant, the control signal—the output signal from the controller—at any
instant is the area under the actuating-error-signal curve up to that instant. The control signal u(t)
can have a nonzero value when the actuating error signal e(t) is zero.
The Plots of e(t) and u(t) curves showing nonzero control signal when the actuating error signal is
zero (integral control) are shown as follow:
This is impossible in the case of the proportional controller, since a nonzero control signal requires
a nonzero actuating error signal. (A nonzero actuating error signal at steady state means that there is
an offset.). The plots of e(t) and u(t) curves showing zero control signal when the actuating error
signal is zero (proportional control).
Note that integral control action, while removing offset or steady-state error, may lead to oscillatory
response of slowly decreasing amplitude or even increasing amplitude, both of which are usually
undesirable.
Let us obtain the steady-state error in the unit-step response of the system.
Define
Such a system without an integrator in the feedforward path always has a steady-state error in the
step response. Such a steady-state error is called an offset. Figure 5–38 shows the unit-step
response and the offset.
The controller is an integral controller.
The closed-loop transfer function of the system is
Since the system is stable, the steady-state error for the unit-step response can be obtained by
applying the final-value theorem, as follows:
Integral control of the system thus eliminates the steady-state error in the response to the step input.
This is an important improvement over the proportional control alone, which gives offset.
Let us investigate the effect of a torque disturbance occurring at the load element. Consider the
system shown in below.
The proportional controller delivers torque T to position the load element, which consists of
moment of inertia and viscous friction.
Torque disturbance is denoted by D.
Assuming that the reference input is zero or R(s) = 0, the transfer function between
C(s) and D(s) is given by
To eliminate offset due to torque disturbance, the proportional controller may be replaced
by a proportional-plus-integral controller.
If integral control action is added to the controller, then, as long as there is an error signal, a
torque is developed by the controller to reduce this error, provided the control system is a
stable one.
The closed-loop transfer function between C(s) and D(s) is
In the absence of the reference input, or r(t)=0, the error signal is obtained from
If this control system is stable—that is, if the roots of the characteristic equation
have negative real parts—then the steady-state error in the response to a unit-step disturbance
torque can be obtained by applying the final-value theorem as follows:
Thus steady-state error to the step disturbance torque can be eliminated if the controller is of the
proportional-plus-integral type.
Note that the integral control action added to the proportional controller has converted the
originally second-order system to a third-order one. Hence the control system may become unstable
for a large value of 𝐾𝑝 , since the roots of the characteristic equation may have positive real parts.
(The second-order system is always stable if the coefficients in the system differential equation are
all positive.)
It is important to point out that if the controller were an integral controller, as in the figure below, then
the system always becomes unstable, because the characteristic equation
will have roots with positive real parts. Such an unstable system cannot be used in practice.
Note that the proportional control action tends to stabilize the system, while the integral control
action tends to eliminate or reduce steady state error in response to various inputs.
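Both effects can be seen side by side in a short simulation. The sketch below assumes the python-control package; the first-order plant and the gain values are illustrative choices, not taken from the figures in the text:

import numpy as np
import control

plant = control.tf([1], [1, 1])            # illustrative plant 1/(s + 1), no integrator

Kp, Ki = 2.0, 2.0
P  = control.tf([Kp], [1])                 # proportional controller
PI = control.tf([Kp, Ki], [1, 0])          # proportional-plus-integral controller Kp + Ki/s

t = np.linspace(0, 10, 500)
for name, ctrl in (("P", P), ("PI", PI)):
    T_cl = control.feedback(ctrl * plant, 1)        # unity-feedback closed loop
    t_out, y = control.step_response(T_cl, T=t)
    print(name, "final value:", round(float(y[-1]), 3))   # P leaves an offset, PI does not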
Derivative control action, when added to a proportional controller, provides a means of obtaining a
controller with high sensitivity.
An advantage of using derivative control action is that it responds to the rate of change of the
actuating error and can produce a significant correction before the magnitude of the actuating error
becomes too large. Derivative control thus anticipates the actuating error, initiates an early
corrective action, and tends to increase the stability of the system.
Although derivative control does not affect the steady-state error directly, it adds damping to the
system and thus permits the use of a larger value of the gain K, which will result in an improvement
in the steady-state accuracy.
Because derivative control operates on the rate of change of the actuating error and not the
actuating error itself, this mode is never used alone. It is always used in combination with
proportional or proportional-plus-integral control action.
Before we discuss further the effect of derivative control action on system performance, we shall
consider the proportional control of an inertia load.
The closed-loop transfer function is obtained as
C(s)/R(s) = Kp/(Js² + Kp)
Since the roots of the characteristic equation Js² + Kp = 0
are imaginary, the response to a unit-step input continues to oscillate indefinitely, as shown below.
Control systems exhibiting such response characteristics are not desirable. We shall see that the
addition of derivative control will stabilize the system.
The characteristic equation Js² + Kp Td s + Kp = 0 now has two roots with negative real parts for
positive values of J, Kp, and Td. Thus derivative control introduces a damping effect.
A typical response curve c(t) to a unit-step input is shown in Figure below.
Clearly, the response curve shows a marked improvement over the original response curve shown
in Figure above.
2.6. Steady-state errors
2.6.1. Definition and Test Inputs
Steady-state error is the difference between the input and the output for a prescribed test input as
𝑡 → ∞.
Test inputs used for steady-state error analysis and design are summarized in the table below
In order to explain how these test signals are used, let us assume a position control system, where
the output position follows the input commanded position.
Step inputs represent constant position and thus are useful in determining the ability of the control
system to position itself with respect to a stationary target, such as a satellite in geostationary orbit.
The figure below shows how the test inputs for steady-state error analysis and design vary with the type of target.
An antenna position control is an example of a system that can be tested for accuracy using step
inputs.
Ramp inputs represent constant-velocity inputs to a position control system by their linearly
increasing amplitude. These waveforms can be used to test a system’s ability to follow a linearly
increasing input or, equivalently, to track a constant velocity target.
For example, a position control system that tracks a satellite that moves across the sky at a constant
angular velocity, as shown in Figure above, would be tested with a ramp input to evaluate the
steady-state error between the satellite’s angular position and that of the control system.
Finally, parabolas, whose second derivatives are constant, represent constant acceleration inputs to
position control systems and can be used to represent accelerating targets, such as the missile in
Figure above, to determine the steady-state error performance.
Errors in a control system can be attributed to many factors. Changes in the reference input will
cause unavoidable errors during transient periods and may also cause steady-state errors.
Imperfections in the system components, such as static friction, backlash, and amplifier drift, as
well as aging or deterioration, will cause errors at steady state.
In this section, we shall investigate a type of steady-state error that is caused by the incapability of a
system to follow particular types of input.
2.6.3. Steady-state errors in unity-feedback control systems
Consider a unity-feedback control system with open-loop transfer function G(s). The closed-loop transfer function is C(s)/R(s) = G(s)/(1 + G(s)).
The transfer function between the error signal e(t) and the input signal r(t) is E(s)/R(s) = 1/(1 + G(s)), where the error e(t) is the difference between the input signal and the output signal.
The final-value theorem provides a convenient way to find the steady-state performance of a stable system. Since E(s) = R(s)/(1 + G(s)), the steady-state error is e_ss = lim(t→∞) e(t) = lim(s→0) sE(s) = lim(s→0) sR(s)/(1 + G(s)).
2.6.4. Static Position Error Constant Kp
For a unit-step input, R(s) = 1/s and the steady-state error is e_ss = 1/(1 + lim(s→0) G(s)). The static position error constant Kp is defined by Kp = lim(s→0) G(s) = G(0).
Thus, the steady-state error in terms of the static position error constant Kp is given by e_ss = 1/(1 + Kp).
Hence, for a type 0 system, the static position error constant Kp is finite, while for a type 1 or higher system, Kp is infinite.
For a unit-step input, the steady-state error e_ss may therefore be summarized as follows: e_ss = 1/(1 + Kp) for type 0 systems, and e_ss = 0 for type 1 or higher systems.
2.6.5. Static Velocity Error Constant 𝑲𝒗
The steady-state error of the system with a unit-ramp input (R(s) = 1/s²) is given by e_ss = lim(s→0) s·(1/s²)/(1 + G(s)) = lim(s→0) 1/(sG(s)). The static velocity error constant Kv is defined by Kv = lim(s→0) sG(s).
Thus, the steady-state error in terms of the static velocity error constant Kv is given by e_ss = 1/Kv.
The term velocity error is used here to express the steady-state error for a ramp input. The dimension of the velocity error is the same as that of the system error. That is, velocity error is not an error in velocity, but an error in position due to a ramp input.
For a type 0 system, Kv = 0; for a type 1 system, Kv is finite; and for a type 2 or higher system, Kv is infinite.
The steady-state error e_ss for the unit-ramp input can therefore be summarized as follows: e_ss = ∞ for type 0 systems, e_ss = 1/Kv for type 1 systems, and e_ss = 0 for type 2 or higher systems.
2.6.6. Static Acceleration Error Constant Ka
The steady-state error of the system with a unit-parabolic input (acceleration input), which is defined by r(t) = t²/2 for t ≥ 0 and r(t) = 0 for t < 0, is given by e_ss = lim(s→0) s·(1/s³)/(1 + G(s)) = lim(s→0) 1/(s²G(s)).
The static acceleration error constant Ka is defined by the equation Ka = lim(s→0) s²G(s), so that e_ss = 1/Ka.
Note that the acceleration error, the steady-state error due to a parabolic input, is an error in position. The values of Ka are obtained as follows: Ka = 0 for type 0 and type 1 systems, Ka is finite for type 2 systems, and Ka is infinite for type 3 or higher systems. The corresponding steady-state errors are e_ss = ∞ for type 0 and type 1 systems, e_ss = 1/Ka for type 2 systems, and e_ss = 0 for type 3 or higher systems.
The response of a type 2 unity-feedback system to a parabolic input is shown below.
2.6.7. Summary
Table below summarizes the steady-state errors for type 0, type 1, and type 2 systems when they are
subjected to various inputs. The finite values for steady-state errors appear on the diagonal line.
Above the diagonal, the steady-state errors are infinity; below the diagonal, they are zero.
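As an illustration of these formulas, the MATLAB sketch below evaluates Kp, Kv, and Ka for an assumed type 1 open-loop transfer function G(s) = 10/[s(s+1)(s+5)] (an example system, not one from the text); for it the step error is zero, the ramp error is 1/Kv, and the parabolic error is infinite, in agreement with the table.

% Minimal sketch: static error constants for an assumed type 1 plant
s  = tf('s');
G  = 10/(s*(s + 1)*(s + 5));     % assumed open-loop transfer function (type 1)
Kp = dcgain(G)                   % lim_{s->0} G(s)       -> Inf (pole at the origin)
Kv = dcgain(minreal(s*G))        % lim_{s->0} s*G(s)     -> 10/(1*5) = 2
Ka = dcgain(minreal(s^2*G))      % lim_{s->0} s^2*G(s)   -> 0
ess_step  = 1/(1 + Kp)           % -> 0
ess_ramp  = 1/Kv                 % -> 0.5
ess_parab = 1/Ka                 % -> Inf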
Problem 1:
For each system of Figures (a), (b), and (c), evaluate the static error constants and find the expected
error for the standard step, ramp, and parabolic inputs.
Solution
From figure (b)
From figure (c)
Problem 2:
a. Evaluate system type, Kp, Kv, and Ka.
b. Use your answers to a. to find the steady-state errors for the standard step, ramp, and
parabolic inputs.
Referring to the system shown in the figure below, determine the values of K and k such that the system has a damping ratio ζ of 0.7 and an undamped natural frequency ωn of 4 rad/sec.
Obtain both analytically and computationally the rise time, peak time, maximum overshoot, and
settling time in the unit-step response of a closed-loop system given by
ASSIGNMENT PROBLEM:
Using Newton's law, the modeling equation for this system becomes m(dv/dt) + bv = u, where u is the force from the engine. Assume m = 1000 kg and b = 50 N·s/m. When the engine delivers a 500 N force, the car will reach a maximum velocity of 10 m/s (about 22 mph). The car should be able to accelerate up to that speed in less than 5 seconds; a 10% overshoot on the velocity will not do much damage, and a 2% steady-state error is also acceptable.
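A quick way to check this model numerically is the MATLAB sketch below, which builds the first-order transfer function V(s)/U(s) = 1/(ms + b) implied by the modeling equation (the controller design itself is left as the assignment).

% Minimal sketch: open-loop cruise-control model m*dv/dt + b*v = u
m = 1000;                 % vehicle mass [kg]
b = 50;                   % friction coefficient [N*s/m]
s = tf('s');
G = 1/(m*s + b);          % V(s)/U(s)
step(500*G, 100); grid on % response to a 500 N step settles near 500/50 = 10 m/s
stepinfo(500*G)           % rise and settling times of the uncontrolled model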
UNIT III: STABILITY OF CONTROL SYSTEM
3.1. Introduction
What, then, is stability? There are many definitions for stability, depending upon the kind of system
or the point of view. In this section, we limit ourselves to linear, time-invariant (LTI) systems.
A linear, time-invariant system is stable if the natural response approaches zero as time approaches
infinity.
A linear, time-invariant system is unstable if the natural response grows without bound as time
approaches infinity.
A linear, time-invariant system is marginally stable if the natural response neither decays nor grows
but remains constant or oscillates as time approaches infinity.
Thus, the definition of stability implies that only the forced response remains as the natural
response approaches zero.
These definitions rely on a description of the natural response. When one is looking at the total
response, it may be difficult to separate the natural response from the forced response. However, we
realize that if the input is bounded and the total response is not approaching infinity as time
approaches infinity, then the natural response is obviously not approaching infinity. If the input is
unbounded, we see an unbounded total response, and we cannot arrive at any conclusion about the
stability of the system; we cannot tell whether the total response is unbounded because the forced
response is unbounded or because the natural response is unbounded.
Thus, our alternate definition of stability, one that regards the total response and implies the first
definition based upon the natural response, is this:
A system is stable if every bounded input yields a bounded output. We call this statement the
bounded-input, bounded-output (BIBO) definition of stability.
Let us now produce an alternate definition for instability based on the total response rather than the
natural response. We realize that if the input is bounded but the total response is unbounded, the
system is unstable, since we can conclude that the natural response approaches infinity as time
approaches infinity. If the input is unbounded, we will see an unbounded total response, and we
cannot draw any conclusion about the stability of the system; we cannot tell whether the total
response is unbounded because the forced response is unbounded or because the natural response is
unbounded.
Thus, our alternate definition of instability, one that regards the total response, is this:
A system is unstable if any bounded input yields an unbounded output.
These definitions help clarify our previous definition of marginal stability, which really means that
the system is stable for some bounded inputs and unstable for others. For example, we will show
that if the natural response is undamped, a bounded sinusoidal input of the same frequency yields a
natural response of growing oscillations. Hence, the system appears stable for all bounded inputs
except this one sinusoid. Thus, marginally stable systems by the natural response definitions are
included as unstable systems under the BIBO definitions.
Physically, an unstable system whose natural response grows without bound can cause damage to
the system, to adjacent property, or to human life. Many times systems are designed with limited
stops to prevent total runaway. From the perspective of the time response plot of a physical system,
instability is displayed by transients that grow without bound and, consequently, a total response
that does not approach a steady-state value or other forced response.
3.2. Location of roots in s-plane for stability
How do we determine if a system is stable? Let us focus on the natural response definitions of
stability. Recall from our study of system poles that poles in the left half-plane (lhp) yield either
pure exponential decay or damped sinusoidal natural responses. These natural responses decay to
zero as time approaches infinity. Thus, if the closed-loop system poles are in the left half of the
plane and hence have a negative real part, the system is stable. That is, stable systems have closed-
loop transfer functions with poles only in the left half-plane.
Poles in the right half-plane (rhp) yield either pure exponentially increasing or exponentially
increasing sinusoidal natural responses. These natural responses approach infinity as time
approaches infinity. Thus, if the closed-loop system poles are in the right half of the s-plane and
hence have a positive real part, the system is unstable.
Also, poles of multiplicity greater than 1 on the imaginary axis lead to the sum of responses of the
form A·t^n·cos(ωt + φ), where n = 1, 2, …, which also approaches infinity as time approaches
infinity. Thus, unstable systems have closed-loop transfer functions with at least one pole in the
right half-plane and/or poles of multiplicity greater than 1 on the imaginary axis.
Finally, a system that has imaginary axis poles of multiplicity 1 yields pure sinusoidal oscillations
as a natural response. These responses neither increase nor decrease in amplitude. Thus, marginally
stable systems have closed-loop transfer functions with only imaginary axis poles of multiplicity 1
and poles in the left half-plane.
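In MATLAB this pole-location test can be carried out directly, as in the sketch below (the closed-loop transfer function used here is an assumed example, not one of the systems in the figures).

% Minimal sketch: classify stability from closed-loop pole locations
T = tf(20, [1 3 7 20]);    % assumed closed-loop transfer function
p = pole(T);               % closed-loop poles
if all(real(p) < 0)
    disp('stable: all poles in the left half-plane')
elseif any(real(p) > 0)
    disp('unstable: at least one pole in the right half-plane')
else
    disp('poles on the imaginary axis: marginally stable at best')
end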
Consider the unit step response of the stable system shown below
This system has its poles in the left half of the s-plane, and hence it is stable.
The oscillations in the response for the stable system diminish and system’s response in this case
approaches a steady-state value of unity.
Consider the unit step response of the unstable system shown below
This system has the poles on the right half of s-plane, hence unstable.
The oscillations in the response for the unstable system increase without bound.
3.3. Routh-Hurwitz Criterion
We now present a method that tells us how many closed-loop system poles lie in the left half-plane, in the right half-plane, and on the jω-axis. (Notice that we say how many, not where.) We can find the number of poles in each section of the s-plane, but we cannot find their coordinates. The method is called the Routh-Hurwitz criterion for stability (Routh, 1905).
The method requires two steps: (1) Generate a data table called a Routh table and (2) interpret the
Routh table to tell how many closed-loop system poles are in the left half-plane, the right half-
plane, and on the jω-axis. You might wonder why we study the Routh-Hurwitz criterion when
modern calculators and computers can tell us the exact location of system poles. The power of the
method lies in design rather than analysis. For example, if you have an unknown parameter in the
denominator of a transfer function, it is difficult to determine via a calculator the range of this
parameter to yield stability. You would probably rely on trial and error to answer the stability
question. We shall see later that the Routh-Hurwitz criterion can yield a closed-form expression for
the range of the unknown parameter.
3.3.1. Generating a Basic Routh Table
In this section, we make and interpret a basic Routh table. In the next section, we consider two
special cases that can arise when generating this data table.
Since we are interested in the system poles, we focus our attention on the denominator
We first create the Routh table by labeling the rows with powers of s from the highest power of the
denominator of the closed-loop transfer function down to s^0.
Next start with the coefficient of the highest power of s in the denominator and list, horizontally in
the first row, every other coefficient.
In the second row, list horizontally, starting with the next highest power of s, every coefficient that
was skipped in the first row.
The remaining entries are filled in as follows. Each entry is a negative determinant of entries in the
previous two rows divided by the entry in the first column directly above the calculated row. The
left-hand column of the determinant is always the first column of the previous two rows, and the
right-hand column is the elements of the column above and to the right. The table is complete when
all of the rows are completed down to s^0.
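The construction just described is easy to automate. The MATLAB sketch below is a minimal helper (routh_table is an assumed name, not a built-in function) that builds the basic table from the polynomial coefficients, highest power first; it assumes neither special case occurs (no zero in the first column and no row of zeros).

% Minimal sketch of a basic Routh table builder (save as routh_table.m)
function R = routh_table(c)
    n    = numel(c);                 % number of coefficients = order + 1
    cols = ceil(n/2);
    R    = zeros(n, cols);
    R(1, :) = c(1:2:end);            % s^n row: every other coefficient
    r2 = c(2:2:end);                 % s^(n-1) row: the skipped coefficients
    R(2, 1:numel(r2)) = r2;
    for i = 3:n                      % remaining rows from 2x2 determinants
        for j = 1:cols-1
            R(i, j) = -det([R(i-2,1) R(i-2,j+1); R(i-1,1) R(i-1,j+1)]) / R(i-1,1);
        end
    end
end

For instance, routh_table([1 10 31 1030]) gives first-column entries 1, 10, −72, 1030, i.e. two sign changes, consistent with the example discussed below (which scales its s^2 row by 1/10).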
Example
Problem: Make the Routh table for the system shown in Figure below.
Solution: The first step is to find the equivalent closed-loop system because we want to test the
denominator of this function, not the given forward transfer function, for pole location. Using the
feedback formula, we obtain the equivalent system as shown below.
The Routh-Hurwitz criterion will be applied to this denominator. First label the rows with powers
of s from s^3 down to s^0 in a vertical column. Next form the first row of the table, using the
coefficients of the denominator of the closed-loop transfer function. Start with the coefficient of the
highest power and skip every other power of s. Now form the second row with the coefficients of
the denominator skipped in the previous step.
For convenience, any row of the Routh table can be multiplied by a positive constant without
changing the values of the rows below. This can be proved by examining the expressions for the
entries and verifying that any multiplicative constant from a previous row cancels out. In the second
row of the Routh table above, for example, the row was multiplied by 1/10. We see later that care must
be taken not to multiply the row by a negative constant.
Now that we know how to generate the Routh table, let us see how to interpret it.
3.3.2. Interpreting the Basic Routh Table
The basic Routh table applies to systems with poles in the left and right half-planes.
Systems with imaginary poles and the kind of Routh table that results will be discussed in the next
section. Simply stated, the Routh-Hurwitz criterion declares that the number of roots of the
polynomial that are in the right half-plane is equal to the number of sign changes in the first
column.
If the closed-loop transfer function has all poles in the left half of the s-plane, the system is stable.
Thus, a system is stable if there are no sign changes in the first column of the Routh table. For
example, the Routh table above has two sign changes in the first column. The first sign change occurs from 1 in the s^2 row to −72 in the s^1 row. The second occurs from −72 in the s^1 row to 103 in the s^0 row.
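This conclusion can be cross-checked numerically. Assuming the closed-loop denominator in this example is s³ + 10s² + 31s + 1030 (which reproduces the 1, −72, and 103 entries quoted above), MATLAB's roots confirms the pole count:

% Cross-check of the Routh result against the numeric roots (denominator assumed as above)
p   = roots([1 10 31 1030])   % one real pole near s = -13.4 plus a complex pair
rhp = sum(real(p) > 0)        % -> 2 poles in the right half-plane, as the table predicts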
Exercises
Problem: Make a Routh table and tell how many roots of the following polynomial are in the right
half-plane and in the left half-plane.
Answer: Four in the right half-plane (rhp), three in the left half-plane (lhp).
3.3.3. Routh-Hurwitz Criterion: Special Cases
Two special cases can occur: (1) the Routh table sometimes will have a zero only in the first column of a row, or (2) the Routh table sometimes will have an entire row that consists of zeros. Let us examine the first case.
If the first element of a row is zero, division by zero would be required to form the next row. To
avoid this phenomenon, an epsilon, ∈, is assigned to replace the zero in the first column. The value
∈ is then allowed to approach zero from either the positive or the negative side, after which the
signs of the entries in the first column can be determined.
Example
Solution:
Form the Routh table by using the denominator of the transfer function T(s). Begin by assembling the Routh table down to the row where a zero appears only in the first column (the s^3 row). Next replace the zero by a small number, ∈, and complete the table.
To begin the interpretation, we must first assume a sign, positive or negative, for the quantity ∈.
Table below shows the first column of the Routh table along with the resulting signs for choices of
∈ positive and ∈ negative.
Determining signs in first column of a Routh table with zero as first element in a row
If ∈ is chosen positive, there will be a sign change from the s^3 row to the s^2 row, and there will be another sign change from the s^2 row to the s^1 row. Hence, the system is unstable and has two poles in the right half-plane.
Alternatively, we could choose ∈ negative. There would then be a sign change from the s^4 row to the s^3 row. Another sign change would occur from the s^3 row to the s^2 row. Our result would be exactly the same as that for a positive choice for ∈. Thus, the system is unstable, with two poles in the right half-plane.
Another method that can be used when a zero appears only in the first column of a row is derived
from the fact that a polynomial that has the reciprocal roots of the original polynomial has its roots
distributed the same—right half-plane, left half-plane, or imaginary axis—because taking the
reciprocal of the root value does not move it to another region. Thus, if we can find the polynomial
that has the reciprocal roots of the original, it is possible that the Routh table for the new
polynomial will not have a zero in the first column. This method is usually computationally easier
than the epsilon method just described.
We now show that the polynomial we are looking for, the one with the reciprocal roots, is simply the original polynomial with its coefficients written in reverse order (Phillips, 1991). Assume the polynomial equation s^n + a_(n-1)·s^(n-1) + ⋯ + a_1·s + a_0 = 0. If s is replaced by 1/d, then d will have roots which are the reciprocals of the roots of s.
Thus, the polynomial with reciprocal roots is a polynomial with the coefficients written in reverse
order. Let us redo the previous example to show the computational advantage of this method.
Example
SOLUTION:
First write a polynomial that has the reciprocal roots of the denominator of T(s). From our
discussion, this polynomial is formed by writing the denominator of T(s) in reverse order. Hence,
We then form the Routh table for this reversed polynomial, as shown below.
Since there are two sign changes, the system is unstable and has two right-half-plane poles. This is
the same as the result obtained in Example above.
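As a concrete sketch of this shortcut, assume for illustration a closed-loop denominator of s⁵ + 2s⁴ + 3s³ + 6s² + 5s + 3 (a polynomial whose basic Routh table has a zero in the first column of the s³ row; the actual denominator of T(s) is not reproduced here). Reversing the coefficients and reusing the routh_table helper sketched earlier avoids the epsilon entirely.

% Minimal sketch: reciprocal-roots shortcut for a zero in the first column
c = [1 2 3 6 5 3];       % assumed denominator coefficients, highest power first
d = fliplr(c);           % reversed coefficients: polynomial with the reciprocal roots
R = routh_table(d);      % this table has no zero in its first column
sign_changes = sum(diff(sign(R(:,1))) ~= 0)   % -> 2, so two right-half-plane poles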
We now look at the second special case. Sometimes while making a Routh table, we find that an
entire row consists of zeros because there is an even polynomial that is a factor of the original
polynomial. This case must be handled differently from the case of a zero in only the first column
of a row.
Method
Let us look at an example that demonstrates how to construct and interpret the Routh table when an
entire row of zeros is present.
Problem:
Solution:
Start by forming the Routh table for the denominator of T(s)
At the second row we multiply through by 1/7 for convenience. We stop at the third row, since the
entire row consists of zeros, and use the following procedure.
First we return to the row immediately above the row of zeros and form an auxiliary polynomial,
using the entries in that row as coefficients. The polynomial will start with the power of s in the
label column and continue by skipping every other power of s.
Finally, we use the coefficients of the new equation to replace the row of zeros. Again, for
convenience, the third row is multiplied by 1/4 after replacing the zeros.
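The same steps can be written out numerically. Assuming, for illustration, a denominator of s⁵ + 7s⁴ + 6s³ + 42s² + 8s + 56 (consistent with the 1/7 and 1/4 row scalings mentioned above; the actual denominator is not reproduced here), the auxiliary polynomial and its derivative are obtained as follows.

% Minimal sketch: handling an entire row of zeros via the auxiliary polynomial
c  = [1 7 6 42 8 56];     % assumed denominator coefficients
% The s^4 row (scaled by 1/7) is 1 6 8, and the s^3 row works out to all zeros.
P  = [1 0 6 0 8];         % auxiliary polynomial s^4 + 6s^2 + 8 built from the s^4 row
dP = polyder(P)           % -> [4 0 12 0], i.e. 4s^3 + 12s
% Replace the zero row with the nonzero coefficients 4 and 12 (or 1 and 3 after
% multiplying by 1/4), then continue building the table as usual.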
Solved examples
Example 1:
Solution:
Example 2:
Solution
Hence we have symmetrically placed roots, of which two are in the right half of the s-plane.
Additional Problems
Use the Routh stability criterion to determine the location of roots on the s-plane and hence the
stability for the system represented by the characteristic equation
s^5 + 4s^4 + 8s^3 + 8s^2 + 7s + 4 = 0.
Problem 3: (to be solved by students)
In the figure below, determine the range of K for the system to be stable.
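Since the figure is not reproduced here, the sketch below only illustrates the general approach on an assumed system: unity feedback around G(s) = K/[s(s+1)(s+2)], whose characteristic equation is s³ + 3s² + 2s + K = 0. The Routh condition requires K > 0 and (6 − K)/3 > 0, that is 0 < K < 6, which a numeric sweep confirms.

% Minimal sketch: stable range of K for an assumed loop K/(s(s+1)(s+2))
for K = [1 5.9 6.1 10]
    p = roots([1 3 2 K]);                 % roots of s^3 + 3s^2 + 2s + K
    fprintf('K = %4.1f : max real part of roots = %+6.3f\n', K, max(real(p)));
end
% The largest real part crosses zero as K passes 6, confirming the Routh result.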
UNIT IV: COMPENSATORS DESIGN
Series compensation:
Parallel compensation:
4.3.1. Lead compensator
The lead compensator is an electrical network which produces a sinusoidal output having phase
lead when a sinusoidal input is applied. The lead compensator circuit in the ‘s’ domain is shown in
the following figure.
Here, the capacitor is parallel to the resistor 𝑅1 and the output is measured across resistor 𝑅2.
We know that, the phase of the output sinusoidal signal is equal to the sum of the phase angles of
input sinusoidal signal and the transfer function.
So, in order to produce the phase lead at the output of this compensator, the phase angle of the
transfer function should be positive. This will happen when 0 < β < 1. Therefore, the zero will be nearer to the origin in the pole-zero configuration of the lead compensator.
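A minimal way to see this positive phase is to place the zero closer to the origin than the pole and inspect the frequency response; the numerical values below are assumed for illustration and are not taken from the circuit in the figure.

% Minimal sketch: a lead compensator has its zero nearer the origin than its pole
s  = tf('s');
Gc = (s + 1)/(s + 10);    % zero at -1, pole at -10 (assumed values)
bode(Gc); grid on         % the phase curve is positive, i.e. a phase lead
% The maximum phase lead occurs at the geometric mean of the two corner
% frequencies, here at w = sqrt(1*10) rad/s.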
4.3.2. Lag compensator
The lag compensator is an electrical network which produces a sinusoidal output having a phase
lag when a sinusoidal input is applied. The lag compensator circuit in the ‘s’ domain is shown in
the following figure.
Here, the capacitor is in series with the resistor 𝑅2 and the output is measured across this
combination.
We know that, the phase of the output sinusoidal signal is equal to the sum of the phase angles of
input sinusoidal signal and the transfer function.
So, in order to produce the phase lag at the output of this compensator, the phase angle of the
transfer function should be negative. This will happen when α > 1.
4.3.3. Lag-lead compensator
The lag-lead compensator is an electrical network which produces phase lag in one frequency region and phase lead in another frequency region. It is a combination of both the lag and the lead
compensators. The lag-lead compensator circuit in the ‘s’ domain is shown in the following figure.
This circuit looks like both the compensators are cascaded. So, the transfer function of this circuit
will be the product of transfer functions of the lead and the lag compensators.
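This cascade structure can be sketched directly by multiplying a lead section and a lag section (component values assumed for illustration).

% Minimal sketch: lag-lead compensator as the product of a lag and a lead section
s     = tf('s');
Glead = (s + 1)/(s + 10);        % lead section: zero nearer the origin than the pole
Glag  = (s + 0.1)/(s + 0.01);    % lag section: pole nearer the origin than the zero
Gll   = Glag*Glead;              % lag-lead compensator
bode(Gll); grid on               % phase lag at low frequencies, phase lead at higher ones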
4.4. Cascade Compensation in Frequency Domain Using Bode Plots
4.4.1. Design of Lead Compensator
Example 1
Solution
The Bode plot for the compensated system is drawn.
The main effects of phase lead compensation may be summarized as follows:
4.4.2. Design of Lag Compensator
Example
Solution
Bode plot of the compensated system is drawn
The effects of lag compensation on the response may be summarized as follows.
4.4.3. Design of Lag-Lead Compensator
4.5. Comparison of Lead, Lag, and Lag–Lead Compensation
2. In some design problems both lead compensation and lag compensation may satisfy the
specifications. Lead compensation yields a higher gain crossover frequency than is possible
with lag compensation. The higher gain crossover frequency means a larger bandwidth. A large
bandwidth means reduction in the settling time.
The bandwidth of a system with lead compensation is always greater than that with lag
compensation. Therefore, if a large bandwidth or fast response is desired, lead compensation
should be employed. If, however, noise signals are present, then a large bandwidth may not be
desirable, since it makes the system more susceptible to noise signals because of an increase in
the high-frequency gain. Hence, lag compensation should be used for such a case.
3. Lead compensation requires an additional increase in gain to offset the attenuation inherent in
the lead network. This means that lead compensation will require a larger gain than that
required by lag compensation. A larger gain, in most cases, implies larger space, greater weight,
and higher cost.
4. Lead compensation may generate large signals in the system. Such large signals are not
desirable because they will cause saturation in the system.
5. Lag compensation reduces the system gain at higher frequencies without reducing the system
gain at lower frequencies. Since the system bandwidth is reduced, the system has a slower
speed to respond. Because of the reduced high-frequency gain, the total system gain can be
increased, and thereby low-frequency gain can be increased and the steady-state accuracy can
be improved. Also, any high frequency noises involved in the system can be attenuated.
6. Lag compensation will introduce a pole-zero combination near the origin that will generate a
long tail with small amplitude in the transient response.
7. If both fast responses and good static accuracy are desired, a lag–lead compensator may be
employed. By use of the lag–lead compensator, the low-frequency gain can be increased (which
means an improvement in steady-state accuracy), while at the same time the system bandwidth
and stability margins can be increased.
8. Although a large number of practical compensation tasks can be accomplished with lead, lag, or
lag–lead compensators, for complicated systems, simple compensation by use of these
compensators may not yield satisfactory results.
Then, different compensators having different pole–zero configurations must be employed.
Example
Solution
2. The Bode plot and the log-magnitude vs. phase-angle plot on the Nichols chart are drawn in the figures below.
From the Bode plot, the phase crossover frequency is 10.5 rad/sec.
The phase margin is −22°.
From the Nichols plot, the bandwidth is 13.5 rad/sec.
Thus, since the bandwidth is already large, a lead compensator would further increase it. A lag compensator provides the required phase margin at a frequency of about 2.3 rad/sec, which makes the bandwidth much smaller than desirable. Hence only a lag-lead compensator can satisfy all the specifications.
4.6. Designing and tuning PID controllers
4.6.1. Designing of PID controller
Before beginning with the explanation of a PID controller, it is first useful to recall where the controller is placed in the control loop and what its input and output are. The controller is just after
the comparator, the summing block that takes the difference between the desired value and the
actual value. Thus the input to the controller is the error signal. The controller operates on this error
signal and produces a command that is then sent downstream to the actuator.
The purpose of the control loop is to drive the error to 0, so that the actual value = the desired
value. If everything is working as it should, e(t) will be 0 and the controller will take no action. It
will simply put 0 on the output, the input to the actuator. This is a command to the actuator to do
nothing. When the actual value is not equal to the desired value, the controller takes action and
produces a non‐zero command for the actuator.
PID, of course, stands for proportional-integral-derivative. A PID controller has a parallel structure
with these three actions. The three controller constants—KP, KI, and KD—can be tuned to adjust
the relative strength of each action. The proportional action is the main action, and the other two
actions are add‐ons to improve the control.
The resulting control law is u(t) = Kp·e(t) + Ki·∫e(τ)dτ + Kd·de(t)/dt, where Kp is the proportional gain, Kd is the derivative gain, and Ki is the integral gain of the controller.
Frequency Domain Representation of PID controller
In the frequency domain (after taking the Laplace transform of both sides), the control input can be represented as U(s) = (Kp + Ki/s + Kd·s)·E(s) = (Kd·s² + Kp·s + Ki)/s · E(s).
Thus, the PID controller adds a pole at the origin and two zeros to the open-loop transfer function.
The Closed loop Transfer Function of the system can be written as
3. A derivative control (Kd) will have the effect of increasing the stability of the system, reducing the overshoot, and improving the transient response, but it has little effect on the rise time.
4. A PD controller can add damping to a system, but the steady-state response is not affected (the steady-state error is not eliminated).
5. A PI controller can improve relative stability and eliminate the steady-state error at the same time, but the settling time is increased (the system response becomes sluggish).
But a PID controller removes steady-state error and decreases system settling times while
maintaining a reasonable transient response.
Example:
In the mechanical system shown in the figure below, m is the mass, k is the spring constant, b is the
friction constant, f(t) is an external applied force and x(t) is the resulting displacement.
Solution:
Assuming m = 1 kg, b = 10 N·s/m, and k = 20 N/m (for simplicity), the open-loop transfer function is X(s)/F(s) = 1/(ms² + bs + k) = 1/(s² + 10s + 20).
The final value of the output to a unit step input is 0.05. This corresponds to the steady-state
error of 0.95, quite large indeed.
Furthermore, the rise time is about one second, and the settling time is about 1.5 seconds.
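These figures can be reproduced with the short sketch below (the plant G(s) = 1/(s² + 10s + 20) follows from the assumed m, b, and k above).

% Minimal sketch: open-loop step response of the mass-spring-damper plant
G = tf(1, [1 10 20]);     % X(s)/F(s) = 1/(s^2 + 10s + 20)
step(G); grid on          % final value 1/20 = 0.05, i.e. a 95% steady-state error
stepinfo(G)               % reports the rise time and settling time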
Proportional control
In proportional control, the actuating signal for the control action is proportional to the error signal, the error signal being the difference between the reference input signal and the feedback signal obtained from the output.
For Kp = 500, the step response of the system becomes:
The step response of the system indicates that the proportional controller (Kp) reduces the rise time, increases the overshoot, and reduces the steady-state error but never eliminates it completely. This is a type 0 system and hence will have a finite steady-state error for a step input.
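A sketch of this case, using the same assumed plant:

% Minimal sketch: proportional control with Kp = 500
G  = tf(1, [1 10 20]);
Kp = 500;
Tp = feedback(Kp*G, 1);   % closed loop: 500/(s^2 + 10s + 520)
step(Tp); grid on         % faster rise, larger overshoot, steady-state value 500/520
% steady-state error for a unit step = 1/(1 + Kp*G(0)) = 1/(1 + 25), about 0.04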
PD control
For proportional-plus-derivative control action, the actuating signal consists of the proportional error signal added to the derivative of the error signal.
Proportional-derivative (PD) control considers both the magnitude of the system error and the
derivative of this error. Derivative control has the effect of adding damping to a system, and, thus,
has a stabilizing influence on the system response.
For Kp = 500 and Kd = 10, the closed-loop transfer function of the above system with a PD controller is:
The derivative controller reduced both the overshoot and the settling time, and had a small effect
on the rise time and the steady-state error.
The PD controller has decreased the system settling time considerably; however, to control the
steady-state error, the derivative gain Kd must be high. This decreases the response times of the
system and can make it susceptible to noise.
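A sketch of the PD case, with the same assumed plant and Kp = 500, Kd = 10:

% Minimal sketch: PD control with Kp = 500 and Kd = 10
G   = tf(1, [1 10 20]);
C   = pid(500, 0, 10);     % proportional-plus-derivative controller
Tpd = feedback(C*G, 1);    % closed loop: (10s + 500)/(s^2 + 20s + 520)
step(Tpd); grid on         % less overshoot and faster settling than P control alone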
PI control
For proportional-plus-integral control action, the actuating signal consists of the proportional error signal added to the integral of the error signal.
Proportional-integral (PI) control considers both the magnitude of the system error signal and the
integral of this error.
The closed-loop transfer function of the above system with a PI controller is:
Using integral control makes the system type-one, so the steady-state error due to a step input is
zero.
The response shows that the integral control has removed the steady-state error and improved the transient response, but it has also increased the system settling time. Increasing Ki increases the overshoot and settling time, making the system response sluggish.
To reduce both settling time and overshoot, a PI controller by itself is not enough.
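A sketch of the PI case (the gains Kp = 30 and Ki = 70 are assumed for illustration; they are not specified in the text):

% Minimal sketch: PI control with assumed gains
G   = tf(1, [1 10 20]);
C   = pid(30, 70);         % proportional-plus-integral controller
Tpi = feedback(C*G, 1);    % type 1 loop: zero steady-state error for a step input
step(Tpi); grid on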
PID control
For PID control, the actuating signal u(t) consists of the proportional error signal added to the derivative and the integral of the error signal e(t).
Thus, a PID controller has removed the steady-state error and decreased the system settling time while maintaining a reasonable transient response.
While designing a PID controller, the general rule is to add proportional control to get the desired
rise time, add derivative control to get the desired overshoot, and then add integral control (if
needed) to eliminate the steady-state error.
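Putting the three actions together, with gains assumed for illustration (in practice they would be tuned as described here or with the Ziegler-Nichols rules below):

% Minimal sketch: PID control with assumed gains
G    = tf(1, [1 10 20]);
C    = pid(350, 300, 50);  % Kp = 350, Ki = 300, Kd = 50 (assumed values)
Tpid = feedback(C*G, 1);
step(Tpid); grid on        % zero steady-state error for a step input
stepinfo(Tpid)             % overshoot and settling time for these gains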
Proportional-Integral-Derivative (PID) control is the most common control algorithm used in
industry and has been universally accepted in industrial control. This is due to the fact that all
design specifications of the system can be met through optimal tuning of constants Kp, Ki & Kd for
maximum performance.
In 1942 Ziegler and Nichols, both employees of Taylor Instruments, described simple mathematical
procedures, the first and second methods respectively, for tuning PID controllers. These procedures
are now accepted as standard in control systems practice.
As regulators, these loops' purpose is disturbance rejection, that is, keeping a desired quantity at a certain level despite disturbing influences that try to change it. Ziegler-Nichols is probably the best
known and most widely used of the heuristic tuning methods for tuning PID controllers. “Heuristic”
simply means “based on experimentation” or “based on trial‐and‐error”. Such methods do not
depend on the development of a system model. They are field tuning methods, in that one can apply
them to the real system and tune it in place.
First Method
The Ziegler-Nichols formulae for specifying the controllers are based on the plant's open-loop step response. For many plants this response is S-shaped, which is typical of a plant made up of a series of first-order systems. The response is characterized by two parameters, L the delay time and T the time constant. These are found by drawing a tangent to the step response at its point of inflection and noting its intersections with the time axis and the steady-state value.
Ziegler and Nichols derived the following control parameters based on this model
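For reference, the widely quoted first-method settings (stated here from the standard Ziegler-Nichols tables, since the original table is not reproduced in these notes) can be encoded as follows; the values of L and T are assumed example readings from the reaction curve.

% Ziegler-Nichols first (open-loop) method: standard settings from L and T
L = 1;   T = 5;                          % delay time and time constant (assumed values)
Kp_P   = T/L;                            % P controller
Kp_PI  = 0.9*T/L;   Ti_PI = L/0.3;       % PI controller
Kp_PID = 1.2*T/L;   Ti_PID = 2*L;   Td_PID = 0.5*L;   % PID controller
C = pid(Kp_PID, Kp_PID/Ti_PID, Kp_PID*Td_PID);        % equivalent parallel-form PID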
Second Method
The steps for tuning a PID controller via the second method are as follows:
1. Reduce the integrator and derivative gains to 0.
2. Increase Kp from 0 to some critical value Kp = Kcr at which sustained oscillations first occur. If sustained oscillations do not occur for any value of Kp, another method has to be applied.
3. Note the value Kcr and the corresponding period of sustained oscillation, Pcr. The controller settings are then computed from these two values, as sketched below.
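Once Kcr and Pcr are known, the standard second-method table gives the controller settings. A sketch of those standard rules (the values of Kcr and Pcr are assumed example measurements):

% Ziegler-Nichols second (ultimate-gain) method: standard PID settings from Kcr and Pcr
Kcr = 30;   Pcr = 2.8;          % ultimate gain and period (assumed example values)
Kp  = 0.6*Kcr;                  % Kp  = 0.6*Kcr
Ti  = 0.5*Pcr;                  % Ti  = Pcr/2
Td  = 0.125*Pcr;                % Td  = Pcr/8
C   = pid(Kp, Kp/Ti, Kp*Td);    % parallel-form PID controller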
Example
The second method lends itself to both experimental and analytical study. Consider a process with the transfer function shown below that is to be placed under PID control. We can determine the limiting gain for stability (the gain at which sustained oscillations begin) by use of the Routh-Hurwitz condition. The characteristic equation, p(s), with proportional control only is:
Analysis: The closed loop step response shows an overshoot performance of 50%, 100% over
target. Given the dependence of the technique on a generic model, it is not surprising that the
design objectives will almost always not be met. The technique, however, does provide an effective
starting point for controller tuning.
REFERENCES
1. Norman S. Nise, 2007, Control Systems Engineering, 6th Edition, California State Polytechnic University, Pomona.
2. Frank Owen, 2015, Control Systems Engineering: A Practical Approach.
3. K. Ogata, 2010, Modern Control Engineering, 5th Edition, PHI, New Delhi.
4. Norman S. Nise, 2007, Control Systems Engineering, 4th Edition, John Wiley, New Delhi.
5. Smarajit Gosh, 2007, Control Systems, Pearson Education, New Delhi.
6. M. Gopal, 2002, Control Systems: Principles and Design, Tata McGraw Hill, New Delhi.
7. U. A. Bakshi and V. U. Bakshi, 2006, Control System Engineering, Technical Publications.
8. N. C. Jagan, 2008, Control Systems, BS Publications.
9. W. Bolton, 2002, Control Systems, Newnes.
APPENDIX