Controls Engineering
The controller measures the process variable in order to calculate the next control effort and to determine the required tuning parameters.
The control loop is the essence of automation. By measuring some activity in an automated process, a controller decides what
needs to be done next and executes the required operations through a set of actuators. The controller then remeasures the
process to determine if the actuators' actions had the desired effect. The whole routine is then repeated in a continuous loop of
measure, decide, actuate, and repeat.
Common industrial controllers include programmable logic controllers (PLCs), distributed control systems (DCSs), stand-alone
loop controllers, and more recently, personal computers (PCs). Heating coils, robot arms, pumps, motors, and
conveyor belts are some of the actuators that a controller can use to operate an automated process.
Discrete control
In discrete control applications, control loops automate the production of individual objects, such as computer chips,
automobiles, and light bulbs. Activities to be controlled generally occur in a step-by-step manner where each step starts only
after its predecessor finishes.
An automatic car wash that produces clean cars from dirty ones is a familiar example of discrete control. When the controller
detects the departure of the previous vehicle, it signals the next one to enter the bay. When that vehicle reaches the stopping
point, the controller displays the STOP NOW sign. When each step of the washing process finishes, the controller starts the
next operation. When all of the required operations have been completed, the controller displays the EXIT NOW sign.
The controller measures the progress of the washing process with a variety of sensors. An electric eye detects the departure of
the previous vehicle. A proximity switch detects the arrival of the next one. Actuators for this automated process include valves
to regulate the water flow through the sprayers, conveyors to position the sprayers, and instruction signs to direct movements of
incoming and outgoing vehicles.
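A sequence like this maps naturally onto a simple state machine. The following sketch (in Python; the step names and the sensor and actuator stubs are hypothetical stand-ins, not details from the article) advances to the next washing step only after the previous one reports completion:

    # Step-by-step (discrete) control of the car-wash sequence described above.
    # Each step runs only after its predecessor finishes. The state names and the
    # sensor/actuator stubs below are hypothetical illustrations.

    SEQUENCE = ["wait_for_bay_clear", "signal_enter", "wait_for_stop_point",
                "soap", "rinse", "dry", "signal_exit"]

    def step_finished(step):
        """Stand-in for the sensors (electric eye, proximity switch, timers)."""
        return True

    def run_actuators(step):
        """Stand-in for the valves, conveyors, and instruction signs."""
        print(f"executing step: {step}")

    def wash_one_car():
        for step in SEQUENCE:
            run_actuators(step)
            while not step_finished(step):
                pass                      # keep checking the sensor until the step completes
        print("EXIT NOW")

    wash_one_car()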
Continuous control
In continuous control applications, the controller and its actuators operate constantly. Continuous control is also commonly
known as process control even though many automated processes are discrete.
A continuous controller measures flow rates, temperatures, pressures, and other continuous variables that can change at any
time. It then decides if those variables are at acceptable levels and uses its actuators to change them if necessary.
Continuous control loops generally cycle through the measure-decide-actuate routine much faster than discrete control loops
do. In fact, most continuous controllers will make a whole series of control decisions before the results of the first one are
completely evident.
Considerable analytical effort is sometimes required to program a continuous controller. Its decision making algorithm has to
consider not only the current activity of the process, but the on-going effects of all of its previous decisions.
The water heater that provides hot water to the car wash is a continuous process subject to continuous control. A thermocouple
measures the water temperature in the tank and signals the controller to turn on the heating coil whenever the actual
temperature drops below a specified level. The tricky part is deciding how long the heating coil should remain activated each
time a temperature drop is detected. If it is shut off too soon, it will just have to be reactivated as the water temperature begins
dropping again. If it is kept on too long, the tank could boil over.
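One simple way to make that decision is on/off control with a deadband: switch the coil on below a low limit and off above a high limit. The sketch below (the temperatures, limits, and crude thermal model are illustrative assumptions, not figures from the article) shows the resulting cycling behavior:

    # On/off control of the water-heater example with a deadband: the coil turns on
    # below the low limit and off above the high limit. The temperatures, limits,
    # and the crude thermal model are illustrative assumptions.

    LOW_LIMIT, HIGH_LIMIT = 58.0, 62.0     # deg C: keep the tank between these two temperatures
    temperature = 55.0
    coil_on = False

    for minute in range(30):
        if temperature < LOW_LIMIT:
            coil_on = True                 # temperature dropped too far: start heating
        elif temperature > HIGH_LIMIT:
            coil_on = False                # hot enough: shut the coil off before the tank overheats
        temperature += 1.5 if coil_on else -0.7   # crude heating/cooling rates per minute
        print(f"minute {minute:2d}: {temperature:5.1f} C, coil {'on' if coil_on else 'off'}")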
Continuous control loops are especially common in industries where the product flows in a continuous stream--petrochemicals,
foods, pharmaceuticals, pulp and paper, etc. The proportional-integral-derivative (PID) algorithm is the most common method by
which continuous controllers decide what to do next.
Vance J. VanDoren, Ph.D., P.E., is president of VanDoren Industries, West Lafayette, Ind.
Control Engineering June 1998
This tutorial presents an overview of how and why PID controllers work. It is the first in a four part series on the fundamental
concepts of modern control theory.
A feedback controller is designed to generate an "output" that causes some corrective effort to be applied to a "process" so as
to drive a measurable "process variable" towards a desired value known as the "setpoint." Figure 1 shows a typical feedback
control loop, with blocks representing the dynamic elements of the system and arrows representing the flow of information,
generally in the form of electrical signals.
Virtually all feedback controllers determine their output by observing the "error" between the setpoint and the actual process
variable measurement. A home thermostat, for example, uses the air conditioning system to correct the temperature in a
process comprised of a room and the air inside. It sends an electrical signal (an output) to turn on the air conditioner when the
error between the actual temperature (the process variable) and the desired temperature (the setpoint) is too high.
A proportional-integral-derivative or PID controller performs much the same function as the thermostat, but with a more
elaborate algorithm for determining its output. It looks at the current value of the error, the integral of the error over a recent time
interval, and the current derivative of the error signal to determine not only how much of a correction to apply, but for how long.
Those three quantities are each multiplied by a "tuning constant" and added together to produce the current controller output
CO(t), thusly:

CO(t) = P·e(t) + I·∫₀ᵗ e(t)dt + D·(de(t)/dt)   [eq. 1]
In equation [1], P is the "proportional" tuning constant, I is the "integral" tuning constant, D is the "derivative" tuning constant,
and e(t) is the error between the setpoint SP(t) and the process variable PV(t) at time t.
e(t)=SP(t)-PV(t) [eq. 2]
If the current error is large, has been sustained for some time, or is changing rapidly, the controller will attempt to make a large
correction by generating a large output. Conversely, if the process variable has matched the setpoint for some time, the
controller will leave well enough alone.
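As a rough illustration of equations 1 and 2, the sketch below (in Python; the tuning constants and signals are illustrative assumptions) approximates the integral with a running sum and the derivative with a finite difference, which is essentially how a digital controller evaluates the PID formula at each sampling instant:

    # Minimal discrete-time approximation of the PID law in equations 1 and 2.
    # P, I, D, the time step, and the example signals are illustrative assumptions.

    def make_pid(P, I, D, dt):
        """Return a function that computes CO(t) from SP(t) and PV(t) every dt seconds."""
        state = {"integral": 0.0, "prev_error": 0.0}

        def update(setpoint, process_variable):
            error = setpoint - process_variable                 # e(t) = SP(t) - PV(t)   [eq. 2]
            state["integral"] += error * dt                     # running approximation of the integral of e(t)
            derivative = (error - state["prev_error"]) / dt     # finite-difference approximation of de/dt
            state["prev_error"] = error
            # CO(t) = P*e(t) + I*integral(e) + D*de/dt           [eq. 1]
            return P * error + I * state["integral"] + D * derivative

        return update

    # Example: a constant setpoint of 50.0 with the process variable stuck at 40.0.
    pid = make_pid(P=2.0, I=0.5, D=0.1, dt=0.1)
    for _ in range(3):
        print(pid(50.0, 40.0))

Calling it repeatedly with a persistent error shows the integral term growing while the proportional contribution stays constant.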
Conceptually, that's all there is to a PID controller. The tricky part is "tuning" it; i.e., setting the P, I, and D tuning constants so
that the weighted sum of the proportional, integral, and derivative terms produces a controller output that steadily drives the
process variable in the direction required to eliminate the error.
The brute force solution to this problem would be to generate the largest possible output by using the largest possible tuning
constants. A controller thus tuned would amplify every error and initiate extremely aggressive efforts to eliminate even the
slightest discrepancy between the setpoint and the process variable. However, an overly aggressive controller can actually
make matters worse by driving the process variable past the setpoint as it attempts to correct a recent error. In the worst case,
the process variable will end up even further away from the setpoint than before.
On the other hand, a PID controller that is tuned too conservatively may not be able to eliminate one error before the next one
appears. A well-tuned controller performs at a level somewhere between those two extremes. It works aggressively to eliminate
an error quickly, but without overdoing it.
How to best tune a PID controller depends upon how the process responds to the controller's corrective efforts. Processes that
react instantaneously and predictably don't really require feedback at all. A car's headlights, for example, come on as soon as
the driver hits the switch. No subsequent corrections are required to achieve the desired illumination.
On the other hand, the car's cruise controller cannot accelerate the car to the desired cruising speed as quickly. Because of
friction and the car's inertia, there is always a delay between the time that the cruise controller activates the accelerator and the
time that the car's speed reaches the setpoint (see Fig. 2). A PID controller must be tuned to account for such "lags."
PID in action
Consider a sluggish process with a relatively long lag--accelerating an overloaded car with an undersized engine, for example.
Such a process tends to respond slowly to the controller's efforts. If an error is introduced abruptly (as when the setpoint is
changed), the controller's initial reaction will be determined primarily by the actions of the derivative term in equation 1. This will
cause the controller to initiate a burst of corrective effort the instant the error changes from zero. The proportional term will then
come into play to keep the controller's output going until the error is eliminated.
After a while, the integral term will also begin to contribute to the controller's output as the error accumulates over time. In fact,
the integral term will eventually come to dominate the output signal because the error decreases so slowly in a sluggish
process. Even after the error has been eliminated, the controller will continue to generate an output based on the history of
errors that have been accumulating in the controller's integrator. The process variable may then "overshoot" the setpoint,
causing an error in the opposite direction.
If the integral tuning constant is not too large, this subsequent error will be smaller than the original, and the integral term will
begin to diminish as negative errors are added to the history of positive ones. This whole operation may then repeat several
times until both the error and the accumulated error are eliminated. Meanwhile, the derivative term will continue to add its share
to the controller output based on the derivative of the oscillating error signal. The proportional term will also come and go as the
error waxes and wanes.
Fig. 2: A familiar real-world example of feedback control can be found in the "cruise control" feature common in many
automobiles.
Now suppose the process has very little lag so that it responds quickly to the controller's efforts. The integral term in equation 1
will not play as dominant a role in the controller's output since the errors will be so short-lived. On the other hand, the derivative
term will tend to be larger, since the error changes rapidly in the absence of long lags.
Clearly, the relative importance of each term in the controller's output depends on the behavior of the controlled process.
Determining the best mix suitable for a particular application is the essence of controller tuning. For the sluggish process, a
large value for the derivative tuning constant D might be advisable in order to accelerate the controller's reaction to a setpoint
change. For the fast-acting process, however, an equally large value for D might cause the controller's output to fluctuate wildly,
as every change in the error is amplified by the controller's derivative action.
Tuning techniques
There are three schools of thought on how to select the values of P, I, and D required to achieve an acceptable level of
performance for the controller. The first method is simple trial and error--tweak the tuning parameters and watch the controller
handle the next error. If it can eliminate the error in a timely fashion, quit. If it proves to be too conservative or too aggressive,
increase or decrease one or more of the tuning constants. Experienced control engineers seem to know just how much
proportional, integral, and derivative action to add or subtract in order to correct the performance of a poorly tuned controller.
Unfortunately, intuitive tuning procedures can be difficult to develop because a change in one tuning constant tends to affect the
performance of all three terms in the controller's output. For example, turning down the integral action reduces overshoot. This
in turn slows the rate of change of the error and thus reduces the derivative action as well.
The analytical approach to the tuning problem, which is the second method, is more rigorous. It involves a mathematical "model"
of the process that relates the value of the process variable at time t to the current rate of change of the process variable and a
history of the controller's output. For example,
PV(t) = K·CO(t-d) - T·(dPV(t)/dt)   [eq. 3]
This particular model describes a process with a "gain" of K, a "time constant" of T, and a "deadtime" of d. The process gain
represents the magnitude of the controller's effect on the process variable. A large value of K corresponds to a process that
amplifies small control efforts into large changes in the process variable.
The time constant in equation 3 represents the severity of the process lag. A large value of T corresponds to a long lag in a
sluggish process. The deadtime d represents another kind of delay present in many processes, where the "sensor" used to
measure the process variable is located some distance from the "actuator" used to implement the controller's corrective efforts.
The time required for the actuator's effects to reach the sensor is the deadtime. During that interval, the process variable does
not respond at all to the actuator's activity. Only after the deadtime has elapsed does the lag time begin (see Fig. 3).
In the thermostat example above, the air conditioner is the actuator and the thermostat's onboard thermocouple is the sensor. If
there is any ductwork between the air conditioner and the thermostat, there will be a deadtime while each slug of cool air travels
down the duct. The room temperature will not begin to drop until the first slug of cool air emerges from the duct.
There are other characteristics of process behavior that can be factored into a process model, but equation 3 is one of the
simplest and most widely used. It applies to any process with a process variable that changes in proportion to its current value.
For example, a car of mass m accelerates when its cruise control calls for the engine to apply a force Fe to the drive axle.
However, that acceleration a(t) is opposed by frictional forces Ff that are proportional to the car's current velocity v(t) by a factor
of Kf. If the force applied by the engine is proportional to the controller's output by a factor of Ke, then applying Newton's second
law to the process gives:
Fe - Ff = m·a(t)   [eq. 4]

Substituting Fe = Ke·CO(t), Ff = Kf·v(t), and a(t) = dv(t)/dt gives:

Ke·CO(t) - Kf·v(t) = m·(dv(t)/dt)   [eq. 5]

v(t) = (Ke/Kf)·CO(t) - (m/Kf)·(dv(t)/dt)   [eq. 6]

The process variable is v(t), the process gain is K = Ke/Kf, and the process time constant is T = m/Kf. In this example no
deadtime exists, since the speed of the car begins to change as soon as the cruise controller activates the accelerator.
If a model like equation 3 can be defined for a process, its behavior can be quantified by analyzing the model's parameters. In
equation 6, for example, the values of K and T (computed from Ke, Kf, and m) determine how the velocity of the car will change
in response to any control effort. A model's parameters in turn dictate the tuning constants required to modify the behavior of the
process with a feedback controller.
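To see how the model of equation 3 behaves, here is a small simulation sketch (Euler integration; the values of K, T, d, and the time step are illustrative assumptions, not figures from the article). The same code represents the cruise-control example if K and T are computed from Ke, Kf, and m and the deadtime is set to zero:

    # Euler-method simulation of the process model in equation 3:
    #   PV(t) = K*CO(t-d) - T*dPV(t)/dt
    # Rearranged for integration: dPV/dt = (K*CO(t-d) - PV(t)) / T
    # K, T, d, dt, and the step input are illustrative assumptions.

    from collections import deque

    K, T, d, dt = 2.0, 5.0, 1.0, 0.1          # gain, time constant, deadtime, time step
    delay = deque([0.0] * int(d / dt))        # buffer holding the last d seconds of controller output

    pv = 0.0
    for step in range(200):
        co = 1.0                              # a unit step in the controller output at t = 0
        delay.append(co)
        delayed_co = delay.popleft()          # CO(t-d): the output issued d seconds ago
        dpv_dt = (K * delayed_co - pv) / T    # from equation 3
        pv += dpv_dt * dt
        if step % 50 == 0:
            print(f"t = {step*dt:5.1f} s   PV = {pv:.3f}")

The printout shows the process variable sitting still during the deadtime, then lagging its way up toward the final value of K times the controller output.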
Literally hundreds of analytical techniques can translate model parameters into tuning constants. Each approach uses a
different model, different controller objectives, and different mathematical tools. Several examples of analytical tuning will be
explored in future installments of this series.
The third approach to the tuning problem is something of a compromise between purely self-teaching trial-and-error techniques
and the more rigorous analytical techniques. It was originally proposed in 1942 by John G. Ziegler and Nathaniel B. Nichols, and
remains popular today because of its simplicity and its applicability to any process governed by a model in the form of equation
3. Through trial-and-error experiments, Ziegler and Nichols created a set of "tuning rules" that translate the parameters of
equation 3 into values for P, I, and D, giving generally acceptable controller performance. In particular,
P = (1.2·T)/(K·d)
I = (0.6·T)/(K·d²)   [eqs. 7]
D = (0.6·T)/K
Ziegler and Nichols also came up with a practical method for estimating the values of K, T, and d experimentally. With the
controller in manual mode (no feedback), they induced a step change in the controller's output, then analyzed the process
reaction graphically (see Fig. 3). They concluded that the process gain K can be approximated by dividing the net change of the
process variable by the size of the step change generated by the controller. They estimated the deadtime d from the interval
between the controller's step change and the beginning of a line drawn tangent to the reaction curve at its steepest point. They
also used the inverse slope of that line to estimate the time constant T.
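That graphical procedure can also be scripted. The sketch below (the recorded reaction curve is fabricated purely for illustration, and the tangent is approximated numerically at the steepest sample) estimates K, T, and d from an open-loop step response and then applies the rules in equations 7:

    # Estimate K, T, d from an open-loop step response (the reaction-curve method,
    # approximated numerically), then apply the tuning rules in equations 7.
    # The recorded response below is fabricated for illustration only.
    import numpy as np

    dt = 0.1
    step_size = 1.0                                    # size of the manual step in controller output
    t = np.arange(0, 30, dt)
    pv = 2.0 * (1 - np.exp(-np.maximum(t - 1.0, 0) / 5.0))   # an illustrative reaction curve

    slope = np.gradient(pv, dt)
    i_max = int(np.argmax(slope))                      # steepest point of the reaction curve
    K = (pv[-1] - pv[0]) / step_size                   # net change in PV / size of the step
    d = t[i_max] - (pv[i_max] - pv[0]) / slope[i_max]  # where the tangent crosses the initial value
    T = (pv[-1] - pv[0]) / slope[i_max]                # inverse slope scaled by the total change

    P = 1.2 * T / (K * d)                              # equations 7
    I = 0.6 * T / (K * d**2)
    D = 0.6 * T / K
    print(f"K={K:.2f} T={T:.2f} d={d:.2f}  ->  P={P:.2f} I={I:.2f} D={D:.2f}")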
Other tuning rules have since been developed for more complex models and for other controller performance objectives.
Several of these, as well as a reprint of Ziegler and Nichols' 1942 paper, can be found in "Reference Guide to PID Tuning--A
collection of reprinted articles of PID tuning techniques" published by CONTROL ENGINEERING in 1991.
For more information, contact Vance VanDoren Ph.D, P.E. at VanDoren Industries; Tel: 317/497-3367; Fax: 317/497-4875 .
This tutorial is the last in a series of four on the fundamentals of process control. Part 1, in February, examined PID control. Part
2, in May, presented the Smith Predictor control strategy. Part 3, in September, compared sampled and continuous control
algorithms.
Most process controllers perform a familiar drill--measure the variable of interest, decide if its value is acceptable, apply a
corrective effort if necessary, and repeat. Industrial controllers generally make their decisions according to the ubiquitous
proportional, integral, derivative (PID) algorithm that relies on a single sensor for the measurement and a single actuator for the
corrective action.
This single-variable-control routine works very well for a wide variety of control problems with process variables that can be
manipulated independently. For example, the temperature inside a hot water heater is controlled by internal heating elements,
while the temperature of the room outside is controlled by the building's heating, venting, and air conditioning (HVAC) system.
Changing one temperature does not generally change the other, so each can be controlled by its own independent thermostat.
The problem gets much trickier when the control system is required to achieve multiple objectives all at once using multiple
actuators that affect all of the process variables simultaneously. Consider a commercial HVAC system designed to regulate the
relative humidity of the room air as well as its temperature. Lowering the room temperature raises the relative humidity since cold
air can't hold as much moisture. Conversely, injecting steam into the room raises not only the humidity, but the temperature as
well. The temperature and humidity are said to be coupled, since changing one changes the other.
Because of this coupling effect, the controller that regulates the steam injector and the thermostat that regulates the chiller must
work together if either is to achieve its objective. Better still, the two controllers could be integrated to achieve a combined
objective--the comfort of the room's occupants--rather than separate target values for their respective process variables. Such a
controller would have the leeway to choose from several equally comfortable combinations of temperature and humidity. Given
the thermodynamic properties of the room and the cost of steam and electricity, the controller should be able to determine which
combinations are achievable and which would be cheapest to achieve.
This is a classic example of multivariable control. By balancing the actions of several actuators that each affect several process
variables, a multivariable controller tries to maximize the performance of the process at the lowest possible cost. Multivariable
controllers are most common in the aeronautical, energy, and petrochemical industries. In a distillation column, for example,
there can be hundreds of tightly coupled temperatures, pressures, and flow rates that must all be coordinated to maximize the
quality of the distilled product. A jet aircraft control system must coordinate the plane's engines and flight control surfaces to
keep it flying on the course dictated by the pilot.
A multivariable control system can also take into account the cost of applying each control effort and the potential cost of not
applying the correct control effort. Costs can include not only financial considerations, such as energy spent vs. energy saved,
but safety and health factors as well.
Remember Chernobyl? The failure of that nuclear reactor was blamed in part on operators who prevented the control system
from doing its job. The cost was catastrophic by every measure.
So how does a multivariable controller do all this? There are just a few basic multivariable control techniques, but oddly enough,
PID isn't one of them. The PID algorithm is by far the most popular technique for single variable control, but applying PID control
to a multivariable process is not simply a matter of installing another controller for each additional process variable.
The traditional PID algorithm does not account for the effects of coupling nor for the cost of applying a control effort. Its only
objective is to correct deviations in a single process variable. However, if control costs are negligible, and if the process
variables can somehow be decoupled, then multiple PID controllers can be combined to regulate multivariable processes.
Figure 1: Some multivariable processes can be decoupled so that each process variable responds to only one actuator. This
two variable decoupler, for example, could be applied to the HVAC problem mentioned earlier. If the temperature in the room is
defined as process variable 1 (PV1) and the humidity is defined as process variable 2 (PV2), then box P21 represents the effect
that a change in temperature has on the humidity (the relative humidity rises when the temperature drops). Conversely, box P12
represents the effect that a change in humidity has on the temperature (the temperature rises when hot steam is used to raise
humidity). In order to negate these effects, the decoupler D12 must cause a cooling action whenever steam is added to the air
stream, whereas decoupler D21 must cut back the steam injection whenever the chiller is commanded to lower the room
temperature.
Figure 1 shows how a simple process with two controllers (C1 and C2) and two process variables (PV1 and PV2) can be
decoupled so that each controller ends up affecting only one process variable. The decouplers (D21 and D12) are designed to
cancel the cross-over effects (P21 and P12) that each controller has on the other process variable. They allow the controllers--
even single variable PID controllers--to operate as if each was in control of its own independent process (P11 or P22).
Unfortunately, decoupling works only if the cross-over effects are either very weak or very well understood. Otherwise, the
decouplers will not be able to negate the cross-over effects completely. Decoupling can also fail if the behavior of the process
changes even slightly after the decouplers have been designed and implemented.
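As a rough numerical illustration of the decoupling idea in Figure 1 (the steady-state gains and variable names below are assumptions, not values from the article), each decoupler adds a compensating term to the other loop's actuator command so that, at least in steady state, each controller output moves only its own process variable:

    # Static decoupling of a two-by-two process, in the spirit of Figure 1.
    # Steady-state gains: PVi responds to actuator uj through gain p[i][j].
    # All numbers here are illustrative assumptions.
    import numpy as np

    p = np.array([[1.0, 0.4],     # PV1 = p11*u1 + p12*u2
                  [-0.3, 0.8]])   # PV2 = p21*u1 + p22*u2

    # Decoupler gains chosen to cancel the cross-over terms:
    d12 = -p[0, 1] / p[0, 0]      # correction added to u1 for every unit of controller output 2
    d21 = -p[1, 0] / p[1, 1]      # correction added to u2 for every unit of controller output 1

    def apply_decoupler(co1, co2):
        """Convert the two controller outputs into actuator commands u1, u2."""
        u1 = co1 + d12 * co2
        u2 = co2 + d21 * co1
        return u1, u2

    # With decoupling in place, controller 1 alone should move only PV1.
    u = apply_decoupler(1.0, 0.0)
    print("PV response to controller 1 only:", p @ np.array(u))

The printout shows the second process variable unchanged when only the first controller acts, which is exactly what the decouplers are there to accomplish.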
A minimum variance control algorithm is generally much more effective for multivariable control. Variance is a measure of how
badly a process variable has changed from its setpoint over a period of time. It is computed by periodically squaring the
measured error between the two values and adding the results into a historical total. For a multivariable process, the overall
variance is a weighted sum of the variances computed for each individual process variable.
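A minimal sketch of that calculation (the error histories and weighting factors are illustrative assumptions):

    # Weighted overall variance for a multivariable process, as described above.
    # The sample error histories and weights are illustrative assumptions.

    def overall_variance(errors_by_variable, weights):
        """errors_by_variable: one error history (PV minus setpoint) per process variable."""
        total = 0.0
        for errors, w in zip(errors_by_variable, weights):
            variance = sum(e * e for e in errors) / len(errors)   # mean squared error for one variable
            total += w * variance                                  # weighted sum across variables
        return total

    temperature_errors = [0.5, 0.2, -0.1, -0.3]
    humidity_errors = [2.0, 1.5, 0.5, 0.0]
    print(overall_variance([temperature_errors, humidity_errors], weights=[1.0, 0.2]))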
A minimum variance controller coordinates all of its control efforts so as to minimize the overall variance. It can also minimize
the cost of control by treating each control effort as if it were another process variable with a setpoint of zero. The weighting
factors used for the overall variance calculation can be chosen to dictate how much emphasis the controller places on
eliminating errors vs. minimizing control efforts. In the HVAC example above, the controller can be designed to be more or less
aggressive depending on the relative benefits of reducing energy expenditures vs. keeping the room's occupants comfortable.
Minimum variance controllers can also impose absolute limits or constraints on the control efforts and the process variable
errors. The HVAC control system would have to constrain its efforts so that the valve in the steam injector is never asked to
open more than 100%. Conversely, the humidity in the room would have to be constrained to remain within tolerable limits at all
times no matter how expensive the control costs might be.
Unfortunately, the benefits of multivariable control come at a price. The mathematical formulation of even the simplest minimum
variance control algorithm is tedious and much more complex than the theory of PID control. It's no wonder that the PID
algorithm remains the champion of all control techniques.
For more information, contact Dr. Vance J. VanDoren, VanDoren Industries, 3220 State Road 26W, West Lafayette, IN 47906;
Tel: 317/497-3367, ext. 8262; Fax: 317/497-4875
A feedback controller is designed to generate an output that causes some corrective effort to be applied to a process so as to
drive a measurable process variable towards a desired value known as the setpoint. Shown is a typical feedback control loop
with blocks representing the dynamic elements of the system and arrows representing the flow of information, generally in the
form of electrical signals. Virtually all feedback controllers determine their output by observing the error between the setpoint
and the actual process variable measurement.
PID control
A proportional-integral-derivative or "PID" controller looks at the current value of the error, the integral of the error over a recent
time interval, and the current derivative of the error signal to determine not only how much of a correction to apply, but for how
long. Those three quantities are each multiplied by a tuning constant and added together to produce the current controller
output CO(t) thusly:
CO(t) = P·e(t) + I·∫₀ᵗ e(t)dt + D·(de(t)/dt)   (eq. 1)
In equation (1), P is the proportional tuning constant, I is the integral tuning constant, D is the derivative tuning constant, and e(t)
is the error between the setpoint SP(t) and the process variable PV(t) at time t.
e(t) = SP(t) - PV(t)   (eq. 2)
If the current error is large, has been sustained for some time, or is changing rapidly, the controller will attempt to make a large
correction by generating a large output. Conversely, if the process variable has matched the setpoint for some time, the
controller will leave well enough alone.
Conceptually, that's all there is to a PID controller. The tricky part is "tuning" it; i.e., setting the P, I, and D tuning constants so
that the weighted sum of the proportional, integral, and derivative terms produces a controller output that steadily drives the
process variable in the direction required to eliminate the error.
How to best tune a PID controller depends upon how the process responds to the controller's corrective efforts. Consider a
sluggish process that tends to respond slowly. If an error is introduced abruptly (as when the setpoint is changed), the
controller's initial reaction will be determined primarily by the derivative term in equation (1). This will cause the controller to
initiate a burst of corrective efforts the instant the error changes from zero. The proportional term will then come in to play to
keep the controller's output going until the error is eliminated.
After a while, the integral term will also begin to contribute to the controller's output as the error accumulates over time. In fact,
the integral term will eventually come to dominate the output signal, since the error decreases so slowly in a sluggish process.
Even after the error has been eliminated, the controller will continue to generate an output based on the history of errors that
have been accumulating in the controller's integrator. The process variable may then overshoot the setpoint, causing an error in
the opposite direction.
If the integral tuning constant is not too large, this subsequent error will be smaller than the original, and the integral term will
begin to diminish as negative errors are added to the history of positive ones. This whole operation may then repeat several
times until both the error and the accumulated error are eliminated. Meanwhile, the derivative term will continue to add its share
to the controller output based on the derivative of the oscillating error signal. The proportional term too will come and go as the
error waxes and wanes.
Now suppose the process responds quickly to the controller's efforts. The integral term in equation (1) will not play as dominant
a role in the controller's output since the errors will be so short lived. On the other hand, the derivative term will tend to be larger
since the error will change rapidly.
Hundreds of mathematical and heuristic techniques for selecting appropriate values for the tuning constants have been
developed over the last 50 years. Several of these can be found in "Reference Guide to PID Tuning," a collection of reprinted
articles of PID tuning techniques published by Control Engineering magazine in 1991. Related subjects such as feedforward
control, frequency domain analysis techniques, and self-tuning control will be addressed in future installments of this series.
Vance J. VanDoren has a BS and MS in Control Engineering from Case Western Reserve University. He holds a Ph.D. in
Control Engineering from Purdue University's School of Mechanical Engineering.
Control Engineering January 1997
TUNING FUNDAMENTALS
Basics of Proportional-Integral-Derivative Control
PID controllers are by far the most popular feedback controllers for continuous processes. Here's a look at how they work.
A feedback controller is designed to generate an output that causes some corrective effort to be applied to a process so as to
drive a measurable process variable towards a desired value known as the setpoint. The controller uses an actuator to affect the
process and a sensor to measure the results. Figure 1 shows a typical feedback control system with blocks representing the
dynamic elements of the system and arrows representing the flow of information, generally in the form of electrical signals.
Figure 1: Most feedback controllers for continuous processes use the proportional-integral-derivative (PID) algorithm to manipulate the process variable by applying a corrective effort to the process.
Virtually all feedback controllers determine their output by observing the error between the setpoint and a measurement of the
process variable. Errors occur when an operator changes the setpoint intentionally or when a process load changes the process
variable accidentally.
In warm weather, a home thermostat is a familiar controller that attempts to correct the temperature of the air inside a house. It
measures the room temperature with a thermocouple and activates the air conditioner whenever an occupant lowers the desired
room temperature or a random heat source raises the actual room temperature. In this example, the house is the process, the
actual room temperature inside the house is the process variable, the desired room temperature is the setpoint, the
thermocouple is the sensor, the activation signal to the air conditioner is the controller output, the air conditioner itself is the
actuator, and the random heat sources (such as sunshine and warm bodies) constitute the loads on the process.
PID control
A proportional-integral-derivative or PID controller performs much the same function as a thermostat but with a more elaborate
algorithm for determining its output. It looks at the current value of the error, the integral of the error over a recent time interval,
and the current derivative of the error signal to determine not only how much of a correction to apply, but for how long. Those
three quantities are each multiplied by a tuning constant and added together to produce the current controller output CO(t) as in
equation [1]. In this equation, P is the proportional tuning constant, I is the integral tuning constant, D is the derivative tuning constant, and the error e(t) is the difference between the setpoint SP(t) and the process variable PV(t) at time t. If the current error
is large or the error has been sustained for some time or the error is changing rapidly, the controller will attempt to make a large
correction by generating a large output. Conversely, if the process variable has matched the setpoint for some time, the
controller will leave well enough alone.
Equations [1] and [6]--both forms of the PID algorithm--generate an output CO(t) according to recent values of the setpoint SP(t), the process variable PV(t), and the error between them, e(t) = SP(t) - PV(t).
Conceptually, that's all there is to a PID controller. The tricky part is tuning it; i.e., setting the P,I, and D tuning constants
appropriately. The idea is to weight the sum of the proportional, integral, and derivative terms so as to produce a controller
output that steadily drives the process variable in the direction required to eliminate the error.
The brute force solution to this problem would be to generate the largest possible output by using the largest possible tuning
constants. A controller thus tuned would amplify every error and initiate extremely aggressive efforts to eliminate even the
slightest discrepancy between the setpoint and the process variable. However, an overly aggressive controller can actually
make matters worse by driving the process variable past the setpoint as it attempts to correct a recent error. In the worst case,
the process variable will end up even further away from the setpoint than before.
On the other hand, a PID controller that is tuned to be too conservative may be unable to eliminate one error before the next
one appears. A well-tuned controller performs at a level somewhere between those two extremes. It works aggressively to
eliminate an error quickly, but without overdoing it.
How to best tune a PID controller depends upon how the process responds to the controller's corrective efforts. Processes that
react instantly and predictably don't really require feedback at all. A car's headlights, for example, come on as soon as the driver
hits the switch. No subsequent corrections are required to achieve the desired illumination.
On the other hand, the car's cruise controller cannot accelerate the car to the desired cruising speed so quickly. Because of
friction and the car's inertia, there is always a delay between the time that the cruise controller activates the accelerator and the
time that the car's speed reaches the setpoint. A PID controller must be tuned to account for such lags.
PID in action
Consider a sluggish process with a relatively long lag--an overloaded car with an undersized engine, for example. Such a
process tends to respond slowly to the controller's efforts. If the process variable should suddenly begin to differ from the
setpoint, the controller's immediate reaction will be determined primarily by the actions of the derivative term in equation [1].
This will cause the controller to initiate a burst of corrective efforts the instant the error changes from zero. A cruise controller
with derivative action would kick in when the car encounters an uphill climb and suddenly begins to slow down. The change in
speed would also initiate the proportional action that keeps the controller's output going until the error is eliminated. After a
while, the integral term will also begin to contribute to the controller's output as the error accumulates over time. In fact, the
integral action will eventually come to dominate the output signal since the error decreases so slowly in a sluggish process.
Even after the error has been eliminated, the controller will continue to generate an output based on the history of errors that
have been accumulating in the controller's integrator. The process variable may then overshoot the setpoint, causing an error in
the opposite direction.
Figure 2: A cruise controller attempts to minimize errors between the desired speed set by the driver and the car's
actual speed measured by the speedometer. The controller detects a speed error when the desired speed is increased
or when an added load (such as an uphill climb) slows the car.
If the integral tuning constant is not too large, this subsequent error will be smaller than the original, and the integral action will
begin to diminish as negative errors are added to the history of positive ones. This whole operation may then repeat several
times until both the error and the accumulated error are eliminated. Meanwhile, the derivative term will continue to add its share
to the controller output based on the derivative of the oscillating error signal. The proportional action, too, will come and go as
the error waxes and wanes.
Now suppose the process has very little lag so that it responds quickly to the controller's efforts. The integral term in equation [1]
will not play as dominant a role in the controller's output since the errors will be so short lived. On the other hand, the derivative
action will tend to be larger since the error changes rapidly in the absence of long lags.
Clearly, the relative importance of each term in the controller's output depends on the behavior of the controlled process.
Determining the best mix suitable for a particular application is the essence of controller tuning.
For the sluggish process, a large value for the derivative tuning constant D might be advisable to accelerate the controller's
reaction to an error that appears suddenly. For the fast-acting process, however, an equally large value for D might cause the
controller's output to fluctuate wildly as every change in the error (including extraneous changes caused by measurement noise)
is amplified by the controller's derivative action.
There are basically three schools of thought on how to select P, I, and D values to achieve an acceptable level of controller
performance. The first method is simple trial-and-error--tweak the tuning constants and watch the controller handle the next
error. If it can eliminate the error in a timely fashion, quit. If it proves to be too conservative or too aggressive, increase or
decrease one or more of the tuning constants.
Experienced control engineers seem to know just how much proportional, integral, and derivative action to add or subtract to
correct the performance of a poorly tuned controller. Unfortunately, intuitive tuning procedures can be difficult to develop since a
change in one tuning constant tends to affect the performance of all three terms in the controller's output. For example, turning
down the integral action reduces overshoot. This in turn slows the rate of change of the error and thus reduces the derivative
action as well.
The analytical approach to the tuning problem is more rigorous. It involves a mathematical model of the process that relates the
current value of the process variable to its current rate of change plus a history of the controller's output. Random influences on
the process variable from sources other than the controller can all be lumped into a load variable LV(t). See equation [2]. This
particular model describes a process with a gain of K, a time constant of T, and a deadtime of d. The process gain represents
the magnitude of the controller's effect on the process variable. A large value of K corresponds to a process that amplifies small
control efforts into large changes in the process variable.
PV(t) = K·CO(t-d) - T·(dPV(t)/dt) + LV(t)   [eq. 2]
Equation [2]--The process variable PV(t) is a function of its own derivative plus an earlier output from the controller CO(t-d) and a random load variable LV(t). A PID controller for this process can be tuned according to the values of the parameters K, T, and d.
Time constant T in equation [2] represents the severity of the process lag. A large value of T corresponds to a long lag in a
sluggish process. The deadtime d represents another kind of delay present in many processes where the controller's sensor is
located some distance from its actuator. The time required for the actuator's effects to reach the sensor is the deadtime. During
that interval, the process variable does not respond at all to the actuator's activity. Only after the deadtime has elapsed does the
lag time begin.
In the thermostat example, the ductwork between the air conditioner and the thermostat causes a deadtime since each slug of
cool air takes time to travel the duct's length. The room temperature will not begin to drop at all until the first slug of cool air
emerges from the duct.
Other characteristics of process behavior can be factored into a process model, but equation [2] is one of the simplest and most
widely used. It applies to any process with a process variable that changes in proportion to its current value. For example, a car
of mass m accelerates when its cruise controller calls for the engine to apply a force Fengine(t) to the drive axle. However, that
acceleration a(t) is opposed by frictional forces Ffriction(t) that are proportional to the car's current velocity v(t) by a factor of
Kfriction. If all other forces impeding the car's acceleration are lumped into Fload(t) and the force applied by the engine Fengine(t) is
proportional to the controller's output by a factor of Kengine, then v(t) will obey equation [2] as shown in equations [3] through [5].
Fengine(t) - Ffriction(t) - Fload(t) = m·a(t)   [eq. 3]

Kengine·CO(t) - Kfriction·v(t) - Fload(t) = m·(dv(t)/dt)   [eq. 4]

v(t) = (Kengine/Kfriction)·CO(t) - (m/Kfriction)·(dv(t)/dt) - Fload(t)/Kfriction   [eq. 5]
Equations [3], [4], and [5]--The car's velocity v(t) is a function of the cruise controller's output CO(t), the car's current
acceleration a(t), and a load variable LV(t). The load variable represents forces on the car from sources other than the engine
and friction.
In equation [5], the process variable is v(t) and the load variable is LV(t) = -Fload(t)/Kfriction. The process gain is K = Kengine/Kfriction
and the process time constant is T = m/Kfriction. In this example there is no deadtime since the speed of the car begins to change
as soon as the cruise controller activates the accelerator. The car will not reach its final speed for some time, but it will begin to
accelerate almost immediately.
If a model like [2] or [5] can be defined for a process, its behavior can be quantified by analyzing the model's parameters. A
model's parameters in turn dictate the tuning constants required to modify behavior of a process with a feedback controller.
There are literally hundreds of analytical techniques for translating model parameters into tuning constants. Each approach uses
a different model, different controller objectives, and different mathematical tools.
The third approach to the tuning problem is something of a compromise between purely heuristic trial-and-error techniques and
the more rigorous analytical techniques. It was originally proposed in 1942 by John G. Ziegler and Nathaniel B. Nichols of Taylor
Instruments and remains popular today because of its simplicity and its applicability to any process governed by a model in the
form of equation [2]. The Ziegler-Nichols tuning technique will be the subject of "Back to Basics" (CE, Aug. 1998).
Application issues
Experienced PID users will note that none of the discussion so far applies directly to the commercial PID controllers currently
running more than 90% of their industrial processes. Several subtle flaws in the basic PID theory have been discovered during
the last 50 years of real-life applications.
Consider, for example, the effects of actuator saturation. This occurs when the output signal generated by the controller
exceeds the capacity of the actuator. In the cruise control example above, the PID formula may at some point call for a million
foot-pounds of torque to be applied to the drive axle. Mathematically, at least, that much force may be required to achieve a
particularly rapid acceleration.
Of course real engines can only apply a small fraction of that force, so the actual effects of the controller's output will be limited
to whatever the engine can do at full throttle. The immediate result is a rate of acceleration much lower than expected since the
engine is "saturated" at its maximum capacity.
However, it is the long-term consequences of actuator saturation that have necessitated a fix for equation [1] known as
antiwindup protection. The controller's integral term is said to "wind up" whenever the error signal is stuck in either positive or
negative territory, as in this example. That causes the integral action to grow larger and larger as the error accumulates over
time. The resulting control effort also keeps growing larger and larger until the error finally changes sign and the accumulated
error begins to diminish.
Unfortunately, a saturated actuator may be unable to reverse the error. The engine may not be able to accelerate the car to the
desired velocity, so the error between the desired velocity and the actual velocity may remain positive forever. Even if the actual
velocity does finally exceed the setpoint, the accumulated error will be so large by then that the controller will continue to
generate a very large corrective effort. By the time enough negative errors have been accumulated to bring the integral term
back to zero, the controller may well have caused the car's velocity to overshoot the setpoint by a wide margin.
The fix to this problem is to prevent integrator wind-up in the first place. When an actuator saturates, the controller's integral
action must be artificially limited until the error signal changes sign. The simplest approach is to hold the integral term at its last
value when saturation is detected.
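A rough sketch of that simplest antiwindup scheme (the tuning constants, actuator limits, and time step are illustrative assumptions) freezes the integrator whenever the computed output exceeds what the actuator can deliver:

    # PID update with the simple antiwindup scheme described above: the integral
    # term is held at its last value whenever the actuator saturates.
    # Tuning constants, limits, and the time step are illustrative assumptions.

    class AntiWindupPID:
        def __init__(self, P, I, D, dt, co_min, co_max):
            self.P, self.I, self.D, self.dt = P, I, D, dt
            self.co_min, self.co_max = co_min, co_max
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, process_variable):
            error = setpoint - process_variable
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            candidate = self.integral + error * self.dt        # tentative new integral
            co = self.P * error + self.I * candidate + self.D * derivative
            if self.co_min <= co <= self.co_max:
                self.integral = candidate                      # no saturation: accept the new integral
                return co
            # Saturated: freeze the integral and clamp the output to what the actuator can do.
            co = self.P * error + self.I * self.integral + self.D * derivative
            return max(self.co_min, min(self.co_max, co))

    pid = AntiWindupPID(P=2.0, I=0.5, D=0.1, dt=0.1, co_min=0.0, co_max=100.0)
    print(pid.update(1000.0, 0.0))    # a huge error saturates the actuator; the output clamps at 100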
Alternative implementations
The PID formula itself has also been modified. Several variations on equation [1] have been developed for commercial PID
controllers; the most common being equation [6]. This version involves differentiating the process variable PV(t) rather than the
error e(t) = SP(t) - PV(t). The idea here is to prevent abrupt changes in the controller's output every time the setpoint changes.
Note that the results are the same when the setpoint SP(t) is constant.
The tuning constants in equation [6] differ from those in equation [1] as well. The controller's proportional gain now applies to all
three terms rather than just the error e(t). This allows the overall "strength" of the controller to be increased or decreased by
manipulating just P (or its inverse).
The other two tuning constants in equation [6] have been modified so that they may both be expressed in units of time. This also
gives some physical significance to the integral time TI. Note that if the error e(t) could somehow be held constant, the total
integral action would increase to the level of the proportional action in exactly TI seconds. Although the error should never
remain constant while the controller is working, this formulation does give the user a feel for the relative strengths of the integral
and proportional terms; i.e., a long integral time implies a relatively weak integral action, and vice versa.
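A sketch along the lines of equation [6] (commercial formulations vary; the symbol names, tuning values, and time step here are assumptions) applies the gain P to all three terms, scales the integral by 1/TI, and differentiates the process variable rather than the error:

    # One common variant of the form described above:
    #   CO(t) = P * ( e(t) + (1/TI)*integral(e) - TD*dPV/dt )
    # The derivative acts on PV(t) so a setpoint step does not cause an output spike.
    # P, TI, TD, dt, and the example signals are illustrative assumptions.

    def make_standard_pid(P, TI, TD, dt):
        state = {"integral": 0.0, "prev_pv": None}

        def update(setpoint, pv):
            error = setpoint - pv
            state["integral"] += error * dt
            if state["prev_pv"] is None:
                dpv_dt = 0.0
            else:
                dpv_dt = (pv - state["prev_pv"]) / dt      # derivative of PV, not of the error
            state["prev_pv"] = pv
            return P * (error + state["integral"] / TI - TD * dpv_dt)

        return update

    pid = make_standard_pid(P=2.0, TI=8.0, TD=0.5, dt=0.1)
    print(pid(10.0, 4.0))

Note that while the setpoint is constant, the derivative of PV is just the negative of the derivative of the error, so this form produces the same output as equation [1] with appropriately converted constants.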
For more details on the practical issues of applying PID controllers to real-life control problems, refer to "Process Control
Systems" by F. Greg Shinskey, available from the Foxboro Training Institute at 1-888-FOXBORO. The author gratefully
acknowledges Mr. Shinskey's assistance in the preparation of this article.
As controls and automation become more distributed and integrated, industrial communication networks and buses are
becoming more crucial because they link controls with real world, in-the-trench, manufacturing processes. However, due to
growing choices of networks, protocols, buses, and node connections, it is helpful to understand the basic aspects of each
network or bus before picking an overall architecture.
Physical topologies include linear buses (with drops), daisy-chained, ring (where all nodes are connected in a physical ring),
star, and mixed types. Each has its own advantages and drawbacks.
Bus arbitration schemes include master-slave and peer-to-peer (CSMA, or carrier sense multiple access, and token-passing). On
a master-slave network, a master node manages network access, typically polling each node and granting access. Peer-to-peer
networks share bus ownership with all nodes having equal access. Token-passing buses (token ring/token bus) give token
holders bus ownership until the token is passed.
CSMA allows any network node to exchange data if the bus is idle. Each node monitors transmission, and temporarily backs off
if it finds a collision. This makes bus use of bandwidth efficient, but can sometimes degrade throughput in heavily loaded
networks.
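The back-off behavior can be illustrated with a toy, slot-based model (this is a conceptual sketch only, not the arbitration logic of any particular industrial network):

    # Toy slot-based model of CSMA arbitration: a node transmits when the bus is
    # idle; simultaneous transmissions collide and each node retries after a
    # random back-off. A conceptual sketch, not any specific network's behavior.
    import random

    random.seed(1)
    NODES = 4
    backoff = [0] * NODES            # slots each node must still wait before retrying
    pending = [True] * NODES         # every node starts with one frame to send
    delivered = 0

    for slot in range(100):
        ready = [n for n in range(NODES) if pending[n] and backoff[n] == 0]
        backoff = [max(0, b - 1) for b in backoff]
        if len(ready) == 1:                      # exactly one transmitter: the frame gets through
            pending[ready[0]] = False
            delivered += 1
        elif len(ready) > 1:                     # collision: every transmitter backs off randomly
            for n in ready:
                backoff[n] = random.randint(1, 8)
        if delivered == NODES:
            break

    print(f"{delivered} of {NODES} frames delivered in {slot + 1} slots")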
Many buses are able to broadcast status data periodically, and then any node can retrieve what it needs and use it immediately.
This is known as producer/consumer technology. ControlNet, DeviceNet, Fieldbus Foundation, and WorldFIP protocols are
based on producer/consumer industry technology, which permits all nodes on the network to have this simultaneous access to
the same data from a single source.
Industrial communication networks can be grouped into three basic categories: general-purpose networks, fieldbuses, and
device buses (also referred to as bit-level buses). Various physical topologies and arbitration schemes exist for each type.
General-purpose communication networks provide broadcast and point-to-point messaging between nodes, and perhaps to
other networks via bridges. Networks such as Ethernet, ARCNet, FDDI, IBM Token Ring, and MAP, are used for data gathering,
interprocess exchange of control data and sequencing information, and remote access.
Fieldbuses are optimized to exchange periodic data--also known as producer/consumer information--with I/O devices, while
providing time-slices for point-to-point messaging and network management tasks. Both messaging and network management
capabilities assist in remote coordination of I/O devices (such as configuration and diagnostics) among other possibilities. Many
provide bus electrical isolation among nodes, and add device profiles which virtualize equipment from different manufacturers,
thus providing a common access model in applications. Fieldbus Foundation, Profibus, and SP50 are typical examples.
Device buses, such as AS-I (AS Interface) and Seriplex, provide cost-effective connectivity to I/O devices. Although the cabling
and data rates are not typically advanced, actual throughput can be quite high due to the reduced frame size and simplified bus
arbitration and addressing schemes.
Other networks and buses tend to fall between these categories. Newer, so-called sensor buses, such as DeviceNet, CAN, and
SDS offer some benefits of a fieldbus, while keeping connectivity costs closer to device buses. Some offer remote configuration
of I/O devices and programmability.
In this article, the consequences of performing feedback control with sampled rather than continuous data are examined.
This is the third in a series of four tutorials on the fundamentals of process control. Part 1, in February, examined PID control.
Part 2, in May, discussed the Smith Predictor. Part 4, in December, will look at multivariable control.
Continuous control applications involve process variables that can change at any instant--flow, temperature, and pressure,
being prime examples. The process industries, particularly petrochemicals, were once heavily dependent on continuous control
systems that could manipulate continuous process variables at all times.
True continuous control systems are virtually extinct now. Gone are the electrical, pneumatic, and mechanical devices that could
apply corrective forces directly to a process by mechanically magnifying the forces generated by the process sensors. The most
common continuous controllers left are found in the bathrooms of private homes. A toilet's level control system measures the
level in the tank and slowly closes the inflow valve as the level rises (see Figure 1).
Figure 1: Pictured is a familiar continuous control system. As the water level rises, the valve closes and prevents any further
flow.
In contrast, discrete variables that change only at specified intervals are subject to discrete control. An assembly line is a classic
example. The count-of-completed-assemblies is a discrete variable that changes only at the instant when the line moves
forward and a finished product rolls off the line. A discrete control system can manipulate a discrete variable only when the
schedule calls for the next operation.
Continuous and discrete control systems behave very differently and are generally designed according to different mathematical
principles. However, the two come together in sampled control applications where the process variables change continuously,
but can only be measured at discrete intervals (see Figure 2).
Figure 2: This is a sampled control system. The switch in the sampler closes periodically and sends an electronic measurement to the piston.
Computer-based controllers
All computer-based controllers perform sampled control. Whether it is part of a distributed control system, a single-loop
controller, or a PC-based controller, a control computer must wait to measure the process variables until its program calls for the
next round of sensor readings.
Once the measurements have been read into memory, the computer must take time to analyze the data and compute the
appropriate control actions. Only then can the computer return to reading the sensors. It may be just a matter of milliseconds
between readings, but during that sampling interval, the computer cannot "see" what's going on in the process.
On the other hand, a true continuous controller never stops measuring the process variables. It receives a continuous signal
from its sensors and applies a continuous stream of control efforts to the process. The sampling interval is effectively zero. In
the level control example mentioned earlier, the controller reacts to every infinitesimal change in the tank level, with no time out
for any computations.
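The skeleton of such a sampled control loop looks roughly like this (the sensor and actuator functions are hypothetical stand-ins, and the 50 ms interval is an illustrative assumption):

    # Skeleton of a sampled control loop: read the sensor, compute the control
    # action, drive the actuator, then wait out the rest of the sampling interval.
    # read_sensor(), drive_actuator(), and compute_output() are hypothetical
    # placeholders; the 50 ms interval is an illustrative assumption.
    import time

    SAMPLING_INTERVAL = 0.050   # seconds between sensor readings

    def read_sensor():
        return 21.7             # stand-in for a real measurement

    def drive_actuator(output):
        pass                    # stand-in for a real actuator command

    def compute_output(pv, setpoint=22.0):
        return 2.0 * (setpoint - pv)   # any algorithm could go here; simple proportional action shown

    for _ in range(5):
        start = time.monotonic()
        pv = read_sensor()                      # the controller is "blind" until the next reading
        drive_actuator(compute_output(pv))
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, SAMPLING_INTERVAL - elapsed))   # hold the sampling interval constant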
Design strategies
The design of any computer-based control system must address the scarcity of data that results from sampling. The simplest
and most popular design strategy is to run the computer so fast that the sampling interval approaches zero and the sampled
signal appears continuous. This allows the computer to use control algorithms based on the more traditional principles of
continuous control.
Shortening the sampling interval also seems like a good way to prevent fluctuations in the process variables from going entirely
unnoticed between samples. However, excessively fast sampling rates waste computing resources that might be better used for
other purposes, such as interfacing with the operator or logging historical data.
Furthermore, it is possible to achieve comparable results with slower sampling rates, provided the original continuous signal can
be reconstructed from the sampled data. Figure 3 shows how this can be accomplished for a particularly simple application
where the original signal is known to be a low-frequency sine wave. The original sine wave (Figure 3a) has been sampled by a
control computer, resulting in a sample set (Figure 3b). In Figure 3c, the computer has found the lowest frequency sine wave
that fits the sampled data. For this application, the computer needed only two samples from each cycle of the sine wave to
completely reconstruct the original signal. Any additional samples would have been superfluous.
Figure 3: (a) The original low-frequency sine wave; (b) the samples collected by the control computer; (c) the lowest frequency sine wave that fits the sampled data.
Not so simple
Alas, signal reconstruction is never this simple. A square wave or even a higher frequency sine wave could have generated the
data samples shown in Figure 3b. The control computer could not have distinguished one from the other.
It may also seem unrealistic to have the control computer look for a sine wave that best fits the data samples. After all, most real
process variables don't oscillate sinusoidally unless forced to do so. However, any signal that is not a sine wave can be
expressed as a sum of sine waves, a theorem first proven by mathematician Joseph Fourier in 1822. Fourier also showed how
to compute the frequency and amplitude of each sine wave in that sum, using only the sampled data. It is this algorithm, known
as the Fourier Transform, that allows a control computer to reconstruct a continuous signal from sampled data.
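To make the idea concrete, here is a minimal sketch (not taken from the article or its figures) of how a computer can identify the sinusoidal components of a sampled signal with a discrete Fourier transform and then rebuild the signal from them. The 2 Hz and 5 Hz components, the 50 Hz sampling rate, and the amplitude threshold are illustrative assumptions.

# Sketch: find the sinusoidal components of a sampled signal with numpy's FFT,
# then rebuild the signal from the full spectrum. All numbers are assumed.
import numpy as np

fs = 50.0                            # sampling frequency, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)      # two seconds of sample instants
signal = 1.0 * np.sin(2 * np.pi * 2 * t) + 0.4 * np.sin(2 * np.pi * 5 * t)

spectrum = np.fft.rfft(signal)                     # Fourier transform of the samples
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)     # frequency of each spectrum bin
amplitudes = 2.0 * np.abs(spectrum) / len(signal)  # amplitude of each component

for f, a in zip(freqs, amplitudes):
    if a > 0.1:                                    # report the dominant components
        print(f"component at {f:.1f} Hz, amplitude {a:.2f}")

# Rebuilding the signal from the spectrum recovers the original samples
reconstructed = np.fft.irfft(spectrum, n=len(signal))
print("max reconstruction error:", np.max(np.abs(reconstructed - signal)))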
Unfortunately, the Fourier Transform cannot guarantee the accuracy of the reconstructed signal. If enough of the original signal
is lost between samples, the signal cannot be completely reconstructed by any means. The Fourier Transform can be used to
determine if any particular signal will survive sampling and reconstruction intact. Consider each of the sine waves that the
Fourier Transform identifies as a component of the original signal. If the sampling rate is fast enough to "capture" each of those
components as in Figure 3, then the entire signal will be captured in the sampled data as well. Otherwise, the higher frequency
components will be lost and the reconstructed signal will not match the original.
Sampling rates
So how fast is fast enough for sampling? The answer depends on the process to be controlled. Most industrial processes
involve some combination of friction, inertia, and system stiffness that prevents a process variable from changing rapidly. The
temperature in an annealing oven, for example, may change by only a few degrees over the course of several hours. For
processes like these, high-frequency fluctuations in the process variable simply aren't possible. Fast sampling is not required to
capture the entire signal. Motion control applications, on the other hand, do involve rapid changes in the position and velocity
variables that the control computer won't see unless it samples as fast as it can.
The required sampling rate can be determined experimentally. The simplest test involves stimulating the process with a
sinusoidal control effort of ever increasing frequency. The process variable will eventually start oscillating at the same frequency
as the control effort, but with ever decreasing amplitude. The process will refuse to oscillate at all once the stimulation reaches a
high enough frequency.
Since the process cannot oscillate any faster than this frequency, there is no point in having the control computer look for any higher frequency components when reconstructing the process variable's signal. Note, however, that capturing any one of the signal's sinusoidal components requires at least two samples from every cycle of the sine wave (as in Figure 3b). Thus, if the maximum frequency to be captured is determined to be ωmax, the controller must be set to sample at a frequency of at least 2ωmax. Conversely, if the sampling frequency is fixed at ωs, then the highest frequency component that can be successfully reconstructed from the sampled data is ωs/2. This is the Nyquist frequency, first published by Bell Labs physicist Harry Nyquist in 1928.
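As a quick illustration of the arithmetic, the sketch below applies the two-samples-per-cycle rule in both directions; the 4 Hz process bandwidth and 10 Hz controller scan rate are assumed numbers, not values from the article.

# Sketch of the Nyquist rule: sample at least twice the highest frequency
# the process variable can contain. Both numbers below are assumptions.
f_max = 4.0                      # highest frequency the process can produce, Hz
f_sample_min = 2.0 * f_max       # minimum usable sampling frequency
print(f"sample at {f_sample_min:.1f} Hz or faster")

f_sample = 10.0                  # an actual controller scan rate, Hz
f_nyquist = f_sample / 2.0       # highest frequency that survives this sampling
print(f"components above {f_nyquist:.1f} Hz cannot be reconstructed")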
What then?
Once the required sampling rate has been determined, there are literally hundreds of techniques for analyzing the sampled data
and generating the appropriate control effort. More on some of those methods will be presented in future installments of this
series. •
This tutorial is the second in a series of four. Part 1, in February, examined PID control. Part 3, in September, will examine
sampled vs. continuous control, and Part 4, in December, will look at multivariable control.
Arguably the trickiest problem to overcome with a feedback controller is process deadtime--the delay between the application of
a control effort and its first effect on the process variable. During that interval, the process does not respond to the controller's
activity at all, and any attempt to manipulate the process variable before the deadtime has elapsed inevitably fails.
A deadtime example
Deadtime occurs in many different control applications, generally as a result of material being transported from the site of the
actuator to another location where the sensor takes its reading. Not until the material has reached the sensor can any changes
caused by the actuator be detected. Consider, for example, the rolling mill shown in Figure 1, which produces a continuous
sheet of some material at a rate of V inches per second. A feedback controller uses a piston to modify the gap between a pair of
reducing rollers that squeezes the material into the desired thickness. The deadtime in this process is caused by the separation
S between the rollers and the thickness gauge.
The controller in this example can compare the current thickness of the sheet (the process variable PV) with the desired
thickness (the setpoint SP) and generate an output (CO), but it must wait at least D = S/V seconds for the thickness of the sheet
to change. If it expects a result any sooner, it will determine that its last control effort had no effect and will continue to apply
ever larger corrections to the rollers until the sensor begins to see the thickness changing in the desired direction. By that time,
however, it will be too late. The controller will have already overcompensated for the original thickness error, perhaps to the
point of causing an even larger error in the opposite direction.
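A back-of-the-envelope sketch of that delay, with assumed numbers for the separation and line speed rather than values from the article's figure:

# Deadtime in the rolling mill example: the gauge sits S inches downstream of
# the rollers and the sheet moves at V inches per second, so no change can be
# seen for D = S / V seconds. S and V below are assumed example values.
S = 30.0    # roller-to-gauge separation, inches
V = 12.0    # sheet speed, inches per second
D = S / V   # deadtime, seconds
print(f"deadtime D = {D:.1f} s; the controller must wait at least this long")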
How badly the controller overcompensates depends on how aggressively it is tuned and on the difference between the actual
and the assumed deadtime. That is, if the controller assumes that the deadtime is much shorter than is actually the case, it will
spend a much longer time increasing its output before successfully effecting a change in the process variable. If the controller is
tuned to be particularly aggressive, the rate at which it increases its output during that interval will be especially high.
Overcoming deadtime
Curing overcompensation means addressing one or both symptoms. The easiest solution is to "detune" the controller to slow its response rate. A detuned controller won't have time to overcompensate unless the deadtime is very long.
The integrator in a PID controller is particularly sensitive to deadtime. By design, its function is to continue ramping up the
controller's output so long as there is an error between the setpoint and the process variable. In the presence of deadtime, the
integrator works overtime. Ziegler and Nichols determined that the best way to detune a PID controller to handle a deadtime of D seconds is to reduce the integral tuning constant by a factor of D². The proportional tuning constant should also be reduced by a factor of D. The derivative term is unaffected by deadtime, since derivative action only occurs after the process variable begins to move.
Fig. 2: The mathematical model of a Smith Predictor is usually implemented digitally, analog transit delays being difficult to
construct
Detuning can restore stability to a control loop that suffers from chronic overcompensation, but it would not even be necessary if
the controller could first be
made aware of the deadtime, then endowed with the patience to wait it out. That is essentially what happens in the famous
Smith Predictor control strategy proposed by O.J.M. Smith, U. of California at Berkeley, in 1957.
Fig. 3: The Smith Predictor effectively removes the deadtime from the loop.
Smith's strategy is shown in Figure 2. It consists of an ordinary feedback loop plus an inner loop that introduces two extra terms
directly into the feedback path. The first term is an estimate of what the process variable would look like in the absence of any
disturbances. It is generated by running the controller output through a process model that intentionally ignores the effects of
load disturbances. If the model is otherwise accurate in representing the behavior of the process, its output will be a
disturbance-free version of the actual process variable.
The mathematical model used to generate the disturbance-free process variable has two elements connected in series. The first
represents all of the process behavior not attributable to deadtime. The second represents nothing but the deadtime. The
deadtime-free element is generally implemented as an ordinary differential or difference equation that includes estimates of all
the process gains and time constants. The second element is simply a time delay. The signal that goes into it comes out
delayed, but otherwise unchanged.
The second term that Smith's strategy introduces into the feedback path is an estimate of what the process variable would look
like in the absence of both disturbances and deadtime. It is generated by running the controller output through the first element
of the process model (the gains and time constants), but not through the time delay element. It thus predicts what the
disturbance-free process variable will be once the deadtime has elapsed (hence the expression Smith Predictor).
Subtracting the disturbance-free process variable from the actual process variable yields an estimate of the disturbances. By
adding this difference to the predicted process variable, Smith created a feedback variable that includes the disturbances, but
not the deadtime.
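The following is a minimal discrete-time sketch of that structure in Python. The first-order process, its coefficients, the five-sample deadtime, and the PI settings are all illustrative assumptions, and the internal model is taken to match the process exactly; the point is only to show how the deadtime-free prediction and the delayed model output combine into the modified feedback variable.

# Sketch of a Smith Predictor around a PI controller (all values assumed).
from collections import deque

dt = 1.0            # controller sample time, s
a, b = 0.9, 0.1     # first-order process: PV[k+1] = a*PV[k] + b*CO[k-dead]
dead = 5            # process deadtime, in samples

Kp, Ki = 2.0, 0.3   # PI tuning (assumed)
sp = 1.0            # unit setpoint step at k = 0

pv = 0.0                          # actual process variable
pv_model = 0.0                    # deadtime-free model output
co_hist = deque([0.0] * dead)     # control efforts still "in transit" to the process
pvm_hist = deque([0.0] * dead)    # model deadtime element (pure time delay)
integral = 0.0

for k in range(60):
    # Modified feedback = deadtime-free prediction + estimated disturbances,
    # where the disturbance estimate is (actual PV - delayed model output).
    pvm_delayed = pvm_hist[0]
    feedback = pv_model + (pv - pvm_delayed)

    error = sp - feedback
    integral += error * dt
    co = Kp * error + Ki * integral

    # Advance the real process; its input is the control effort 'dead' steps old.
    pv = a * pv + b * co_hist.popleft()
    co_hist.append(co)

    # Advance the model: push the current value into the delay line, then update.
    pvm_hist.popleft()
    pvm_hist.append(pv_model)
    pv_model = a * pv_model + b * co

    if k % 10 == 0:
        print(f"k={k:2d}  CO={co:6.2f}  PV={pv:6.3f}")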
So what?
The purpose of all these mathematical manipulations is best illustrated by Figure 3. It shows the Smith Predictor of Figure 2 with
the blocks rearranged. It also shows an estimate of the process variable (with both disturbances and deadtime) generated by
adding the estimated disturbances back into the disturbance-free process variable. The result is a feedback control system with
the deadtime outside of the loop.
The Smith Predictor essentially works to control the modified feedback variable (the predicted process variable with
disturbances included) rather than the actual process variable. If it is successful in doing so, and if the process model does
indeed match the process, then the controller will simultaneously drive the actual process variable towards the setpoint whether
the setpoint changes or a load disturbs the process.
Unfortunately, those are big "ifs." It's easier for the controller to meet its objectives without dealing with the deadtime, but it is not
always a simple matter to generate the process models needed to make the strategy work. Even the slightest mismatch
between the process and the model can cause the controller to generate an output that successfully manipulates the modified
feedback variable, but drives the actual process variable off into oblivion. There have been several fixes proposed to improve on
the basic Smith Predictor, but deadtime remains a particularly difficult control problem.
Terminology in Motion
Motion control, as other technologies, has its fair share of special terms. Not all of them are rigorously defined; some require
blending to suit the specific audience. Here's a sampling of some well-known and not-so-well-known terms.
A motion control system typically consists of a controller to process motion algorithms and signals; an amplifier to boost signals
to a level needed to power an actuator that provides the motion output; and feedback (sensors/transducers) to allow
adjustments for process changes, based on comparing measured output with the input.
An operator interface or host terminal front-end completes the system. Feedback implies that most motion control systems
operate in closed-loop; however, some run in open-loop, notably a step-motor-based system. Actuators come in various forms--
motors, cylinders, solenoids, etc.--and can be electric, hydraulic, pneumatic, or other type.
Axis--Any movable part of a machine or system that requires controlled motion. Several axes of motion can be combined in a
coordinated multiaxis system.
Circular interpolation--Coordination of two independent motion axes to produce an apparent circular motion. It's done through a
series of straight line approximations via software algorithms.
Commutation--Sequential excitation of motor windings to maintain the relative phase angle between the rotor and stator
magnetic fields, within specified limits, to control motor output. In brush dc motors, this function is accomplished by a
mechanical commutator and carbon brushes; in brushless motors, it's done electronically using rotor position feedback.
Electronic gearing--A method that simulates mechanical gears by electrically "slaving" one closed-loop axis to a second axis
(open- or closed-loop) through a variable ratio.
Encoder--A feedback device that translates mechanical motion into electrical signals indicative of actuator position. Incremental
and absolute encoders are common varieties; as the names imply, their output indicates incremental or absolute changes of
position.
Feedforward--A method that "precompensates" a control loop for known errors due to motor, drive, or load characteristics to
improve response. It depends only on the command, not the measured error.
Indexer--An electronic unit that converts high-level commands from a host computer, PLC, or operator panel into step and
direction pulses needed by a stepping motor driver.
Loop bandwidth--Maximum rate at which a control loop can respond to a change in a control parameter. It's indicative of loop
performance and is expressed in Hertz (Hz).
Motion profile--The velocity versus time (or position) relationship of the move made by a motion axis.
Overshoot--A system response where the output or result exceeds the desired value.
Pulse-width modulation--A switch-mode control method used in amplifiers and drivers to control motor voltage and current to
obtain higher efficiency than linear control. PWM refers to variable on/off times (or width) of the voltage pulses applied to the
transistors.
Quadrature--A technique that separates signal channels by 90° (electrical) in feedback devices. It is used with encoders and
resolvers to detect direction of motion.
Resolver--A position transducer that uses magnetic coupling to measure absolute shaft position during one revolution.
Servo mechanism--An automatic, closed-loop motion control system that uses feedback to control a desired output such as
position, velocity, or acceleration.
Tachometer--An electromagnetic feedback transducer providing an analog voltage signal proportional to rotational speed.
This material addresses numerous reader inquiries asking how to calculate tuning parameters. Users should understand that this material is not a substitute for formal training on process control loop analysis and tuning, but rather an introduction or refresher. Control Engineering provides this information as a service and makes no guarantees of its usefulness for a particular application. (See also the summary in the August "News.")
Recent articles in Control Engineering have addressed methods to improve control loop performance (see Feb. 99, p. 77, May
99, p. 91 and p. 99). In each article, information related to collecting and analyzing process variable information has been
explained, but little has been provided on how to use this information to calculate controller-tuning values. This has not been an
oversight—calculating controller tuning values requires knowing considerable information about the algorithm used by the
controller.
There are essentially three types of algorithms in use: ideal, parallel, and series. Ideal algorithms are generally found only in textbooks. Parallel control algorithms have three independent (parallel) calculations for Proportional (Gain), Integral, and Derivative. An advantage of the parallel construction is that changes to one value do not affect the other two; a disadvantage is that parallel algorithms are difficult to tune manually. Series control algorithms are constructed so that the output of one calculation is part of the input to the next calculation, so "upstream" calculation changes affect "downstream" calculations. This is frequently referred to as controller tuning interaction. Series control algorithms are the most commonly used in analog and digital controllers.
PID controllers operate on error feedback: the output normally changes whenever there is a difference between the PV and SP. However, it is not always advantageous for a controller to operate on an error signal. It is common practice to allow a controller to respond differently to SP changes versus load (PV) changes, so it is important to understand which algorithm variables will be affected when the SP is changed versus when the PV is changed. Continuous processes normally see PV (load) changes, while batch processes tend to have more SP changes. Depending on how the controller is being used, how the algorithm reacts to SP and PV changes, and how tuning constants are determined, it is possible for a controller to perform better one way than the other.
Technical libraries contain volumes on various ways to calculate controller-tuning values. One of the most efficient and
consistent ways to collect and analyze process data is to use software from companies like ControlSoft (Cleveland, O.),
Techmation (Scottsdale, Ariz.) or ExperTune (Hubertus, Wis.). Also, most control system manufacturers offer a variety of control
loop analysis and tuning software. When software isn’t available, some useful guidelines can be applied.
Test 2
1. Double the GAIN setting (leave INTEGRAL and DERIVATIVE the same as in Test 1).
2. Make a 10% change in SP.
3. Record the PV and CO responses. (If the process becomes unstable, place the loop in MANUAL and do what is needed to maintain control.)
4. If the recorded response is stable (lagging), repeat Test 2.
5. If the recorded response is unstable (leading), proceed to Test 3.
Test 3
1. Halve the GAIN setting (leave INTEGRAL and DERIVATIVE the same as in Tests 1 and 2).
2. Make a 10% change in SP.
3. Record the PV and CO responses. (If the process becomes unstable, place the loop in MANUAL and do what is needed to maintain control.)
4. If the recorded response is stable (lagging), proceed to Test 2.
5. If the recorded response is unstable (leading), repeat Test 3.
Source: Control Engineering
Once a marginally stable response is obtained, all the information necessary to calculate usable controller tuning constants is
available. The following table provides guidelines useful in determining usable tuning constants for P (proportional), PI
(proportional and integral), and PID (proportional, integral, and derivative) controllers.
Many people become very nervous when a controller is placed in automatic with tuning constants that produce cyclic, on the
brink of out-of-control response. For these nervous types, a method known as open loop (loop in manual) reaction curve testing
may be less stressful (see CE, May 99, p. 99).
The philosophy of open loop testing is to begin with a steady-state process, make a step change to the final control element and
record the results of the PV. Information produced by the open-loop test is the loop deadtime and the loop time constant. Users
must be accurate in determining times for T2, the point where the PV first begins to move, and T3, the point where the PV
attains 63.2% of the total PV change. Following an open-loop test the recorded information should closely resemble the Open
Loop Test Results diagram below.
A side benefit to conducting and recording the open-loop test is the establishment of a loop signature for future reference in
determining if the process has changed. For example, if the loop signature is for a temperature controller on a heat exchanger, a
significant change in the loop signature could indicate the heat exchanger is losing efficiency.
Using the results of the open-loop test to calculate controller-tuning constants requires dividing the percentage change in PV by the percentage change in CO to obtain an open-loop gain. The open-loop gain (OLG) and the control loop's inherent gain (IG) are used in the formula OLG × P(gain) = IG. With terms rearranged, the formula becomes P(gain) = IG ÷ OLG.
Consider the following guidelines when calculating controller-tuning constants (a worked sketch follows the table):
If a control loop has an inherent gain of one and the open-loop gain is four, then the proportional gain required for the controller is one divided by four, or 0.25. (Place 0.25 in the P constant of the controller.)
RULE #1: If loop dead time is less than or equal to ¼ of the loop time constant, then open-loop gain times process gain equals one.
COROLLARY #1: If loop dead time is approximately ½ of the loop time constant, then open-loop gain times process gain equals 0.5.
COROLLARY #2: If loop dead time is greater than or equal to the loop time constant, then open-loop gain times process gain equals 0.25.
NOTE: Controller scan times should be at least eight times faster than the loop time constant.
RULE #2: Integral time (IT) should be equal to the loop time constant (LTC); IT = LTC when LTC is expressed in repeats per minute.
RULE #3: Derivative time should be less than or equal to ¼ of the LTC.
Source: Control Engineering with data from Techmation Inc.
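Here is a worked sketch in Python that applies the guidelines above to a hypothetical open-loop test. The step size, PV change, deadtime, and time constant are assumed numbers, and ratios that fall between the stated rules are mapped to the nearest case, which is an interpretation rather than part of the published table.

# Sketch: controller tuning constants from an open-loop test (assumed data).
delta_co = 10.0     # % change made in the controller output
delta_pv = 40.0     # % change observed in the process variable
deadtime = 5.0      # loop deadtime, s (T2 minus the time of the CO step)
ltc = 30.0          # loop time constant, s (time to 63.2% of the PV change)

olg = delta_pv / delta_co          # open-loop gain = %dPV / %dCO (here 4.0)

ratio = deadtime / ltc             # Rule #1 and its corollaries set the inherent gain
if ratio <= 0.25:
    ig = 1.0
elif ratio >= 1.0:
    ig = 0.25
else:
    ig = 0.5                       # Corollary #1 value, used here for in-between ratios

p_gain = ig / olg                  # P(gain) = IG / OLG (here 0.25)
integral_time = ltc                # Rule #2: integral time equals the loop time constant
derivative_time = ltc / 4.0        # Rule #3: derivative time of at most 1/4 of the LTC

print(f"P = {p_gain:.2f}, I = {integral_time:.0f} s, D <= {derivative_time:.1f} s")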
Integrating processes
Integrating processes are those for which only one CO setting in manual mode produces a stable (balanced) PV. Level, batch
temperature, batch pressure, and pH tanks are examples of integrating processes. Expanding on the level example, with the
control loop in manual, only one CO setting allows the amount of liquid entering a vessel to exactly equal the amount of liquid
leaving the vessel. Any other CO setting will cause the level PV to integrate upward or downward.
Gathering data for integrating processes is best accomplished using an open loop test.
1. Find the balance point where vessel/process input is equal to vessel/process output.
2. Make a 10% to 20% change in the CO setting.
3. After the PV has integrated a 3% to 5% change, set the CO output back to the balance point value.
4. Repeat step two in the opposite direction.
5. Repeat step three.
The reason to conduct the test in both directions is that some loops (e.g., heating and cooling) will likely produce a different slope in each direction. When this is the case, the less aggressive slope should be used to determine controller-tuning constants, to prevent loop instability. When unexpected graphs are produced, likely causes are stiction or backlash in the control valve. Trying
to tune such loops is nearly impossible because the controller is attempting to overcome mechanical defects that likely will
become worse with time.
Integrating processes usually produce the best overall results with medium response tuning constants that allow some
overshoot. Use caution when applying derivative to integrating processes. If "excessive" hysteresis is found in a control valve,
use only slow PI tuning constants.
Controller tuning constants for integrating processes should utilize high gain and slow integral (small repeats per minute).
Cascade control loops are effective when trying to maintain tight control over slow moving variables. For example, boiler level
can be tightly maintained using a level controller cascaded to a flow controller.
To have an effective cascade control strategy the dynamics of the secondary loop must be at least five times faster than the
primary loop. (Dynamics are defined as loop dead time multiplied by loop time constant.)
1. Place secondary loop in automatic (disconnect the secondary loop from the primary loop).
2. Conduct test and tune secondary loop.
3. Place secondary loop in Remote SP (connect secondary and primary loop).
4. Conduct test and tune primary loop.
Note: Ensure secondary loop does not have setpoint limits or unnecessary assigned alarms.
When embarking on a journey to tune all loops in a process, work from the raw material end to the final product end beginning
with flows, then pressures, followed by levels, then temperatures, and finally what remains.
Contrary to popular belief, control loop tuning is a science. But it begins with analysis of each component in the loop to ensure
each piece of equipment in the loop is capable of performing at its best (see CE, Feb. 99, p. 77). Once the equipment is ready,
methods have been developed and repeatedly proven to work, but it takes knowledge and patience. The payoff for having every control loop performing at its best is a quality and production improvement of at least 5% that could go as high as 25%.
Ziegler-Nichols Methods
Facilitate Loop Tuning
Tuning a proportional-integral-derivative (PID) controller is a matter of selecting the right mix of P, I, and D action to achieve a
desired closed-loop performance (see "Basics of Proportional-Integral-Derivative Control," Control Engineering, March 1998).
The ISA standard form of the PID algorithm is:
CO(t) = P [ e(t) + (1/TI) ∫ e(t) dt + TD de(t)/dt ]
The variable CO(t) represents the controller output applied to the process at time t, PV(t) is the process variable coming from
the process, and e(t) is the error between the setpoint and the process variable. Proportional action is weighted by a factor of P,
the integral action is weighted by P/TI, and the derivative action is weighted by PTD where P is the controller gain, TI is the
integral time, and TD is the derivative time.
In 1942, John G. Ziegler and Nathaniel B. Nichols of Taylor Instruments (now part of ABB Instrumentation in Rochester, N.Y.)
published two techniques for setting P, TI, and TD to achieve a fast, though not excessively oscillatory, closed-loop step
response. Their "open loop" technique is illustrated by the reaction curve in the figure. This is a strip chart of the process
variable after a unit step has been applied to the process while the controller is in manual mode (i.e., without feedback).
A line drawn tangent to the reaction curve at its steepest point shows how fast the process reacted to the step input. The inverse
of this line's slope is the process time constant T. The reaction curve also shows how long the process waited before reacting to
the step (the deadtime d) and how much the process variable increased relative to the size of the step (the process gain K).
Ziegler and Nichols determined that the best settings for the tuning parameters could be computed from T, d, and K as follows:
P control: P = T/(Kd)
PI control: P = 0.9T/(Kd), TI = 3.3d
PID control: P = 1.2T/(Kd), TI = 2d, TD = 0.5d
Once these parameter values are loaded into the PID algorithm and the controller is returned to automatic mode, subsequent
changes in the setpoint should produce the desired "not-too-oscillatory" closed-loop response. A controller thus tuned should
also be able to reject load disturbances quickly with only a few oscillations in the process variable.
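As a worked example, the sketch below applies those reaction-curve settings to assumed values of T, d, and K read off a hypothetical strip chart:

# Sketch of the Ziegler-Nichols open-loop (reaction curve) calculation.
T = 20.0    # process time constant, s (inverse slope of the tangent line)
d = 4.0     # deadtime, s
K = 2.0     # process gain (PV change relative to the step size)

p_only = {"P": T / (K * d)}
pi_ctrl = {"P": 0.9 * T / (K * d), "TI": 3.3 * d}
pid_ctrl = {"P": 1.2 * T / (K * d), "TI": 2.0 * d, "TD": 0.5 * d}

for name, params in [("P", p_only), ("PI", pi_ctrl), ("PID", pid_ctrl)]:
    print(name, {key: round(val, 2) for key, val in params.items()})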
Ziegler and Nichols also described a "closed loop" tuning technique that is conducted with the controller in automatic mode (i.e.,
with feedback), but with the integral and derivative actions shut off. The controller gain is increased until any disturbance causes
a sustained oscillation in the process variable. The smallest controller gain that can cause such an oscillation is called the
ultimate gain Pu. The period of those oscillations is called the ultimate period Tu. The appropriate tuning parameters can be
computed from these two values according to these rules:
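The article's table of closed-loop settings is not reproduced in this compilation; the sketch below uses the commonly cited Ziegler-Nichols values for this test (P-only: 0.5·Pu; PI: 0.45·Pu with TI = Tu/1.2; PID: 0.6·Pu with TI = Tu/2 and TD = Tu/8), applied to assumed numbers for Pu and Tu.

# Sketch of the Ziegler-Nichols closed-loop (ultimate gain) calculation,
# using the commonly cited rule values; Pu and Tu below are assumed examples.
Pu = 6.0    # ultimate gain: smallest gain that sustains oscillation
Tu = 12.0   # ultimate period of those oscillations, s

p_only = {"P": 0.5 * Pu}
pi_ctrl = {"P": 0.45 * Pu, "TI": Tu / 1.2}
pid_ctrl = {"P": 0.6 * Pu, "TI": Tu / 2.0, "TD": Tu / 8.0}

for name, params in [("P", p_only), ("PI", pi_ctrl), ("PID", pid_ctrl)]:
    print(name, {key: round(val, 2) for key, val in params.items()})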
A reprint of Ziegler and Nichols' 1942 paper can be found in "Reference Guide to PID Tuning" (Control Engineering, 1991).
Note, however, that the tuning rules given in that paper differ from those shown here because Ziegler and Nichols were working with a slightly different form of the PID algorithm. Different PID controllers use different algorithms, and each must be tuned according to the appropriate set of rules. The rules also change when the derivative or the integral action is disabled.
If high-speed response is not required, any continuous process can be controlled easily enough. A feedback controller need
only measure the process variable, determine if it has deviated too far from the setpoint, apply the necessary corrective effort,
wait to see if the error goes away, and repeat as necessary. This closed-loop control procedure will eventually have the desired
effect provided the controller is sufficiently patient.
Unfortunately, patience is not generally considered a virtue in process control. A typical controller will apply a whole series of
corrective efforts well before its initial efforts have finished affecting the process. Waiting for the process to settle out every time
the controller makes a move generally leaves the process out of spec for so long that the controller becomes virtually useless.
Not so fast
On the other hand, a controller that tries to eliminate errors too quickly can actually do more harm than good. It may end up
over-correcting to the point that the process variable overshoots the setpoint, causing an error in the opposite direction. If this
subsequent error is larger than the original, the controller will continue to over-correct until it starts oscillating from 100% effort to
0% and back again.
This condition is commonly called closed-loop instability or simply hunting. An aggressive controller that drives the closed-loop
system into sustained oscillations is even worse than its overly patient counterpart because process oscillations can go on
forever. The process variable will always be too high or too low. Worse still, the oscillations can sometimes grow in magnitude
until pipes start bursting and tanks start overflowing.
Stabilizing techniques
The Ziegler-Nichols closed-loop method is arguably the most straightforward approach for designing stable control loops. It
applies to PID controllers, which can be made more or less aggressive by adjusting their proportional (P), integral (I), and
derivative (D) gains. The higher the gains, the harder the controller works to eliminate errors.
Ziegler and Nichols found that if they gradually turned up the proportional gain on a P-only controller it would eventually start
over-correcting and force the process into sustained oscillations. By reducing the gain by 50% at that point, the loop would
become stable again. Simple enough!
Less obvious is how to add integral and derivative action to make the controller even more responsive without risking closed-
loop instability. Ziegler and Nichols determined through trial and error that increasing the integral and derivative gains in a
prescribed manner would actually allow the proportional gain to be increased to as much as 75% of the value that caused
instability. Their famous "tuning rules" allowed control engineers for the first time to design two-term (PI) and three-term (PID)
controllers that would keep the closed-loop system stable, yet fast enough to eliminate errors in a timely manner.
A child on a swingset, for example, uses closed-loop instability to keep the swing going. By applying a control action while the
swing is still in motion (i.e., by "pumping"), the child can force the swing back and forth past its resting position. Conversely, a
process controller would try to keep the closed-loop system stable by forcing the magnitude of each oscillation to grow ever smaller.
Control Valves:
Sizing, Design, Characteristics
Control valves are devices with movable, variable, and controlled internal elements for modulating fluid flow in a conduit. The
valve restricts flow in response to the command signal from a process measurement control system. Basically, a control valve
consists of a pressure containment enclosure body and various internal elements--fixed and movable--commonly called the
valve trim.
While there are uncommon exceptions, control valves are designed to function in either a push-pull or linear sliding-stem
manner, or in a rotary-stem manner. The former is epitomized by the traditional globe body design, while the latter is most
commonly seen in the butterfly vane type--although also found in the ball, partial ball, plug, and rotary plug types.
Control valves can and have been fabricated from practically every known metal, metal alloy, and modern engineered plastics or
polymers. Essentially, anything that can be cast, forged, molded, or machined can be used for a control valve and its internal
parts. Fluids from the most benign to corrosive, with pressures ranging from ultra-high vacuum to ultra-high pressure, and with
temperatures from cryogenic cold to extremely hot have been handled. All it takes is full recognition of the needed parameters,
engineering ingenuity and, of course, money.
Control valves can be built in sizes to handle liquid flow, from tiny fractions of a cubic centimeter to thousands of gallons per
minute, or gases from bubbles to millions of cubic feet per hour. Body size will range from about one-eighth inch to five feet or
larger. Despite these extremes, a given valve can exhibit a considerable degree of flow control turndown over its operating
range depending upon the process application and factors involved. Numerous ISA (Research Triangle Park, N.C.) and ANSI
(New York) standards detail various control valve considerations.
Proper application of control valves requires a combination of engineering knowledge, broad experience, and sensitivity to the
aspects of the art involved. It isn't always cut and dried, doesn't always follow a formula or book or rules, and certainly doesn't
benefit from being a casual afterthought to the system design. Good control-valve application requires broad knowledge of
control valve types, design details and operating characteristics.
Beyond that, the application engineer requires a good understanding of aspects of control dynamics, thermodynamics,
hydraulics, fluid dynamics, fluid physical properties, metallurgy, codes and standards, seals and gaskets, and finally for those
really tough service applications, a good dose of ingenuity and imagination for a solution.
Control valves are the final control element. As such, they are just as important to the proper functioning of the control-loop
system as the primary measurement device and the controller. It does little good to specify and purchase the most sophisticated
and capable measurement elements and controllers if the control valve is given short shrift and viewed as just a simple piece of
"pig iron" hardware. Unfortunately, too many engineers don't seem to understand this.
Even worse, there is a tendency to continue this backward way of thinking and settle for the lower cost of outmoded control
valve designs to avoid the expense of redesign.
The National Electrical Code (NEC) defines a ground as "a conducting connection, whether intentional or accidental, between
an electrical circuit or equipment and the earth, or to some conducting body that serves in place of the earth." A ground loop can
be defined as any objectionable current flowing in a circuit's ground or return path. Here is a short guide that will help identify
possible sources of ground loops in your electrical systems and how to solve them.
The simplest ground loop involves connection between two different earth grounds as shown in the top figure. With earth ground
#1 at one potential, and earth ground #2 at a different potential, a ground loop current will flow in the loop as indicated. While
NEC requires grounding electrodes to be connected together and metal parts to be bonded together, there will still be
differences in ground potentials in the system. The further apart the connections, the more likely there will be a significant
potential difference.
One common cause of ac power ground loops is the double bonding of the neutral. The NEC requires the neutral to be bonded
to ground at only one place, either the service entrance or source (for separately derived systems), or at the first disconnect or
overcurrent device. Double bonding of the neutral usually occurs in downstream distribution panels. When the neutral is double
grounded, returning neutral current will split per Ohm's law and will flow in the ground circuit. This current can cause varying
voltage reference to equipment in the system. Remove the illegal neutral to ground bonds and the ground loop will be
eliminated.
DC power systems used for instrument and loop power are subject to a number of possible ground loops. This type of dc power
system has its return path or negative side grounded in only one place. One common ground loop occurs when a grounded
thermocouple is used without isolated inputs or an isolated transducer. Since the grounded thermocouple is typically a long
distance from the dc power system's reference ground, a substantial difference in ground potential can exist. Large currents can
flow causing varying reference potentials in the system, which can sometimes cause strange effects. The solution to this type of
ground loop is to use an ungrounded thermocouple or to isolate the thermocouple ground from the instrument system ground by
using an isolator, an isolated transducer, or isolated inputs. Generally, it is good practice to isolate even when using ungrounded
thermocouples.
The shield drain wire of an instrument signal cable is another place susceptible to ground loops. A shield wire is normally
grounded only at the zero-signal reference point of the circuit, which is normally the dc instrument power system reference point.
If any intermediate point on the shield becomes grounded, a ground loop will be formed. Not only will this ground loop corrupt the dc reference, but current will flow in the shield, which will generate noise in the signal wires. Care must be taken to ensure that the
field portions of the shields are terminated properly and that they are not exposed to environmental conditions that might cause
a sneak path from the shield to ground.
Two systems which communicate digitally to each other and are referenced to ground at two physically different points within
the same grounding electrode system are commonly prey to ground loops. This type of ground loop is solved by using isolated
communication devices or preferably using a fiber-optic link.
The method for solving ground loop problems is generally twofold: remove any extra grounds so that there is only one ground in the system, and if there must be more than one ground, make sure to isolate each from the others.
Choosing a digital pressure indicator used to be a relatively simple job. Like the early days of camcorders and VCRs, only a few
manufacturers supplied them, with limited features, and they tended to be expensive. The past decade has seen improvements
in pressure sensing technology (better accuracy, lower cost) along with an explosion in electronics and firmware integration and
packaging. The good news is that users now enjoy a multitude of general purpose and application specific products, but this
variety of solutions makes the selection process a bit more complex.
Today, the term "digital pressure indicator" can cover a lot of ground. The general marketplace uses it to describe anything from
traditional bench- or panel-mount devices to handheld devices and transducers used with digital panel meters and acquisition
systems.
A digital pressure indicator is defined as a package that includes an integrated pressure sensor and digital display in a bench- or
panel-mounting enclosure. The application for which a digital pressure indicator is being selected is key. It will ultimately drive
the choice of a given product.
There are two major challenges facing the specifying engineer. Challenge number one is understanding the application requirements. This requires a well-defined set of performance and results criteria, which in turn requires a good understanding of manufacturers' specs and jargon. Challenge number two is wading through the multitude of manufacturers' specifications and industry-specific jargon. This apparent paradox does have a solution if the user employs a selection guide to help simplify the process.
Selection Guide
There are key questions to ask about an application. The answers will prompt essential questions to ask of a potential supplier
about a product and its performance. Some key terms and concepts are defined within the following paragraphs while the rest
are included in the "key terms" section.
The answer to this question will indicate what type of enclosure is required, and whether or not an agency approval (FM, NEMA,
etc.) is required.
The answer to this question will indicate speed requirements (update rate) and output requirements (data logging, control, etc.).
____ Reactive Vapor
____ Conductive Gas
Process media is important for two reasons. First is the obvious safety issue. Is the pressure media compatible with the
product’s wetted materials? A mismatch of process media to wetted material could result in corrosion of the sensing element,
resulting in erroneous measurement data and/or rupture of the sensing element. Second is a less apparent issue relating to
conductive gases and humidity. Conductive gas can have a significant effect on performance for some types of semiconductor
based sensors because the fundamental output signal can be altered by the effect of a conductive gas on the sensing bridge or
capacitor.
The designation of pressure type is based on the location of the datum point and direction of measurement relative to
atmospheric pressure as shown below.
It is important to understand how the manufacturer supports a product’s accuracy specification. This requires a conceptual
understanding of how display resolution, pressure range, and the accuracy statement interrelate.
The display resolution must be high enough to support the accuracy statement. Display resolution is usually described in terms
of "numbers of digits," for example, "3 and ½ digit, 4 and ½ digit, 5 full digit," and so on. This describes how many numeric
symbols the display can represent. A half digit can only represent a 1 or "off" and is always in the leading or most significant
digit position. A full digit can represent any numeric symbol from 0 through 9. For example, a 3-digit display can represent a
maximum numeric value of 999, a 3½ digit display can represent a maximum numeric value of 1999, a 4½ digit display can
represent 19999, and a 5 full digit display can represent 99999.
An equally important aspect of display resolution is found in accuracy specifications that provide a percent of reading or full-
span rating with an additional plus or minus one LSD (least significant digit). This can contribute significant additional
uncertainty. For example, although a 4½ digit display can represent 19999 "counts," a 300 PSI range with a 4½ digit display
would be configured to provide a maximum display resolution of one part in 3000 so that the least significant digit increments in
non-fractional values. Therefore, the least significant digit contributes an additional ± 0.033% uncertainty. However, a 300 psi
range with a 3½ digit display would be configured to provide a maximum display resolution of 1 part in 300 which increases the
additional uncertainty contributed by the least significant digit to ± 0.33%.
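A small sketch of that arithmetic, using the 300 psi example from the text; the helper function and its name are illustrative only:

# Extra uncertainty contributed by one count of the least significant digit.
def lsd_uncertainty(full_scale, display_counts):
    """Return the +/- percent-of-span uncertainty of one display count."""
    # Add decimal places until one more place would exceed the display's counts
    usable_counts = full_scale
    while usable_counts * 10 <= display_counts:
        usable_counts *= 10
    return 100.0 / usable_counts

print(f"4.5-digit display, 300 psi range: +/- {lsd_uncertainty(300, 19999):.3f}% of span")
print(f"3.5-digit display, 300 psi range: +/- {lsd_uncertainty(300, 1999):.2f}% of span")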
Required pressure range(s)?
From _____ to _____
It is generally recommended that a full scale pressure range is specified so that the typical operating pressure occurs between
25 and 75 percent of the full scale. For example, if a typical process measurement is required at 50 psi, a full scale range of 100
psi would be ideal.
There are products available that offer very good accuracy specifications but provide a limited choice of ranges. This is because
it "makes sense" to offer ranges which optimize display resolution. For example, a 4½ digit display can provide a maximum of
19999 counts. Therefore "two’s" ranges like 0/2, 0/200, 0/2000 complement a display resolution of 19999 counts. Each of these
ranges will use the maximum available display resolution; however, if the application calls for a range of 0/300, only 3000 counts will be used. Clearly, the result will be limited resolution and a possible conflict between the unit's accuracy rating and the application
accuracy requirement.
Understanding the elements of an accuracy statement that are most important to your application is very important, so this
question involves some additional work.
By generally accepted standards (e.g. ASME B40.7, ANSI/ISA S51.1, and others), an accuracy statement typically includes the
effects of linearity, repeatability and hysteresis (see key terms). The linearity component is handled in different ways depending
on the manufacturer.
The technique used to describe linearity can have a significant effect on the total accuracy value. Generally accepted
techniques include independent (best fit straight line), zero-based, and terminal-based linearity. They are characterized by the
following:
Independent Best Fit Straight Line: A straight line is fit to a series of data points taken along the instrument's FS range in such a
way as to minimize the maximum deviation of any one value. This method can reduce the stated nonlinearity by as much as
50%.
Zero-Based: A straight line, fixed at the zero point, is fit to a series of data points taken along the instrument's FS range in such a
way as to minimize the maximum deviation of any one value.
Terminal-Based: A straight line, fixed at the actual zero and full span values, is used as the datum point to determine the
deviation of each reading. The maximum deviation of the individual readings is used to describe the linearity. This method tends
to represent actual performance more reliably than the preceding methods.
Another consideration is which elements are included and how they are combined. A manufacturer’s accuracy specification will
generally include the effects of non-linearity, non-repeatability and hysteresis. These can be presented as an algebraic sum or a
root sum of the squares (RSS). RSS will yield a "better" accuracy number, but it can somewhat misrepresent some of the instrument's actual performance characteristics. The following example provides a comparison of three methods of
presenting an accuracy specification based on the same data set for Nonlinearity (terminal point), hysteresis, and
nonrepeatability.
Nonlinearity: 0.07% FS
Hysteresis: 0.02% FS
Nonrepeatability: 0.01% FS
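A short sketch showing how those three components combine under an algebraic sum versus a root sum of squares:

# Combining the accuracy components above (values in % FS, from the text).
import math

nonlinearity = 0.07
hysteresis = 0.02
nonrepeatability = 0.01

algebraic_sum = nonlinearity + hysteresis + nonrepeatability
rss = math.sqrt(nonlinearity**2 + hysteresis**2 + nonrepeatability**2)

print(f"components listed separately: {nonlinearity}, {hysteresis}, {nonrepeatability} % FS")
print(f"algebraic sum:              +/- {algebraic_sum:.2f} % FS")
print(f"root sum of squares (RSS):  +/- {rss:.3f} % FS")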
This example illustrates the importance of understanding a manufacturer’s accuracy specification and the method used to
express it.
Accuracy as a percent of reading is defined as the difference between an instrument's indicated value (IV) and a known standard value (SV) based on individual reading (R) values where:
% R = (SV - IV) x 100 / R
Accuracy as a percent of span is defined as the difference between an instrument's indicated value (IV) and a known standard value (SV) based on the instrument's full span (FS) where:
% FS = (SV - IV) x 100 / FS
Changes in ambient temperature can have a significant effect on accuracy depending on how the instrument is compensated for
change in temperature. It is important to understand the manufacturer’s specification.
There are products which offer an accuracy specification that includes the effects of temperature, for example, ± 0.05% FS
including temperature from 20° to 120°F. Other products specify the temperature error separately, for example, ± 0.01% of FS
per °F from 73°F for zero and span.
Most pressure instruments cannot be compensated for the effects of extreme process media temperatures. For example,
pressure measurements on superheated steam lines are a common requirement. In this case a pig tail siphon is installed
between the pressure instrument and the steam line. The pig tail siphon provides additional surface area sufficient to cool and
condense the process media to ambient temperature conditions that can be handled by the pressure sensor. Although process
media temperature can be an important issue, keep in mind that the plumbing and process connector materials absorb a good
deal of thermal energy in a dead-end measurement system.
This specification will typically be based on manufacturer’s data taken over a given period of time under reference or controlled
conditions. The "real world" of applications introduces a variety of conditions which can affect the manufacturer’s specification
such as temperature extremes, shock, vibration, and power interruption.
Output Signals
Selecting a digital output signal is a function of the type of data acquisition equipment and or the requirement for multi-drop
(multiple devices on the same line) operation. A digital output will typically mirror the digital display information.
Proportional analog output signals will also be determined by the requirements of the interface equipment. However, unlike
digital outputs an analog output signal may or may not mirror the digital display’s value. Often an analog output is required to
gain higher resolution or faster response.
How is the analog signal generated?
Some instruments offer an analog output signal that is derived directly from the sensor prior to digital correction and modeling.
In this case the accuracy of the analog signal may be less than that of the digital display data. In other cases, the analog signal
is created from the digitally corrected data utilizing a DAC (digital to analog converter). In this case the signal will carry the same
accuracy specification as the digital display data but will often be slower and have less resolution.
The bottom line is to ask the right questions and determine which of the techniques provides the best compromise.
Other Features
There are numerous optional features available which can enhance the usefulness and value of a product. Among these are:
• Alarm Relays Drivers;
• Engineering Unit Select;
• Max/Min Tracking;
• Data Logging; and
• Battery Power
It is beyond the scope of this guide to describe them all in detail, but the same selection principles apply. Defining the application
requirements to the point that specific questions can be asked of a manufacturer is the only way to be sure that the feature’s
capabilities will meet your expectations.
Key Terms
Accuracy—The difference between an instrument’s indicated value and a known value generated by an accepted standard.
Accuracy, Reference—The accuracy of an instrument under defined "standard" conditions of temperature, relative humidity and
mounting position.
Accuracy, Percent of Reading—An expression of the difference between an instrument’s indicated value (IV) and a known
standard value (SV) based on individual reading (R) values where:
% R = (SV - IV) x 100 / R
Accuracy, Percent of Span—An expression of the difference between an instrument’s indicated value (IV) and a known standard
value (SV) based on the instrument’s full span (FS) where:
% FS = (SV - IV) x 100 / FS
A/D Resolution—An analog-to-digital converter is a device that converts a sensor's pressure-proportional analog signal to a digital signal. A/D converters are described as 12 bit, 14 bit, 20 bit, etc. The resolution of a 14 bit A/D converter is 2^14, or 16384 "counts"; a 20 bit converter (2^20) provides 1048576 counts.
Display Resolution—The maximum numeric value that can be represented by a digital display.
For example:
3 digit, 4 and ½ digit, 5 full digit, etc. A half digit can only represent a 1 whereas a full digit can represent any numeric
symbol from 0 through 9. A 3-digit display can represent a maximum numeric value of 999, a 4 1/2 digit display can
represent 19999, and a 5 full digit display can represent 99999.
Note: A/D resolution and firmware will determine the number of counts used in driving the display.
Hysteresis—The difference in indicated pressure value at the same point when that point is approached first from an increasing and then from a decreasing pressure, on the same pressure excursion.
Linearity—How closely a set of pressure readings, taken along the span of an instrument, approximates a straight line.
Linearity, Independent—A straight line is fit to a series of data points taken along the instrument's FS range in such a way as to
minimize the maximum deviation of any one value.
Linearity, Zero-Based—A straight line, fixed at the zero point, is fit to a series of data points taken along the instrument's FS
range in such a way as to minimize the maximum deviation of any one value.
Linearity, Terminal (end) Point—A straight line, fixed at the actual zero and full span values, is used as the datum point to
determine the deviation of each reading. The largest deviation of an individual reading is used to describe the linearity.
Repeatability—The difference in indicated pressure value observed over a number of readings taken at the same point, approached from the same (increasing or decreasing) direction.
Sample Rate, Conversion—A specification used to describe the speed of an A/D converter. This value is related to the display
update rate.
Span—The algebraic difference between the upper and lower range values.
For example:
0 to 100 psi, Span = 100 psi
-15 to 100 psi, Span = 115 psi
Temperature, Reference—The temperature at which calibration and certification of the indicator accuracy is performed.
Temperature Effect—Changes in the indicated value attributed to the effects of variations in ambient temperature (from
reference temperature) on an indicator’s electronics or the effects of process temperature on the indicator’s pressure sensor.
Update Rate, Display—The time required for the displayed data to be updated. Usually expressed in terms of milliseconds or
samples per second.
Suggested Reading
Each of the following publications provides additional information on specifications, applications, and terminology and
definitions.
• ANSI B40.2, Gauges - Pressure Indicating Dial Type - Elastic Element
• ANSI B40.7, Gauges - Pressure Digital Indicating
• ANSI/ISA-S51.1 - 1979, Process Instrumentation Terminology
The Fluid Controls Institute (FCI) is a non-profit trade association dedicated to the technical advancement and increased
understanding of fluid control, handling, and measurement equipment. The FCI traces its origins to 1921 and has been
known as the Fluid Controls Institute since 1955.
FCI standards and publications are designed to aid in the proper selection, application, and operation of fluid control, handling,
and measurement equipment. The following companies are members of the FCI Gauge Section: Ametek/U.S. Gauge; Dresser
Industries Inc.; ITT Conoflow; Moeller Instrument Co.; Noshok Inc.; Palmer Instruments Inc.; Sensor Development Inc.; 3D
Instruments Inc.; Trend Instruments Inc.; H. O. Trerice Co.; Weiss Instruments Inc.
Understanding the five essential pieces of a control loop contributes to process performance and can turn problem loops into
performing loops
Each control system control loop contains five pieces: a sensing element, transmitter, controller, final control element, and
process. Only when all five elements are performing their best will the control system meet expectations.
Frequently, process control requires controlling one of four variables: flow, pressure, temperature, or level.
Flow measurement devices available today are more forgiving in their installation requirements than devices available even five
years ago; so it may be that replacing a flow measurement device is justified on its ability to perform in the current installation.
When auditing existing flow measurement installations, information necessary to understand performance expectations
includes:
• Full range (0-100%) is the minimum and maximum flow that passes by the measurement point;
• Rangeability (turndown) is the ratio between the maximum and minimum control points;
• Repeatability, frequently described in percent, is the ability to achieve the same output when the same input, coming
from the same direction is provided; and
• Accuracy, often stated in either percent of full scale or percent of reading.
If, for example, flow was originally to be controlled at 1,400 gpm (gallons per minute) (5,300 lpm (liters per minute)) at ± 0.5%,
but current requirements are to control flow between 375 and 1,500 gpm (1,400 and 5,700 lpm) with an accuracy of ± 0.5%, the
audit needs to ensure the installed flow meter can provide 4:1 rangeability and 0.5% of reading accuracy.
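The audit arithmetic can be sketched in a few lines of Python. The gpm figures below are the ones from the example above; the installed-meter specification values are hypothetical placeholders used only to show the comparison.

def turndown(max_flow, min_flow):
    """Rangeability (turndown): ratio of maximum to minimum control points."""
    return max_flow / min_flow

required = turndown(1500.0, 375.0)     # the example above -> 4.0, i.e., 4:1
meter_turndown = 10.0                  # hypothetical spec of the installed meter
meter_accuracy = 0.5                   # hypothetical spec, % of reading
adequate = meter_turndown >= required and meter_accuracy <= 0.5
print(f"required turndown {required:.0f}:1; meter adequate: {adequate}")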
Flow measurement devices can be segregated into two categories: devices measuring flow rate and those measuring velocity.
Devices measuring flow are often head-type devices that depend on measuring pressure differences across an inline restriction (e.g., an orifice plate) and typically provide about 3:1 rangeability. Accurate flow measurement in head-type devices occurs as long as the pressure and/or temperature of the flowing medium remains at design (base or standard) conditions.
Velocity based measurement devices frequently provide measurements in mass, mole, or volume terms; and include magnetic-,
vortex-, mass-, and turbine-meters. Devices in the velocity category rely on the basic formula Mass = volume x density.
Mass-flow measurement relies on the laws of nature that prohibit a stream from accumulating or losing mass; thus mass-flow
measurements are independent of changes in temperature, pressure, or pipe size. Units used in mass-flow measurement are
usually pounds or kilograms with time periods of seconds, minutes, or hours such as pounds per hour (pph) or kilograms per
second (kg/s).
When volume measurements are made assuming design conditions, an inaccurate measurement is produced. Overcoming
these inaccuracies requires on-line compensation for density changes. For liquids, that means temperature compensation; and
for vapors or gases, that means pressure and temperature compensation. Volume measurements are usually in cubic feet, cubic meters, liters, or gallons with time periods of seconds, minutes, or hours, e.g., cfh (cubic feet per hour), m3/s (cubic
meters per second), or gpm (gallons per minute).
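For a gas, the on-line compensation described above can be sketched with the ideal-gas relationship; this is illustrative only, since a real transmitter would typically also apply a compressibility correction. The base-condition defaults (101.325 kPa, 288.15 K) and the example line conditions are assumptions.

def gas_volume_to_base(q_actual, p_abs_kpa, t_abs_k, p_base_kpa=101.325, t_base_k=288.15):
    """Convert an actual volumetric gas flow to base (standard) conditions (ideal gas)."""
    return q_actual * (p_abs_kpa / p_base_kpa) * (t_base_k / t_abs_k)

# Example: 1,000 m3/h measured at 300 kPa(a) and 350 K (hypothetical line conditions)
print(f"{gas_volume_to_base(1000.0, p_abs_kpa=300.0, t_abs_k=350.0):.0f} m3/h at base conditions")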
Mole-flow measurements are determined by the formula Mole flow rate = mass flow rate ÷ molecular weight of the fluid. Typical
units for mole measurements are moles per second (mol/s) and kilomoles per hour (kmol/h).
Rangeability of velocity based flowmeters is improving, but a few years ago the rules-of-thumb were: magnetic meters (30:1),
vortex devices (15:1), mass-flow meters (100:1), and turbine meters (10:1).
Osborne Reynolds (1842-1912) determined that turbulence influences the ability to obtain repeatable flow measurements. Reynolds developed a formula in which, when consistent basic units are assigned to each quantity, the ratio is a dimensionless number (see Reynolds number formula). Reynolds found that turbulence essentially disappears and flow becomes streamline (laminar) below about 2,000. Between 2,000 and 4,000, measurement performance is questionable. Above 4,000, flow is fully turbulent, which is good for measurement.
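The Reynolds number formula is referenced above but not reproduced; in its standard form it is Re = (density x velocity x pipe diameter) / dynamic viscosity, which is dimensionless when consistent units are used. The sketch below evaluates it and applies the 2,000/4,000 guidelines; the fluid properties are hypothetical.

def reynolds_number(density, velocity, diameter, viscosity):
    """density in kg/m3, velocity in m/s, pipe diameter in m, dynamic viscosity in Pa*s."""
    return density * velocity * diameter / viscosity

re = reynolds_number(density=998.0, velocity=1.5, diameter=0.1, viscosity=1.0e-3)
if re < 2000:
    regime = "laminar (streamline)"
elif re <= 4000:
    regime = "transitional - measurement performance questionable"
else:
    regime = "turbulent - suitable for most flowmeters"
print(f"Re = {re:.0f}: {regime}")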
Producing turbulent flow is not enough for accurate flow measurements; how the turbulence is developed also influences the
measurement. Valves, pumps, or piping configurations located close to flow sensors can cause unwanted flow stream
influences.
Whether a specific flowmeter installation can provide the required capability is best determined by referring to the specific device's installation instructions. When instructions are unavailable, "rules-of-thumb" may help decide if sufficient up- and downstream runs of pipe the same size as the flowmeter are installed (see Flow installation rules of thumb).
Error sources
Primary sensors are frequently connected to transmitters that are used as transducers, receiving information in one form and
converting it to another form. For example, an RTD (resistance temperature detector) primary sensor provides temperature measurements as a resistance (ohms per degree). Connected to a transmitter (transducer), that resistance value is converted to 4-20 mA and transmitted to an indicator, recorder, or controller.
A source of error in control loops occurs when transmitter calibrations are made at the electronics, and do not include the
sensor. For example, it is common to find older thermocouples have drifted several degrees. Substituting a calibrating source for
the thermocouple input will not reveal an inaccurate thermocouple.
Sensor interchangeability is another source of temperature measurement error. Standard temperature sensors allow for a
reasonable tolerance around an "ideal" sensor curve. Matched temperature sensors cost more but deliver significantly better
accuracy. Be aware that some manufacturers deliver high accuracy systems by matching the sensor and transmitter to form a
system. Extra care is required in maintaining systems using matched sensors to preserve the "paid for" capability.
Primary sensors are the number one influence on control-loop performance—but final control elements rank a close second.
Final control elements come in a variety of shapes and sizes including variable-speed drives, heaters, and valves. Valves
include globe, characterized ball, quarter-turn, butterfly, eccentric-disk, and knife gate.
Final control elements can be segregated into three performance classes: linear, equal percentage, and quick opening (see
Final element performance classes diagram). Linear elements include globe and eccentric-disk valves, variable-speed drives,
and heaters. Equal-percentage elements include globe, characterized ball, and butterfly valves. Included in the quick-opening
class are globe, quarter-turn ball, knife gates, and dampers.
Globe valves appear in all three classes because they can be fitted with a variety of plug, seat, and cage designs to meet a
broad range of applications; a point worth remembering as process audits are conducted and the need for changes analyzed.
Control valves have long been the primary final element installed to control flow, temperature, pressure, and level. Experience
reveals that about 60% of installed control loops reach 100% of the measured variable range with only 30% of the final control
element travel. That means a lot of businesses have bought more control valve capacity than necessary. When sizing and
selecting control valves, it is best to calculate minimum, maximum, and normal flow for all three characteristics (see Example
control valve calculations chart). With results side-by-side, the characteristic that provides the most uniform process gain is
easier to determine. A valve that's too small will not pass the required flow, while one that is too large may result in unstable
performance as it tries to control at very low increments of travel.
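A hedged sketch of that sizing check, using the common liquid relationship Cv = Q x sqrt(SG / dP) with Q in gpm and dP in psi; the flow and pressure-drop cases below are hypothetical placeholders, not values taken from this article.

from math import sqrt

def required_cv(q_gpm, specific_gravity, dp_psi):
    """Required valve Cv for liquid service: Cv = Q x sqrt(SG / dP)."""
    return q_gpm * sqrt(specific_gravity / dp_psi)

cases = {"minimum": (375.0, 12.0), "normal": (1400.0, 10.0), "maximum": (1500.0, 8.0)}
for name, (q, dp) in cases.items():
    print(f"{name:8s} flow {q:6.0f} gpm -> required Cv = {required_cv(q, 1.0, dp):6.1f}")

Comparing the minimum, normal, and maximum results side by side makes it easier to see whether the valve will be operating at very low increments of travel.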
Since the goal is to reduce process variability, ensuring smooth control valve performance is critical to success.
Making permanent improvements to control-loop performance requires verifying that data are repeatable. Processes unable to
repeat data often indicate problems in the measurement system and/or control valves. Two common causes of nonlinear
response in control valves are excess hysteresis and stick-slip.
Hysteresis is the inability of a device to return to a previously established position when the input to the device is repeated. In
control valves, hysteresis is distinguished from deadband by expecting that small reversals of input may not produce reversals
of valve travel. Integrating processes often demonstrate an oscillatory behavior caused by control valves with excess hysteresis;
self-regulating processes rarely do.
Sources to check for excess hysteresis include:
Stick-slip cycling occurs when controller integral action continuously increases the controller output without a corresponding
change in the actual valve position (stick phase). When the valve finally moves, it "pops" and the process variable overshoots
the setpoint (slip phase). Controller integral action drives the output in the other direction, setting up a distinctive continuous
oscillation. (See sidebar story Common cause of control loop cycling for more information.)
Controller influences
Controllers exist to maintain a measured variable equal to a setpoint. The effect filtering has on controller performance is a seldom-addressed topic. For example, transmitters often provide means of "snubbing" the measured variable. Snubbing is
accomplished using an adjustable orifice (or partially closed isolation valve) in the sensing line to reduce process pulsations
from reaching the sensing element.
Some transmitters provide electronic filtering of the output signal. Many digital control systems allow users to apply one or more
filters on input signals.
Regardless of the form, filters add lag to the signal and, when inappropriately used, can mask measurement variability and
create unsafe conditions.
If examination of the raw measured-variable input indicates frequent, random spikes which cannot be eliminated at their source,
then application of a filter as near the source of the noise as possible is appropriate. The amount of filtering should be the minimum necessary to remove the excessive noise, but not all of the noise, from the signal. Use extra care when applying filters to integrating processes. The
additional lag introduced by the filter can be canceled by the controller's derivative action.
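One common digital form of such a filter is a first-order (exponential) lag, sketched below; the scan period and filter time constant are hypothetical. The point is that a larger time constant trades noise rejection for added lag the controller must then work against.

def first_order_filter(raw, previous, dt, time_constant):
    """Discrete first-order lag; larger time_constant means heavier filtering and more lag."""
    alpha = dt / (dt + time_constant)
    return previous + alpha * (raw - previous)

filtered = 0.0
for raw in [0.0, 10.0, 10.5, 9.8, 10.2, 10.0]:      # noisy raw samples (hypothetical)
    filtered = first_order_filter(raw, filtered, dt=1.0, time_constant=5.0)
    print(round(filtered, 2))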
Temperature measurements seldom contain excessive noise because of process measurement lags. If high-frequency noise is
discovered on temperature measurement signals, the cause is likely improper shielding of thermocouple leads. The source of the noise should be fixed rather than masked with filters. When filters must be applied to temperature measurement signals, they
should be very small values.
Controller scan periods can be a source of poor control loop performance. As a rule-of-thumb, controller scan periods should be
at least eight times faster than the loop time constant (see Loop time constant diagram).
Conducting systematic audits of the five parts of existing control loops can pay big dividends in understanding the "knobs"
available to operations and the influence each contributes to process variability.
As many as one in five control loops demonstrates a continuous cycling at steady state when tuned with the optimum PI or PID
tuning parameters calculated using any of the popular methods including Lambda and Ziegler-Nichols. In most cases, the cycling
can be directly traced to nonlinear behavior of pneumatically actuated control valves. The two most common types of nonlinear
control valve responses are hysteresis with deadband and stick-slip.
Hysteresis with deadband will cause steady-state cycling in properly tuned integrating loops, while stick-slip causes the same in
self-regulating loops.
Stick-slip response is common in pneumatically actuated control valves using pneumatic positioners.
By design, pneumatic positioners are nonlinear devices. When a constant ramp input signal is applied to a pneumatic positioner,
the gain is small and loads the actuator slowly. When the ramp input exceeds a predetermined value, the gain increases and
loads the actuator dome at a faster rate.
Stick-slip occurs when the controller integral action continuously increases the controller output without a corresponding change
in the actual valve position (see Closed loop stick-slip cycling illustration). When the valve finally moves, it pops and the process
variable overshoots the setpoint. The error becomes negative and the controller integral action drives the output in the other
direction. This results in the distinctive continuous limit cycle known as a stick-slip cycle. The process variable appears as a
square wave oscillating around the setpoint. The controller output appears as a triangular wave with a frequency dependent on
the tuning parameters, the valve, and the process gain.
Detuning the integral setting eliminates stick-slip cycling but also slows the control loop's ability to respond to setpoint changes.
Techmation Inc. (Scottsdale, Ariz.) has developed a deadband reset scheduling (DSR) algorithm that adjusts controller integral
settings depending on the size of the error between the setpoint and process variable.
A feedback controller is designed to generate an output that causes some corrective effort to be applied to a process so as to
drive a measurable process variable towards a desired value known as the setpoint. The controller uses an actuator to affect the
process and a sensor to measure the results.
Virtually all feedback controllers determine their output by observing the error between the setpoint and a measurement of the
process variable. Errors occur when an operator changes the setpoint intentionally or when a disturbance or a load on the
process changes the process variable accidentally. The controller’s mission is to eliminate the error automatically.
A mechanical flow controller manipulates the valve to maintain the downstream flow rate in spite of the leakage. The size of the
valve opening at time t is V(t). The flowrate is measured by the vertical position of the float F(t). The gain of the controller is A/B.
This arrangement would be entirely impractical for a modern flow control application, but a similar principle was actually used in
James Watt’s original fly-ball governor. Watt used a float to measure the speed of his steam engine (through a mechanical
linkage) and a lever arm to adjust the steam flow to keep the speed constant.
An example
Consider for example, the mechanical flow controller depicted above. A portion of the water flowing through the tube is bled off
through the nozzle on the left, driving the spherical float upwards in proportion to the flow rate. If the flowrate slows because of a
disturbance such as leakage, the float falls and the valve opens until the desired flow rate is restored.
In this example, the water flowing through the tube is the process, and its flowrate is the process variable that is to be measured
and controlled. The lever arm serves as the controller, taking the process variable measured by the float’s position and
generating an output that moves the valve’s piston. Adjusting the length of the piston rod sets the desired flowrate; a longer rod
corresponds to a lower setpoint and vice versa.
Suppose that at time t the valve opening is V(t) inches and the resulting flowrate is sufficient to push the float to a height of F(t)
inches. This process is said to have a gain of Gp = F(t)/V(t). The gain of a process shows how much the process variable
changes when the controller output changes. In this case,
F(t) = Gp x V(t) [1]
Equation [1] is an example of a process model that quantifies the relationships between the controller’s efforts and its effects on
the process variable.
The controller also has a gain Gc, which determines the controller's output at time t according to
V(t) = Gc x (Fmax - F(t)) [2]
The constant Fmax is the highest possible float position, achieved when the valve’s piston is completely depressed. The
geometry of the lever arm shows that Gc = A/B, since the valve’s piston will move A inches for every B inches that the float
moves. In other words, the quantity (Fmax - F(t)) that enters the controller as an input "gains" strength by a factor of A/B before it
is output to the process as a control effort V(t).
Note that controller equation [2] can also be expressed as
V(t) = Gc x (Fset - F(t)) + VB [3]
where Fset is the desired float position (achieved when the flow rate equals the setpoint) and VB = Gc (Fmax - Fset) is a constant
known as the bias. A controller’s bias represents the control effort required to maintain the process variable at its setpoint in the
absence of a load.
Proportional control
Equation [3] shows how this simple mechanical controller computes its output as a result of the error between the process
variable and the setpoint. It is a proportional controller because its output changes in proportion to a change in the measured
error. The greater the error, the greater the control effort; and as long as the error remains, the controller will continue to try to
generate a corrective effort.
So why would a feedback controller have to be any more sophisticated than that? The problem is a proportional controller tends
to settle on the wrong corrective effort. As a result, it will generally leave a steady state error (offset) between the setpoint and
the process variable after it has finished responding to a setpoint change or a load.
This phenomenon puzzled early control engineers, but it can be seen in the flow control example above. Suppose the process
gain Gp is 1 so that any valve position V(t) will cause an identical float position F(t). Suppose also the controller gain Gc is 1 and
the controller’s bias VB is 1. If the flow- rate’s setpoint requires Fset to be 3 inches and the actual float position is only 2 inches,
there will be an error of (Fset - F(t)) = 1 inch. The controller will amplify that 1 inch error to a 2 inch valve opening according to
equation [3]. However, since that 2 inch valve opening will in turn cause the float position to remain at 2 inches, the controller
will make no further change to its output and the error will remain at 1 inch.
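The offset can be verified numerically. The sketch below simply repeats the measure-decide-actuate cycle with the numbers used above (Gp = 1, Gc = 1, VB = 1, Fset = 3 inches) and shows the float settling one inch short of the setpoint.

Gp, Gc, VB, Fset = 1.0, 1.0, 1.0, 3.0

F = 2.0                          # current float position, inches
for _ in range(10):              # repeat the measure-decide-actuate cycle
    V = Gc * (Fset - F) + VB     # controller, equation [3]
    F = Gp * V                   # process, equation [1]

print(f"float settles at {F} in., steady-state error = {Fset - F} in.")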
The same mechanical controller now manipulates the valve to shut off the flow once the tank has filled to the desired level Fset.
The controller's gain of A/B has been set much lower, since the float position now spans a much greater range.
Integral control
Even bias-free proportional controllers can cause steady-state errors (try the previous exercise again with Gp = 1, Gc = 2, and VB
= 0). One of the first solutions to overcome this problem was the introduction of integral control. An integral controller generates
a corrective effort proportional not to the present error, but to the sum of all previous errors.
The level controller depicted above illustrates this point. It is essentially the same float-and-lever mechanism from the flow
control example except that it is now surrounded by a tank, and the float no longer hovers over a nozzle but rests on the surface
of the water. This arrangement should look familiar to anyone who has inspected the workings of a common household toilet.
As in the first example, the controller uses the valve to control the flowrate of the water. However, its new objective is to refill the
tank to a specified level whenever a load (i.e., a flush) empties the tank. The float position F(t) still serves as the process
variable, but it represents the level of the water in the tank, rather than the water’s flowrate. The setpoint Fset is the level at which
the tank is full.
The process model is no longer a simple gain equation like [1], since the water level is proportional to the accumulated volume
of water that has passed through the valve. That is
F(t) = Gp x ∫ V(t) dt [4]
Equation [4] shows that tank level F(t) depends not only on the size of the valve opening V(t) but also on how long the valve has
been open.
The controller itself is the same, but the addition of the integral action in the process makes the controller more effective.
Specifically, a controller that contains its own integral action or acts on a process with inherent integral action will generally not
permit a steady-state error.
That phenomenon becomes apparent in this example. The water level in the tank will continue to rise until the tank is full and the
valve shuts off. On the other hand, if both the controller and the process happened to be pure integrators as in equation [4], the
tank would overflow because back-to-back integrators in a closed loop cause the steady-state error to grow without bound!
The blue trace on this strip chart shows the error between the process variable F(t) and its desired value Fset. The derivative
control action in red is the time derivative of this difference. Derivative control action is zero when the error is constant and
spikes dramatically when the error changes abruptly.
Derivative control
Proportional (P) and integral (I) controllers still weren’t good enough for early control engineers. Combining the two operations
into a single "PI" controller helped, but in many cases a PI controller still takes too long to compensate for a load or a setpoint
change. Improved performance was the impetus behind the development of the derivative controller (D) that generates a control
action proportional to the time derivative of the error signal.
The basic idea of derivative control is to generate one large corrective effort immediately after a load change in order to begin
eliminating the error as quickly as possible. The strip chart in the derivative control example shows how a derivative controller
achieves this. At time t1, the error, shown in blue, has increased abruptly because a load on the process has dramatically
changed the process variable (such as when the toilet is flushed in the level control example).
The derivative of the error signal is shown in red. Note the spike at time t1. This happens because the derivative of an abrupt, step-like change is a brief but very large impulse. However, since the error signal is much
more level after time t1, the derivative of the error returns to roughly zero thereafter.
In many cases, adding this "kick" to the controller’s output solves the performance problem nicely. The derivative action doesn’t
produce a particularly precise corrective effort, but it generally gets the process moving in the right direction much faster than a
PI controller would.
Fortunately, the proportional and integral actions of a full "PID" controller tend to make up for the derivative action’s lack of
finesse. After the initial kick has passed, derivative action generally dies out while the integral and proportional actions take over
to eliminate the remaining error with more precise corrective efforts. As it happens, derivative-only controllers are very difficult to
implement anyway.
On the other hand, the addition of integral and derivative action to a proportional-only controller has several potential
drawbacks. The most serious of these is the possibility of closed-loop instability (see "Controllers must balance performance
with closed-loop stability," Control Engineering, May 2000). If the integral action is too aggressive, the controller may over-
correct for an error and create a new one of even greater magnitude in the opposite direction. When that happens, the controller
will eventually start driving its output back and forth between fully on and fully off, often described as hunting. Proportional-only
controllers are incapable of such behavior.
Another problem with the PID controller is its complexity. Although the basic operations of its three actions are simple enough
when taken individually, predicting just exactly how well they will work together for a particular application can be difficult. The
stability issue is a prime example. Whereas adding integral action to a proportional-only controller can cause closed-loop
instability, adding proportional action to an integral-only controller can prevent it.
PID in action
Revisiting the Flow control example, suppose an electronic PID controller capable of generating integral and derivative action as
well as proportional control has replaced the simple lever arm controller. Suppose too a viscous slurry has replaced the water so
the flow rate changes gradually when the valve is opened or closed.
Since this viscous process tends to respond slowly to the controller’s efforts—when the process variable suddenly differs from
the setpoint because of a load or setpoint change—the controller’s immediate reaction will be determined primarily by the
derivative action, as shown on the Derivative control example. This causes the controller to initiate a burst of corrective efforts
the instant the error moves away from zero. The change in the process variable will also initiate the proportional action that
keeps the controller’s output going until the error is eliminated.
After a while, the integral action will begin to contribute to the controller’s output as the error accumulates over time. In fact, the
integral action will eventually dominate the controller’s output, since the error decreases so slowly in a sluggish process. Even
after the error has been eliminated, the controller will continue to generate an output based on the accumulation of errors
remaining in the controller’s integrator. The process variable may then overshoot the setpoint, causing an error in the opposite
direction, or perhaps closed-loop instability.
If the integral action is not too aggressive, this subsequent error will be smaller than the original, and the integral action will
begin to diminish as negative errors are added to the history of positive ones. This whole operation may then repeat several
times until both the error and the accumulated error are eliminated. Meanwhile, the derivative term will continue to add its share
to the controller output based on the derivative of the oscillating error signal. The proportional action also will come and go as
the error waxes and wanes.
Now replace the viscous slurry with water, causing the process to respond quickly to the controller’s output changes. The
integral action will not play as dominant a role in the controller’s output, since the errors will be short lived. On the other hand,
the derivative action will tend to be larger because the error changes rapidly when the process is highly responsive.
Clearly the possible effects of a PID controller are as varied as the processes to which they are applied. A PID controller can
fulfill its mission to eliminate errors, but only if properly configured for each application.
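A minimal simulation can make this interplay concrete. The sketch below runs a discrete PID controller against a simple first-order ("sluggish") process model; the gains and process parameters are hypothetical, chosen only to illustrate the three actions working together rather than to represent any particular tuning method.

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.1, steps=300,
                 process_gain=1.0, time_constant=5.0):
    """Discrete PID driving a first-order-lag process; returns the PV history."""
    pv, integral, prev_error = 0.0, 0.0, 0.0
    history = []
    for _ in range(steps):
        error = setpoint - pv
        integral += error * dt                    # accumulated error (I)
        derivative = (error - prev_error) / dt    # rate of change of error (D)
        out = kp * error + ki * integral + kd * derivative
        prev_error = error
        # sluggish process: pv moves toward (gain x out) with a first-order lag
        pv += (process_gain * out - pv) * dt / time_constant
        history.append(pv)
    return history

response = simulate_pid(kp=2.0, ki=0.5, kd=1.0)
print(f"process variable after {len(response)} steps: {response[-1]:.3f}")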
Many different companies contribute to a typical system integration project. The following list divides them into several broad
categories and describes the general functions of each. These are by no means hard and fast job descriptions. Most companies
fit more than one definition, and some switch from category to category as their clients' needs change. Still others combine their
services under a single banner for clients that require a single point of contact. For more information about 1000 automation
integrators of all kinds, log on to the 1999 Automation Integrator Guide.
Application engineers - Application engineers that work for vendors or their distributors generally concentrate on applying the
vendor's equipment to a client's project. Some application engineering departments will offer design and implementation
services as well; others will provide little more than technical advice. A few will even work with products from competing vendors
if the client so desires.
Architect engineers (A&Es) - A&E firms often provide multi-discipline design services for an automation project, but generally do
not become involved with the actual implementation. A&Es typically concentrate on designing the buildings and the layout of the
automated facility, though some design control systems as well.
Consulting engineers - Consulting companies range in size from single individuals to huge multi-national corporations. They
provide consulting and design services in specific technical disciplines such as civil, mechanical, electrical, and automation
engineering. Larger consulting firms may also assume ultimate responsibility for completing the entire project. Individual
consultants and smaller consulting firms generally do not.
Electrical contractors - Electricians and technicians working for electrical contractors actually run the wires and hook up the
electrical equipment specified in the project's design. They may also design and build the required control panels.
Engineering constructors (E/Cs) - E/C firms are similar to A&Es, but they provide construction management services as well.
Though they may serve as the general contractor for a complete turnkey automation project, they generally delegate specific
design and implementation tasks to specialized subcontractors.
Independent system integrators, systems houses - To varying degrees, system integrators work on every aspect of an
automation project other than actually manufacturing the control equipment. They may design and implement the control system
required by an A&E's overall plant design. They may design the panels and electrical systems that the electrical contractor
implements. They may perform all of these functions themselves or subcontract pieces of a project to specialists such as panel
shops and software houses. A system integrator generally assumes ultimate responsibility for completing the entire project from
initial consultation through final check-out. Truly independent integrators do so without favoring any particular vendor's products.
Instrumentation contractors - These are the technicians who calibrate and install the field instrumentation at the foundation of
every automation system. Some work for E/Cs, A&Es, electrical contractors, process engineers, and system integrators; but
many work directly for the end user. They may also provide low voltage electrical engineering services.
Machine builders, original equipment manufacturers (OEMs) - Some automation projects require specialized machinery or other
equipment with built-in control systems. Specialty machine builders and OEMs generally perform the software and hardware
integration for the custom equipment they fabricate. Some machine builders may also integrate their custom control systems
into the overall plant control system.
Panel shops, panel builders - Panel builders assemble a project's control equipment into the cabinets or "panels" that sit next to
the automated machinery. Many system integrators, machine builders, and OEMs maintain their own in-house panel shops.
Other independent panel shops execute designs supplied by system integrators or A&Es.
Plant/process engineering contractors - These engineering firms can build material processing facilities or entire processing
plants for their clients. They may construct the required control systems themselves or delegate that job to a subcontractor such
as a system integrator or a vendor's application engineering department.
Service and repair technicians - Technical service companies such as these generally work on repairing or maintaining existing
equipment, including a plant's control system. Some may also contract their services to install, commission, and perhaps even
design new control systems. They may work for a system integrator, a panel shop, a distributor, or an electrical contractor; or
they may work directly for the end user.
Software houses, contract programmers - Automation software houses provide engineers to program the computers required for
an automation project (PLCs and DCSs for control; PCs for data acquisition, control, and simulation; business systems for data
analysis and archiving; etc.). They may use their own software products or specialize in configuring commercial software
packages. Many software houses are also value added resellers.
Value added resellers (VARs), value added distributors (VADs) - VARs buy products from a vendor, add something of value,
and resell the complete package to the end user. The value they add may be other compatible products or services such as
software configuration, troubleshooting, or complete system integration. VARs generally focus on a particular vendor's products
or a particular industry's applications. VADs also maintain product inventories and provide technical advice.
Flow measurement is one of the "big four" need-to-know process parameters (others are temperature, pressure, and level).
Closed-channel flowmeters are categorized by their operating technologies and fall into the following categories:
Turbine
Fluid passing through a turbine flowmeter spins a rotor. The rotational speed of the rotor is related to the velocity of the fluid.
Multiplying the velocity times the cross-sectional area of the turbine provides the volumetric flow rate. Turbine flowmeters
provide excellent measurement accuracy for most clean liquids and gases. Like positive displacement (PD) flowmeters, turbine meters create a
nonrecoverable pressure loss and have moving parts subject to wear.
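The velocity-to-volume conversion used by turbine and other velocity-based meters can be sketched as follows; the pipe bore and velocity values are hypothetical.

from math import pi

def volumetric_flow(mean_velocity_m_s, pipe_inner_diameter_m):
    """Volumetric flow (m3/s) = mean velocity x cross-sectional area."""
    area = pi * (pipe_inner_diameter_m / 2.0) ** 2
    return mean_velocity_m_s * area

q = volumetric_flow(2.0, 0.100)        # 2 m/s through a 100 mm bore (hypothetical)
print(f"{q:.4f} m3/s ({q * 3600:.1f} m3/h)")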
Ultrasonic
Transit-time sound velocity or Doppler frequency shift methods are used to measure the mean velocity of a fluid. Like other
velocity measuring meters, volumetric flow rate is determined by multiplying mean velocity times area. Besides being
obstructionless, ultrasonic flowmeters can also be non-intrusive if their sonic transducers are mounted on the outside of the
pipe. Good to excellent accuracy can be obtained for almost all liquids, including slurries. Pipe fouling will degrade accuracy.
Vortex Shedding
The frequency of vortices shed from a bluff body placed in the flow stream is proportional to the velocity of the fluid. Again,
velocity times area gives the volumetric flow rate. Vortex flowmeters provide good measurement accuracy with liquids, gases, or
steam. They have no moving parts and are fouling tolerant. Vortex meters can be sensitive to pipeline noise and require flow
rates high enough to generate vortices.
Thermal
Mass flow rate can be determined by measuring the temperature rise of a fluid ("heat gain") or the temperature drop of a heated
sensor ("heat lost"). Thermal flowmeters have no moving parts or orifices and provide good gas measurement accuracy.
Thermal is one of only a few technologies that measure mass flow rate; it is also one of the few technologies that can be used
for measuring gas flow in large pipes, ducts, or stacks. Measurement of the fluid temperature is also provided by thermal
technology.
Coriolis
Fluid flowing through a vibrating flow tube causes a deflection of the flow tube proportional to mass flow rate. Coriolis flowmeters
can be used to measure the mass flow rate of liquids, slurries, gases, or vapors. They provide excellent measurement accuracy.
However, the thin wall of the flow tube necessitates careful material selection to minimize corrosion or erosion effects.
Measurement of fluid density or concentration is also provided by Coriolis technology.
An accurate comparison of technology differences is the first step in flowmeter selection for a given application. Once
completed, device selection is aided by detailed comparison of product specifications/features and vendors’ service and support
policies.
Jeff Deane is director of engineering at
Fluid Components Intl., San Marcos, Calif.
Open-loop control offers some advantages
A feedback controller can keep an oven's temperature within acceptable ranges, sustain the pressure in a steam supply line as
demand fluctuates, and maintain a car's speed through an uphill climb. Every feedback controller has a different strategy for
accomplishing its particular mission, but all use some variation on the closed-loop control algorithm--measure a process
variable, decide if its value is acceptable, apply a corrective effort as necessary, and repeat the whole operation ad infinitum.
Disabling the feedback path in a closed loop control system generally reduces accuracy but may be necessary to stabilize the
loop.
Open loop controllers, on the other hand, do not use feedback. They apply a single corrective effort when so commanded and
assume that the desired results will be achieved. An oven may have a separate open-loop controller that opens and closes the
oven doors without verification. The steam supply system may have an emergency shutdown controller that automatically cuts
power and vents the lines when a dangerous over-pressure condition is detected.
Even feedback controllers must operate in the open-loop mode on occasion. A sensor may fail to generate the feedback signal
or an operator may take over the feedback operation to manipulate the controller's output manually.
Operator intervention is generally required when a feedback controller proves unable to maintain stable closed-loop control. For
example, a particularly aggressive pressure controller may overcompensate for a drop in line pressure. If the controller then
overcompensates for its overcompensation, the pressure may end up lower than before, then higher, then even lower, then
even higher, etc. The simplest way to terminate such unstable oscillations is to break the loop and regain control manually.
Expert operators
There are also many applications where experienced operators can make manual corrections faster than a feedback controller
can. Using knowledge of the process' past behavior, operators can manipulate process inputs now to achieve the desired output
values later. A feedback controller, on the other hand, must wait until the effects of its latest efforts are measurable before it
decides on the next appropriate control action. Predictable processes with long time constants or excessive deadtime are
particularly suited for open-loop manual control.
The principal drawback of open-loop control is accuracy loss. Without feedback, there is no guarantee that the control inputs
applied to the process will actually have the desired effect. If speed and accuracy are both required, open-loop and closed-loop
control can be applied simultaneously using a feedforward strategy. A feedforward controller uses a mathematical model of the
process to make its initial control moves like an experienced operator would. It then measures the results of its open-loop efforts
and makes additional corrections as necessary like a traditional feedback controller.
Feedforward is particularly useful when sensors are available to measure an incoming disturbance before it hits the process. If
its future effects on the process can be accurately predicted with the process model, the controller can take preemptive actions
to absorb the disturbance as it occurs.
For example, if a car equipped with cruise control and radar could see a hill coming, it could begin to accelerate even before it
begins to slow down. The car may not end up at the desired speed as it climbs the hill, but even that error can eventually be
eliminated by the cruise controller's normal feedback control algorithm. Without the advance notice provided by the radar, the
cruise controller wouldn't know that acceleration is required until the car had already slowed below the desired speed halfway up
the hill.
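A minimal sketch of the feedforward-plus-feedback idea, assuming a measured disturbance and a simple static model gain; the feedforward gain and PI tuning values below are hypothetical, and a real implementation would usually add dynamic (lead/lag and deadtime) compensation to the feedforward path.

def controller_output(setpoint, pv, disturbance, state,
                      kff=0.8, kp=1.5, ki=0.2, dt=1.0):
    """Feedforward term from a measured disturbance plus PI feedback trim."""
    feedforward = kff * disturbance              # preemptive move based on the model
    error = setpoint - pv
    state["integral"] += ki * error * dt         # accumulate the feedback trim
    return feedforward + kp * error + state["integral"]

state = {"integral": 0.0}
print(controller_output(setpoint=50.0, pv=48.0, disturbance=5.0, state=state))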
Pressure transducers are found in numerous OEM applications including appliances, on- and off-road vehicles, medical
equipment, industrial machinery and military/aerospace applications. They are also used widely in process control. Introduction
of the microprocessor has spurred both functionality and expansion in the use of pressure transducers over the past 15 years.
Selecting the proper one for any application requires a close look at the following criteria:
Need for isolation—One of the first questions to be asked is whether the transducer’s sensor needs to be isolated from the
medium being measured. If the medium is a clean and a noncorrosive gas or liquid then a nonisolated transducer is acceptable.
For corrosive, high-temperature, or viscous media, isolation is generally required. Frequently, a metal or ceramic diaphragm with
or without a fill fluid is incorporated. Diaphragm seals can be attached to most pressure transducers.
Accuracy—This key selection criterion is determined by the performance level required for the application. Pressure
instrumentation is available in a wide range of accuracies. Keep in mind, high accuracy devices usually have improved
performance both with temperature changes and over time. This greater stability comes at a premium price.
Pressure range—Commonly available ranges exist from vacuum to 60,000 psi—in steps—with vacuum, gauge, absolute, or
differential pressure references. When selecting a transducer’s range, it is desirable for the application’s normal operating
pressure to be 50-90% of the range chosen.
Temperature effects—Temperature changes have the greatest effect on a pressure transducer’s environmental performance.
Most manufacturers provide temperature compensation specifications that define thermal effects over a given range.
Performance shown as a coefficient or error band is guaranteed over that temperature range. Outside of that range, larger
errors should be anticipated.
Vibration/shock effects—Vibration and shock are highly application-specific environmental issues. They should be reviewed for fit with the manufacturer's specifications.
Electrical effects—Built-in radio frequency interference (RFI), electromagnetic interference (EMI), and electrostatic discharge
(ESD) protection are fast becoming a requirement for usage within today’s operating environments. “CE”- marked products
usually have RFI, EMI and ESD protection built into the transducer’s electronics.
Type of process connection—Pressure ports of 1/8-, 1/4-, 1/2-in. NPT and 7/16-in. straight threads are common in industrial
applications. Applications in low-pressure ranges may only require hose barbs or simple push-on connections. User preference
is typically dependent upon the industry and application.
Hydraulic Applications—When applying transducers in hydraulic systems, it may be necessary to consider use of “snubbers” to
dampen hydraulic spikes. These dampening devices prevent sensor failure due to over range readings from phenomena such
as “water hammer.”
Outputs—Transducer outputs are available in industry-standard, millivolt, voltage, or current signals. Digital outputs with
communication capability are available as well. Some of the more common outputs are 0-30 mV, 0-100 mV, 4-20 mA, 0-5 V dc
and 0-10 V dc. The 4-20 mA output is the simplest since it is usually a two-wire configuration. Other nonstandard outputs are
usually the result of specific requirements of a large-volume OEM.
Electrical connections—Electrical terminations possible include conduit, cable, circular, and DIN style. DIN-style connectors,
both full size and miniature, have become popular options across the application spectrum because they offer the convenience
of screw terminals and moderate cost.
Redundancy in Control
Gint Burokas
senior software engineer
Intellution Inc.'s Wizdom Controls, Naperville, Ill.
How is redundancy implemented for PC-based control? The first step is defining a physical interface to the real world that will
provide multiple computers controlling the same I/O system.
One way to provide multiple "controllers" is to implement a "mirroring backup" so that another system also collects all data. If the
system is controlling in real-time, having more than one CPU in the system is ideal, creating a bumpless control system. This is
a common requirement for redundancy.
For example, if the CPU fails in a system controlling a furnace, waiting for a replacement might destroy the product. Continual
process and CPU data exchange creates redundancy to ensure system reliability.
Taking advantage of Microsoft Windows NT's robustness and including features for redundancy in the software are key challenges
for PC-based control in factory-floor automation. To provide redundancy, PC-based control software acknowledges more than
one CPU on the system, whether it is distributed or centralized control.
Redundancy is achieved by updating one CPU with all I/O states, while another CPU interrupts to do control. Through this
transition, the software can identify which CPU is doing the control. Systems with PC-based control on two systems need to
share data, while one does the control and the other receives data. The two CPUs cannot compete with each other. CPU "hand
shaking" tells the control software which CPU system is operating.
Hot swapping
Hot swapping, similar to redundancy, allows the user to unplug one hard drive while on-line, allowing another drive to take over.
Some software can synchronize a state by passing the data space from one machine to another. In case of failure, control
switches to the backup processor.
With computer-based systems, redundancy is used for data storage and file servers. To secure data in file systems, computer
systems may use a redundant array of inexpensive disk drives, or techniques like disk mirroring, which protects data by
duplicating it on more than one disk drive. Industrial PCs use a paired drive or power supply that can be hot swapped or
exchanged during operation. Networking uses redundancy to tolerate failures, to increase likelihood of meeting tight time-
constraints, and to ration (based on task priorities) limited system bandwidth.
For such time-critical systems, redundancy is employed to secure the required bandwidth and fault-tolerance. Supervisory
control and data acquisition (SCADA), and distributed control systems use redundancy for sharing information and data. If either
server fails, the redundant system still gathers information. Some software allows viewing of plant activity from a human-
machine interface, or view node, should a SCADA node become unavailable by channeling data requests to a backup SCADA
node.
Programmable logic controllers (PLCs) also incorporate redundancy. At one level, the system employs multiple PLCs controlling
a single I/O bus so that if either CPU fails, the backup takes over.
PLCs can also be redundant when one I/O point connects two identical systems of a PLC, CPU, or I/O interface card. Care must
be taken to avoid confusion when reading or writing inputs and outputs. PLCs also use redundancy with multiple CPUs and
power supplies. Redundancy may require communicating among multiple CPUs and distributing output data back to servers.
I/O-based hardware can also provide redundancy.
While hardware can provide redundancy, we find that software provides fail-safe redundancy for mission-critical operations.
Terms
Bandwidth: range (usually Hertz) over which a system operates.
Bumpless: ability to change processors controlling a process (changeover) without affecting the process.
CPU: central processing unit.
Data space: where data reside.
Disk mirroring: data protection by duplication on disk drives.
Fault tolerance: design which allows continued system operation with some level of malfunction.
Hand shaking: contact among or between CPUs for identification.
Hot swap: exchange of components during operation.
RAID: redundant array of inexpensive disk drives.
Redundancy: duplication to enhance reliability.
Synchronize a state: ensuring frequencies of two systems are equal.
Every child who has ever held a spring upright knows that tugging on the top end causes the bottom end to start bouncing and
that repeated tugging keeps those oscillations going. Some may notice that even though both ends always oscillate at the same
frequency, the bottom end bounces higher at some frequencies than at others. Truly gifted children might even notice that the
bottom end oscillates out of sync with the top end and lags further and further behind as the frequency increases.
Engineers know that many mechanical, electrical, and chemical processes with energy-storing components behave the same
way. A step change at the input end causes decaying oscillations at the output. A sinusoidal input, on the other hand, causes a
sinusoidal output with an amplitude and a lag time that depend on the frequency of the oscillations. Not coincidentally, these two
phenomena are related and form the basis of the frequency domain techniques that are fundamental to the analysis of feed-
back controllers.
There are basically two ways to analyze the behavior of a process in the frequency domain. The direct method is to drive it with
a series of sinusoidal inputs, each with the same amplitude but different frequency. The amplitude and lag time of the sinusoidal
outputs that result in each case can be plotted against frequency to produce a Bode plot for the process. The sample Bode plot
in the figure shows how high the bottom end of the spring will bounce and how much it will lag the top end when the top end is
set oscillating at various frequencies.
The second frequency domain analysis method uses Fourier's Theorem to compute the process' Bode plot indirectly. Fourier's
Theorem states that any signal which is not itself a sine wave can be expressed as a sum of sine waves. A step input, for
example, results when a few high-amplitude, low-frequency sine waves are added to a larger collection of low-amplitude, high-
frequency sine waves. Fourier's Theorem also gives a formula for computing the amplitude of each component sine wave plus
its lag time relative to the lowest frequency component. Note that plotting the amplitude and lag time for each component against its frequency yields a Bode plot of the signal just like the Bode plot for an entire process.
In fact, the Bode plot for a process can be derived from the Bode plots of its input and output signals. Simply divide each
amplitude in the output's Bode plot by the corresponding amplitude in the input's Bode plot. The resulting quotient is the amplitude for the process' Bode plot at that frequency. To get the lag times for the process' Bode plot, simply subtract each input lag time from the corresponding output lag time.
Conversely, multiplying the amplitudes of an arbitrary input signal with the amplitudes of the process will give the amplitudes
resulting in the output when that input is actually applied to that process. The output's lag times are computed by adding the lag
times of the input to the lag times of the process. With this mathematical "trick," control engineers can predict the effects of a
controller's actions on any process with a known Bode plot.
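The direct method can be sketched in a few lines: drive a simple first-order process model with equal-amplitude sinusoids at several frequencies and record the output amplitude and phase lag at each one. The process model, time constant, and test frequencies below are hypothetical; collecting many such points and plotting them against frequency yields the Bode plot.

import numpy as np

def frequency_response_point(freq_hz, time_constant=2.0, cycles=20):
    """Drive a first-order lag with a unit sinusoid; return output amplitude and phase (deg)."""
    dt = 1.0 / (freq_hz * 1000.0)                   # 1,000 samples per input cycle
    t = np.arange(0.0, cycles / freq_hz, dt)
    u = np.sin(2 * np.pi * freq_hz * t)             # unit-amplitude input
    y = np.zeros_like(u)
    for k in range(1, len(t)):                      # simple Euler model of the process
        y[k] = y[k - 1] + (u[k - 1] - y[k - 1]) * dt / time_constant
    tail = t > (cycles - 5) / freq_hz               # keep the last 5 cycles (steady state)
    amplitude = y[tail].max()
    phase = np.degrees(np.arctan2(np.sum(y[tail] * np.cos(2 * np.pi * freq_hz * t[tail])),
                                  np.sum(y[tail] * np.sin(2 * np.pi * freq_hz * t[tail]))))
    return amplitude, phase

for f in (0.01, 0.1, 1.0):                          # hypothetical test frequencies, Hz
    amp, lag = frequency_response_point(f)
    print(f"{f:5.2f} Hz: amplitude {amp:.3f}, phase {lag:6.1f} deg")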
Every electrical circuit has noise in it. Noise becomes undesirable when the signal-to-noise ratio becomes low enough to
adversely affect the operation of the electrical circuit.
Electrical or electromechanical devices that cause fast or large changes in voltage or current are common sources of noise.
Radio frequency noise can come from walkie-talkies, wireless computer systems, and other radio based systems. Typical noise
sources include lightning, power-line switching, switching inductive loads, arcs, fluorescent lights, welding machines, inadequate
separation of conductors of different levels, static discharge, harmonics, and ground loops. Noise can appear in both power and
signal (control) lines.
Noise is often described by how it is coupled into a circuit. Five basic types of couplings are capacitive, inductive, radio
frequency, common impedance, and conducted.
Electromagnetic interference (EMI) or radiated noise is coupled into a circuit depending on how close the source is to the
receiver. In general, if the receiver is less than about one-sixth of a wavelength from the radiating source, the noise coupling
mechanism will be dominated by either capacitive or inductive effects, but if it is greater than one-sixth wavelength, the radiated
noise is a plane wave and the coupling will be by radio frequency effects. This is commonly called radio-frequency interference
(RFI).
Capacitive or electrostatic noise is coupled into a circuit via a capacitive effect and is voltage-based. A voltage difference
between two conductors separated by air or other insulating materials creates a capacitor through which noise can be coupled.
Inductive or magnetic-coupled noise reaches a circuit via an inductive effect and is current-based. Current flowing through one
circuit induces noise current in another circuit. The portion of the circuit into which the inductive noise is coupled can be viewed
as a single loop or coil being inductively coupled by a noise coil (circuit).
The complex coupling mechanism of RFI is based on reflection, absorption, and antenna effects. The effectiveness of coupling
RFI into a system is a function of the radiating source, its strength, the characteristics of the transmission path, the distance
involved, and the receiver's sensitivity.
Common impedance coupling occurs when different circuits share common wires (impedances). Shared ground wires and long
common neutrals or return paths can cause common impedance coupling.
Conducted noise is coupled into a circuit via transmission of noise by wires or other conducting materials. Sooner or later, all
other coupled noise becomes conducted noise.
Conducted noise is generally defined as two types, normal mode and common mode.
Normal mode noise is defined between the conducting wires and the circuit reference and common mode noise is defined
between the circuit conductors and ground.
The different types of noise coupling are illustrated. Two good references in the area of electrical noise are "IEEE 518--IEEE
Guide for the Installation of Electrical Equipment to Minimize Electrical Noise Inputs to Controllers from External Sources" and
"Noise Reduction Techniques in Electronic Systems," by Henry W. Ott.
SPC and SQC provide the big picture about processing performance
Jeff Cawley, Northwest Analytical
and Dave Harrold, CONTROL ENGINEERING
Statistical Process Control (SPC) and Statistical Quality Control (SQC) methodology is one of the most important analytical
developments available to manufacturing in this century.
SPC has come to be known as an on-line tool providing close-up views of what's happening to a process at the moment.
SQC provides off-line tools to support analysis and decision making to help determine if a process is stable and predictable from
shift to shift, day in and day out, and from supplier to supplier.
When SPC and SQC tools work together, users see the current and long-term picture about processing performance:
• SPC provides on-line tools that permit a close-up view of what's currently happening to a process.
• SQC provides off-line tools to support analysis and decision making about process stability over time.
Before SPC, products were inspected after they were completed and defective products were discarded or reworked. The idea behind continuous improvement is to focus on designing, building, and controlling a process that makes the product correctly the first time (See CE, Jan. '99, p.62).
Key to improving a process is removing as much variation as possible. When products and services are delivered with minimal
variation, customer requirements and expectations are met.
Manufacturers applying SPC and SQC techniques rely on a variety of methods, charts, and graphs to measure, record, and
analyze processes to reduce variations (See CE, March '99, p.87). In general, processes achieving the most benefit from SPC
and SQC are products with:
• Highly repetitive manufacturing processes;
• High-volume production and low margins; or
• Narrow tolerances.
To make SPC and SQC work, key parameters indicating product variations are measured and recorded. For example, key
parameters for a roll of cloth could include shrinkage, color, strength, and flaws per yard (meter).
SQC methods are used to analyze recorded data and establish which variations are a natural part of the process and which are
unusual variations caused by external factors, such as variations in raw materials.
Control charts are a fundamental tool of SPC and SQC and provide visual representation of how a process varies over time or
from unit to unit.
Control limits statistically separate natural variations from unusual variations. Points falling outside the control limits are
considered out-of-control and indicate an unusual source of variation.
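A minimal sketch of how such limits are often computed for an individuals chart: the center line at the mean of a baseline sample, limits at plus/minus three standard deviations, and new points checked against them. Formal SPC practice usually estimates sigma from the average moving range; the simpler sample standard deviation and the data below are assumptions for illustration.

import statistics

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.3, 9.7, 10.0, 10.1, 9.9]   # in-control history
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma        # upper/lower control limits

new_points = [10.1, 9.8, 11.2]
flagged = [x for x in new_points if not (lcl <= x <= ucl)]
print(f"center {center:.2f}, UCL {ucl:.2f}, LCL {lcl:.2f}, out-of-control points: {flagged}")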
Performance improvements of personal computer hardware and software permit real-time data collection, number crunching,
and graphing. Providing SPC information in real-time allows operators to make adjustments, or schedule maintenance on an as-
needed basis.
SPC and SQC are an effective part of continuously improving a manufacturing process. When measurements are accurately
collected and analyzed, improvements are identified and implemented, and controls established to ensure improvements are
permanent, a process is well on its way to meeting quality requirements.
Arguably the trickiest problem to overcome with a feedback controller is process deadtime--the delay between the application of
a control effort and its first effect on the process variable. During that interval, the process does not respond to the controller's
activity at all, and any attempt to manipulate the process variable before the deadtime has elapsed inevitably fails.
Deadtime occurs in many different control applications, generally as a result of material being transported from the site of the
actuator to another location where the sensor measures the process variable. Not until the material has reached the sensor can
any changes caused by the actuator be detected. If the controller expects a result any sooner, it will determine that its last
control effort had no effect and will continue to apply ever larger corrections until the process variable begins to change in the
desired direction. By that time, however, it will be too late. The controller will have already overcompensated for the error that it
was trying to correct, perhaps to the point of causing an even larger error in the opposite direction.
In 1957, Otto Smith, a Professor at the University of California-Berkeley, determined that this overcompensation problem could
be eliminated if the controller could see an immediate response to its efforts. Without actually eliminating deadtime, Smith
created a model-based control strategy that allows the controller to predict the future effect of its present efforts and react
immediately to those predictions.
Mr. Smith's strategy is shown in the figure. It consists of an ordinary feedback loop plus an inner loop that introduces two extra
terms directly into the feedback path. The first term is an estimate of what the process variable would look like in the absence of
any disturbances. It is generated by running the controller output through a process model that intentionally ignores the effects
of load disturbances. If the model is otherwise accurate in representing the behavior of the process, its output will be a
disturbance-free version of the actual process variable.
The mathematical model used to generate the disturbance-free process variable consists of two elements hooked up in series.
The first element represents all of the process behavior not attributable to deadtime. The second element represents nothing but
the deadtime. The deadtime-free element is generally implemented as an ordinary differential or difference equation that
includes estimates of all the process gains and time constants. The second element of the model is simply a time delay. The
signal that goes in to it comes out delayed, but otherwise unchanged.
The second term that Smith's strategy introduces into the feedback path is an estimate of what the process variable would look
like without disturbances and deadtime. It is generated by running the controller output through the first element of the process
model (the gains and time constants), but not through the time delay element. It thus predicts what the disturbance-free process
variable will be once the deadtime has elapsed.
Subtracting the disturbance-free process variable from the actual process variable yields an estimate of the disturbances. By
adding this difference to the predicted process variable, Smith's strategy creates a feedback variable that includes the
disturbances, but not the deadtime. The deadtime is essentially moved outside of the loop. There is still a delay between the
application of the controller's efforts and its first effects on the process variable, but the controller need not wait for the deadtime
to elapse before determining what the next control effort should be.
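To make the structure concrete, here is a minimal discrete-time sketch of the strategy described above, assuming a first-order process model with a pure transport delay and a simple PI controller. The gain, time constant, deadtime, and tuning values are hypothetical, and the "real" process is simulated with the model itself so the disturbance estimate stays at zero.

```python
# Smith-predictor sketch: first-order-plus-deadtime model, PI controller.
# All numeric values (Kp_m, tau_m, d_m, Kc, Ti) are illustrative assumptions.
from collections import deque

dt = 1.0                                # sample period
Kp_m, tau_m, d_m = 2.0, 10.0, 5         # model gain, time constant, deadtime (samples)
a = dt / (tau_m + dt)                   # discrete first-order filter coefficient

Kc, Ti = 0.4, 10.0                      # PI tuning
setpoint = 1.0

y = 0.0                                 # actual process variable (simulated here)
y_model = 0.0                           # deadtime-free model output (gains/time constants only)
delay_line = deque([0.0] * d_m, maxlen=d_m)   # pure time-delay element of the model
integral = 0.0

for k in range(60):
    # Delayed model output: the disturbance-free estimate of the current PV.
    y_model_delayed = delay_line[0]

    # Feedback variable = predicted deadtime-free PV + estimated disturbances.
    disturbance_est = y - y_model_delayed
    feedback = y_model + disturbance_est

    # Ordinary PI acting on the reconstructed feedback signal.
    error = setpoint - feedback
    integral += error * dt / Ti
    u = Kc * (error + integral)

    # Update the two model elements: dynamics first, then the pure delay.
    y_model += a * (Kp_m * u - y_model)
    delay_line.append(y_model)

    # "Real" process: identical to the model in this sketch.
    y = y_model_delayed

    if k % 10 == 0:
        print(f"k={k:2d}  u={u:6.3f}  y={y:6.3f}")
```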
The control loop is the essence of automation. By measuring some activity in an automated process, a controller decides what
needs to be done next and executes the required operations through a set of actuators. The controller then remeasures the
process to determine if the actuators' actions had the desired effect. The whole routine is then repeated in a continuous loop of
measure, decide, actuate, and repeat.
Common industrial controllers include programmable logic controllers (PLCs), distributed control systems (DCSs), stand-alone
loop controllers, and more recently, personal computers (PCs). Heating coils, robot arms, pumps, other motor types, and
conveyor belts are some of the actuators that a controller can use to operate an automated process.
Discrete control
In discrete control applications, control loops automate the production of individual objects, such as computer chips,
automobiles, and light bulbs. Activities to be controlled generally occur in a step-by-step manner where each step starts only
after its predecessor finishes.
An automatic car wash that produces clean cars from dirty ones is a familiar example of discrete control. When the controller
detects the departure of the previous vehicle, it signals the next one to enter the bay. When that vehicle reaches the stopping
point, the controller displays the STOP NOW sign. When each step of the washing process finishes, the controller starts the
next operation. When all of the required operations have been completed, the controller displays the EXIT NOW sign.
The controller measures the progress of the washing process with a variety of sensors. An electric eye detects the departure of
the previous vehicle. A proximity switch detects the arrival of the next one. Actuators for this automated process include valves
to regulate the water flow through the sprayers, conveyors to position the sprayers, and instruction signs to direct movements of
incoming and outgoing vehicles.
Continuous control
In continuous control applications, the controller and its actuators operate constantly. Continuous control is also commonly
known as process control even though many automated processes are discrete.
A continuous controller measures flow rates, temperatures, pressures, and other continuous variables that can change at any
time. It then decides if those variables are at acceptable levels and uses its actuators to change them if necessary.
Continuous control loops generally cycle through the measure-decide-actuate routine much faster than discrete control loops
do. In fact, most continuous controllers will make a whole series of control decisions before the results of the first one are
completely evident.
Considerable analytical effort is sometimes required to program a continuous controller. Its decision making algorithm has to
consider not only the current activity of the process, but the on-going effects of all of its previous decisions.
The water heater that provides hot water to the car wash is a continuous process subject to continuous control. A thermocouple
measures the water temperature in the tank and signals the controller to turn on the heating coil whenever the actual
temperature drops below a specified level. The tricky part is deciding how long the heating coil should remain activated each
time a temperature drop is detected. If it is shut off too soon, it will just have to be reactivated as the water temperature begins
dropping again. If it is kept on too long, the tank could boil over.
Continuous control loops are especially common in industries where the product flows in a continuous stream--petrochemicals,
foods, pharmaceuticals, pulp and paper, etc. The proportional-integral-derivative (PID) algorithm is the most common method by
which continuous controllers decide what to do next.
There are many different ways to measure the level of products in industrial storage and process vessels. One of the most
commonly used devices is the differential pressure (dp) transmitter. A dp device actually measures the height of material in the
vessel and its density. These two variables, multiplied together, determine the pressure exerted on the diaphragm, which then
can be translated into an indication of level. Dp transmitters are relatively economical and easy to install. This "comfortable" technology is
fairly accurate and dependable when used to measure the level of clean liquids. However, density compensation is required for
accurate measurements. New installations require additional piping and isolation valves that add to initial installation cost.
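The height-times-density relationship is simple to put into numbers. The sketch below converts a differential-pressure reading back into level for a vented tank; the density and pressure values are hypothetical, and a real installation would also account for impulse-line fill fluid and transmitter elevation.

```python
# Hydrostatic level sketch: p = rho * g * h, so h = p / (rho * g).
# The density and pressure reading are illustrative assumptions.
G = 9.81                 # m/s^2

def level_from_dp(dp_pa: float, density_kg_m3: float) -> float:
    """Return liquid level (m) above the high-pressure tap of a vented tank."""
    return dp_pa / (density_kg_m3 * G)

dp_reading = 24_000.0    # Pa, e.g. from a transmitter ranged 0-50 kPa
rho = 980.0              # kg/m^3

print(f"indicated level = {level_from_dp(dp_reading, rho):.2f} m")
# If the actual density drifts to 1,020 kg/m^3, the same pressure reads differently,
# which is why density compensation is required:
print(f"level at 1020 kg/m^3 = {level_from_dp(dp_reading, 1020.0):.2f} m")
```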
One of the simplest devices for measuring level is the float. Floats are classified by the type of position sensor (reed switch,
cable-and-pot, magnetostrictive, and sonic or radar). Advantages to using floats are unlimited tank height, excellent accuracy
(depending on the float type), and comparatively low cost. However, they are intrusive sensors. Additionally, these mechanical
devices are subject to wear, corrosion, mechanical failure, and "getting stuck." Floats are subject to material buildup which can
affect their weight and, therefore, accuracy.
Related to the float principle of level measurement is the displacer. Displacer technology is based on Archimedes' principle.
Although they have fewer moving parts than typical float devices, actual mechanical motion is limited. Displacers are frequently
placed in external "cages," which can affect accuracy if the vessel/cage level is misaligned. Long-span displacement devices
may be very expensive.
Sonic instruments determine level by measuring the length of time it takes for a sound pulse to return to a piezoelectric
transducer after bouncing off the process material. For maximum accuracy, the transmitter must be mounted at the top of the
vessel and positioned so the internal structure of the vessel will not interfere with the signal path. Sonic devices are noncontact
and minimally intrusive. Dust, solvent vapors, foam, surface turbulence, and ambient noise affect accuracy. Elevated process
temperatures can limit application.
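The time-of-flight arithmetic behind both sonic and radar devices is straightforward. The sketch below, with hypothetical numbers, shows the conversion and why the speed of sound (which changes with temperature) matters for ultrasonic units.

```python
# Time-of-flight level sketch: distance = speed * round_trip_time / 2,
# level = tank_height - distance. All numeric values are illustrative.
def level_from_echo(round_trip_s: float, tank_height_m: float, speed_m_s: float) -> float:
    distance_to_surface = speed_m_s * round_trip_s / 2.0
    return tank_height_m - distance_to_surface

TANK_HEIGHT = 6.0          # m, transmitter mounted at the top of the vessel
echo_time = 0.0175         # s, measured round trip

# Speed of sound in air is roughly 343 m/s at 20 degC and about 349 m/s at 30 degC,
# so an uncompensated temperature change shifts the reading.
print(f"level at 20 degC: {level_from_echo(echo_time, TANK_HEIGHT, 343.0):.2f} m")
print(f"level at 30 degC: {level_from_echo(echo_time, TANK_HEIGHT, 349.0):.2f} m")
```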
Radar-based devices beam microwaves at the process material's surface. A portion of that energy is reflected back and
detected by the sensor. Time for the signal's return determines the level. Technologies in use include:
• Frequency Modulated Continuous Wave--FMCW is very accurate, ignores vapors, and is immune to changes in
physical characteristics (except dielectric constant) of process materials. Applications include "still," but not turbulent,
fluids. Cost is quite high in comparison to other technologies ($5,000-$10,000 per point).
• Pulsed Time of Flight--PTOF is lower powered and lower priced. Due to its lower power, its performance can be limited
by the presence of vessel obstructions, agitation, foam, elevated pressure, and low dielectric materials (Dielectric
constant less than 2).
• Time Domain Reflectometry (TDR)--Unlike FMCW and PTOF, TDR is an intrusive measurement that uses a rod or
flexible cable to "channel" the microwave pulse. It can measure normal (low K on top) interface levels in immiscible
fluids. It is low cost, can measure long spans, and provides good performance in lower dielectric materials.
Radio frequency (RF), based on capacitance or admittance, can handle a wide range of process conditions. Process
temperature and pressure are limited only by the material system of the sensing element. Level transmitters of this type sense
the change of electrical impedance that occurs with the change of level on the sensor. RF devices ignore material buildup on
the sensor and work with all types of process material, but it is an intrusive technology.
Teresa Parris, marketing communications manager, and John Roede, application engineer, Drexelbrook Engineering Co., a
supplier of level sensing devices.
Understanding and preventing radio frequency interference
William (Bill) L. Mostia, Jr., P.E., Amoco Corp.
Radio-based devices such as walkie-talkies and pagers have been used in our plants for many years, but recently there has
been an increasing number of radio frequency sources both inside and outside our facilities. This, combined with the greater
clock speed of our microprocessor-based systems and increased use of digital communication systems, complicates the
electromagnetic environment in our facilities.
Our instrumentation systems must be able to function in the presence of high-frequency interference which is commonly
referred to as radio frequency interference (RFI). It is also sometimes referred to as electromagnetic interference (EMI), though
this is somewhat of a misnomer as EMI covers a wider range of frequencies.
RFI can be defined as objectionable high-frequency electromagnetic radiation where the source is further away than the
radiation's wavelength (λ) divided by 2π, i.e., λ/2π.
The area beyond this distance is called the far field and the radiation is called a plane wave. Closer than this distance is the
near field, where electric or magnetic fields dominate the interference coupling mechanism. The effectiveness of coupling RFI
into a system is a function of the radiating source, its strength, the characteristics of the transmission path, the distance
involved, and the sensitivity of the receiver.
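A quick way to apply the λ/2π rule is shown below; the radio frequencies chosen are only examples.

```python
# Near-field / far-field boundary sketch: boundary = wavelength / (2*pi),
# with wavelength = c / f. The example frequencies are illustrative.
import math

C = 3.0e8   # speed of light, m/s

def farfield_boundary_m(freq_hz: float) -> float:
    wavelength = C / freq_hz
    return wavelength / (2.0 * math.pi)

for f in (150e3, 27e6, 450e6, 2.4e9):   # e.g. LF source, CB, UHF walkie-talkie, 2.4 GHz radio
    print(f"{f/1e6:8.3f} MHz -> far field beyond {farfield_boundary_m(f):8.3f} m")
```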
The basic method for getting rid of an RFI source is to remove the radiating mechanism. Conversion of electromechanical
contacts, for example, to solid state would remove the arc-generated RFI. Electrostatic discharge (ESD) generated RFI could be
reduced by removing the charge generating mechanism or by providing a method to bleed off the charge.
Shielding is probably the most common means used to reduce the effects of RFI.
Shielding can work both to prevent RFI from radiating out or to prevent RFI from getting in. The effectiveness of a shield is a
function of the material, the frequency, the angle of incidence, coverage, and the thickness of the material. Metal is commonly
used to shield RFI. Plastic materials used as shields are coated or impregnated with reflective and absorptive materials or have
embedded screens.
Often it is the enclosure openings, seams, and joints that are the limiting factors of the shield's effectiveness. The longest
dimension of any opening should be less than λ/20. Reduction in opening dimensions, screens, coatings, and special gasketing
are some of the methods used to prevent RFI from getting into or out of enclosures. Cables in metal conduit are generally
protected against RFI, but cables in a cable tray may open up windows of exposure. The effectiveness of cable shields such as
aluminum foil, braided, and coaxial is a function of the material, frequency, thickness, and the shield coverage.
Distance can be used to provide separation between RFI generating equipment and the sensitive equipment by reducing the
field strength of the RFI at the receiver.
Administrative controls can also be used to prohibit RFI sources from being operated near sensitive equipment.
Circuit design can also help. Some common techniques used to minimize the effects of RFI include component location,
conductor lengths, and component selection as well as the use of differential inputs, twisted pair cabling, common mode chokes,
and ferrite beads.
RFI will be an increasing concern in the future. A good reference in this area is the book: Noise Reduction Techniques in
Electronic Systems by Henry W. Ott.
RFI Terms
EMI: Electromagnetic interference, electrical interference from electric, magnetic, or plane wave fields.
ESD: Electrostatic discharge, the discharge of free electrons from an insulator to a conductor or to another
insulator.
Far field: The area away from a receiving device farther than the incident radiation's wavelength divided by 2π.
Near field: The area nearer to a receiving device than the incident radiation's wavelength divided by 2π.
Plane wave: Electromagnetic radiation where the source is further away than the radiation's wavelength divided by 2π.
RFI: Radio frequency interference, electromagnetic radiation where the source is further away than the radiation's
wavelength divided by 2π; the frequencies involved generally range from a few hundred kilohertz to the gigahertz region.
Shield: A material placed between unwanted electromagnetic fields and a receiving or transmitting device with
the purpose of minimizing the transmission of the electromagnetic fields past the shield.
Too frequently, instrumentation protection is not considered until the snow is flying and then a problem is discovered. The
summer months are the best time to examine instrumentation winterization problems and take measures to ensure the
instrumentation performs accurately and repeatably when the mercury falls.
When evaluating protection options, passive protection methods are preferred over active methods.
In order of preference, six popular ways of ensuring instruments perform well in harsh winter elements are:
• Locate instruments indoors;
• Use instruments immune to cold weather;
• Install nonfreezing liquid in impulse lines and meters to form a liquid seal;
• Apply insulation and/or heat trace to impulse lines and meters to keep contents above freezing;
• Purge impulse lines with a dry gas to keep liquids or vapors out; or
• Purge impulse lines with a nonfreezing liquid to keep process liquids out.
If outside, enclose
Every effort should be made to locate instruments indoors or in instrument housings. Even if the enclosure is not heated,
instrument reliability and life will be extended if the enclosure takes the beating from snow, rain, hail, and falling ice.
Manufacturers constantly improve product performance and broaden environmental limits that affect instrument performance.
Paying a premium for an instrument that does not require winter protection may actually be a bargain when all factors are
considered.
Liquid seals come in two forms, open and closed. Open liquid seals most often use seal pots to form an interface between the
process media and the nonfreezing liquid in the impulse lines and meter bodies.
Closed, or integral, seals use a diaphragm and flexible liquid-filled capillary system. Process pressure changes cause slight
deflections on the diaphragm. The capillary system hydraulically transmits the change to the instrument.
When liquid seals won’t work, the use of insulation and/or heat trace is required.
If the process media is heated, mounting the instrument very close to the process piping or vessel and insulating everything may
provide adequate protection.
Heat trace may be electric, steam, salt solution, glycol, or other media. Any of these choices increase operating cost, and
require periodic maintenance. Unless a temperature controller is used, it's important to remember to turn continuously operating
heat tracing off when the weather warms and on again when it gets cold.
A variety of reasons make purges the least desirable protection. Adding hardware, piping, and proximity of the purge media
creates a miniature process. Also, unless purge flow rates are tightly maintained, accuracy and repeatability suffer. Purges have
been successful in measuring liquid level changes in open vessels, but opportunities for this application are limited.
First, many flow meters have two impulse lines. When heat tracing is used, both lines must be equally protected.
Avoid splitting a single steam trace line into two lines, one for each impulse line, and then rejoining the lines ahead of
the steam trap. Steam follows the path of least resistance. The line offering the most resistance will stop flowing, the
steam will condense, freeze, and possibly rupture the tracer line. It’s okay to split the lines, but provide each line its
own steam trap.
Secondly, when using an instrument housing, especially “home built housings,” avoid temptations to mount the
instrument on the same metal pedestal as is used to mount the instrument housing. Thermodynamics applied to the
pedestal will cause heat, supplied to protect the instrument, to migrate toward the cold end of the pedestal. The
instrument housing should be mounted on top of a pedestal and a separate instrument-mounting stand should be
located inside the housing.
Finally, when using finned heaters to warm instrument housings, small buildings, etc., the fins should be in a vertical
position. Finned heaters rely on convection to move heat away from the heater. When fins are horizontal, air cannot
flow between the fins, and little or no heat is transferred.
Tuning a PID controller is conceptually simple--observe the behavior of the controlled process and fine tune the controller's
proportional (P), integral (I), and derivative (D) parameters until the closed-loop system performs as desired. However, PID
tuning is often more of an art than a science. The best choice of tuning parameters depends upon a variety of factors including
the dynamic behavior of the controlled process, the controller's objectives, and the operator's understanding of the tuning
procedures.
Self-tuning PID controllers simplify matters by executing the necessary tuning procedures automatically. Most observe the
process' reaction to a disturbance and set their tuning parameters accordingly. However, no two go about accomplishing those
tasks in the same way.
"Heuristic" self-tuners, for example, attempt to duplicate the decision-making process of an experienced operator. They adjust
their tuning parameters according to a series of expert tuning rules such as "IF the controller overreacts to an abrupt
disturbance THEN lower the derivative parameter."
Model-based approach
A more common approach to automatic parameter selection, however, involves a mathematical "model" of the process--an
equation that relates the present value of the process output to a history of previous outputs and previous inputs applied by the
controller. If the model is accurate, the controller can predict the future effect of its present efforts and tune itself accordingly.
For example, a process that reacts sluggishly to a step input can be modeled with an equation that gives the current output as a
weighted sum of the most recent output and the most recent input. A self-tuner can choose the weights in that sum to fit the
model to the observed process behavior. With the model in hand, the self-tuner can go on to determine how much proportional,
integral, and derivative action the process can tolerate. In the case of a sluggish process, the model will show that the controller
is free to apply aggressive control efforts. The self-tuner will then set the P, I, and D parameters to relatively high values.
Exactly how high or low the tuning parameters should be set depends on the performance objectives specified by the operator.
If, for example, the settling time is to be limited to some maximum value, the required tuning parameters can be determined by
analyzing the time constant and the deadtime of the process model. On the other hand, if excessive overshoot is the operator's
principal concern, the controller can be configured to select tuning parameters that will limit the rate of change of the process
variable.
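The model-based idea can be sketched in a few lines: fit a simple discrete model y[k] = a·y[k-1] + b·u[k-1] to logged input/output data by least squares, convert the fitted coefficients to a process gain and time constant, and derive PI settings from them. The data, the sample time, and the lambda-style tuning rule used here are assumptions for illustration only, not the method of any particular commercial self-tuner.

```python
# Model-based self-tuning sketch:
#  1) fit y[k] = a*y[k-1] + b*u[k-1] to observed data (least squares),
#  2) convert (a, b) to a process gain and time constant,
#  3) choose PI settings with a lambda (IMC-style) rule.
# Logged data, sample time, and the tuning rule are illustrative assumptions.
import math

dt = 1.0
# Hypothetical logged step response of a sluggish process (u steps 0 -> 1 at k = 1).
u = [0.0] + [1.0] * 19
y = [0.0]
for k in range(1, 20):
    y.append(0.9 * y[k - 1] + 0.2 * u[k - 1])   # "true" process used only to fake the log

# Least-squares fit of a and b from the logged data (2x2 normal equations).
saa = sab = sbb = say = sby = 0.0
for k in range(1, len(y)):
    saa += y[k - 1] * y[k - 1]
    sab += y[k - 1] * u[k - 1]
    sbb += u[k - 1] * u[k - 1]
    say += y[k - 1] * y[k]
    sby += u[k - 1] * y[k]
det = saa * sbb - sab * sab
a = (say * sbb - sby * sab) / det
b = (sby * saa - say * sab) / det

# Equivalent continuous-time gain and time constant of the fitted model.
process_gain = b / (1.0 - a)
time_constant = -dt / math.log(a)

# Lambda tuning: pick a desired closed-loop time constant (smaller = more aggressive).
lam = time_constant
Kc = time_constant / (process_gain * lam)
Ti = time_constant

print(f"fitted a={a:.3f}, b={b:.3f} -> K={process_gain:.2f}, tau={time_constant:.1f}")
print(f"PI settings: Kc={Kc:.3f}, Ti={Ti:.1f}")
```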
Self-tuning controllers also differ in their data collection techniques. Some apply a series of artificial disturbances to the process
in order to observe how it behaves. Others make do with data collected during normal loop operations. The latter approach
limits the waste and inconvenience caused by intentionally disturbing the process, but generally produces much less useful
information about the process' behavior.
Which of these many variations is appropriate for a given application of self-tuning control is up to the operator. A single
universally applicable technique has yet to be developed.
Control Engineering September 1997
A touchscreen is a computer input device that enables users to make a selection by touching the screen, rather than typing on a
keyboard or pointing with a mouse. Computers with touchscreens have a smaller footprint, can be mounted in smaller spaces,
have fewer movable parts, and can be sealed. Touching a screen is more intuitive than using a keyboard or mouse, which
translates into lower training costs.
3 components in common
All touchscreen systems have three components. To process a user's selection, a sensor unit and a controller sense the touch
and its location, and a software device driver transmits the touch coordinates to the computer's operating system. Touchscreen
sensors use one of five technologies: resistive, capacitive, infrared, acoustic wave, or near field imaging.
Resistive touchscreens typically include a flexible top sheet and a glass base separated by insulating dots. Each layer is coated
with a transparent metal oxide on its inside surface. Voltage applied to the layers produces a gradient across each. Pressing the
top sheet creates electric contact between resistive layers, essentially closing a switch in the circuit.
Capacitive touchscreens are also coated with a transparent metal oxide, but the coating is bonded to the surface of a single
sheet of glass. Unlike resistive touchscreens, where any object can create a touch, capacitive touchscreens require contact with
a bare finger or conductive stylus. The finger's capacitance, or ability to store an electric charge, draws some current from each
corner of the touchscreen, where voltage has been applied.
Infrared touchscreens are based on light-beam interruption technology. Instead of placing a layer on the display surface, a
frame surrounds it. The frame has light sources, or light-emitting diodes (LEDs), on one side, and light detectors, or
photosensors, on the opposite side, creating an optical grid across the screen. When any object touches the screen, the
invisible light beam is interrupted, causing a drop in the signal received by the photosensors.
Acoustic wave touchscreens use transducers mounted at the edge of a glass screen to emit ultrasonic sound waves along two
sides. The ultrasonic waves are reflected across the screen and received by sensors. When a finger or other soft-tipped stylus
touches the screen, the sound energy is absorbed, causing the wave signal to weaken. In surface acoustic wave (SAW)
technology, waves travel across the surface of the glass, while in guided acoustic wave (GAW) technology, waves also travel
through the glass.
Near field imaging (NFI) touchscreens consist of two laminated glass sheets with a patterned coating of transparent metal oxide
in between. An ac signal is applied to the patterned conductive coating, creating an electrostatic field on the surface of the
screen.
When a finger--gloved or ungloved--or other conductive stylus comes into contact with the sensor, the electrostatic field is
disturbed.
Elizabeth Morse is communication coordinator at Dynapro (Vancouver, British Columbia, Canada), a hardware, software and
touchscreen manufacturer.
The two most common ways of measuring industrial temperatures are with resistance temperature detectors (RTDs) and
thermocouples. But when should control engineers use a thermocouple and when should they use an RTD? The answer is
usually determined by four factors: temperature, time, size, and overall accuracy requirements.
• What are the temperature requirements? If process temperatures fall within -328 to 932 °F (-200 to 500 °C), then an
industrial RTD is an option. But for extremely high temperatures, a thermocouple may be the only choice.
• What are the time-response requirements? If the process requires a very fast response to temperature changes--
fractions of a second as opposed to seconds (i.e. 2.5 to 10 sec)--then a thermocouple is the best choice. Keep in mind
that time response is measured by immersing the sensor in water moving at 3 ft/sec and noting the time to reach 63.2% of a step change.
• What are the size requirements? A standard RTD sheath is 0.125 to 0.25 in. dia., while sheath diameters for
thermocouples can be less than 0.062 in.
• What are the overall requirements for accuracy? If the process only requires a tolerance of 2°C or greater, then a
thermocouple is appropriate. If the process needs less than 2°C tolerance, then an RTD is the only choice. Keep in
mind, unlike RTDs that can maintain stability for many years, thermocouples can drift within the first few hours of use.
Although not a technical point, price may be another consideration. An average thermocouple costs approximately $35, while
an average RTD costs $55. Cost of extension wire must also be considered. Thermocouples require the same type of extension
wire material as the thermocouple, which can cost up to $1 per ft. Standard nickel-plated, teflon-coated RTD wire averages
pennies per ft.
Quick selection guidelines
RTDs:
• Offer stable output within broad temperature ranges;
• Can be recalibrated for verifiable accuracy;
• Are stable over the long term;
• Follow a more linear curve than thermocouples;
• Have high sensitivity; and
• Provide accurate readings over narrow temperature spans.
RTD basics
Once parameters are defined, the type of RTD or thermocouple is chosen. RTDs provide a resistance vs. temperature output
and are passive devices, needing no more than 1.0 mA to run. The most common RTD is a 100 ohm platinum sensor with an
alpha coefficient of 0.00385 ohm/ohm/°C. It can be ordered as DIN A or DIN B, which specifies the initial accuracy at 0 °C (ice
point) and the interchangeability over the operating range. IEC 751 states that the DIN A tolerance is ±(0.15 + 0.002·|t|) °C and
the DIN B tolerance is ±(0.30 + 0.005·|t|) °C, where t is the measured temperature.
RTDs can also be constructed from nickel, copper, or nickel/iron. Each metal
has a different alpha coefficient and operating range. An RTD's alpha
coefficient must be matched to its instrumentation or an error of several
degrees can occur.
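For reference, the resistance-to-temperature arithmetic for the common 100-ohm platinum sensor is easy to sketch. The example below uses the simplified linear alpha relation (adequate only as an approximation near and above 0 °C) together with the IEC 751 class tolerances quoted above; precise work would use the full Callendar-Van Dusen equation, and the measured resistance shown is hypothetical.

```python
# Pt100 sketch: convert resistance to temperature with the 0.00385 alpha coefficient
# (linear approximation) and compute the IEC 751 class tolerances.
R0 = 100.0          # ohms at 0 degC
ALPHA = 0.00385     # ohm/ohm/degC

def temperature_from_resistance(r_ohms: float) -> float:
    return (r_ohms / R0 - 1.0) / ALPHA

def tolerance_degc(t: float, din_class: str = "A") -> float:
    if din_class == "A":
        return 0.15 + 0.002 * abs(t)
    return 0.30 + 0.005 * abs(t)      # DIN/class B

r = 119.4                              # hypothetical measured resistance, ohms
t = temperature_from_resistance(r)
print(f"{r:.1f} ohm -> {t:.1f} degC "
      f"(class A +/- {tolerance_degc(t, 'A'):.2f} degC, "
      f"class B +/- {tolerance_degc(t, 'B'):.2f} degC)")
```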
About thermocouples
Standard limits of error and special limits of error must also be considered. These values relate to the purity of the wire used to
manufacture the thermocouple. For very little additional cost, thermocouple specifiers can often improve accuracies greatly
(100% or greater).
Specifying the correct thermocouple or RTD for an unconventional application may be a difficult task. Many manufacturers of
RTDs and thermocouples offer applications engineering support to help customers select the right combination of temperature
measurement equipment.
Worldwide popularity of ac induction motors in numerous applications has led to some standardized motor designs.
Concentration on a finite number of motor types also brings design and manufacturing efficiencies, while helping to achieve
attractive pricing.
The National Electrical Manufacturers Association (NEMA, Washington, D.C.) has developed specifications for so-called NEMA
design A, B, C, and D motor types. These designs are based on standardizing certain motor characteristics such as starting
current, slip, and specified torque points (see below). Here's a brief rundown on NEMA motor types:
• Design A has normal starting torque (typically 150-170% of rated) and relatively high starting current. Breakdown
torque is the highest of all NEMA types. It can handle heavy overloads for a short duration. Slip <=5%. A typical
application is powering of injection-molding machines.
• Design B is the most numerous type of ac induction motor sold. It has normal starting torque, similar to Design A, but
offers low starting current. Locked rotor torque is good enough to start many loads encountered in industrial
applications. Slip <=5%. Motor efficiency and full load power factor are comparatively high, contributing to the
popularity of the design. Typical applications include pumps, fans, and machine tools.
• Design C has high starting torque (greater than previous two designs, say 200%), useful for driving heavy breakaway
loads. These motors are intended for operation near full speed without great overloads. Starting current is low. Slip
<=5%.
• Design D has high starting torque (highest of all the NEMA motor types). Starting current and full-load speed are low.
High slip values (5-13%) make this motor suitable for applications with changing loads and attendant sharp changes in
motor speed, such as in machinery with flywheel energy storage. Several design subclasses cover the rather wide slip
range. This motor type is usually considered a "special order" item.
CE, July 1998, p. 94 provides an illustration of the comparative torque-speed characteristics of these motors. The diagram also
shows graphically the following torque-speed points important to induction motor specifications. Locked-rotor torque (starting
torque) refers to minimum torque generated with the rotor at rest, and rated voltage and frequency applied. Breakdown torque is
maximum torque generated before an abrupt drop in motor speed occurs as rated speed is approached (at rated voltage and
frequency). Pull-up torque is the minimum torque generated over the motor's speed range from rest to the speed point where
breakdown torque is developed.
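Because slip helps distinguish the NEMA designs, the small sketch below shows how slip is computed from synchronous and actual shaft speeds; the example speeds are hypothetical.

```python
# Slip sketch: synchronous speed = 120 * line frequency / number of poles,
# slip (%) = (synchronous - actual) / synchronous * 100. Example values are illustrative.
def synchronous_rpm(freq_hz: float, poles: int) -> float:
    return 120.0 * freq_hz / poles

def slip_percent(sync_rpm: float, actual_rpm: float) -> float:
    return (sync_rpm - actual_rpm) / sync_rpm * 100.0

sync = synchronous_rpm(60.0, 4)         # 1,800 rpm for a 4-pole motor on 60 Hz
for rpm in (1750, 1725, 1600):          # e.g. low-slip Design B vs. a high-slip Design D
    print(f"{rpm} rpm -> slip {slip_percent(sync, rpm):.1f}%")
```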
Motors manufactured for European and international markets conform to a different set of designs and specifications. The
International Electrotechnical Commission (IEC, Geneva, Switzerland) defines these design characteristics. One type of IEC
induction motor is called Design N. This motor type has operating characteristics comparable to NEMA Design A and B motors.
AC induction motors are manufactured with a variety of protective housings to suit specific applications. Besides the enclosure
types mentioned in the "Back to Basics" article, induction motors are available in still other housing varieties; for example,
chemical duty, washdown, and explosion-proof types.
‘Encoding" or converting angular position into electronic signals is the mission of rotary encoders. Ways to detect motion include
mechanical (via brush contacts) or magnetic/inductive methods, but noncontact optical encoders comprise the most common
feedback device used in industrial motion control.
Besides motor shaft position, rotary encoders also provide information for speed feedback, direction of rotation (see diagram),
and electronic commutation in brushless servo systems. Position feedback is not limited to the motor. For example, an encoder
can indicate valve position or sometimes be directly mounted to a rotary load.
All optical encoders work on the same basic principle. Light from an LED or other light source is passed through a stationary
patterned mask onto a rotating code disk that contains code patterns (see below). The disk is the heart of the device.
Photodetectors scan the disk and an electronic circuit processes the information into digital form as output to counters and
controllers.
Incremental or absolute
Two basic types of optical encoders exist—incremental and absolute position. Incremental encoders are the simpler devices.
Their output is a series of square wave pulses generated as the code disk, with evenly spaced opaque radial lines on its
surface, rotates past the light source. Number of lines on the disk determines the encoder’s resolution.
The simplest incremental encoder, called a tachometer, has one square wave output and is often used in unidirectional
applications in need of basic position or speed information. The more useful quadrature encoder has two output square waves
(Channels A and B), plus a reference pulse (Channel Z) generated as a "home" marker once each revolution (not shown in
diagram).
A two-channel (quadrature) incremental encoder can sense direction of rotation as well as angular position. The signals’ phase
relationship, offset by 90 electrical degrees, is related to direction—clockwise if Channel A leads Channel B, and vice versa.
Incremental encoders provide only relative position; in case of power failure, the position count is lost. The Channel Z marker
comes into play upon a restart to establish the home position, so that the new pulse count can begin.
Absolute position encoders are more complex and capable than their incremental cousins. They provide a unique output for
every position. Their code disk consists of multiple concentric "tracks" of opaque and clear segments. Each track is independent
with its own photodetector to simultaneously read a unique position. The number of tracks corresponds to the binary "bit"
resolution of the encoder. That is, a 12-bit absolute encoder has 12 tracks.
Also, the absolute encoder’s nonvolatile memory retains the exact position without need to return to "home" position if power
fails. This is useful in remote applications where equipment runs infrequently with power turned off between uses.
Most rotary encoders are single-turn devices, but absolute multiturn units are available, which obtain feedback over several
revolutions by adding extra code disks. The additional disk stages are geared to the main disk and have their own
photodetectors.
Choice of incremental or absolute encoder is very application dependent, but price is a factor. Incremental encoders are less
costly than "absolutes," but generally offer fewer functions and lower resolution. They’re also more susceptible to electrical
noise.
Some techniques can improve resolution and noise immunity. A controller counting the leading and trailing edges of Channel A
allows the quadrature detection method to multiply basic encoder resolution by 2X. If this is done for both Channels A and B, 4X
basic resolution is obtained. However, system bandwidth limits must be kept in mind.
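The 4X counting described above can be captured in a small state machine. The sketch below assumes ideal (debounced) A and B signals, updates a position count on every edge, and infers direction from the A/B phase relationship; the 1,000-line disk and the sign convention (A leading B counted as positive) are illustrative assumptions.

```python
# Quadrature (4X) decoding sketch: count every edge on channels A and B and use the
# state transition to determine direction. Assumes clean signals; real decoders also
# filter noise and handle the Channel Z home pulse.

# Map (previous AB state, new AB state) -> increment. With A leading B (counted as
# positive here), the state sequence is 00 -> 10 -> 11 -> 01 -> 00 ...
STEP = {
    (0b00, 0b10): +1, (0b10, 0b11): +1, (0b11, 0b01): +1, (0b01, 0b00): +1,
    (0b00, 0b01): -1, (0b01, 0b11): -1, (0b11, 0b10): -1, (0b10, 0b00): -1,
}

def decode(samples, lines_per_rev=1000):
    """samples: iterable of (A, B) logic levels. Returns (count, degrees)."""
    count = 0
    prev = None
    for a, b in samples:
        state = (a << 1) | b
        if prev is not None and state != prev:
            count += STEP.get((prev, state), 0)   # invalid two-bit jumps are ignored
        prev = state
    degrees = count * 360.0 / (4 * lines_per_rev)  # 4 counts per line in 4X mode
    return count, degrees

# One full quadrature cycle with A leading B by 90 electrical degrees.
cycle = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(decode(cycle))   # 4 counts = one encoder line
```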
Open-collector is one type of output circuit from an incremental encoder. The user must specify a pull-up resistor for this
electronic interface to work properly with external controls. Another basic output circuit, the differential line driver, is used with
long cable lengths and in electronically noisy environments.
When electrical noise corrupts the complementary (differential) signals generated with this output, they take on an invalid
form that the receiving circuitry can detect, so the corrupted information can be rejected.
Absolute encoders work with various serial outputs that can be converted to parallel and fieldbus formats. Rotary encoders offer
a rich variety of sizes, features, and capabilities. Models with speeds up to 30,000 rpm and operating temperatures well above
100 °C are available. For more information and product examples see July Online issue under this topic at
www.controleng.com.
Instrumentation for pressure, temperature, level, and flow equipment can play a key role in holding the ‘cork’ in and keeping the
process safe.
Many industries must routinely deal with materials that explode, burn, or explode and burn. Actually,
explosions in gases, vapors, or dusts are not detonations but very rapid burning of the media, best described as deflagrations.
Handling "media from hell" often requires many specialized equipment and disciplines (everything from mechanical system
design to satisfying numerous industry associations and regulatory agencies, such as API, AGA, OSHA, the USEPA, UL, FM,
etc., to name a few).
All processes that make "stuff" require instrumentation to monitor the media as it undergoes chemical change or tracks
conditions during ingredient mixing or blending. Thus, pressure, temperature, level, flow, and other specialty instruments must
come in contact with some very combustible and highly explosive materials, "genies" that are a far cry from Major Nelson’s
mythological, impossible-to-understand friend.
Although the type of media present in the processing plant determines the amount of "bang for the buck" in potentially explosive
applications (See Classification of hazardous areas), the hazardous area classification for Class I determines the National
Electrical Code (NEC) protection method that must be used.
Because gases and vapors are always present, Division 1 applications require "heroic" steps to avoid possible disaster. Sensors
in these areas must be protected by one of several methods. These include the use of explosion-proof designs, intrinsically safe
construction, or use of purge or pressurized housings.
On the other hand, Division 2 areas require protection methods often confined to the design of the sensors and their associated
electronics. These include such techniques as encapsulation/hermetic sealing, nonincendive design, nonsparking design, and
use of oil immersion. These basic design considerations simply isolate potential sources of ignition (heat, sparks, or flame) from
explosive or flammable media. However, because Division 2 areas have a very low probability of actually seeing gases and
vapors, these methods have been deemed sufficient for these infrequent and usually short-lived exposures.
Even simple devices, like thermocouples, can be dangerous in some applications. High temperatures can boost insulation
electrical conductivity, leading to high voltages, faults, and potential disaster.
2-wire—bad, 3-wire—good
Even deceptively simple applications, such as two-wire thermocouples used in high-temperature furnaces or kilns can present a
danger to personnel. According to Dennis Hablewitz, senior application engineer at Eurotherm/Barber-Colman’s Loves Park, Ill.,
facility, a type K thermocouple can pick up common-mode noise from high-voltage electric heaters used in these applications,
allowing dangerously high voltages between the thermocouple leads and ground. Stray high voltages like this can cause
arcing—dangerous in flammable situations—and an electrocution hazard to workers.
"It is not uncommon to see as much as 380 volts on either lead. The problem is caused by the fact that both a thermocouple’s
ceramic protection tube and the furnace insulation do not act as insulators at high temperatures (see accompanying figure).
Both start to conduct at approximately 700 °C. Once the elevated temperature defeats their insulating ability, heater load faults
can be conducted through the metal furnace wall to the protection tube. From this point, the heater potential has a straight path
to the measuring instruments—a dangerous, and not uncommon, situation," Mr. Hablewitz says.
"In this case, the situation was resolved by using three-wire thermocouples. The third lead from the measuring junction was tied
to earth ground at the furnace wall," Mr. Hablewitz adds.
Bad gas
Monitoring digester gas flow in the wastewater industry is another example of instrumentation working in a potentially dangerous
situation. Methane gas, a byproduct of digester operation, is classified as Group D. A digester is basically a "cooker" that heats
sludge under pressure to produce a mixture of CO2 and methane. A flowmeter mounted in the 48-in. output line provides an
indication of how well the microorganism-prompted process is working; high flow indicates an efficient reaction. The methane
gas is cleaned and used either internally to power other equipment or sold to cogeneration or independent power producers.
Fluid Components International (FCI, San Marcos, Calif.) supplies flowmeters internationally for these applications, specifically
Model GF90. The sensor uses low wattage heaters and encapsulates the RTDs in a stainless steel thermowell. This
nonincendive design allows the sensing head to be placed directly in the gas flow. According to Glen Fishman, senior application
engineer, "Because the encapsulated sensors cannot be damaged by the media flow, sparks cannot be introduced into the
process. Additionally, low-wattage heaters add to the safety of this design."
Unless process media are completely inert, the potential exists for fire, explosion, corrosion, and/or environmental damage in
case of an alarm situation. Of these disasters, explosion and fire are often the deadliest to plant personnel.
Explosions can be prevented in several ways. One way is by limiting the amount of electrical energy available in hazardous
areas. Controlling electrical parameters such as voltage and current requires the use of energy limiting devices known as
intrinsically safe (IS) barriers. IS barriers limit the levels of power available in the protected circuit. If a spark or excess electrical
heat cannot occur, neither can a fire or explosion. Although used in Europe for many years, intrinsic safety was not adopted as
part of the U.S. National Electrical Code until 1990.
An intrinsically safe circuit contains three components: the target device, IS barrier, and wiring. Devices within the protected
area can be categorized as simple (contacts, resistors, thermocouples, RTDs, etc.) or complex (transmitters, relays, solenoids,
etc.). Complex devices often have complicated circuitry that can store excess electrical energy and are normally certified
"intrinsically safe" by safety testing and certification organizations, such as Underwriters Laboratories (Northbrook, Ill.) or
Factory Mutual (Norwood, Mass.).
Selection of proper IS barriers requires calculation of both the open-circuit voltage and short-circuit current of simple devices.
For complex devices, both allowed capacitance and inductance values must be calculated. Results are then compared to
ignition curves that have been calculated for a wide variety of flammable/explosive media (gases, vapors, airborne dusts or
fibers, etc.) to determine if the available energy is below the amount needed for ignition.
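The comparison of field-device and barrier parameters is often done with the "entity" figures published on approval documents. The sketch below shows the general form of such a check with hypothetical values; it is only an illustration of the idea, not a substitute for the approved calculation procedure or the published ignition curves.

```python
# Intrinsic-safety entity-check sketch: a barrier/field-device pair is acceptable when
# the barrier's output limits do not exceed what the device can safely receive, and the
# total cable plus device capacitance and inductance stay within the barrier's allowances.
# All values below are hypothetical.

def entity_check(barrier: dict, device: dict, cable: dict) -> list:
    problems = []
    if barrier["Voc"] > device["Vmax"]:
        problems.append("barrier open-circuit voltage exceeds device Vmax")
    if barrier["Isc"] > device["Imax"]:
        problems.append("barrier short-circuit current exceeds device Imax")
    if device["Ci"] + cable["C"] > barrier["Ca"]:
        problems.append("total capacitance exceeds barrier allowance Ca")
    if device["Li"] + cable["L"] > barrier["La"]:
        problems.append("total inductance exceeds barrier allowance La")
    return problems

barrier = {"Voc": 28.0, "Isc": 0.093, "Ca": 0.13e-6, "La": 4.2e-3}   # V, A, F, H
device  = {"Vmax": 30.0, "Imax": 0.10, "Ci": 5e-9, "Li": 0.4e-3}
cable   = {"C": 60e-12 * 300, "L": 0.8e-6 * 300}                     # 300 m of typical cable

issues = entity_check(barrier, device, cable)
print("combination acceptable" if not issues else "; ".join(issues))
```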
Explosion-proof enclosures provide a brute-force method of preventing or controlling potentially explosive situations. These
heavy, cast—usually but not always—devices feature sealed and securely fastened access doors. They protect the normal
power level devices within them from coming into contact with an explosive atmosphere. Even under fault conditions, an
explosion or fire usually cannot occur because of limited air for combustion within the sealed container. If an explosion does
occur, the housing is strong enough to contain it.
Although there have been many refinements in explosion-proof enclosure design, the fact remains that they are bulky, can be
difficult to mount because of their weight, and are not the handiest of housings to access. Additionally, seals, gasketing, and
purging systems require inspection and maintenance if their integrity is to be trusted. The fact remains that in industries where
high voltages and currents are routinely encountered and process systems are rarely reconfigured, explosion-proof enclosures
remain a practical method of preventing an industrial tragedy.
Tough stuff
What makes an instrument itself explosion-proof? Depending on device size, any number of design refinements can be
incorporated so the sensor cannot ignite an explosive atmosphere by supplying spark or open flame. According to Charles
Isaac, product manager, pressure and temperature switches,
Barksdale Inc. (Los Angeles), "Smaller instrumentation of all types can be designed to get explosion-proof status. For example,
Barksdale has recently introduced a line of compact pressure switches that are UL-, CSA-, and Cenelec-approved as explosion
proof."
From a design standpoint, smaller devices are often easier to make explosion proof. Housings, gaskets, fasteners, and covers
must retain their integrity in case of operational failure. They must also handle high pressure, shock, and vibration. Covers and
access plates must be tamperproof. Sensors are often hermetically sealed to exclude the surrounding hostile atmosphere from
coming in contact with any source of spark. Additionally, mating surfaces are thoroughly gasketed or permanently sealed to
prevent leakage.
‘Finessing’ it
Keeping instrumentation designs safe cannot always be done mechanically. And it is one thing to keep a malfunctioning
instrument from causing a fire and explosion when it is buried in a thermowell or wrapped in a custom enclosure much like
explosion-proof devices are. However, an instrument that "hangs" on the top of a vessel full of volatile liquid to measure level is
often neither buried or enclosed. An intrinsic safety (IS) rating may be the only way out because IS devices cannot produce a
spark to ignite an explosive atmosphere. See Protecting against tragedy sidebar.
Ametek Drexelbrook (Horsham, Pa.) provides level instruments in explosion-proof housings, and many are designed as
intrinsically safe for hazardous areas. According to Bill Sholette, product support manager, the benefits of intrinsic safety for
level instrumentation are:
• The instrument/transmitter enclosure can be opened in a hazardous area without the danger of an explosion.
• There is no need to "sniff" the area using a handheld monitor prior to opening a protective enclosure.
• There is typically a reduced installation cost because conduit and explosion-proof enclosures are not required.
In the process instrumentation field, applying more than one protection method to the same device is a common practice.
Even though it may seem like "wearing pants with both a belt and suspenders," circuits with intrinsically safe inputs can be
mounted in segregated or explosion-proof enclosures. Generally, mixed systems are not difficult to install if the individual
protection methods are appropriately applied and conform to the relevant standards.
"Many of Ametek Drexelbrook’s level devices are available as both intrinsically safe and explosion proof," Mr. Sholette
continues. "Despite the added cost and installation time, many process industry users specify these types of instruments."
Out of sight doesn’t necessarily mean out of mind. Just because sensors can be remotely mounted does not mean they are out
of harm's way when it comes to being a possible ignition source in a potentially hostile environment. In the case of GE Silicones
(Waterford, N.Y.), moving the source of temperature measurement in a mixing operation brought about another problem.
Silicone powder and other raw materials are blended and heated to a temperature between 100 and 200 °C. The idea was to
remove high-maintenance thermocouples located in the base of the kettle and relocate them as four noncontact devices on the
mixer’s lid. In their new position, however, explosive gases that were sometimes liberated during the mixing process proved a
definite safety hazard. Leveraging the accuracy and robustness of the infrared sensors therefore required a safety backup, which came in
the form of an intrinsically safe unit from Raytek Inc. (Santa Cruz, Calif.).
At the time of installation, Raytek’s Thermalert TX was the only intrinsically safe device available, the company said. Installation
was a success. According to Bob Secreti, control systems craftsman at GE, "The number one benefit by far is product quality
consistency. We also have less maintenance problems; the sensors just work."
Ensuring worker safety in today’s process plants often requires the control engineer to have first-hand knowledge of many
safety technologies. These can vary from exotic software-enabled plant safety shutdown sequences to the basics of explosion
control technology. Although instrument level safety seems pretty basic in the overall scheme of things, it often provides the first
line of defense against unthinkable tragedy, a genie no one wants to uncork.
In North America, hazardous areas are classified using two basic parameters: the type of flammable material and the probability
that a hazardous material is present. The U.S. National Electrical Code and the Canadian Electrical Code divide flammable materials
into three classes: gases, dusts, and fibers. Gases and dusts are subdivided into groups with similar explosive potential. The
following table lists some typical materials found in each category, in descending order of flammability.
In addition to classifying types of hazardous materials, the area is also defined by the probability that those materials are
present. Division 1 areas are defined as ones where hazardous materials may be present under normal operating conditions.
Division 2, on the other hand, is defined as an area where hazards arise only as the result of leaks, ventilation failure, or
unexpected breakdowns. These areas have a low probability of danger because only a mishap, such as a spill or equipment
failure, can create a hazardous situation. Probability of the presence of hazardous material must be less than 1% for an area
designated Division 2.
Gas storage facility maximizes instrument safety with combined bus systems
The Unionville Compressor Station of Reliant Energy Pipeline Services, located outside of Dubach, La., receives raw gas from
fields in eastern Texas and northern Louisiana. Recently, the company upgraded the hardware and control system for
hazardous and nonhazardous areas of its Unionville Station property. The project’s main goal was to add more field devices to
increase accuracy in measuring flow, temperature and pressure. It included upgrading the PLC system and bus architecture at
the same time. The previous system used obsolete PLCs and seriously out-of-date bus architecture. It used direct hard-wiring
and was based on 4-20 mA control-loop technology and corresponding types of field devices that are used with current-loop
systems. After looking at several vendor choices, as well as the pros and cons of various system scenarios, Reliant Energy
decided to use Siemens Simatic PLCs and a Siemens Profibus fieldbus system.
The new system uses a total of nine Simatic 505-based PLCs. Two of these PLCs are responsible for sending and receiving
data to and from the metering fields, designated as East and West Metering Runs. Central to the property is a 24-hr/day
attended control station with big-screen human-machine interface (HMI) monitors. These monitors have screen displays
designed with Intellution software. PLCs connect to the HMIs over an Ethernet backbone.
Profibus, used widely in discrete manufacturing operations, has also been applied in process industries, especially where a
mixture of nonhazardous and hazardous environments need to be connected over a single bus system. For applications
requiring fast, open communications, throughput rates are typically set at either 1.5 Mbit/s or 12 Mbit/s, depending on speed
requirements and other factors. Profibus-DP, with 11-bit character format, is not designated as intrinsically safe for use in
potentially explosive environments. Profibus-PA, on the other hand, typically operates at 31.25 Kbit/sec in accordance with IEC
1158-2. It uses an 8-bit character format and is rated as intrinsically safe according to IEC H1 and CENELEC. Using a
combination of special linking and coupling modules as a simple gateway, users can link Profibus DP and
Profibus PA bus systems so data transmission between the networks is "decoupled."
By using Profibus DP/PA couplers, up to five of the PA runs can be linked through a single linking module.
A total of six linking modules were required for the East and West portions of the Unionville facility. This allowed 30 PA cable
runs into the respective metering fields. For each linking module and five-coupler combination, all lines have physically isolated
power supply, but constitute one bus system in terms of communication. The linking modules for each side of the facility (East
and West) are daisy-chained with one DP line leading to each of the two PLCs assigned to the metering runs.
With standard Siemens Step 7 PLC software programming tools, users can configure the PLC and Profibus system. From the
viewpoint of the process control system, the DP/PA links are modular slaves. Individual modules of this slave are the metering
devices and other field devices connected to the lower-level Profibus PA side of the system. The metering devices and other
field devices are addressed indirectly via the DP/PA link. The Profibus linking module reserves one Profibus DP address,
conserving the addressing capacity of the PLC system. This means that, from the PLC's perspective, the DP/PA
couplers are invisible. Couplers, which pass telegrams to linking modules, do not need a separate bus address.
Cost savings
Fewer hardware components, reduced installation time, and avoiding the need to enclose Profibus PA cabling in explosion-proof
conduit saved money. The project management team estimated cost savings as high as $80,000, when compared to
4-20 mA systems using explosion-proof conduit and hardwiring. Open-style metal channeling attached to the ceiling and having
the cabling fully exposed provided easy access and visibility to the cabling, while still meeting safety requirements.
Unless process media are completely inert, the potential exists for fire, explosion, corrosion, and/or environmental damage in
case of an alarm situation. Of these disasters, explosion and fire are often the deadliest to plant personnel.
Explosions can be prevented in several ways. One way is by limiting the amount of electrical energy available in hazardous
areas. Controlling electrical parameters such as voltage and current requires the use of energy limiting devices known as
intrinsically safe (IS) barriers. IS barriers limit the power available to the circuit they protect. If a spark or excess electrical
heat cannot occur, neither can a fire or explosion. Although used in Europe for many years, intrinsic safety was not adopted as
part of the U.S. National Electrical Code until 1990.
An intrinsically safe circuit contains three components: the target device, IS barrier, and wiring. Devices within the protected
area can be categorized as simple (contacts, resistors, thermocouples, RTDs, etc.) or complex (transmitters, relays, solenoids,
etc.). Complex devices often have complicated circuitry that can store excess electrical energy and are normally certified
"intrinsically safe" by safety testing and certification organizations, such as Underwriters Laboratories (Northbrook, Ill.) or
Factory Mutual (Norwood, Mass.).
Selection of proper IS barriers requires calculation of both the open-circuit voltage and short-circuit current of simple devices.
For complex devices, both allowed capacitance and inductance values must be calculated. Results are then compared to
ignition curves that have been calculated for a wide variety of flammable/explosive media (gases, vapors, airborne dusts or
fibers, etc.) to determine if the available energy is below the amount needed for ignition.
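As a rough illustration of the bookkeeping that precedes the ignition-curve comparison, the sketch below checks a barrier's entity parameters against a device and its cable. The values, names, and limits are illustrative assumptions only; actual selection must use the certified entity parameters and the published ignition curves for the hazardous material involved.

# Rough sketch of entity-parameter checks used when matching a field device
# and its cable to an IS barrier.  Illustrative values only.
def barrier_is_compatible(barrier, device, cable):
    """Return True if the barrier can never deliver more energy than the
    device plus cable are rated to receive."""
    return (
        barrier["Voc"] <= device["Vmax"]                 # open-circuit voltage limit
        and barrier["Isc"] <= device["Imax"]             # short-circuit current limit
        and device["Ci"] + cable["C"] <= barrier["Ca"]   # capacitance budget
        and device["Li"] + cable["L"] <= barrier["La"]   # inductance budget
    )

if __name__ == "__main__":
    barrier = {"Voc": 28.0, "Isc": 0.093, "Ca": 0.083e-6, "La": 16e-3}   # V, A, F, H
    device  = {"Vmax": 30.0, "Imax": 0.10, "Ci": 5e-9, "Li": 10e-6}
    cable   = {"C": 200e-12 * 100, "L": 1e-6 * 100}      # 100 m of typical cable
    print("Compatible:", barrier_is_compatible(barrier, device, cable))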
Explosion-proof enclosures provide a brute-force method of preventing or controlling potentially explosive situations. These
heavy devices, usually but not always cast metal, feature sealed and securely fastened access doors. They protect the normal
power level devices within them from coming into contact with an explosive atmosphere. Even under fault conditions, an
explosion or fire usually cannot occur because of limited air for combustion within the sealed container. If an explosion does
occur, the housing is strong enough to contain it.
Although there have been many refinements in explosion-proof enclosure design, the fact remains that they are bulky, can be
difficult to mount because of their weight, and are not the handiest of housings to access. Additionally, seals, gasketing, and
purging systems require inspection and maintenance if their integrity is to be trusted. Even so, in industries where
high voltages and currents are routinely encountered and process systems are rarely reconfigured, explosion-proof enclosures
remain a practical method of preventing an industrial tragedy.
How to read P&IDs
Instrumentation detail varies with the degree of design complexity. For example, simplified or conceptual designs, often called
process flow diagrams, provide less detail than fully developed piping and instrumentation diagrams (P&IDs). Being able to
understand instrumentation symbols appearing on diagrams means understanding ANSI/ISA’s S5.1-1984 (R 1992)
Instrumentation symbols and identification standard. S5.1 defines how each symbol is constructed using graphical
elements, alpha and numeric identification codes, abbreviations, function blocks, and connecting lines.
Deciphering symbols
ISA S5.1 defines four graphical elements—discrete instruments, shared control/display, computer function, and programmable
logic controller—and groups them into three location categories (primary location, auxiliary location, and field mounted).
Discrete instruments are indicated by circular elements. Shared control/display elements are circles surrounded by a square.
Computer functions are indicated by a hexagon, and programmable logic controller (PLC) functions are shown as a triangle inside
a square.
Adding a single horizontal bar across any of the four graphical elements indicates the function resides in the primary location
category. A double line indicates an auxiliary location, and no line places the device or function in the field. Devices located
behind a panel-board or in some other inaccessible location are shown with a dashed horizontal line.
Letter and number combinations appear inside each graphical element; the letter combinations are defined by the ISA standard.
Numbers are user assigned, and schemes vary: some companies use sequential numbering, others tie the instrument number to
the process line number, and still others adopt unique and sometimes unusual numbering systems.
The first letter defines the measured or initiating variable, such as Analysis (A), Flow (F), or Temperature (T), with succeeding
letters defining readout, passive, or output functions such as Indicator (I), Record (R), Transmit (T), and so forth.
Identification letters
Letter | First letter: measured or initiating variable | First letter: modifier | Succeeding: readout or passive function | Succeeding: output function | Succeeding: modifier
A | Analysis | - | Alarm | - | -
B | Burner, combustion | - | User's choice | User's choice | User's choice
C | User's choice | - | - | Control | -
D | User's choice | Differential | - | - | -
E | Voltage | - | Sensor (primary element) | - | -
F | Flow rate | Ratio (fraction) | - | - | -
G | User's choice | - | Glass, viewing device | - | -
H | Hand | - | - | - | High
I | Current (electrical) | - | Indication | - | -
J | Power | - | Scan | - | -
K | Time, time schedule | Time rate of change | - | Control station | -
L | Level | - | Light | - | Low
M | User's choice | Momentary | - | - | Middle, intermediate
N | User's choice | - | User's choice | User's choice | User's choice
O | User's choice | - | Orifice, restriction | - | -
P | Pressure, vacuum | - | Point (test connection) | - | -
Q | Quantity | Integrate, totalizer | - | - | -
R | Radiation | - | Record | - | -
S | Speed, frequency | Safety | - | Switch | -
T | Temperature | - | - | Transmit | -
U | Multivariable | - | Multifunction | Multifunction | Multifunction
V | Vibration, mechanical analysis | - | - | Valve, damper, louver | -
W | Weight, force | - | Well | - | -
X | Unclassified | X axis | Unclassified | Unclassified | Unclassified
Y | Event, state, or presence | Y axis | - | Relay, compute, convert | -
Z | Position, dimension | Z axis | - | Driver, actuator | -
Source: Control Engineering with data from ISA S5.1 standard
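As a small illustration of how the first and succeeding letters combine into a tag, here is a sketch of a decoder built from a few entries of the table above. The tag numbers (such as "FIC-101") and the abbreviated dictionaries are hypothetical and far from exhaustive; numbering schemes are user assigned.

# Minimal sketch: decode an instrument tag such as "FIC-101" using
# first-letter and succeeding-letter meanings from the ISA S5.1 table above.
FIRST = {"A": "Analysis", "F": "Flow rate", "L": "Level",
         "P": "Pressure, vacuum", "T": "Temperature"}
SUCCEEDING = {"I": "Indicate", "R": "Record", "C": "Control",
              "T": "Transmit", "V": "Valve", "A": "Alarm"}

def decode_tag(tag: str) -> str:
    letters, _, number = tag.partition("-")
    parts = [FIRST.get(letters[0], "Unknown variable")]
    parts += [SUCCEEDING.get(ch, "Unknown function") for ch in letters[1:]]
    loop = f" (loop {number})" if number else ""
    return ", ".join(parts) + loop

print(decode_tag("FIC-101"))   # Flow rate, Indicate, Control (loop 101)
print(decode_tag("TT-205"))    # Temperature, Transmit (loop 205)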
There is no single PID algorithm. Different fields using feedback control have probably used different algorithms ever since math
was introduced to feedback control. This Web page (a single file, of four pages, no pictures) is written for people in the process
industries, for that is the only field in which I (David W. St. Clair) have experience. Even in that single field, which has been
served by companies such as ABB (formerly Taylor), Bailey, Fisher, Foxboro, Honeywell, Moore Products, Yokogawa and
others there is no standard algorithm. Perhaps years ago there was (or for most practical purposes was), but today there are
many algorithms. Also there is no standard terminology. For the person interested in tuning controllers for the process industries
it has become a bit more complicated, because the rules and procedures you would use to tune with one algorithm are not the
ones you would use to tune with another. Also, with the added features available with computers, some of the configurations
can become quite complex. This page does not begin to address those, but you certainly need to understand what your basic
building block is.
The purpose of this Web page is to focus on the fact that there are differences and to describe them (or at least to alert you to look
for them). No reference to the algorithm of specific manufacturers is given. If you are tuning controllers you must know the
algorithm of the equipment you are using. For that you should read the information provided by the manufacturer. Even the
words used to identify an algorithm are ambiguous. You should look at the equation. This is unfortunate because many persons
assigned the responsibility of tuning process industry controllers are not comfortable with equations. If you are reading this as
preparation for writing a PID algorithm, it will alert you to the fact that there is more to it than you might have thought. Indeed,
the feedback I have from knowledgeable people is that even the experts can slip up.
I have asked several friends and acquaintances to review this write-up before (and after) putting it on the Web. This does not
necessarily mean they agree with what I have written (some discussions are still taking place), but at least I have sought their
advice. I hope it has no mistakes in it. If you feel you have something to contribute in the way of corrections or additions, please
write me. I have nothing to sell by providing this page, except better control and hopefully less confusion.
Presently there are three basic forms of the PID algorithm. These will be discussed in turn. After that there is a short discussion
of other aspects of any algorithm which must be considered to write the digital program for one. A section on references and
links is at the end.
I have deliberately not assigned a name to any of these forms yet. Also I have not given a name to the variables. Both will come
later as each algorithm is discussed. The second and third forms can be made equivalent to the first form (provided derivative is
handled appropriately), but the first form cannot duplicate all combinations available in the second and third forms. The second
and third forms can be made equal to each other. For most practical purposes one algorithm is not better than another, just
different.
This first form is called "series" or "interacting" or "analog" or "classical". The variables are:
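Kc, the controller gain; Ti, the integral (reset) time; Td, the derivative time; and Kd, the derivative gain limit discussed later. As a sketch only (not a reproduction of the equation on the original page), this series form is commonly written in transfer-function terms as

OUTPUT(s) / ERROR(s) = Kc * (1 + 1/(Ti*s)) * (1 + Td*s) / (1 + (Td/Kd)*s)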
Early pneumatic controllers were probably designed more to meet mechanical and patent constraints than by a zeal to achieve
a certain algorithm. Later pneumatic controllers tended to have an algorithm close to this first form. Electronic controllers of
major vendors tended to use this algorithm. It is what process industry control users were used to at the time. If you are unsure
what algorithm is being used for the controller you are tuning, find out what it is before you start to tune.
I did not follow closely the evolution of algorithms as digital controllers were introduced. It is my understanding that most major
vendors of digital controllers provide this algorithm as basic, and many provide the second form as well. Also, many provide
several variations (I'm told Allen-Bradley has 10, and that other manufacturers are adding variations continually).
The choice of the word interacting is interesting. At least one author says that it is interacting in the time domain and
noninteracting in the frequency domain. Another author disagrees with this distinction. This really becomes a discussion of what
interacts with what. To be safe, think of the word interacting as one to identify the algorithm, not to describe it.
The second form of the algorithm is called "noninteracting" or "parallel" or "ideal" or "ISA". I understand one manufacturer refers
to this as "interacting", which serves to illustrate that terms by themselves may not tell you what the algorithm is. This form is
used in most textbooks, I understand. I think it is unfortunate that textbooks do not at least recognize the different forms. Most if
not all books written for industry users rather than students recognize at least the first two forms. The basic difference between
the first and second forms is in the way derivative is handled. If the derivative term is set to zero, then the two algorithms are
identical. Since derivative is not used very often (and shouldn't be used very often) perhaps it is not important to focus on the
difference. But it is important to anyone using derivative, and people who use derivative should know what they are doing. The
parameters set in this form can be made equivalent (except for the treatment of gain-limiting on derivative) to those in the first
form in this way:
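A sketch of the standard relations (unprimed settings belong to the first, series form; primed settings to this second form):

Kc' = Kc * (1 + Td/Ti)
Ti' = Ti * (1 + Td/Ti)
Td' = Td / (1 + Td/Ti)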
These conversions are made by equating the coefficients of s. Conversions in the reverse direction are:
Kc = FKc'
Ti = FTi'
Td = Td'/F
where F = [1 + sqrt(1 - 4*Td'/Ti')] / 2.
Typically Ti is set about 4 to 8 times Td, so the conversion factor is not huge, but it is important not to lose sight of the
correction. With this algorithm it is possible to have very troublesome combinations of Ti' and Td'. If Ti'<4Td' then the reset and
derivative times, as differentiated from settings, become complex numbers, which can confuse tuning. Don't slip into these
settings inadvertently! A very knowledgeable tuner may be able to take advantage of that characteristic in very special cases,
but it is not for everyone, every day. Some companies advise to use the interacting form if available, simply to avoid that
potential pitfall.
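A small sketch of these conversions in code, including a guard against the complex-root pitfall just described; the function and variable names are illustrative only.

import math

def ideal_to_series(Kc_p, Ti_p, Td_p):
    """Convert second-form ("ideal") settings Kc', Ti', Td' to equivalent
    first-form ("series") settings Kc, Ti, Td.  Valid only when Ti' > 4*Td';
    otherwise the equivalent series times would be complex numbers."""
    if Ti_p < 4.0 * Td_p:
        raise ValueError("Ti' < 4*Td': no real-valued series equivalent")
    F = (1.0 + math.sqrt(1.0 - 4.0 * Td_p / Ti_p)) / 2.0
    return F * Kc_p, F * Ti_p, Td_p / F

def series_to_ideal(Kc, Ti, Td):
    """Convert first-form settings to the equivalent second-form settings."""
    F = 1.0 + Td / Ti
    return F * Kc, F * Ti, Td / F

# Example: series settings Kc=2, Ti=4, Td=1 and back again.
print(series_to_ideal(2.0, 4.0, 1.0))    # (2.5, 5.0, 0.8)
print(ideal_to_series(2.5, 5.0, 0.8))    # (2.0, 4.0, 1.0)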
This algorithm also has no provision for limiting high frequency gain from derivative action, a virtually essential feature. In the
first algorithm Kd is typically fixed at 10, or if adjustable, should typically be set somewhere in the range of 6 to 10. This
desirable limiting of the derivative component is sometimes accomplished in this second form by placing a first-order filter (lag)
on the derivative term.
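A common way of writing the filtered derivative term (a sketch; the exact expressions shown on the original page may differ) is

Td'*s / (1 + (Td'/Kd)*s)

which limits the extra gain contributed by derivative action at high frequency to a factor of about Kd, just as in the first form.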
The variables Kc', Ti' and Td' have been called "effective". In the Bode plot, IF Ti'>4Td', THEN Kc' is the minimum frequency-
dependent gain (Kc is a frequency-independent gain). This is at a frequency which is midway between the "corners" defined by
Ti and Td, which is also midway between the "effective " corners associated with Ti' and Td'. Ti' is always larger than Ti and Td'
is always smaller than Td, which recognizes the slight spreading of the "effective" corners of the Bode plot as they approach
each other.
This algorithm is also called the "ISA" algorithm. The ISA has no association with this algorithm. Apparently this attribution got
started when someone working on the Fieldbus thought it would become "THE" algorithm. It didn't. Or hasn't. ANSI/ISA-S51.1-
1979 (Rev. 1993) is a standard on Process Instrumentation Terminology. While this is a standard on terminology, not
algorithms, it uses the first form of the algorithm for examples and in its Bode plot for a PID controller. Another term used to
identify this algorithm is "ideal". Think of this word as one to identify the algorithm, not describe it. It is true that it can do
everything the first form can do, and more, provided the gain for derivative is handled appropriately. But settings which produce
complex roots should be used only by the very knowledgeable.
It is hard to know what to call this third form since it is so close to the second. It has been called "parallel", "ideal parallel",
"noninteracting", "independent" and "gain independent". In one sense this third form is the second form rewritten. I understand
this is the algorithm taught to Electrical Engineers. The second and third forms can be made equal to each other by using the
following substitutions:
Kc" = Kc'
Ti" = Ti'/Kc'
Td" = Kc'Td'
They would only differ in what you call the tuning parameters. They are not gain, integral time and derivative time as those
words are traditionally used in this field. Also, the option for limiting the gain from derivative action should be handled somehow,
perhaps the same way as for form two. One option is as follows:
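A sketch of one such option (not necessarily the one shown in the original) is to filter the independent derivative term in the same way:

Td''*s / (1 + (Td''/(Kc''*Kd))*s)

which, after the substitutions above, behaves the same as the filtered derivative of form two.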
The constraint in the second form that Ti'>4Td' to keep the roots real becomes Kc"Ti">4Td"/Kc", which is a bit more
complicated.
PROGRAMMING CONSIDERATIONS
There are many considerations in writing the program for a controller besides the decision on which basic algorithm to use.
These include:
The option to have the derivative function act only on the process variable, not on set point changes.
The same option with regard to the proportional action. This option may be tied to the first, in that if you choose to have
derivative act only on the process variable you get proportional action only on that also.
Provision for setpoint and process variable tracking, to permit bumpless automatic/manual transfers. You can have
bumpless transfers without setpoint tracking. You can also transfer from manual to automatic without any bump due to
proportional action. Aren't all these options wonderful!
Provision for reset windup protection.
Provision for a filter besides the one used to limit the derivative gain.
It is no simple matter to get digital derivative action to approach the quality of analog derivative action. No program can
match it. This space is not intended to amplify on that problem, but simply to emphasize that it is a problem. It relates to
sampling frequency and noise on the signal. Some algorithms use more than one back value of the controlled variable I
believe. Also some manufacturers limit how low a derivative time may be set. It is very difficult for the user to know
whether the derivative provided is doing a good job of achieving what could be achieved with derivative action.
Integral/reset action with digital controllers is not perfect. There is a phenomenon related to quantizing error, sampling time
and long integral/reset times and calculating precision which prevents integrating to zero error. Apparently with more
digits in the A/D converter and in the computer's math, this is becoming less and less of a problem.
There is the choice of having the algorithm be "velocity", sometimes called "incremental" (each calculation period a change
in the output is calculated), or "position" (each calculation period the actual desired output is calculated). Apparently at
one time there was a perception that the velocity algorithm did not have a reset windup problem, but this is not the
case. The choice between the incremental and position algorithms seems to be a choice based on many
considerations which are beyond the scope of this write-up.
There are options on filtering noise, such as providing a dead zone or a zone of low gain around the setpoint.
There are options to be considered in special cases, such as preventing reset windup in override and cascade situations.
Provision needs to be made for manual bias.
There must be other points to make to caution the novice. Does anyone want to suggest some?
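To make several of the points above concrete, here is a minimal positional-form sketch in Python: derivative acting on the measurement only, a simple derivative filter, output clamping with reset windup protection, and a crude bumpless manual-to-auto transfer. Names and structure are illustrative assumptions, not any vendor's algorithm.

class PidSketch:
    """Illustrative positional PID, second ("ideal") form.
    Kc: gain, Ti: integral time, Td: derivative time, dt: sample period."""

    def __init__(self, Kc, Ti, Td, dt, out_min=0.0, out_max=100.0, kd_limit=10.0):
        self.Kc, self.Ti, self.Td, self.dt = Kc, Ti, Td, dt
        self.out_min, self.out_max = out_min, out_max
        self.alpha = dt / (dt + Td / kd_limit) if Td > 0 else 1.0   # derivative filter factor
        self.integral = 0.0        # stored in output units (acts as the bias)
        self.prev_pv = None
        self.filt_deriv = 0.0

    def initialize(self, pv, manual_output):
        """Bumpless manual-to-auto transfer: preload the integral so the first
        automatic output roughly equals the last manual output."""
        self.prev_pv = pv
        self.filt_deriv = 0.0
        self.integral = manual_output

    def update(self, setpoint, pv):
        error = setpoint - pv
        # Derivative acts on the measurement only, through a first-order filter.
        raw_deriv = (pv - self.prev_pv) / self.dt if self.prev_pv is not None else 0.0
        self.filt_deriv += self.alpha * (raw_deriv - self.filt_deriv)
        self.prev_pv = pv

        p_term = self.Kc * error
        d_term = -self.Kc * self.Td * self.filt_deriv
        output = p_term + self.integral + d_term

        # Reset windup protection: only integrate while the output is in range.
        if self.out_min < output < self.out_max:
            self.integral += self.Kc * error * self.dt / self.Ti
        return min(max(output, self.out_min), self.out_max)

if __name__ == "__main__":
    pid = PidSketch(Kc=2.0, Ti=5.0, Td=1.0, dt=0.1)
    pid.initialize(pv=20.0, manual_output=30.0)
    print(pid.update(setpoint=25.0, pv=20.0))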
PID stands for Proportional, Integral, Derivative. Controllers are designed to eliminate the need for continuous operator
attention. Cruise control in a car and a house thermostat are common examples of how controllers are used to automatically
adjust some variable to hold the measurement (or process variable) at the set-point. The set-point is where you would like the
measurement to be. Error is defined as the difference between set-point and measurement.
(error) = (set-point) - (measurement)
The variable being adjusted is called the manipulated variable, which usually is equal to the output of the controller. The output
of PID controllers will change in response to a change in measurement or set-point.
Manufacturers of PID controllers use different names to identify the three modes. These equations show the relationships:
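A sketch of the usual relationships (consistent with the equations used later on this page; individual manufacturers may differ):

PROPORTIONAL BAND (%) = 100 / GAIN
INTEGRAL (minutes/repeat) = 1 / RESET (repeats/minute)
DERIVATIVE = RATE = PRE-ACT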
Depending on the manufacturer, integral or reset action is set in either time/repeat or repeat/time. One is just the reciprocal of
the other. Note that manufacturers are not consistent and often use reset in units of time/repeat or integral in units of
repeats/time. Derivative and rate are the same.
Proportional Band
With proportional band, the controller output is proportional to the error or a change in measurement (depending on the
controller).
(controller output) = (error)*100/(proportional band)
With a proportional-only controller, offset (deviation from set-point) is present. Increasing the controller gain reduces the offset,
but too much gain will make the loop go unstable. Integral action was included in controllers to eliminate this offset.
Integral
With integral action, the controller output is proportional to the amount of time the error is present. Integral action eliminates
offset.
CONTROLLER OUTPUT = (1/INTEGRAL) * ∫ e(t) dt
Notice that the offset (deviation from set-point) in the time response plots is now gone. Integral action has eliminated the offset.
The response is somewhat oscillatory and can be stabilized some by adding derivative action. (Graphic courtesy of ExperTune
Loop Simulator.)
Integral action gives the controller a large gain at low frequencies that results in eliminating offset and "beating down" load
disturbances. The controller phase starts out at -90 degrees and increases to near 0 degrees at the break frequency. This
additional phase lag is what you give up by adding integral action. Derivative action adds phase lead and is used to compensate
for the lag introduced by integral action.
Derivative
With derivative action, the controller output is proportional to the rate of change of the measurement or error. The controller
output is calculated by the rate of change of the measurement with time.
CONTROLLER OUTPUT = DERIVATIVE * dm/dt
where m is the measurement at time t.
Some manufacturers use the term rate or pre-act instead of derivative. Derivative, rate and pre-act are the same thing.
DERIVATIVE = RATE = PRE ACT
Derivative action can compensate for a changing measurement. Thus derivative takes action to inhibit more rapid changes of
the measurement than proportional action. When a load or set-point change occurs, the derivative action causes the controller
gain to move the "wrong" way when the measurement gets near the set-point. Derivative is often used to avoid overshoot.
Derivative action can stabilize loops since it adds phase lead. Generally, if you use derivative action, more controller gain and
reset can be used.
With a PID controller the amplitude ratio now has a dip near the center of the frequency response. Integral action gives the
controller high gain at low frequencies, and derivative action causes the gain to start rising after the "dip". At higher frequencies
the filter on derivative action limits the derivative action. At very high frequencies (above 314 radians/time; the Nyquist
frequency) the controller phase and amplitude ratio increase and decrease quite a bit because of discrete sampling. If the
controller had no filter the controller amplitude ratio would steadily increase at high frequencies up to the Nyquist frequency (1/2
the sampling frequency). The controller phase now has a hump due to the derivative lead action and filtering. (Graphic courtesy
of ExperTune Loop Simulator.)
The time response is less oscillatory than with the PI controller. Derivative action has helped stabilize the loop.
It is important to keep in mind that understanding the process is fundamental to getting a well designed control loop. Sensors
must be in appropriate locations and valves must be sized correctly with appropriate trim.
In general, for the tightest loop control, the dynamic controller gain should be as high as possible without causing the loop to be
unstable.
This picture (from the Loop Simulator) shows the effects of a PI controller with too much or too little P or I action. The process is
typical with a dead time of 4 and lag time of 10. Optimal is red.
You can use the picture to recognize the shape of an optimally tuned loop. Also see the response shape of loops with I or P too
high or low. To get your process response to compare, put the controller in manual, change the output 5 or 10%, then put the
controller back in auto.
P is in units of proportional band. I is in units of time/repeat. So increasing P or I decreases their action in the picture.
PID (Proportional-Integral-Derivative) control action allows the process control to accurately maintain the setpoint by adjusting
the control outputs. In this technical note we have attempted to explain what PID is in practical terms. Further technical
references are available for our customers.
ISE has a complete line of PID controls suitable for virtually any application. We also have numerous tools (such as software,
data loggers and recorders) to help optimize any control application. Our application engineers have extensive practical
knowledge in tuning PID controls for all types of applications.
While controls can be used for many different process variables, for clarity we have chosen to use temperature as the process
variable throughout these notes. Other processes can utilize these control concepts and the effects will be the same.
Proportioning control continuously adjusts the output depending on the relative positions of the process temperature and the
setpoint. PID (Proportioning/Integral/Derivative) functions are commonly used together in today's controls. When used properly,
these functions allow for the precise control of difficult processes.
General:
1) Allows for the output to be a value other than 100% or 0%.
2) Temperature can be controlled without oscillations around the setpoint.
Definitions:
Proportioning Band: the area around the setpoint where the controller is actually controlling the process; the output is at some
level other than 100% or 0%. The band is generally centered around the setpoint (on single-output controls), causing the output
to be at 50% when the setpoint and the temperature are equal.
On two-output controls (i.e., heat/cool) there are two proportioning bands, one for heating and one for cooling. In this case the
bands generally end at the setpoint.
Manual Reset: Virtually no process requires precisely 50% output on single output controls or 0% output on two output controls.
Because of this many older control designs incorporated an adjustment called manual reset (also called offset on some
controls). This adjustment allows the user to redefine the output requirement at the setpoint. A proportioning control without
manual or automatic reset (defined below) will settle out somewhere within the proportioning band but likely not on the setpoint.
Some newer controls are using manual reset (as a digital user programmable value) in conjunction with automatic reset. This
allows the user to preprogram the approximate output requirement at the setpoint to allow for quicker settling at setpoint.
Automatic Reset (Integral): Corrects for any offset (between setpoint and process variable) automatically over time by shifting
the proportioning band. Reset redefines the output requirements at the setpoint until the process variable (temperature) and the
setpoint are equal. Most current controls allow the user to adjust how fast reset attempts to correct for the temperature offset.
Control manufacturers display the reset value as minutes, minutes/repeat (m/r), or repeats per minute (r/m). This difference is
extremely important to note, because repeats/minute is the inverse of minutes (or minutes/repeat). The reset time constant must
be slower than the process response (a larger number in m/r, or a smaller number in r/m). If the reset value (in minutes/repeat)
is too small, a continuous oscillation will result (reset will over-respond to any offset, causing this oscillation). If the reset value is
too long (in minutes/repeat), the process will take too long to settle out at setpoint. Automatic reset is disabled any time the
temperature is outside the proportioning band to prevent problems during startup.
Below is an example of a single-output (heat-only) temperature control with a 10% proportioning band and a setpoint of 400.
Note how reset shifts the proportioning band when the temperature (PV) enters the proportioning band.
Reset stops moving the proportioning band as soon as the setpoint and PV are equal. In the above example reset determined
approximately 38% output is required to maintain setpoint. Stable control is achieved and the temperature matches the setpoint
of 400.
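As a rough numerical sketch of that behavior (illustrative values and names only, not the figures from the original example), the output is a proportional term around an adjustable bias, and automatic reset slowly shifts that bias while the PV is inside the band:

# Sketch of a single-output proportioning control with automatic reset.
# Setpoint 400, proportioning band 10% of an assumed 0-800 instrument span
# (80 degrees wide), reset expressed in repeats per minute.
def output_percent(pv, setpoint=400.0, span=800.0, pb_percent=10.0, bias=50.0):
    """Proportional output: 'bias' % at setpoint, sliding across the band."""
    error_pct_of_span = (setpoint - pv) / span * 100.0
    out = bias + error_pct_of_span * 100.0 / pb_percent
    return min(max(out, 0.0), 100.0)

def shift_bias(bias, pv, setpoint=400.0, span=800.0, pb_percent=10.0,
               reset_rpm=0.5, dt_min=0.1):
    """Automatic reset: shift the band (the bias) while PV is inside the band."""
    band_width = pb_percent / 100.0 * span
    if abs(setpoint - pv) <= band_width / 2.0:          # reset disabled outside band
        error_pct_of_span = (setpoint - pv) / span * 100.0
        bias += error_pct_of_span * (100.0 / pb_percent) * reset_rpm * dt_min
    return bias

bias, pv = 50.0, 395.0
for _ in range(5):
    bias = shift_bias(bias, pv)
    print(round(output_percent(pv, bias=bias), 1))      # output creeps up as the band shifts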
Rate (Derivative): Shifts the proportioning band on a slope change of the process variable. Rate in effect applies the "brakes" in
an attempt to prevent overshoot (or undershoot) on process upsets or startup. Unlike reset, rate operates anywhere within the
range of the instrument. Rate usually has an adjustable time constant and should be set much shorter than reset. The larger the
time constant the more effect rate will have. Too large of a rate time constant will cause the process to heat too slowly. Too
short and the control will be slow to respond to upsets. The time constant is the amount of time any effects caused by rate will
be in effect when rate is activated due to a slope change.
Self Tuning /Adaptive Tuning / Pre-Tuning
Many control manufacturers provide various facilities in their controls that allow the user to more easily tune (adjust) the PID
parameters to their process. Below is a description of each.
Tuning On Demand with Upset: This facility typically determines the PID parameters by inducing an upset in the process. The
control's proportioning is shut off (on-off mode) and the control is allowed to oscillate around the setpoint. This allows the control
to measure the response of the process when heat is applied and removed (or cooling is applied). From this data the control can
calculate and load appropriate PID parameters. Some manufacturers perform this procedure at setpoint while others perform it at
other values. Caution must be exercised, as substantial swings in the process variable will likely occur while the control
is in this mode.
Adaptive Tuning: This mode tunes the PID parameters without introducing any upsets. When a control is utilizing this function it
is constantly monitoring the process variable for any oscillation around the setpoint. If there is an oscillation the control adjusts
the PID parameters in an attempt to eliminate it. This type of tuning is ideal for processes where load characteristics change
drastically while the process is running. It cannot be used effectively if the process has externally induced upsets that the
control could not possibly tune out. For example, a press where a cold mold is inserted at some cyclic rate could cause the PID
parameters to be adjusted to the point where control would be totally unacceptable.
Some manufacturers call tuning on demand Self Tune, Auto Tune, or Pre-Tune. Adaptive tuning is sometimes called Self Tune,
Auto Tune or Adaptive Tune. Since there is no standardization in the naming of these features questions must be asked to
determine how they operate.