UNIT-1
MEASUREMENTS:
The two basic requirements for a meaningful measurement are:
i) The standard used for comparison purposes must be accurately defined &
should be commonly accepted
ii) The apparatus used & the method adopted must be provable.
MEASURING INSTRUMENT:
It may be defined as a device for determining the value or magnitude of a quantity or
variable.
Most measurement systems contain the following main functional elements:
Primary sensing element → Variable conversion element → Variable manipulation element → Data transmission element → Data presentation element
Primary sensing element:
The quantity under measurement makes its first contact with the primary sensing element of a measurement system, i.e., the measurand (the unknown quantity which is to be measured) is first detected by the primary sensor, which gives an output in a different analogous form. This output is then converted into an electrical signal by a transducer (which converts energy from one form to another). The first stage of a measurement system is therefore known as the 'detector-transducer stage'.
The output of the primary sensing element may be an electrical signal of any form; it may be a voltage, a frequency or some other electrical parameter.
For the instrument to perform the desired function, it may be necessary to convert this output to some other suitable form.
The information about the quantity under measurement has to be conveyed to the personnel handling the instrument or the system for monitoring, control, or analysis purposes. This function is performed by the data presentation element.
In case the data is to be monitored, visual display devices are needed. These devices may be analog or digital indicating instruments like ammeters, voltmeters, etc. In case the data is to be recorded, recorders like magnetic tapes, high-speed cameras & TV equipment, CRTs and printers may be used. For control & analysis purposes, microprocessors or computers may be used. The final stage in a measurement system is known as the 'terminating stage'.
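As an illustration of how these functional elements chain together, the following is a minimal Python sketch. The thermocouple-style signal chain and all numeric constants are assumed for illustration only; they are not taken from the text above.

```python
# Minimal sketch of the functional elements of a measurement system.
# The thermocouple-style chain and all constants are assumed, purely illustrative.

def primary_sensing(temperature_c):
    # Primary sensing element: detects the measurand and produces an
    # analogous output (here an assumed thermo-emf of 0.041 mV per degree C).
    return 0.041 * temperature_c                 # mV

def variable_conversion(emf_mv):
    # Variable conversion element: amplifies the small emf to a usable level.
    return emf_mv * 100.0

def variable_manipulation(signal):
    # Variable manipulation element: scales the signal back to engineering units.
    return signal / (0.041 * 100.0)              # degrees C

def data_presentation(value):
    # Data presentation element: presents the result to the observer.
    print(f"Indicated temperature: {value:.1f} deg C")

if __name__ == "__main__":
    # Data transmission is implicit here: each stage simply passes its output on.
    data_presentation(variable_manipulation(variable_conversion(primary_sensing(85.0))))
```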
1.2 STATIC & DYNAMIC CHARACTERISTICS
Static characteristics:
The set of criteria defined for instruments that are used to measure quantities which vary slowly with time or are mostly constant, i.e., do not vary with time, is called the 'static characteristics'.
The various static characteristics are:
i) Accuracy
ii) Precision
iii) Sensitivity
iv) Linearity
v) Reproducibility
vi) Repeatability
vii) Resolution
viii) Threshold
ix) Drift
x) Stability
xi) Tolerance
xii) Range or span
Accuracy:
It is the degree of closeness with which the reading approaches the true value
of the quantity to be measured. The accuracy can be expressed in the following ways:
a) Point accuracy:
Such an accuracy is specified at only one particular point of scale. It does
not give any information about the accuracy at any other
point on the scale.
Precision:
It is the measure of reproducibility i.e., given a fixed value of a quantity,
precision is a measure of the degree of agreement within a group of
measurements. The precision is composed of two characteristics:
a) Conformity:
Consider a resistor having a true value of 2385692 Ω, which is being measured by an ohmmeter. The observer can consistently read a value of 2.4 MΩ due to the non-availability of a proper scale. The error created due to the limitation of the scale reading is a precision error.
P = 1 - |Xn - X̄n| / X̄n
Where, P = precision
Xn = value of the nth measurement
X̄n = average value of the set of measurements
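The expression above can be checked numerically. The ohmmeter readings below are hypothetical values clustered around 2.4 MΩ; only the formula itself comes from the text.

```python
# Numerical check of P = 1 - |Xn - X_avg| / X_avg with hypothetical readings (Mohm).
readings = [2.38, 2.40, 2.41, 2.39, 2.42]

x_avg = sum(readings) / len(readings)            # average of the set of measurements

def precision(x_n, x_avg):
    return 1 - abs(x_n - x_avg) / x_avg

for i, x_n in enumerate(readings, start=1):
    print(f"reading {i}: {x_n} Mohm, precision = {precision(x_n, x_avg):.4f}")
```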
Sensitivity:
The sensitivity denotes the smallest change in the measured variable to which
the instrument responds. It is defined as the ratio of the change in the output of an instrument to the change in the value of the quantity to be measured.
Mathematically it is expressed as,
Sensitivity = Δqo / Δqi
[Figure: calibration curves of output qo versus input qi for a linear and a non-linear instrument]
Thus, if the calibration curve is linear, as shown, the sensitivity of the instrument is the
slope of the calibration curve.
If the calibration curve is not linear as shown, then the sensitivity varies with the
input.
Inverse sensitivity or deflection factor is defined as the reciprocal of sensitivity.
Deflection factor = Δqi / Δqo
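A short numeric sketch of both ratios follows. The calibration data (pressure input in kPa, deflection output in mm) is assumed for illustration; a constant slope between points indicates a linear instrument.

```python
# Sensitivity as the slope of the calibration curve; data points are assumed.
inputs  = [0.0, 10.0, 20.0, 30.0, 40.0]          # qi, e.g. pressure in kPa (assumed)
outputs = [0.0,  2.5,  5.0,  7.5, 10.0]          # qo, e.g. deflection in mm (assumed)

# Sensitivity = delta(qo) / delta(qi) between successive calibration points.
sensitivities = [(outputs[k + 1] - outputs[k]) / (inputs[k + 1] - inputs[k])
                 for k in range(len(inputs) - 1)]
print("sensitivity (mm/kPa):", sensitivities)    # constant -> linear calibration curve

# Inverse sensitivity (deflection factor) = delta(qi) / delta(qo).
print("deflection factor (kPa/mm):", [1.0 / s for s in sensitivities])
```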
Linearity:
Linearity is the ability of an instrument to produce an output that is directly proportional to its input, i.e., the closeness of the calibration curve to a specified straight line.
Reproducibility:
It is the degree of closeness with which a given value may be repeatedly
measured. It is specified in terms of scale readings over a given period of time.
Repeatability:
It is defined as the variation of scale reading and is random in nature.
Drift:
Drift may be classified into three categories:
a) Zero drift:
If the whole calibration gradually shifts due to slippage, permanent set,
or due to undue warming up of electronic tube circuits, zero drift sets in.
b) Span drift or sensitivity drift:
If there is a proportional change in the indication all along the upward scale, the drift is called span drift or sensitivity drift.
[Figure: output versus input characteristics showing zero drift and span drift relative to the nominal characteristics]
c) Zonal drift:
In case the drift occurs over only a portion of the span of an instrument, it is called zonal
drift.
Resolution:
If the input is slowly increased from some arbitrary input value, it will again be found that
output does not change at all until a certain increment is exceeded.
This increment is called resolution.
Threshold:
If the instrument input is increased very gradually from zero there will be some minimum
value below which no output change can be detected. This
minimum value defines the threshold of the instrument.
Stability:
It is the ability of an instrument to retain its performance throughout its specified operating
life.
Tolerance:
The maximum allowable error in the measurement is specified in terms of some value which is
called tolerance.
Range or span:
The minimum & maximum values of a quantity that an instrument is designed to measure define its range; the difference between these limits is the span.
Dynamic characteristics:
The set of criteria defined for instruments that are used to measure quantities which change rapidly with time is called the 'dynamic characteristics'.
Speed of response:
It is defined as the rapidity with which a measurement system responds to changes in the
measured quantity.
Measuring lag:
It is the retardation or delay in the response of a measurement system to changes in the
measured quantity. The measuring lags are of two types:
a) Retardation type:
In this case the response of the measurement system begins immediately after the change in the measured quantity has occurred.
b) Time delay type:
In this case the response of the measurement system begins after a dead time following the application of the input.
Fidelity:
It is defined as the degree to which a measurement system indicates changes in the
measurand quantity without dynamic error.
Dynamic error:
It is the difference between the true value of the quantity changing with time & the value
indicated by the measurement system if no static error is assumed. It is also called measurement
error.
Gross Errors:
The gross errors mainly occur due to carelessness or lack of experience of a human being.
These errors also occur due to incorrect adjustments of instruments.
These errors cannot be treated mathematically.
These errors are also called 'personal errors'.
Ways to minimize gross errors:
The complete elimination of gross errors is not possible, but one can minimize them in the following ways:
Taking great care while taking the reading, recording the reading & calculating the result.
Without depending on only one reading, at least three or more readings must be taken, preferably by different persons.
Systematic errors:
Instrumental errors:
These errors arise mainly due to the following three reasons:
a) Inherent shortcomings of instruments:
These are because of the mechanical structure of the instruments, for example friction in the bearings of various moving parts, irregular spring tension, reduction in tension due to improper handling, hysteresis, gear backlash, stretching of the spring, variation in the air gap, etc.
Ways to minimize this error:
Selecting a proper instrument and planning the proper procedure for the measurement, recognizing the effect of such errors and applying the proper correction factors, and calibrating the instrument carefully against a standard.
b) Misuse of instruments:
A good instrument, if used in an abnormal way, gives misleading results. Poor initial adjustment, improper zero setting, using leads of high resistance, etc., are examples of misusing a good instrument. Such things do not cause permanent damage to the instrument but definitely cause serious errors.
c) Loading effects:
Loading effects due to an improper way of using the instrument cause serious errors. The best example of such a loading-effect error is connecting a well-calibrated voltmeter across two points of a high resistance circuit; the current drawn by the voltmeter disturbs the circuit and gives a misleading reading. The same voltmeter connected in a low resistance circuit gives an accurate reading.
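The voltmeter example can be put in rough numbers. The sketch below assumes a 10 V source feeding two equal resistors, with a voltmeter of 1 MΩ input resistance placed across the lower one; all values are hypothetical.

```python
# Rough illustration of voltmeter loading: the meter's input resistance appears
# in parallel with the measured resistor and pulls the reading down. Values assumed.
def indicated_voltage(v_source, r1, r2, r_meter):
    r2_loaded = (r2 * r_meter) / (r2 + r_meter)  # r2 in parallel with the meter
    return v_source * r2_loaded / (r1 + r2_loaded)

V, R_METER = 10.0, 1e6                           # 10 V source, 1 Mohm voltmeter (assumed)

# High-resistance circuit (R1 = R2 = 1 Mohm): large loading error.
print("high-R circuit:", round(indicated_voltage(V, 1e6, 1e6, R_METER), 2), "V, true 5.0 V")
# Low-resistance circuit (R1 = R2 = 100 ohm): reading is practically accurate.
print("low-R circuit: ", round(indicated_voltage(V, 100.0, 100.0, R_METER), 2), "V, true 5.0 V")
```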
Ways to minimize this error:
Thus the errors due to the loading effect can be avoided by using an instrument
intelligently and correctly.
Environmental errors:
These errors are due to conditions external to the measuring instrument. The various factors causing these environmental errors are temperature changes, pressure changes, thermal emfs, ageing of equipment and the frequency sensitivity of the instrument.
The various methods which can be used to reduce these errors are:
i) Using the proper correction factors and the information supplied by the manufacturer of the instrument
ii) Using an arrangement which will keep the surrounding conditions constant
iii) Reducing the effect of dust and humidity on the components by hermetically sealing the components in the instruments
iv) Minimizing the effects of external fields by using magnetic or electrostatic shields or screens
v) Using equipment which is immune to such environmental effects.
Observational errors:
These errors arise due to the individual habits of the observer while taking readings, for example the parallax error introduced when the line of vision is not exactly above the pointer. To eliminate such errors one should use instruments with mirrored scales, knife-edged pointers, etc.
The systematic errors can be subdivided as static and dynamic errors. The static errors are
caused by the limitations of the measuring device, while the dynamic errors are caused by the instrument not responding fast enough to follow the changes in the variable to be measured.
Random errors:
Some errors still remain even after the systematic and instrumental errors are reduced or at least accounted for. The causes of such errors are unknown and hence the errors are called random errors.
Ways to minimize this error:
The only way to reduce these errors is by increasing the number of observations and using statistical methods to obtain the best approximation of the reading.
Out of the various possible errors, the random errors cannot be determined in the ordinary process of measurements. Such errors are treated mathematically.
The mathematical analysis of the various measurements is called 'statistical analysis of the data'.
For such statistical analysis, the same reading is taken a number of times, generally using different observers, different instruments & different ways of measurement. The statistical analysis helps to determine analytically the uncertainty of the final test results.
Arithmetic mean & median:
When a number of readings of the same measurement are taken, the most likely value from the set of measured values is the arithmetic mean of the readings taken.
The arithmetic mean value can be mathematically obtained as,
X̄ = (X1 + X2 + .... + Xn) / n
This mean is very close to the true value if the number of readings is very large. But when the number of readings is large, calculation of the mean value is complicated. In such a case, a median value is obtained, which is a close approximation to the arithmetic mean value. For a set of n measurements X1, X2, X3, ...., Xn written down in ascending order of magnitude, the median value is given by,
Xmedian = X(n+1)/2
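The two expressions can be evaluated directly in code. The voltmeter readings below are hypothetical; the even-n case for the median, not covered by the formula above, is handled by averaging the two middle values.

```python
# Arithmetic mean and median of a repeated measurement; readings are hypothetical.
readings = [10.1, 10.3, 10.2, 10.4, 10.2]

mean = sum(readings) / len(readings)             # X_avg = (X1 + X2 + ... + Xn) / n

ordered = sorted(readings)                       # arrange in ascending order of magnitude
n = len(ordered)
if n % 2:                                        # odd n: Xmedian = X(n+1)/2
    median = ordered[(n + 1) // 2 - 1]
else:                                            # even n: average of the two middle values
    median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2

print(f"mean = {mean:.3f}, median = {median:.3f}")
```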
Average deviation:
The deviation tells us about the departure of a given reading from the arithmetic mean of the data set.
di = Xi - X̄
Where,
di = deviation of the ith reading
Xi = value of the ith reading
X̄ = arithmetic mean
The average deviation is defined as the sum of the absolute values of the deviations divided by the number of readings. This is also called the mean deviation.
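Continuing with the same hypothetical readings, the deviations and the average (mean) deviation can be computed as follows.

```python
# Deviations d_i = X_i - X_avg and the average deviation D = sum(|d_i|) / n.
readings = [10.1, 10.3, 10.2, 10.4, 10.2]        # hypothetical values

x_avg = sum(readings) / len(readings)            # arithmetic mean
deviations = [x - x_avg for x in readings]       # departure of each reading from the mean
avg_deviation = sum(abs(d) for d in deviations) / len(readings)

print("deviations:", [round(d, 3) for d in deviations])
print("average deviation:", round(avg_deviation, 3))
```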
1.5 STANDARD & CALIBRATION
CALIBRATION
Calibration is the procedure for determining the correct values of a measurand by comparison with a standard. The device with which the comparison is made is called the standard instrument. The instrument whose value is unknown & which is to be calibrated is called the test instrument. Thus in calibration, the test instrument is compared with the standard instrument.
There are two methodologies for obtaining the comparison between test instrument &
standard instrument. These methodologies are
i) Direct comparisons
ii) Indirect comparisons
Direct comparisons:
In a direct comparison, a source or generator applies a known input to the meter under test. The ratio of what the meter indicates to the known generator value gives the meter's error.
In such a case the meter is the test instrument while the generator is the standard instrument.
The deviation of the meter from the standard value is compared with the allowable performance limit.
With the help of direct comparison, a generator or source can also be calibrated.
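A direct-comparison check can be expressed as a short script. The applied values, meter readings and the 0.5 % allowable performance limit below are all assumed for illustration.

```python
# Direct comparison: a standard source applies known values, and the test meter's
# deviation is compared with an allowable performance limit. All data assumed.
applied   = [1.000, 5.000, 10.000]               # known values from the standard source (V)
indicated = [1.004, 4.990, 10.080]               # readings of the meter under test (V)
LIMIT_PERCENT = 0.5                              # assumed allowable limit

for a, m in zip(applied, indicated):
    error_percent = (m - a) / a * 100.0          # deviation of the meter from the standard
    verdict = "within limit" if abs(error_percent) <= LIMIT_PERCENT else "OUT OF LIMIT"
    print(f"applied {a:.3f} V, indicated {m:.3f} V, error {error_percent:+.2f}% -> {verdict}")
```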
Indirect comparisons:
In an indirect comparison, the test instrument is compared with a standard instrument of the same type, i.e., if the test instrument is a meter, the standard instrument is also a meter; if the test instrument is a generator, the standard instrument is also a generator, and so on.
If the test instrument is a meter, then the same input is applied to the test meter as well as the standard meter.
In case of generator calibration, the outputs of the test generator as well as the standard generator are set to the same nominal levels.
Then a transfer meter is used which measures the outputs of both the standard and the test generators.
Standards
International standards
These are highly accurate absolute standards, which can be used as ultimate
reference standards .These primary standards are maintained at national
standard laboratories in different countries.
These standards representing fundamental units as well as some electrical
and mechanical derived units are calibrated independently by absolute
measurements at each of the national laboratories.
These are not available for use outside the national laboratories.
The main function of the primary standards is the calibration and
verification of secondary standards.
Secondary standards
As mentioned above, the primary standards are not available for use outside
the national laboratories.
The various industries need some reference standards. So, to protect the highly accurate primary standards, the secondary standards are maintained, which are designed and constructed from the absolute standards.
These are used by the measurement and calibration laboratories in
industries and are maintained by the particular industry to which they
belong. Each industry has its own standards.
Working standards
These are the basic tools of a measurement laboratory and are used to check and calibrate the instruments used in the laboratory for accuracy and performance.
[Hierarchy: International standards → National standard laboratories → Measurement laboratory → Process instrument]