
UNIT I INTRODUCTION

MEASUREMENTS:

The measurement of a given quantity is essentially an act or the result of comparison between the quantity (whose magnitude is unknown) & a predefined standard. Since two quantities are compared, the result is expressed in numerical values.

BASIC REQUIREMENTS OF MEASUREMENT:

i) The standard used for comparison purposes must be accurately defined & should be commonly accepted.
ii) The apparatus used & the method adopted must be provable.

MEASURING INSTRUMENT:

It may be defined as a device for determining the value or magnitude of a quantity or variable.

1.1 FUNCTIONAL ELEMENTS OF AN INSTRUMENT:

Most measurement systems contain three main functional elements. They are:

i) Primary sensing element
ii) Variable conversion element &
iii) Data presentation element.

(Fig) Functional elements of an instrument: primary sensing element → variable conversion element → variable manipulation element → data transmission element → data presentation element
Primary sensing element:

The quantity under measurement makes its first contact with the primary sensing element of a measurement system, i.e., the measurand (the unknown quantity which is to be measured) is first detected by the primary sensor, which gives the output in a different analogous form. This output is then converted into an electrical signal by a transducer (which converts energy from one form to another). The first stage of a measurement system is known as the 'detector-transducer stage'.

Variable conversion element:

The output of the primary sensing element may be an electrical signal of any form: it may be a voltage, a frequency or some other electrical parameter.
For the instrument to perform the desired function, it may be necessary to convert this output to some other suitable form.

Variable manipulation element:

The function of this element is to manipulate the signal presented to it while preserving the original nature of the signal. It is not necessary that a variable manipulation element should follow the variable conversion element. Some non-linear processes like modulation, detection, sampling, filtering, chopping, etc., are performed on the signal to bring it to the desired form to be accepted by the next stage of the measurement system. This process of conversion is called 'signal conditioning'.
The term signal conditioning includes many other functions in addition to variable conversion & variable manipulation. In fact, the element that follows the primary sensing element in any instrument or measurement system is called the 'signal conditioning element'.

NOTE: When the elements of an instrument are physically separated, it becomes necessary to transmit data from one to another. The element that performs this function is called a 'data transmission element'.
Data presentation element:

The information about the quantity under measurement has to be conveyed to the personnel handling the instrument or the system for monitoring, control, or analysis purposes. This function is done by the data presentation element.
In case data is to be monitored, visual display devices are needed. These devices may be analog or digital indicating instruments like ammeters, voltmeters, etc. In case data is to be recorded, recorders like magnetic tapes, high-speed cameras & TV equipment, CRTs and printers may be used. For control & analysis purposes, microprocessors or computers may be used. The final stage in a measurement system is known as the 'terminating stage'.
1.2 STATIC & DYNAMIC CHARACTERISTICS

The performance characteristics of an instrument are mainly divided into two categories:
i) Static characteristics
ii) Dynamic characteristics

Static characteristics:

The set of criteria defined for the instruments, which are used to measure the
quantities which are slowly varying with time or mostly constant, i.e., do not vary with
time, is called ‘static characteristics’.
The various static characteristics are:
i) Accuracy
ii) Precision
iii) Sensitivity
iv) Linearity
v) Reproducibility
vi) Repeatability
vii) Resolution
viii) Threshold
ix) Drift
x) Stability
xi) Tolerance
xii) Range or span
Accuracy:
It is the degree of closeness with which the reading approaches the true value of the quantity to be measured. The accuracy can be expressed in the following ways:
a) Point accuracy:
Such an accuracy is specified at only one particular point of the scale. It does not give any information about the accuracy at any other point on the scale.

b) Accuracy as percentage of scale span:


When an instrument has a uniform scale, its accuracy may be expressed in terms of the scale range.

c) Accuracy as percentage of true value:

The best way to conceive the idea of accuracy is to specify it in terms of the true value of the quantity being measured.
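
To make the difference between conventions b) and c) concrete, here is a minimal numeric sketch in Python; the voltmeter span, reading and true value below are hypothetical illustration values, not from the text:

# Hypothetical example: a voltmeter with a 0-100 V span reads 48.0 V
# when the true value is 50.0 V.
true_value = 50.0    # volts
reading = 48.0       # volts
full_scale = 100.0   # volts (scale span)

error = abs(reading - true_value)         # 2.0 V

# Accuracy error expressed as a percentage of the true value
pct_of_true = error / true_value * 100    # 4.0 %

# The same error expressed as a percentage of the scale span
pct_of_span = error / full_scale * 100    # 2.0 %

print(pct_of_true, pct_of_span)

The same 2 V error looks smaller when quoted against the span, which is why specifying accuracy against the true value is the better convention.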

Precision:
It is the measure of reproducibility, i.e., given a fixed value of a quantity, precision is a measure of the degree of agreement within a group of measurements. The precision is composed of two characteristics:

a) Conformity:
Consider a resistor having a true value of 2,385,692 Ω, which is being measured by an ohmmeter. The reader can consistently read a value of 2.4 MΩ due to the non-availability of a finer scale. The error created due to this limitation of the scale reading is a precision error.

b) Number of significant figures:


The precision of the measurement is obtained from the number of
significant figures, in which the reading is expressed. The significant
figures convey the actual information about the magnitude & the
measurement precision of the quantity.

The precision can be mathematically expressed as:

P = 1 − |Xn − X̄n| / X̄n

Where, P = precision
Xn = value of the nth measurement
X̄n = average value of the set of measurement values
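
As a minimal sketch of this formula (the readings below are hypothetical):

# Hypothetical set of repeated readings of the same quantity
readings = [101, 98, 102, 97, 101, 100, 103, 98, 106, 99]

mean = sum(readings) / len(readings)   # X̄n, the average value (100.5)
xn = readings[2]                       # one particular reading, here 102

# P = 1 − |Xn − X̄n| / X̄n
precision = 1 - abs(xn - mean) / mean
print(round(precision, 3))             # ≈ 0.985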
Sensitivity:
The sensitivity denotes the smallest change in the measured variable to which the instrument responds. It is defined as the ratio of the change in the output of an instrument to a change in the value of the quantity to be measured. Mathematically it is expressed as,

(Fig) Calibration curves: output qo plotted against input qi for a linear instrument and for a non-linear instrument.

Sensitivity = infinitesimal change in output / infinitesimal change in input = Δqo / Δqi

Thus, if the calibration curve is linear, as shown, the sensitivity of the instrument is the slope of the calibration curve.
If the calibration curve is not linear, as shown, then the sensitivity varies with the input.
Inverse sensitivity or deflection factor is defined as the reciprocal of sensitivity.

Inverse sensitivity or deflection factor = 1 / sensitivity = Δqi / Δqo
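
Both definitions can be sketched with hypothetical calibration data for a linear instrument (the units are illustrative only):

# Hypothetical calibration data: input qi in bar, output qo in volts
qi = [0.0, 1.0, 2.0, 3.0, 4.0]
qo = [0.0, 2.5, 5.0, 7.5, 10.0]

# For a linear calibration curve the sensitivity is its slope Δqo/Δqi
sensitivity = (qo[-1] - qo[0]) / (qi[-1] - qi[0])   # 2.5 V per bar

# Inverse sensitivity (deflection factor) is the reciprocal Δqi/Δqo
deflection_factor = 1 / sensitivity                 # 0.4 bar per V

print(sensitivity, deflection_factor)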
Linearity:

The linearity is defined as the ability to reproduce the input characteristics symmetrically & linearly.
The curve shows the actual calibration curve & the idealized straight line.
(Fig) Linearity: actual calibration curve and idealized straight line, showing the maximum deviation of the output.

% non-linearity = (max. deviation of output from the idealized straight line / full-scale reading) × 100
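
A short sketch of this computation, drawing the idealized line through the end points of a hypothetical calibration data set:

# Hypothetical calibration points: actual outputs qo for inputs qi
qi = [0.0, 1.0, 2.0, 3.0, 4.0]
qo = [0.0, 2.7, 5.1, 7.4, 10.0]

# Idealized straight line drawn through the two end points
slope = (qo[-1] - qo[0]) / (qi[-1] - qi[0])
ideal = [qo[0] + slope * (x - qi[0]) for x in qi]

# Maximum deviation of the actual output from the idealized line
max_dev = max(abs(a - b) for a, b in zip(qo, ideal))

pct_nonlinearity = max_dev / qo[-1] * 100   # full-scale reading = qo[-1]
print(round(pct_nonlinearity, 1))           # 2.0 (% non-linearity)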

Reproducibility:
It is the degree of closeness with which a given value may be repeatedly
measured. It is specified in terms of scale readings over a given period of time.

Repeatability:
It is defined as the variation of scale reading and is random in nature.

Drift:
Drift may be classified into three categories:
a) zero drift:
If the whole calibration gradually shifts due to slippage, permanent set, or undue warming up of electronic tube circuits, zero drift sets in.

(Fig) Zero drift: the whole output characteristic shifts from the nominal characteristic by a constant amount. (Fig) Span drift: the output characteristic deviates from the nominal one by an amount that grows along the scale.


b) Span drift or sensitivity drift:
If there is a proportional change in the indication all along the upward scale, the drift is called span drift or sensitivity drift.

c) Zonal drift:
In case the drift occurs over only a portion of the span of an instrument, it is called zonal drift.

Resolution:
If the input is slowly increased from some arbitrary input value, it will again be found that the output does not change at all until a certain increment is exceeded. This increment is called resolution.

Threshold:
If the instrument input is increased very gradually from zero there will be some minimum
value below which no output change can be detected. This
minimum value defines the threshold of the instrument.

Stability:
It is the ability of an instrument to retain its performance throughout its specified operating life.

Tolerance:
The maximum allowable error in the measurement is specified in terms of some value which is
called tolerance.

Range or span:
The minimum & maximum values of a quantity for which an instrument is designed to
measure is called its range or span.

Dynamic characteristics:

The set of criteria defined for instruments that measure quantities changing rapidly with time is called 'dynamic characteristics'.

The various dynamic characteristics are:
i) Speed of response
ii) Measuring lag
iii) Fidelity
iv) Dynamic error

Speed of response:
It is defined as the rapidity with which a measurement system responds to changes in the
measured quantity.
Measuring lag:
It is the retardation or delay in the response of a measurement system to changes in the
measured quantity. The measuring lags are of two types:

a) Retardation type:
In this case the response of the measurement system begins immediately after the
change in measured quantity has occurred.

b) Time delay lag:


In this case the response of the measurement system begins after a dead time after the
application of the input.

Fidelity:
It is defined as the degree to which a measurement system indicates changes in the
measurand quantity without dynamic error.

Dynamic error:
It is the difference between the true value of the quantity changing with time & the value indicated by the measurement system, if no static error is assumed. It is also called measurement error.

1.3 ERRORS IN MEASUREMENT

The types of errors are as follows:
i) Gross errors
ii) Systematic errors
iii) Random errors

Gross Errors:

Gross errors mainly occur due to carelessness or lack of experience of a human being.
These errors also occur due to incorrect adjustments of instruments.
These errors cannot be treated mathematically.
These errors are also called 'personal errors'.
Ways to minimize gross errors:

The complete elimination of gross errors is not possible, but one can minimize them in the following ways:
Taking great care while taking the reading, recording the reading & calculating the result.
Without depending on only one reading, at least three or more readings must be taken, preferably by different persons.
Systematic errors:

A constant uniform deviation of the operation of an instrument is known as a systematic error.
Systematic errors are mainly due to the shortcomings of the instrument & the characteristics of the material used in the instrument, such as defective or worn parts, ageing effects, environmental effects, etc.

Types of Systematic errors:

There are three types of systematic errors:
i) Instrumental errors
ii) Environmental errors
iii) Observational errors

Instrumental errors:
These errors can be mainly due to the following three reasons:

a) Short comings of instruments:

These are because of the mechanical structure of the instruments, for example friction in the bearings of various moving parts, irregular spring tensions, reduction in tension due to improper handling, hysteresis, gear backlash, stretching of the spring, variations in the air gap, etc.

Ways to minimize this error:

These errors can be avoided by the following methods:
Selecting a proper instrument and planning the proper procedure for the measurement.
Recognizing the effect of such errors and applying the proper correction factors.
Calibrating the instrument carefully against a standard.

b) Misuse of instruments:

A good instrument, if used in an abnormal way, gives misleading results. Poor initial adjustment, improper zero setting, using leads of high resistance, etc., are examples of misusing a good instrument. Such things do not cause permanent damage to the instrument but definitely cause serious errors.

c) Loading effects:

Loading effects due to an improper way of using the instrument cause serious errors. The best example of such a loading-effect error is connecting a well-calibrated voltmeter across two points of a high-resistance circuit, where the meter itself loads the circuit and disturbs the quantity being measured. The same voltmeter connected in a low-resistance circuit gives an accurate reading.
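
The loading effect can be checked numerically. In this hypothetical sketch a voltmeter with a finite input resistance rm reads the voltage across the lower arm of a resistive divider; all component values are illustrative assumptions:

# Voltmeter of input resistance rm measuring across r2 in a divider
# r1-r2 fed from source vs. True (unloaded) voltage is vs*r2/(r1+r2).
def measured_voltage(r1, r2, rm, vs=10.0):
    rp = r2 * rm / (r2 + rm)    # meter resistance loads r2: r2 || rm
    return vs * rp / (r1 + rp)

rm = 200e3   # assumed 200 kΩ meter input resistance

# High-resistance circuit (100 kΩ arms): true value is 5.0 V
print(measured_voltage(100e3, 100e3, rm))   # ≈ 4.0 V, about 20 % low

# Low-resistance circuit (100 Ω arms): true value is again 5.0 V
print(measured_voltage(100.0, 100.0, rm))   # ≈ 5.0 V, negligible error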
Ways to minimize this error:

Thus, the errors due to the loading effect can be avoided by using an instrument intelligently and correctly.

Environmental errors:

These errors are due to conditions external to the measuring instrument. The various factors resulting in these environmental errors are temperature changes, pressure changes, thermal emf, ageing of equipment and frequency sensitivity of the instrument.

Ways to minimize this error:

The various methods which can be used to reduce these errors are:

i) Using the proper correction factors and using the information supplied by the manufacturer of the instrument
ii) Using an arrangement which will keep the surrounding conditions constant
iii) Reducing the effect of dust and humidity on the components by hermetically sealing the components in the instruments
iv) Minimizing the effects of external fields by using magnetic or electrostatic shields or screens
v) Using equipment which is immune to such environmental effects.
Observational errors:

These are the errors introduced by the observer.
There are many sources of observational errors, such as parallax error while reading a meter, wrong scale selection, etc.

Ways to minimize this error

To eliminate such errors one should use instruments with mirrors, knife-edged pointers, etc.

Systematic errors can be subdivided into static and dynamic errors. Static errors are caused by the limitations of the measuring device, while dynamic errors are caused by the instrument not responding fast enough to follow the changes in the variable to be measured.

Random errors:
Some errors still remain even after the systematic and instrumental errors are reduced or at least accounted for. The causes of such errors are unknown, and hence the errors are called random errors.
Ways to minimize this error

The only way to reduce these errors is by increasing the number of observations and using statistical methods to obtain the best approximation of the reading.

1.4 STATISTICAL EVALUATION OF MEASUREMENT DATA

Out of the various possible errors, the random errors cannot be determined in the ordinary process of measurements. Such errors are treated mathematically.
The mathematical analysis of the various measurements is called 'statistical analysis of the data'.
For such statistical analysis, the same reading is taken a number of times, generally using different observers, different instruments & different ways of measurement. The statistical analysis helps to determine analytically the uncertainty of the final test results.
Arithmetic mean & median:

When a number of readings of the same measurement are taken, the most likely value from the set of measured values is the arithmetic mean of the readings taken.
The arithmetic mean value can be mathematically obtained as:

X̄ = (X1 + X2 + .... + Xn) / n

This mean is very close to the true value if the number of readings is very large. But when the number of readings is large, calculation of the mean value is complicated. In such a case a median value is obtained, which is a close approximation to the arithmetic mean value. For a set of n measurements X1, X2, X3, ..., Xn written down in ascending order of magnitude, the median value is given by:

Xmedian = X(n+1)/2
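
A minimal sketch of both estimates (the readings are hypothetical):

# Hypothetical repeated readings of the same quantity
readings = [147.2, 147.4, 147.9, 148.1, 147.5]

# Arithmetic mean: X̄ = (X1 + X2 + .... + Xn) / n
mean = sum(readings) / len(readings)        # 147.62

# Median: sort into ascending order and take the (n+1)/2-th value
ordered = sorted(readings)
n = len(ordered)
if n % 2 == 1:
    median = ordered[(n + 1) // 2 - 1]      # 1-indexed (n+1)/2-th value
else:
    median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2   # even n

print(mean, median)                          # 147.62, 147.5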

Average deviation:

The deviation tells us about the departure of a given reading from the arithmetic mean of the data set:

di = Xi − X̄

Where
di = deviation of the ith reading
Xi = value of the ith reading
X̄ = arithmetic mean

The average deviation is defined as the sum of the absolute values of the deviations divided by the number of readings. This is also called the mean deviation:

D̄ = (|d1| + |d2| + .... + |dn|) / n
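
A short sketch of the calculation (hypothetical readings):

# Hypothetical readings
readings = [49.7, 50.1, 50.2, 49.6, 49.9]

mean = sum(readings) / len(readings)        # X̄ = 49.9

# di = Xi − X̄ for each reading
deviations = [x - mean for x in readings]

# Average (mean) deviation: sum of |di| divided by number of readings
avg_deviation = sum(abs(d) for d in deviations) / len(readings)
print(round(avg_deviation, 3))              # 0.2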
1.5 STANDARD & CALIBRATION

CALIBRATION

Calibration is the process of making an adjustment or marking a scale so that the readings of an instrument agree with the accepted & certified standard.
In other words, it is the procedure for determining the correct values of the measurand by comparison with the measured or standard ones.
Calibration offers a guarantee to the device or instrument that it is operating with the required accuracy, under stipulated environmental conditions.
The calibration procedure involves steps like visual inspection for various defects, installation according to the specifications, zero adjustment, etc.

Calibration is the procedure for determining the correct values of the measurand by comparison with standard ones. The device with which the comparison is made is called the standard instrument. The instrument which is unknown & is to be calibrated is called the test instrument. Thus, in calibration, the test instrument is compared with the standard instrument.

Types of calibration methodologies:

There are two methodologies for obtaining the comparison between test instrument &
standard instrument. These methodologies are

i) Direct comparisons
ii) Indirect comparisons

Direct comparisons:

In a direct comparison, a source or generator applies a known input to the meter under test. The ratio of the meter's indication to the known generator value gives the meter's error.
In such a case the meter is the test instrument, while the generator is the standard instrument.
The deviation of the meter from the standard value is compared with the allowable performance limit.
With the help of direct comparison, a generator or source can also be calibrated.
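
The bookkeeping of a direct comparison can be sketched as below; the applied values, the readings and the ±1 % limit are hypothetical:

# A standard generator applies known inputs; the test meter's
# indications are compared against them point by point.
applied   = [1.0, 2.0, 5.0, 10.0]        # known generator outputs (V)
indicated = [1.02, 2.01, 5.06, 9.95]     # test-meter readings (V)

limit_pct = 1.0   # assumed allowable performance limit (±1 %)

for a, i in zip(applied, indicated):
    error_pct = (i - a) / a * 100
    ok = abs(error_pct) <= limit_pct
    print(f"applied {a} V: error {error_pct:+.2f} %, within limit: {ok}")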
Indirect comparisons:

In an indirect comparison, the test instrument is compared with a standard instrument of the same type, i.e., if the test instrument is a meter, the standard instrument is also a meter; if the test instrument is a generator, the standard instrument is also a generator, & so on.
If the test instrument is a meter, then the same input is applied to the test meter as well as the standard meter.
In case of generator calibration, the outputs of the test generator as well as the standard are set to the same nominal levels.
Then a transfer meter is used which measures the outputs of both the standard and the test generator.

Standard

All instruments are calibrated at the time of manufacture against measurement standards.
A standard of measurement is a physical representation of a unit of measurement.
A standard means a known, accurate measure of a physical quantity.

The different standards of measurement are classified as:
i) International standards
ii) Primary standards
iii) Secondary standards
iv) Working standards

International standards

International standards are defined by international agreement. These standards, as mentioned above, are maintained at the International Bureau of Weights and Measures and are periodically evaluated and checked by absolute measurements in terms of the fundamental units of physics.
These international standards are not available to ordinary users for calibration purposes.
For improvements in the accuracy of absolute measurements, the international units were replaced by the absolute units in 1948.
Absolute units are more accurate than the international units.
Primary standards

These are highly accurate absolute standards, which can be used as ultimate reference standards. These primary standards are maintained at national standards laboratories in different countries.
These standards, representing fundamental units as well as some electrical and mechanical derived units, are calibrated independently by absolute measurements at each of the national laboratories.
These are not available for use outside the national laboratories.
The main function of the primary standards is the calibration and verification of secondary standards.

Secondary standards

As mentioned above, the primary standards are not available for use outside the national laboratories.
The various industries need some reference standards. So, to protect the highly accurate primary standards, the secondary standards are maintained, which are designed and constructed from the absolute standards.
These are used by the measurement and calibration laboratories in industries and are maintained by the particular industry to which they belong. Each industry has its own standards.

Working standards

These are the basic tools of a measurement laboratory and are used to check and calibrate the instruments used in the laboratory for accuracy and performance.

(Fig) Hierarchy of standards: International standards → National standard laboratories → Industries & secondary laboratories → Measurement laboratory → Process instrument