
MEASURING INSTRUMENTS
Introduction
Electronic measuring instruments are extremely accurate and can provide graphical
displays. These accurate measuring devices generally present the measurement data on
digital displays, and hence are often referred to as digital instruments (the electronic
element being implied).
Absolute Instruments
Instruments of this type give the value of the measurand in terms of an instrument constant
and the instrument's deflection. Such instruments do not require comparison with any other
standard. An example of this type of instrument is the tangent galvanometer, which gives the
value of the current to be measured in terms of the tangent of the angle of deflection
produced, the horizontal component of the earth's magnetic field, and the radius and number
of turns of the coil used. The Rayleigh current balance and the absolute electrometer are
other examples of absolute instruments. Absolute instruments are mostly used in standards
laboratories and similar institutions as standardizing instruments.
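
To illustrate how an absolute instrument yields the measurand from an instrument constant and its deflection, the short sketch below evaluates the standard tangent galvanometer relation I = (2 r B_H / (μ0 N)) tan θ in Python; the numerical values used are arbitrary example figures, not data from the text.

import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)

def tangent_galvanometer_current(theta_deg, b_h, radius_m, turns):
    # I = (2 * r * B_H / (mu0 * N)) * tan(theta): current in terms of the deflection
    # angle, the horizontal component of the earth's field, coil radius and turns
    return (2 * radius_m * b_h / (MU_0 * turns)) * math.tan(math.radians(theta_deg))

# Example: 45 degree deflection, B_H = 3e-5 T, 0.1 m radius, 50 turns (assumed values)
print(tangent_galvanometer_current(45.0, 3e-5, 0.1, 50))   # about 0.095 A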
Secondary Instruments
These instruments are constructed so that their deflection directly gives the magnitude of the
electrical quantity to be measured. They must be calibrated by comparison with either an
absolute instrument or another secondary instrument that has already been calibrated. These
are the instruments generally used in practice.
Secondary instruments are further classified as
• Indicating instruments
• Integrating instruments
• Recording instruments

Measurement Errors
In practice, it is impossible to measure the exact value of the measurand. There is always
some difference between the measured value and the absolute or true value of the unknown
quantity (measurand), which may be very small or may be large.
The difference between the measured value and the true or exact value of the unknown
quantity is known as the absolute error of the measurement.
If δA is the absolute error of the measurement, and Am and A are the measured and true
values of the unknown quantity, then δA may be expressed as

δA = Am − A
Sometimes δA is denoted by ε0. The relative error is the ratio of the absolute error to the true
value of the unknown quantity to be measured,

εr = δA / A

When the absolute error ε0 (= δA) is small, i.e., when the difference between the true value A
and the measured value Am of the unknown quantity is very small or negligible, the relative
error may also be expressed as

εr ≈ δA / Am

The relative error is generally expressed as a fraction (e.g., 5 parts in 1000) or as a
percentage value,

% error = εr × 100
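These definitions can be checked with a short numerical sketch; the variable names and example values below are purely illustrative.

true_value = 100.0        # A,  true value of the measurand
measured_value = 100.5    # Am, value indicated by the instrument

absolute_error = measured_value - true_value              # deltaA = Am - A
relative_error = absolute_error / true_value              # epsilon_r = deltaA / A
relative_error_approx = absolute_error / measured_value   # valid when deltaA is small
percentage_error = relative_error * 100                   # 0.5 %, i.e. 5 parts in 1000

print(absolute_error, relative_error, relative_error_approx, percentage_error)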
Sources of Systematic Error


The main sources of systematic error in the output of measuring instruments can be
summarized as
• Effect of environmental disturbances, often called modifying inputs
• Disturbance of the measured system by the act of measurement
• Changes in characteristics due to wear in instrument components over a period of time
• Resistance of connecting leads
These various sources of systematic error, and ways in which the magnitude of the errors can
be reduced, are discussed below.
a) Careful Instrument Design
Careful instrument design is the most useful weapon in the battle against environmental
inputs: the instrument is designed so that its sensitivity to environmental inputs is as low as
possible. For instance, in the design of strain gauges, the element should be constructed from
a material whose resistance has a very low temperature coefficient (i.e., whose resistance
varies very little with temperature).
b) Calibration
Calibration consists of comparing the output of the instrument or sensor under test against the
output of an instrument of known accuracy when the same input (the measured quantity) is
applied to both instruments.
This procedure is carried out for a range of inputs covering the whole measurement range of
the instrument or sensor. Calibration ensures that the measuring accuracy of all instruments
and sensors used in a measurement system is known over the whole measurement range,
provided that the calibrated instruments and sensors are used in environmental conditions that
are the same as those under which they were calibrated.
All instruments suffer drift in their characteristics, and the rate at which this happens depends
on many factors, such as the environmental conditions in which instruments are used and the
frequency of their use.
Error due to an instrument being out of calibration is never zero, even immediately after the
instrument has been calibrated, because there is always some inherent error in the reference
instrument that a working instrument is calibrated against during the calibration exercise.
Nevertheless, the error immediately after calibration is of low magnitude. The calibration
error then grows steadily with the drift in instrument characteristics until the time of the next
calibration. The maximum error that exists just before an instrument is recalibrated can
therefore be made smaller by increasing the frequency of recalibration so that the amount of
drift between calibrations is reduced.
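
The trade-off between drift and recalibration frequency can be sketched numerically if, purely for illustration, the drift is assumed to grow linearly with time at a known rate.

def max_error_before_recalibration(reference_error_pct, drift_rate_pct_per_day, interval_days):
    # Worst-case error just before recalibration: the residual error inherited from the
    # reference instrument plus the drift accumulated since the last calibration.
    return reference_error_pct + drift_rate_pct_per_day * interval_days

# Assumed figures: 0.05 % inherited from the reference, drift of 0.01 % per day
for interval in (30, 90, 180):
    print(interval, "days:", max_error_before_recalibration(0.05, 0.01, interval), "%")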
c) Method of Opposing Inputs
The method of opposing inputs compensates for the effect of an environmental input in a
measurement system by introducing an equal and opposite environmental input that cancels it
out.
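
A minimal numerical sketch of this idea, assuming the environmental input adds a term proportional to a temperature rise and that an opposing element contributes an equal and opposite term (the coefficient and values are illustrative only):

SENSITIVITY = 0.02   # assumed error contribution per degree of temperature rise

def uncompensated_reading(true_value, temp_rise):
    # the modifying (environmental) input corrupts the reading
    return true_value + SENSITIVITY * temp_rise

def compensated_reading(true_value, temp_rise):
    # an opposing input of equal magnitude and opposite sign cancels the disturbance
    return true_value + SENSITIVITY * temp_rise - SENSITIVITY * temp_rise

print(uncompensated_reading(10.0, 15.0))   # 10.3 -> corrupted by the environmental input
print(compensated_reading(10.0, 15.0))     # 10.0 -> disturbance cancelled out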
d) Use of High-Gain Feedback
When a measurement system is enclosed in a high-gain feedback loop, the overall output
depends almost entirely on the characteristics of the feedback element rather than on those of
the forward-path components, so environmental disturbances acting on those components
have very little effect on the reading.
e) Signal Filtering
One frequent problem in measurement systems is corruption of the output reading by periodic
noise, often at a frequency of 50 Hz caused by pickup through the close proximity of the
measurement system to apparatus or current-carrying cables operating on a mains supply.
Periodic noise corruption at higher frequencies is also often introduced by mechanical
oscillation or vibration within some component of a measurement system. The amplitude of
all such noise components can be substantially attenuated by the inclusion of filtering of an
appropriate form in the system.
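
As a sketch of the filtering described above, the snippet below removes 50 Hz mains pickup from a sampled measurement signal using a digital notch filter; the sampling rate, Q factor and test signal are assumptions made for the example.

import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1000.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
wanted = np.sin(2 * np.pi * 5 * t)             # 5 Hz measurement signal
noise = 0.5 * np.sin(2 * np.pi * 50 * t)       # 50 Hz mains pickup

b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)        # narrow notch centred on 50 Hz
cleaned = filtfilt(b, a, wanted + noise)       # zero-phase filtering of the corrupted signal

print(np.max(np.abs(cleaned - wanted)))        # residual error is greatly attenuated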
f) Manual Correction of Output Reading
In the case of errors that are due either to system disturbance during the act of measurement
or to environmental changes, a good measurement technician can substantially reduce errors
at the output of a measurement system by calculating the effect of such systematic errors and
making appropriate correction to the instrument readings. This is not necessarily an easy task
and requires all disturbances in the measurement system to be quantified. This procedure is
carried out automatically by intelligent instruments.
g) Intelligent Instruments
Intelligent instruments contain extra sensors that measure the value of environmental inputs
and automatically compensate the value of the output reading. They have the ability to deal
very effectively with systematic errors in measurement systems, and errors can be attenuated
to very low levels in many cases.
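
A much-simplified sketch of such automatic compensation, assuming a known linear temperature dependence and an auxiliary temperature sensor (the model and coefficients are illustrative assumptions):

REF_TEMP = 20.0     # temperature at which the instrument was calibrated (assumed)
TEMP_COEFF = 0.002  # assumed output error per degree above the reference temperature

def compensated_output(raw_reading, ambient_temp):
    # subtract the estimated environmental contribution measured by the extra sensor
    return raw_reading - TEMP_COEFF * (ambient_temp - REF_TEMP)

print(compensated_output(raw_reading=5.06, ambient_temp=50.0))   # -> 5.00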
Digital Multimeters
Digital meters have been developed to satisfy a need for higher measurement accuracies and
a faster speed of response to voltage changes than can be achieved with analogue
instruments. They are technically superior to analogue meters in almost every respect. The
binary nature of the output reading from a digital instrument can be applied readily to a
display that is in the form of discrete numerals. Where human operators are required to
measure and record signal voltage levels, this form of output makes an important contribution
to measurement reliability and accuracy, as the problem of analogue meter parallax error is
eliminated and the possibility of gross error through misreading the meter output is reduced
greatly. The availability in many instruments of a direct output in digital form is also very
useful in the rapidly expanding range of computer control applications. Quoted inaccuracy
values are between +/- 0.005% (measuring d.c. voltages) and +/- 2%. Digital meters also
have very high input impedance (10 MΩ compared with 1-20 kΩ for analogue meters),
which avoids the measurement system loading problem that occurs frequently when analogue
meters are used. Additional advantages of digital meters are their ability to measure signals of
frequency up to 1 MHz and the common inclusion of features such as automatic ranging,
which prevents overload and reverse polarity connection, etc.
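
The benefit of the high input impedance quoted above can be shown with a simple voltage-divider model of instrument loading; the source values below are assumed for illustration.

def indicated_voltage(v_source, r_source, r_meter):
    # voltage actually seen by a meter of input resistance r_meter connected to a
    # source of open-circuit voltage v_source and output resistance r_source
    return v_source * r_meter / (r_meter + r_source)

v, r_src = 10.0, 10e3                       # 10 V source with 10 kOhm output resistance
print(indicated_voltage(v, r_src, 10e6))    # digital meter, 10 MOhm  -> about 9.99 V
print(indicated_voltage(v, r_src, 20e3))    # analogue meter, 20 kOhm -> about 6.67 V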
Oscilloscopes
The oscilloscope is probably the most versatile and useful instrument available for signal
measurement.
The basic function of an oscilloscope is to draw a graph of an electrical signal. In the most
common arrangement, the y-axis (vertical) of the display represents the voltage of a measured
signal and the x-axis (horizontal) represents time. Thus, the basic output display is a graph of
the variation of the magnitude of the measured voltage with time.
The oscilloscope is able to measure a very wide range of both a.c. and d.c. voltage signals
and is used particularly as an item of test equipment for circuit fault finding. In addition to
measuring voltage levels, it can also measure other quantities, such as the frequency and
phase of a signal. It can also indicate the nature and magnitude of noise that may be
corrupting the measurement signal.
a) The Cathode Ray Oscilloscope
The cathode ray tube within an analogue oscilloscope is shown below. The cathode consists
of a barium and strontium oxide-coated, thin, heated filament from which a stream of
electrons is emitted.
The stream of electrons is focused onto a well-defined spot on a fluorescent screen by an
electrostatic focusing system that consists of a series of metal discs and cylinders charged at
various potentials. Adjustment of this focusing mechanism is provided by a focus control on
the front panel of an oscilloscope.

An intensity control varies the cathode heater current and therefore the rate of emission of
electrons, and thus adjusts the intensity of the display on the screen. These and other typical
controls are shown in the illustration of the front panel of a simple oscilloscope given in the
diagram below.
It should be noted that the layout shown is only one example. Every model of oscilloscope
has a different layout of control knobs, but the functions provided remain similar irrespective
of the layout of the controls with respect to each other. Applying potentials to two sets of
deflector plates mounted at right angles to one another within the tube provides for
deflection of the stream of electrons, such that the spot where the electrons are focused on the
screen is moved. The two sets of deflector plates are normally known as horizontal and
vertical deflection plates, according to the respective motion caused to the spot on the screen.
The magnitude of any signal applied to the deflector plates can be calculated by measuring
the deflection of the spot against a cross-wire graticule etched on the screen.
The common oscilloscope configuration with two channels can therefore display two separate
signals simultaneously.
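
In practice the deflection read from the graticule is converted to a voltage (and a measured period to a frequency) using the front-panel volts/div and time/div settings; the arithmetic is sketched below with assumed example settings.

def voltage_from_graticule(divisions, volts_per_div):
    # measured deflection in divisions multiplied by the volts/div setting
    return divisions * volts_per_div

def frequency_from_graticule(divisions_per_period, time_per_div):
    # one period of the signal spans divisions_per_period divisions on the time axis
    return 1.0 / (divisions_per_period * time_per_div)

print(voltage_from_graticule(4.2, 0.5))         # 4.2 div at 0.5 V/div   -> 2.1 V
print(frequency_from_graticule(5.0, 200e-6))    # 5 div at 200 us/div    -> 1000 Hz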
b) Digital Storage Oscilloscope
Digital storage oscilloscopes are the most basic form of digital oscilloscopes but even these
usually have the ability to perform extensive waveform processing and provide permanent
storage of measured signals.
The block diagram below shows typical components used in the digital storage oscilloscope.

The first component (as in an analogue oscilloscope) is an amplifier/attenuator unit that
allows adjustment of the magnitude of the input voltage signal to an appropriate level. This is
followed by an analogue-to-digital converter that samples the input signal at discrete points in
time. The sampled signal values are stored in the acquisition memory component before
passing into a microprocessor. This carries out signal processing functions, manages the front
panel control settings, and prepares the output display. Following this, the output signal is
stored in a display memory module before being output to the display itself.
The signal displayed is actually a sequence of individual dots rather than a continuous line as
displayed by an analogue oscilloscope. However, as the density of dots increases, the display
becomes closer and closer to a continuous line. The density of the dots is entirely dependent
on the sampling rate at which the analogue signal is digitized and the rate at which the
memory contents are read to reconstruct the original signal.
In addition to their ability to display the magnitude of voltage signals and other parameters,
such as signal phase and frequency, most digital oscilloscopes can also carry out analysis of
the measured waveform and compute signal parameters such as maximum and minimum
signal levels, peak-peak values, mean values, r.m.s. values, rise time, and fall time.
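
The kind of waveform analysis listed above can be sketched directly from an array of stored samples; the test waveform and sampling rate below are assumptions made for the example.

import numpy as np

fs = 100_000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
samples = 2.0 * np.sin(2 * np.pi * 1000 * t)     # 1 kHz, 2 V amplitude test signal

maximum = samples.max()
minimum = samples.min()
peak_to_peak = maximum - minimum
mean_value = samples.mean()
rms_value = np.sqrt(np.mean(samples ** 2))

print(maximum, minimum, peak_to_peak)            # about +2.0, -2.0, 4.0
print(mean_value, rms_value)                     # about 0.0 and 1.414 (= 2 / sqrt(2))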

Spectrum Analyser
A spectrum analyser provides a calibrated graphical display on its CRT, with
frequency on the horizontal axis and amplitude (voltage) on the vertical axis. Displayed as
vertical lines against these coordinates are sinusoidal components of which the input signal is
composed. The height represents the absolute magnitude, and the horizontal location
represents the frequency. These instruments provide a display of the frequency spectrum over
a given frequency band. Spectrum analysers use either a parallel filter bank or a swept
frequency technique. In a parallel filter bank analyser, the frequency range is covered by a
series of filters whose centre frequencies and bandwidths are selected so that they overlap
each other, as shown in the figure below.
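
Although the instrument itself uses a filter bank or a swept-frequency technique, the same amplitude-versus-frequency picture can be illustrated numerically with an FFT of a sampled signal; the signal content and sampling rate below are assumptions for the example.

import numpy as np

fs = 1024.0                                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
amplitudes = 2 * np.abs(spectrum) / len(signal)  # scale bins to sinusoid amplitudes

# the dominant components correspond to the vertical lines on the analyser display
for f, a in zip(freqs, amplitudes):
    if a > 0.1:
        print(f"{f:6.1f} Hz  amplitude {a:.2f}")   # 50 Hz at 1.00, 120 Hz at 0.30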
