
METROLOGY

Metrology is the science of measurement. It involves measurements of length, angles, and other quantities; it is also concerned with industrial inspection and its various techniques. Metrology also deals with establishing units of measurement, developing methods of measurement, analysing the accuracy of methods of measurement, establishing the uncertainty of measurement, and investigating the causes of measuring errors and eliminating them.

It is pertinent to mention here some of the classic statements by eminent scientists which highlight the importance of metrology.

"When you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge of it is of a meagre and unsatisfactory kind. It may be the beginning of knowledge, but you have scarcely in your thought advanced to the stage of science." Lord Kelvin (1824-1907)

"Measure everything that is measurable and make measurable what is not so." Galileo (1564-1642)

The term 'legal metrology' applies to any application of metrology that is subjected to national laws or regulations. Legal metrology ensures the conservation of national standards and guarantees their accuracy in comparison with the international standards, thereby imparting proper accuracy to the secondary standards of the country. Some of the applications of legal metrology are industrial measurement, commercial transactions, and public health and human safety aspects. A group of techniques employed for measuring small variations that are of a continuous nature is termed 'dynamic metrology'. These techniques find application in recording continuous measurements over a surface and have obvious advantages over individual measurements of a distinctive character.

The metrology in which part measurement is substituted by process measurement is known as 'deterministic metrology'. An example of deterministic metrology is a new technique known as 3D error compensation by computer numerical control (CNC) systems and expert systems, leading to fully adaptive control. This technology is adopted in high-precision manufacturing machinery and control systems to accomplish micro- and nanotechnology accuracies.

INSPECTION

Inspection is defined as a procedure in which a part or product characteristic, such as a dimension, is examined to determine whether it conforms to the design specification. Industrial inspection assumed importance because of mass production, which involved the interchangeability of parts. Dimensions of the components must be well within the permissible limits to obtain assemblies with the predetermined fit. Many inspection methods rely on measurement techniques, that is, measuring the actual dimension of a part, while others employ the gauging method. The gauging method determines whether a particular dimension of interest lies within the permissible limits or not. A part found to be within the permissible limits is accepted; otherwise, it is rejected. Industrial inspection is a very important aspect of quality control.
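A minimal sketch of the gauging decision described above is given below; the dimension and limit values are hypothetical, chosen only for illustration.

```python
# Minimal sketch of the gauging decision: a part is accepted only if the
# dimension of interest lies within the permissible limits.
# The dimension and limits below are hypothetical.

def gauge(dimension_mm: float, lower_mm: float, upper_mm: float) -> bool:
    """Return True (accept) if the dimension lies within the limits."""
    return lower_mm <= dimension_mm <= upper_mm

# Example: a shaft specified as 25.00 +/- 0.02 mm
print(gauge(25.015, 24.98, 25.02))  # True  -> accept
print(gauge(25.030, 24.98, 25.02))  # False -> reject
```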

ACCURACY AND PRECISION

Accuracy is the degree of agreement of the measured dimension with its true magnitude. It
can also be defined as the maximum amount by which the result differs from the true value
or as the nearness of the measured value to its true value, often expressed as a percentage.
Precision is the degree of repetitiveness of the measuring process. It is the degree of agreement of the repeated measurements of a quantity made using the same method, under similar conditions. It is essential to know the difference between precision and accuracy. Accuracy gives information regarding how far the measured value is from the true value, whereas precision indicates the quality of the measurement, without giving any assurance that the measurement is correct.
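The distinction can be shown with a short numerical sketch: one set of readings is tightly clustered but offset from the true value (precise but inaccurate), while the other scatters widely around the correct mean (accurate but imprecise). All values are invented for illustration.

```python
# Hypothetical sketch of the accuracy/precision distinction for repeated
# readings of a length whose true value is 25.00 mm.
from statistics import mean, pstdev

true_value = 25.00
precise_not_accurate = [25.11, 25.12, 25.10, 25.11, 25.12]  # tight spread, offset mean
accurate_not_precise = [24.90, 25.12, 24.97, 25.08, 24.93]  # mean on target, wide spread

for name, readings in [("precise but inaccurate", precise_not_accurate),
                       ("accurate but imprecise", accurate_not_precise)]:
    print(f"{name}: mean error = {mean(readings) - true_value:+.3f} mm, "
          f"spread (std dev) = {pstdev(readings):.3f} mm")
```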
The difference between the true value and the mean value of the set of readings on the same component is termed an error. Error can also be defined as the difference between the indicated value and the true value of the quantity measured.

E = V_{m} - V_{t}

where E is the error, V_{m} the measured value, and V_{t} the true value. The value of E is also known as the absolute error. The error expressed as a fraction of the true value is known as the relative error; it is the ratio of the error to the true value of the quantity being measured. The accuracy of an instrument can also be expressed as a % error. If an instrument measures V_{m} instead of V_{t}, then

\%error = ((V_{m} - V_{t})/V_{t}) * 100

Accuracy is always assessed in terms of error. The instrument is more accurate if the magnitude of the error is low.
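A minimal sketch of these definitions in Python, using hypothetical values:

```python
# Minimal sketch of the error definitions above, with hypothetical values.

def absolute_error(v_m: float, v_t: float) -> float:
    """E = Vm - Vt (absolute error)."""
    return v_m - v_t

def percent_error(v_m: float, v_t: float) -> float:
    """% error = (Vm - Vt) / Vt * 100."""
    return (v_m - v_t) / v_t * 100

v_t, v_m = 50.00, 50.15  # true and measured values, e.g. in mm
print(f"absolute error = {absolute_error(v_m, v_t):.2f}")  # 0.15
print(f"% error = {percent_error(v_m, v_t):.2f}%")         # 0.30%
```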

The ratio of the change in instrument indication to the change in the quantity being measured is termed sensitivity. In other words, it is the ability of the measuring instrument to detect small variations in the quantity being measured. When successive readings of the measured quantity obtained from the measuring instrument are the same all the time, the equipment is said to be consistent. A highly accurate instrument possesses both sensitivity and consistency. However, a highly sensitive instrument need not be consistent, and the degree of consistency determines the accuracy of the instrument. An instrument that is both consistent and sensitive need not be accurate, because its scale may have been calibrated with a wrong standard. Range is defined as the difference between the lower and higher values that an instrument is able to measure.
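As a small illustration of the definition of sensitivity, the following sketch computes the ratio of the change in indication to the change in the measured quantity; the figures are hypothetical.

```python
# Sketch of sensitivity as defined above: the ratio of the change in the
# instrument's indication to the change in the quantity being measured.
# The figures below are hypothetical.

def sensitivity(delta_indication: float, delta_quantity: float) -> float:
    return delta_indication / delta_quantity

# A dial indicator whose pointer moves 2.0 divisions per 1.0 um change:
print(sensitivity(2.0, 1.0))  # 2.0 scale divisions per micrometre
```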

GENERAL MEASUREMENT CONCEPTS

Fig. 2 Elements of measurement

The three basic elements of measurement (shown schematically in Fig. 2), which are of significance, are the following:
1. Measurand, a physical quantity such as length, weight, or angle to be measured
2. Comparator, to compare the measurand (physical quantity) with a known standard (reference) for evaluation
3. Reference, the physical quantity or property to which quantitative comparisons are to be made, which is internationally accepted

All three elements are needed to explain direct measurement using a calibrated fixed reference. In order to determine the length (a physical quantity called the measurand) of a component, the measurement is carried out by comparing it with a steel scale (a known standard).

Calibration of Measuring Instruments

It is essential that the equipment/instrument used to measure a given physical quantity is validated. The process of validating the measurements to ascertain whether the given physical quantity conforms to the original/national standard of measurement is known as traceability of the standard. Calibration is a means of achieving traceability. Calibration is the procedure used to establish a relationship between the values of the quantities indicated by the measuring instrument and the corresponding values realized by standards under specified conditions. If the instrument has an arbitrary scale, the indication has to be multiplied by a factor to obtain the nominal value of the quantity measured; this factor is referred to as the scale factor. If the values of the variable involved remain constant (not time dependent) while calibrating a given instrument, the calibration is known as static calibration, whereas if the value is time dependent or time-based information is required, it is called dynamic calibration. Thus, traceability could also be termed the minimum uncertainty that exists between a measurand and a standard.
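The following is a minimal sketch of static calibration, assuming a simple linear instrument: known standard values are presented to the instrument, its indications are recorded, and a least-squares line maps indication to true value, the slope playing the role of the scale factor described above. All data are hypothetical.

```python
# Minimal sketch of static calibration for an assumed linear instrument.
# Data are hypothetical. Requires Python 3.10+ for statistics.linear_regression.
from statistics import linear_regression

standard_values = [10.0, 20.0, 30.0, 40.0, 50.0]  # values realized by standards
indications = [10.2, 20.3, 30.5, 40.6, 50.8]      # corresponding instrument readings

# Fit: standard_value ~= slope * indication + intercept
slope, intercept = linear_regression(indications, standard_values)

reading = 35.0  # a new indication from the instrument
print(f"calibrated value for a reading of {reading}: {slope * reading + intercept:.2f}")
```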

ERRORS IN MEASUREMENTS
In order to analyse the measurement data, we need to understand the nature of errors
associated with the measurements. Therefore, it is imperative to investigate the causes or
sources of these errors in measurement systems and find out ways for their subsequent
elimination. Two broad categories of errors in measurement have been identified:
systematic and random errors.
1. Systematic or Controllable Errors
A systematic error is a type of error that deviates by a fixed amount from the true value of measurement. Examples of such errors include the measurement of length using an imperfectly calibrated metre scale and the measurement of current with inaccurately calibrated ammeters. The following are the reasons for their occurrence:
1. Calibration errors
2. Ambient conditions
3. Deformation of workpiece
4. Avoidable errors

Calibration Errors

A small amount of variation from the nominal value will be present in the actual length
standards, as in slip gauges and engraved scales. Inertia of the instrument and its hysteresis
effects do not allow the instrument to translate with true fidelity. Hysteresis is defined as the
difference between the indications of the measuring instrument when the value of the
quantity is measured in both the ascending and descending orders. These variations become significant when higher orders of accuracy are required. Calibration curves are used to minimize such variations. Inadequate amplification of the instrument also affects the accuracy.

Ambient Conditions

It is essential to maintain the ambient conditions at the internationally accepted values of standard temperature (20 °C) and pressure (760 mmHg). A small difference of 10 mmHg can cause errors in the measured size of the component. The most significant ambient condition affecting the accuracy of measurement is temperature. An increase in temperature of 1 °C results in an increase in the length of C25 steel by 0.3 µm, and this is substantial when precision measurement is required. In order to obtain error-free results, a correction factor for temperature has to be provided. Therefore, in the case of measurements using strain gauges, temperature compensation is provided to obtain accurate results.
Relative humidity, thermal gradients, vibrations, and CO2 content of the air affect the
refractive index of the atmosphere. Thermal expansion occurs due to heat radiation from
different sources such as lights, sunlight, and body temperature of operators.
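As an illustration of such a temperature correction factor, the following sketch refers a measured length back to the standard temperature of 20 °C, assuming simple linear thermal expansion; the expansion coefficient used is a typical handbook value for steel, an assumption rather than a figure from the text above.

```python
# Sketch of a temperature correction for a length measurement, assuming
# simple linear thermal expansion. The expansion coefficient below is a
# typical handbook value for steel (an assumption, not from the text).

ALPHA_STEEL = 11.5e-6  # assumed coefficient of linear expansion, per deg C
T_STANDARD = 20.0      # internationally accepted standard temperature, deg C

def correct_to_standard_temp(measured_length_mm: float, temp_c: float) -> float:
    """Refer a length measured at temp_c back to the 20 deg C standard."""
    return measured_length_mm / (1 + ALPHA_STEEL * (temp_c - T_STANDARD))

# A 500 mm steel part measured at 23 deg C, referred back to 20 deg C:
print(f"{correct_to_standard_temp(500.0, 23.0):.4f} mm")  # about 499.9828 mm
```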

Deformation of Workpiece

Any elastic body, when subjected to a load, undergoes elastic deformation. The stylus pressure applied during measurement affects the accuracy of measurement. Due to a definite stylus pressure, elastic deformation of the workpiece and deflection of the workpiece may occur. The magnitude of deformation depends on the applied load, the area of contact, and the mechanical properties of the material of the given workpiece. Therefore, during comparative measurement, one has to ensure that the applied measuring loads are the same.

Avoidable Errors
These include the following:
o Datum errors: Datum error is the difference between the true value of
the quantity being measured and the indicated value, with due regard to
the sign of each. When the instrument is used under specified conditions
and a physical quantity is presented to it for the purpose of verifying the
setting, the indication error is referred to as the datum error.
o Reading errors: These errors occur due to the mistakes committed by the
observer while noting down the values of the quantity being measured.
Digital readout devices, which are increasingly being used for display
purposes, eliminate or minimize most of the reading errors usually made
by the observer.
o Errors due to parallax effect: Parallax errors occur when the sight is not
perpendicular to the instrument scale or the observer reads the
instrument from an angle. Instruments having a scale and a pointer are
normally associated with this type of error. The presence of a mirror
behind the pointer or indicator virtually eliminates the occurrence of this
type of error.

o Effect of misalignment: These errors occur due to the inherent inaccuracies present in the measuring instruments. They may also be due to improper use, handling, or selection of the instrument. Wear on the micrometre anvils, or anvil faces not being perpendicular to the axis, results in misalignment, leading to inaccurate measurements. If the alignment is not proper, sine and cosine errors may also contribute to the inaccuracies of the measurement.

o Zero errors: When no measurement is being carried out, the reading on the scale of the instrument should be zero. A zero error is defined as the value indicated by the measuring instrument when the initial value of the physical quantity should actually have been zero. For example, a voltmeter might read 1 V even when it is not under any electromagnetic influence. This voltmeter then indicates 1 V more than the true value for all subsequent measurements. This error is constant for all the values measured using the same instrument. A constant error affects all measurements in a measuring process by the same amount or by an amount proportional to the magnitude of the quantity being measured.
2. Random Errors
Random errors provide a measure of random deviations when measurements of a physical quantity are carried out repeatedly. When a series of repeated measurements is made on a component under similar conditions, the results of the measurements vary. Specific causes for these variations cannot be determined, since they are unpredictable and uncontrollable by the experimenter and are random in nature. They are of variable magnitude and may be either positive or negative. When these repeated measurements are plotted, they follow a normal or Gaussian distribution. Random errors can be statistically evaluated, and their mean value and standard deviation can be determined. These errors scatter around a mean value. If the measurements made using an instrument are denoted by v_{1}, v_{2}, v_{3}, ..., v_{n}, then the arithmetic mean is given as

\bar{v} = (v_{1} + v_{2} + v_{3} + ... + v_{n})/n

The standard deviation s is a measure of the dispersion of a set of readings. It is determined by taking the root mean square deviation of the readings from their mean, which is given by the following equation:

s = \sqrt{\sum_{i=1}^{n} (v_{i} - \bar{v})^{2} / n}
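These two statistics can be computed directly; the following sketch uses Python's standard library on a hypothetical set of repeated readings, where pstdev() implements the root-mean-square deviation from the mean (division by n) used in the equation above.

```python
# Sketch of the statistical treatment of random errors: repeated readings are
# averaged, and their dispersion is summarized by the standard deviation.
# The readings are hypothetical.
from statistics import mean, pstdev

readings = [24.98, 25.01, 25.00, 24.99, 25.02]  # repeated measurements, mm

v_bar = mean(readings)  # arithmetic mean
s = pstdev(readings)    # standard deviation (RMS deviation from the mean)

print(f"mean = {v_bar:.3f} mm, standard deviation = {s:.4f} mm")
```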

Random errors can be minimized by calculating the average of a large number of observations. The following are the likely sources of random errors:
1. Presence of transient fluctuations in friction in the measuring instrument
2. Play in the linkages of the measuring instruments
3. Error in the operator's judgement in reading the fractional part of engraved scale divisions
4. Operator's inability to note the readings because of fluctuations during measurement
5. Positional errors associated with the measured object and standards, arising due to small variations in setting

METHODS OF MEASUREMENT
Measurements are performed to determine the magnitude of the value and the unit of the quantity under consideration. For instance, if the length of a rod is 3 m, the number 3 indicates the magnitude and the unit of measurement is the metre. The choice of the method of measurement depends on the required accuracy and the amount of permissible error. Irrespective of the method used, the primary objective is to minimize the uncertainty associated with measurement. The common methods employed for making measurements are as follows:
• Direct method: In this method, the quantity to be measured is directly compared with the primary or secondary standard. Scales, vernier callipers, micrometers, bevel protractors, etc., are used in the direct method. This method is widely employed in the production field.
• Indirect method: In this method, the value of a quantity is obtained by measuring other quantities that are functionally related to the required value. Measurement of those quantities is carried out directly, and the required value is then determined by using a mathematical relationship. Some examples of indirect measurement are angle measurement using a sine bar (a sketch follows this list), measurement of the strain induced in a bar due to an applied force, determination of the effective diameter of a screw thread, etc.
• Fundamental or absolute method: In this case, the measurement is based on the measurements of the base quantities used to define the quantity. The quantity under consideration is directly measured and is then linked with the definition of that quantity.
• Comparative method: In this method, as the name suggests, the quantity to be measured is compared with a known value of the same quantity or of any other quantity practically related to it. The quantity is compared with a master gauge, and only the deviations from the master gauge are recorded after comparison. The most common examples are comparators, dial indicators, etc.
• Coincidence method: This is a differential method of measurement wherein a very minute difference between the quantity to be measured and the reference is determined by careful observation of the coincidence of certain lines and signals. Measurements on a vernier calliper and a micrometer are examples of this method.
• Deflection method: This method involves the indication of the value of the quantity to be measured directly by the deflection of a pointer on a calibrated scale. Pressure measurement is an example of this method.
• Complementary method: The value of the quantity to be measured is combined with a known value of the same quantity. The combination is so adjusted that the sum of these two values is equal to the predetermined comparison value. An example of this method is the determination of the volume of a solid by liquid displacement.
• Null measurement method: In this method, the difference between the value of the quantity to be measured and the known value of the same quantity with which it is compared is brought to zero.
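As an illustration of the indirect method referred to in the list above, the following sketch computes an angle from a sine bar setup: the slip-gauge height h and the roller centre distance L are known or measured directly, and the angle follows from theta = asin(h/L). The numbers are hypothetical.

```python
# Sketch of the indirect method using a sine bar: the angle is not read
# directly but computed from the directly measured slip-gauge height h and
# the known roller centre distance L, via theta = asin(h / L).
# The numbers are hypothetical.
import math

L = 200.0  # centre distance between the sine bar rollers, mm
h = 51.76  # height of the slip gauge stack, mm (measured directly)

theta_deg = math.degrees(math.asin(h / L))
print(f"angle of inclination = {theta_deg:.2f} degrees")  # about 15 degrees
```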

Substitution method: It is a direct comparison method. This method involves the replacement of the value of the quantity to be measured with a known value of the same quantity, so selected that the effects produced in the indicating device by these two values are the same. The Borda method of determining mass is an example of this method.

Contact method: In this method, the surface to be measured is touched by the sensor or measuring tip of the instrument. Care needs to be taken to maintain a constant contact pressure in order to avoid errors due to excess pressure. Examples of this method include measurements using a micrometer, vernier calliper, and dial indicator.

Contactless method: As the name indicates, there is no direct contact with the surface to be measured. Examples of this method include the use of optical instruments such as the tool maker's microscope and the profile projector.

Composite method: The actual contour of a component to be checked is compared with its maximum and minimum tolerance limits. Cumulative errors of the interconnected elements of the component, which are controlled through a combined tolerance, can be checked by this method. This method is very reliable for ensuring interchangeability and is usually effected through the use of composite GO gauges. The use of a GO screw plug gauge to check the thread of a nut is an example of this method.
