
Manufacturing Technology

(ME 361)- Lecture 20


Engineering Metrology
Instructor: Shantanu Bhattacharya
Table of Contents
Introduction to Metrology
➢ Need for Inspection
Objectives of Metrology and Measurements
➢ Accuracy and Precision
➢ Sensitivity and Consistency
General Measurement Concepts
Calibration of Measuring Instruments
Errors
Methods of Measurement
Limits, Fits and Tolerances
Measurement Systems – Measuring Instruments
➢ Dimensional measurements (contact type)
➢ Optical measurement techniques (non-contact type)
➢ Surface finish measurement
➢ Hardness and Toughness Tests
➢ Advanced Measuring Machines
Micro and Nano Metrology
➢ SEM (Scanning Electron Microscope)
➢ TEM (Transmission Electron Microscope)
➢ AFM (Atomic Force Microscope)
Introduction

• Metrology is derived from the Greek word 'Metrologia', which means measure. It literally means the science of measurements.
• Measurement is an act of assigning an accurate and precise value to a physical variable. The physical variable
then gets transformed into a measured variable.
• Measurements provide a basis for judgements about process information, quality assurance and process
control.
• Meaningful measurements require common measurement standards and must be performed using them. The
common methods of measurement are based on the development of international specification standards. These
provide appropriate definitions of parameters and protocols that enable standard measurements to be made and
establish a common basis for comparing measured values.
• In addition, metrology is also concerned with the reproduction, conservation, and transfer of units of measurement and their standards.
• Its role is ever increasing and encompasses different fields such as communications, energy, medical sciences,
food sciences, environment, trade, transportation and military applications.
• A high product quality along with effectiveness and productivity is a must, in order to survive economically. In
order to achieve high product quality, metrology has to be firmly integrated into the production activity. Hence,
metrology forms an inseparable key element in manufacturing.
Need for Inspection:
• F.W. Taylor, who has been acknowledged as the father of scientific management of manufacturing industry, created
the modern philosophy of production and the philosophy of production metrology and inspection. He decomposed
a job into multiple tasks, thereby isolating the tasks involved in inspection from the production tasks. This
culminated in the creation of a separate quality assurance department in manufacturing industries, which is
assigned the task of inspection and quality control.

• Inspection is defined as a procedure in which a part or product characteristic, such as a dimension, is examined to
determine whether it conforms to the design specification.

• Industrial inspection assumed importance because of mass production, which involved interchangeability of parts.
The various components that come from different locations or industries are then assembled at another place. This
necessitates that parts must be so assembled that satisfactory mating of any pair chosen at random is possible.

• Inspection mainly relies on measurement and gauging techniques. Measurement techniques involve measuring the actual dimension of a part. Gauging methods do not provide any information about the actual dimension of a part; they only determine whether the dimension of interest lies within the permissible limits or not. Gauging techniques are faster than measurement techniques.
• Inspection essentially encompasses the following:
(i) Ascertain that the part, material, or component conforms to the established or desired standard.
(ii) Accomplish interchangeability of manufacture.
(iii) Provide means of finding out inadequacies in manufacture. The results of inspection are recorded and
reported to the manufacturing department for further action to ensure production of acceptable parts and
reduction in scrap.
(iv) Purchase good-quality raw materials, tools, and equipment that govern the quality of the finished
products.
(v) Coordinate the functions of quality control, production, purchasing, and other departments of the
organizations.
(vi) Take the decision to perform rework on defective parts, that is, to assess the possibility of making some
of these parts acceptable after minor repairs
Objectives of Metrology and Measurements

• To ascertain that the newly developed components are comprehensively evaluated and designed within the process, and
that facilities possessing measuring capabilities are available in the plant.
• To ensure uniformity of measurements.
• To carry out process capability studies to achieve better component tolerances.
• To assess the adequacy of measuring instrument capabilities to carry out their respective measurements.
• To ensure cost-effective inspection and optimal use of available facilities.
• To adopt quality control techniques to minimize scrap rate and rework.
• To establish inspection procedures from the design stage itself so that the measuring methods are standardized.
• To calibrate measuring instruments regularly in order to maintain accuracy in measurements.
• To resolve the measurement problems that might arise in the shop floor.
• To design gauges and special fixtures required to carry out inspection.
• To investigate and eliminate different sources of measuring errors.
Accuracy and Precision:

• Accuracy is the degree of agreement of the measured dimension with its true magnitude. True magnitude can be
defined as the mean of the infinite number of measured values when the average deviation due to the various
contributing factors tends to zero. In practice, realization of the true value is not possible due to uncertainties of
the measuring process and hence cannot be determined experimentally.

• Precision is the degree of agreement of the repeated measurements of a quantity made by using the same method, under similar conditions. Repeatability is random in nature and, by itself, does not assure accuracy, though it is a desirable characteristic. If an instrument is not precise, it gives different results for the same dimension on repeated readings. In most measurements, precision assumes more significance than accuracy. Precision is associated with a process or a set of measurements and indicates the quality of the measurements; a short numerical illustration follows the figure below.

Visualisation of Accuracy and Precision
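The distinction above can be illustrated numerically. The sketch below is hypothetical (the true value and readings are assumed, not lecture data): it treats the offset of the mean of repeated readings from the true value as an accuracy indicator and their spread as a precision indicator.

```python
# Illustrative sketch: accuracy vs precision for repeated readings of one dimension.
# The "true" value and the readings are assumed example numbers, not lecture data.
from statistics import mean, stdev

true_value = 25.000                                   # mm, assumed reference dimension
readings = [25.012, 25.009, 25.011, 25.010, 25.013]   # mm, repeated measurements

bias = mean(readings) - true_value    # systematic offset -> poor accuracy if large
spread = stdev(readings)              # repeatability -> poor precision if large

print(f"bias (accuracy indicator): {bias:+.3f} mm")
print(f"standard deviation (precision indicator): {spread:.3f} mm")
# A small spread with a large bias means precise but not accurate,
# e.g. an instrument calibrated against a wrong standard.
```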


Sensitivity and Consistency:
• Two terms are associated with accuracy, especially when one strives for higher
accuracy in measuring equipment: sensitivity and consistency.
• The sense factor affects the accuracy of measurements, be it the sense of feel or of sight. The ratio of the change of instrument indication to the change of the quantity being measured is termed sensitivity (a small numerical sketch follows this list).
• The permitted degree of sensitivity determines the accuracy of the instrument. An
instrument cannot be more accurate than the permitted degree of sensitivity.
• When successive readings of the measured quantity obtained from the measuring instrument are the same all the time, the equipment is said to be consistent. A highly accurate instrument possesses both sensitivity and consistency.
• An instrument that is both consistent and sensitive need not be accurate, because its scale may have been calibrated with a wrong standard. Errors of measurement will be constant in such instruments, which can be taken care of by calibration.
• Range is defined as the difference between the lower and higher values that an
instrument is able to measure. It is important to note that as the magnification
increases, the range of measurement decreases and, at the same time, sensitivity
increases.
• Demanding high accuracy unless it is absolutely required is not viable, as it increases the cost of the measuring equipment and hence the inspection cost. It can be observed from the figure that as the requirement of accuracy increases, the cost increases exponentially.

Fig. Relationship of accuracy with cost
General Measurement Concepts
The primary objective of measurement in industrial inspection is to determine the qualities of a component such as
permissible tolerance limits, form, surface finish, size, and flatness and check for conformity with the standard quality
specifications. In order to realize this, the three basic elements of measurements, which are of significance are the
following:

1. Measurand: A physical quantity, such as length, weight, or angle, to be measured.
2. Comparator: To compare the measurand (physical quantity) with a known standard (reference) for evaluation.
3. Reference: The physical quantity or property, internationally accepted, to which quantitative comparisons are made.

Fig. Major Elements during Measurement
Calibration of Measuring instruments:
• The process of validation of the measurements to ascertain whether the given physical quantity conforms to the
original/National standard of measurement is known as traceability of the standard.

• Calibration is a means of achieving traceability. It is the procedure used to establish a relationship between the
values of the quantities indicated by the measuring instrument and the corresponding values realized by standards
under specified conditions.

• If the instrument has an arbitrary scale, the indication must be multiplied by a factor to obtain the nominal value of
the quantity measured. This is known as scale factor.

• If the values of the variable involved remain constant (not time dependent) while calibrating a given instrument, this type of calibration is known as static calibration, whereas if the values are time dependent or time-based information is required, it is called dynamic calibration (a short static-calibration sketch follows the list below).

• General calibration requirements of the measuring system are as follows:


i. Accepting calibration of the new system.
ii. Ensuring traceability of standards for the unit of measurement under consideration.
iii. Carrying out calibration of measurement periodically, depending on usage or when it is used after storage.
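To make the ideas of a scale factor and static calibration concrete, the following sketch fits a straight line between assumed standard values and the corresponding instrument indications; the data and variable names are illustrative, not part of the lecture.

```python
# Static calibration sketch: relate instrument indications to standard (reference) values.
# The standard values and indications below are assumed example data.
import numpy as np

standard = np.array([0.0, 5.0, 10.0, 15.0, 20.0])    # mm, values realized by the standard
indicated = np.array([0.1, 5.2, 10.2, 15.3, 20.4])   # mm, values read from the instrument

# Least-squares straight line: indicated = slope * standard + offset
slope, offset = np.polyfit(standard, indicated, 1)
scale_factor = 1.0 / slope   # multiply a zero-corrected indication by this to get the nominal value

print(f"slope = {slope:.4f}, zero offset = {offset:.3f} mm, scale factor = {scale_factor:.4f}")

# Applying the calibration relationship to a new reading:
reading = 12.3
corrected = (reading - offset) * scale_factor
print(f"corrected value ≈ {corrected:.3f} mm")
```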
Calibration of Measuring instruments:

• Calibration is achieved by comparing the measuring instrument with the following:


i. A primary standard
ii. A known source of input
iii. A secondary standard that possesses a higher accuracy than the instrument to be calibrated.

• During calibration, the dimensions and tolerances of the gauge or the accuracy of the measuring instrument are checked by comparing it with a standard instrument or gauge of known accuracy.

• The limiting factor of the calibration process is the repeatability, because it is the only characteristic error that
cannot be calibrated out of the measuring system and hence the overall measurement accuracy is curtailed.

• Conditions that exist during calibration of the instrument should be similar to the conditions under which actual
measurements are made.

• The standard that is used for calibration purpose should normally be one order of magnitude more accurate than the
instrument to be calibrated.

• When it is intended to achieve greater accuracy, it becomes imperative to know all the sources of errors so that they
can be evaluated and controlled.
Errors in Measurement:
While performing physical measurements, it is important to note that the measurements obtained are not completely accurate, as they are
associated with uncertainty. Thus, in order to analyse the measurement data, we need to understand the nature of errors associated with
the measurements. Two broad categories of errors in measurement have been identified: Systematic and Random errors.

Systematic or Controllable Errors

• A systematic error is a type of error that deviates by a fixed amount from the true value of measurement. These types of errors are controllable in both their magnitude and their direction, and they can be assessed and minimized if efforts are made to analyse them.

• In order to assess them, it is important to know all the sources of such errors, and if their algebraic sum is significant with respect to
manufacturing tolerance, necessary allowance should be provided to the measured size of the workpiece.

• It is difficult to identify systematic errors, and statistical analysis cannot be performed. In addition, systematic errors cannot be
eliminated by taking a large number of readings and then averaging them out. These errors are reproducible inaccuracies that are
consistently in the same direction.

• The reasons for their occurrence are:

i. Calibration Errors: A small amount of variation from the nominal value will be present in the actual length standards, as in slip
gauges and engraved scales. Inertia of the instrument and its hysteresis effect do not allow the instrument to translate with true
fidelity. Hysteresis is defined as the difference between the indications of the measuring instrument when the value of the quantity
is measured in both the ascending and descending orders.
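Hysteresis, as defined above, can be quantified by comparing ascending and descending readings taken at the same nominal points; the readings in the sketch below are assumed examples.

```python
# Hysteresis sketch: difference between ascending and descending indications
# at the same nominal input points (assumed example readings, in mm).
nominal    = [10.0, 20.0, 30.0]
ascending  = [10.02, 20.03, 30.05]   # readings taken while increasing the input
descending = [10.06, 20.08, 30.07]   # readings taken while decreasing the input

hysteresis = [abs(down - up) for up, down in zip(ascending, descending)]
print("hysteresis at each point (mm):", hysteresis)
print("maximum hysteresis error (mm):", max(hysteresis))
```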
Errors in Measurement:
ii. Ambient Conditions: It is essential to maintain the ambient conditions at the internationally accepted values of standard temperature (20 °C) and pressure (760 mm Hg). The most significant ambient condition affecting the accuracy of measurement is temperature; in order to obtain error-free results, a correction factor for temperature has to be applied (a simple correction sketch follows this item). Relative humidity, thermal gradients, vibrations, and the CO2 content of the air affect the refractive index of the atmosphere.
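As a minimal sketch of such a temperature correction (assuming, for simplicity, that only the workpiece expands and using an assumed expansion coefficient for steel), a length measured at temperature T can be referred back to 20 °C as follows:

```python
# Temperature-correction sketch: refer a measured length back to the 20 °C reference.
# Simplified model: only the workpiece is assumed to expand; alpha is an assumed
# coefficient of linear expansion for steel (per °C).
def length_at_20C(measured_length_mm: float, temp_C: float, alpha_per_C: float = 11.5e-6) -> float:
    """Correct a measured length to the standard reference temperature of 20 °C."""
    return measured_length_mm / (1.0 + alpha_per_C * (temp_C - 20.0))

# A nominally 100 mm steel part measured as 100.008 mm at 27 °C:
print(f"{length_at_20C(100.008, 27.0):.4f} mm")   # ≈ 99.9999 mm, i.e. essentially 100.000 mm at 20 °C
```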

iii. Deformation of Workpiece: Any elastic body, when subjected to a load, undergoes elastic deformation. The magnitude of deformation depends on the applied load, the area of contact, and the mechanical properties of the material of the given workpiece. Therefore, during comparative measurements, one has to ensure that the applied measuring loads are the same.

Fig. Elastic deformation due to stylus pressure

iv. Avoidable Errors: These include the following:
a) Datum errors: Datum error is the difference between the true value of the quantity being measured and the indicated value.
b) Reading errors: These errors occur due to mistakes committed by the observer while noting down the values of the quantity being measured. Digital readout devices eliminate or minimize these types of errors, which are usually made by the observer.
c) Error due to parallax effect: Parallax errors occur when the sight is not perpendicular to the instrument scale or the observer reads
the instrument from an angle. Instruments having a scale and pointer are normally associated with this type of error.
d) Effect of misalignment: These occur due to the inherent inaccuracies present in the measuring
instruments. These errors may also be due to improper use, handling, or selection of the instrument.
e) Zero errors: When no measurement is carried out, the reading on the scale of the instrument should be zero. A zero error exists when the initial value of a physical quantity indicated by the measuring instrument is non-zero when it should actually be zero. This value is constant for all values measured using the same instrument.

Therefore, in order to find out and eliminate any systematic error, it is required to calibrate the measuring instrument before
conducting an experiment.
Random Errors:
• Random errors provide a measure of random deviations when measurements of a physical quantity are carried out
repeatedly.
• Specific causes for these variations cannot be determined, since these variations are unpredictable and uncontrollable
by the experimenter and are random in nature.
• They are of variable magnitude and may be either positive or negative. When these repeated measurements are plotted,
they follow a normal or Gaussian distribution.
• Random errors can be statistically evaluated, and their mean value and standard deviation can be determined.

Fig. Relationship between systematic and random errors with the measured value
• Random errors can be minimized by calculating the average of a large number of observations. Since precision is closely
associated with the repeatability of the measuring process, a precise instrument will have very few random errors and better
repeatability.
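A minimal sketch of this statistical treatment, using assumed repeated readings rather than lecture data:

```python
# Random-error sketch: mean and standard deviation of repeated readings (assumed data).
from statistics import mean, stdev

readings = [9.98, 10.02, 10.01, 9.99, 10.00, 10.03, 9.97, 10.01]   # mm

avg = mean(readings)   # best estimate of the measured value
sd  = stdev(readings)  # spread caused by random errors (repeatability)

print(f"mean = {avg:.3f} mm, standard deviation = {sd:.3f} mm")
# Averaging many readings reduces the influence of random errors on the reported value.
```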
Random Errors:
• Some likely sources of random errors are:

i. Presence of transient fluctuations in friction in the measuring instrument.
ii. Operator's inability to note the readings because of fluctuations during measurement.
iii. Positional errors associated with the measured object and standard, arising due to small variations in settings.

Difference between Systematic and Random errors:

Systematic Error | Random Error
Not easy to detect | Easy to detect
Cannot be eliminated by repeated measurements | Can be minimized by repeated measurements
Can be assessed easily | Statistical analysis required
Calibration helps reduce systematic errors | Calibration has no effect on random errors
Minimization of systematic errors increases the accuracy of measurement | Minimization of random errors increases repeatability and hence the precision of the measurement
Reproducible inaccuracies that are consistently in the same direction | Random in nature; can be both positive and negative
Methods of Measurement
• Deflection Method: This method involves the indication of the value of the quantity to be measured directly by the deflection of a pointer on a calibrated scale. For e.g., pressure measurement.
• Complementary Method: The value of the quantity to be measured is combined with a known value of the same quantity. The combination is so adjusted that the sum of these two values is equal to the predetermined comparison value. For e.g., determination of the volume of a solid by liquid displacement.
• Null measurement Method: In this method, the difference between the value of the quantity to be measured and
the known value of the same quantity with which comparison is to be made is brought to zero.
• Substitution Method: It is a direct comparison method. This method involves the replacement of the value of the
quantity to be measured with a known value of the same quantity, so selected that the effects produced in the
indicating device by these two values are the same. For e.g., The Borda method of determining mass.
• Contact Method: In this method, the surface to be measured is touched by the sensor or measuring tip of the
instrument. For e.g., measurements using micrometer, vernier calliper etc.
• Contactless Method: There is no direct contact with the surface to be measured. For e.g., Optical instruments,
Tool maker's microscope and Profile Projector.
• Composite Method: The actual contour of a component to be checked is compared with its maximum and minimum tolerance limits. This method is very reliable for ensuring interchangeability and is usually effected with composite GO gauges. For e.g., the use of a GO screw plug gauge to check the thread of a nut.
Limits, Fits and Tolerances
Historical Background
• Before the 18th century, production used to be confined to a small number of units, and the same operator could adjust the mating components to obtain the desired fit.

• Devices such as guns were made one at a time by a gunsmith. If a single component of a firearm needed replacement, the entire firearm either had to be sent to an expert gunsmith for custom repairs or discarded and replaced by another firearm.

Interchangeability
• An interchangeable part is one which can be substituted for similar part manufactured to the same drawing.
• When one component assembles properly with any mating component, both chosen at random, and satisfies the functionality requirements of the assembly, it is known as interchangeable.
• For example, consider the assembly of a shaft and a part with a hole. The two mating parts are produced in bulk, say 1000 each. By
interchangeable assembly, any shaft chosen randomly should assemble with any part with a hole selected at random, providing the
desired fit.
• Interchangeability of parts is achieved by combining several innovations and improvements in machining operations so that components can be produced with the required accuracy.
• Advantages of interchangeability: the assembly of mating parts is easier, the production rate is enhanced, assembly cost is brought down drastically, repair and replacement of worn-out parts are easy, and mass production becomes possible.
Tolerances
• We know that it is not possible to precisely manufacture components to a given dimension
because of the inherent inaccuracies of the manufacturing processes.
• The components are manufactured in accordance with the permissible tolerance limits, as suggested by the designer, to facilitate interchangeable manufacture.
• Tolerance can be defined as “the magnitude of permissible variation of a dimension or other
measured value or control criterion from the specified value”.
or
• The total variation permitted in the size of a dimension; it is the algebraic difference between the upper and lower acceptable dimensions and is an absolute value.
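A one-line sketch of this definition, with an assumed example dimension:

```python
# Tolerance sketch: magnitude of the permissible variation of a dimension.
def tolerance(upper_limit: float, lower_limit: float) -> float:
    """Absolute difference between the upper and lower acceptable dimensions."""
    return abs(upper_limit - lower_limit)

# Assumed example: a dimension with limits 40.02 mm and 39.95 mm
print(f"{tolerance(40.02, 39.95):.2f} mm")   # 0.07 mm
```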

Classification of Tolerance

Tolerance can be classified under the following categories:

• Unilateral tolerance

• Bilateral tolerance

• Compound tolerance

• Geometric tolerance
Unilateral Tolerance

• When the tolerance distribution is only on one side of the basic size, it is known as unilateral tolerance.

• Unilateral tolerance is employed when precision fits are required during assembly.

• This type of tolerance is usually indicated when the mating parts are also machined by the same operator.

• Unilateral tolerance is employed in the drilling process wherein dimensions of the hole are most likely to deviate in one direction
only, that is, the hole is always oversized rather than undersized.

Example

Bilateral Tolerance

• When the tolerance distribution lies on either side of the basic size, it is known as bilateral tolerance.

• This system is generally preferred in mass production where the machine is set for the basic size

Example
Compound Tolerance

• When tolerance is determined by established tolerances on more than one dimension, it is known as compound tolerance.
• In practice, compound tolerance should be avoided as far as possible.

Geometric Tolerance
• Geometric tolerance is defined as the total amount by which the dimension of a manufactured part can vary.
• This method is frequently used in industry; depending on the functional requirements, tolerances on diameter, straightness, and roundness may be specified separately.
Important Terms in Tolerancing

Shaft: A term used by convention to designate all external features of a part, including those which are not cylindrical.
Hole: A term used by convention to designate all internal features of a part, including those which are not cylindrical

Basic Size: The nominal diameter of the shaft (or bolt) and the hole. This is, in general, the same for both components.

Actual Size: The measured size of the finished part after machining.

Zero Line: It is a straight line corresponding to the basic size. The deviations are measured from this line. The positive
and negative deviations are shown above and below the zero line, respectively.

Limits of Size: The term limits of size refers to the two extreme permissible sizes for a dimension of a part (hole or shaft), between which the actual size should lie.
Maximum Limit of Size: The greater of the two limits of size of a part (hole or shaft).

Minimum Limit of Size: The smaller of the two limits of size of a part (hole or shaft).

Allowance: It is the difference between the basic dimensions of the mating parts. When the shaft size is less than
the hole size, then the allowance is positive and when the shaft size is greater than the hole size, then the allowance
is negative.

Tolerance Zone: It is the zone between the maximum and minimum limit size.
Upper Deviation: It is the algebraic difference between the maximum size and the basic size. The upper deviation
of a hole is represented by a symbol ES (Ecart Superior) and of a shaft, it is represented by es.
Lower Deviation: It is the algebraic difference between the minimum size and the basic size. The lower deviation
of a hole is represented by a symbol EI (Ecart Inferior) and of a shaft, it is represented by ei.
MAXIMUM AND MINIMUM METAL CONDITIONS

❑ Let us consider a shaft having a dimension of 40 ± 0.05 mm

• The maximum metal limit (MML) of the shaft will have a dimension of 40.05mm.
• The shaft will have the least possible amount of metal at a lower limit of 39.95 mm, and this limit of the shaft is
known as minimum or least metal limit (LML)

❑ Similarly, consider a hole having a dimension of 45 ± 0.05mm


• The hole will have a maximum possible amount of metal at a lower limit of 44.95mm and the lower limit of the hole
is designated as MML.

• The higher limit of the hole will be the LML, at a high limit of 45.05 mm
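These limits can be reproduced with a short illustrative sketch; the 40 ± 0.05 mm and 45 ± 0.05 mm dimensions are taken from the slide, while the helper function itself is an assumption for illustration.

```python
# Maximum/minimum metal limit sketch for a symmetrically toleranced dimension (mm).
def metal_limits(basic: float, tol: float, is_shaft: bool) -> tuple[float, float]:
    """Return (MML, LML). A shaft has the most metal at its upper limit;
    a hole has the most metal at its lower limit."""
    upper, lower = basic + tol, basic - tol
    return (upper, lower) if is_shaft else (lower, upper)

print(metal_limits(40.0, 0.05, is_shaft=True))    # shaft 40 ± 0.05 -> MML 40.05, LML 39.95
print(metal_limits(45.0, 0.05, is_shaft=False))   # hole 45 ± 0.05 -> MML 44.95, LML 45.05
```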
FITS
• Manufactured parts are required to mate with one another during assembly. The relationship between the two mating parts that are to be assembled, that is, the hole and the shaft, with respect to the difference in their dimensions before assembly is called a fit.
• An ideal fit is required for proper functioning of the mating parts.
• Three basic types of fits can be identified, depending on the actual limits of the hole or shaft: clearance fit, interference fit, and transition fit.

Clearance fit
• In clearance fit, an air space or clearance exists between the shaft and hole
• Such fits give loose joint.
• A clearance fit has positive allowance, i.e., there is minimum positive clearance between higher limit of the shaft
and lower limit of the hole.
• Allows rotation or sliding between the mating parts.
Interference fit
• A negative difference between the diameter of the hole and that of the shaft is called interference.
• In such cases, the diameter of the shaft is always larger than the hole diameter.
• It is used for components where motion or power must be transmitted.
• Interference exists between the high limit of the hole and the low limit of the shaft.

Transition fit
• It may result in either a clearance fit or an interference fit, depending on the actual values of the individual tolerances of the mating components.
• Transition fits are a compromise between clearance and interference fits.
• They are used for applications where accurate location is important but either a small amount of clearance or interference is permissible.
Allowance
• An allowance is the intentional difference between the maximum material limits, that is, LLH and HLS (minimum clearance or
maximum interference) of the two mating parts.

• It is the prescribed difference between the dimensions of the mating parts to obtain the desired type of fit.

• Allowance may be positive or negative. Positive allowance indicates a clearance fit, and an interference fit is indicated by a
negative allowance.

Allowance = LLH - HLS
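A short illustrative sketch (assumed limits, not lecture data) that computes the allowance and classifies the resulting fit from the four limits of size:

```python
# Fit-classification sketch from the limits of size (all values in mm).
def classify_fit(hlh: float, llh: float, hls: float, lls: float) -> str:
    """Classify a hole/shaft pair: HLH/LLH are the high/low limits of the hole,
    HLS/LLS the high/low limits of the shaft."""
    max_clearance = hlh - lls
    min_clearance = llh - hls            # this is also the allowance (LLH - HLS)
    if min_clearance > 0:
        return f"clearance fit (allowance = {min_clearance:+.3f} mm)"
    if max_clearance < 0:
        return f"interference fit (allowance = {min_clearance:+.3f} mm)"
    return "transition fit"

# Assumed example: hole 25.000-25.030 mm with shaft 24.980-24.994 mm
print(classify_fit(hlh=25.03, llh=25.00, hls=24.994, lls=24.98))
```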


Hole Basis and Shaft Basis Systems
• To obtain the desired class of fits, either the size of the hole or the size of the shaft must vary.
• Two types of systems are used to represent the three basic types of fits, namely clearance, interference, and transition fits.
• They are (a) hole basis system and (b) shaft basis system.
Although both systems can provide the same classes of fits, the hole basis system is generally preferred in view of functional and manufacturing considerations.

Hole Basis System


• In this system, the size of the hole is kept constant, and the shaft size is varied to give various types of fits.
• In a hole basis system, the fundamental deviation or lower deviation of the hole is zero, that is, the lower limit of the hole is the
same as the basic size.
• The two limits of the shaft and the higher dimension of the hole are then varied to obtain the desired type of fit.
• This type of system is widely adopted in industries, as it is easier to manufacture shafts of varying sizes to the required
tolerances

Fig. Hole basis system (a) Clearance fit (b) Transition fit (c) Interference fit
Shaft Basis System

• The system in which the dimension of the shaft is kept constant, and the hole size is varied to
obtain various types of fits is referred to as shaft basis system.
• In this system, the fundamental deviation or the upper deviation of the shaft is zero, that is, the HLS equals the basic size.
• The desired class of fits is obtained by varying the lower limit of the shaft and both limits of the hole.
• This system is not preferred in industries, as it requires a greater number of standard-size tools such as reamers,
broaches, and gauges, which increases manufacturing and inspection costs.
• It is normally preferred where the hole dimension is dependent on the shaft dimension and is used in situations
where the standard shaft determines the dimensions of the mating parts such as couplings, bearings, collars, gears,
and bushings.

Fig. Shaft basis system (a) Clearance fit (b) Transition fit (c) Interference fit
Limit Gauging
• The term ‘limit gauging’ signifies the use of gauges for checking the limits of the components. Gauging plays an important role
in the control of dimensions and interchangeable manufacture.
• Limit gauges ensure that the components lie within the permissible limits, but they do not determine the actual size or
dimensions.
• The gauges required to check the dimensions of the components correspond to two sizes conforming to the maximum and
minimum limits of the components.
• They are called GO and NO GO (or NOT GO) gauges, which correspond, respectively, to the MML and LML of the component.
• The GO gauge manufactured to the maximum limit will assemble with the mating (opposed) part, whereas the NOT GO gauge
corresponding to the low limit will not, hence the names GO and NOT GO gauges, respectively.

Metal limits for shaft gauging


Taylor’s Principle
• Taylor’s principle states that the GO gauge is designed to check maximum metal conditions, that is, LLH and HLS. It should
also simultaneously check as many related dimensions, such as roundness, size, and location, as possible.

• The NOT GO gauge is designed to check minimum metal conditions, that is, HLH and LLS. It should check only one
dimension at a time. Thus, a separate NOT GO gauge is required for each individual dimension.

• During inspection, the GO side of the gauge should enter the hole or just pass over the shaft under the weight of the gauge
without using undue force. The NOT GO side should not enter
or pass.

GO and NOT GO limits of plug gauge
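A minimal sketch of this GO/NOT GO decision for a shaft, with assumed limits and part sizes:

```python
# Limit-gauging sketch for a shaft: the GO gauge is set at the MML (high limit)
# and the NOT GO gauge at the LML (low limit). Values in mm; example sizes are assumed.
def gauge_shaft(actual: float, high_limit: float, low_limit: float) -> str:
    go_passes = actual <= high_limit        # GO gauge passes over the shaft
    not_go_passes = actual < low_limit      # NOT GO gauge should NOT pass
    if go_passes and not not_go_passes:
        return "accept"
    return "reject (oversize)" if not go_passes else "reject (undersize)"

for size in (40.03, 40.07, 39.92):          # shaft toleranced 40 +0.05 / -0.05 mm (assumed)
    print(size, "->", gauge_shaft(size, high_limit=40.05, low_limit=39.95))
```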


Q1. In a limit system, the following limits are specified for a hole and shaft assembly:

Hole = 30 +0.02/+0.00 mm (i.e., limits 30.00 and 30.02 mm) and shaft = 30 −0.02/−0.05 mm (i.e., limits 29.95 and 29.98 mm).


Determine the (a) tolerance and (b) allowance.

Solution:
(a) Determination of tolerance:
Tolerance on hole = HLH − LLH
= 30.02 − 30.00 = 0.02 mm
Tolerance on shaft = HLS − LLS
= [(30 − 0.02) − (30 − 0.05)] = 0.03 mm

(b) Determination of allowance:


Allowance = Maximum metal condition of hole − Maximum metal condition of shaft
= LLH − HLS
= 30.00 − 29.98 = 0.02 mm
Q2. The following limits are specified in a limit system, to give a clearance fit between a hole and a shaft:

Hole = 25 +0.03/+0.00 mm (i.e., limits 25.00 and 25.03 mm) and shaft = 25 −0.006/−0.020 mm (i.e., limits 24.980 and 24.994 mm).


Determine the following:
(a) Basic size
(b) Tolerances on shaft and hole
(c) Maximum and minimum clearances

Solution:
(a) Basic size is the same for both shaft and hole, that is, 25 mm.
(b) Determination of tolerance:
Tolerance on hole = HLH − LLH
= 25.03 − 25.00 = 0.03 mm

Tolerance on shaft = HLS − LLS


= [(25 − 0.006) − (25 − 0.020)] = 0.014 mm

Determination of clearances:
Maximum clearance = HLH − LLS
= 25.03 − 24.98 = 0.05 mm
Minimum clearance = LLH − HLS
= 25.00 − (25 − 0.006) = 0.006 mm
Q3. For the following hole and shaft assembly, determine (a) hole and shaft tolerance and (b) type of fit.

Hole = 20 +0.05/+0.00 mm (i.e., limits 20.00 and 20.05 mm) and shaft = 20 +0.08/+0.06 mm (i.e., limits 20.06 and 20.08 mm)

Solution:
(a) Determination of tolerance:
Tolerance on hole = HLH − LLH
= 20.05 − 20.00 = 0.05 mm
Tolerance on shaft = HLS − LLS
= 20.08 − 20.06 = 0.02 mm

(b) To determine the type of fit, calculate maximum and minimum clearances:
Maximum clearance = HLH − LLS
= 20.05 − 20.06 = −0.01 mm
Minimum clearance = LLH − HLS
= 20.00 − 20.08 = −0.08 mm
Since both differences are negative, it can be concluded that the given hole and shaft pair has an interference
fit.
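The arithmetic in these worked examples can be checked numerically; the sketch below (illustrative) recomputes the tolerances and clearances of Q3 from the reconstructed limits.

```python
# Numerical check of Q3 (hole 20.00-20.05 mm, shaft 20.06-20.08 mm), values in mm.
hlh, llh = 20.05, 20.00   # high/low limits of the hole
hls, lls = 20.08, 20.06   # high/low limits of the shaft

hole_tolerance  = hlh - llh    # 0.05 mm
shaft_tolerance = hls - lls    # 0.02 mm
max_clearance   = hlh - lls    # -0.01 mm
min_clearance   = llh - hls    # -0.08 mm

fit = "interference" if max_clearance < 0 else ("clearance" if min_clearance > 0 else "transition")
print(f"hole tol = {hole_tolerance:.2f}, shaft tol = {shaft_tolerance:.2f}, "
      f"max clearance = {max_clearance:+.2f}, min clearance = {min_clearance:+.2f} -> {fit} fit")
```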
