Unit 2 BME RGPV Notes

📐

Unit 2 (Measurements)
Concept of Measurement
Measurement is the process of quantifying physical quantities to understand and analyze
systems in mechanical engineering. It involves comparing an unknown quantity to a standard unit
using appropriate tools and techniques.

1. Definition of Measurement
Measurement is defined as the act of determining the magnitude, size, or quantity of a physical
parameter relative to a standard unit (e.g., meters, kilograms, seconds).

2. Importance of Measurement in Mechanical Engineering


Design and Analysis: Ensures accurate dimensions and tolerances in mechanical
components.

Performance Evaluation: Assesses system efficiency, such as engine power and thermal
performance.

Quality Control: Verifies compliance with manufacturing specifications.

Standardisation: Enables interoperability and ensures uniformity in production.

3. Basic Elements of Measurement


1. Measurand: The physical quantity being measured (e.g., length, pressure, velocity).

2. Measurement System: Comprises instruments and methods used for measurement.

3. Standards: Defines units of measurement, e.g., International System of Units (S.I.).

4. Accuracy and Precision: Refers to the closeness of measurements to the true value and the
consistency of repeated measurements, respectively.

4. Types of Measurement
1. Direct Measurement: Directly comparing a quantity with a standard, e.g., using a ruler for
length.

2. Indirect Measurement: Inferring the value from related parameters, e.g., calculating velocity
from displacement and time.

5. Measurement Systems
1. Mechanical Systems: Tools like Vernier calipers, micrometers, and gauges.

2. Electronic Systems: Sensors and transducers for measuring dynamic quantities.

3. Fluid Systems: Devices like manometers and flow meters.

6. Characteristics of Measurement Systems


Sensitivity: Ability to detect small changes in the measurand.

Linearity: Proportionality between input and output over a range.

Response Time: Time taken by a system to stabilize after a change in input.

7. Errors in Measurement
1. Systematic Errors: Caused by flaws in the instrument or setup.

2. Random Errors: Due to unpredictable variations.

3. Human Errors: Resulting from operator mistakes.

Error Mitigation
Regular calibration of instruments.

Standardised operating procedures.

8. Applications in Mechanical Engineering


Design Validation: Measuring stress, strain, and thermal properties.

Diagnostics: Identifying wear and tear through vibration analysis.

Manufacturing: Ensuring dimensional accuracy in CNC machining.

Errors in Measurement
Errors in measurement refer to the deviations between the measured value of a quantity and its
true value. In engineering applications, minimising these errors is crucial for achieving precise
and accurate results. Understanding the sources and types of errors helps in improving
measurement reliability.

1. Definition of Error
An error is the difference between the true value (actual value) and the measured value of a
physical quantity.

Mathematical Representation:

Error = True Value − Measured Value

2. Types of Errors
Errors in measurement can be broadly categorised into three types:

(A) Systematic Errors


Systematic errors are predictable and consistent inaccuracies caused by flaws in the
measurement system or method.

1. Instrumental Errors:

Arise due to defects in the measuring instrument.

Examples: Zero error in Vernier callipers or micrometers.

2. Environmental Errors:

Caused by external factors such as temperature, humidity, or vibrations.

Example: Thermal expansion of components leading to incorrect readings.

3. Observational Errors:

Result from the operator's misjudgment during observation.

Example: Parallax error while reading an analog scale.

4. Calibration Errors:

Occur due to improper calibration of the instrument against standards.

(B) Random Errors


Random errors are unpredictable and arise due to unknown or uncontrollable factors.

These errors occur irregularly and affect the precision of measurements.

Example: Fluctuations in electrical signals during sensor readings.

(C) Gross Errors


Gross errors are caused by human mistakes or blunders during measurement or recording.

Examples: Incorrect settings, calculation errors, or misreading the instrument scale.

3. Sources of Errors
1. Instrumental: Imperfections in design or wear and tear of the device.

2. Environmental: Factors like pressure changes, electromagnetic interference.

3. Human: Lack of skill or attention by the operator.

4. Classification Based on Magnitude


1. Absolute Error:

The magnitude of the difference between the true value and measured value.

2. Relative Error:

The ratio of absolute error to the true value.

3. Percentage Error:

The relative error expressed as a percentage.
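The three magnitude-based error measures can be sketched in Python (the function and variable names are illustrative, not from the notes):

```python
def error_metrics(true_value, measured_value):
    """Compute absolute, relative, and percentage error for one reading."""
    absolute = abs(true_value - measured_value)   # |true - measured|
    relative = absolute / abs(true_value)         # fraction of the true value
    percentage = relative * 100                   # relative error as a percent
    return absolute, relative, percentage

# A 100 mm gauge block measured as 99.8 mm:
abs_e, rel_e, pct_e = error_metrics(100.0, 99.8)  # 0.2 mm, 0.002, 0.2 %
```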

5. Methods to Minimise Errors


1. Calibration: Regularly calibrating instruments against standards.

2. Environmental Control: Minimising external factors like temperature and vibrations.

3. Operator Training: Ensuring skilled operation and accurate observation.

4. Instrument Maintenance: Regular servicing and inspection of measuring devices.

5. Use of Modern Instruments: Employing digital or automated systems to reduce human errors.

6. Applications in Mechanical Engineering

Ensuring precise dimensional tolerances during manufacturing.

Monitoring performance parameters like pressure and temperature in machinery.

Measuring stress, strain, and other mechanical properties for structural analysis.

Temperature Measurement
Temperature measurement is a crucial aspect of Basic Mechanical Engineering, as it helps
monitor and control thermal systems. Temperature is a measure of the thermal energy of a
substance, and its accurate determination is essential for various industrial, scientific, and
engineering applications.

1. Definition of Temperature Measurement


Temperature measurement is the process of determining the hotness or coldness of a body or
environment using specific devices and methods. It is expressed in standard units such as
Celsius (°C), Fahrenheit (°F), or Kelvin (K).

2. Principles of Temperature Measurement


Temperature measurement is based on the physical properties of materials that change
predictably with temperature. Some common principles include:

1. Thermal Expansion: Expansion of solids, liquids, or gases with temperature.

2. Electrical Resistance: Change in electrical resistance of materials.

3. Thermoelectric Effect: Voltage generated at the junction of two different metals.

4. Radiation Emission: Intensity of infrared radiation emitted by a body.

3. Methods of Temperature Measurement


Temperature can be measured using contact or non-contact methods.

(A) Contact Methods


1. Thermometers:

Use thermal expansion or liquid column displacement.

Example: Mercury-in-glass thermometers.

2. Thermocouples:

Operate on the thermoelectric effect.

Composed of two dissimilar metals that generate voltage proportional to temperature.

3. Resistance Temperature Detectors (RTDs):

Use the principle of change in electrical resistance with temperature.

Provide high accuracy and stability.

4. Thermistors:

Semiconductors whose resistance decreases sharply with temperature.

Common in electronic applications.

5. Bimetallic Strips:

Two metals with different thermal expansion coefficients are bonded together.

Used in thermostats and industrial applications.

(B) Non-Contact Methods


1. Infrared Thermometers:

Measure infrared radiation emitted by a body.

Suitable for high-temperature and hazardous environments.

2. Pyrometers:

Used for extremely high temperatures, such as in furnaces.

Operate on the principle of radiation intensity.

4. Types of Temperature Measurement Scales


1. Celsius Scale (°C): Used worldwide, based on the freezing and boiling points of water (0°C
and 100°C).

2. Fahrenheit Scale (°F): Common in the U.S., freezing and boiling points of water are 32°F and
212°F.

3. Kelvin Scale (K): Absolute scale used in scientific applications, starts from absolute zero.

Conversions Between Scales:

T(K) = T(°C) + 273.15

T(°F) = T(°C) × 9/5 + 32
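These scale conversions can be written directly as Python functions (a minimal sketch; the names are illustrative):

```python
def celsius_to_kelvin(t_c):
    """T(K) = T(°C) + 273.15"""
    return t_c + 273.15

def celsius_to_fahrenheit(t_c):
    """T(°F) = T(°C) × 9/5 + 32"""
    return t_c * 9 / 5 + 32

# Boiling point of water: 100 °C = 373.15 K = 212 °F
t_k = celsius_to_kelvin(100)       # 373.15
t_f = celsius_to_fahrenheit(100)   # 212.0
```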

5. Advantages and Limitations of Temperature Measurement Devices

Device | Advantages | Limitations
Thermocouple | Wide temperature range, fast response | Requires calibration, less accurate
RTD | High accuracy and stability | Expensive, slower response
Infrared Thermometer | Non-contact, suitable for moving objects | Affected by emissivity and environmental factors
Mercury Thermometer | Simple and precise | Toxic, limited temperature range

6. Applications in Mechanical Engineering


1. Industrial Processes: Monitoring furnace, boiler, and engine temperatures.

2. HVAC Systems: Ensuring temperature control for thermal comfort.

3. Material Testing: Measuring temperatures during heat treatment or welding.

4. Automobile Industry: Monitoring engine and coolant temperatures.

7. Key Considerations for Accurate Measurement


1. Calibration: Regular calibration of devices against standards.

2. Environmental Conditions: Minimize external factors like humidity or vibration.

3. Proper Placement: Ensuring contact or correct positioning for non-contact methods.

Pressure Measurement
Pressure measurement is an essential aspect of Basic Mechanical Engineering. It involves
determining the force exerted per unit area by a fluid (liquid or gas). Accurate pressure
measurement is crucial for the design, analysis, and control of mechanical systems in various
industrial applications.

1. Definition of Pressure Measurement


Pressure is defined as the force applied per unit area, typically expressed in units such as
Pascals (Pa), bar, or psi.
Mathematical Formula:

P = F / A

Where:

P = Pressure
F = Force
A = Area

2. Importance of Pressure Measurement


Ensures the safe operation of systems such as boilers, turbines, and hydraulic machines.

Assists in performance evaluation of fluid systems like pipelines and compressors.

Enables process control in industries like petrochemicals, power plants, and aerospace.

3. Types of Pressure
1. Absolute Pressure (Pabs): Measured relative to a perfect vacuum.

2. Gauge Pressure (Pgauge): Measured relative to atmospheric pressure.

Positive gauge pressure is above atmospheric pressure.

Negative gauge pressure (vacuum) is below atmospheric pressure.

3. Differential Pressure: The difference between two pressure points in a system.

4. Methods of Pressure Measurement

(A) Mechanical Devices


1. Manometers:

Use the height of a liquid column to measure pressure.

Types: U-tube manometer, inclined manometer.

Advantages: Simple, accurate.

Limitations: Bulky, not suitable for dynamic measurements.

2. Bourdon Tube:

Consists of a curved tube that straightens when pressure is applied.

Common in pressure gauges.

Advantages: Durable, measures a wide range of pressures.

Limitations: Less accurate for low pressures.

3. Diaphragm Gauges:

Use a flexible diaphragm that deflects under pressure.

Advantages: Sensitive, suitable for low-pressure measurements.

Limitations: Limited range.

4. Bellows:

Consist of a series of corrugated tubes that expand or contract with pressure.

Advantages: Compact, suitable for moderate pressures.

(B) Electronic Devices


1. Pressure Transducers:

Convert pressure into an electrical signal.

Types: Strain gauge transducers, piezoelectric sensors.

Advantages: Highly accurate, suitable for dynamic measurements.

Limitations: Expensive, requires power supply.

2. Pressure Sensors:

Used in modern systems like automobiles and HVAC.

Advantages: Compact, easy to integrate with digital systems.

5. Units of Pressure Measurement


Unit | Value
1 Pascal (Pa) | 1 N/m²
1 bar | 100000 Pa
1 atm | 101.325 kPa
1 psi | 6.895 kPa

6. Key Characteristics of Pressure Measuring Devices


1. Range: Maximum and minimum pressure the device can measure.

2. Sensitivity: Ability to detect small changes in pressure.

3. Accuracy: Closeness of the measured value to the true value.

4. Response Time: Time taken to respond to pressure changes.

7. Common Applications of Pressure Measurement


1. Industrial Systems: Boilers, compressors, and pipelines.

2. Automobiles: Tire pressure monitoring, fuel injection systems.

3. HVAC Systems: Monitoring airflow and refrigerant pressure.

4. Aerospace: Cabin pressure and altitude measurement.

8. Errors in Pressure Measurement


1. Instrumental Errors: Caused by imperfections in the measuring device.

2. Parallax Errors: Occur while reading analog gauges.

3. Environmental Errors: Due to temperature, vibrations, or electromagnetic interference.

Velocity Measurement
Velocity measurement refers to the process of determining the rate of change of position of an
object with respect to time. It is an essential aspect of mechanical engineering, particularly in
fluid mechanics, vehicle dynamics, and industrial applications.

1. Definition of Velocity Measurement


Velocity is the vector quantity that defines the speed of an object in a specific direction.

Mathematical Representation:

v = dx / dt

Where:

v = Velocity
dx = Displacement
dt = Time Interval
Velocity measurement involves techniques to quantify the magnitude and direction of this motion.
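The defining relation v = dx/dt can be sketched in Python for an average velocity over a finite interval (names are illustrative):

```python
def average_velocity(dx_m, dt_s):
    """v = dx / dt; the sign of dx carries the direction along the axis."""
    return dx_m / dt_s

v = average_velocity(150.0, 12.0)   # 150 m in 12 s -> 12.5 m/s
v_kmh = v * 3.6                     # convert m/s to km/h
```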

2. Importance of Velocity Measurement


1. Design of Machinery: Ensures efficient operation of machines with moving parts.

2. Fluid Dynamics: Measures flow velocities in pipes and ducts.

3. Performance Evaluation: Assesses speed and efficiency in automotive and aerospace systems.

4. Industrial Applications: Used in conveyor belts, turbines, and jet engines.

3. Methods of Velocity Measurement

(A) For Solids
1. Contact Methods

Tachometers: Measure rotational velocity in machines.

Example: Mechanical and digital tachometers.

Advantages: Simple, reliable for direct measurements.

Limitations: Not suitable for non-rotating objects.

Encoders: Measure angular velocity and convert it to linear velocity using geometry.

Advantages: High precision, suitable for automation systems.

2. Non-Contact Methods

Laser Doppler Velocimeter (LDV): Uses the Doppler effect of laser light to measure
velocity.

Advantages: High accuracy, no physical contact with the object.

Limitations: Expensive and requires clear visibility.

(B) For Fluids


1. Pitot Tube

Measures velocity in fluid flow based on pressure difference (Bernoulli’s principle).

Advantages: Simple, widely used in aerodynamics and fluid mechanics.

Limitations: Requires calibration and is not suitable for highly turbulent flows.

2. Anemometer

Measures air velocity, commonly used in HVAC systems and meteorology.

Types: Cup anemometers, hot-wire anemometers.

Advantages: Easy to use, cost-effective.

Limitations: Limited to air or gas velocity.

3. Ultrasonic Flow Meters

Use ultrasound waves to measure fluid velocity.

Advantages: Non-invasive, suitable for various fluids.

Limitations: Expensive and requires precise alignment.

4. Magnetic Flow Meters

Measure velocity of conductive fluids based on Faraday’s law of electromagnetic induction.

Advantages: Accurate, no moving parts.

Limitations: Limited to conductive fluids.
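For the Pitot tube above, the standard incompressible-flow result from Bernoulli’s principle gives velocity from the measured dynamic pressure, v = √(2·Δp/ρ). A minimal sketch, assuming incompressible flow and an illustrative function name:

```python
import math

def pitot_velocity(delta_p_pa, density_kg_m3):
    """Velocity from a pitot tube's dynamic pressure (Bernoulli, incompressible):
    v = sqrt(2 * Δp / ρ)."""
    return math.sqrt(2 * delta_p_pa / density_kg_m3)

# Air (ρ ≈ 1.225 kg/m³) with a measured dynamic pressure of 200 Pa:
v = pitot_velocity(200.0, 1.225)   # roughly 18 m/s
```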

4. Devices and Techniques for Velocity Measurement


Device | Principle | Applications
Pitot Tube | Pressure difference (Bernoulli's principle) | Aircraft speed, fluid flow in pipes
Tachometer | Rotational speed of shafts | Engines, turbines, machinery
LDV | Doppler shift of laser light | High-precision velocity in research
Ultrasonic Flow Meter | Doppler or transit time of sound waves | Industrial fluid flow, wastewater systems
Anemometer | Measurement of airflow velocity | HVAC, meteorology

5. Factors Affecting Velocity Measurement


1. Medium Properties: Density, viscosity, and temperature of the medium.

2. Environmental Conditions: Turbulence, obstructions, and flow uniformity.

3. Device Calibration: Ensures accuracy and reliability.

6. Units of Velocity
SI Unit: Meters per second (m/s)

Other Units: Kilometers per hour (km/h), miles per hour (mph)

7. Applications in Mechanical Engineering


1. Automotive Industry: Monitoring vehicle speed, engine performance.

2. Fluid Systems: Measuring flow rates in pipelines and HVAC ducts.

3. Aerospace Engineering: Determining aircraft velocity for stability and performance.

4. Industrial Automation: Controlling conveyor belt speeds and robotic motion.

8. Errors in Velocity Measurement


1. Instrumental Errors: Imperfections in the device or setup.

2. Environmental Errors: Turbulence, vibrations, or obstructions.

3. Human Errors: Inaccurate setup or misinterpretation of data.

Force and Torque Measurement

Force and torque measurement are critical aspects of mechanical engineering used to analyze,
monitor, and control the performance of mechanical systems. They are essential in
understanding the behavior of structures, machines, and dynamic systems.

1. Force Measurement

Definition of Force
Force is a physical quantity that causes an object to move, deform, or change its state of rest or
motion.
Mathematical Formula:

F = m⋅a

Where:

F = Force (N)
m = Mass (kg)
a = Acceleration (m/s²)

Methods of Force Measurement


1. Mechanical Devices:

Use elastic deformation or deflection to measure force.

Examples:

Spring Balances: Use Hooke’s law.

F = k⋅x

Proving Rings: Measure deformation in a circular ring under force.

2. Hydraulic Devices:

Use fluid pressure to measure force.

Example:

Hydraulic Load Cells: Force compresses fluid, and pressure is measured.

3. Electrical Devices:

Use electrical properties that change under applied force.

Examples:

Strain Gauges: Measure strain produced by force using a Wheatstone bridge circuit.

Piezoelectric Sensors: Generate an electrical signal proportional to applied force.
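The two formulas used above, Newton’s second law (F = m·a) and Hooke’s law for a spring balance (F = k·x), can be sketched in Python (names and the sample numbers are illustrative):

```python
def force_from_mass(mass_kg, accel_m_s2):
    """Newton's second law: F = m * a, in newtons."""
    return mass_kg * accel_m_s2

def spring_force(k_n_per_m, x_m):
    """Hooke's law as used by a spring balance: F = k * x."""
    return k_n_per_m * x_m

f_weight = force_from_mass(10.0, 9.81)   # 10 kg under gravity -> 98.1 N
f_spring = spring_force(2000.0, 0.05)    # 2000 N/m spring stretched 50 mm -> 100 N
```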

Applications of Force Measurement


1. Monitoring loads on structural components like beams and columns.

2. Measuring forces in machine tools, presses, and assembly lines.

3. Testing material properties (e.g., tensile and compressive strength).

2. Torque Measurement

Definition of Torque
Torque is the rotational equivalent of force, defined as the force applied at a distance from the
axis of rotation.

Mathematical Formula:

T = F ⋅ r ⋅ sin θ

Where:

T = Torque (Nm)
F = Force (N)
r = Perpendicular distance from the axis (m)
θ = Angle between force and lever arm
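The torque formula T = F·r·sin θ can be sketched in Python (names are illustrative; θ defaults to 90°, the perpendicular case where sin θ = 1):

```python
import math

def torque_nm(force_n, lever_arm_m, angle_deg=90.0):
    """T = F * r * sin(θ); θ = 90° when the force is perpendicular to the arm."""
    return force_n * lever_arm_m * math.sin(math.radians(angle_deg))

t = torque_nm(50.0, 0.3)   # 50 N perpendicular at 0.3 m -> 15 Nm
```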

Methods of Torque Measurement


1. Mechanical Devices:

Use deflection or deformation to measure torque.

Examples:

Torque Wrenches: Measure torque applied to fasteners.

Torsion Bars: Measure angular deformation caused by torque.

2. Electrical Devices:

Use sensors to convert torque into electrical signals.

Examples:

Strain Gauge Torque Transducers: Use strain gauges mounted on a rotating shaft to
measure torque.

Piezoelectric Torque Sensors: Generate voltage proportional to torque.

3. Dynamic Torque Measurement:

For rotating systems, torque can be measured using:

Rotary Torque Sensors: Mounted on rotating shafts to measure dynamic torque.

Magneto-elastic Sensors: Measure changes in magnetic properties under torque.

Applications of Torque Measurement


1. Monitoring torque in engines, turbines, and gearboxes.

2. Quality control during assembly processes (e.g., tightening bolts).

3. Measuring torque in rotating equipment like pumps, fans, and motors.

3. Units of Force and Torque


Quantity | SI Unit | Other Units
Force | Newton (N) | kgf, lbf
Torque | Newton-meter (Nm) | ft-lbf, kgf-m

4. Key Characteristics of Measuring Devices


1. Range: Maximum measurable force or torque.

2. Accuracy: Closeness of the measurement to the true value.

3. Sensitivity: Device response to small changes in force or torque.

4. Durability: Ability to withstand repeated loads without damage.

5. Errors in Force and Torque Measurement


1. Instrumental Errors: Caused by wear and tear or calibration issues.

2. Environmental Errors: Temperature, vibrations, or external forces affecting readings.

3. Human Errors: Misinterpretation of readings or improper setup.

6. Industrial Applications
Force Measurement:

Material testing machines.

Measuring load in cranes, bridges, and elevators.

Torque Measurement:

Automotive engine testing.

Controlling torque in assembly lines.

Monitoring wind turbine performance.

Vernier Calliper
The Vernier Calliper is a precision instrument used to measure linear dimensions such as length,
depth, internal diameter, and external diameter with high accuracy. It is widely used in
mechanical engineering for dimensional analysis.

1. Definition of Vernier Caliper


A Vernier Calliper is a measuring device equipped with a main scale and a secondary scale
(Vernier scale) to provide precise measurements, typically with an accuracy of up to 0.02 mm.

2. Construction and Components

Main Parts of a Vernier Caliper


1. Main Scale: A fixed scale marked in millimeters or inches.

2. Vernier Scale: A sliding scale with finer graduations for precise measurement.

3. Fixed Jaw: A stationary jaw on the main scale.

4. Sliding Jaw: A movable jaw on the Vernier scale.

5. Depth Rod: A thin rod used to measure the depth of holes or slots.

6. Lock Screw: Secures the Vernier scale in position after measurement.

7. Fine Adjustment Screw: Helps in making fine adjustments for accurate readings.

3. Working Principle of Vernier Caliper


The Vernier Caliper works on the principle of alignment of Vernier scale divisions with the main
scale divisions.

The main scale provides the measurement in whole units.

The Vernier scale provides the fractional part by showing the alignment of a Vernier division
with a main scale division.

4. Types of Measurements with Vernier Caliper


1. External Measurements: Using the outer jaws to measure the external dimensions of objects.

2. Internal Measurements: Using the inner jaws to measure the internal diameter of holes or
slots.

3. Depth Measurements: Using the depth rod to measure the depth of holes.

4. Step Measurements: Measuring the height difference between two surfaces.

5. Procedure for Using a Vernier Caliper


1. Set to Zero: Ensure the Vernier scale aligns with the main scale when fully closed.

2. Position the Object: Place the object between the appropriate jaws or under the depth rod.

3. Take the Measurement: Note the main scale reading just before the Vernier zero.

4. Find the Vernier Reading: Identify the Vernier division aligning with a main scale division.

5. Calculate the Final Reading:


Final Reading = Main Scale Reading + (Vernier Scale Reading × Least Count)
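The final-reading calculation can be sketched in Python, using the 0.02 mm least count stated earlier for a typical Vernier calliper (the function name and sample values are illustrative):

```python
def vernier_reading(main_scale_mm, vernier_division, least_count_mm=0.02):
    """Final reading = main scale reading + vernier division * least count."""
    return main_scale_mm + vernier_division * least_count_mm

# Main scale shows 24 mm and the 7th vernier line coincides (LC = 0.02 mm):
reading = vernier_reading(24.0, 7)   # 24.14 mm
```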

6. Advantages of Vernier Caliper


1. High precision and accuracy.

2. Can measure multiple dimensions (length, diameter, depth).

3. Simple and easy to use.

4. Portable and lightweight.

7. Limitations of Vernier Caliper


1. Limited to smaller dimensions compared to micrometers.

2. Requires skill to align and read scales accurately.

3. Errors due to parallax if not viewed properly.

8. Applications of Vernier Caliper


1. Mechanical Engineering: Measuring components for manufacturing and assembly.

2. Quality Control: Ensuring dimensions meet specified tolerances.

3. Material Testing: Measuring specimen dimensions before tests.

4. Scientific Research: Precision measurement in laboratory experiments.

9. Common Errors in Vernier Caliper Usage


1. Zero Error: Occurs if the zero of the Vernier scale does not align with the main scale when
fully closed.

Positive Zero Error: Vernier zero is ahead of main scale zero.

Negative Zero Error: Vernier zero is behind main scale zero.

2. Parallax Error: Misreading due to incorrect eye positioning.
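Zero error is corrected by subtracting it (with its sign) from the observed reading, following the sign convention above. A minimal sketch with illustrative names:

```python
def corrected_reading(observed_mm, zero_error_mm):
    """Corrected reading = observed reading - zero error.
    Zero error is positive when the vernier zero is ahead of the main-scale
    zero, and negative when it is behind."""
    return observed_mm - zero_error_mm

r1 = corrected_reading(12.46, +0.04)   # positive zero error -> subtract
r2 = corrected_reading(12.46, -0.04)   # negative zero error -> effectively add
```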

Micrometer
A micrometer is a precision instrument used to measure small dimensions with high accuracy. It
is commonly used in mechanical engineering and machining for measuring external dimensions
(e.g., diameter, thickness) of objects.

1. Definition of Micrometer
A micrometer is a mechanical measuring device that uses a calibrated screw for precise measurements. It typically provides accuracy up to 0.01 mm or 0.001 in.

2. Types of Micrometers
1. Outside Micrometer: Measures the external dimensions of objects (e.g., thickness, diameter).

2. Inside Micrometer: Measures internal dimensions, such as the diameter of holes.

3. Depth Micrometer: Measures the depth of slots, holes, or steps.

4. Specialised Micrometers: Include thread micrometers, blade micrometers, etc., for specific
applications.

3. Construction and Components

Main Parts of a Micrometer


1. Frame: The rigid structure holding all parts.

2. Anvil: The fixed measuring surface.

3. Spindle: The movable measuring surface controlled by a screw mechanism.

4. Screw: A precision-threaded component responsible for moving the spindle.

5. Thimble: A rotating component marked with divisions for reading measurements.

6. Sleeve (Barrel): A fixed scale marked with linear graduations.

7. Ratchet Stop: Ensures consistent pressure during measurement.

8. Lock Nut: Locks the spindle in position to retain the measurement.

4. Working Principle of Micrometer

The micrometer operates on the principle of the screw. A precision-threaded screw converts
small rotational motion into linear movement of the spindle. By rotating the thimble, the spindle
moves closer to or farther from the anvil, allowing precise measurement of the object.

Screw Pitch Formula:

Distance Moved by Spindle = Pitch of Screw × Number of Rotations

Pitch: The distance the spindle moves in one full rotation of the thimble.

5. Least Count of a Micrometer


The least count is the smallest measurable value. It is determined by the pitch of the screw and
the number of divisions on the thimble.

Least Count (LC) = Pitch of Screw / Number of Divisions on Thimble


For a standard micrometer:

Pitch = 0.5 mm (the spindle moves 0.5 mm per full rotation).

Thimble divisions = 50.

Least Count = 0.5 / 50 = 0.01 mm.
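The least-count formula and the resulting reading can be sketched in Python (function names and sample values are illustrative):

```python
def micrometer_least_count(pitch_mm, thimble_divisions):
    """LC = pitch of screw / number of thimble divisions."""
    return pitch_mm / thimble_divisions

def micrometer_reading(sleeve_mm, thimble_division, lc_mm):
    """Reading = sleeve reading + thimble division * least count."""
    return sleeve_mm + thimble_division * lc_mm

lc = micrometer_least_count(0.5, 50)   # 0.01 mm for a standard micrometer
r = micrometer_reading(7.5, 23, lc)    # 7.5 mm + 23 * 0.01 mm = 7.73 mm
```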

6. Procedure for Using a Micrometer


1. Check Zero: Ensure the zero mark on the thimble aligns with the zero mark on the sleeve
when fully closed.

2. Place the Object: Position the object between the anvil and spindle.

3. Tighten Using Ratchet Stop: Rotate the thimble gently using the ratchet to avoid over-tightening.

4. Read the Sleeve: Note the last visible graduation on the sleeve.

5. Read the Thimble: Note the thimble graduation that aligns with the reference line on the
sleeve.

6. Calculate the Measurement: Add the sleeve reading and the thimble reading.

7. Advantages of Micrometer
1. High precision and accuracy (up to 0.01mm).

2. Simple and reliable operation.

3. Portable and durable.

4. Can measure very small dimensions.

8. Limitations of Micrometer
1. Limited range (typically 0–25 mm).

2. Requires calibration for accurate results.

3. Errors can occur due to improper handling or wear of threads.

4. Cannot measure large dimensions directly.

9. Applications of Micrometer
1. Manufacturing: Measuring components for machining and assembly.

2. Quality Control: Ensuring product dimensions meet design specifications.

3. Material Testing: Measuring dimensions of specimens for tensile or hardness tests.

4. Scientific Research: Precision measurement in experiments.

10. Errors in Micrometer Measurement


1. Zero Error: Occurs if the zero mark does not align properly when closed.

Positive Zero Error: Thimble zero is ahead of sleeve zero.

Negative Zero Error: Thimble zero is behind sleeve zero.

2. Parallax Error: Misreading due to improper viewing angle.

3. Instrument Wear: Affects screw and spindle precision over time.

4. Temperature Effects: Thermal expansion can alter dimensions.

Diagram of a Micrometer
Include a labeled diagram showing:

Frame

Anvil

Spindle

Screw

Sleeve

Thimble

Ratchet Stop

Lock Nut

Dial Gauge
A Dial Gauge, also known as a Dial Indicator, is a precision instrument used to measure small
linear displacements, check alignment, and ensure dimensional accuracy. It provides a visual
indication of movement through the rotation of a dial.

1. Definition of Dial Gauge


A Dial Gauge is a mechanical device that amplifies small linear displacements using a system of
gears and levers and displays the measurement on a circular dial.

2. Construction and Components

Main Parts of a Dial Gauge


1. Body: The housing containing internal mechanisms.

2. Dial: A circular graduated scale for reading measurements.

3. Pointer (Needle): Rotates over the dial to indicate the measurement.

4. Plunger (Spindle): A movable rod that senses displacement.

5. Rack and Pinion Mechanism: Converts linear motion of the plunger into rotary motion of the
pointer.

6. Bezel: A rotating outer rim for setting the zero point on the dial.

7. Contact Point: The tip of the plunger that contacts the surface being measured.

8. Mounting Lug or Stem: Used to attach the gauge to stands or fixtures.

9. Return Spring: Brings the plunger back to its original position after displacement.

3. Working Principle
The Dial Gauge works on the principle of mechanical amplification:

When the plunger moves due to displacement, its motion is transferred to a rack-and-pinion
mechanism.

This mechanism amplifies the movement and converts it into rotary motion of the pointer.

The pointer then moves over a graduated circular dial, providing a magnified reading of the
displacement.

4. Types of Dial Gauges
1. Plunger-Type Dial Gauges: Measures linear displacement using a vertically moving plunger.

2. Lever-Type Dial Gauges: Measures angular displacement, useful in constrained spaces.

3. Digital Dial Gauges: Displays measurements digitally, offering higher precision and ease of
reading.

5. Applications of Dial Gauges


1. Measuring Small Linear Displacements: Used in machining and tool setups.

2. Checking Flatness and Straightness: Ensures components meet geometric tolerances.

3. Alignment Verification: Measures misalignments in machine parts like shafts and bearings.

4. Runout Measurement: Checks concentricity or wobble in rotating components.

5. Thickness and Depth Measurement: Used in manufacturing processes for quality control.

6. Procedure for Using a Dial Gauge


1. Setup: Mount the gauge securely on a stand or fixture.

2. Zero Setting: Rotate the bezel to align the pointer with the zero mark.

3. Contact the Surface: Place the contact point against the surface to be measured.

4. Read the Dial: Observe the pointer movement on the graduated scale to determine the
displacement.

5. Repeat Measurements: For consistency, take multiple readings and average the results if
necessary.

7. Least Count of Dial Gauge


The least count of a dial gauge is the smallest measurement it can detect.

Least Count = Pitch of Plunger Movement / Number of Dial Divisions

Typical least counts range from 0.01 mm to 0.001 mm.
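As a quick check of the formula above, the least count for a common configuration can be computed directly. The pitch and division count below are typical illustrative values, not taken from any specific instrument:

```python
# Least count = pitch of plunger movement / number of dial divisions.
# Illustrative values: 1 mm of plunger travel per pointer revolution,
# 100 graduations on the dial.
pitch_mm = 1.0
dial_divisions = 100

least_count_mm = pitch_mm / dial_divisions
print(f"Least count: {least_count_mm} mm")  # 0.01 mm
```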

8. Advantages of Dial Gauges


1. High sensitivity and precision.

2. Easy to read and interpret measurements.

3. Versatile for different measurement tasks.

4. Durable and reliable in industrial environments.

9. Limitations of Dial Gauges


1. Limited to small displacements.

2. Requires proper mounting for accurate readings.

3. Susceptible to damage if handled carelessly.

4. Manual zeroing may introduce errors.

10. Errors in Dial Gauge Measurement


1. Instrumental Errors: Caused by wear or misalignment in the internal mechanism.

2. Environmental Errors: Temperature fluctuations affecting the gauge components.

3. Parallax Errors: Misreading the dial due to improper viewing angle.

4. Handling Errors: Improper contact with the surface or incorrect mounting.

Applications in Mechanical Engineering


1. Machine Tool Alignment: Ensures accuracy in lathe and milling machines.

2. Quality Control: Verifies dimensional tolerances in manufactured components.

3. Automotive Industry: Measures cylinder bore wear, crankshaft runout, and other parameters.

4. Surface Inspection: Checks surface irregularities and flatness.

Diagram of a Dial Gauge


Include a labeled diagram showing:

Dial

Pointer

Plunger

Bezel

Contact Point

Mounting Stem or Lug

Internal Rack-and-Pinion Mechanism

Slip Gauge

Slip Gauges, also known as Gauge Blocks or Johansson Gauges, are precision measuring tools
used to obtain highly accurate measurements of length, thickness, or height. They are essential
in calibration, dimensional inspection, and as reference standards in manufacturing and
metrology.

1. Definition of Slip Gauge


Slip Gauges are rectangular or square blocks made from hardened and lapped steel, ceramic, or
tungsten carbide. They are used to build up desired lengths by combining multiple blocks
through a process called wringing.

2. Construction and Materials


1. Material:

Typically made from hardened alloy steel, tungsten carbide, or ceramic for durability
and wear resistance.

Non-corrosive coatings are often applied to prevent rusting.

2. Shape:

Rectangular or square with highly polished surfaces for close adhesion during wringing.

3. Types Based on Application:

Standard Slip Gauges: Used for general industrial purposes.

Special Purpose Gauges: Designed for specific applications like angle or thread
measurement.

3. Features of Slip Gauges


1. High Precision: Typically accurate to within 0.001 mm.

2. Grade Classification: Divided into grades based on accuracy:

Grade 2: Workshop grade for general use.

Grade 1: Inspection grade for measurement rooms.

Grade 0: Calibration grade for high-precision measurements.

Grade 00: Master grade for laboratory standards.

3. Wringing Property: Slip gauges adhere to each other when lightly pressed together due to
molecular adhesion and vacuum effect.

4. Working Principle of Slip Gauges

The length of an object is measured by stacking a combination of slip gauges to match the
desired dimension. The slip gauges are "wrung" together to eliminate gaps, ensuring precision.

5. Uses of Slip Gauges


1. Calibration: Used as standards to calibrate other measuring instruments like micrometers,
vernier calipers, and dial gauges.

2. Inspection: Ensures dimensional accuracy of manufactured components.

3. Tool Setting: Adjusts tools and fixtures to the desired dimensions.

4. Precision Measurement: Used in metrology labs for length measurement and testing.

6. Types of Slip Gauges


1. Metric Slip Gauges: Graduated in millimeters.

2. Imperial Slip Gauges: Graduated in inches.

3. Angle Slip Gauges: Designed for angular measurements.

4. Special Application Gauges: Customised for specific industrial requirements.

7. Procedure for Using Slip Gauges


1. Select the Required Blocks: Based on the desired measurement, calculate the combination of
slip gauges needed.

2. Wring the Gauges: Clean the surfaces and press the gauges together in a sliding motion to
wring them securely.

3. Check the Combination: Ensure the stack matches the desired dimension.

4. Use for Measurement: Place the stack against the object to verify dimensions or calibrate
instruments.
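Step 1 above — choosing which blocks to combine — follows a standard rule of thumb: clear the finest decimal place of the target dimension first. A minimal sketch of that selection is shown below, using a simplified gauge set (an illustrative subset only, not a full 112-piece metric set, which also includes a 1.0005 mm block and other sizes):

```python
def build_stack(target_mm):
    """Greedy slip gauge selection: clear the finest decimal place first.

    A textbook-style sketch for a simplified metric set; assumes the
    target is at least ~2 mm and has at most three decimal places.
    """
    stack = []
    rem = round(target_mm, 3)

    # 1. Clear the thousandths digit with a 1.001-1.009 mm block.
    d3 = round(rem * 1000) % 10
    if d3:
        blk = round(1.0 + d3 / 1000, 3)
        stack.append(blk)
        rem = round(rem - blk, 3)

    # 2. Clear the remaining sub-0.5 mm part with a 1.01-1.49 mm block.
    frac = round(rem % 0.5, 2)
    if frac:
        blk = round(1.0 + frac, 2)
        stack.append(blk)
        rem = round(rem - blk, 2)

    # 3. Cover the rest with 25 mm-step blocks, then one 0.5-24.5 mm block.
    for blk in (100.0, 75.0, 50.0, 25.0):
        if rem >= blk:
            stack.append(blk)
            rem = round(rem - blk, 2)
    if rem:
        stack.append(rem)  # a single block from the 0.5-24.5 mm range

    return stack

print(build_stack(29.758))  # [1.008, 1.25, 25.0, 2.5]
```

For a target of 29.758 mm, the sketch wrings together 1.008 mm, 1.25 mm, 25 mm, and 2.5 mm blocks — four blocks summing exactly to the target, which keeps the stack short and the cumulative wringing error small.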

8. Advantages of Slip Gauges


1. High accuracy and precision.

2. Durable and long-lasting if properly maintained.

3. Simple to use for dimensional calibration and measurement.

4. Versatile, as they can be combined to form various lengths.

9. Limitations of Slip Gauges


1. Requires careful handling to avoid damage or wear.

2. Surface cleanliness is essential for wringing.

3. Time-consuming for setting up multiple combinations.

4. Limited to small measurement ranges unless used with accessories.

10. Maintenance of Slip Gauges


1. Store in a protective box to prevent corrosion and scratches.

2. Clean with a lint-free cloth and oil after use.

3. Periodically calibrate against master gauges to ensure accuracy.

4. Avoid exposing them to high temperatures or humidity.

Diagram of Slip Gauges


Include a labeled diagram showing:

A stack of wrung slip gauges.

Individual blocks with dimensions marked.

Example of slip gauge in use for calibration or inspection.

Sine Bar
A Sine Bar is a precision instrument used in mechanical engineering to measure and set angles
with high accuracy. It works on the principle of trigonometry, specifically the sine function,
making it a standard tool in metrology and machining.

1. Definition of Sine Bar


A Sine Bar is a straight, hardened, and precision-ground bar with two cylindrical rollers of equal
diameter fixed at specific distances. It is used to measure angles by elevating one roller and
calculating the angle using the sine function.

2. Construction of a Sine Bar


1. Body: A rigid, flat steel or hardened alloy bar with a smooth and precise top surface.

2. Cylindrical Rollers: Two rollers of equal size attached at both ends of the bar, typically
parallel to each other.

3. Standard Lengths: The distance between the centers of the rollers is standardised, typically
100 mm, 200 mm, or 300 mm.

3. Working Principle
The Sine Bar works on the trigonometric relationship:

sin(θ) = H / L

where:

H: Height of the slip gauge stack used to elevate one roller.

L: Distance between the centers of the two rollers (constant for a given sine bar).

By adjusting the height H, the desired angle θ can be set or measured accurately.

4. Types of Sine Bars


1. Plain Sine Bar: A basic sine bar without any attachments.

2. Compound Sine Bar: Used for measuring compound angles.

3. Sine Table: A sine bar integrated with a flat surface and a tilting mechanism for easier angle
adjustments.

4. Sine Center: Used for measuring angles of cylindrical objects like shafts and rollers.

5. Procedure for Using a Sine Bar

To Measure an Angle:
1. Place the sine bar on a flat surface or surface plate.

2. Position the workpiece on the sine bar.

3. Adjust the slip gauge stack under one roller until the workpiece aligns perfectly with a
reference surface.

4. Measure the height H of the slip gauge stack.

5. Use the sine formula to calculate the angle:

θ = arcsin(H / L)

To Set an Angle:
1. Calculate the required slip gauge height H for the desired angle:

H = L ⋅ sin(θ)

2. Arrange the slip gauges to the calculated height.

3. Place the sine bar on the slip gauge stack and align it with the desired orientation.
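The height calculation in step 1 can be sketched as follows (the 300 mm bar length and 30° target angle are assumed example values):

```python
import math

L = 300.0         # sine bar length in mm (assumed standard size)
theta_deg = 30.0  # desired angle in degrees (example)

# H = L * sin(theta) gives the slip gauge stack height needed under one roller.
H = L * math.sin(math.radians(theta_deg))
print(f"Required slip gauge stack height: {H:.3f} mm")  # 150.000 mm
```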

6. Applications of Sine Bar

1. Angle Measurement: Determines the angle of components or tools with precision.

2. Angle Setting: Sets up machine tools like milling and grinding machines for specific angular
cuts.

3. Inspection: Verifies the accuracy of angular features in manufactured parts.

4. Taper Measurement: Measures the taper angle of shafts or similar components.

7. Advantages of Sine Bar


1. Highly accurate and precise for angle measurement.

2. Simple to use and reliable for small angles.

3. Widely available and compatible with other metrology tools.

4. Versatile for angle setting and measurement tasks.

8. Limitations of Sine Bar


1. Limited to small angles (below 45°) due to the difficulty of handling large slip gauge stacks.

2. Requires a flat and stable surface for accurate measurements.

3. Errors may arise due to thermal expansion, incorrect slip gauge selection, or improper
alignment.

4. Not suitable for direct measurement of complex or compound angles.

Errors in Sine Bar Measurements


1. Parallax Error: Misalignment in viewing slip gauge stacks.

2. Surface Plate Irregularities: Uneven surfaces can cause inaccuracies.

3. Thermal Expansion: Variations in temperature can distort the sine bar or gauges.

4. Wear and Tear: Over time, rollers or bar surfaces may degrade, affecting accuracy.

Diagram of a Sine Bar


Include a labeled diagram showing:

The sine bar with rollers.

Placement on a surface plate.

Workpiece setup.

Slip gauge stack under one roller.

9. Maintenance of Sine Bar
1. Store in a protective box to prevent corrosion and damage.

2. Clean thoroughly before and after use to remove dirt and debris.

3. Regularly calibrate against standard reference angles.

4. Avoid subjecting it to extreme temperatures or excessive force.

Casting
Casting is a fundamental manufacturing process in which a liquid material is poured into a mold
cavity of the desired shape and allowed to solidify. Once solidified, the object is removed from
the mold and undergoes finishing processes if required. This method is widely used for making
complex shapes that would be difficult or uneconomical to produce by other manufacturing
processes.

1. Definition of Casting
Casting is a process where molten metal or other material is poured into a mold with a hollow
cavity of the desired shape and allowed to cool and solidify. The resulting part is called a casting.

2. Principle of Casting
The process is based on the principle of melting, pouring, solidification, and ejection:

1. The material is melted into a liquid state.

2. The molten material is poured into a prepared mold cavity.

3. The material solidifies by cooling.

4. The solidified part is removed from the mold for further processing.

3. Steps Involved in Casting Process


1. Pattern Making:

A pattern is a replica of the part to be cast, made from wood, plastic, or metal.

It is used to create the cavity in the mold.

2. Mold Preparation:

The mold is made using sand, metal, or ceramic, depending on the casting method.

The mold cavity is created by packing material around the pattern.

3. Melting the Material:

The material is melted in a furnace to the required temperature.

4. Pouring:

The molten material is poured into the mold cavity through a gating system.

5. Solidification and Cooling:

The material cools and solidifies into the shape of the mold cavity.

6. Removal of Casting:

The mold is broken (in expendable molds) or opened (in reusable molds) to retrieve the
casting.

7. Cleaning and Finishing:

Removing excess material, such as gates and risers, and performing surface finishing.

8. Inspection:

The casting is inspected for defects and dimensional accuracy.

4. Types of Casting Processes


1. Sand Casting:

Uses sand as the mold material, suitable for large and complex parts.

2. Die Casting:

High-pressure molten metal injection into metal molds, ideal for mass production.

3. Investment Casting (Lost Wax Casting):

Uses a wax pattern surrounded by ceramic material, which is melted out to create the
mold cavity.

4. Centrifugal Casting:

Molten metal is poured into a rotating mold, ideal for cylindrical components.

5. Permanent Mold Casting:

Utilizes reusable metal molds, commonly for non-ferrous metals.

6. Shell Mold Casting:

Uses a thin shell of sand-based mold for better surface finish and precision.

7. Continuous Casting:

Used to produce long, uniform shapes, such as rods, beams, or sheets.

5. Advantages of Casting
1. Complex Shapes: Can create intricate shapes that are difficult to machine.

2. Wide Material Range: Almost any metal or alloy can be cast.

3. Cost-Effective for Large Production: Ideal for mass production of large components.

4. Weight Reduction: Thin sections can be produced, reducing material usage.

5. Reusable Materials: Some casting methods allow reuse of mold materials.

6. Limitations of Casting
1. Surface Finish: Cast surfaces may require additional finishing.

2. Size Limitations: Extremely large parts can be challenging to cast.

3. Defects: Casting defects like porosity, shrinkage, and cracks may occur.

4. Long Cooling Time: Solidification and cooling can be time-consuming.

5. Material Wastage: Excess material in gates and risers needs to be removed.

7. Applications of Casting
1. Automotive Industry: Engine blocks, cylinder heads, and transmission cases.

2. Aerospace: Turbine blades and structural components.

3. Construction: Pipes, frames, and heavy machinery components.

4. Consumer Goods: Cookware, jewellery, and sculptures.

5. Electrical Industry: Motor housings and transformer cores.

8. Common Defects in Casting


1. Porosity: Gas trapped in the molten material forms bubbles.

2. Shrinkage: Reduction in size due to cooling, causing internal voids.

3. Cold Shut: Incomplete fusion of molten metal streams.

4. Hot Tear: Cracks caused by uneven cooling or thermal stresses.

5. Misrun: Metal solidifies before completely filling the mold cavity.

Diagram of Casting Process


Include a labeled diagram showing:

1. Mold cavity.

2. Gating system (sprue, runner, riser).

3. Pattern.

4. Pouring molten metal.

5. Final casting.

9. Comparison of Casting with Other Manufacturing Processes


| Aspect | Casting | Machining/Forging |
| --- | --- | --- |
| Complexity | Suitable for complex shapes | Limited to simpler shapes |
| Material Waste | Minimal waste with proper gating design | Higher material removal and waste |
| Initial Cost | Low for small-scale production | Higher due to tooling costs |

Carpentry
Carpentry is the branch of manufacturing that deals with the construction, shaping, and
assembly of wooden components to create structures, furniture, tools, and other items. It is one
of the oldest and most fundamental crafts in mechanical engineering and construction.

1. Definition of Carpentry
Carpentry involves the cutting, shaping, joining, and finishing of wood to create functional or
decorative items. It is performed using manual or power tools and involves a variety of
techniques depending on the project.

2. Importance of Carpentry
Carpentry plays a vital role in:

The construction industry (doors, windows, roofing).

Furniture manufacturing.

Making wooden patterns for casting in foundry work.

Repairs and maintenance tasks.

3. Tools Used in Carpentry


Carpentry tools are classified into several types:

(a) Measuring Tools:


Steel Rule: For linear measurements.

Try Square: For marking and checking right angles.

Marking Gauge: Used to mark parallel lines to the wood edge.

(b) Cutting Tools:

Saws:

Hand Saw: General-purpose cutting.

Tenon Saw: For precise cuts.

Coping Saw: For intricate shapes.

Chisels: For carving and shaping wood.

Planes: For smoothing and leveling surfaces.

(c) Drilling and Boring Tools:


Hand Drill or Power Drill: For making holes.

Auger: For boring deeper holes.

(d) Holding Tools:


Clamps: To hold pieces together during work.

Vice: Fixed to a workbench for securing wood.

(e) Striking Tools:


Hammers: Used for driving nails and shaping wood.

Mallets: Used to strike chisels without damaging their handles.

(f) Finishing Tools:


Files and Rasps: For smoothing edges.

Sandpaper: For finishing the surface.

4. Carpentry Operations
1. Marking and Measuring:

Wood is measured and marked using a steel rule, marking gauge, or try square.

2. Cutting:

Cutting involves shaping the wood using saws, chisels, and other tools to the required
size and shape.

3. Planing and Smoothing:

Uneven surfaces are levelled using a plane or file.

4. Drilling and Boring:

Holes are created using drills or augers for joining or decorative purposes.

5. Joining:

Wooden components are joined using techniques like nailing, screwing, or specialised
joints like dovetail or mortise and tenon.

6. Finishing:

Final touches like filing, sanding, polishing, or painting are applied to enhance aesthetics
and durability.

5. Types of Carpentry Joints


Carpentry relies on various joints to assemble wooden components:

1. Butt Joint: Simple end-to-end connection.

2. Lap Joint: Overlapping wood pieces.

3. Mortise and Tenon Joint: A strong joint for furniture and frames.

4. Dovetail Joint: Used for drawer construction.

5. Tongue and Groove Joint: For flooring or paneling.

6. Applications of Carpentry
1. Construction: Making doors, windows, beams, and frames.

2. Furniture: Tables, chairs, cabinets, and beds.

3. Pattern Making: Wooden patterns used in casting processes.

4. Decorative Items: Wooden carvings and art pieces.

5. Packaging: Wooden crates and pallets for transport.

7. Advantages of Carpentry
1. Versatility: Can create a wide range of products.

2. Renewable Material: Wood is a sustainable resource when sourced responsibly.

3. Ease of Work: Wood is easier to shape compared to metals or plastics.

4. Aesthetic Appeal: Wooden products are visually pleasing and customisable.

8. Limitations of Carpentry
1. Susceptibility to Decay: Wood can rot, warp, or be affected by termites.

2. Limited Strength: Less durable than metals for heavy-duty applications.

3. Fire Hazard: Wood is combustible.

4. Time-Consuming: Precision work often requires time and skill.

9. Safety in Carpentry
1. Wear protective gear like gloves, goggles, and masks.

2. Use sharp tools to avoid accidents caused by slipping.

3. Secure wood pieces with clamps before working on them.

4. Ensure proper handling of power tools.

5. Keep the work area clean and organized to avoid injuries.

10. Diagram of Carpentry Tools and Techniques


Include diagrams of:

Basic tools like a saw, chisel, and hammer.

Joints such as dovetail or mortise and tenon.

A woodworking bench setup.

Welding
Welding is a process of joining two or more similar or dissimilar materials (usually metals or
thermoplastics) by applying heat, pressure, or both, with or without filler material. It is a critical
manufacturing process widely used in construction, automotive, aerospace, and other industries
for creating strong and permanent joints.

1. Definition of Welding
Welding is a fabrication process in which two or more parts are fused together by melting the
joint interface, with or without added filler material, to form a strong joint upon cooling.

2. Principle of Welding
Welding involves:

1. Heat Generation: Heat is generated by a source (electric arc, flame, or laser) to melt the
materials at the joint.

2. Fusion: The molten materials mix and solidify to form a strong joint.

3. Addition of Filler: Optional filler material is used to enhance joint strength.

3. Types of Welding Processes


Welding is broadly classified into several methods based on the heat source:

(a) Fusion Welding (Without Pressure)

1. Arc Welding:

Uses an electric arc to generate heat.

Types:

Shielded Metal Arc Welding (SMAW): Uses consumable electrodes and flux.

Gas Metal Arc Welding (GMAW or MIG): Uses a shielding gas and consumable
electrode.

Gas Tungsten Arc Welding (GTAW or TIG): Uses a non-consumable tungsten electrode.

2. Gas Welding:

Combines fuel gases (e.g., acetylene) with oxygen to produce a flame for melting metals.

3. Resistance Welding:

Heat is generated by electrical resistance at the joint.

Examples: Spot welding, seam welding.

4. Laser Welding:

Uses a high-energy laser beam for precision welding.

5. Plasma Arc Welding:

Employs a high-velocity jet of ionised gas for deep penetration welding.

(b) Pressure Welding


1. Forge Welding:

The materials are heated and hammered together at the joint.

2. Friction Welding:

Heat is generated through mechanical friction.

3. Ultrasonic Welding:

High-frequency vibrations create heat at the joint.

4. Equipment Used in Welding


1. Power Source: Supplies electricity (AC/DC) for arc welding.

2. Electrodes: Conduct electricity and provide filler material.

3. Shielding Gas: Protects the weld pool from atmospheric contamination (e.g., argon, CO₂).

4. Torch or Welding Gun: Directs heat or arc to the joint.

5. Protective Equipment: Includes gloves, goggles, welding helmet, and apron for safety.

5. Applications of Welding
1. Construction: Bridges, pipelines, and structural frameworks.

2. Automotive: Vehicle frames, exhaust systems, and body panels.

3. Aerospace: Aircraft components and fuselages.

4. Shipbuilding: Hulls, decks, and superstructures.

5. Manufacturing: Machinery, tools, and heavy equipment.

6. Advantages of Welding
1. Strong Joints: Produces permanent and robust connections.

2. Versatility: Can join various materials and complex shapes.

3. Cost-Effective: Requires minimal materials for high-strength joints.

4. High Precision: Advanced welding techniques enable precision in manufacturing.

7. Limitations of Welding
1. Requires Skilled Labor: Quality depends heavily on operator expertise.

2. Thermal Stresses: Heat can cause distortion or cracking.

3. Not Suitable for All Materials: Some materials are difficult to weld (e.g., certain plastics,
composites).

4. Safety Concerns: Exposure to high heat, UV radiation, and fumes.

8. Common Welding Defects


1. Porosity: Gas trapped in the weld pool creates voids.

2. Cracks: Caused by rapid cooling or thermal stresses.

3. Incomplete Fusion: Lack of proper melting between base materials.

4. Undercut: Excess material is melted away, weakening the joint.

5. Slag Inclusion: Non-metallic material trapped in the weld.

9. Safety Measures in Welding


1. Wear appropriate protective gear (helmet, gloves, apron).

2. Ensure proper ventilation to avoid inhaling harmful fumes.

3. Follow guidelines for handling equipment safely.

4. Keep flammable materials away from the welding area.

5. Use insulated tools and avoid welding in damp conditions to prevent electric shocks.

10. Diagram of Welding Process


Include diagrams showing:

1. Arc Welding setup (power source, electrode, workpiece, and arc).

2. Gas Welding torch and flame.

3. Different joint designs (butt, lap, corner, T-joints).

11. Types of Welded Joints


1. Butt Joint: Two flat surfaces joined edge-to-edge.

2. Lap Joint: One piece overlaps the other.

3. T-Joint: One piece is perpendicular to the other.

4. Corner Joint: Joins two pieces at a right angle.

5. Edge Joint: Edges of two parts are joined.

12. Comparison of Welding with Other Joining Methods


| Aspect | Welding | Bolting | Riveting |
| --- | --- | --- | --- |
| Joint Strength | High (permanent) | Medium (temporary) | High (permanent) |
| Cost | Economical | Higher | Higher |
| Material Type | Metals, plastics | All materials | Mostly metals |

Lathe Machine
The lathe machine is one of the most versatile and widely used machine tools in manufacturing.
It is primarily used for machining cylindrical or rotational parts by removing excess material from
a workpiece using cutting tools.

1. Definition of Lathe Machine


A lathe machine is a machine tool that rotates a workpiece about an axis to perform operations
such as turning, facing, threading, drilling, and knurling, using various cutting tools.

2. Principle of Lathe Machine

The working principle of a lathe is based on the relative motion between the cutting tool and the
rotating workpiece. The workpiece rotates about its axis while the cutting tool, which remains
stationary or moves linearly, removes material to create the desired shape.

3. Main Components of a Lathe Machine


1. Bed:

The base of the lathe that supports all components.

Made of cast iron for stability and strength.

2. Headstock:

Contains the spindle that holds and rotates the workpiece.

Includes gear mechanisms for changing speed.

3. Tailstock:

Positioned opposite the headstock to support the workpiece.

Can hold tools like drills for axial operations.

4. Carriage:

Moves the cutting tool along the bed.

Includes the cross slide, tool post, and apron for tool movements.

5. Chuck:

Holds the workpiece securely during machining.

Types: Three-jaw (self-centering), Four-jaw (independent).

6. Lead Screw:

Used for threading operations by translating rotary motion into linear motion.

7. Feed Rod:

Controls the movement of the carriage for regular cutting operations.

8. Tool Post:

Holds the cutting tool securely.

4. Operations Performed on a Lathe Machine


1. Turning:

Removes material from the outer diameter of a workpiece to reduce its size.

2. Facing:

Produces a flat surface perpendicular to the workpiece's axis.

3. Thread Cutting:

Creates external or internal threads on a workpiece.

4. Drilling:

Creates holes along the axis of the workpiece.

5. Knurling:

Produces textured patterns on the surface for better grip.

6. Grooving:

Cuts narrow grooves on the surface of the workpiece.

7. Boring:

Enlarges an existing hole.

8. Taper Turning:

Produces a tapered (conical) surface by varying the cutting tool's angle.

5. Types of Lathes
1. Engine Lathe:

General-purpose lathe for various operations.

2. Turret Lathe:

Designed for high-volume production with a turret for holding multiple tools.

3. CNC Lathe:

Automated lathe controlled by a computer for high precision.

4. Speed Lathe:

A simple lathe used for operations like woodturning, spinning, and polishing.

5. Bench Lathe:

A small lathe mounted on a workbench for light-duty work.

6. Advantages of a Lathe Machine


1. Versatility: Can perform a wide range of operations.

2. Precision: Produces high-accuracy components.

3. Ease of Operation: Simple setup and operation for most tasks.

4. Durability: Robust design for long-term use.

5. High Efficiency: Suitable for mass production with CNC lathes.

7. Limitations of a Lathe Machine
1. Material Restrictions: Limited to rotational or cylindrical parts.

2. Initial Cost: High cost for advanced CNC lathes.

3. Space Requirement: Requires considerable space for larger models.

4. Operator Skill: Manual lathes require skilled operators for precision work.

8. Applications of Lathe Machine


1. Automotive Industry: Manufacturing shafts, axles, and other rotational components.

2. Aerospace: Producing high-precision turbine components.

3. Construction: Making columns and cylindrical fittings.

4. Manufacturing: Creating molds, patterns, and dies.

5. Furniture: Woodturning for decorative items.

9. Safety Precautions for Using a Lathe Machine


1. Wear appropriate protective gear, such as gloves and safety goggles.

2. Keep the workspace clean and free from obstructions.

3. Avoid wearing loose clothing or accessories.

4. Ensure the workpiece and tools are securely fastened.

5. Follow proper speed settings based on the operation.

10. Diagram of a Lathe Machine


Include a clear and labeled diagram showing:

1. Bed

2. Headstock

3. Tailstock

4. Carriage

5. Tool post

6. Chuck

7. Lead screw

11. Comparison with Other Machine Tools

| Aspect | Lathe Machine | Milling Machine | Drilling Machine |
| --- | --- | --- | --- |
| Primary Motion | Rotational (workpiece) | Rotational (tool) | Rotational (tool) |
| Operations | Turning, threading, etc. | Milling, slotting, etc. | Drilling, boring, etc. |
| Workpiece Shape | Cylindrical or rotational | Flat or irregular shapes | Holes and cylindrical cuts |

Drilling Machine
A drilling machine is a widely used machine tool designed to create cylindrical holes in solid
materials such as metal, wood, or plastic by using a rotating cutting tool called a drill bit. It is a
fundamental tool in manufacturing, construction, and mechanical engineering.

1. Definition of Drilling Machine


A drilling machine is a machine tool used for making cylindrical holes in workpieces by rotating a
cutting tool (drill bit) against the stationary workpiece under controlled pressure.

2. Principle of Drilling Machine


The workpiece is clamped on the machine's table, while the drill bit rotates and moves vertically
into the material to remove material and form a hole. The hole is made by the combined actions of
rotation (cutting) and feed (pushing the drill bit into the workpiece).

3. Main Components of a Drilling Machine


1. Base:

Supports the machine and provides stability.

Made of cast iron or steel.

2. Column:

A vertical structure that supports the arm, table, and spindle.

3. Table:

Provides a surface for clamping the workpiece.

Can be adjusted vertically and rotated for positioning.

4. Spindle:

Holds and rotates the drill bit.

Driven by a motor or belt mechanism.

5. Drill Head:

Contains the spindle, motor, and feed mechanism.

6. Drill Chuck:

A device that holds the drill bit firmly in the spindle.

7. Power Feed Mechanism:

Allows the drill to move vertically into the material at a controlled rate.

8. Motor:

Provides the required power to rotate the drill bit.

4. Types of Drilling Machines


1. Portable Drilling Machine:

Handheld and used for small operations in construction or repair.

2. Bench Drilling Machine:

Fixed on a workbench and suitable for light-duty operations.

3. Radial Drilling Machine:

Features a radial arm that can rotate, move vertically, and swing, allowing flexibility for
large workpieces.

4. Column Drilling Machine:

Heavy-duty machine with a fixed column, suitable for precision drilling.

5. Gang Drilling Machine:

Multiple drill heads mounted on the same table for simultaneous operations.

6. Automatic Drilling Machine:

Fully automated for high-speed production tasks.

7. Deep Hole Drilling Machine:

Specialised for drilling deep holes with high accuracy.

5. Operations Performed on a Drilling Machine


1. Drilling:

Produces a cylindrical hole in a solid material.

2. Reaming:

Enlarges an existing hole to improve dimensional accuracy and surface finish.

3. Tapping:

Creates internal threads in a hole.

4. Boring:

Enlarges an existing hole to precise dimensions.

5. Counterboring:

Creates a stepped hole for bolt heads or nuts.

6. Countersinking:

Produces a conical surface at the hole's top for flat-head screws.

7. Spot Facing:

Provides a flat surface around a hole for seating washers or bolt heads.

8. Drilling at an Angle:

Accomplished using an adjustable table or workpiece fixture.

6. Applications of Drilling Machines


1. Manufacturing:

Drilling holes for assembly and machining processes.

2. Automotive:

Creating precision holes in engine blocks and parts.

3. Construction:

Drilling holes in concrete, wood, and metal for fittings.

4. Aerospace:

High-precision drilling for lightweight and durable components.

5. Shipbuilding:

Drilling holes for structural assembly.

7. Advantages of Drilling Machine


1. Versatility:

Can perform multiple operations with different drill bits.

2. Efficiency:

Quick and precise drilling with minimal manual effort.

3. Automation:

Some machines support CNC control for high-speed production.

4. Flexibility:

Suitable for a wide range of materials, including metal, wood, and plastic.

8. Limitations of Drilling Machine


1. Restricted to Holes:

Cannot machine complex shapes or profiles.

2. Material Removal:

Generates waste material (chips).

3. Accuracy Limitation:

Precision may vary depending on the machine and operator skill.

9. Drilling Machine Safety Precautions


1. Use Proper PPE:

Wear safety goggles, gloves, and protective clothing.

2. Secure the Workpiece:

Clamp the material firmly to prevent movement.

3. Inspect Drill Bits:

Ensure bits are sharp and undamaged before use.

4. Avoid Loose Clothing:

Prevent clothing or accessories from getting entangled.

5. Monitor Speed:

Use appropriate spindle speed for the material and operation.

6. Turn Off Power:

Always switch off the machine before changing tools or cleaning.

10. Diagram of Drilling Machine


Include a detailed diagram with labeled components such as:

Base

Column

Table

Spindle

Chuck

Drill bit

Motor

11. Comparison of Drilling Machine with Other Machine Tools


| Aspect | Drilling Machine | Lathe Machine | Milling Machine |
| --- | --- | --- | --- |
| Primary Motion | Rotational (drill bit) | Rotational (workpiece) | Rotational (tool) |
| Applications | Hole creation | Cylindrical machining | Slotting, facing, cutting |
| Workpiece Shape | Any shape | Cylindrical or rotational | Flat or irregular shapes |
