Robotics Book

This document contains the course content for a robotics course, comprising five units: Unit I covers fundamentals of robotics such as definitions, anatomy, coordinate systems, classifications, applications and drives/motors. Unit II covers sensors and machine vision, including position, range, touch, analog/binary sensors and image processing techniques. Unit III covers robot kinematics including forward/inverse kinematics, Jacobians, velocity/forces and trajectory generation. Unit IV covers robot programming using embedded C, and Unit V covers implementation and robot economics.


IFET COLLEGE OF ENGINEERING

(An Autonomous Institution)


DEPARTMENT OF INFORMATION TECHNOLOGY

19UITPE403 – ROBOTICS
(REGULATION-2019)
(For Fourth Semester – Second Year)

Prepared by: Ms. S. Sivaranjani, AP/ECE
Approved by: Mrs. M. Margarat, HOD/ECE

Course Co-ordinator

Dr. J. Vidhya ASP/ECE


TABLE OF CONTENTS                                                        PAGE NO.
UNIT - I FUNDAMENTALS OF ROBOTICS
1.1 INTRODUCTION 1
1.2 DEFINITION 1
1.3 ROBOT ANATOMY 2
1.3.1 Manipulator 2
1.3.2 Robot Links 3
1.3.3 Robot joints 3
1.3.4 End Effectors 5
1.3.5 Kinematics 6
1.4 CO-ORDINATE SYSTEM 6
1.4.1 Cylindrical Configuration 6
1.4.2 Polar Configuration 7
1.4.3 Jointed Arm Configuration 8
1.4.4 Cartesian Co-ordinate configuration 9
1.4.5 Selective Compliance Articulated Robot Arm (SCARA) 10
1.5 WORK ENVELOPE 10
1.6 CLASSIFICATION OF ROBOTS 11
1.7 TYPES OF ROBOTS 12
1.8 SPECIFICATIONS 13
1.9 ROBOT PARTS AND FUNCTIONS 15
1.10 NEED OF ROBOT AND ITS APPLICATION 16
1.11 ROBOTS-DIFFERENT APPLICATIONS 17
1.11.1 Industrial Applications 17
1.11.1.1 Auto Industry 17
1.11.2 Medical Applications 18
1.11.2.1 Future Applications 18
1.11.3 Future Manufacturing Applications 19
1.11.4 Service Industry and other Applications 20
1.12 DRIVES AND MOTORS 20
1.13 PID CONTROLLERS 23
1.13.1 Proportional Controller 23
1.13.2 Proportional Integral controller 24
1.13.3 Proportional Integral Derivative controller 24

UNIT-II SENSORS AND MACHINE VISION


2.1 INTRODUCTION 26
2.2 REQUIREMENTS OF A SENSOR 26
2.3 POSITION SENSORS 27
2.4 RANGE SENSORS 28
2.4.1 Triangulations Principle 28
2.4.2 Structured Lighting Approach 29
2.5 TOUCH SENSORS 31

2.6 BINARY SENSORS 32
2.7 ANALOG SENSORS 32
2.8 WRIST SENSORS 32
2.9 COMPLIANCE SENSORS 33
2.10 SENSING AND DIGITIZING IMAGE DATA 33
2.11 SIGNAL CONVERSION AND IMAGE STORAGE 34
2.11.1 Lighting Techniques 35
2.12 IMAGE PROCESSING AND ANALYSIS 40
2.12.1 Image data reduction 41
2.12.2 Noise reduction operations 41
2.12.3 Segmentation 43
2.12.4 Feature Extraction technique 46
2.13 OBJECT RECOGNITION 47
2.13.1 Template matching technique 47
2.13.2 Structural Technique 48
2.13.3 Thresholding 48
2.13.4 Region Growing Methods 49
2.13.5 Edge Detection 49
2.14 APPLICATIONS 50
2.15 VISUAL SERVOING AND NAVIGATION 50
2.15.1 Wheeled Vehicles 51
2.15.2 Walking Machines 51

UNIT - III ROBOT KINEMATICS


3.1 INTRODUCTION 52
3.2 FORWARD KINEMATICS 53
3.3 INVERSE KINEMATICS 53
3.4 DIFFERENCE BETWEEN FORWARD KINEMATICS AND INVERSE KINEMATICS 53
3.4.1 Two Frame kinematic Relationship 54
3.4.2 Transformations 54
3.4.3 Homogeneous Transformation 54
3.4.4 Homogeneous Representation 54

3.5 FORWARD AND REVERSE KINEMATICS OF MANIPULATORS WITH TWO, THREE DEGREES OF FREEDOM (IN 2 DIMENSION) 56
3.5.1 Robot Motion Analysis 56
3.5.2 Generally for robots the location of the end-effector can be defined in two systems 56
3.5.3 Forward Kinematics of Manipulators with Two Degrees of Freedom (In 2 Dimension) 56
3.5.4 Backward Kinematics of Manipulators with Two Degrees of Freedom (In 2 Dimension) 60

3.6 FORWARD AND BACKWARD KINEMATICS OF MANIPULATORS WITH THREE DEGREES OF FREEDOM (IN 2 DIMENSION) 60

3.6.1 Forward kinematics (Three degrees of freedom in 2D manipulator) 61

3.6.2 Reverse Kinematics (Three degrees of freedom in 2D manipulator) 62

3.7 FORWARD AND BACKWARD KINEMATICS OF MANIPULATORS WITH FOUR DEGREES OF FREEDOM (IN 3 DIMENSION) 62
3.7.1 Forward Kinematics 63
3.7.2 Reverse kinematics 64
3.8 JACOBIANS 65
3.9 VELOCITY AND FORCES 65
3.9.1 Position Control 65
3.9.2 Force control 66
3.9.3 Angular Velocity 66
3.9.4 Linear velocity 66
3.9.5 Static forces in manipulators 67
3.10 MANIPULATOR DYNAMICS 68
3.11 TRAJECTORY GENERATOR 68
3.11.1 Trajectory planning 69
3.11.2 Trajectory Planning For Robotics 69
3.11.3 Joint Space Scheme 69
3.12 MANIPULATOR MECHANISM DESIGN 69
3.12.1 Basing the Design on Task Requirements 69
3.12.2 Number of degrees of freedom 69
3.12.3 Kinematic configuration 71
3.12.4 Wrists 73
3.12.5 Designing well-conditioned workspaces 75
3.12.6 Redundant and Closed-Chain Structures 76
3.12.7 Reduction and transmission system 78
3.12.8 Stiffness and deflections 79

UNIT - IV ROBOT PROGRAMMING


4.1 INTRODUCTION 83
4.2 PROGRAMMING EMBEDDED SYSTEM IN C 83
4.2.1 Embedded System 83
4.2.2 C Language 83
4.2.3 Embedded C 83
4.2.4 Programming Embedded System Using Embedded C 84
4.3 READING SWITCHES 86
4.3.1 Basic Techniques for Reading from Port Pins 87
4.4 MAKING SENSE OF ACTUATORS 88

4.4.1 Hydraulic Actuators 89
4.4.2 Pneumatic Actuators 90
4.4.3 Electric Actuators 91
4.5 UNDERSTANDING MICROCONTROLLERS 92
4.5.1 Specialized Features in a Microcontroller 92
4.5.2 Programming Microcontroller 93
4.5.3 Future hold of Microcontroller 94
4.6 CHOOSING A MICRO CONTROLLER 94
4.6.1 Types of Motor Controller 94
4.6.2 Choosing a Motor Controller 95
4.7 USING THE SERIAL/PARALLEL INTERFACE CONTROLLING OUR ROBOT 96
4.7.1 RS-232 Protocol 96
4.7.2 Parallel Port 97
4.8 USING SENSORS 98
4.8.1 Tactile Sensors 98
4.8.2 Touch Sensors 98
4.8.3 Force Sensors 98
4.9 GETTING THE RIGHT TOOL 99
4.10 ASSEMBLING A ROBOT 101
4.10.1 Assembling the Robot Components 102
4.11 PROGRAMMING ROBOT 103
4.11.1 Methods of Robot Programming 103
4.12 ROBOT PROGRAMMING LANGUAGE 104
4.13 VAL PROGRAMMING 106
4.14 END EFFECTOR 108
4.14.1 End Effector Commands 108

UNIT - V IMPLEMENTATION AND ROBOT ECONOMICS


5.1 INTRODUCTION 111
5.2 MATERIAL HANDLING 111
5.3 AUTOMATED GUIDED VEHICLE SYSTEMS (AGVS) 112
5.3.1 Steering Control 113
5.3.2 Path Decision 113
5.3.3 Frequency Select Mode 114
5.3.4 Path Select Mode 114
5.3.5 Magnetic Tape Mode 114
5.3.6 Traffic Control 114
5.3.7 Forward Sensing Control 114
5.3.8 Combination Control 115
5.3.9 System Management 115
5.3.10 Types of AGVs 115
5.4 BATTERY CHARGING 116

5.5 COMPONENTS OF AN AGV 116
5.6 APPLICATIONS OF AGV 117

5.7 PERFORMANCE MEASURES OF MATERIAL HANDLING 117


5.8 VARIOUS STEPS OF IMPLEMENTING A ROBOT 118

5.9 SAFETY CONSIDERATION FOR ROBOT OPERATIONS 120


5.10 ECONOMICS ANALYSIS OF ROBOT 121
5.10.1 Type of Robot Installation 122
5.10.2 Cost of data required for the analysis 122
5.11 WEATHER MONITORING SYSTEM 123
5.11.1 List of components used for the design 123
5.11.2 Methodology and System Design 126
5.11.3 Flowchart for program design 126
5.12 MODERN ROBOTS 128
5.12.1 Future Application and Challenges 128
5.12.2 Challenges 132
5.13 CASE STUDY 134

CHAPTER I
FUNDAMENTALS OF ROBOTICS
1.1 INTRODUCTION
Robots are devices that are programmed to move parts or to do work with a tool. Robotics is a
multidisciplinary engineering field dedicated to the development of autonomous devices, including
manipulators and mobile vehicles.
Roboticists develop man-made mechanical devices that can move by themselves, whose
motion must be modeled, planned, sensed, actuated and controlled and whose motion behavior can be
influenced by “programming”. Robots are called “intelligent” if they succeed in moving in safe
interaction with an unstructured environment, while autonomously achieving their specified tasks.
Robotics is, to a very large extent, all about system integration, achieving a task by an
actuated mechanical device, via an “intelligent” integration of components, many of which it shares
with other domains, such as systems and control, computer science, character animation, machine
design, computer vision, artificial intelligence, cognitive science, biomechanics, etc. In addition, the boundaries of robotics cannot be clearly defined, since its “core” ideas, concepts and algorithms are being applied in an ever-increasing number of “external” applications and, vice versa, core technology from other domains (vision, biology, cognitive science or biomechanics, for example) is becoming a more crucial component of modern robotic systems.

1.2 DEFINITION
1. A robot is “an automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which may be either fixed in place or mobile, for use in industrial automation applications”.
2. The RIA (Robotics Industries Association) has officially given the definition for Industrial Robots.
According to RIA, “An Industrial Robot is a reprogrammable, multifunctional manipulator designed
to move materials, parts, tools, or special devices through variable programmed motions for the
performance of a variety of tasks.”
Reprogrammable means that the machine must be capable of being reprogrammed to perform a new or different task, or to change the motion of the arm or tooling.
Multifunctional emphasizes the fact that a robot must be able to perform many different functions depending on the program and tooling currently in use.
3. A robot is a system that combines mechanical, electronic and electrical parts to implement one or
more functions. Robot control is done with help of electric and electronic circuitry. To obtain the
desired behavior, the control components vary current and voltage throughout the circuitry.
4. Robots can be implemented with discrete components (resistances, capacitors, and transistors).
However, there are several integrated circuits (ICs) that make it easier to design robots (logic gates,
memory, microcontrollers, LCD interface chips, operational amplifiers, timers, etc.).
5. Children's view: A robot is a one-armed, blind idiot with limited memory which cannot speak, see, or hear. A robot is a machine which can be programmed to do a variety of tasks, in the same way that a computer is an electronic circuit which can be programmed to do a variety of tasks.

1.3 ROBOT ANATOMY
The anatomy of a robot is also known as the structure of the robot. The mechanical structure of a robot is like the skeleton of the human body. Robot anatomy is, therefore, the study of the structure of the robot, that is, the physical construction of the manipulator.

The mechanical structure of a manipulator consists of rigid bodies (links) connected by joints, and is segmented into an arm that ensures mobility and reachability, a wrist that confers orientation, and an end effector that performs the required task. Figure 1.1 shows the base, arm, wrist and end effector that make up the anatomy of a robot.

Fig: 1.1 Robot Anatomy

Industrial robots resemble the human arm in physical structure. Like the hand attached to the human body, the robot manipulator, or robot arm, is attached to the base. The chest, upper arm and forearm of the human body correspond to the links in a robot manipulator. The wrist, elbow and shoulder of the human arm are represented by the joints in the robot arm. Because the industrial robot arm is comparable to the human arm, such robots are also known as “anthropomorphic” or “articulated” robots.
Some of the key representations of robot anatomy are,
 Manipulators
 Robot links
 Robot joints
 End Effectors
 Kinematics

1.3.1 Manipulator
The manipulator of a robot is built by integrating links and joints. The joints provide relative motion and are divided into two types, linear and rotary, whereas the links are the rigid members between the joints. In the body and arm, the manipulator moves the tools within the work volume; in the wrist, it orients the tools. Figure 1.2 shows the structure of a manipulator.

Fig 1.2 Robot Manipulators

1.3.2 Robot Links


The two adjacent joint axes of a robotic manipulator are connected and defined by a rigid body called a “link”, which maintains a fixed relationship between the two joint axes through a kinematic function. The relationship is described by two variables: the length of the link, ‘a’, and the twist of the link, ‘α’. Links are numbered starting from the fixed base of the manipulator, which is called link 0. The first moving rigid body is link 1. Figure 1.3 below shows the joints and links of a robot.
Design considerations of a link are:
 Strength and stiffness of link
 The material used for fabrication
 The weight and inertia
 The location of the bearings
 The selection of the type of bearing
 The fits and tolerances in the joint
 The external shape and aesthetics.
 The friction and lubrication
1.3.3 Robot Joints
Robot joints perform the sliding and rotating movements of a component. They help the links make different kinds of movements, similar to the human body, and provide relative motion between two parts. Most industrial robots use five different types of mechanical joints:
 Rotational Joints
 Linear Joints
 Twisting Joints
 Orthogonal Joints
 Revolving Joints

Fig: 1.3 Links and joints
Rotational joints

Fig: 1.4 Rotational Joints

Figure 1.4 represents the R-joint (rotational joint). This type allows rotary motion about an axis that is perpendicular to the axes of the input and output links.
Linear joints
It is represented by the letter L and shown in figure 1.5. The linear joint performs a translational, sliding movement, which can be achieved in several ways, such as a telescoping mechanism or a piston. The axes of the two links must be parallel to achieve the linear movement.

Fig: 1.5 Linear joints

Twisting joints
The twisting joint is referred to as a T-joint and is shown in figure 1.6 below. This joint makes a twisting motion between the output and input links; during this motion, the output link axis is parallel to the rotational axis. The output link rotates in relation to the input link.
Orthogonal joints
This orthogonal joint, which is shown in figure 1.7, is represented by the symbol O. It is similar to the linear joint in providing linear motion; however, the input and output links are perpendicular to each other.

Fig: 1.6 Twisting joints

Fig: 1.7 Orthogonal joints

Revolving Joints
The revolving joint is generally known as a V-joint and is shown in figure 1.8. This joint provides rotational motion. Here the output link axis is perpendicular to the rotational axis, and the input link is parallel to the rotational axis. As with the twisting joint, the output link spins about the input link.

Fig: 1.8 Revolving joints
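The five joint types and their letter symbols can be summarized in a small C sketch. The type and function names below are illustrative, not from any robot controller API:

```c
/* The five mechanical joint types described above, with the letter
   symbols used in this section. */
typedef enum {
    JOINT_LINEAR,      /* L: translational, link axes parallel          */
    JOINT_ORTHOGONAL,  /* O: translational, link axes perpendicular     */
    JOINT_ROTATIONAL,  /* R: rotary, axis perpendicular to both links   */
    JOINT_TWISTING,    /* T: rotary, output link parallel to the axis   */
    JOINT_REVOLVING    /* V: rotary, input link parallel to the axis,
                             output link perpendicular to it            */
} JointType;

/* Returns 1 if the joint produces translation, 0 if it produces rotation. */
int joint_is_linear(JointType t)
{
    return t == JOINT_LINEAR || t == JOINT_ORTHOGONAL;
}
```

Grouping joints this way mirrors the text: L and O joints provide linear relative motion, while R, T and V joints provide rotary relative motion.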

1.3.4 End Effectors


The hand of a robot is known as the end effector. Grippers and tools are the two significant types of end effectors. Grippers are used to pick and place objects, while tools are used to carry out operations such as spray painting and spot welding on a workpiece. Robotic end effectors are also known as robotic peripherals, robotic accessories, robotic tools, end-of-arm tooling (EOAT) or tool tips.
It is categorised into two major types:
1. Grippers: Grippers grasp and manipulate objects during the work cycle. The objects grasped are work parts that need to be loaded or unloaded from one station to another. Grippers may be custom-designed to suit the physical specifications of the work parts they have to grasp. The different types of grippers used in industrial robots are:
 Mechanical grippers: Two or more fingers that can be actuated by robot controller to open and
close on a work part.
 Vacuum gripper: Suction cups are used to hold flat objects.
 Magnetised devices: Making use of the principles of magnetism, these are used for holding

ferrous work parts.
 Adhesive devices: Deploying adhesive substances these hold flexible materials, such as fabric.
 Simple mechanical devices: For example, hooks and scoops.
 Dual Grippers: Mechanical gripper with two gripping devices in one end effectors for machine
loading and unloading. Reduces cycle time per part by gripping two work parts at the same time.
 Interchangeable fingers: Mechanical gripper whereby, to accommodate different work part
sizes, different fingers may be attached.
 Sensory feedback fingers: Mechanical gripper with sensory feedback capabilities in the fingers
to aid locating the work part and to determine correct grip force to apply (for fragile work parts).
 Multiple fingered grippers: Mechanical gripper with the general anatomy of the human hand.
 Standard grippers: Mechanical grippers that are commercially available, thus reducing the need
to custom design a gripper for each separate robot application.
2. Tools: Tools are used to perform processing operations on the work part. Typically the robot uses the tool relative to a stationary or slowly moving object.
Examples of tools used as end effectors by robots to perform processing applications include:
 Spot welding gun
 Arc welding tool
 Spray painting gun
 Rotating spindle for drilling, routing, grinding, etc.
 Assembly tool (e.g. automatic screwdriver)
 Heating torch
 Water-jet cutting tool

1.3.5 Kinematics
Kinematics is concerned with the arrangement of robot links and joints. It is also used to describe robot motion.

1.4 CO-ORDINATE SYSTEM
A coordinate system defines a plane or space by axes from a fixed point called the origin.
Robot targets and positions are located by measurements along the axes of coordinate systems. A
robot uses several coordinate systems, each suitable for specific types of jogging or programming.
Robots are mostly divided into four major configurations based on their appearance, size, etc. The following are the different configurations of the co-ordinate system.
 Cylindrical Configuration
 Polar Configuration
 Jointed Arm Configuration
 Cartesian Co-ordinate Configuration

1.4.1 Cylindrical Configuration


This kind of robot incorporates a slide in the horizontal position and a column in the vertical position, with a robot arm at the end of the slide. The slide can move up and down with the help of the column, and the arm can reach into the work space with a rotary movement, like a cylinder. Hence it contains two linear motions and one rotational motion, as
shown in the figure 1.9.
Example: GMF Model M1A Robot.

Fig 1.9 Cylindrical configuration


Advantages:
 Cylindrical configuration has increased rigidity.
 It has capacity of carrying high payloads.
 Larger work envelope than Cartesian configuration.
 Easy to program off-line.
Disadvantages:
 Repeatability and accuracy lower in direction of rotary movement.
 More sophisticated control system required than Cartesian.
 Horizontal motion is circular only.
 Restriction on the compatibility with other types of arms in a common workspace.
Applications
 Coating application
 Die casting
 Foundry and forging applications
 Machine loading and unloading

1.4.2 Polar Configuration:


The polar configuration of robot, shown in figure 1.10, possesses an arm that can move up and down. It comprises a rotational base along with a pivot, and has one linear and two rotary joints, which allow the robot to operate in a spherical work volume. It is also called a spherical-coordinate robot, having one linear and two rotary motions.
Example: Unimate 2000 Series Robot.

Fig 1.10 Polar Configuration

Advantages:
 Long reach capability in the horizontal position.
 Simple design
 High payloads
 Can be light in weight
Disadvantages:
 Vertical reach is low
 Lower mechanical rigidity
 Large and variable torques on joints 2 and 3, which create a counterbalance problem
 Positional error is proportional to the radius at which the arm is operating.
Applications:
 Injection molding
 Stacking and Unstacking
 Forging
 Material transfer

1.4.3 Jointed Arm Configuration:


The arm in this configuration of robot looks almost like a human arm, as shown in figure 1.11. It has three rotary joints and three wrist axes, which together form six degrees of freedom. As a result, it can be controlled to reach any adjustment in the work space. These robots are used for performing several operations such as spray painting, spot welding, arc welding, and more.
Example: Cincinnati Milacron T3 776 Robot

Fig: 1.11 Jointed arm configuration


Advantages:
 Increased flexibility
 Huge work volume
 Compatible with other robots working in common work space.
 Quick operation.
Disadvantages:
 Very expensive
 Difficult operating procedures

 Plenty of components.
 Less stable as arm approaches maximum reach
Applications
 Automatic assembly
 In- process inspection
 Machine vision
 Painting and welding

1.4.4 Cartesian Co-ordinate configuration:


These robots are also called XYZ robots, because they are equipped with three linear (prismatic) joints moving along the X, Y and Z axes. The robot operates within a rectangular work space by means of these three joint movements, as seen in figure 1.12. It is capable of carrying high payloads with the help of its rigid structure. It is mainly used in functions such as pick and place, material handling, and loading and unloading. This configuration also goes by the name of gantry robot.
The important aspects of Cartesian Co-ordinate configuration are:
 The three directions X, Y and Z are orthogonal to each other.
 X- Coordinate axis may represent left and right motion
 Y- Coordinate axis may represent forward and backward motion.
 Z- Coordinate axis may represent up and down motion.
Example: Overhead crane movement and IBM 7565 Robot.

Fig: 1.12 Cartesian co-ordinate configurations

Advantages:
 High accuracy and speed
 Lower cost
 Simple operating procedures
 High payloads
Disadvantages:
 Less work envelope
 Reduced flexibility
 Limited in movement
 Restriction on the compatibility with other types of arms in a common workspace

Applications:
 Pick and place operation.
 Adhesive application.
 Assembly and sub assembly
 Nuclear material handling
 Welding and surface finishing

1.4.5 Selective Compliance Articulated Robot Arm (SCARA)


It combines Cartesian linear motion with the rotation of an articulated system, creating a new type of motion. It consists of two revolute joints followed by a prismatic joint, as shown in figure 1.13. All three joint axes remain parallel to each other and point along the direction of gravity. This type of robot is used in the electronics field and for the assembly of parts in a plane; SCARA robots have the easiest integration and are best for the majority of such applications.

Fig: 1.13 SCARA Robots

1.5 WORK ENVELOPE


Work envelope is the shape created when a manipulator reaches forward, backward, up and
down. These distances are determined by the length of a robot's arm and the design of its axes. Each
axis contributes its own range of motion.

The workspace can also be defined as the range of motion over which a robot arm can move. In practice, it is the set of points in space that the end effector can reach. The size and shape of the work envelope depend on the coordinate geometry of the robot arm and also on the number of degrees of freedom. Some work envelopes are flat, confined almost entirely to one horizontal plane, as shown in figure 1.14; others are cylindrical or spherical. A robot can only perform within the confines of its work envelope. Still, many robots are designed with considerable flexibility; some have the ability to reach behind themselves. Gantry robots defy the traditional constraints of work envelopes: they move along track systems to create large work spaces. Figure 1.15 shows the work envelopes of robots with different configurations.

Reach envelope: A three-dimensional shape that defines the boundaries that the robot manipulator can reach.
Maximum envelope: The envelope that encompasses the maximum designed movements of all robot
parts, including the end effector, work piece and attachments.
Restricted envelope: The portion of the maximum envelope to which a robot is restricted by limiting devices.

Operational envelope: The restricted envelope that is used by the robot, while performing its
programmed motions.

Fig: 1.14 Work Envelope

Fig: 1.15 Work envelope for different types of robot configuration.

1.6 CLASSIFICATION OF ROBOTS


Robots can be classified according to configuration, type of control, drive, movement, application, degrees of freedom and sensory system.
(a) Physical Configuration
 Cartesian coordinate configuration
 Cylindrical coordinate configuration

 Polar coordinate configuration
 Jointed arm configuration
 SCARA.
(b) Control System
 Point to point robots
 Straight line robots
 Continuous robot
(c) Movement
 Fixed robot
 Mobile robot
 Walking or legged robot
(d) Types of Drive
 Pneumatic drive
 Hydraulic drive
 Electric drive
(e) Application
 Manufacturing
 Handling
 Testing
(f) Degrees of Freedom
 Single degree of freedom
 Two degree of freedom
 Three degree of freedom
 Six degree of freedom
(g) Sensory Systems
 Simple and blind robot
 Vision robot
 Intelligent robot
(h) Capabilities of Robot System
 External robot control and communication
 System parameters
 Program control
 Control for the end effector
 Program debug and simulation
 Ability to move between points in various ways

1.7 TYPES OF ROBOTS


The following are the different types of robots.
(a) Industrial Robot
 They have arms with grippers attached that can grip or pick up various objects.
 They are used for pick-and-place operations.

 The robots can be programmed and computerized.
 Sensory, welding and assembly robots usually have a self contained micro or minicomputer.
(b) Laboratory Robot
 They take many shapes and perform many tasks.
 They may have microcomputer brains, multi-jointed arms, or advanced vision or tactile senses.
 Some of these may be mobile and others stationary.
(c) Explorer Robots
 They are used to go where humans cannot go or fear to tread, e.g. to explore caves, dive far deeper underwater and rescue people in sunken ships.
 They are sophisticated machines that have sensory systems and are remotely controlled.
(d) Hobbyist Robots
 Most hobbyist robots are mobile and made to operate by rolling around on wheels propelled by electric motors controlled by an on-board microprocessor.
 Most hobbyist robots are equipped with speech synthesis and speech recognition systems.
 They may have an arm or arms and resemble a person in appearance.
(e) Class Room Robots
 They are developed to assist the instructor in various aspects of the teaching-learning process.
(f) Educational Robot
 They have the ability to speak and respond to the spoken word.
 They can be used to entertain people at various events or to operate revolving advertisements.
(g) Tele-Robots
 Tele robots are guided by human operators through remote control.

1.8 SPECIFICATIONS
Robots are broadly classified on the basis of drive system type, work space geometry and movement control technique. Apart from these, specific characteristics are provided to the customer that are useful in selecting a robotic manipulator precisely for the required application.
(i) Number of Axes
The number of axes of a robotic manipulator is decided by the translatory movements of the links along particular directions and/or the rotational motions about specific axes. The general axes required to achieve an arbitrary position for the wrist and any specific orientation for the tool or gripper are given in table 1.1.

Table 1.1 General Robotic Axes


Type of axis     Axes      Function
Major            1 to 3    Positioning the wrist
Minor            4 to 6    Orienting the gripper
Redundant        7 to n    Avoiding obstacles

The movements assigned to the links that aid in positioning the wrist are the major axes, numbered 1 to 3, which can be regarded as independent axes of motion. Actuating the tool and gripper fingers is the function of mechanisms whose movements are not considered to be about or along independent axes; these are the minor axes, numbered 4 to 6.

Obstacles within the work envelope are tackled by one or more redundant axes assigned to the manipulator links. Incorporating redundant axes adds extra complexity to the design of the robot mechanism.
(ii) Capacity
Capacity is the load-carrying ability of the robot within the allowed deflection of the manipulator end. It depends upon the synthesis of the manipulator dimensions based on the statics and dynamics of the forces acting on the manipulator.
(iii) Speed
Speed is the distance moved by the tool tip in unit time. The time required to execute a periodic motion while performing work can be one measure of speed. A higher speed may be a requisite in high-volume production and puts a limit on the capacity of the robot.
(iv) Reach and stroke
The reach and the stroke are measures of the dimensions of the work volume. They can be horizontal or vertical in the sense of movement.
(v) Operating Environment
The nature of the surroundings in which a particular robot performs its work is specific to the application. A robot applied to a job can encounter the following types of operating environments:
 Dangerous to human beings
 Unhealthy in nature
 Harsh and difficult to access
 Complex and contaminated
 Extremely clean and dustless
 Ordinary and workable
(vi) Tool Orientation
The minor axes of movement determine the orientation of the tool or gripper within the work envelope described by the major axes of motion. One tool-orientation convention is to specify the yaw-pitch-roll (YPR) of the end effector or tool, as in aircraft motion. Figure 1.16 below shows the axes.
(a) Pitch:
The oscillating movement about one of the transverse axis i.e., X-axis is the ‘pitch’
(b) Yaw:
The rotation about the other transverse axis, i.e., Y-axis is the ‘Yaw’
(c) Roll:
The rotation of the tool about longitudinal axis, i.e., Z-axis is the ‘Roll’

Fig: 1.16 Tool orientation

(d) Notation TRL:
It consists of a sliding arm (L joint) actuated relative to the body, which can rotate about both a
vertical axis (T joint) and horizontal axis (R joint).
(e) Notation TLO:
It consists of a vertical column, relative to which an arm assembly is moved up or down; the arm can be moved in or out relative to the column.
(f) Notation LOO:
It consists of three sliding joints, two of which are orthogonal. Other names include rectilinear robot and x-y-z robot.
(vii) Speed of Motion
1. Point-to-point (PTP) control robot: capable of moving from one point to another. The locations are recorded in the control memory. PTP robots do not control the path taken to get from one point to the next. Common applications include component insertion, spot welding, hole drilling, machine loading and unloading, and crude assembly operations.
2. Continuous-path (CP) control robot: with CP control, the robot can stop at any specified point
along the controlled path. All the points along the path must be stored explicitly in the robot’s control
memory. Typical applications include spray painting, finishing, gluing, and arc welding operations.
3. Controlled-path robot: the control equipment can generate paths of different geometry such as straight lines, circles, and interpolated curves with a high degree of accuracy. All controlled-path robots have a servo capability to correct their path.
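The distinction between PTP and path-controlled motion can be sketched with a simple straight-line interpolator. The coordinates and step count below are arbitrary illustrative values; a PTP robot would be commanded only with the end point, while a controlled-path robot generates and follows the intermediate points.

```python
def linear_path(start, end, steps):
    """Intermediate Cartesian points along a straight line, as a
    controlled-path robot would follow; a PTP robot stores only `end`
    and leaves the route between points unspecified."""
    path = []
    for i in range(steps + 1):
        t = i / steps  # interpolation parameter running from 0 to 1
        path.append(tuple(s + t * (e - s) for s, e in zip(start, end)))
    return path

# Straight-line move from the origin to (10, 5, 0) in 5 segments
waypoints = linear_path((0.0, 0.0, 0.0), (10.0, 5.0, 0.0), 5)
```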
(viii) Payload
Maximum payload is the weight the robot wrist can carry, including the end-of-arm tooling (EOAT) and the workpiece. It varies across robot applications and models. Determining your payload requirements is one way to narrow down your robot search.

1.9 ROBOT PARTS AND FUNCTIONS


 The controller is the "brain" of the industrial robotic arm and allows the parts of the robot to operate together. It works as a computer and allows the robot also to be connected to other systems. The robotic arm controller runs a set of instructions written in code, called a program. The program is entered with a teach pendant. Many of today's industrial robot arms use an interface that resembles, or is built on, the Windows operating system.
 Industrial robot arms vary in size and shape. The industrial robot arm is the part that positions the end effector. With the robot arm, the shoulder, elbow, and wrist move and twist to position the end effector in exactly the right spot. Each of these joints gives the robot another degree of freedom. A simple robot with three degrees of freedom can move in three ways: up & down, left & right, and forward & backward. Many industrial robots in factories today are six-axis robots.
 The end effector connects to the robot's arm and functions as a hand. This part comes into direct contact with the material the robot is manipulating. Some variations of end effectors are grippers, vacuum pumps, magnets, and welding torches. Some robots are capable of changing end effectors and can be programmed for different sets of tasks.
 The drive is the engine or motor that moves the links into their designated positions. The
links are the sections between the joints. Industrial robot arms generally use one of the

following types of drives: hydraulic, electric, or pneumatic. Hydraulic drive systems give a
robot great speed and strength. An electric system provides a robot with less speed and
strength. Pneumatic drive systems are used for smaller robots that have fewer axes of
movement. Drives should be periodically inspected for wear and replaced if necessary.
 Sensors allow the industrial robotic arm to receive feedback about its environment. They can give the robot a limited sense of sight and sound. The sensor collects information and sends it electronically to the robot controller. One use of these sensors is to keep two robots that work closely together from bumping into each other. Sensors can also assist end effectors by adjusting for part variances. Vision sensors allow a pick and place robot to differentiate between items to choose and items to ignore.

1.10 NEED OF ROBOT AND ITS APPLICATION


Frequently, robots are used to do jobs that could be done by humans. However, there are many
reasons why robots may be better than humans in performing certain tasks.
(a) Speed
 Robots may be used because they are faster than people at carrying out tasks.
 This is because a robot is really a mechanism, which is controlled by a computer and it is
known that computers can do calculations and process data very quickly.
 Some robots actually move more quickly than we can, so they can carry out a task, such as
picking up and inserting items, more quickly than a human can.
(b) Hazardous (dangerous) Environment
 Robots may be used because they can work in places where a human would be in danger.
 For example, robots can be designed to withstand greater amounts of heat radiation, chemical
fumes than humans could.
(c) Repetitive Tasks
 Sometimes robots are not really much faster than humans, but they are good at simply doing
the same job over and over again.
 This is easy for a robot, because once the robot has been programmed to do a job once; the
same program can be run many times to carry out the job many times and the robot will not
get bored as a human would.
(d) Efficiency
Efficiency is all about carrying out tasks without waste. This could mean
 Not wasting time
 Not wasting materials
 Not wasting energy
(e) Accuracy
 Accuracy is all about carrying out tasks very precisely.
 In a factory manufacturing items, each item has to be made identically. When items are being
assembled, a robot can position parts within fractions of a millimeter.
(f) Adaptability
 Adaptability is where a certain robot can be used to carry out more than one task.
 A simple example is a robot being used to weld car bodies. If a different car body is to be
manufactured, the program which controls the robot can be changed. The robot will then carry
out a different series of movements to weld the new car body.

 Further reasons for using robots include:
1. Ability to work fast
2. Ability to work in a hazardous environment
3. Ability to repeat tasks again and again
(g) How a Robot Can Help?
 An automatic industrial machine replacing humans in hazardous work environments
 An automatic mobile sweeper machine at a modern home.
 An automatic toy car for a child to play with.
 A machine removing mines in a war field all by itself and many more.

1.11 ROBOTS-DIFFERENT APPLICATIONS

1.11.1 Industrial Applications


Industrial robots are used to assemble vehicle parts. As the assembly of machine parts is a repetitive task, robots are conveniently used instead of human labour (which is more costly and less precise compared to robots).
1.11.1.1 Auto Industry
The auto industry is the largest user of robots, which automate the production of various components and then help assemble them on the finished vehicle. Car production is the primary example of the employment of large and complex robots for producing products. Robots are used in that process for the painting, welding and assembly of the cars. Robots are good for such tasks because the tasks can be accurately defined and must be performed the same way every time, with little need for feedback to control the exact process being performed.

Material Transfer, Machine Loading and Unloading


There are many robot applications in which the robot is required to move a work part or other
material from one location to another. The most basic of these applications is where the robot picks
the part up from one position and transfers it to another position. In other applications, the robot is
used to load and/or unload a production machine of some type.
Material transfer applications are defined as operations in which the primary objective is to move a part from one location to another. They are usually considered among the most straightforward robot applications to implement. These applications usually require a relatively unsophisticated robot, and interlocking requirements with other equipment are typically uncomplicated. These are the pick and place operations. Machine loading and unloading applications are material handling operations in which the robot is used to service a production machine by transferring parts to and/or from the machine.
Robots have been successfully applied to accomplish the loading and/or unloading function in the following production operations:
 Die casting
 Plastic molding
 Forging and related operations
 Machining operations
 Stamping press operations

Other industrial applications of robotics include processing operations such as spot welding, continuous arc welding and spray coating, as well as the assembly of machine parts and their inspection.
Robotic arm
The most advanced robot in practical use today is the robotic arm, and it is seen in applications throughout the world. Robotic arms are used to carry out dangerous work, such as dealing with hazardous materials, and to carry out work in outer space where humans cannot survive. They are also used in the medical field, for example to conduct experiments without exposing the researcher. Some of the most advanced robotic arms have such amenities as a rotating base, pivoting shoulder, pivoting elbow, rotating wrist and gripper fingers. All of these amenities allow the robotic arm to do work that closely resembles what a human can do, only without the risk.

1.11.2 Medical Applications


Medical robotics is a growing field and regulatory approval has been granted for the use of
robots in minimally invasive procedures. Robots are being used in performing highly delicate,
accurate surgery, or to allow a surgeon who is located remotely from their patient to perform a
procedure using a robot controlled remotely. More recently, robots can be used autonomously in
surgery.
1.11.2.1 Future Applications
Future robots will be based on the various research activities currently being performed. The features and capabilities of future robots will include the following (it is unlikely that any single future robot will possess all of the features listed below).
• Intelligence: The future robot will be an intelligent robot, capable of making decisions about the
task it performs based on high-level programming commands and feedback data from its
environment.
•Sensor capabilities: The robot will have a wide array of sensor capabilities, including vision, tactile sensing, and others. Progress is being made in the field of feedback and tactile sensors, which allow a robot to sense its actions and adjust its behaviour accordingly. This is vital to enable robots to perform complex physical tasks that require some active control in response to the situation. Robotic manipulators can be very precise, but only when a task can be fully described.
•Telepresence: It will possess a telepresence capability, the ability to communicate information about its environment (which may be unsafe for humans) back to a remote "safe" location where humans will be able to make judgments and decisions about actions that should be taken by the robots.
•Mechanical design: The basic design of the robot manipulator will be mechanically more efficient,
more reliable, and with improved power and actuation systems compared to present day robots. Some
robots will have multiple arms with advanced control systems to coordinate the actions of the arms
working together. The design of robots is also likely to be modularized, so that robots for different purposes can be constructed out of components that are fairly standard.
•Mobility and navigation: Future robots will be mobile, able to move under their own power and
navigation systems.
•Universal gripper: Robot gripper design will be more sophisticated, and universal hands capable of
multiple tasks will be available.
• Systems integration and networking: Robots of the future will be "user friendly" and capable of being interfaced and networked with other systems in the factory to achieve a very high level of integration.

Industrial Applications
The future industrial applications are divided into three areas:
 Manufacturing
 Hazardous and inaccessible environments
 Service industries
1.11.3 Future Manufacturing Applications
The present biggest application areas for industrial robots are in the spot-welding and the
materials handling and machine loading categories. The handling of materials and machine tending
are expected to continue to represent important applications for robots, but the relative importance of
spot welding is expected to decline significantly. The most significant growth in shares of
manufacturing applications is expected to be in assembly and inspection and in arc welding.
Robotic welding is one of the most successful applications of industrial robot manipulators. In
fact, a huge number of products require welding operations in their assembly processes. Welding can
in most cases impose extremely high temperatures concentrated in small zones. Physically, that
makes the material experience extremely high and localized thermal expansion and contraction
cycles, which introduce changes in the material that may affect its mechanical behaviour, along with plastic deformation. Those changes must be well understood in order to minimize their effects.
The majority of industrial welding applications benefit from the introduction of robot manipulators, since most of the deficiencies attributed to the human factor are removed when robots are introduced. This should lead to cheaper products, since productivity and quality can be increased while production costs and manpower are decreased.

Hazardous and Inaccessible Nonmanufacturing Environments


Manual operations in manufacturing that are characterized as unsafe, hazardous,
uncomfortable, or unpleasant for the human workers who perform them have traditionally been ideal
candidates for robot applications. Examples include die-casting, hot forging, spray-painting, and arc
welding. Potential manufacturing robot applications that are in hazardous or inaccessible
environments include the following:
 Construction trades.
 Underground Coal mining: The sources of dangers in this field for humans include fires,
explosions, poisonous gases, cave-ins, and underground floods.
 Hazardous utility company operations: The robots have a large scope of application in the
nuclear wastage cleaning in nuclear plants, in the electrical wiring, which are dangerous and
hazardous to humans.
 Military applications
 Fire fighting
 Undersea operations: The Ocean represents a rather hostile environment for human beings
due principally to extreme pressures and currents. Even when the humans venture into the
deep, they are limited in terms of mobility and the length of time they can remain underwater.
It seems much safer and more comfortable to assign aquatic robots to perform whatever task
must be done underwater.
 Robots in space: Space is another inhospitable environment for humans, in some respects the
opposite of the ocean. Instead of extremely high pressures in deep waters, there is virtually no

pressure in outer space. Therefore, this field is also of large importance as far as the robotics
is concerned.
1.11.4 Service Industry and other Applications
In addition to manufacturing robot applications and those considered hazardous, there are also opportunities for applying robots in the service industries. The possibilities cover a wide spectrum of jobs that are generally non-hazardous:
 Teaching robots
 Retail robots
 Fast-food restaurants
 Garbage collection in waste disposal operations
 Cargo handling and loading and distribution operations
 Security guards
 Medical care and hospital duties
 Agricultural robots
 Household robots
Medical Applications
 The medical applications of robotics include Nano robotics, swarm robotics, also surgeries
and operations using the knowledge of robotics.
 Nano robotics is the technology of creating machines or robots at or close to the scale of a nanometer (10⁻⁹ meters). Nanorobots (nanobots or nanoids) are typically devices ranging in size from 0.1 to 10 micrometers and constructed of nanoscale or molecular components. As no artificial non-biological nanorobots have so far been created, they remain a hypothetical concept at this time.
 Swarm robotics is a new approach to the coordination of multirobot systems consisting of large numbers of relatively simple physical robots. Potential applications for swarm robotics include, on the one hand, tasks that demand extreme miniaturization (nanorobotics, microrobotics), for instance distributed sensing tasks in micromachinery or the human body, and on the other hand tasks that demand extremely cheap designs, for instance a mining task or an agricultural foraging task. Artists are using swarm robotic techniques to realize new forms of interactive art installation.
Robots for Paralyzed Patients
One of the interesting future applications of robotics in the medical field is service to paralyzed people who use electric wheelchairs to move around. A robotic device can now help paralyzed patients to walk on treadmills. After training, some patients, having rebuilt confidence, have also regained muscle power and can walk over short distances. Robots can also help paralyzed patients in their daily routine, such as helping them to bathe, change their clothes, and eat; a robot does not force food into the patient's mouth but brings the spoon to it.

1.12 DRIVES AND MOTORS


Drive systems, also known as actuators, supply the power through which the links of a robot move about their prescribed axes. The movements they produce are translatory in nature or rotary about a joint. At the joints, the actuators provide the required force or torque for the

movement of the links. The movements of all the links combined together form the arm end or wrist
motion. The source of power for the actuators can be compressed air, pressurised fluid or electricity, based on which the actuators are classified. Figure 1.17 below illustrates the classification of actuators.

Fig: 1.17 Robot Drives and motors

Hydraulic Actuators
The hydraulic actuators receive pressurised hydraulic oil with controlled direction and
pressure through a system known as ‘power packs’. The speed and volume flow rates are also
controlled by the elements of the power pack. To produce linear motion the hydraulic cylinders are
used and hydraulic motors are used to produce rotational movements.
The hydraulic drive, shown in figure 1.18, consists of a power supply, one or more motors, a set of pistons and valves, and a feedback loop. The valves and pistons control the movement of the hydraulic fluid. Because the fluid is practically incompressible, it is possible to generate large mechanical forces over small surface areas or, conversely, to position large-area pistons with extreme accuracy. The feedback loop consists of one or more force sensors that provide error correction and ensure that the manipulator follows its intended path.
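To see why an incompressible fluid can generate large forces over small areas, consider the basic cylinder-force relation F = P × A. The pressure and bore values in this sketch are illustrative assumptions, not figures from this text.

```python
import math

def cylinder_force(pressure_pa, bore_m):
    """Force developed by a hydraulic cylinder: F = P * A, where A is
    the piston area corresponding to the given bore diameter."""
    area = math.pi * (bore_m / 2.0) ** 2
    return pressure_pa * area

# A 10 MPa supply acting on a 50 mm bore piston yields roughly 19.6 kN
force = cylinder_force(10e6, 0.05)
```

Doubling the bore quadruples the force, since the piston area grows with the square of the diameter.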

Pneumatic Actuators:
The principle of pneumatic actuators matches that of hydraulic actuators; the working fluid in this case is compressed air. The air pressure used is typically 6 to 10 bar (0.6 to 1 MPa). Because of the low air pressure, the components are light and the force/torque transmitted is also less. Pneumatic cylinders are used to actuate the linear joints. A drawback of pneumatic devices is that the working fluid (air) is compressible, hence the actuators drift under load.

Electric drives:

 Principle: A rotational movement is produced in the rotor when an electric current flows through the windings of the armature, setting up a magnetic field opposing the field set up by the magnets.
 The main components: rotor, stator, and brush-and-commutator assembly. The rotor carries the armature windings and the stator houses the magnets. The brush and commutator assembly switches the current to the armature, maintaining an opposed field with the magnets.
 Types:
a. DC servo motors
b. AC Servo motors
c. Stepper motors
 Features:
DC servo motors:
 Higher power-to-weight ratio
 High acceleration
 Uniform torque
 Good response for better control
 Reliable, sturdy and powerful
 Produces sparks in operation; not suitable for certain environments
 Extensive use in robotic devices
AC servo motors:
 Rotor is a permanent magnet; stator houses the winding
 No commutators and brushes
 Switching is done by the AC supply, not by commutation
 Fixed nominal speed
 More powerful
 Reversibility of rotation possible
Stepper motors:
 Moves in known angles of rotation
 Position feedback is not necessary
 Rotation of the shaft follows the rotation of the magnetic field
 Needs a microprocessor circuit to start
 Used in table-top robots
 Finds less use in industrial robots

Fig: 1.18 Hydraulic Actuators

1.13 PID CONTROLLERS
The simplest method of control is not always the best. A more advanced controller, and almost an industry standard, is the PID controller (shown in figure 1.19). It comprises a proportional, an integral, and a derivative control part. The controller parts are introduced in the following sections, individually and in combined operation.

Fig: 1.19 PID controllers block diagram

1.13.1 Proportional controller


The simplest controllers, which it should be noted are adequate for many situations, produce
an output proportional to the error between the observed result and the desired set point. This solution
tends to be slow to reach a steady state response and sometimes oscillates.

Suppose a person is in the driver's seat of a stopped car (in a safe location, of course). If he presses the accelerator a quarter of the way down and holds it at that position, the car will start moving and speed up until the drag on the car offsets the amount of fuel sent to the engine and it reaches a steady-state speed. This scenario, with a constant output to the motor, describes an over-damped controller, where the output slowly reaches a steady-state position.
u(t) = kP e(t)

Fig: 1.20 Curves of damping conditions

If the output of the controller is proportional to the error between the measured result and set
point, then the result will be under-damped as shown in figure 1.20 such that it will reach the

approximate set point quickly, but will often oscillate about the set point before reaching steady state.
It may also fall short and reach a steady state result that is less than the desired output.

1.13.2 Proportional Integral Control (PI)


To correct the case where the output falls short of the desired output, a term proportional to
the area under the time domain error curve is added. From calculus, we learn that an integral is a way
to calculate the area under a curve. Such a term will respond fairly slowly, but in time will contribute
just the right amount such that the error between the reference and output goes to zero.

u(t) = kP e(t) + kI ∫₀ᵗ e(τ) dτ

1.13.3 Proportional Integral Derivative Controller (PID)


To produce a critically damped result, where the output reaches the set point quickly but has minimal oscillation, we need a derivative term. From calculus, we learn that a derivative measures the rate of change (in our case, with respect to time). If our output is rapidly increasing, then the derivative of the error will be a negative number. If the output is decreasing, then the derivative of the error will be positive. If the output changes slowly (near the steady state), then the derivative will be a small number. Thus a derivative term will tend to counteract oscillation, but will have minimal impact on the steady-state condition.
u(t) = kP e(t) + kI ∫₀ᵗ e(τ) dτ + kD de(t)/dt

Figure 1.21 shows the response of a typical PID closed-loop system. The biggest challenge in correctly using a PID controller is to determine the three constants kP, kI and kD correctly.

Fig: 1.21 Response of a typical PID closed loop system

Application:
In a car's cruise control that uses PID, if the system is overtuned it will rapidly alternate between accelerating and braking in search of a steady speed, whereas an undertuned system will take a long time to respond to changes in demand, such as going up a hill; in that case the motor lags and slows to a crawl. PID is a mathematical algorithm that can be used to optimize these control requirements. Proper PID tuning in the case of cruise control results in getting up to speed at a reasonable rate without overshooting the desired speed by more than an acceptable margin, and in maintaining a consistent speed under changing demands.
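The three-term law above can be sketched as a small discrete-time controller driving a toy cruise-control plant. The plant model (linear drag) and the gains kP = 2.0, kI = 0.5, kD = 0.1 are illustrative assumptions chosen to give a stable, converging loop, not values from any particular vehicle.

```python
class PID:
    """Discrete-time PID controller: u = kP*e + kI*sum(e*dt) + kD*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0     # running area under the error curve
        self.prev_error = None  # previous error, for the derivative term

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else \
            (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

def simulate_cruise(target, steps=500, dt=0.1, drag=0.5):
    """Euler-integrate a toy vehicle, dv/dt = u - drag*v, under PID control."""
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
    v = 0.0
    for _ in range(steps):
        u = pid.update(target, v)
        v += (u - drag * v) * dt
    return v
```

Because the integral term accumulates the residual error, the simulated speed settles on the set point despite the drag that a purely proportional controller would leave as a steady-state offset.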

Modern PID controllers:


While PID control is often used because of its adaptability to a wide range of applications and
operating conditions, there are several common problems that occur during the PID tuning process.
Automatic and adaptive tuning can reduce the time and frustration often involved in reaching the
correct tuning parameters for a given operation.

CHAPTER 2

SENSORS AND MACHINE VISION


2.1 INTRODUCTION
The potential range of robotic applications requires different types of sensors to perform
different kinds of sensing tasks. Specialized devices have been developed to meet various sensing
needs such as orientation, displacement, velocity, acceleration and force. Robots must also sense the
characteristics of the tools and materials they work with. Though currently available sensors rely on
different physical properties for their operation, they may be classified into two general types:
contacting and non-contacting.
Since contacting sensors must touch their environment to operate, their use is limited to
objects and conditions that can do no harm to the sensors. For instance, the elastic limit of a
deformable sensor must not be exceeded; also, a material such as hot steel would be extremely
difficult to measure using contact sensors. Contact devices vary in sensitivity and complexity. Some
can only determine whether something is touching or not, while others accurately measure the
pressure of the contact. The simplest contact sensor is merely a mechanical switch; more sophisticated devices can produce a three-dimensional profile of an object. Noncontacting sensors gather information without touching an object. They can be used in environments where contact sensors would be damaged, since they can sense most materials, including liquid, powder, and smoke; and they can measure many parameters, including velocity, position, and orientation. Simple noncontact sensors merely determine whether something is present or not. More complicated devices can be used to distinguish between objects and workpieces. Through special techniques, data for a three-dimensional profile of an object can be obtained, as with tactile sensing.
Machine vision is the application of computer vision to industry and manufacturing. Whereas computer vision is mainly focused on machine-based image processing, machine vision most often requires digital input/output devices and computer networks to control other manufacturing equipment such as robotic arms.
Machine vision can be defined as a means of simulating the image recognition and analysis capabilities of the human visual system with electronic and electromechanical techniques.
Robotic vision may be defined as the process of extracting, characterising and interpreting information from images of a three-dimensional world. It is also called computer vision.

2.2 REQUIREMENTS OF A SENSOR


The interaction of the robot with its environment requires mechanisms, known as sensors, that can perform the following functions:
 Motion control variables detection.
 Robot guidance without obstruction
 Object identification tasks.
 Handling the objects.
i. The sensors that provide the information like joint position, velocity and acceleration are
known as internal state sensors.
ii. The robots are being guided by the help of vision and range sensors that are known as
non-contact external state sensors.

iii. The information of object handling is supplied as a feedback from force and torque
sensors termed as contact type internal state sensors.
The latest sensor equipment includes heart rate, electrical voltage, gas, light, sound,
temperature, and distance sensors. Data is collected via the sensors and then transmitted to the
computer. Up to date software is used to collect, display and store the experimental data. The
computer software can then display this data in different formats - such as graphs, tables or meter
readings, which make it easy for students to understand the process and bring science to life. The
significance of sensor technology is constantly growing. Sensors allow us to monitor our
surroundings. New sensor applications are being identified every day, which broadens the scope of the technology and expands its impact on everyday life.
In Industry
On the factory floor, networked vibration sensors warn that a bearing is beginning to fail.
Mechanics schedule overnight maintenance, preventing an expensive unplanned shutdown. Inside a
refrigerated grocery truck, temperature and humidity sensors monitor individual containers, reducing
spoilage in fragile fish.
In the Environment
Networks of wireless humidity sensors monitor fire danger in remote forests. Nitrate sensors
detect industrial and agricultural runoff in rivers, streams and wells, while distributed seismic
monitors provide an early warning system for earthquakes. Meanwhile built-in stress sensors report
on the structural integrity of bridges, buildings and roadways, and other man-made structures.
For Safety and Security
Firefighters scatter wireless sensors throughout a burning building to map hot spots and flare-ups. Simultaneously, the sensors provide an emergency communications network. Miniature
chemical and biological sensors in hospitals, post offices, and transportation centres raise an alarm at
the first sign of anthrax, smallpox or other terror agents.

2.3 POSITION SENSORS


Position sensors, as their name implies, detect the position of something, referenced either to or from some fixed point or position; they provide "positional" feedback. One method of determining a position is to use "distance", which could be the distance between two points, such as the distance travelled or moved away from some fixed point. Another is "rotation" (angular movement), for example the rotation of a robot's wheel to determine the distance travelled along the ground. Either way, position sensors can detect the movement of an object in a straight line using linear sensors, or its angular movement using rotational sensors.
The use of sensors has taken robots to the next level of creativity. Most importantly, sensors have increased the performance of robots to a large extent. They also allow robots to perform several functions like a human being. Robots are even made intelligent with the help of visual sensors (generally called machine vision or computer vision), which help them respond according to the situation. A machine vision system is classified into six sub-divisions: sensing, pre-processing, segmentation, description, recognition, and interpretation.
The Different types of sensors used in robotics:
There are plenty of sensors used in the robots, and some of the important types are listed below:
 Proximity Sensor

 Range Sensor and
 Tactile Sensor
2.4 RANGE SENSORS
A range sensor is implemented in the end effector of a robot to calculate the distance between the sensor and a work part. The distance values may also be judged by workers from visual data. Range sensors find use in robot navigation and in avoiding obstacles in the path. Determining the exact location and the general shape characteristics of a part in the work envelope of the robot is done by special applications of range sensors. There are several approaches, such as the triangulation method, the structured lighting approach and time-of-flight range finders; in these cases the source of illumination can be a light source, a laser beam, or ultrasound.
2.4.1 Triangulation Principle
This is the simplest of the techniques, which is easily demonstrated in figure 2.1 below.
A narrow beam of sharp light is swept over the object. The sensor, focused on a small spot of the
object surface, detects the reflected beam of light. If θ is the angle made by the illuminating source
and b is the distance between the source and the sensor, the distance d from the sensor to the object
surface is given as
d = b · tan θ
The distance d can be easily transformed into 3-D coordinates.
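The relation above can be checked numerically. The following is a minimal sketch (the function name and the use of degrees are illustrative choices, not from the text):

```python
import math

def triangulation_range(b, theta_deg):
    """Range from the triangulation relation d = b * tan(theta), where
    b is the source-to-sensor baseline (any length unit) and theta is
    the angle made by the illuminating beam."""
    return b * math.tan(math.radians(theta_deg))

# With a 0.5 m baseline and a 45 degree beam angle, d = 0.5 m.
d = triangulation_range(0.5, 45.0)
```
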
Fig: 2.1 Triangulation method of range sensing (a) Transmission (b) Reception
2.4.2 Structured Lighting Approach
 This approach consists of projecting a light pattern onto a set of objects and using the
distortion of the pattern to calculate the range. One of the most popular light patterns in use
today is a sheet of light generated through a cylindrical lens or a narrow slit.
 As illustrated in Figure 2.2, the intersection of the sheet with objects in the work space yields
a light stripe which is viewed through a television camera displaced a distance B from the
light source. The stripe pattern is easily analysed by a computer to obtain range information.
 For example, an inflection indicates a change of surface, and a break corresponds to a gap
between surfaces.

Fig: 2.2 Range measurement by structured lighting approach

 Specific range values are computed by first calibrating the system. One of the simplest
arrangements is shown in figure 2.3, which represents a top view of figure 2.2. In this arrangement,
the light source and camera are placed at the same height, and the sheet of light is
perpendicular to the line joining the origin of the light sheet and the centre of the camera lens.
 The vertical plane containing this line is the reference plane. Clearly, the reference plane is
perpendicular to the sheet of light, and any vertical flat surface that intersects the sheet will
produce a vertical stripe of light in which every point will have the same perpendicular
distance to the reference plane.
 The objective of the arrangement shown in the figure 2.3 is to position the camera so that
every such vertical stripe also appears vertical in the image plane. In this way, every point
along the same column in the image will be known to have the same distance to the reference
plane.
 This arrangement simplifies calibration.
 Most systems based on the sheet-of-light approach use digital images. Suppose that the image
seen by the camera is digitized into an N x M array, and let y = 0, 1, 2, ..., M-1 be the column
index of this array.
 As explained below, the calibration procedure consists of measuring the distance B between
the light source and the lens centre, and then determining the angles 𝛼c and 𝛼0. Once these
quantities are known, it follows from elementary geometry that d in the figure is given by
d = 𝜆 tan 𝜃
where 𝜆 is the focal length of the lens and
𝜃 = 𝛼c − 𝛼0
Fig: 2.3 Showing a specific arrangement which simplifies calibration

For an M-column digital image, the distance increment dk between columns is given by
dk = kd/(M/2) = 2kd/M
for 0 ≤ k ≤ M/2 (in an image viewed on a monitor, k = 0 would correspond to the leftmost column and
k = M/2 to the centre column). The angle 𝛼k made by the projection of an arbitrary stripe is easily
obtained by noting that
𝛼k = 𝛼c − 𝜃′k ...(2.1)
where tan 𝜃′k = (d − dk)/𝜆, so
𝜃′k = tan⁻¹[d(M − 2k)/(M𝜆)] ...(2.2)
for 0 ≤ k ≤ M/2. For the remaining values of k,
𝛼k = 𝛼c + 𝜃″k ...(2.3)
where
𝜃″k = tan⁻¹[d(2k − M)/(M𝜆)] ...(2.4)
for M/2 < k ≤ M − 1.
By comparing Eqs. (2.2) and (2.4) we note that 𝜃″k = −𝜃′k, so Eqs. (2.1) and (2.3) are
equivalent over the entire range 0 ≤ k ≤ M − 1. The perpendicular distance Dk between an arbitrary light
stripe and the reference plane is given by
Dk = B tan 𝛼k ...(2.5)
for 0 ≤ k ≤ M − 1, where 𝛼k is given by either Eq. (2.1) or Eq. (2.3).
It is important to note that once B, 𝛼0, 𝛼c, M, and 𝜆 are known, the column number in the
digital image completely determines the distance between the reference plane and all points in the
stripe imaged on that column.
Since M and 𝜆 are fixed parameters, the calibration procedure consists simply of measuring B and
determining 𝛼c and 𝛼0, as indicated above.

To determine 𝛼c, we place a flat vertical surface so that its intersection with the sheet of light is
imaged on the centre of the image plane (i.e., at y = M/2).
We then physically measure the perpendicular distance Dc between the surface and the reference
plane, giving
𝛼c = tan⁻¹(Dc/B)
In order to determine 𝛼0, we move the surface closer to the reference plane until its light stripe is
imaged at y = 0 on the image plane, and then measure D0, giving
𝛼0 = tan⁻¹(D0/B)
This completes the calibration procedure.

The principal advantage of the arrangement just discussed is that it results in a relatively simple
range-measuring technique. Once calibration is completed, the distance associated with every column
in the image is computed using Eq. (2.5) with k = 0, 1, 2, ..., M − 1, and the results are stored in
memory. Then, during normal operation, the distance of any imaged point is obtained simply by
determining its column number in the image and addressing the corresponding location in memory.
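The calibration and lookup-table idea can be sketched in a few lines; this is an illustrative reconstruction of Eqs. (2.1) to (2.5), not code from the text, and the function name and argument order are assumptions:

```python
import math

def range_lookup(B, lam, alpha_c, alpha_0, M):
    """Precompute, for each image column k = 0..M-1, the perpendicular
    distance D_k = B * tan(alpha_k) between the light stripe imaged on
    that column and the reference plane.  B is the baseline, lam the
    focal length, and alpha_c, alpha_0 the calibration angles (radians)."""
    d = lam * math.tan(alpha_c - alpha_0)   # stripe offset for column k = 0
    table = []
    for k in range(M):
        # Eqs. (2.2) and (2.4) collapse to one formula since theta''_k = -theta'_k
        theta_k = math.atan(d * (M - 2 * k) / (M * lam))
        table.append(B * math.tan(alpha_c - theta_k))   # Eqs. (2.1)/(2.3) with (2.5)
    return table
```

At the centre column k = M/2 the entry equals B tan 𝛼c (the calibration distance Dc), and at k = 0 it equals B tan 𝛼0 (the distance D0), exactly as the calibration procedure requires.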

2.5 TOUCH SENSORS
The touch sensors gather the information established by contact between the parts to be
handled and the fingers of the manipulator end effector. The signals of the touch information are
useful in
 Locating the objects
 Recognising the object type
 Force and torque control needed for task manipulation.

The types of touch sensors are:
 Binary sensors detect the existence of the object to be handled e.g. micro switches and limit
switches.
 Analog sensors produce proportional output signal for the force exerted locally. e.g. a code
wheel with a plunger.
2.6 BINARY SENSORS
The devices that deliver sensing signals by contact at two gripping points are termed binary
sensors. The fingers shown in figure 2.4 accommodate the binary sensors. Contact with the
part results in deflection, and this information is sufficient to determine the presence of an object
between the fingers. Proper grasping and manipulation of the object in the work envelope can be
easily achieved through centring of the fingers, assisted by the information given by the binary sensors.

Fig: 2.4 Hand with Binary sensor

2.7 ANALOG SENSORS
These types of sensors feature a spring-actuated plunger connected to a code wheel.
The plunger rod deflects under the action of the contact force. The schematic arrangement of the
analog sensor is shown in figure 2.5.
If k is the spring rate and δ is the deflection of the plunger recorded, the force of contact is given by
F = kδ

Fig: 2.5 Analog Sensors

2.8 WRIST SENSORS
As shown in figure 2.6, several different forces exist at the point where a robot arm joins the
end effector. This point is called the wrist. It has one or more joints that move in various ways.
A wrist-force sensor can detect and measure these forces. It consists of specialized pressure sensors
known as strain gauges. The strain gauges convert the wrist forces into electric signals, which go to
the robot controller. Thus the machine can determine what is happening at the wrist, and act
accordingly.
Wrist force is complex. Several dimensions are required to represent all the possible motions
that can take place. The illustration shows a hypothetical robot wrist and the forces that can occur
there. The translations are right/left, in/out, and up/down, and rotation is possible about all three
axes. The rotations are called pitch, roll, and yaw. A wrist-force sensor must detect, and translate,
each of these forces independently: a change in one vector must cause a change in sensor output for
that force, and no others.

Fig: 2.6 Wrist-Force Sensing

2.9 COMPLIANCE SENSORS
 Compliance sensors, or ultrasonic range finders, perform a function of range sensors, which is
to measure distance.
 The basic idea is the same as that used with a pulsed laser.
 An ultrasonic chirp is transmitted over a short time period and, since the speed of sound is
known for a specified medium, a simple calculation involving the interval between the outgoing
pulse and the return echo yields an estimate of the distance to the reflecting surface.
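The pulse-echo calculation reduces to halving the round-trip time multiplied by the speed of sound. A minimal sketch, assuming air at roughly 343 m/s (the function name is illustrative):

```python
def ultrasonic_range(echo_interval_s, speed_of_sound=343.0):
    """Distance (metres) to the reflecting surface: the pulse travels
    out and back, so the one-way range is speed * interval / 2."""
    return speed_of_sound * echo_interval_s / 2.0

# A 10 ms round trip in air corresponds to about 1.7 m.
distance = ultrasonic_range(0.010)
```
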

2.10 SENSING AND DIGITIZING IMAGE DATA


 The sensing and digitizing functions involve the input of vision data by means of a camera
focused on the scene of interest.
 Special lighting techniques are frequently used to obtain an image of sufficient contrast for
later processing. The image viewed by the camera is typically digitized and stored in computer
memory.
 The digital image is called a frame of vision data, and is frequently captured by a hardware
device called a frame grabber. These devices are capable of digitizing images at the rate of 30
frames per second.
 The frames consist of a matrix of data representing projections of the scene sensed by the
camera. The elements of the matrix are called picture elements, or pixels. The number of
pixels is determined by a sampling process performed on each image frame.
 A single pixel is the projection of a small portion of the scene which reduces that portion to a
single value. The value is a measure of the light intensity for that element of the scene.
 Each pixel intensity is converted into a digital value. (ignoring the additional complexities
involved in the operation of a colour video camera.)

The Digitized Image


 The digitized image matrix for each frame is stored and then subjected to image processing
and analysis functions for data reduction and interpretation of the image.
 These steps are required in order to permit the real-time application of vision analysis
required in robotic applications.
 Typically an image frame will be thresholded to produce a binary image, and then various
feature measurements will further reduce the data representation of the image.
 This data reduction can change the representation of a frame from several hundred thousand
bytes of raw image data to several hundred bytes of feature value data.
 The resultant feature data can be analysed in the time available for action by the robot system.

2.11 SIGNAL CONVERSION AND IMAGE STORAGE
The image presented to a vision system's camera is light, the pattern of light varies in
intensity and wavelength (colour) throughout the image. Light received by the camera may come
from more than one light source. The designer of the vision application must ensure that the pattern
of light presented to the camera is one that can be interpreted easily.
The designer can often control what the camera sees, ensuring that the image has minimum
clutter. A human can often control where important features will be in the field of view, perhaps by
ensuring that workpieces are held in fixtures. The designer can also control the pattern of light seen
by the camera, by designing the light sources themselves and by blocking extraneous light (e.g.,
sunlight) that might affect the image.

To be useful in a computerized vision system:


 Light energy must be converted to electrical energy.
 The image must be divided into discrete pixels (picture elements).
 The brightness of the received light at each pixel must be recorded.
 A color camera can be thought of as three separate cameras: one for each basic light colour,
each otherwise identical to a black and white camera.
 This discussion will therefore discuss only the ways in which a black and white camera
breaks an image into pixels and how it evaluates the amount of light received at each pixel.
 Figure 2.7 shows a glass lens as a source of distortion.
Fig: 2.7 A glass lens showing source of distortion

2.11.1 Lighting Techniques

i. Structured lighting technique


ii. Illumination techniques

Structured lighting technique


The use of carefully designed lighting is what is meant by the term "structured lighting."
Some structured lighting techniques are shown in figure 2.8. They include: the use of backlit
surfaces, upon which object silhouettes are presented to the vision system; the use of front lighting
from an angle, where the shadow lengths provide depth information; the use of light focused to a
spot, so that the diameter of the light spot on the target can be used to determine the distance to the
target from the light source; and the use of a single line of light (often from a laser light source),
projected at the field of view at an angle, where the location of the resultant light stripe in the field
of view is used to determine the height of surfaces.
 The use of multiple light sources.
 The use of polarized light,
 The use of redirected light to (for example) see the top and side of the same object in the same
image.
 The use of different coloured light (LEDs have narrow colour bandwidths),
 The use of any combination of the above, either in a single image or in a series of images of the
same scene.
Sometimes, the image presented to the camera can be improved by controlling characteristics
of the objects that will be in the field of view. Variations in the light reflectance qualities of a part,
although acceptable to a human, can be disastrous in a computerized vision system.
Fig: 2.8 Structured lighting technique

Examples of undesirable reflectance variations include slight colour differences, differences in
glossiness of painted surfaces, presence or absence of oil films, random rust patches, and even the
orientation of tooling marks. Grinding-mark orientation can have significant effects in a structured
lighting situation. Figure 2.8 shows the types of structured lighting techniques.

It may be better to attempt to control these characteristics of the parts than to attempt to
design a vision system that will work despite such variations. The more complex the part recognition
task, the more carefully one should examine and control the image presented to the camera.
Illumination Techniques
 Proper Illumination of a scene is an important factor that often affects the complexity of vision
algorithms.
 Arbitrary lighting of the scene can result in low contrast images, specular, reflections, shadows
and extraneous details.
 A well designed illuminations system minimizes the complexity of the resulting image, while the
information required for object detection and extraction is enhanced.
 The basic types of lighting devices in robot vision may be grouped into the following categories
a) Diffuse surface devices
b) Condenser projectors
c) Flood or spot projectors
d) Collimators
e) Imagers

Fig: 2.9 Lighting technique

Illumination techniques shown in figure 2.9 comprises of the following:


i. Back lighting
ii. Diffuse lighting.
iii. Bright Field
iv. Dark Field

The application of some techniques requires a specific light and geometry, or relative
placement of the camera, sample, and light; others do not. For example, a standard bright field bar
light may also be used in dark-field mode, whereas a diffuse light is used exclusively as such. Most
manufacturers of vision lighting products also offer lights with various combinations of techniques
available in the same light, and, at least in the case of LED-based varieties, each of the techniques
may be individually addressable.
This circumstance allows for greater flexibility and also reduces potential costs when many
different inspections can be accomplished in a single station rather than two. If the application
conditions and limitations of each of these lighting techniques, as well as the intricacies of the
inspection environment and sample-light interactions, are well understood, it is possible to develop
an effective lighting solution that meets the three acceptance criteria listed earlier.
(i). Back lighting
Back lighting (image shown in figure 2.10) generates instant contrast as it creates dark silhouettes
against a bright background.
There are two basic illumination techniques used in robot vision.
a) Front lighting
b) Back lighting
The most common uses are detecting the presence or absence of holes and gaps, part placement or
orientation, or measuring objects. Often it is useful to use a monochrome light, such as red, green,
or blue, with light-control polarization if very precise (sub-pixel) edge detection becomes necessary.

Fig: 2.10 Back lighting

(ii) Diffuse (Full Bright field) Lighting


Diffuse lighting is also called full bright field lighting. Diffuse dome lights, shown in
figure 2.11, are very effective at lighting the curved, specular surfaces commonly found in the
automotive industry. On-axis lights work in a similar fashion for flat samples, and are particularly
effective at enhancing differentially angled, textured or topographic features on relatively flat objects.
To be effective, diffuse lights, particularly dome varieties, require close proximity to the sample.

Fig: 2.11 Dome diffuse


(iii) Partial Bright Field or Directional Lighting
Partial bright field lighting (shown in figure 2.12) is the most commonly used vision lighting
technique, and is the most familiar lighting we use every day, including sunlight. This type of
lighting is distinguished from full bright field in that it is directional, typically from a point source,
and, because of its directional nature, it is a good choice for generating contrast and enhancing
topographic detail. It is much less effective, however, when used on-axis with specular surfaces,
generating the familiar "hotspot" reflection.

Fig: 2.12 On-axis diffuse

(iv) Dark Field Lighting


Dark field lighting (shown in figure 2.13) is perhaps the least well understood of all the
techniques, although we do use these techniques in everyday life. For example, an automobile
headlight relies on light incident at low angles on the road surface, reflecting back from the small
imperfections and also nearby objects. Table 2.1 lists various lighting techniques and their
functions.

Fig: 2.13 Dark field lighting

Table 2.1 Various Lighting Techniques


Technique : Function/use
A. Front light source
1. Front illumination : Area flooded such that the surface is the defining feature of the image.
2. Specular illumination (Dark field) : Used for surface defect recognition (background dark).
3. Specular illumination (Light field) : Used for surface defect recognition; camera in line with reflected rays (background light).
4. Front imager : Structured light applications; imaged light superimposed on the object surface; light beam displaced as a function of thickness.
B. Back light source
1. Rear illumination (Light field) : Uses a surface diffuser to silhouette features; used in parts inspection and basic measurements.
2. Rear illumination (Condenser) : Produces high-contrast images; useful for high-magnification applications.
3. Rear illumination (Collimator) : Produces a parallel light ray source to sharply define features of the object.
4. Rear offset illumination : Useful to produce feature highlights when the feature is in a transparent medium.
C. Other miscellaneous devices
1. Beam splitter : Transmits light along the same optical axis as the sensor; its advantage is that it can illuminate difficult-to-view objects.
2. Split mirror : Similar to a beam splitter but more efficient, with lower intensity requirements.
3. Non-selective redirectors : The light source is redirected to provide proper illumination.
4. Retro reflector : A device that redirects incident rays back to the sensor; the incident angle is capable of being varied. It provides high contrast for an object between source and reflector.
5. Double density : A technique used to increase illumination intensity at the sensor; used with transparent media and a retro reflector.

2.12 IMAGE PROCESSING AND ANALYSIS
As shown in figure 2.14, in order to make image processing more effective, the following
techniques are used.
i. Image data reduction
ii. Smoothing [Noise reduction]
iii. Segmentation
(a) Thresholding
(b) Region growing
(c) Edge detection
iv. Feature extraction
v. Objects recognition
2.12.1 Image data reduction
Image data reduction are:
(i) Digital Conversion
(ii) Windowing
(i) Digital Conversion
Image data reduction is achieved by reducing the number of bits of the A/D converter. For
example, with 8 bits the number of gray levels will be 2^8 = 256, whereas with 4 bits it will be
2^4 = 16. This considerably reduces the magnitude of the image-processing problem.
A grey-scale system requires a higher degree of image refinement and huge storage and processing
capability. Analysing a 256 x 256 pixel image with up to 256 different pixel values requires over
65,000 8-bit storage locations at a speed of 30 images per second. Windowing and image
restoration techniques are involved.
Histogram of images: A histogram is a representation of the total number of pixels of an image at
each gray level. Histogram information can help in determining the cutoff point when an image is
to be transformed into binary values.
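Both ideas, bit reduction in the A/D stage and the gray-level histogram, can be sketched as follows (helper names are illustrative; pixel values are assumed to start as 8-bit):

```python
def reduce_gray_levels(pixels, bits):
    """Requantize 8-bit pixel values to 2**bits gray levels by
    dropping the (8 - bits) least significant bits."""
    shift = 8 - bits
    return [p >> shift for p in pixels]

def histogram(pixels, levels):
    """Count of pixels at each gray level; the peaks and valleys of
    this table help choose a cutoff for binary conversion."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    return counts

# 4-bit requantization leaves 16 gray levels instead of 256.
small = reduce_gray_levels([0, 255, 128], 4)
```
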
(ii) Windowing
 Processing is confined to the desired area of interest, ignoring parts of the image that are not of interest.
 Windowing involves using only a portion of the total image stored in the frame buffer
for image processing and analysis.

Fig: 2.14 Components of image processing


2.12.2 Noise Reduction Operations
i. Convolution masks
ii. Image averaging
iii. Frequency domain
iv. Median filters

(i) Convolution Masks


The noise is reduced by using masks. Create masks that behave like a low-pass filter, such
that the higher frequencies of an image are attenuated while the lower frequencies are not changed
very much.
(ii) Image Averaging
A number of images of the exact same scene are averaged together. This technique is time
consuming. This technique is not suitable for operations that are dynamic and change rapidly. It is
more effective with an increased number of images. It is useful for random noise.
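The averaging step can be sketched as follows (illustrative; frames are taken here to be equal-length lists of pixel values):

```python
def average_images(frames):
    """Pixel-by-pixel average of N frames of the same static scene;
    zero-mean random noise shrinks as more frames are averaged while
    the underlying scene is preserved."""
    n = len(frames)
    return [sum(vals) / n for vals in zip(*frames)]

# Three noisy readings of a two-pixel scene average out.
clean = average_images([[10, 20], [14, 20], [12, 20]])
```
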
iii) Frequency Domain
When the Fourier transform of an image is calculated, the frequency spectrum might show a
clear frequency for the noise, which in many cases can be selectively eliminated by proper filtering.
(iv) Median Filters
 In image processing, it is usually necessary to perform a high degree of noise reduction in an
image before performing higher-level processing steps, such as edge detection.
 The median filter is a non-linear digital filtering technique, often used to remove noise from
images or other signals. The idea is to examine a sample of the input and decide if it is
representative of the signal.
 This is performed using a window consisting of an odd number of samples. The values in the
window are sorted into numerical order, the median value, the sample in the center of the
window, is selected as the output. The oldest sample is discarded, a new sample acquired, and
the calculation repeats.
 Median filtering is a common step in image processing. It is particularly useful to reduce
speckle noise and salt and pepper noise. Its edge-preserving nature makes it useful in cases
where edge blurring is undesirable.

A common problem with all filters based on a window of adjacent pixels is how to process the
edges of the image. As the filter nears the edges, a median filter may not preserve its
odd-number-of-samples criterion. It is also more complex to write a filter that includes a method to
deal specifically with the edges. Common solutions to the problem are:

 Not processing edges, with or without a crop of the image edges afterwards.
 Fetching pixels from other places in the image. Typically the other horizontal edge on
horizontal edges and the other vertical edge on vertical edges are fetched.
 Making the filter process fewer pixels on the edges.
 Comparing the filtered sample to the original sample to determine if that sample is an outlier
before replacing it with the filtered one.
The median filter is also a spatial filter, but it replaces the centre value in the Window with
the median of all the pixel values in the window. The kernel is usually square but can be any
shape. An example of median filtering of a single 3 x 3 window of values is shown below.
Unfiltered Values

6 2 0

3 97 4

19 3 10

In order: 0, 2, 3, 3, 4, 6, 10, 19, 97


Median Filtered

***
*4*
***

Centre value (previously 97) is replaced by the median of all nine values (4).

Note that the median is the fifth of the nine ordered values; for the window above, the median
filter therefore returns 4.
By contrast, a mean filter applied to the same window returns the value 16, since the sum of the
nine values in the window is 144 and 144/9 = 16; the single outlier (97) pulls the mean well above
the typical neighbourhood value. This illustrates one of the celebrated features of the median filter:
its ability to remove impulse noise (outlying values, either high or low). The median filter is also
widely claimed to be edge-preserving, since it theoretically preserves step edges without blurring.
However, in the presence of noise it does blur edges in images slightly.

2.12.3. Segmentation
Segmentation is the method to group areas of an image having similar characteristics or features into
distinct entities representing part of the image.
 Segmentation is a general term, which applies to various methods of data reduction as shown
in figure 2.15 and 2.16.
 In computer vision, segmentation refers to the process of partitioning a digital image into
multiple regions (sets of pixels).
 The goal of segmentation is to simplify and/or change the representation of an image into
something that is more meaningful and easier to analyze.
 Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in
images.
 The result of image segmentation is a set of regions that collectively cover the entire image,
or a set of contours extracted from the image.
 Each of the pixels in a region are similar with respect to some characteristic computed
property, such as colour, intensity, or texture.
 Adjacent regions are significantly different with respect to the same characteristic(s).
 Some of the practical applications of image segmentation are:
i. Locating objects in satellite images (roads, forests, etc.)
ii. Fingerprint recognition
iii. Automatic traffic control systems
iv. Machine vision
 Several general-purpose algorithms and techniques have been developed for image
segmentation.
 Since there is no general solution to the image segmentation problem, these techniques often
have to be combined with domain knowledge in order to effectively solve an image
segmentation problem for a problem domain.

Fig: 2.15 Image segmentation Technique

Methods of Segmentation

i) Histogram Based Methods


ii) Multiscale segmentation
iii) Semi-Automatic Segmentation
iv) Neural Network Segmentation
v) Applications of segmentation

(i) Histogram-Based Methods


 Histogram-based methods are very efficient when compared to other image segmentation
methods, because they typically require only one pass through the pixels.
 In this technique, a histogram is computed from all of the pixels in the image and the peaks
and valleys in the histogram are used to locate the clusters in the image. Colour or intensity
can be used as the measure.
A refinement of this technique is to recursively apply the histogram-seeking method to clusters in
the image in order to divide them into smaller clusters. This is repeated with smaller and smaller
clusters until no more clusters are formed.
Fig: 2.16 Image segmentation (a) Image pattern with grid; (b) segmented image after run test

Disadvantages
 A significant disadvantage of the histogram-seeking method is that it may be difficult to
identify meaningful peaks and valleys in the image.
 In this technique of image classification, distance metric and integrated region matching are
familiar.
(ii) Multi-Scale Segmentation
 Image segmentations are computed at multiple scales in scale-space and sometimes
propagated from coarse to fine scales.
 Segmentation criteria can be arbitrarily complex and may take into account global as well as
local criteria.
 A common requirement is that each region must be connected in some sense.
(iii) Semi-Automatic Segmentation
 In this kind of segmentation, the user outlines the region of interest with the mouse clicks and
algorithms are applied so that the path that best fits the edge of the image is shown.
 Techniques like Livewire or Intelligent Scissors are used in this kind of segmentation
(iv) Neural networks Segmentation
 Neural network segmentation relies on processing small areas of an image using a neural
network or a set of neural networks.
 After such processing, the decision-making mechanism marks the areas of the image according
to the category recognized by the neural network.
 Software packages such as GIMP and VXL provide tools in this area.
(v) Application of Segmentation
i. Detection of isolated points.
ii. Detection of lines and edges in an image.

2.12.4 Feature Extraction Techniques


The techniques available to extract feature values for two-dimensional cases can be roughly
categorized as those that deal with boundary features and those that deal with area features.
Feature extraction should:
- Reduce the size of the data on which inferences must be built.
- Retain the most informative pieces.
Typical features include:
 Edges and corners
 Texture
Mathematically/computationally
 In pattern recognition and in image processing, feature extraction is a special form of
dimensionality reduction.
 When the input data to an algorithm is too large to be processed and is suspected to be
notoriously redundant (much data, but not much information), then the input data will be
transformed into a reduced representation set of features (also called a feature vector).
 Transforming the input data into the set of features is called features extraction.
 If the features extracted are carefully chosen it is expected that the features set will extract the
relevant information from the input data in order to perform the desired task using this reduced
representation instead of the full size input.
 Feature extraction involves simplifying the amount of resources required to describe a large set
of data accurately.
 When performing analysis of complex data, one of the major problems stems from the number
of variables involved.
 Analysis with a large number of variables generally requires a large amount of memory and
computation power or a classification algorithm which overfits the training sample and
generalizes poorly to new samples.
 Feature extraction is a general term for methods of constructing combinations of the variables to
get around these problems while still describing the data with sufficient accuracy.
 Best results are achieved when an expert constructs a set of application dependent features.
Nevertheless, if no such expert knowledge is available general dimensionality reduction
techniques may help.
 These include:
i. Principal components analysis
ii. Semi definite embedding
iii. Multifactor dimensionality reduction
iv. Nonlinear dimensionality reduction
v. Latent semantic analysis
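As a small illustration of principal components analysis from the list above, the leading principal direction of a set of 2-D feature points can be computed in closed form from the covariance matrix (a sketch under that simplification; not from the text):

```python
import math

def principal_axis_2d(points):
    """Angle (radians) of the first principal component of 2-D points:
    the leading eigenvector of the covariance matrix
    [[sxx, sxy], [sxy, syy]] lies at 0.5 * atan2(2*sxy, sxx - syy)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

# Points along the line y = x have a principal axis at 45 degrees.
angle = principal_axis_2d([(0, 0), (1, 1), (2, 2)])
```

Projecting the points onto this axis reduces each 2-D point to a single coordinate, which is the dimensionality-reduction step the text describes.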
2.13 OBJECT RECOGNITION
The next step in image data processing is to identify the object the image represents. This
identification is accomplished using the extracted feature information described. The recognition
algorithm must be powerful enough to uniquely identify the object.
Object Recognition by Features
 This may include gray level histogram, morphological features such as area, perimeter, number of
holes, eccentricity, cord length, moments, etc. The information extracted is compares with a prior
information about the object, which may be in a lookup table.
 The second problem facing the vision system, after the edges are detected and stored, is the
recognition or identification of the current image.
 Three frequently used two-dimensional recognition strategies are template matching, edge and
region statistics, and statistical matching using the Stanford Research Institute (SRI)
algorithm or one of the many others developed in the past 15 years.

The two techniques used in object recognition are


i. Template matching technique
ii. Structural technique

2.13.1 Template matching technique


 Templates of the part to be recognized are stored in the vision system memory. The images
recorded by the vision camera are compared with the templates stored in memory to
determine if a matching part is present.
 Any change in the object's scale or a rotation of the view makes a match with the stored
template more difficult; this is the main problem such a system must overcome.
 In the teaching process, regions in the captured image are identified and recorded with the
correct contents or parts present. In operation, an image of this region is captured and
compared with the image recorded during the teaching process.
 If the image values for the regions are the same, the captured image matches the stored
template, and the part is identified.
 The edge and region statistics technique defines significant features for the parts studied, then
develops a method of evaluating these features from the pixel data.

Some of the more common features used in this method are the following

 Centre of area: A unique point from which all other points on the object are referenced
 Major axis: The major axis of an equivalent or best-fit ellipse
 Minor axis: The minor axis of an equivalent or best-fit ellipse
 Number of holes: The number of holes in the object's interior
 Angular relationships: The angular relationship of all major features to one another.
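The template-matching idea described in this section can be sketched in a few lines (a minimal illustration, assuming NumPy; the function name and the toy image are our own, not from the text):

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image; score each position by the
    sum of squared differences (SSD). Returns the best (row, col) and score."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw]
            score = float(np.sum((window - template) ** 2))  # 0 = perfect match
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy 8x8 image with a bright 2x2 "part" whose top-left corner is (3, 5).
image = np.zeros((8, 8))
image[3:5, 5:7] = 1.0
template = np.ones((2, 2))
pos, score = match_template(image, template)
```

As noted above, this brute-force match is sensitive to scale changes and rotations of the viewed part: the stored template only matches when the part appears at the taught size and orientation.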

2.13.2 Structural Technique
 Structural techniques of pattern recognition consider the relationships between features or edges
of an object.
 For example, if four primitive lines meet at right angles, the object is recognized as a rectangle.
 This type of technique is employed for structural recognition. The machine vision system is
trained with known objects.
 The system stores these objects in the form of extracted feature values that can be
subsequently compared against corresponding feature values from images of unknown objects.
 Training of the vision system is to be carried out as close to operating conditions as possible.
Table 12.2 shows the components of object recognition.

2.13.3 Thresholding
Thresholding is a binary conversion technique in which each pixel is converted into a binary
value, either black or white. Due to the nature of lighting and the sensor quantization noise creeping
into the picture, even an object with a uniform surface in a uniform background appears to be
composed of varying intensities. A histogram for the scene may be constructed, which is a plot of
intensity vs. the number of pixels having that intensity. A bright object against a dark
background yields a bimodal histogram: an intensity spread with two peaks and a valley. A
threshold close to the valley is chosen, and the picture is quantized such that all pixels having
intensities below the threshold are assigned one binary value and all pixels having intensities above
the threshold are assigned the other. The resulting binary image is ideally suited for robotic
applications. When there are many objects in the scene, the histogram may exhibit more than one valley.

Correlation techniques can then be used to obtain the threshold level.

 This is the most widely used technique in industrial vision application.


 The reasons are that it is fast and easily implemented and that lighting usually controllable in
industrial setting.

Gray level thresholding is the simplest segmentation process. Many objects or image regions
are characterized by constant reflectivity or light absorption of their surface. Threholding is
computationally inexpensive and fast. Thresholding can easily be done in real time using specialized

hardware. Complete segmentation can result from thresholding in simple scenes. After smoothing
the image, the objects are to be separated from the background.
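The thresholding step described above can be sketched as follows (assuming NumPy; the threshold of 115 is picked by hand near the valley of the toy bimodal histogram):

```python
import numpy as np

def threshold_image(image, threshold):
    """Quantize a gray-level image into a binary image: pixels with
    intensities below the threshold become 0, the rest become 1."""
    return (image >= threshold).astype(np.uint8)

# Bright object (~200) on a dark background (~30): a bimodal histogram
# with a valley between the two peaks, so any threshold near the valley works.
image = np.full((6, 6), 30, dtype=np.uint8)
image[2:4, 2:4] = 200
binary = threshold_image(image, threshold=115)
```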
2.13.4 Region Growing Methods
Region growing is a collection of segmentation techniques in which pixels are grouped into
regions based on attribute similarities. The first region growing method was the
seeded region growing method. This method takes a set of seeds as input along with the image. The
seeds mark each of the objects to be segmented. The regions are iteratively grown by comparing all
unallocated neighbouring pixels to the regions.

The difference between a pixel's intensity value and the region's mean, δ, is used as a
measure of similarity. The pixel with the smallest difference measured this way is allocated to the
respective region. This process continues until all pixels are allocated to a region. Seeded region
growing requires seeds as additional input. The segmentation results are dependent on the choice of
seeds, and noise in the image can cause the seeds to be poorly placed. Unseeded region growing is a
modified algorithm that doesn't require explicit seeds. It starts off with a single region A1; the pixel
chosen here does not significantly influence the final segmentation. At each iteration it considers the
neighbouring pixels in the same way as seeded region growing.

It differs from seeded region growing in that if the minimum δ is less than a predefined
threshold T, the pixel is added to the respective region Aj. If not, the pixel is considered significantly
different from all current regions Ai and a new region An+1 is created with this pixel.

One variant of this technique, proposed by Haralick and Shapiro (1985), is based on pixel
intensities. If the test statistic is sufficiently small, the pixel is added to the region, and the region's
mean and scatter are recomputed. Otherwise, the pixel is rejected, and is used to form a new region.
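A simplified region-growing sketch follows (assuming NumPy; it grows in FIFO order rather than always taking the globally smallest δ, so it is a loose approximation of seeded region growing, and the names are ours):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, delta_max):
    """Grow a region from a seed pixel: a 4-connected neighbour is added
    while |pixel - region mean| (the delta measure) is below delta_max.
    Returns a boolean mask of the grown region."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) < delta_max:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

# Bright 3x3 object on a dark background; the seed lies inside the object.
image = np.full((7, 7), 10.0)
image[2:5, 2:5] = 100.0
region = region_grow(image, seed=(3, 3), delta_max=50.0)
```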

2.13.5 Edge Detection


 Edge detection considers intensity change that occurs in the pixels at the boundary or edges of
the part.
 In machine vision, it is often necessary to distinguish one object from another. This is usually
accomplished by computing features that uniquely characterize the object.
 Given that a region of similar attributes has been found but the boundary shape is unknown,
the boundary can be determined by a simple edge-following procedure, illustrated by the
schematic of a binary image in figure 2.17. The procedure is to scan the image until a pixel
within the region is encountered; then, for a pixel within the region, turn left and step;
otherwise, turn right and step.
 The procedure is stopped when the boundary is traversed and the path has returned to the
starting pixel. The contour-following procedure described can be extended to gray-level
images.
Edge detection methods
 Edge detection is a well-developed field on its own within image processing.
 Region boundaries and edges are closely related, since there is often a sharp adjustment in
intensity at the region boundaries.

 Edge detection techniques have therefore been used as the basis of another segmentation
technique.

Fig: 2.17 Edge following procedure to detect the edge of a binary image.

 The edges identified by edge detection are often disconnected. To segment an object from an
image however, one needs closed region boundaries.
 Discontinuities are bridged if the distance between the two edges is within some
predetermined threshold.
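A minimal edge detector in the spirit of this section (assuming NumPy; simple forward differences stand in for the more elaborate operators used in practice):

```python
import numpy as np

def intensity_edges(image, threshold):
    """Mark a pixel as an edge where the intensity change to its right
    or lower neighbour exceeds the threshold."""
    gx = np.abs(np.diff(image.astype(float), axis=1))  # horizontal change
    gy = np.abs(np.diff(image.astype(float), axis=0))  # vertical change
    edges = np.zeros(image.shape, dtype=bool)
    edges[:, :-1] |= gx > threshold
    edges[:-1, :] |= gy > threshold
    return edges

# Binary image: a bright block on a dark background produces edges at
# the intensity discontinuities along its boundary.
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0
edges = intensity_edges(image, threshold=0.5)
```

The resulting edge pixels are generally disconnected, which is exactly why the bridging of discontinuities mentioned above is needed before a closed region boundary is obtained.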

2.14 APPLICATIONS
i. Large-scale industrial manufacture.
ii. Short-run manufacture of unique objects.
iii. Retail automation.
iv. Visual stock control and management systems (counting, barcode reading, store interface
for digital systems).
v. Control of Automated Guided Vehicles (AGVs).
vi. Automated monitoring of sites for security and safety.
vii. Monitoring of agricultural production.
viii. Quality control and refinement of food products.
ix. Consumer equipment control.
x. Artificial visual sensing for the blind.
xi. Medical imaging processes (e.g., interventional radiology).
xii. Medical remote examination and procedures.
xiii. Safety systems in industrial environments.
2.15 VISUAL SERVOING AND NAVIGATION
Today’s industrial robots tend to be planted at one location. By contrast, materials and people
in a factory generally move about. Providing robots with the capacity to move under their own power
would greatly increase their potential utilization. Robot mobility in the factory environment can be
used either to move materials between workstations or to extend the work volume of today’s
stationary robots. Outside of the factory, there are many possible applications for a mobile robot
capable of self-navigation. There are two basic ways in which robots can be made mobile: wheeled
vehicles and legged vehicles (walking machines). The following subsections will examine some of
the possibilities within these two categories.
2.15.1 Wheeled Vehicles
The current state-of-the-art in self-propelled machine locomotion is the automated guided
vehicle (AGV). AGVs are typically battery-powered, three- or four-wheeled systems; instructions to
start/stop or change routes are communicated to each vehicle electronically over radio frequencies.
Automated guided vehicle systems are typically used in today’s applications for moving materials in
warehouses and factories.
2.15.2 Walking Machines
Wheeled vehicles have limitations; they can only travel over relatively smooth surfaces.
Vehicles with tank tracks would be an improvement for rough terrain. Walking machines offer the
greatest versatility for dealing with a variety of surfaces and obstacles. However, walking machines
must overcome all of the same technological hurdles as autonomous wheeled vehicles, with the
additional problem of coordinating the motions of the legs. In addition, since it is assumed that such
vehicles will be used over rough terrain, they must be highly adaptive to the irregularities of the
terrain.

CHAPTER 3

ROBOT KINEMATICS
3.1 INTRODUCTION
Robot arm kinematics deals with the analytic study of the motion of a robot arm with respect
to a fixed reference coordinate system as a function of time. The mechanical manipulator can be
modelled as an open loop articulated chain with several rigid links connected in series by either
‘revolute’ or ‘prismatic’ joints driven by the actuators.
Kinematics
Kinematics means the analytical study of the geometry of motion of a mechanism with
respect to a fixed reference coordinate system, without regard to the forces or moments that cause
the motion. It refers to the study of geometric and time-based quantities like the position, velocity
and acceleration of every part of the robot.
Object Manipulation:
Manipulation is the skilful handling and treating of objects: picking them up, moving them,
fixing them to one another and working on them with tools. Before programming a robot to perform
such operations, it requires a method of specifying where the object is relative to the robot gripper
and a way of controlling the motion of the gripper.
Kinematic Model
Before a robot can move its hand to an object, the object must be located relative to it. There
is currently no simple method for measuring the location of a robot hand; most robots calculate the
position of their hand using a kinematic model of their arm.
Position and Orientation
 The state of the gripper of a robot is described by its position and orientation in space with respect
to a fixed frame called the base frame.

Fig: 3.1 Position and Orientation


 In figure 3.1, a fixed frame (x0, y0, z0) is chosen to be the base frame. In robotics, a coordinate
system (rigid frame) is attached to every member of the robot and to every object that is of
interest.

 Attributes like position, orientation, velocities, forces and torques are described with respect
to some frame of reference. These quantities can be re-expressed with respect to another
frame via transformation matrices.

3.2 FORWARD KINEMATICS


 For a manipulator, if the position and orientation of the end-effector are derived from the
given joint angles and link parameters, the scheme is called the forward kinematics problem.
 Given the joint angles, determine the position and orientation of the end-effector. For example,
for a revolute robot having three degrees of freedom, if the joint angles θ1, θ2, θ3 are specified,
then the position and orientation of the end-effector can be calculated.
 The outcome of the forward kinematics problem is always unique; there are no multiple
solutions. The block diagram for forward kinematics is given in figure 3.2.

3.3 INVERSE KINEMATICS


 If the joint angles and the different configurations of the manipulator are derived from the
position and orientation of the end-effector, the scheme is called the reverse (inverse)
kinematics problem.
 Given the position and the orientation of the end-effector, determine the numerical values
for the joint variables.
 This problem is not as straightforward as the forward kinematics problem. In general it
is not possible to obtain closed-form solutions, due to the non-linear simultaneous equations.
 Further, the non-linear nature of the problem leads to multiple solutions in certain cases. The
block diagram for inverse kinematics is given in figure 3.3.

3.4 DIFFERENCE BETWEEN FORWARD KINEMATICS AND INVERSE KINEMATICS

(a) Forward Kinematics:


 Given joint angles, compute the transformation between world and gripper coordinates.
 Relatively straightforward.

Fig: 3.2 Forward Kinematics

(b) Inverse Kinematics:


 Given the transformation between world coordinates and an arbitrary frame, compute the joint
angles that would line gripper coordinates up with that frame.
 For a kinematic mechanism, the inverse kinematics problem is difficult to solve.
 The robot controller must solve a set of non-linear simultaneous algebraic equations.

Fig: 3.3 Inverse Kinematics

3.4.1 Two Frame kinematic Relationship


 There is a kinematic relationship between two frames, basically a translation and rotation.
 The relationship is represented by 4 × 4 homogeneous transformation.

3.4.2 Transformations
 Transformations of frames are introduced to make modelling the relocation of objects easier.
 An object is described with respect to a frame located in the object, and this frame is relocated
with a transformation.
 The transformation is the result of a sequence of rotations and translations which are recorded with
a transformation equation.

3.4.3 Homogeneous Transformation


1. Homogeneous Transformation Matrix

    ATB = | R3×3  P3×1 |
          | 0     1    |

Where R is the 3×3 rotation matrix, P is the 3×1 position vector and 1 is the scaling factor.

2. Composite Homogeneous Transformation Matrix


 Transformation (rotation/translation) with respect to (X, Y, Z) (OLD FRAME): use pre-multiplication.
 Transformation (rotation/translation) with respect to (U, V, W) (NEW FRAME): use post-multiplication.
Composite Rotation Matrix
A sequence of finite rotations is represented by matrix multiplications, which do not commute. Rules:
 If the rotating coordinate frame O-U-V-W is rotating about a principal axis of the OXYZ frame, then
pre-multiply the previous (resultant) rotation matrix with the appropriate basic rotation matrix.
 If the rotating coordinate frame O-U-V-W is rotating about its own principal axes, then post-multiply
the previous (resultant) rotation matrix with the appropriate basic rotation matrix.

3.4.4 Homogeneous Representation


A frame in space (geometric interpretation):

    F = | R3×3  P3×1 |
        | 0     1    |

    F = | nx  sx  ax  px |
        | ny  sy  ay  py |
        | nz  sz  az  pz |
        | 0   0   0   1  |

with the principal axis n expressed with respect to the reference coordinate system.
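The 4 × 4 homogeneous transform above can be assembled and applied numerically (a NumPy sketch; the helper names are ours):

```python
import numpy as np

def homogeneous(R, p):
    """Assemble the 4x4 homogeneous transform [[R, p], [0, 1]] from a
    3x3 rotation matrix R and a 3-element position vector p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def rot_z(theta):
    """Basic rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Frame B is rotated 90 degrees about z and translated 1 unit along x of A.
T_AB = homogeneous(rot_z(np.pi / 2), [1.0, 0.0, 0.0])

# A point at (1, 0, 0) in frame B, expressed in frame A:
p_B = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous coordinates
p_A = T_AB @ p_B
```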

Manipulator Kinematics
In order to develop a scheme for controlling the motion of a manipulator, it is necessary to
develop techniques for representing the position of the arm at points in time. The robot manipulator
uses two basic elements: joints and links. Each joint represents one degree of freedom. The
joints may involve either linear motion or rotational motion between the adjacent links, which are
rigid structures that connect the joints. Joints are labelled starting from 1 and moving towards the
end-effector, with the base being joint 1. Figure 3.4 shows the labelling. The equation below
shows the relationship between joint space and task space, which is expressed through the Jacobian matrix.

Fig 3.4 Two different two-jointed manipulators: (a) RR robots (b) LL robots

 1  x 
   
   
 2   y
 1     
Forward x   
   y  
  2    
 3  z 
 
 3 
  z   
  Jacobian    
Kinematic  4   
   
Matrix  4 
 5 
      
   
   
Inverse  6     
 5  
   
   
 6  
 

Joint space Task space

3.5 FORWARD AND REVERSE KINEMATICS OF MANIPULATORS WITH TWO AND
THREE DEGREES OF FREEDOM (IN 2 DIMENSIONS)

3.5.1. Robot Motion Analysis


Robot motion analysis describes the geometry of the robot arm with respect to a reference
coordinate system, while the end-effector moves along the prescribed path.
This kinematic analysis involves two different kinds of problems:

i. Determining the coordinates of the end-effector or end of arm for a given set of joint
coordinates.
ii. Determining the joint coordinates for a given location of the end-effector or end of arm.

The position, V, of the end-effector can be defined in the Cartesian coordinate system.

3.5.2 Generally for robots the location of the end-effector can be defined in two systems
(a) Joint space
(b) World space (also known as global space)

(a) Joint Space


In joint space, the joint parameters such as rotating or twisting joint angles and variable link
lengths are used to represent the position of the end-effector.
Vj = (θ, α) for RR robots
Vj = (L1, L2) for LL robots
Vj = (α, L2) for TL robots
where Vj refers to the position of the end-effector in joint space.

(b) World space


In world space, rectilinear coordinates with reference to the basic Cartesian system are used to
define the position of the end-effector.
Usually the origin of the Cartesian axes is located in the robot's base:
VW = (x, y)
where VW refers to the position of the end-effector in world space.

Similarly, the transformation of coordinates from world space to joint space is known as backward or
reverse transformation.

3.5.3 Forward Kinematics of Manipulators with Two Degrees of Freedom (2 Dimension)


The transformation of the coordinates of the end-effector point from joint space to world
space is known as the forward kinematic transformation.
LL Robot
Let us consider a Cartesian LL robot. Figure 3.5 illustrates the scheme of forward and
reverse kinematics for the LL robot.

From the figure 3.6, Joints J1, and J2 are linear joints of variable lengths L1 and L2 . Let joint J1 be
denoted by (x1, y1) and joint J2 (x2, y2).

Fig: 3.5 LL Robot

Fig: 3.6 Two manipulator with two degree of freedom

From geometry, we can easily get the following:

x2 = x1 + L2
y2 = y1

These relations can be represented in homogeneous matrix form:

    [x2]   [1  0  L2] [x1]
    [y2] = [0  1  0 ] [y1]
    [1 ]   [0  0  1 ] [1 ]

or

X2 = T1 X1

where X2 = [x2, y2, 1]T, T1 is the 3 × 3 matrix above, and X1 = [x1, y1, 1]T.

If the end-effector point is denoted by (x, y), then:

x = x2
y = y2 − L3

Therefore,

    [x]   [1  0   0 ] [x2]
    [y] = [0  1  −L3] [y2]
    [1]   [0  0   1 ] [1 ]

or

X = T2 X2                                        . . . 3.1

Substituting X2 = T1 X1 into equation (3.1), we get

X = T2 (T1 X1) = TLL X1    [since T2 T1 = TLL]

where

    TLL = [1  0   L2]
          [0  1  −L3]
          [0  0   1 ]
RR Robot
Let θ and α be the joint angles at joints J1 and J2 respectively. Let J1 and J2 have the
coordinates (x1, y1) and (x2, y2) respectively.
From figure 3.7,

x2 = x1 + L2 cos θ
y2 = y1 + L2 sin θ                               . . . 3.2

In matrix form,

    [x2]   [1  0  L2 cos θ] [x1]
    [y2] = [0  1  L2 sin θ] [y1]
    [1 ]   [0  0  1       ] [1 ]

X2 = T1 X1

Fig: 3.7 RR Robot

On the other end:

x = x2 + L3 cos(α − θ)
y = y2 − L3 sin(α − θ)

In matrix form:

X = T2 X2                                        . . . 3.3

    [x]   [1  0   L3 cos(α − θ)] [x2]
    [y] = [0  1  −L3 sin(α − θ)] [y2]
    [1]   [0  0   1            ] [1 ]

Substituting X2 = T1 X1 into equation 3.3 and combining the two equations gives

X = T2 (T1 X1) = TRR X1    [since T2 T1 = TRR]

    TRR = [1  0  L2 cos θ + L3 cos(α − θ)]
          [0  1  L2 sin θ − L3 sin(α − θ)]
          [0  0  1                       ]
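The RR forward relations can be evaluated directly (NumPy assumed; the link lengths and base point are arbitrary illustrative values, and the angle convention θ / (α − θ) follows the equations above):

```python
import numpy as np

def rr_forward(theta, alpha, L2=1.0, L3=0.5, x1=0.0, y1=0.0):
    """End-effector position of the planar RR arm in the text's convention:
    x = x1 + L2 cos(theta) + L3 cos(alpha - theta)
    y = y1 + L2 sin(theta) - L3 sin(alpha - theta)."""
    x = x1 + L2 * np.cos(theta) + L3 * np.cos(alpha - theta)
    y = y1 + L2 * np.sin(theta) - L3 * np.sin(alpha - theta)
    return x, y

# With theta = alpha = 90 deg, (alpha - theta) = 0: the second link
# extends horizontally from the tip of the vertical first link.
x, y = rr_forward(np.pi / 2, np.pi / 2)
```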
TL Robot
Let α be the rotation at the twisting joint J1 and L2 be the variable link length at the linear joint J2,
as shown in figure 3.8.
One can write:

x = x2 + L2 cos α
y = y2 + L2 sin α

In matrix form:

    [x]   [1  0  L2 cos α] [x2]
    [y] = [0  1  L2 sin α] [y2]
    [1]   [0  0  1       ] [1 ]

or

X = TTL X2

Fig: 3.8 TL Robot

3.5.4 Backward Kinematic of Manipulators with Two Degrees of Freedom (In 2 Dimension)
LL Robot
In backward kinematic transformation, the objective is to derive the variable link lengths from
the known position of the end-effector in world space.

x = x1 + L2                                      . . . 3.4
y = y1 − L3                                      . . . 3.5

From equations 3.4 and 3.5, we get

L2 = x − x1
L3 = y1 − y
RR Robot

x = x1 + L2 cos θ + L3 cos(α − θ)                . . . 3.6
y = y1 + L2 sin θ − L3 sin(α − θ)                . . . 3.7

Combining equations 3.6 and 3.7, we can get the angles:

    cos α = [(x − x1)² + (y − y1)² − L2² − L3²] / (2 L2 L3)

    tan θ = [(y − y1)(L2 + L3 cos α) + (x − x1) L3 sin α]
            ───────────────────────────────────────────────
            [(x − x1)(L2 + L3 cos α) − (y − y1) L3 sin α]
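These inverse relations can be checked with a round trip (NumPy assumed): compute a pose from known angles using equations 3.6 and 3.7, then recover θ and α (taking the α branch in [0, π]):

```python
import numpy as np

def rr_inverse(x, y, L2=1.0, L3=0.5, x1=0.0, y1=0.0):
    """Joint values (theta, alpha) of the planar RR arm for a given
    end-effector position, following the formulas in the text."""
    dx, dy = x - x1, y - y1
    cos_a = (dx**2 + dy**2 - L2**2 - L3**2) / (2 * L2 * L3)
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))
    A = L2 + L3 * np.cos(alpha)   # coefficient multiplying cos/sin of theta
    B = L3 * np.sin(alpha)
    theta = np.arctan2(dy * A + dx * B, dx * A - dy * B)
    return theta, alpha

# Round trip: forward pose from known angles, then recover them.
theta0, alpha0 = np.radians(30.0), np.radians(60.0)
L2_, L3_ = 1.0, 0.5
x = L2_ * np.cos(theta0) + L3_ * np.cos(alpha0 - theta0)
y = L2_ * np.sin(theta0) - L3_ * np.sin(alpha0 - theta0)
theta, alpha = rr_inverse(x, y, L2_, L3_)
```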

TL Robot

x = x2 + L cos α                                 . . . 3.8
y = y2 + L sin α                                 . . . 3.9

The equation for the length:

L = √[(x − x2)² + (y − y2)²]

Substituting equations 3.8 and 3.9 into the above equation, we get

sin α = (y − y2) / L

3.6 FORWARD AND BACKWARD KINEMATICS OF MANIPULATORS WITH THREE


DEGREES OF FREEDOM (IN 2 DIMENSIONS)
Figure 3.9 shows the spherical robot configuration (RRP) used to determine the forward and
backward kinematics of a manipulator with three degrees of freedom in 2 dimensions.

Where,

J1 and J2 → joints which give the rotary motions

J3 → joint which gives the linear motion
Fig: 3.9 The forward and reverse kinematics solution of a spherical robot configuration

3.6.1 Forward kinematics (Three degrees of freedom in 2D manipulator)

Let (r1, r2) be the coordinates taken from the base to the end-effector, with θ1 and θ2 the
angles of the two rotary joints and L the extension of the linear joint:

r1 = cos θ1 (L cos θ2)
r2 = sin θ1 (L cos θ2)

Accounting for the base offset L1, the end-effector position is:

x = cos θ1 (L cos θ2)                            . . . 3.10
y − L1 = sin θ1 (L cos θ2)                       . . . 3.11

Squaring equations 3.10 and 3.11:

x² = cos² θ1 (L² cos² θ2)                        . . . 3.12
(y − L1)² = sin² θ1 (L² cos² θ2)                 . . . 3.13

Adding equations 3.12 and 3.13:

x² + (y − L1)² = L² cos² θ2 [sin² θ1 + cos² θ1]
x² + (y − L1)² = L² cos² θ2

L = [x² + (y − L1)²]^(1/2) / cos θ2              . . . 3.14

Dividing equation 3.13 by 3.12 and taking the square root on both sides:

sin² θ1 / cos² θ1 = (y − L1)² / x²
sin θ1 / cos θ1 = (y − L1) / x

tan θ1 = (y − L1) / x
sin θ1 = (y − L1) / (L cos θ2)
3.6.2 Reverse Kinematics (Three degrees of freedom in 2D manipulator)
Reverse kinematics finds the joint values from the known end-effector position. With a tool
offset L3 beyond the wrist:

x = cos θ1 (L cos θ2) + L3 cos θ1                . . . 3.15
y = L1 + sin θ1 (L cos θ2) + L3 sin θ1

The wrist point (x3, y3) is obtained by removing the tool offset:

x3 = x − L3 cos θ1                               . . . 3.16
y3 = y − L3 sin θ1

Substituting the x and y values from equation 3.15 into equation 3.16:

x3 = cos θ1 (L cos θ2)
(y3 − L1) = sin θ1 (L cos θ2)

Squaring on both sides, we get

x3² = cos² θ1 (L² cos² θ2)                       . . . 3.17
(y3 − L1)² = sin² θ1 (L² cos² θ2)                . . . 3.18

Adding equations 3.17 and 3.18:

x3² + (y3 − L1)² = L² cos² θ2

L = [x3² + (y3 − L1)²]^(1/2) / cos θ2

tan θ1 = (y3 − L1) / x3
sin θ1 = (y3 − L1) / (L cos θ2)
3.7 FORWARD AND BACKWARD KINEMATICS OF MANIPULATORS WITH FOUR
DEGREES OF FREEDOM (IN 3 DIMENSION)
The base-to-wrist link transformation for the TRLR type robot (shown in figure 3.10) involves
four joints: J1, J2, J3 and J4.
Here,
J1 → (T-type) Twisting joint used to rotate the base.
J2 → (R-type) Rotational joint used to rotate the arm (manipulator).
J3 → (L-type) Linear joint used for to-and-fro (extension) motion.
J4 → (R-type) Gripper joint which has a rotational motion.
The base to end-effector position is given by the coordinates P(x, y, z).
The base to wrist position is given by P4(x4, y4, z4).
L1 → Length between base and arm
L4 → Length between the wrist and gripper
L → Length between arm and wrist
θ1 → Twist angle made by joint 1
θ2 → Rotation angle made by the arm
α → Orientation angle made by the wrist (α = θ2 + θ4)

Fig: 3.10 The base to wrist link transformation matrix for 3 axis revolute joint robot

3.7.1 Forward Kinematics

The total values of x, y, z are given by:

x = cos θ1 (L cos θ2 + L4 cos α)                 . . . 3.18
y = sin θ1 (L cos θ2 + L4 cos α)                 . . . 3.19
z = L1 + L sin θ2 + L4 sin α                     . . . 3.20

In the backward transformation, we are given the world coordinates x, y, z and the wrist
orientation α. To find the joint values, we first locate joint 4.
The position from the base to the wrist, (x4, y4, z4), is given by:

x4 = x − cos θ1 (L4 cos α)                       . . . 3.21
y4 = y − sin θ1 (L4 cos α)                       . . . 3.22
z4 = z − L4 sin α                                . . . 3.23

Substituting equations 3.18, 3.19 and 3.20 into 3.21, 3.22 and 3.23, we get

x4 = cos θ1 (L cos θ2 + L4 cos α) − cos θ1 (L4 cos α)
   = L cos θ1 cos θ2 + L4 cos θ1 cos α − L4 cos θ1 cos α
x4 = L cos θ1 cos θ2                             . . . 3.24

y4 = sin θ1 (L cos θ2 + L4 cos α) − sin θ1 (L4 cos α)
   = L sin θ1 cos θ2 + L4 sin θ1 cos α − L4 sin θ1 cos α
y4 = L sin θ1 cos θ2                             . . . 3.25

z4 = L1 + L sin θ2 + L4 sin α − L4 sin α
z4 = L1 + L sin θ2

It can be written as

z4 − L1 = L sin θ2                               . . . 3.26

Squaring and adding equations 3.24, 3.25 and 3.26, we get

x4² + y4² + (z4 − L1)² = L² cos² θ1 cos² θ2 + L² sin² θ1 cos² θ2 + L² sin² θ2
                       = L² cos² θ2 + L² sin² θ2
                       = L²

The length is given by L = [x4² + y4² + (z4 − L1)²]^(1/2)

sin θ2 = (z4 − L1) / L
θ2 = sin⁻¹[(z4 − L1) / L]

Where α = θ2 + θ4, so

θ4 = α − θ2
3.7.2 Reverse kinematics

The backward transformation for a TRLR robot is explained by solving the problem below.

Example: 3.1

Given the world coordinates for a TRLR robot (similar to that in figure 3.10) as x = 300 mm, y = 350 mm,
z = 400 mm and α = 45°; and given that the links have values L0 = 0, L1 = 325 mm, λ3 has a range
from 300 to 500 mm, and L4 = 25 mm, determine the joint values θ1, θ2, λ3 and θ4.

Given:
x = 300 mm; y = 350 mm; z = 400 mm; L4 = 25 mm and α = 45°.
Solution:
To find θ1:

tan θ1 = y/x = 350/300 = 1.1667
θ1 = tan⁻¹(1.1667)
θ1 = 49.40°

Next, the position of joint 4 must be offset from the given x-y-z world coordinates:

x4 = x − cos θ1 (L4 cos α) = 300 − cos 49.40° (25 cos 45°) = 288.49 mm
y4 = y − sin θ1 (L4 cos α) = 350 − sin 49.40° (25 cos 45°) = 336.58 mm
z4 = z − L4 sin α = 400 − 25 sin 45° = 382.32 mm

The required extension of linear joint 3 can now be determined:

λ3 = √[x4² + y4² + (z4 − L1)²]
λ3 = √[(288.49)² + (336.58)² + (382.32 − 325)²]
λ3 = √199798
λ3 ≈ 447.0 mm    (within the allowable range of 300 to 500 mm)

Now θ2 can be found:

sin θ2 = (z4 − L1)/λ3 = (382.32 − 325)/447.0 = 0.128
θ2 = sin⁻¹(0.128)
θ2 = 7.37°

θ4 = α − θ2 = 45° − 7.37°
θ4 = 37.63°
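The arithmetic of this worked example can be verified numerically (NumPy assumed; the tolerances simply absorb the rounding used in the worked figures):

```python
import numpy as np

# World coordinates and link data for the TRLR example.
x, y, z = 300.0, 350.0, 400.0      # mm
L1, L4 = 325.0, 25.0               # mm
alpha = np.radians(45.0)           # wrist orientation

theta1 = np.arctan2(y, x)                        # base twist
x4 = x - np.cos(theta1) * L4 * np.cos(alpha)     # wrist point offset
y4 = y - np.sin(theta1) * L4 * np.cos(alpha)
z4 = z - L4 * np.sin(alpha)

lam3 = np.sqrt(x4**2 + y4**2 + (z4 - L1)**2)     # joint-3 extension
theta2 = np.arcsin((z4 - L1) / lam3)             # arm elevation
theta4 = alpha - theta2                          # wrist joint angle
```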
3.8 JACOBIANS
Let the linear velocity and the angular velocity of the end-effector be represented in
vector form by

V = [Vx, Vy, Vz, Wx, Wy, Wz]T

Let the joint angular velocities of a six-axis revolute robot be represented by

dθ/dt = [θ̇1, θ̇2, θ̇3, θ̇4, θ̇5, θ̇6]T

The vector V and dθ/dt are connected by a matrix known as the Jacobian, i.e.,

V = J (dθ/dt)

where J = J(θ) is the Jacobian.
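For a concrete (much smaller) case than the six-axis form above, the 2 × 2 Jacobian of a standard planar 2R arm can be written analytically and checked against finite differences (NumPy assumed; link lengths are illustrative):

```python
import numpy as np

L1, L2 = 1.0, 0.5  # illustrative link lengths

def fk(q):
    """End-effector position of a standard planar 2R arm, joint angles q = (q1, q2)."""
    q1, q2 = q
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def jacobian(q):
    """Analytic Jacobian J(q) = d(fk)/dq, so that V = J(q) dq/dt."""
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

# Central-difference check of the analytic Jacobian at an arbitrary pose.
q = np.array([0.3, 0.7])
J = jacobian(q)
eps = 1e-6
J_num = np.column_stack([(fk(q + eps * e) - fk(q - eps * e)) / (2 * eps)
                         for e in np.eye(2)])
```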

3.9 VELOCITY AND FORCES


 The term velocity generally means the linear velocity along the three axes as well as the
angular velocity about the three axes.
 In such a situation, the robot control systems are programmed to be in the position control
mode.
 In the second mode, the tool tip of the robot might be required to exert forces which are
specified along the three axes as well as torques about the three axes.
 In such a case, the robot control systems are programmed to be in the force control mode.
3.9.1 Position Control
 The individual joints are provided with separate control systems, if the desired displacement θd is known.
 It could serve as the reference input to the control system. The actual displacement θ1 is
measured by the sensors mounted at the joint.
 The difference signal e1 = (θd − θ1) is used to drive the joint motor M1, such that the error tends
to zero as time tends to infinity.
3.9.2 Force control
 The joints of an industrial robot can be driven in any one of the two following modes: (a)
Position control mode, or (b) Force control mode.
 A robot in the force control mode has the ability to exert the desired force on the work piece
after the contact with the job has been made.
 This ability is of vital importance when a robot is used for tightening bolts or nuts or when it
is employed for spot welding. Force control is complementary to position control.
 When a robot is moving, it is in the position control mode. After the contact is made, the robot
does not have to move any further and the control scheme is switched to the force control
mode.
 The desired force at the tool tip is developed by supplying the necessary torques at the various
joints with the help of servo motor.
 The actual (torques/forces) can be measured using sensors discussed earlier and the reflected
load torques.
3.9.3 Angular Velocity
The angular velocity of frame {i+1} resolved onto frame {i} is given by

iωi+1 = iωi + {the contribution due to the joint angular velocity at joint (i+1)}

    iωi+1 = iωi + [iRi+1] [0, 0, θ̇i+1]T

where θ̇ = dθ/dt.

 The rotation matrix comes into the picture because we have chosen the frame of reference to
be {i} while we are discussing the effect of a phenomenon associated with frame {i+1}.
 The angular velocity can also be expressed with respect to frame {i+1}:

    [i+1ωi+1] = [i+1Ri] [iωi+1]

Hence

    i+1ωi+1 = [i+1Ri] [iωi] + I3 [0, 0, θ̇i+1]T

where I3 is the identity matrix arrived at as a result of the product [i+1Ri][iRi+1].

3.9.4 Linear Velocity

The velocity of the origin of frame {i+1} with respect to the ith frame is

    iVi+1 = iVi + iωi × iρi+1                    . . . 3.37

 The frame of reference of all the vectors on both sides of the equation is the same, {i}. iρi+1 is the
arm length between the origin of {i+1} and the origin of {i}.
 The measurement of this arm length is made with respect to the frame of reference {i}.
 Pre-multiplying both sides of equation 3.37 by the rotation matrix [i+1Ri], we get

    [i+1Ri] [iVi+1] = [i+1Vi+1]

    [i+1Vi+1] = [i+1Ri] [iVi + iωi × iρi+1]      . . . 3.38

Equations 3.37 and 3.38 are sufficient to determine the relationship between the linear
velocities of the origins of the various frames and the angular velocities of the individual links in
space.
It may be noted that, for a general robot, the linear and angular velocities discussed need not
necessarily coincide with the linear joint velocities in the case of Cartesian robots or the joint angular
velocities in the case of revolute robots.

3.9.5 Static forces in manipulators


It deals with estimating the static forces and torques appearing at the various joints after the
robot’s tool makes contact with the job.
Forces
Let fi be the force exerted on link i by link i−1. Similarly, fi+1 is the force exerted on link i+1 by link i.
These forces can be resolved onto the three coordinate axes of frame {i} or onto the three coordinate
axes of frame {i+1}:

    ifi+1 = [iRi+1] [i+1fi+1]

This is the reflected force from frame {i+1} onto frame {i}, and it must be balanced by ifi:

    [ifi] = [ifi+1] = [iRi+1] [i+1fi+1]          . . . 3.46

Torques
The torque acting on the ith frame (as shown in figure 3.11) is equal to the reflected torque acting
on the (i+1)th frame plus the moment due to the force fi+1 acting on the torque arm iρi+1, i.e.,

    iηi = iηi+1 + iρi+1 × ifi+1

The various quantities on both sides of the equation are with respect to a single frame, viz. {i}, but

    iηi+1 = [iRi+1] [i+1ηi+1]                    . . . 3.47

    iηi = [iRi+1] [i+1ηi+1] + [iρi+1] × [ifi+1]  . . . 3.48

 The torque iηi has three components, corresponding to the axes xi, yi and zi.
 Of these components, those not about the joint axis are resisted by the various structural members
of the robot.

Fig: 3.11 Force and torque transformation between successive joints
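Equations 3.46 and 3.48 can be sketched as a short routine that propagates a tool-contact force and torque from frame {i+1} back to frame {i}; the vector helpers and function names are illustrative assumptions, not from the text.

```c
#include <assert.h>
#include <math.h>

/* Illustrative 3-vector type (not from the text). */
typedef struct { double x, y, z; } V3;

static V3 v3_add(V3 a, V3 b) { V3 c = { a.x + b.x, a.y + b.y, a.z + b.z }; return c; }

static V3 v3_cross(V3 a, V3 b) {
    V3 c = { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
    return c;
}

/* Row-major 3x3 rotation applied to a vector. */
static V3 v3_rot(const double R[3][3], V3 v) {
    V3 r = { R[0][0] * v.x + R[0][1] * v.y + R[0][2] * v.z,
             R[1][0] * v.x + R[1][1] * v.y + R[1][2] * v.z,
             R[2][0] * v.x + R[2][1] * v.y + R[2][2] * v.z };
    return r;
}

/* Propagate the static force (eq. 3.46) and torque (eq. 3.48) from frame
 * {i+1} back to frame {i}. R rotates frame-{i+1} components into frame {i};
 * rho is the position of the origin of {i+1} expressed in {i}. */
void static_backprop(const double R[3][3], V3 f_next, V3 eta_next, V3 rho,
                     V3 *f_i, V3 *eta_i) {
    *f_i   = v3_rot(R, f_next);                                /* eq. 3.46 */
    *eta_i = v3_add(v3_rot(R, eta_next), v3_cross(rho, *f_i)); /* eq. 3.48 */
}
```

For aligned frames, a 10 N tool load along z acting 1 m out along x reflects back unchanged as a force, plus a moment ρ × f of 10 Nt-m about −y.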

3.10 MANIPULATOR DYNAMICS


This field is devoted to the study of motion caused by forces and torques. One method of
controlling a robot while it follows a specified path requires the computation of the reflected torques
generated by the joint motion. The computation involves the solution of the dynamic equations of the
manipulator. These equations are non-linear in nature.
Potential energy of link i,

Pᵢ = −mᵢ g r̄ᵢ⁰ = −mᵢ g (T₀ᵢ r̄ᵢⁱ)

Where,
r̄ᵢ⁰ : centre of mass of link i with respect to the base frame
r̄ᵢⁱ : centre of mass of link i with respect to the ith frame
g = (gx, gy, gz, 0)
g : gravity row vector expressed in the base frame
|g| = 9.8 m/sec²

Potential energy of a robot arm:

P = Σᵢ₌₁ⁿ Pᵢ = Σᵢ₌₁ⁿ [−mᵢ g (T₀ᵢ r̄ᵢⁱ)]
3.11 TRAJECTORY GENERATOR
 Let the end-effector of the robot be required to move from a point A to another point B through
some specified intermediate points.
 A straight-line fit for the path from A to B through the intermediate points may not be preferable in
many situations.
 This is due to the discontinuity experienced in joint velocities and accelerations.
 To overcome this problem, "cubic splines" may be fitted to the path.

3.11.1 Trajectory planning
“Interpolate" or "approximate" the desired path by a class of polynomial functions and
generates a sequence of time-based “control set points” for the control of manipulator from the initial
configuration to its destination.
3.11.2 Trajectory Planning For Robotics
 Given the position and the orientation of the tool at the initial instant t₀ and the final instant tf, it
is possible to determine the joint angles at t₀ and tf.
 However, the way in which θf can be reached from θ₀ is purely arbitrary. We have to evolve a
suitable procedure to bring the arm from the initial to the final position.
 We can have a straight-line fit between θ₀ and θf with respect to time. However, this calls for
infinite acceleration at the beginning and at the end of the path. To overcome this problem, a
"cubic fit" could be attempted to generate the trajectory.
3.11.3 Joint Space Scheme
Cubic Polynomial Fit: Let the initial joint angle be θ₀ and the final joint angle be θf. We can fit a
cubic polynomial between θ₀ and θf.
Let θ(t) = a₀ + a₁t + a₂t² + a₃t³
be the cubic fit.
θ(t)|ₜ₌₀ = θ₀ ,  θ(t)|ₜ₌tf = θf ;  θ₀, θf and tf are given.
Clearly, at t = 0, a₀ = θ₀.
Now, θ'(t) = a₁ + 2a₂t + 3a₃t²
The initial velocity = final velocity = 0. From this equation, we see that
a₁ = 0
and a₁ + 2a₂tf + 3a₃tf² = 0.
Also, θf = a₀ + a₁tf + a₂tf² + a₃tf³.
Using these equations and the fact that a₁ = 0 and a₀ = θ₀,

a₂ = 3(θf − θ₀)/tf² ,   a₃ = −2(θf − θ₀)/tf³
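As a sketch (type and function names assumed, not from the text), the standard cubic coefficients for zero boundary velocities, a₂ = 3(θf − θ₀)/tf² and a₃ = −2(θf − θ₀)/tf³, can be packaged into a small routine that fits and evaluates the trajectory:

```c
#include <assert.h>
#include <math.h>

/* Cubic joint trajectory theta(t) = a0 + a1 t + a2 t^2 + a3 t^3 with zero
 * start and end velocity (Section 3.11.3). */
typedef struct { double a0, a1, a2, a3; } CubicFit;

CubicFit cubic_fit(double theta0, double thetaf, double tf) {
    CubicFit c;
    c.a0 = theta0;                                    /* theta(0) = theta0  */
    c.a1 = 0.0;                                       /* zero start velocity */
    c.a2 =  3.0 * (thetaf - theta0) / (tf * tf);
    c.a3 = -2.0 * (thetaf - theta0) / (tf * tf * tf);
    return c;
}

double cubic_position(const CubicFit *c, double t) {
    return c->a0 + c->a1 * t + c->a2 * t * t + c->a3 * t * t * t;
}

double cubic_velocity(const CubicFit *c, double t) {
    return c->a1 + 2.0 * c->a2 * t + 3.0 * c->a3 * t * t;
}
```

For θ₀ = 0°, θf = 90°, tf = 2 s, the joint passes through 45° at the midpoint and the velocity vanishes at both endpoints.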
3.12 MANIPULATOR MECHANISM DESIGN
Manipulators are designed by considering factors such as basing the design on task
requirements, kinematic configuration, quantitative measures of workspace attributes, redundant and
closed-chain structures, actuation schemes, stiffness and deflections, position sensing and force
sensing.
3.12.1 Basing the Design on Task Requirements
Although robots are nominally "universally programmable" machines capable of performing a
wide variety of tasks, economies and practicalities dictate that different manipulators be designed
for particular types of tasks. For example, large robots capable of handling payloads of hundreds of
pounds do not generally have the capability to insert electronic components into circuit boards. As
has been seen, not only the size, but the number of joints, the arrangement of the joints, and the types
of actuation, sensing, and control will all vary greatly with the sort of task to be performed.
3.12.2 Number of degrees of freedom
The number of degrees of freedom in a manipulator should match the number required by the
task. Not all tasks require a full six degrees of freedom. The most common such circumstance occurs
when the end-effector has an axis of symmetry. Figure 3.12 shows a manipulator positioning a
grinding tool in two different ways. In this case, the orientation of the tool with respect to the axis of
the tool, ZT, is immaterial, because the grinding wheel is spinning at several hundred RPM. This
6-DOF robot can be positioned in an infinity of ways for this task (rotation about ZT is a free
variable); hence, the robot is redundant for this task. Arc welding, spot welding, deburring, gluing,
and polishing provide other examples of tasks that often employ end-effectors with at least one axis
of symmetry.

Fig: 3.12 A 6-DOF manipulator with a symmetric tool contains a redundant degree of freedom.

In analyzing the symmetric-tool situation, it is sometimes helpful to imagine a fictitious joint
whose axis lies along the axis of symmetry. In positioning any end-effector to a specific pose, a total
of six degrees of freedom is needed. Because one of these six is our fictitious joint, the actual
manipulator need not have more than five degrees of freedom. If a 5-DOF robot were used in the
application of figure 3.12, then it would be back to the usual case in which only a finite number of
different solutions are available for positioning the tool. Quite a large percentage of existing
industrial robots are 5-DOF. Positioning parts on a planar surface requires three degrees of freedom
(x, y, and θ); in order to lift and insert the parts, a fourth motion normal to the plane is added (z). In
counting the number of degrees of freedom between the pipes and the end-effector, the tilt/roll
platform accounts for two, as shown in figure 3.13. This, together with the fact that arc welding
is a symmetric-tool task, means that, in theory, a 3-DOF manipulator could be used. In practice,
realities such as the need to avoid collisions with the workpiece generally dictate the use of a robot
with more degrees of freedom. Parts with an axis of symmetry also reduce the required degrees of
freedom for the manipulator.

Fig: 3.13 A tilt/roll platform provides two degrees of freedom to the overall manipulator
system.

3.12.3 Kinematic configuration
Once the required number of degrees of freedom has been decided upon, a particular
configuration of joints must be chosen to realize those freedoms. For serial kinematic linkages, the
number of joints equals the required number of degrees of freedom. Most manipulators are designed
so that the last n - 3 joints orient the end-effector and have axes that intersect at the wrist point, and
the first three joints position this wrist point. Manipulators with this design could be said to be
composed of a positioning structure followed by an orienting structure or wrist. These manipulators
always have closed-form kinematic solutions. Although other configurations exist that possess
closed-form kinematic solutions, almost every industrial manipulator belongs to this wrist-partitioned
class of mechanisms. Furthermore, the positioning structure is almost without exception designed to
be kinematically simple, having link twists equal to 0° or ±90° and having many of the link lengths
and offsets equal to zero. It has become customary to classify manipulators of the wrist-partitioned,
kinematically simple class according to the design of their first three joints (the positioning structure).
The following paragraphs briefly describe the most common of these classifications.
Cartesian
A Cartesian manipulator has perhaps the most straightforward configuration. As shown in
figure 3.14, joints 1 through 3 are prismatic, mutually orthogonal, and correspond to the X, Y, and Z
Cartesian directions. The inverse kinematic solution for this configuration is trivial. This
configuration produces robots with very stiff structures. As a consequence, very large robots can be
built. These large robots, often called gantry robots, resemble overhead gantry cranes. Gantry robots
sometimes manipulate entire automobiles or inspect entire aircraft. The other advantages of Cartesian
manipulators stem from the fact that the first three joints are decoupled. This makes them simpler to
design and prevents kinematic singularities due to the first three joints.
Their primary disadvantage is that all of the feeders and fixtures associated with an
application must lie "inside" the robot. Consequently, application workcells for Cartesian robots
become very machine dependent. The size of the robot's support structure limits the size and
placement of fixtures and sensors. These limitations make retrofitting Cartesian robots into existing
workcells extremely difficult.

Fig: 3.14 A Cartesian manipulator.

Articulated
Figure 3.15 shows an articulated manipulator, sometimes also called a jointed, elbow, or
anthropomorphic manipulator. A manipulator of this kind typically consists of two "shoulder" joints
(one for rotation about a vertical axis and one for elevation out of the horizontal plane), an "elbow"
joint (whose axis is usually parallel to the shoulder elevation joint), and two or three wrist joints at
the end of the manipulator. Both the PUMA 560 and the Motoman L-3 fall into this class.
Articulated robots minimize the intrusion of the manipulator structure into the workspace, making
them capable of reaching into confined spaces. They require much less overall structure than
Cartesian robots, making them less expensive for applications needing smaller workspaces.

Fig: 3.15 An articulated manipulator.

SCARA
The SCARA configuration, shown in figure 3.16, has three parallel revolute joints (allowing
it to move and orient in a plane), with a fourth prismatic joint for moving the end-effector normal to
the plane. The chief advantage is that the first three joints don't have to support any of the weight of
the manipulator or the load. In addition, link 0 can easily house the actuators for the first two joints.
The actuators can be made very large, so the robot can move very fast.

Fig: 3.16 A SCARA manipulator

Spherical
The spherical configuration in figure 3.17 has many similarities to the articulated
manipulator, but with the elbow joint replaced by a prismatic joint. This design is better suited to
some applications than is the elbow design. The link that moves prismatically might telescope—or
even "stick out the back" when retracted.

Cylindrical
Cylindrical manipulators figure 3.18 consist of a prismatic joint for translating the arm
vertically, a revolute joint with a vertical axis, another prismatic joint orthogonal to the revolute joint
axis, and, finally, a wrist of some sort.

Fig: 3.17 A spherical manipulator

Fig: 3.18 A cylindrical manipulator

3.12.4 Wrists
The most common wrist configurations consist of either two or three revolute joints with
orthogonal, intersecting axes. The first of the wrist joints usually forms joint 4 of the manipulator. A
configuration of three orthogonal axes will guarantee that any orientation can be achieved (assuming
no joint-angle limits).

Fig: 3.19 An orthogonal-axis wrist driven by remotely located actuators via three concentric
shafts.

Any manipulator with three consecutive intersecting axes will possess a closed-form
kinematic solution. Therefore, a three-orthogonal-axis wrist can be located at the end of the
manipulator in any desired orientation with no penalty. Figure 3.19 is a schematic of one possible
design of such a wrist, which uses several sets of bevel gears to drive the mechanism from remotely
located actuators. In practice, it is difficult to build a three-orthogonal-axis wrist not subject to rather
severe joint-angle limitations. Figure 3.20 shows a manipulator with a wrist whose axes do not
intersect but which does possess a closed-form kinematic solution, and figure 3.21 shows the typical
wrist design of a 5-DOF welding robot.

Fig: 3.20 A manipulator with a wrist whose axes do not intersect. However, this robot does
possess a closed-form kinematic solution.

Fig: 3.21 Typical wrist design of a 5-DOF welding robot

Quantitative Measures of Workspace Attributes


Manipulator designers have proposed several interesting quantitative measures of various
workspace attributes.
Efficiency of design in terms of generating workspace
Some designers noticed that it seemed to take more material to build a Cartesian manipulator
than to build an articulated manipulator of similar workspace volume. To get a quantitative handle on
this, first define the length sum of a manipulator as

L = Σᵢ₌₁ᴺ (aᵢ₋₁ + dᵢ)

where aᵢ₋₁ and dᵢ are the link length and joint offset of link i. Thus, the length sum of a manipulator
gives a rough measure of the "length" of the complete linkage. For prismatic joints, dᵢ must here be
interpreted as a constant equal to the length of travel between the joint-travel limits. The structural
length index, QL, is defined as the ratio of the manipulator's length sum to the cube root of the
workspace volume, that is,

QL = L / ∛W

where L is the length sum and W is the volume of the manipulator's workspace. Hence, QL
attempts to index the relative amount of structure (linkage length) required by different
configurations to generate a given work volume. Thus, a good design would be one in which a
manipulator with a small length sum nonetheless possessed a large workspace volume; good designs
have a low QL. Considering just the positioning structure of a Cartesian manipulator (and therefore
the workspace of the wrist point), the value of QL is minimized when all three joints have the same
length of travel. This minimal value is QL = 3.0. On the other hand, an ideal articulated manipulator,
such as the one in figure 3.15, has QL = 0.62. This helps quantify our earlier statement that
articulated manipulators are superior to other configurations in that they have minimal intrusion into
their own workspace.

Example : 3.2
A SCARA manipulator like that of figure 3.16 has links 1 and 2 of equal length l/2, and the range of
motion of the prismatic joint 3 is given by d₃. Assume for simplicity that joint limits are absent,
and find QL. What value of d₃ minimizes QL, and what is this minimal value?
The length sum of this manipulator is L = l/2 + l/2 + d₃ = l + d₃, and the workspace volume is that
of a right cylinder of radius l and height d₃; therefore,

QL = (l + d₃) / ∛(π l² d₃)

Minimizing QL as a function of the ratio d₃/l gives d₃ = l/2 as optimal. The corresponding minimal
value of QL is 1.29.

3.12.5 Designing well-conditioned workspaces


In some sense, the farther the manipulator is away from singularities, the better able it is to
move uniformly and apply forces uniformly in all directions. Several measures have been suggested
for quantifying this effect. The use of such measures at design time might yield a manipulator design
with a maximally large well-conditioned subspace of the workspace.
Singular configurations are given by

det(J(Θ)) = 0,

so it is natural to use the determinant of the Jacobian in a measure of manipulator dexterity. The
manipulability measure, w, is defined as

w = √det(J(Θ) Jᵀ(Θ)) ,

which, for a non-redundant manipulator, reduces to

w = |det(J(Θ))|

A good manipulator design has large areas of its workspace characterized by high values of
w. Whereas velocity analysis motivated the measure above, other researchers have proposed
manipulability measures based on acceleration analysis or force-application capability. One such
measure uses the Cartesian mass matrix

Mx(Θ) = J⁻ᵀ(Θ) M(Θ) J⁻¹(Θ)

as a measure of how well the manipulator can accelerate in various Cartesian directions, with a
graphic representation of this measure as an inertia ellipsoid, given by

Xᵀ Mx(Θ) X = 1,    ...3.39

the equation of an n-dimensional ellipse, where n is the dimension of X. The axes of the
ellipsoid given in 3.39 lie in the directions of the eigenvectors of Mx(Θ), and the reciprocals of the
square roots of the corresponding eigenvalues provide the lengths of the axes of the ellipsoid. Well-
conditioned points in the manipulator workspace are characterized by inertia ellipsoids that are
nearly spherical. Figure 3.22 shows graphically the properties of a planar two-link manipulator. In
the center of the workspace, the manipulator is well conditioned, as is indicated by nearly circular
ellipsoids. At workspace boundaries, the ellipses flatten, indicating the manipulator's difficulty in
accelerating in certain directions.

Fig: 3.22 Workspace of a 2-DOF planar arm, showing inertia ellipsoids

3.12.6 Redundant and Closed-Chain Structures


A micromanipulator is generally formed by several fast, precise degrees of freedom located
near the distal end of a "conventional" manipulator. The conventional manipulator takes care of large
motions, while the micromanipulator, whose joints generally have a small range of motion,
accomplishes fine motion and force control. Additional joints can also help a mechanism avoid
singular configurations. For example, any three-degree-of-freedom wrist will suffer from singular
configurations (when all three axes lie in a plane), but a four-degree-of-freedom wrist can effectively
avoid such configurations.
Figure 3.23 shows two configurations suggested for seven-degree-of-freedom manipulators.
The addition of a seventh joint allows the goal to be reached in an infinite number of ways,
permitting the desire to avoid obstacles to influence the choice.
Although we have considered only serial-chain manipulators in our analysis, some
manipulators contain closed-loop structures. For example, the Motoman L-3 robot possesses closed-
loop structures in the drive mechanism of joints 2 and 3. These structures offer the benefit of
increased stiffness of the mechanism, but they reduce the allowable range of motion of the joints and
thus decrease the workspace size.

Fig: 3.23 Two suggested seven-degree-of-freedom manipulator designs

Figure 3.24 depicts a Stewart mechanism, a closed-loop alternative to the serial 6-DOF
manipulator. The position and orientation of the "end-effector" is controlled by the lengths of the six
linear actuators which connect it to the base. At the base end, each actuator is connected by a two-
degree-of-freedom universal joint. At the end-effector, each actuator is attached with a three-degree-
of-freedom ball-and-socket joint. It exhibits characteristics common to most closed-loop
mechanisms: it can be made very stiff, but the links have a much more limited range of motion than
do serial linkages. The Stewart mechanism, in particular, demonstrates an interesting reversal in the
nature of the forward and inverse kinematic solutions.
In general, the number of degrees of freedom of a closed-loop mechanism is not obvious. The total
number of degrees of freedom can be computed by means of Grübler's formula

F = 6(l − n − 1) + Σᵢ₌₁ⁿ fᵢ    ...3.40

where F is the total number of degrees of freedom in the mechanism, l is the number of links
(including the base), n is the total number of joints, and fᵢ is the number of degrees of freedom
associated with the ith joint. A planar version of Grübler's formula (when all objects are considered
to have three degrees of freedom if unconstrained) is obtained by replacing the 6 in (3.40) with a 3.

Fig: 3.24 The Stewart mechanism is a six-degree-of-freedom fully parallel manipulator.

EXAMPLE 3.3
Use Grübler's formula to verify that the Stewart mechanism (figure 3.24) indeed has six degrees of
freedom.

The number of joints is 18 (6 universal, 6 ball-and-socket, and 6 prismatic in the actuators).
The number of links is 14 (2 parts for each actuator, the end-effector, and the base). The sum of all
the joint freedoms is 6 × 2 + 6 × 3 + 6 × 1 = 36.
Using Grübler's formula, we can verify that the total number of degrees of freedom is six:

F = 6(14 − 18 − 1) + 36 = 6
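Equation 3.40 and this example translate directly into a small function (names are assumptions for this sketch):

```c
#include <assert.h>

/* Gruebler's formula (equation 3.40): F = 6(l - n - 1) + sum of the joint
 * freedoms, where `links` counts links including the base and `joints`
 * counts all joints. */
int gruebler_dof(int links, int joints, const int freedoms[]) {
    int sum = 0;
    for (int i = 0; i < joints; i++)
        sum += freedoms[i];
    return 6 * (links - joints - 1) + sum;
}
```

Feeding in the Stewart mechanism's 14 links and 18 joints (six 2-DOF universal, six 3-DOF ball-and-socket, six 1-DOF prismatic) returns six degrees of freedom.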

Actuation Schemes:
One of the most important matters of concern is the actuation of the joints. Typically, the
actuator, reduction, and transmission are closely coupled and must be designed together.
Actuator location:
The most straightforward choice of actuator location is at or near the joint it drives. If the
actuator can produce enough torque or force, its output can attach directly to the joint. This
arrangement, known as a direct-drive configuration, has the advantages of simplicity in design and
superior controllability: with no transmission or reduction elements between the actuator and
the joint, the joint motions can be controlled with the same fidelity as the actuator itself.
Unfortunately, many actuators are best suited to relatively high speeds and low torques and therefore
require a speed-reduction system. If they can be located remotely from the joint and toward the base
of the manipulator, the overall inertia of the manipulator can be reduced considerably. This, in turn,
reduces the size needed for the actuators. To realize these benefits, a transmission system is needed
to transfer motion from the actuator to the joint.

3.12.7 Reduction and transmission systems:


Gears are the most common element used for reduction. They can provide for large
reductions in relatively compact configurations. Gear pairs come in various configurations for
parallel shafts (spur gears), orthogonal intersecting shafts (bevel gears), skew shafts (worm gears or
cross helical gears), and other configurations. Different types of gears have different load ratings,
wear characteristics, and frictional properties The major disadvantages of using gearing are added
backlash and friction. Backlash, which arises from the imperfect meshing of gears, can be defined as
the maximum angular motion of the output gear when the input gear remains fixed. If the gear teeth
are meshed tightly to eliminate backlash, there can be excessive amounts of friction. Very precise
gears and very precise mounting minimize these problems, but also increase cost.
The gear ratio, η, describes the speed-reducing and torque-increasing effects of a gear pair.
For speed-reduction systems, we will define η > 1; then the relationships between input and output
speeds and torques are given by

θ̇₀ = (1/η) θ̇ᵢ
τ₀ = η τᵢ

where θ̇₀ and θ̇ᵢ are the output and input speeds, respectively, and τ₀ and τᵢ are the output and input
torques, respectively.
The second broad class of reduction elements includes flexible bands, cables, and belts.
Because all of these elements must be flexible enough to bend around pulleys, they also tend to be
flexible in the longitudinal direction. They have the ability to combine transmission with reduction.

As is shown in figure 3.25, when the input pulley has radius r₁ and the output pulley has radius r₂,
the "gear" ratio of the transmission system is

η = r₂ / r₁

Fig: 3.25 Band, cable, belt, and chain drives have the ability to combine transmission with
reduction.

Whereas the Lead screws and ball-bearing screws combine a large reduction and transformation from
rotary to linear motion as shown in figure 3.26.

Fig: 3.26 Lead screws (a) and ball-bearing screws (b) combine a large reduction and
transformation from rotary to linear motion.

3.12.8 Stiffness and Deflections


An important goal for the design of most manipulators is overall stiffness of the structure and
the drive system. Stiff systems provide two main benefits. First, because typical manipulators do not
have sensors to measure the tool frame location directly, it is calculated by using the forward
kinematics based on sensed joint positions; deflections of a flexible structure therefore appear
directly as end-point positioning errors. Second, flexibilities in the structure or drive train will lead
to resonances, which have an undesirable effect on manipulator performance. In this section, we
consider issues of stiffness and the resulting deflections under loads.

(i)Flexible elements in parallel and in series


The combination of two flexible members of stiffness k₁ and k₂ "connected in parallel" produces the
net stiffness

k_parallel = k₁ + k₂ ;

"connected in series," the combination produces the net stiffness

1/k_series = 1/k₁ + 1/k₂

In considering transmission systems, we often have the case of one stage of reduction or transmission
in series with a following stage of reduction or transmission
(ii) Shafts
A common method for transmitting rotary motion is through shafts. The torsional stiffness of a round
shaft can be calculated as

k = G π d⁴ / (32 l)

where d is the shaft diameter, l is the shaft length, and G is the shear modulus of elasticity (about 7.5
× 10¹⁰ Nt/m² for steel, and about a third as much for aluminium).
(iii) Gears
Gears, although typically quite stiff, introduce compliance into the drive system. An approximate
formula to estimate the stiffness of the output gear (assuming the input gear is fixed) is given as

k = Cg b r²

where b is the face width of the gears, r is the radius of the output gear, and Cg = 1.34 × 10¹⁰ Nt/m²
for steel.
Gearing also has the effect of changing the effective stiffness of the drive system by a factor of η².
If the stiffness of the transmission system prior to the reduction (i.e., on the input side) is kᵢ, so that

τᵢ = kᵢ θᵢ

and the stiffness of the output side of the reduction is k₀, so that

τ₀ = k₀ θ₀

then we can compute the relationship between kᵢ and k₀ (under the assumption of a perfectly rigid
gear pair) as

k₀ = τ₀/θ₀ = (η kᵢ θᵢ)/(θᵢ/η) = η² kᵢ

Hence, a gear reduction has the effect of increasing the stiffness by the square of the gear ratio.

EXAMPLE 3.4

A shaft with torsional stiffness equal to 500.0 Nt-m/radian is connected to the input side of a gear set
with η = 10, whose output gear (when the input gear is fixed) exhibits a stiffness of 5000.0
Nt-m/radian. What is the output stiffness of the combined drive system?

1/k_series = 1/5000.0 + 1/(10²(500.0))

or

k_series = 50000/11 = 4545.4 Nt-m/radian

When a relatively large speed reduction is the last element of a multi-element transmission system,
the stiffnesses of the preceding elements can generally be ignored.
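Example 3.4 combines the η² reflection rule with the series-spring formula; as a sketch (the function name is an assumption):

```c
#include <assert.h>
#include <math.h>

/* Output stiffness of a shaft (stiffness k_shaft on the input side of a
 * reduction with gear ratio eta) in series with the output-gear stiffness
 * k_gear: the shaft stiffness is first reflected through the gears by
 * eta^2, then the two springs are combined in series. */
double drive_output_stiffness(double k_shaft, double eta, double k_gear) {
    double k_reflected = eta * eta * k_shaft;  /* eta^2 reflection rule */
    return 1.0 / (1.0 / k_gear + 1.0 / k_reflected);
}
```

Calling it with the numbers of Example 3.4 (500, η = 10, 5000) reproduces the 4545.4 Nt-m/radian result, and shows why a large final reduction makes the upstream elements nearly irrelevant.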

Belts
In such a belt drive as that of figure 3.25, stiffness is given by

k = A E / l

where A is the cross-sectional area of the belt, E is the modulus of elasticity of the belt, and l is the
length of the free belt between pulleys plus one-third of the length of the belt in contact with the
pulleys.
Links

Fig: 3.27 Simple cantilever beam used to model the stiffness of a link to an end load.

As a rough approximation of the stiffness of a link, we might model a single link as a cantilever beam
and calculate the stiffness at the end point, as in figure 3.27. For a round hollow beam, this stiffness
is given by

k = 3πE(d₀⁴ − dᵢ⁴) / (64 l³)

where dᵢ and d₀ are the inner and outer diameters of the tubular beam, l is the length, and E is the
modulus of elasticity (about 2 × 10¹¹ Nt/m² for steel, and about a third as much for aluminium). For a
square-cross-section hollow beam, this stiffness is given by

k = E(w₀⁴ − wᵢ⁴) / (4 l³)

where w₀ and wᵢ are the outer and inner widths of the beam (i.e., the wall thickness is w₀ − wᵢ).

EXAMPLE 3.5
A square-cross-section link of dimensions 5 × 5 × 50 cm with a wall thickness of 1 cm is driven by a
set of rigid gears with η = 10, and the input of the gears is driven by a shaft having diameter 0.5 cm
and length 30 cm. What deflection is caused by a force of 100 Nt at the end of the link?

Using

k = E(w₀⁴ − wᵢ⁴) / (4 l³)

to calculate the stiffness of the link,

k_link = (2 × 10¹¹)(0.05⁴ − 0.04⁴) / (4 × 0.5³) = 1.476 × 10⁶ Nt/m

Hence, for a load of 100 Nt, the deflection in the link itself is

x = 100 / k_link = 6.8 × 10⁻⁵ m,

or 0.0068 cm.
Additionally, 100 Nt at the end of a 50-cm link places a torque of 50 Nt-m on the output gear. The
gears are rigid, but the flexibility of the input shaft is

k_shaft = (7.5 × 10¹⁰)(π)(0.005)⁴ / (32 × 0.3) = 15.3 Nt-m/radian

which, viewed from the output gear, is

k′_shaft = (10²)(15.3) = 1530.0 Nt-m/radian.

Loading with 50 Nt-m causes an angular deflection of

δθ = 50.0 / 1530.0 = 0.0326 radian,

so the total linear deflection at the tip of the link is

x = 0.0068 + (0.0326)(50) = 0.0068 + 1.630 = 1.637 cm.

In this solution, it is assumed that the shaft and link are made of steel. The stiffness of both members
is linear in E, the modulus of elasticity, so, for aluminium elements, the deflections are about three
times larger.
Position sensing
Virtually all manipulators are servo-controlled mechanisms—that is, the force or torque
command to an actuator is based on the error between the sensed position of the joint and the desired
position. This requires that each joint have some sort of position-sensing device. The most common
approach is to locate a position sensor directly on the shaft of the actuator. If the drive train is stiff
and has no backlash, the true joint angles can be calculated from the actuator shaft positions. Such
co-located sensor and actuator pairs are easiest to control. The most popular position-feedback device
is the rotary optical encoder. As the encoder shaft turns, a disk containing a pattern of fine lines
interrupts a light beam. A photo detector turns these light pulses into a binary waveform. Typically,
there are two such channels, with square-wave pulse trains 90 degrees out of phase. The shaft angle is
determined by counting the number of pulses, and the direction of rotation is determined by the
relative phase of the two square waves. Additionally, encoders generally emit an index pulse at one
location, which can be used to set a home position in order to compute an absolute angular position.
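The counting logic described above (two channels 90 degrees out of phase, direction from their relative phase) is typically implemented as a small state table; a sketch with assumed names:

```c
#include <assert.h>

/* Quadrature decoding for a two-channel incremental encoder. The channel
 * pair (A,B) forms a 2-bit state that steps through 00 -> 01 -> 11 -> 10
 * in one direction of rotation and in reverse order in the other. Given
 * the previous and current states, return +1 or -1 counts, or 0 for no
 * change or an invalid (skipped) transition. */
int quadrature_step(unsigned prev, unsigned curr) {
    static const int table[16] = {
         0, +1, -1,  0,
        -1,  0,  0, +1,
        +1,  0,  0, -1,
         0, -1, +1,  0
    };
    return table[((prev & 3u) << 2) | (curr & 3u)];
}
```

Accumulating the returned steps gives the shaft position in counts; an index-pulse channel can then zero the count at the home position to obtain an absolute angle.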
Force sensing
A variety of devices have been designed to measure forces of contact between a manipulator's
end-effector and the environment that it touches. Most such sensors make use of sensing elements
called strain gauges, of either the semiconductor or the metal-foil variety. These strain gauges are
bonded to a metal structure and produce an output proportional to the strain in the metal. In this type
of force-sensor design, the issues the designer must address include the following:
1. Number of sensors needed to resolve the desired information.
2. Mounting of sensors relative to each other on the structure.
3. Structure that allows good sensitivity while maintaining stiffness.
4. Protection against mechanical overload built into the device.

There are three places where such sensors are usually placed on a manipulator:
1. At the joint actuators. These sensors measure the torque or force output of the actuator/reduction
itself. These are useful for some control schemes, but usually do not provide good sensing of contact
between the end-effector and the environment.
2. Between the end-effector and last joint of the manipulator. These sensors are usually referred to as
wrist sensors. They are mechanical structures instrumented with strain gauges, which can measure
the forces and torques acting on the end-effector. Typically, these sensors are capable of measuring
from three to six components of the force/torque vector acting on the end-effector.
3. At the "fingertips" of the end-effector. Usually, these force-sensing fingers have built-in strain
gauges to measure from one to four components of force acting at each fingertip.

CHAPTER 4

ROBOT PROGRAMMING
4.1 INTRODUCTION
For the robot user to get a task done by the robot manipulator, an effective and efficient
communication method is needed. Several communication techniques have been developed to suit
different applications. The methods are: the lead-through teaching method, speech recognition, and
programming.

4.2 PROGRAMMING EMBEDDED SYSTEM IN C


Robots are no longer confined to industrial automation. They are becoming increasingly
reliable, affordable and user friendly. In addition, they are improving the quality of life. Robots are
performing everyday household tasks such as vacuum cleaning and personal assistance. The medical
device industry is utilizing controller microchips that translate muscle movements into prosthetic
responses. As the demand for these robots grows, so does the need for qualified professionals.

In a typical mechanically oriented task, robots use sensors, actuators, and software to perceive
their environment and safely perform programmed goals. An embedded system resides inside the
robot, tying together the different subsystems. Without an embedded system, robots would need to
rely on external computing systems, which can increase safety risks due to delays and failures in the
communication link between the robot and its external control system.

4.2.1 Embedded System


An embedded system is a combination of computer hardware and software, and perhaps
additional mechanical or other parts, designed to perform a specific function. In other words, an
embedded system is a microprocessor-based computer hardware system with software that is
designed to perform a dedicated function, either as an independent system, or as a part of a large
system.
Some real time examples of embedded systems are:
 Mobile phone systems (including both customer handsets and base stations).
 Automotive applications (including braking systems, traction control, airbag release systems,
engine-management units, steer-by-wire systems and cruise control applications).
 Domestic appliances (including dishwashers, televisions, washing machines, microwave ovens,
video recorders, security systems, garage door controllers).
4.2.2 C Language
C programming is a general-purpose, procedural, imperative computer programming
language developed in 1972 by Dennis M. Ritchie at the Bell Telephone Laboratories to develop the
UNIX operating system. C is the most widely used computer language.
4.2.3 Embedded C
Embedded C is a generic term given to a programming language written in C, which is
associated with a particular hardware architecture. Embedded C is an extension to the C language
with some additional header files. These header files may change from controller to controller.

4.2.4 Programming Embedded System using Embedded C
In every embedded system-based project, Embedded C programming plays a key role to make
the microcontroller run & perform the preferred actions. At present, we normally utilize several
electronic devices like mobile phones, washing machines, security systems, refrigerators, digital
cameras, etc. The controlling of these embedded devices can be done with the help of an embedded C
program. Both the embedded C and C languages are the same and implemented through some
fundamental elements like a variable, character set, keywords, data types, declaration of variables,
expressions, statements. All these elements play a key role while writing an embedded C program.

The designing of an embedded system can be done using Hardware & Software. For instance,
in a simple embedded system, the processor is the main module that works like the heart of the
system. Here a processor is nothing but a microprocessor, DSP, microcontroller, CPLD & FPGA. All
these processors are programmable so that it defines the working of the device.

An Embedded system program allows the hardware to check the inputs & control outputs
accordingly. In this procedure, the embedded program may have to control the internal architecture
of the processor directly like Timers, Interrupt Handling, I/O Ports, serial communications interface,
etc. There are different programming languages are available for embedded systems such as C, C++,
assembly language, JAVA, JAVA script, visual basic, etc. There are different steps involved in
designing an embedded C program is explained below.

(i) Comments
Comments are important in programming languages: they describe the function of the program.
Comments are non-executable code that is used to provide documentation for the program, and they
make the function of the program easy to understand. There are two types of comments in embedded
C programming:
 Single Line Comment
 Double Line Comment or Multi Line Comment
Single Line Comment
Single line comments are generally used to explain a part of the code. They start with a double slash
(//) and can be placed anywhere in the program; everything after the // on that line is ignored.
Double Line Comment or Multi Line Comment
Multi-line comments start with a slash and an asterisk (/*) and end with an asterisk and a slash (*/);
they can be used to explain a block of code and can be placed anywhere in the program. Multi-line
comments are used to ignore a complete block of code in a program.
(ii) Preprocessor Directives
Preprocessor directives are lines included in the code of programs which are preceded by a hash sign
(#). These lines are not program statements but directives for the preprocessor. The preprocessor
inspects the code before actual compilation begins and resolves all these directives before any code is
generated from regular statements. Although there are many different preprocessor directives, two
are especially useful in embedded C programming: #include and #define. A header file, included
with #include, contains C declarations and macro definitions to be shared between several source
files.

The #include directive is normally used to include standard library header files, such as stdio.h,
which provide access to I/O functions from the C library. The #define directive is normally used to
define constants and to name operations so that they can be performed in a single instruction; such
definitions are called macros.

(iii) Port Configuration:


Every microcontroller consists of several ports, and each port contains several pins which can be
used to control the interfacing devices. These pins are declared in a program using keywords.
Embedded C provides standard predefined keywords such as bit, sbit and sfr, which can be used to
declare single pins and bits in a program.
Table: 4.1 Port configurations
Name Function
Sbit Accessing of single bit
Bit Accessing of bit addressable memory of RAM
Sfr Accessing of sfr register by another name

sbit: This data type is used for accessing a single bit of an SFR register.
Syntax: sbit variable_name = SFR_bit;

Ex: sbit a = P2^1;

Explanation: If we give P2.1 the variable name ‘a’, then we can use ‘a’ instead of P2.1 anywhere in
the program, which reduces the complexity of the program.

bit: This data type is used for accessing the bit-addressable memory of RAM (20h–2Fh).
Syntax: bit variable_name;
Ex: bit c;
Explanation: It is a bit sequence setting in a small data area that is used by a program to remember
something.
sfr: This data type is used to access an SFR register (a peripheral port) by another name. All the SFR
registers must be declared with capital letters.

Syntax: sfr variable_name = SFR_address;

Ex: sfr port0 = 0x80;
Explanation: If we give the address 0x80 the name ‘port0’, then we can use ‘port0’ instead of 0x80
anywhere in the program, which reduces the complexity of the program.

(iv) SFR Register: ‘Special Function Register’ is abbreviated as SFR. The 8051 microcontroller
has 256 bytes of RAM memory, which is separated into two parts: the first 128 bytes are used for
data storage, and the other 128 bytes are used for the SFR registers. All peripheral devices, such as
timers, counters and I/O ports, are accessed through SFR registers, and each has a unique address.

(v) Global Variables


A variable declared before the main function is called a global variable; it can be accessed by any
function in the program. The lifetime of a global variable extends until the program comes to an
end.

(vi) Main Function or Core Function
The main function is the core of every program; execution starts with the main function. Every
program uses only one main function, because if a program contained more than one main function,
the compiler could not determine where to start program execution.
(vii) Variable Declaration
A variable is a name that can be used to store values. A variable must be declared before it is used
in the program; the declaration of a variable specifies its name and data type. The storage
representation of data is called a data type. Embedded C programming uses basic data types such as
float, integer and character to store data in memory. The size and range of each data type depend on
the compiler.

Data type            Size     Range

Char or signed char  1 byte   -128 to +127
Unsigned char        1 byte   0 to 255
Int or signed int    2 bytes  -32768 to +32767
Unsigned int         2 bytes  0 to 65535

(viii) Program Logic


Program logic is the plan or path that presents the theory behind, and the expected outcomes of, a
program’s actions. It describes the assumption or theory about why the program will work, showing
the acknowledged effects of activities and resources. The LED flash-light program below (Fig. 4.1)
explains the steps involved in designing an embedded C program.

Fig. 4.1 LED flash light program
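The figure is not reproduced here, but a typical LED flash-light program of the kind it shows can be sketched in Keil-style embedded C. The port pin (P1.0), the active-low wiring and the delay length are assumptions for illustration; they vary with the target board:

```c
#include <reg51.h>        /* 8051 register definitions (Keil header)   */

sbit LED = P1^0;          /* assumed: LED wired to port 1, pin 0       */

static void delay(void)   /* crude software delay loop                 */
{
    unsigned int i;
    for (i = 0; i < 50000; i++)
        ;                 /* burn time; duration depends on the clock  */
}

void main(void)
{
    while (1) {           /* flash forever                             */
        LED = 0;          /* assumed active-low: LED on                */
        delay();
        LED = 1;          /* LED off                                   */
        delay();
    }
}
```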

4.3 READING SWITCHES


Embedded systems usually execute a single program, which begins running when the device is
powered up. The key challenge for desktop programmers moving into the embedded market is the
implementation of the user interface. On the desktop, design of the user interface means working
with a high-resolution graphics screen, some form of mouse (or equivalent ‘pointing’ device), and a
large keyboard.
Design freedom is restricted by the fact that the user of your application wants to have a
similar ‘look and feel’ to other applications that he or she uses. To match these design constraints and
speed up the development process developers will typically use some form of standard code library
when building applications, rather than attempting to create all the code from scratch. As Figure 4.2
illustrates, development of the user interface for a modern desktop application will almost
invariably mean working with a high-resolution graphics screen, a keyboard and a mouse, using code
libraries written in, say, Java or C++.

In the embedded world, it may appear – at first sight at least – that there are fewer constraints.
Instead, it can seem that there is a ‘free for all’ where every developer will implement a different
interface to their system. However, there is at least one common denominator: embedded systems
usually use switches as part of their user interface. Figure 4.3 illustrates a collection of user-interface
components taken from a range of different embedded systems. Most such interfaces contain at least
one switch. The general rule applies from the most basic remote-control system for opening a garage
door, right up to the most sophisticated aircraft autopilot system. Whatever the system you create,
you need to be able to create a reliable switch interface.

Fig. 4.2 Developing the user interface for a modern desktop application

Fig. 4.3 A collection of user-interface components

4.3.1 Basic techniques for reading from port pins


The control of the 8051 microcontroller ports is carried out using 8-bit latches (SFRs). We
can send some data to Port 1 as follows:

sfr P1 = 0x90; // Usually in header file
P1 = 0x0F; // Write 00001111 to Port 1

In exactly the same way, we can read from Port 1 as follows:

unsigned char Port_data;


P1 = 0xFF; // Set the port to 'read mode'
Port_data = P1; // Read from the port

After the microcontroller is reset, the port latches all have the value 0xFF (11111111 in
binary): that is, all the port-pin latches are set to values of ‘1’. It is tempting to assume that writing
data to the port is therefore unnecessary, and that we can get away with the following version:

unsigned char Port_data;


// Assume nothing written to port since reset
// – DANGEROUS!!!
Port_data = P1;

The problem with this code is that, in simple test programs, it works: this can lull the
developer into a false sense of security. If, at a later date, someone modifies the program to include a
routine for writing to all or part of the same port, this code will not generally work as required:

unsigned char Port_data;


P1 = 0x00;
...
// Assumes nothing written to port since reset
// – WON’T WORK
Port_data = P1;

In most cases, initialization functions are used to set the port pins to a known state at the start
of the program. Where this is not possible, it is safer to always write ‘1’ to any port pin before
reading from it.

4.4 MAKING SENSE OF ACTUATORS


Actuators are devices used for converting hydraulic, pneumatic or electrical energy into
mechanical energy; the mechanical energy is then used to get the work done. Actuators perform the
function opposite to that of a pump.
Hydraulic actuators produce linear, rotary or oscillating motion. They can be used for
lifting, tilting, clamping, opening, closing, metering, mixing, turning and many other
operations.
Types of Actuators
 Hydraulic Actuators
 Pneumatic Actuators
 Electrical Actuators
(a) Servo Motor;
(b) Stepper motor;
(c) AC/DC motor

4.4.1 Hydraulic Actuators
Hydraulic actuators transform the hydraulic energy stored in a reservoir into mechanical
energy by means of suitable pumps. Hydraulic actuators are also fluid-power devices for industrial
robots which utilize a high-pressure fluid, such as oil, to transmit forces to the desired point of
application.

The principle and working of a hydraulic actuation system are similar to those of a pneumatic
system, except that instead of air, a fluid such as water or oil supplies the inlet power. Although the
working principle remains the same, the structural design varies. These devices utilize pressurized
fluid to produce linear motion and force, or rotary motion and torque.

Hydraulic actuators are used in a variety of power-transfer applications. A hydraulic actuator is
operated by the pressure transmitted when a quantity of fluid, such as water or oil, is forced through a
comparatively small orifice or through a tube. Based on this principle, a number of hydraulic
actuating components have been designed.

They are: hydraulic accumulators, hydraulic cylinders, hydraulic flow controls, hydraulic
motors, hydraulic power units, hydraulic pumps, hydraulic pressure regulators, rodless hydraulic
cylinders and vacuum pressure regulators.

Types of Hydraulic Actuators


(i) Linear Hydraulic Actuator
 Single acting cylinder
 Double acting cylinder
 Double acting double rod cylinder

(ii) Hydraulic Rotary Actuators


 Gear motor
 Vane motor
 Piston motor

Hydraulic actuators, are mainly classified into three types:

Single Acting Spring Return Type: The single acting cylinder is pressurized on only one end. An
internal spring is compressed by pressure on the cap end, and a rod extends. A reduction of pressure
allows for the retraction of the rod by spring.

Double Acting Cylinder: In a double acting type, pressure can be applied to two parts, thereby
generating power and motion in two directions.

Ram Type: The ram type has a single fluid chamber and produces unidirectional force. Hydraulic
actuators can also be designed to provide rotary movement; these types of actuators provide torque.
There are many designs, but the rack-and-pinion type and the gear-motor type are the ones chiefly
employed in robot applications. They are specified by the angular rotation and torque involved. The
rack-and-pinion type uses a rack-and-pinion mechanism, whereas the gear motor uses a gearing
mechanism. Both types convert fluid-power energy into rotary motion of a shaft in order to achieve
mechanical functions such as turning, positioning, steering, opening and closing, swinging or any
other involving restricted rotation.
Advantages
 It has advantage of generating extremely large force from very compact actuators.
 It can also provide precise control at low speeds.
 Robust.
 Self-lubricating.
 Due to the presence of accumulator which act as a storage device, the system can meet sudden
demands in power.
 No mechanical linkage is required.
 High efficiency and high power to size ratio.
 They generally have a greater load carrying capacity than electric and pneumatic actuator.
 Hydraulic robots are more capable of withstanding shock loads.
Disadvantages
 The hydraulic system requires a large infrastructure: a high-pressure pump, a tank, and
distribution lines.
 Leakage can occur causing a loss in performance.
 High maintenance.
 Not suitable for clean environments.
 Servo control of hydraulic system is complex and is not as widely understood as electric servo
control.
 Noisy operation.
 Expensive.
 Not energy efficient.

4.4.2 Pneumatic Actuators


Pneumatic actuators utilize pneumatic energy provided by a compressor and transform it into
mechanical energy by means of pistons (or) turbines. Pressurized air is used to transmit and control
power. Pneumatic actuators are devices that cause things to move by taking advantage of the
potential energy of compressed air. In their conventional form, the actuators are basically
pneumo-mechanical devices and have been used to automate industrial tasks of an iterative nature.
The actuator has three components, namely the cylinder, the piston and the valve.

The cylinder is a hollow chamber into which external compressed air is allowed to enter, so
as to enable the piston to move. The air enters through a hole, usually called a port, and a valve,
which is considered part of the actuator, controls the rate of flow of air into the chamber. The valve
is the controlling element, and it is an electro-mechanical device.

In operation, the piston is rigidly attached to the load and can slide inside the cylinder. In view of
their motion, only two types of actuators, linear and angular, are manufactured. However, in a more
general sense, pneumatic actuators are classified according to mechanical design parameters such as
the number of ports and the way the piston moves. The linear actuator converts the potential energy
in the compressed air into mechanical energy in terms of linear motion. The actuator consists of a
piston and cylinder. The air enters the actuator and pushes the piston from one end of the cylinder to
the other.

Advantages of Pneumatic Actuators


 Control is simple.
 When a source of compressed air is readily available, as is often the case in engineering-related
facilities, pneumatic actuators may be a good choice.
 It is the cheapest form of all actuators.
 Pneumatic actuators have a very quick action and response time, thus allowing for fast work
cycles.
 The systems are usually compact.
 Individual components can be easily interconnected.
 No mechanical transmission is usually required.
 Compressed air can be stored and conveyed easily over long distances.
Disadvantages
 More noise and vibration.
 Since air is compressible, pneumatic cylinders are not typically used for applications requiring
accurate motion between two well-defined end points.
 Pneumatics is not suitable for heavy loads.
 If mechanical stops are used, resetting the system can be slow.

4.4.3 Electric Actuators


An actuator that converts electrical energy into mechanical energy is called an electric actuator.
Electric actuators are generally those in which an electric motor drives the robot links through some
mechanical transmission, i.e., gears.
In the early years of industrial robotics, hydraulic robots were the most common, but recent
improvements in electric motor design have meant that most new robots are of all-electric
construction. The first commercial electrically driven industrial robot was introduced in 1974 by
ABB. In an electric system, a servo power amplifier is also needed to provide a complete actuation
system. The MAKER 110 is an example of an electric-drive robot that is consistent with these
tendencies. Electric motors can also be used to actuate linear joints (e.g., telescoping arms) by means
of pulley systems or other translational mechanisms.
Electrical actuators comprise the following:
(i). Drive system:
 DC motor
 AC motor
 Stepper motor
(ii). Switching Device
(a) Mechanical switch:
 Solenoids
 Relays
(b) Solid state switch:
 Diodes
 Thyristor
 Transistors

Advantages
 Widespread availability of power supply.
 High power conversion efficiency.
 The basic drive element in an electric motor is usually lighter than that for fluid power.
 No pollution of working environment.
 Being relatively quiet and clean, they are very acceptable environmentally.
 They are easily maintained and repaired.
 Structural components can be light weight.
 The drive system is well suited to electronic control.
Disadvantages
 A larger and heavier motor must be used, which is costly.
 Poor dynamic response.
 Compliance and wear problems cause inaccuracies.
 Conventional gear drives create backlash.
 Electric motors are not intrinsically safe; they therefore cannot be used in explosive
atmospheres.
Application
 AC servomotors
 DC servomotors
 Stepper motors

4.5 UNDERSTANDING MICROCONTROLLER


A microcontroller is a computing device capable of executing a program (i.e., a sequence of
instructions) and is often referred to as the “brain” or “control center” in a robot since it is usually
responsible for all computations, decision making, and communications. In order to interact with the
outside world, a microcontroller possesses a series of pins (electrical signal connections) that can be
turned HIGH (1/ON), or LOW (0/OFF) through programming instructions. These pins can also be
used to read electrical signals (coming from sensors or other devices) and tell whether they are HIGH
or LOW.
Most modern microcontrollers can also measure analogue voltage signals (i.e., signals that
can have a full range of values instead of just two well defined states) through the use of an Analogue
to Digital Converter (ADC). By using the ADC, a microcontroller can assign a numerical value to an
analogue voltage that is neither HIGH nor LOW.

4.5.1 Specialized Features in a Microcontroller


Special hardware built into microcontrollers means these devices can do more than the
usual digital I/O, basic mathematics and decision making. Many microcontrollers readily support the
most popular communication protocols such as UART (a.k.a. serial or RS-232), SPI and I2C. This
feature is incredibly useful when communicating with other devices such as computers, advanced
sensors or other microcontrollers. Although it is possible to implement these protocols manually, it is
always nice to have dedicated hardware built in that takes care of the details.

It allows the microcontroller to focus on other tasks and allows for a cleaner program.
Analogue-to-digital converters (ADCs) are used to translate analogue voltage signals into a digital
number proportional to the magnitude of the voltage; this number can then be used in the
microcontroller program. In order to output an intermediate amount of power, different from HIGH
and LOW, some microcontrollers are able to use pulse-width modulation (PWM); for example, this
method makes it possible to smoothly dim an LED. Finally, some microcontrollers integrate a
voltage regulator on their development boards. This is rather convenient, since it allows the
microcontroller to be powered by a wide range of voltages without requiring the exact operating
voltage, and it also allows the board to readily power sensors and other accessories without an
external regulated power source. The two examples below illustrate when to use a digital or an
analogue pin on a microcontroller.

1. Digital: A digital signal is used in order to assess the binary state of a switch. A momentary
switch or push button closes a circuit when pressed, and allows current to flow (a pull-up resistor
is also shown). A digital pin connected (through a green wire in the picture) to this circuit would
return either LOW or 0 (meaning that the voltage at the pin is in the LOW range, 0V in this case)
or a HIGH (meaning the button is pressed and the voltage is at the HIGH range, 5V in this case).
2. Analogue: A variable resistor or potentiometer is used to provide an analogue electrical signal
proportional to a rotation (e.g., the volume knob on a stereo). When a potentiometer is connected
to a 5V supply and the shaft is turned, the output will vary between 0 and 5 V, proportionally to
the angle of rotation. The ADC on a microcontroller interprets the voltage and converts it to a
numeric value. For example, a 10-bit ADC converts 0V to the value “0”, 2.5 V to “512” and 5 V
to “1023”. Therefore, if you suspect the device you plan to connect will provide a value that is
proportional to something else (for example temperature, force, position), it will likely need an
analogue pin.

4.5.2 Programming Microcontroller


Programming a microcontroller is made simple by modern Integrated Development
Environments (IDEs), which use up-to-date languages and fully featured libraries that readily cover
the most common (and not so common) actions. The latest microcontrollers can be programmed in
various high-level languages including C, C++, C#, Processing (a variation of C++), Java, Python,
.NET and Basic. It is also possible to program them in assembler, but this privilege is reserved for
more advanced users with very special requirements (and a hint of masochism). In this sense, anyone
should be able to find a programming language that best suits their taste and previous programming
experience.

IDEs are becoming even simpler as manufacturers create graphical programming
environments. Sequences which used to require several lines of code are reduced to an image which
can be connected to other ‘images’ to form code. For example, one image might represent controlling
a motor, and the user need only place it where wanted and specify the direction and rpm. On the
hardware side, microcontroller development boards add convenience and become easier to use over
time. These boards usually break out all the useful pins of the microcontroller and make them easy to
access for quick circuit prototyping. They also provide convenient USB power and programming
interfaces that plug right into any modern computer. For those unfamiliar with the term, a
development board is a circuit board that provides a microcontroller chip with all the supporting
electronics (such as voltage regulators, oscillators, current-limiting resistors and USB plugs) required
to operate.

4.5.3 Future of the Microcontroller


It is apparent that a microcontroller is very similar to a PC CPU or microprocessor, and that a
development board is akin to a Computer motherboard. In more advanced robots, especially those
that involve complex computing and vision algorithms, the microcontroller is often replaced (or
supplemented) with a standard computer. A desktop computer includes a motherboard, a processor, a
main storage device (such as a hard drive), video processing (on-board or external), RAM, and of
course peripherals such as a monitor, keyboard and mouse. This type of system is usually more
expensive, physically larger and more power hungry.
As the price of computers has gone down, and advances in technology have made them smaller and
more energy efficient, single-board computers have emerged as an attractive option for robots.
These single-board computers (like the popular Raspberry Pi) are essentially the all-in-one computers
you may have used about five years ago, and incorporate many devices into one circuit board (so you
cannot swap anything out). They run a complete operating system (Windows and Linux are most
common) and can connect to external devices such as USB peripherals, monitor(s), cameras and
more.
Unlike their ancestors, these single-board computers tend to be much more power efficient
and are easily used in mobile robot applications. Although the price of single-board computers is
dropping to the point where they are almost on par with microcontroller boards, for practical
purposes the suggestion is to consider a single-board computer only for more advanced needs, such
as adding a camera system, full monitor output or running multiple services, and to start with a good
microcontroller for ease of use.

4.6 MOTOR CONTROLLER


A motor controller is an electronic device (usually a bare circuit board without an
enclosure) that acts as an intermediary between a microcontroller, a power supply or
batteries, and the motors. Although the microcontroller (the robot's brain) decides the speed and
direction of the motors, it cannot drive them directly because of its very limited power (current and
voltage) output.

4.6.1 Types of Motor Controller


There are several types of motor controllers which are discussed below:
 Brushed DC motor controllers: used with brushed DC, DC gear motors, and many linear
actuators.
 Brushless DC motor controllers: used with brushless DC motors.
 Servo Motor Controllers: used for hobby servo motors
 Stepper Motor Controllers: used with unipolar or bipolar stepper motors depending on their
kind

4.6.2 Choosing a Motor Controller
Motor controllers can only be chosen after the user has selected the motors/actuators. Also,
the current a motor draws is related to the torque it can provide: a small DC motor will not consume
much current, but cannot provide much torque, whereas a large motor can provide higher torque but
will require a higher current to do so.

DC Motor Control:
1. The first consideration is the motor's nominal voltage. DC motor controllers tend to offer a
voltage range. For example, if the motor operates at 3V nominal, the user should not select a motor
controller that can only control motors between 6V and 9V. This criterion helps to cross some motor
controllers off the list.

2. If a range of controllers that can power the motor at the appropriate voltage is found, the next
consideration is the continuous current the controller will need to supply. It is suggested to find a
motor controller that will provide a current equal to or above the motor's continuous current
consumption under load.

3. The control method is another important consideration. Control methods include analogue
voltage, I2C, PWM, R/C and UART (a.k.a. serial). The pins available on the microcontroller help
determine which motor controllers can be chosen: if the microcontroller has serial communication
pins, then a serial motor controller can be used; for PWM, the user will likely need one PWM
channel per motor.

4. The final consideration is a practical one: single vs. dual (double) motor controller. A dual DC
motor controller can control the speed and direction of two DC motors independently and often saves
money (and time). The motors do not need to be identical, though for a mobile robot the drive
motors should be identical in most cases. The dual motor controller is chosen based on the more
powerful DC motor. Note that dual motor controllers tend to have only one power input, so if the
user wants to control one motor at 6V and the other at 12V, it will not be possible. Note also that the
current rating provided is almost always per channel.

Servo Motor Control:


Since standard servo motors are meant to use specific voltages (for peak efficiency), most operate at
4.8V to 6V, and their current consumption is similar, the steps for selection are somewhat
simplified. However, some servo motors operate at 12V; it is important to do additional research on
a servo motor controller if the selected servo motor is not considered "standard". Also, most hobby
servo motors use the standard R/C servo input (three wires: ground, voltage and signal).
1. Choose the control method. Some servo motor controllers allow to control the servo's
position manually using a dial/switch/buttons, while others communicate using UART (serial)
commands or other means.
2. Determine the number of servos to be controlled. Servo controllers can control many servos
(usually 8, 16, 32, 64 and up).
3. As with DC motor controllers, the control method is an important consideration.

Stepper Motor Control:

1. Is the selected motor unipolar or bipolar? Choose a stepper motor controller type
accordingly, though a growing number are able to control both types. The number of leads is
usually a dead give-away of the motor type: if the motor has 4 leads, it is bipolar; should it
have 6 or more leads, it is unipolar.
2. Choose the motor controller voltage range to match your motor’s nominal voltage.
3. Find out how much current per coil the selected motor requires, and how much
current (per coil) the stepper motor controller can provide. If the current per coil is not
listed, most manufacturers list the coil resistance R; using Ohm's law (V = IR), the
current can be calculated as I = V/R.
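The Ohm's-law step above is easy to script. A minimal sketch (the function name and values are our own; no particular motor is assumed):

```python
def coil_current(nominal_voltage, coil_resistance):
    """Estimate the current drawn per coil using Ohm's law, I = V / R.

    nominal_voltage  -- rated coil voltage in volts
    coil_resistance  -- coil resistance in ohms (often listed as 'impedance')
    """
    if coil_resistance <= 0:
        raise ValueError("coil resistance must be positive")
    return nominal_voltage / coil_resistance

# Example: a hypothetical 12 V stepper with 30-ohm coils draws 0.4 A per coil,
# so the chosen driver should supply at least 0.4 A per channel (plus margin).
print(coil_current(12.0, 30.0))  # 0.4
```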

4.7 USING THE SERIAL/PARALLEL INTERFACE CONTROLLING OUR ROBOT


Serial communication is the process of sending data one bit at a time, sequentially, over a
communication channel or computer bus, whereas in parallel communication several
bits are sent as a whole over a link with several parallel channels. Serial communication is used for all
long-distance communication and most computer networks, where difficulties such as cost and
synchronization make parallel communication impractical. Serial communication most often refers to the
RS-232 protocol, in which 9-pin connectors are used for communication between two devices.

4.7.1 RS-232 Protocol


RS-232 is the name of a standard for serial binary single-ended data and control signals
connecting data terminal equipment and data circuit-terminating equipment. It is commonly
used in computer serial ports.
The standard defines the electrical characteristics and signal timing, as well as the size and pin-out of the
connectors. The protocol defines a maximum open-circuit voltage of +/- 25 volts. Valid signals are
in the range of +3 to +15 volts or -3 to -15 volts with respect to ground. The range
between -3 and +3 volts is not a valid RS-232 level. For the data transmission lines (TxD, RxD, etc.) logic
one is defined as a negative voltage and the condition is called mark. Logic zero is a positive voltage and the
signal condition is termed space. Control signals have the opposite polarity: the active state is a
positive voltage and the inactive state is a negative voltage.
The following is commonly used RS-232 signals and pin:

Pin 1 = DCD (Data Carrier Detect)


Pin 2 = RxD (Received Data)
Pin 3 = TxD (Transmitted Data)
Pin 4 = DTR (Data Terminal Ready)
Pin 5 = Ground
Pin 6 = DSR (Data Set Ready)
Pin 7 = RTS (Request to Send)
Pin 8 = CTS (Clear to Send)
Pin 9 = RI (Ring Indicator)
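The voltage thresholds above translate directly into code. A small sketch that classifies a data-line voltage (thresholds as described in the standard; the function name is our own):

```python
def rs232_data_level(voltage):
    """Classify an RS-232 data-line voltage (TxD/RxD).

    Per the levels described above: -15 V to -3 V is logic one ('mark'),
    +3 V to +15 V is logic zero ('space'); the -3 V to +3 V band is invalid.
    """
    if -15.0 <= voltage <= -3.0:
        return "mark"    # logic 1
    if 3.0 <= voltage <= 15.0:
        return "space"   # logic 0
    return "invalid"

print(rs232_data_level(-12.0))  # mark
print(rs232_data_level(5.0))    # space
print(rs232_data_level(1.0))    # invalid
```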

The serial ports have the following base addresses and IRQ numbers associated with them: COM1 3F8h, IRQ 4;
COM2 2F8h, IRQ 3; COM3 3E8h, IRQ 4; COM4 2E8h, IRQ 3. The most commonly used UART IC in personal
computers is the PC16550D. It has 8 registers which are used for controlling data transmission. They are:

Register 0: Receiver Buffer Register (Read Only) & Transmitter Register (Write Only).These
buffers are used for receiving and transmitting data between two devices.

Register 1: Interrupt Enable Register. This register is used to enable the interrupts. Setting bits 0-3
will enable the Receive Data Available, Transmitter Empty, Line Status and Modem Status
interrupts respectively.

Register 2: Interrupt Identification Register (R/O) & FIFO Control Register (W/O). IIR: if any
interrupt has occurred, reading this register gives the corresponding interrupt value. FCR: this
enables the FIFO mode of the UART, i.e., it will store up to 14 bytes of data before transmission.

Register 3: Line Control Register. This register controls the transmission pattern of the data. It is also
used to set the baud rate of the UART by setting bit 7 and writing the divisor value into the divisor latch buffers.

Register 4: Modem Control Register. Used to start the handshaking mechanism between host and
peripheral.

Register 5: Line Status Register. This register is used to get the line status and to check whether data is
available. It is a read-only register.

Register 6: Modem Status Register. This register shows the current state of data line when read.

Register 7: Scratch Register. Divisor Latch (LS) & (MS): these registers together form a 16-bit
register when bit 7 of the LCR register is set. They hold the divisor value that sets the baud rate of
the UART to the desired rate.
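To illustrate the divisor latch: assuming the usual 1.8432 MHz UART crystal (internally divided by 16, giving a 115200 Hz baud reference), the 16-bit divisor is simply 115200 divided by the desired baud rate, split into the LS and MS latch bytes:

```python
UART_REFERENCE = 115200  # 1.8432 MHz crystal / 16

def divisor_latch(baud):
    """Return (DLL, DLM): the low and high bytes written to the 16550
    divisor latch registers to obtain the requested baud rate."""
    divisor = UART_REFERENCE // baud
    return divisor & 0xFF, (divisor >> 8) & 0xFF

print(divisor_latch(9600))    # (12, 0)
print(divisor_latch(115200))  # (1, 0)
```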

Data Transmission
First the transmitter indicates to the receiver by setting its MCR bit 0 (DTR), which notifies the
receiver on MSR bit 5, as pin 4 of the transmitter is connected to pin 6 of the receiver. After receiving the
DSR signal, the receiver sets its MCR bit 1, which drives its RTS pin, connected to the transmitter's CTS pin, and
notifies the transmitter that it is ready to receive data. The transmitter then puts the data in the transmitter
buffer and the receiver receives the data in the receiver buffer.

4.7.2 Parallel Port


A parallel port is an interface for connecting an external device and is used to
transfer data in parallel: 8 bits of data are transmitted at a time. On PCs, the parallel
port uses a 25-pin connector (type DB-25) and is used to connect printers and other
devices that need relatively high bandwidth. It is often called a Centronics interface after the company
that designed the original standard for parallel communication between a computer and printer. (The
modern parallel interface is based on a design by Epson.) Newer types of parallel port, which
support the same connectors as the Centronics interface, are the EPP (Enhanced Parallel Port) and ECP
(Extended Capabilities Port). Both of these parallel ports support bi-directional communication and
transfer rates ten times as fast as the Centronics port. There are 5 modes of transferring data using a
parallel port:
i. Compatibility mode or Centronics mode
ii. Nibble mode
iii. Byte mode
iv. ECP mode
v. EPP mode
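In compatibility (Centronics) mode, writing a byte to the data register drives the eight data pins: D0 on pin 2 through D7 on pin 9 of the DB-25 connector. A small, purely illustrative sketch of that mapping:

```python
def data_pin_states(byte):
    """Map a data-register byte to the DB-25 data pins.

    Returns a dict {pin_number: level} where D0..D7 sit on pins 2..9.
    """
    if not 0 <= byte <= 0xFF:
        raise ValueError("data register holds a single byte")
    return {pin: (byte >> bit) & 1 for bit, pin in enumerate(range(2, 10))}

# 0x41 ('A') drives pins 2 and 8 high, all other data pins low.
print(data_pin_states(0x41))
```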

4.8 USING SENSORS


The sensors used in robotics mainly for interaction with the environment include a wide range
of devices which can be divided into the following general categories:
 Tactile sensors
 Proximity and range sensors
 Miscellaneous sensors and sensor-based systems
 Machine vision systems
4.8.1 Tactile Sensors
Tactile sensors are devices which indicate contact between themselves and some other solid
object. Tactile sensing devices can be divided into two classes: touch sensors and force sensors.
Touch sensors provide a binary output signal which indicates whether or not contact has been made
with the object. Force sensors (also sometimes called stress sensors) indicate not only that contact has
been made with the object but also the magnitude of the contact force between the two objects.
4.8.2 Touch Sensors
Touch sensors are used to indicate that contact has been made between two objects without
regard to the magnitude of the contacting force. Included within this category are simple devices such
as limit switches, microswitches, and the like. The simpler devices are frequently used in the design
of interlock systems in robotics; for example, they can be used to indicate the presence or absence of
parts in a fixture or at the pickup point along a conveyor. Another use for a touch-sensing device
would be as part of an inspection probe which is manipulated by the robot to measure dimensions on
a work part. A robot with six degrees of freedom would be capable of accessing surfaces on the part
that would be difficult for a three-axis coordinate measuring machine, the inspection system normally
considered for such an inspection task. Unfortunately, the robot's accuracy would be a limiting factor
in contact inspection work.
4.8.3 Force Sensors
The capacity to measure forces permits the robot to perform a number of tasks. These include
the capability to grasp parts of different sizes in material handling, machine loading, and assembly
work, applying the appropriate level of force for the given part. In assembly applications, force
sensing could be used to determine if the screws have become cross-threaded or if the parts are
jammed.
Force sensing in robotics can be accomplished in several ways. A commonly used technique
is a force-sensing wrist. This consists of a special load cell mounted between the gripper and the
wrist. Another technique is to measure the torque being exerted by each joint. This is usually
accomplished by sensing motor current for each of the joint motors. Finally, a third technique is to

form an array of force-sensing elements so that the shape and other information about the contact
surface can be determined.
Tactile array sensors: A tactile array sensor is a special type of force sensor composed of a matrix of
force-sensing elements. The force data provided by this type of device may be combined with pattern
recognition techniques to describe a number of characteristics about the impression contacting the
array sensor surface. Among these characteristics are (1) the presence of an object, (2) the object's contact
area, shape, location, and orientation, (3) the pressure and pressure distribution, and (4) the force
magnitude and location. Tactile array sensors can be mounted in the fingers of the robot gripper or
attached to a work table as a flat touch surface.

The device is typically composed of an array of conductive elastomer pads. As each pad is
squeezed its electrical resistance changes in response to the amount of deflection in the pad, which is
proportional to the applied force. By measuring the resistance of each pad, information about the
shape of the object against the array of sensing elements can be determined. In the background is the
CRT monitor display of the tactile impression made by the object placed on the surface of the sensor
device. As the number of pads in the array is increased the resolution of the displayed information
improves.
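The processing described above can be sketched in a few lines. Assuming, as stated, that a pad's resistance drops as it is squeezed (the exact resistance/force curve is device-specific), a binary contact image and the contact area can be derived like this:

```python
def contact_map(resistances, baseline, threshold=0.1):
    """Derive a binary contact image from a tactile array.

    resistances -- 2-D list of measured pad resistances (ohms)
    baseline    -- unloaded pad resistance (ohms)
    threshold   -- fractional drop in resistance that counts as contact
    """
    return [[1 if (baseline - r) / baseline > threshold else 0 for r in row]
            for row in resistances]

# Hypothetical 3x3 array: the middle row is being pressed by an object.
readings = [[1000, 980, 1000],
            [ 700, 650,  720],
            [1000, 990, 1000]]
image = contact_map(readings, baseline=1000)
area = sum(map(sum, image))  # number of pads in contact
print(image, area)
```

Increasing the number of pads in the array increases the resolution of this image, exactly as the text above notes.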

4.9 GETTING THE RIGHT TOOL

Mechanical Tools
 Small screwdriver set: These small screwdrivers are necessary when working with
electronics. They should not be forced too much as their size makes them more fragile.
 Regular screwdriver set: All workshops need a multi-tool or tool set which includes flat /
Phillips and other screwdriver heads.
 Needle nose pliers: A set of needle nose pliers is incredibly useful when working with small
components and parts and is a very inexpensive addition to the toolbox. These are different
from regular pliers because they come to a point which can get into small areas.
 Wire strippers/cutters: To cut any wires, a wire stripper will save considerable time and
effort. A wire stripper, when used properly, will only remove the cable insulation and will not
produce any kinks or damage the conductors. The other alternative to a wire stripper is a pair
of scissors, though the end result can be messy.
 Scissors, ruler, pen, marker pencil, hobby knife (or other handheld cutting tool): These are
essentials in any office.
 Tabletop CNC mill: A tabletop CNC machine allows you to precisely machine plastics,
metals and other materials and creates three dimensional, intricate shapes.
 Tabletop lathe: A (manual) tabletop lathe allows you to create your own hubs, shafts, spacers,
adapters and wheels out of various materials. A CNC lathe tends to be overkill since most
builders only need to change the diameter rather than create complex shapes.
 Vacuum Forming Machine: Vacuum forming machines are used to create complex plastic
shells that are molded to the exact specifications.
 Metal Benders: When making robotic frames or enclosures out of sheet metal or metal
extrusions, using a metal bender is essential in order to obtain precise and repeatable bends.

 Other Specialized tools: At this stage, you will be very aware of your machining needs and
will probably require more specialized tools such as metal nibblers, welding machines, 3D
printers, etc.
Electrical Tools
 Breadboard: This has nothing to do with slicing bread. These boards are used to easily create
prototype circuits without having to solder. This is ideal for those who have not yet
fully developed their soldering skills, or who want to quickly put together prototypes and test ideas
without having to solder a new circuit each time.
 Jumper wires: These wires fit perfectly from hole to hole on a solderless breadboard and not
only look pretty but also prevent clutter.
 Breadboard power supply: When experimenting with electronics it is very important to
have a reliable and easy to use power source. A breadboard power supply is the least
expensive power supply offering these features.
 Soldering tool kit: An inexpensive soldering iron kit has all the basic components needed to
help learning how to solder and make simple circuits.
 Multimeter: A multimeter is used to measure voltage, resistance and current, check continuity of
connections and more. If you know you will be building several robots and working with electronics,
it is wise to invest in a higher quality multimeter.
 Wall adapter: Standard voltages used in robotics include 3.3V, 5V, 6V, 9V, 12V, 18V and
24V. 6V is a good place to start since it is often the minimum voltage for DC gear motors and
microcontrollers and is also the maximum voltage for servo motors. A wall adapter can also
be a good replacement for batteries since batteries can be very expensive in the long run. A wall
adapter allows you to use the project without interruption, whereas even rechargeable batteries
need to be recharged.
 Adjustable temperature soldering station: A basic soldering iron can only take you so far. A
variable temperature soldering iron with interchangeable tips will allow you to be more precise
and decrease the risk of burning or melting components.
 Brass sponge for solder: In combination with the more traditional wet sponge used to wipe away
excess solder, a brass sponge can help clean the soldering iron tip without cooling it down,
allowing you to spring back into action quicker and solder like a ninja.
 Variable power supply (instead of wall adapter): Having a powerful and reliable power
source is very important when developing complex circuits and robots. A variable power
supply allows you to test various voltages and currents without the hassle of needing several types
of batteries and power adaptors.
 Oscilloscope: An oscilloscope is very useful when dealing with analogue circuits or periodic
signals.
 Logic Analyzer: A logic analyzer is like a "digital eye" when working with digital signals. It
allows you to see and store the data produced by a microcontroller and makes it simpler to debug
digital circuits.

Software
 Google SketchUp: A free program which can be used to create your robot in 3D, to the
proper scale, complete with textures. This can help to ensure that parts are not overlapping,
check dimensions for holes and change the design before it is built. Autodesk 123D is
another free 3D CAD (Computer Aided Design) program aimed at hobbyists.
 Programming software: The first programming software should correspond to whichever
microcontroller is selected. If you choose an Arduino microcontroller, use the
Arduino software; if you choose a Basic Stamp from Parallax, use its BASIC environment;
and so forth. In order to use a variety of microcontrollers, the user should learn a more
fundamental programming language such as BASIC or C.
 Schematics and PCBs: There are many free programs available on the market, and
CadSoft's EAGLE is one of the more popular. It includes an extensive library of parts and helps
you convert your schematic to a PCB.
 CAD: SolidWorks is the CAD program of choice for many when doing mechanical design,
but it is certainly not the only one available. When working at this level (i.e. using programs
worth several thousands of dollars) the user should have a good idea of their needs in order to
choose the right tool (Unigraphics, CATIA, Pro/E, etc.).
 CAM: If using a CNC machine, the user will need a proper 3D CAD program such as
Pro/E, AutoCAD, SolidWorks or another similar program.

Raw Materials
 Thin sheet metal: This material can be cut easily with scissors and can be bent and shaped as
needed to form the frame or other components of robot without necessarily having to do
machining.
 Cardboard: The right cardboard (thick but can still be cut using hand tools) can easily be
used to make a frame or prototype. Even basic glue can be used to hold cardboard together.
 Thin plastic: Polypropylene, PVC about 1/16” thick can be scored or sawed to create a more
rigid and longer lasting frame for robot.
 Thin wood: Wood is a great material to work with. It can be screwed, glued, sanded, finished
and more.
 Polymorph: Polymorph allows you to create plastic parts without the hassle of having to
create custom moulds.
 Sheet metal: Thicker sheet metal, cut with metal-cutting shears, makes an excellent building
material for a robot frame because of its durability, flexibility and resistance to rust.
 Plastic sheets: Plastic sheets are fairly rigid and resist deformation. If the user is cautious and
slow when cutting or drilling most plastics, the results can look professional.

4.10 ASSEMBLING A ROBOT


With all the basic building blocks available to make a robot, the next step is to design
and build a structure or frame which keeps them all together and gives the robot a distinct look and
shape. There are many materials you can use to create a frame. As you use more and more materials to
build not only robots but other devices, you will get a better feeling for which is most appropriate for a
given project. The list of suggested building materials includes basic construction material, flat structural
material, laser-cut / bent plastic or metal, 3D printing and polymorph.

4.10.1 Assembling the Robot Components

Connecting Motors to Motor Controllers:


A DC (gear) motor or DC linear actuator will likely have two wires: red and black. Connect
the red wire to the M+ terminal on the DC motor controller, and the black to M-. Reversing the wires
will only cause the motor to spin in the opposite direction. A servo motor has three wires: black
(GND), red (4.8 to 6V) and yellow (position signal). A servo motor controller has pins
matching these wires so the servo can be plugged directly into it.
Connecting Batteries to a Motor Controller or a Microcontroller:
Most motor controllers have two screw terminals for the battery leads, labelled B+ and B-. If the
battery came with a connector and the controller uses screw terminals, the user may be able to find a
mating connector with pigtails (wires) which can be connected to the screw terminals. If not, the user
may need to find another way to connect the battery to the motor controller while still being able to
unplug the battery and connect it to a charger. It is possible that not all the electromechanical
products chosen for the robot can operate at the same voltage, and thus several batteries or
voltage regulation circuits may be required. Below are the usual voltage levels involved in common hobby
robotics components:
 DC gear motors - 3V to 24V
 Standard Servo motors - 4.8V to 6V
 Specialty Servo motors - 7.4V to 12V
 Stepper motors - 6V to 12V
 Microcontrollers usually include voltage regulators - 3V to 12V
 Sensors - 3.3V, 5V and 12V
 DC motor controllers - 3V to 48V
 Standard batteries are 3.7V, 4.8V, 6V, 7.4V, 9V, 11.1V and 12V
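A quick compatibility check against these typical ranges can be scripted. The table below simply restates the list above; the values are typical, not guarantees for any specific product:

```python
VOLTAGE_RANGES = {  # typical operating ranges quoted above (volts)
    "dc_gear_motor":       (3.0, 24.0),
    "standard_servo":      (4.8, 6.0),
    "specialty_servo":     (7.4, 12.0),
    "stepper_motor":       (6.0, 12.0),
    "microcontroller":     (3.0, 12.0),
    "dc_motor_controller": (3.0, 48.0),
}

def battery_ok(component, battery_voltage):
    """Check whether a battery's nominal voltage falls inside a
    component's typical operating range."""
    low, high = VOLTAGE_RANGES[component]
    return low <= battery_voltage <= high

# A 7.4 V pack suits a DC gear motor but is too high for a standard servo.
print(battery_ok("dc_gear_motor", 7.4))   # True
print(battery_ok("standard_servo", 7.4))  # False
```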

Connecting Motor controllers to Microcontroller:


A microcontroller can communicate with motor controllers in a variety of ways:
 Serial: The controller has two pins labelled Rx (receive) and Tx (transmit). Connect the Rx
pin of the motor controller to the microcontroller’s Tx pin and vice versa.
 I2C: The motor controller will have four pins: SDA, SCL, V, GND. The microcontroller will
have the same four pins but not necessarily labelled, simply connect them one to one.
 PWM: The motor controller will have both a PWM input and a digital input for each motor.
Connect the PWM input pin of the motor controller to a PWM output pin on the
microcontroller, and connect each digital input pin of the motor controller to a digital output
pin on the microcontroller.
 R/C: To connect a microcontroller to an R/C motor controller, you need to connect the signal
pin to a digital pin on the microcontroller.
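Whichever interface is used, the microcontroller ultimately sends the motor controller structured commands. As a purely illustrative sketch, here is a frame builder for a hypothetical serial DC motor controller; real controllers each define their own frame format, so consult the chosen board's datasheet:

```python
def motor_command(motor_id, speed):
    """Build a 4-byte command frame for a hypothetical serial DC motor
    controller: [0xAA start byte, motor id, direction, magnitude 0-255].

    speed -- signed value in -255..255; negative reverses the motor.
    """
    if not -255 <= speed <= 255:
        raise ValueError("speed out of range")
    direction = 0 if speed >= 0 else 1
    return bytes([0xAA, motor_id, direction, abs(speed)])

# The resulting frame would be written out over the UART (Rx/Tx) link.
print(motor_command(1, -128).hex())  # aa010180
```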

Connecting Sensors to a Microcontroller:


Sensors can be interfaced with microcontrollers in a similar way to motor controllers. Sensors
can use the following types of communication:
 Digital: The sensor has a digital signal pin that connects directly to a digital microcontroller
pin. A simple switch can be regarded as a digital sensor.

 Analogue: Analogue sensors produce an analogue voltage signal that needs to be read by an
analogue pin. If the microcontroller does not have analogue pins, the user will need a separate
analogue-to-digital converter (ADC) circuit. Also, some sensors come with the required power supply
circuit and usually have three pins: V+, GND and Signal. If a sensor is a simple variable
resistor, for instance, it will require the user to create a voltage divider in order to read the
resulting variable voltage.
 Serial or I2C: the same communication principles explained for motor controllers apply here.
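For the variable-resistor case mentioned above, the voltage divider relation V_out = V_in * Rs / (Rf + Rs) can be inverted to recover the sensor resistance from the voltage the ADC actually measures. A small sketch (component values are illustrative):

```python
def divider_output(v_in, r_fixed, r_sensor):
    """Voltage at the divider midpoint: V_out = V_in * Rs / (Rf + Rs)."""
    return v_in * r_sensor / (r_fixed + r_sensor)

def sensor_resistance(v_in, r_fixed, v_out):
    """Invert the divider to recover the sensor resistance from an ADC
    reading of V_out (the microcontroller measures V_out, not Rs)."""
    return r_fixed * v_out / (v_in - v_out)

v = divider_output(5.0, 10_000, 10_000)    # equal resistors halve the input
print(v)                                   # 2.5
print(sensor_resistance(5.0, 10_000, v))   # 10000.0
```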
Communication device to microcontroller:
Most communication devices (e.g., XBee, Bluetooth) use serial communication, so the same
RX, TX, GND and V+ connections are required. It is important to note that although several serial
connections can share the same RX and TX pins, proper bus arbitration is required in order to
prevent cross-talk, errors and madness in general. For a small number of serial devices, it is often
simplest to use a separate serial port for each one of them.

4.11 PROGRAMMING A ROBOT


The key to the integration of industrial robots into existing manufacturing systems is the
availability of efficient software tools for the development of application and control software. Over
the past 17 years, robot programming methods have changed rapidly. Most robot applications
are carried out in an industrial environment where robots, simple positioning devices, sensors,
peripherals, and machine tools perform discrete repetitive operations.

 Manual lead-through, teach-in.


 Manual programming
 Tactile and optional sequence programming
 Master-slave programming
 Textual programming
 Pictorial programming
 Explicit programming.
 Implicit programming.
Several scientific programming languages were extended with movement instructions, sensor
control statements, and data types (frames, vector matrices, etc.).Various dedicated languages were
developed with robot-specific commands derived from other automation languages, including APT.

4.11.1 Methods of Robot Programming


Robot programming is accomplished in several ways. Programming methods for robots may
be classified into four categories:
 Manual Programming Method
 Walk through Programming Method
 Lead through Method (or) Teach Pendant
 Off-line Programming Method
Manual Method:
This method is not really programming in the conventional sense of the word. It is more like
setting up a machine rather than programming. It is the procedure used for the simpler robots and
involves setting mechanical stops, cams, switches, or relays in the robot's control unit. For these
low-technology robots used for short work cycles (e.g., pick-and-place operations), the manual
programming method is adequate.
Walkthrough Method:
In this method the programmer manually moves the robot's arm and hand through the motion
sequence of the work cycle. Each movement is recorded into memory for subsequent playback during
production. The speed with which the movements are performed can usually be controlled
independently, so that the programmer does not have to worry about the cycle time during the
walkthrough. The main concern is getting the position sequence correct. The walkthrough method
would be appropriate for spray painting and arc welding robots.
Lead through programming
Lead through Programming Method (or Teach Pendant) makes use of a teach pendant to
power-drive the robot through its motion sequence. The teach pendant is usually a small hand-held device
with switches and dials to control the robot's physical movements. Each motion is recorded into memory
for future playback during the work cycle. The lead through method is very popular among robot
programming methods because of its ease and convenience. Figure 4.4 shows the classification of
lead through programming methods.

Off-line Programming:
This method involves the preparation of the robot program off-line, in a manner similar to NC
part programming. Off-line robot programming is typically accomplished on a computer terminal.
After the program has been prepared, it is entered into the robot memory for use during the work
cycle.

4.12 ROBOT PROGRAMMING LANGUAGE


Currently, a large number of robot languages are available, although no standards for them
exist. The more common languages include AL, AML, RAIL, RPL and VAL.

AL: AL was developed at the robot research centre of Stanford University. AL stands for Assembly
Language.

 It supports robot-level programming and is interpreted on a real-time control machine, with
real-time programming language constructs such as synchronisation and concurrent execution.
 It supports world modelling.
 ALGOL-like data and control structures.

AML: This language was developed by IBM in 1982. AML stands for A
Manufacturing Language. AML is the control language for the IBM RS-1 robot.

The RS-1 robot is a Cartesian manipulator with 6 degrees of freedom.

 It supports data aggregation.
 It supports joint-space trajectory planning subject to position and velocity constraints.
 It supports LISP-like and APL-like constructs.
 It provides sensor monitoring that can interrupt program execution.

RAIL:
 RAIL was developed by Automatix Inc. in 1981.
 RAIL stands for Robotic Automatix Incorporated Language.
 Many constructs have been incorporated into RAIL to support inspection and arc welding
systems. Peripherals are a terminal and a teach box.
 RAIL is an interpreter, loosely based on Pascal.
Lead through teaching

Manual lead through:
• An operator physically grasps and moves the wrist end in the path of operation, which is
recorded in the memory.
• He uses a teach button while in teach mode.
• During run mode the wrist end repeats the taught motion.
• When the robot structure is complicated, a geometrically similar model is used to ease the
robot handling.
• More useful in continuous operations.

Applications:
• Spray painting
• Arc welding
• Continuous-path operations

Powered lead through:
• A control box with buttons, known as a teach pendant, is used to control the joint motors.
• The points of motion are recorded in the memory and converted to motion programs.
• Subsequently the program is utilized to play back the motion during the cycle of operation.
• More useful in applications of point-to-point movements.

Applications:
• Machine loading and unloading
• Spot welding
• Point-to-point movements

Fig: 4.4 Classification of lead through teaching

VAL:

VAL means Versatile Algorithmic Language. This is a programming language for industrial robots,
one of the most commonly used at this writing. It was developed by Unimation Inc. for the
PUMA series of robots, which are capable of doing a broad range of jobs, notably assembly, arc welding,
spray painting and material handling.
 VAL is a specific set of computer and robot commands that allows the operator to
enter and edit complex robot programs efficiently, and to edit programs written in VAL.
 It can handle files on external devices, monitor the status of the system, and calibrate the sensors.

4.13 VAL Programming

The VAL commands and their descriptions are listed in Table 4.2 below.

Table: 4.2 VAL commands


Definition / Command Statement / Explanation

1. Motion control:
APPRO P1, Z1 - Command to approach a point P1 in the z-direction, stopping a distance Z1
above the object.
MOVE P1 - Commands the arm to move from the present position to point P1.
MOVE P1 VIA P2 - Asks the robot to move to point P1 through point P2.
DMOVE (J1, X) - Moves the joint J1 by an increment of X (linear).
DMOVE (J1, J2, J3) (d, d, d) - Command to move joints J1, J2 and J3 by incremental angles
of d, d, d respectively.

2. Speed control:
SPEED V IPS - The speed of the end effector is to be V inches per second at the time of
program execution.
SPEED R - Command to operate the arm end effector at R percent of the normal speed at the
time of program execution.

3. Position control:
HERE P1 - Defines the name of the current point as P1.
DEFINE P1 = POINT (X, Y, Z, w, w, w) - Defines the point P1 with x, y, z co-ordinates and
w, w, w the wrist rotation angles.
Path control: DEFINE PATH1 = PATH (P1, P2, P3) - The path of the end effector is defined
by the connection between points P1, P2 and P3 in series.
MOVE PATH1 - Moves the end effector along PATH1.
Frame definition: DEFINE FRAME1 = FRAME (P1, P2, P3) - Assigns a variable name to
FRAME1: P1 is the origin, P2 a point along the x-axis and P3 a point in the xy plane.
MOVE ROUTE: FRAME1 - Defines the movement along the path with respect to FRAME1.

4. End effector operation:
OPEN - Opens the gripper fingers.
CLOSE 50 MM - Informs the gripper to close, keeping 50 mm width between the fingers.
CLOSE 5 LB - Applies a 5 lb gripping force.
CENTER - Closes the gripper slowly until contact is established with the object to be gripped.
OPERATE TOOL (SPEED N RPM) - Positions and operates the powered tool; here the end
effector is replaced by a servo-powered tool.

5. Operation of the sensors:
SIGNAL 4, ON - Actuates output port 4 and turns it on at a certain stage of the program.
SIGNAL 5, OFF - Turns output port 5 off.
WAIT 13, ON - Waits until the device on input line 13 gives a feedback signal indicating that
it is on.
REACT 16, SAFETY - A change of signal (if any) on input line 16 diverts control to the
subroutine SAFETY.

4.14 End Effector
In robotics, the term 'end effector' is used to describe the hand or tool that is attached to the
wrist. The end effector represents the special tooling that permits the general-purpose robot to
perform a particular application. This special tooling must usually be designed specifically for the
application. End effectors can be divided into two categories: grippers and tools. Grippers would be
utilized to grasp an object, usually the work part, and hold it during the robot work cycle.
4.14.1 End Effector Commands
Robots usually work with something in their work space. In the simplest case, it may be a part
that the robot will pick up, move, and drop off during execution of its work cycle. In more complex
cases, the robot will work with other pieces of equipment in the work cell.

Nearly all industrial robots can be instructed to send signals or wait for signals during
execution of the program. These signals are sometimes called interlocks. The most common form of
interlock signal is to actuate the robot's end effector. In the case of a gripper, the signal is to open or
close the gripper. Signals of this type are usually binary, that is, the signal is on-off or
high-level/low-level. Binary signals are not readily capable of including any complex information such as force
sensor measurements. The binary signals used for the robot gripper are typically implemented by
using one or more dedicated lines. Air pressure is commonly used to actuate the gripper. A binary
valve to actuate the gripper is controlled by means of two interlock signals, one to open the gripper
and the other to close it. In some cases, feedback signals can be used to verify that the actuation of
the gripper had occurred, and interlocks could be designed to provide this feedback data.

In addition to control of the gripper, robots are typically coordinated with other devices in the
cell as well. For example, consider a robot whose task is to unload a press: it is important to inhibit
the robot from having its gripper enter the press before the press is open, and, even more obviously, it is
important that the robot remove its hand from the press before the press closes. To accomplish this
coordination, we introduce two commands that can be used during the program.

The first command is SIGNAL M which instructs the robot controller to output a signal
through line M (where M is one of several output lines available to the controller).

The second command is WAIT N which indicates that the robot should wait at its current
location until it receives a signal on line N (where N is one of several input lines available to the
robot controller).
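The press-unloading handshake described above can be sketched as a small simulation. The following is an illustrative Python sketch, not actual robot-controller code: the SIGNAL and WAIT helpers, the line numbers 1 and 2, and the timings are all assumptions standing in for real interlock I/O.

```python
# Illustrative simulation of the SIGNAL/WAIT interlock handshake between a
# robot and a press. Line numbers 1 and 2 and all helpers are assumptions.
import threading
import time

signals = {}   # interlock line number -> threading.Event (stands in for I/O)

def line(n):
    """Return the Event for interlock line n, creating it on demand."""
    return signals.setdefault(n, threading.Event())

def SIGNAL(n):
    """Output a signal through line n."""
    line(n).set()

def WAIT(n, timeout=5.0):
    """Wait at the current location until a signal arrives on line n."""
    if not line(n).wait(timeout):
        raise TimeoutError(f"no signal on line {n}")

events = []

def press_cycle():
    """The press finishes its stroke, signals 'open', then waits for the
    robot to report that its hand is clear before closing."""
    time.sleep(0.05)          # press stroke in progress
    SIGNAL(1)                 # line 1: press is open
    WAIT(2)                   # line 2: robot hand is clear
    events.append("press closed")

press = threading.Thread(target=press_cycle)
press.start()

WAIT(1)                       # robot waits for "press open" before entering
events.append("robot unloaded press")
SIGNAL(2)                     # robot reports its hand is clear
press.join()
print(events)
```

The point of the interlock is the ordering it enforces: the robot cannot enter before the press signals open, and the press cannot close before the robot signals clear.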

In most robot languages, there are better ways of exercising control over the end effector
operation. The most elementary commands are

OPEN and CLOSE


VAL II distinguishes between differences in the timing of the gripper action. The two
commands OPEN and CLOSE cause the action to occur during execution of the next motion, while
the statements

OPENI and CLOSEI

cause the action to occur immediately, without waiting for the next motion to begin. This latter
case results in a small time delay, which can be defined by a parameter setting in VAL II.

The preceding statements accomplish the obvious actions for a non-servoed gripper. Greater
control over a servoed gripper operation can be achieved in several ways.

CLOSE 40 mm (1.575 in.)


For instance, this command, when applied to a gripper that has servo control over the
width of the finger opening, would close the gripper to an opening of 40 mm (1.575 in.). Similar
commands would control the opening of the gripper. Some grippers also have tactile and/or force
sensors built into the fingers. These permit the robot to sense the presence of the object and to apply a
measured force to the object during grasping. For example, a gripper servoed for force measurement
can be controlled to apply a certain force against the part being grasped.

CLOSE 3.0 LB
This illustrates the type of command that might be used to apply a 3-lb gripping
force against the part. Force control of the gripper can be substantially more refined than in the
preceding command.

CENTER
For a properly instrumented hand, the AL language statement CENTER provides a fairly
sophisticated level of control for tactile sensing. Invoking this command causes the gripper to slowly
close until contact is made with the object by one of the fingers. Then, while that finger continues to
maintain contact with the object, the robot arm shifts position while the opposite finger is gradually
closed until it also makes contact with the object. The CENTER statement allows the robot to centre
its arm around the object rather than causing the object to be moved by the gripper closure. This will
be useful in determining the position of an object whose location is only approximately known by the
robot.

For end effectors that are powered tools rather than grippers, the robot must be able to
position the tool and operate it. An OPERATE statement (based roughly on a command available in
the AL language) might be used to control the powered tool. The following sequence of commands
gives an example:

OPERATE TOOL (SPEED = 125 rpm)

OPERATE TOOL (TORQUE = 5 in.-lb)

OPERATE TOOL (TIME = 10 sec)

Assuming a powered rotational tool such as a powered screwdriver, all three statements
apply to the operation. However, the first two statements are mutually exclusive: either the tool can
be operated at 125 rpm or it can be operated with a torque of 5 in.-lb. The driver would be operated
at 125 rpm until the screw began to tighten, at which point the torque statement would take
precedence. The third statement indicates that after 10 sec the operation will terminate. The SIGNAL
command can be used both for turning an output signal on and for turning it off.

SIGNAL 3, ON and SIGNAL 3, OFF
The statements SIGNAL 3, ON and SIGNAL 3, OFF would allow the signal from output port 3 to be
turned on at one point in the program and turned off at another point. The signal in this illustration
is assumed to be binary. An analog output can also be controlled with the SIGNAL command; we
will reserve the input/output ports numbered greater than 100 for analog signals.

SIGNAL 105, 4.5


This would provide an output of 4.5 units (probably volts) within the allowable range of the
output signal. The on-off conditions can also be applied with the WAIT command. In the following
sequence, the robot provides power to some external device, and the WAIT command is used to verify
that the device has been turned on before permitting the program to continue. The robot then
turns off the device, and the device signals back that it has been turned off before the program
continues.
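Such a sequence might look as follows in the command style used above; the interlock line numbers 10 and 11 are chosen purely for illustration:

```
SIGNAL 10, ON     (turn on power to the external device)
WAIT 11, ON       (wait until the device confirms it is on)
  .
  .
SIGNAL 10, OFF    (turn off power to the device)
WAIT 11, OFF      (wait until the device confirms it is off)
```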

CHAPTER 5
IMPLEMENTATION AND ROBOT ECONOMICS

5.1 INTRODUCTION

Robotics is a sophisticated technology, and the successful implementation of robots in industry
is a formidable management problem as well as a technical problem. The purpose of this chapter is to
describe a logical approach that is proposed for introducing a robotics program into an
organization. The approach is described in terms of a logical sequence of
steps that a company would want to follow in order to implement a robotics program in its operations.
The steps in the approach are the following:

1. Initial familiarization with the technology

2. Plant survey to identify potential applications

3. Selection of the application

4. Selection of the robot

5. Detailed economic analysis and capital authorization

6. Planning and engineering the installation

7. Installation

5.2 MATERIAL HANDLING

Material handling is defined as the function and systems associated with the transportation,
storage, and physical control of work-in-process material in manufacturing. It can also be defined as
using the right method to safely provide the right amount of the right material at the right place, at the
right time, in the right sequence, in the right position, in the right condition, and at the right cost.

 A material handling system includes the movement, storage, and control of material, with
considerable emphasis placed on control.
 Types of material handling equipment: the equipment commonly used to move parts between
stations can be grouped into industrial trucks, cranes, hoists, conveyors, monorails,
automated guided vehicles, and robots.

Handling System Selection

A wide selection of automated handling and storage equipment is available for incorporation
in manufacturing systems.

Types of Transportation

1. Overhead: Power and free conveyor, individually powered monorail.


2. Below floor: AGVs
3. Above floor: a) Powered Roller(b) Powered slot(c) Platen Type Conveyor
4. Storage Interface: AS/R Machines

5.3 AUTOMATED GUIDED VEHICLE SYSTEMS (AGVS)

Automated guided vehicles (AGVs) are modern material-handling and conveying systems
that are well suited to FMS applications and automation. They help reduce manufacturing costs and
increase efficiency in a manufacturing system. AGVs can tow objects behind them
in small trailers, which they can autonomously hook up to. These trailers can be used to move raw
materials into the line to get them ready to be manufactured.

The AGV can also store objects on a bed: the objects can be placed on a set of motorized
treads and then pushed off by reversing the treads. Some AGVs use forklifts to lift objects for storage.
AGVs are also used to transport materials such as medicine in hospitals. An Automated Guided
Vehicle (AGV) is also known as a Laser Guided Vehicle (LGV) or Self-Guided Vehicle (SGV).

In Germany the technology is also called Fahrerloses Transportsystem (FTS), and in Sweden
förarlösa truckar. The first AGV was brought to market in the 1950s; at the time it was simply a tow
truck that followed a wire in the floor instead of a rail. Over the years the technology has become
more sophisticated, and today automated vehicles are mainly laser navigated, e.g., the LGV (Laser
Guided Vehicle). In an automated process, LGVs are programmed to communicate with other robots
to ensure product is moved smoothly through the warehouse, whether it is being stored for future use
or sent directly to shipping areas. Today, the LGV plays an important role in the design of new
factories and warehouses, safely moving goods to their rightful destinations.

(i) Navigation

AGVs in an FMS are used to transport an object from point A to point B. AGVs navigate
manufacturing areas with sensors. There are two main sensor types AGVs use for navigation: wired
and wireless.

(ii) Wired

The wired sensor is placed on the bottom of the vehicle, facing the ground. A slot is cut
in the ground and a wire is placed approximately one inch below the surface. The sensor detects the
radio frequency being transmitted from the wire and follows its path.

(iii) Guide Tape

Many light-duty AGVs (some known as AGCs, for Automated Guided Carts) use tape for the
guide path. The tape can be one of two styles: magnetic or coloured. The AGV is fitted with the
appropriate guide sensor to follow the path of the tape. One major advantage of tape over wired
guidance is that it can be easily removed and relocated if the course needs to change.

It also does not involve the expense of cutting the factory or warehouse floor for the entire
travel route. Additionally, it is considered a "passive" system, since it does not require the guide
medium to be energized as wire does. Coloured tape is initially less expensive, but lacks the advantage
of being embedded in high-traffic areas where the tape may become damaged or dirty.

(iv) Laser Target Navigation
The wireless navigation is done by mounting retro-reflective tape on walls, poles, or machines.
The AGV carries a laser transmitter and receiver on a rotating turret. The laser beam is sent out and
received again; the angle and (sometimes) the distance are automatically calculated and stored in the
AGV's memory. The AGV has a reflector map stored in memory and can correct its position based on
errors between the expected and received measurements. It can then navigate to a destination target
using the constantly updating position.

(v) Gyroscopic Navigation


Another form of AGV guidance is inertial navigation. With inertial guidance, a computer
control system directs and assigns tasks to the vehicles. Transponders are embedded in the floor of
the workplace, and the AGV uses the transponders to verify that the vehicle is on course. A gyroscope
is able to detect the slightest change in the direction of the vehicle and corrects it in order to keep the
AGV on its path. The margin of error for the inertial method is ±1 inch. Inertial guidance can operate
in nearly any environment, including extreme temperatures, and has a longer lifespan than other
guidance options.

(vi) Natural Features Navigation


Navigation without retrofitting of the workspace is called natural features navigation. One
method uses one or more range-finding sensors, such as a laser range-finder, as well as gyroscopes
and/or inertial measurement units, with Monte Carlo/Markov localization techniques to understand
where it is, and it dynamically plans the shortest permitted path to its goal.

The advantage of such systems is that they are highly flexible and can handle failure without
bringing down the entire manufacturing operation, since AGVs can plan paths around the failed
device.
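As a rough illustration of the Monte Carlo localization idea, the following Python sketch tracks a vehicle on a straight 100 m track using a single range reading to the far wall; the noise levels, particle count, and map details are all invented for the example and are not from any particular AGV.

```python
# Minimal 1-D Monte Carlo (particle filter) localization sketch. A vehicle
# on a 0-100 m track measures the range to the far wall at x = 100; all
# noise levels, counts, and map details are invented for illustration.
import math
import random

random.seed(0)

TRACK_LEN = 100.0
MOVE_NOISE = 0.5      # std dev of motion uncertainty per step (m)
SENSE_NOISE = 1.0     # std dev of range-finder noise (m)

def sense(x):
    """Noisy range reading from position x to the wall at the track end."""
    return (TRACK_LEN - x) + random.gauss(0, SENSE_NOISE)

def mcl_step(particles, move, reading):
    """One predict / weight / resample cycle of Monte Carlo localization."""
    # Predict: apply the commanded motion, with noise, to every particle.
    particles = [p + move + random.gauss(0, MOVE_NOISE) for p in particles]
    # Weight: score each particle by how well it explains the range reading.
    weights = [math.exp(-((reading - (TRACK_LEN - p)) ** 2)
                        / (2 * SENSE_NOISE ** 2)) for p in particles]
    # Resample particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))

true_x = 20.0
particles = [random.uniform(0, TRACK_LEN) for _ in range(500)]
for _ in range(10):               # ten commanded 2 m moves down the track
    true_x += 2.0
    particles = mcl_step(particles, 2.0, sense(true_x))

estimate = sum(particles) / len(particles)
print(round(estimate, 1), true_x)     # the estimate should settle near 40.0
```

After a few cycles the initially uniform particle cloud collapses around the true position, which is exactly the behaviour that lets a natural-features AGV localize without floor wires or tape.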

5.3.1 Steering Control


To help an AGV navigate, it can use one of two different steering control systems. Differential
speed control is the most common. In this method there are two sets of wheels being driven, and each set
is connected to a common drive train.

These drive trains are driven at different speeds in order to turn, or at the same speed to allow the
AGV to go forwards and/or backwards. The AGV turns in a similar fashion to a tank.

This method of steering is good in the sense that it is easy to maneuver in small spaces. More
often than not, this is seen on an AGV that is used to transport and turn in tight spaces or when the
AGV is working near machines. This setup for the wheels is not used in towing applications because
the AGV would cause the trailer to jackknife when it turned.
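The tank-style turning described above follows from standard differential-drive kinematics, sketched below in Python; the wheel-base value and function names are illustrative, not taken from any particular AGV.

```python
# Differential-speed steering sketch: two drive trains at different speeds
# turn the AGV, equal speeds drive it straight (tank-style turning). The
# wheel-base figure and names are illustrative.

WHEEL_BASE = 0.8   # assumed distance between the two drive trains (m)

def body_velocity(v_left, v_right, wheel_base=WHEEL_BASE):
    """Standard differential-drive kinematics:
         v     = (v_right + v_left) / 2        (forward speed, m/s)
         omega = (v_right - v_left) / base     (turn rate, rad/s)
    """
    return (v_right + v_left) / 2.0, (v_right - v_left) / wheel_base

straight = body_velocity(1.0, 1.0)    # equal speeds -> no rotation
spin = body_velocity(-0.5, 0.5)       # opposite speeds -> turn in place
print(straight, spin)
```

The turn-in-place case (zero forward speed with nonzero turn rate) is what makes this steering easy to maneuver in small spaces, and also why it is avoided in towing applications.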

The other type of steering used is steered-wheel control. This type of steering is similar
to a car's steering, and it is more precise in following the programmed wire path than the
differential-speed-controlled method.

5.3.2 Path Decision


AGVs have to make decisions on path selection. This is done through different methods:

1) Frequency select mode (wired navigation only)

2) Path select mode (wireless navigation only)

3) Magnetic tape on the floor, which is used not only to guide the AGV but also to issue
steering commands and speed commands.

5.3.3 Frequency Select Mode

Frequency select mode bases its decision on the frequencies being emitted from the floor.
When an AGV approaches a point where the wire splits, it detects the two frequencies and,
through a table stored in its memory, decides on the best path.

The different frequencies are required only at the decision point. The frequencies can change
back to one set signal after this point. This method is not easily expandable, and the extra
guide-path cutting it requires adds cost.
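The table lookup at a decision point can be sketched as follows; the frequencies, destinations, and table layout are hypothetical.

```python
# Frequency-select sketch: each branch after a split carries its own guide
# frequency, and a table in the AGV's memory maps each destination to the
# frequency it should follow. Frequencies and names are hypothetical.

ROUTE_FREQUENCY = {        # destination -> guide frequency to follow (kHz)
    "press_cell": 5.2,
    "warehouse": 7.8,
}

def choose_branch(destination, branch_frequencies):
    """Pick the branch whose detected frequency matches the stored route."""
    wanted = ROUTE_FREQUENCY[destination]
    for branch, freq in branch_frequencies.items():
        if freq == wanted:
            return branch
    raise ValueError("no branch carries the required frequency")

# At the split, the AGV detects both branch frequencies at once.
detected = {"left": 5.2, "right": 7.8}
branch = choose_branch("warehouse", detected)
print(branch)
```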

5.3.4 Path Select Mode

An AGV using the path select mode chooses a path based on pre-programmed paths. It uses
the measurements taken from the sensors and compares them to values given to them by
programmers. When an AGV approaches a decision point it only has to decide whether to follow
path 1, 2, 3, etc.

This decision is rather simple since it already knows its path from its programming. This
method can increase the cost of an AGV, because it is required to have a team of programmers to
program the AGV with the correct paths and change the paths when necessary. This method is easy
to change and set up.
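One plausible way to realize this comparison of sensor measurements against programmer-supplied values is a nearest-match lookup; the path signatures and readings below are invented for illustration.

```python
# Path-select sketch: sensor measurements are compared with values supplied
# by the programmers, and the closest pre-programmed path wins. All
# signatures and readings here are invented for illustration.

PROGRAMMED_PATHS = {       # path name -> expected readings at this point
    "path_1": [0.9, 0.1],
    "path_2": [0.5, 0.5],
    "path_3": [0.1, 0.9],
}

def select_path(measured):
    """Return the pre-programmed path whose signature best matches."""
    def sq_error(expected):
        return sum((m - e) ** 2 for m, e in zip(measured, expected))
    return min(PROGRAMMED_PATHS,
               key=lambda name: sq_error(PROGRAMMED_PATHS[name]))

chosen = select_path([0.85, 0.2])
print(chosen)
```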

5.3.5 Magnetic Tape Mode

The magnetic tape is laid on the surface of the floor or buried in a 10 mm channel. Not only
does it provide the path for the AGV to follow, but short strips of tape laid in different combinations
tell the AGV to change lanes, speed up, slow down, and stop, using north and south magnetic polarity
combinations. Industries such as TOYOTA USA and TOYOTA JAPAN, for example, make use
of it.

5.3.6 Traffic Control

Flexible manufacturing systems containing more than one AGV may require traffic
control so the AGVs will not run into one another. Methods include zone control, forward sensing
control, and combination control; each method has its advantages and disadvantages.

5.3.7 Forward Sensing Control

Forward sensing control uses collision-avoidance sensors to avoid collisions with other AGVs
in the area. These sensors include:

 Sonic, which works like radar.
 Optical, which uses an infrared sensor.
 Bumper, a physical contact sensor.
Most AGVs are equipped with a bumper sensor of some sort as a fail-safe. Sonic sensors send
a "chirp" or high-frequency signal out and then wait for a reply; from the outline of the reply, the AGV
can determine whether an object is ahead of it and take the necessary actions to avoid collision. The
optical sensor uses an infrared transmitter/receiver and sends an infrared signal which then gets
reflected back, working on a similar concept as the sonic sensor. The problem with these sensors is
that they can only protect the AGV from so many sides. They are also relatively hard to install and
work with.

5.3.8 Combination Control

Combination control uses collision-avoidance sensors as well as zone control sensors. The
combination of the two helps to prevent collisions in any situation. In normal operation, zone control
is used with collision avoidance as a fail-safe.

For example, if the zone control system is down, the collision-avoidance system would
prevent the AGV from colliding.

5.3.9 System Management

Industries with AGVs need to have some sort of control over them. There are three main
ways to control an AGV:

(1) Locator panel

(2) CRT color graphics display

(3) Central logging and report

Locator Panel

A locator panel is a simple panel used to see which area the AGV is in. If the AGV is in one
area for too long, it could mean it is stuck or broken down.

CRT Color Graphics Display

A CRT color graphics display shows in real time where each vehicle is. It also gives the status of
each AGV, its battery voltage, and its unique identifier, and can show blocked spots.

Central Logging and Report

Central logging is used to keep track of the history of all the AGVs in the system. It
stores all the data and history from these vehicles, which can be printed out for technical
support or logged to check uptime.

5.3.10 Types of AGVs

There are six basic types of AGVs:

i. Towing
ii. Pallet
iii. Light load
iv. Unit load
v. Fork truck
vi. Assembly line

5.4 BATTERY CHARGING

AGVs utilize a number of battery charging options. Each option is dependent on the user's
preference. The most commonly used battery charging technologies are

i. Battery Swap
ii. Automatic/Opportunity Charging
iii. Automatic Battery Swap.
Battery Swap

"Battery swap" technology requires an operator to manually remove the discharged battery
from the AGV and place a fully charged battery in its place after approximately 8-12 hours (about one
shift) of AGV operation. About 5-10 minutes is required to perform this swap with each AGV in the fleet.

Automatic/Opportunity Charging


Automatic and opportunity battery charging allow for continuous operation.

 On average, an AGV charges for 12 minutes every hour under automatic charging, and no
manual intervention is required.
 If opportunity charging is being utilized, the AGV will receive a charge whenever the
opportunity arises.
 When a battery pack gets to a predetermined level, the AGV will finish the current job
that it has been assigned before it goes to the charging station.
Automatic Battery Swap

 Automatic battery swap is an alternative to manual battery swap. It requires an
additional piece of automation machinery, an automatic battery changer, in the overall
AGV system.
 AGVs pull up to the battery swap station and have their batteries automatically
replaced with fully charged batteries.
 The automatic battery changer then places the removed batteries into a charging slot
for automatic recharging.
 The battery changer keeps track of the batteries in the system and pulls them
only when they are fully charged.

5.5 COMPONENTS OF AN AGV

The essential components of an AGV are:

i. Mechanical structure
ii. Driving and steering mechanism actuators
iii. Servo controllers
iv. On board computing facility
v. Servo amplifiers
vi. Feedback components
vii. On board power system

5.6 APPLICATIONS OF AGVs

The applications of AGVs fall into the following categories.

i. Driverless train operations: for movement of large quantities of materials over
relatively large distances.
ii. Storage/distribution systems: unit-load carriers and pallet trucks are used in these
applications by interfacing with an AS/RS in a distribution system. This can also be
applied in light manufacturing and assembly operations.
iii. Assembly line operations: between workstations, components are kitted and placed
on the vehicle for the assembly operations that are to be performed on the partially
completed product at the next station.
iv. Flexible manufacturing systems: the AGVs are used as the materials handling system
in the FMS. The vehicles deliver work from the staging area to the individual
workstations in the system and between stations in the manufacturing system.
v. Miscellaneous applications: such as mail delivery in office buildings and hospital
material handling operations.

Advantages of AGVs

The important advantages of AGVS are:

i. AGVs represent a flexible approach to materials handling, as they can be computer
controlled.
ii. They decrease labor costs by decreasing the amount of human involvement in
materials handling.
iii. They can operate in hazardous environments.
iv. They are compatible with production and storage equipment.
v. They can handle and transport hazardous materials safely.
vi. Reduction in downtime of machines due to timely availability of materials.
vii. Improvement in productivity and profit.
viii. Continuous work without interruptions.

5.7 PERFORMANCE MEASURES OF MATERIAL HANDLING

There are several performance measures used to analyze the requirements of material-handling
systems.

(i) Transport work: A system's transport work capability is measured as the number of moves
required multiplied by the length of each move per unit time. Typical units are move-feet per hour or
move-metres per hour. Each vehicle is capable of a certain transport work quantity.
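A quick worked example of transport work and rough fleet sizing, with all figures hypothetical:

```python
# Worked transport-work example: moves per hour times move length gives the
# required transport work; dividing by per-vehicle capability gives a rough
# fleet size. All figures are hypothetical.
import math

moves_per_hour = 60
move_length_m = 50
required_work = moves_per_hour * move_length_m       # move-metres per hour

vehicle_capability = 900     # move-metres per hour one vehicle can deliver
fleet_size = math.ceil(required_work / vehicle_capability)

print(required_work, fleet_size)   # 3000 move-metres/h needs 4 vehicles
```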

Average delivery time: Given the transport work requirements, one can model or estimate the
average delivery time for a load. Delivery time is usually measured from the time the move is
requested by the MCS to the time the transport equipment signals successful completion of the move.

(ii) Delivery distribution: Equally important to estimating the delivery time is the
distribution of delivery times. In the case of an AGV carrying multiple loads, the delivery of the first
load is a function of the speed of the vehicle and the distance it must travel. The last load however,
takes considerably longer to deliver.

Thus we would have a fairly broad and flat distribution profile. In the case of multiple unit-load
vehicles, the distribution would be narrower. Delivery distribution can have an impact on the
processing of the wafers.

Once the process is developed, consistent delivery times between steps should be maintained
to permit management of oxidation and molecular contamination. The value of each 300-mm wafer
makes this especially important.

(iii) Machine interference: When servicing multiple tools with a single loader or robot, a
number of tools will probably require servicing at the same time.

The percentage of time that tools are waiting to be serviced by the limited resource, relative
to the process time, is called machine interference. If only one vehicle is used to service multiple
machines, so that the machines are often waiting for delivery, both the OEE of the tool and factory
productivity are adversely affected.
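The machine-interference measure is a simple ratio, as this sketch with hypothetical figures shows:

```python
# Machine-interference sketch: the fraction of time tools spend waiting for
# the shared robot or loader, relative to process time. Figures are
# hypothetical.

process_time_min = 200.0     # total tool processing time observed
waiting_time_min = 30.0      # time tools spent waiting for service

interference_pct = 100.0 * waiting_time_min / process_time_min
print(interference_pct)      # -> 15.0
```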

(iv) Relative performance capabilities: This section compares the relative performance
capabilities, in terms of the above-outlined measures, for RGVs, AGVs, and monorails.

5.8 VARIOUS STEPS OF IMPLEMENTING A ROBOT

There are five steps to follow to ensure success when implementing a robotic system.
Skipping these steps often results in projects that are cut short, drag on too long, or never get off the
ground. A robotic system is a big investment, so it deserves special consideration and planning time.

Step 1: Get company-wide support

There are many individuals and departments that would be impacted by a robotic solution.
Before doing anything, there needs to be education and discussion among several parties, including
senior management, plant managers, senior engineering, manufacturing engineering, maintenance,
IT, safety managers, shop floor staff, and HR. Anyone impacted by the purchase, installation,
operation, and maintenance of the robotic system needs to be a part of the discussion from the very
beginning. It’s absolutely critical that everyone understand the basic facts about robotic
automation—that it has a short return on investment (ROI), can open opportunities for the company,
and does not replace shop workers. The robots can take over the monotonous and dangerous
activities, enabling staff to have more fulfilling roles that involve quality control and operating the
robot.

Step 2: Get consensus on the definition of success

To get agreement from multiple parties and manage expectations, it’s important to have
agreement on what criteria make the project successful. The most important measurement is often the

ROI, and on average, companies consider two years to be an appropriate payback period. When
calculating ROI, there are many factors beyond comparing the hourly labour costs of a human
operator versus the capital cost of a robot.

a. Increased production: A robot’s ability to work consistently all day every day allows it the
potential to increase production on a line. If a company is in a situation that it can sell everything it
makes, and if that company has been limited to what the human operator can produce, then a robot
automatically will produce more saleable product and increase profit. More units sold can add up
quickly and dramatically reduce payback time. The ability to increase production may give a
company the opportunity to seek new customers and get even higher demand for products, leading to
further profit that would justify the purchase of a robot.

b. Reduced cost per part: On average, materials account for 75 percent of a product’s cost, with
labour being the other 25 percent. A robot can reduce materials costs by eliminating scrap and the
labour costs to rework those mistakes. The speed and consistency of a robot moves parts through the
process much faster over an entire shift, which reduces the cost to make each part.

c. Reduced risk of personal injury: Safety is a huge issue in manufacturing plants. Even a
relatively minor injury can cost thousands of dollars, and a major injury can cost hundreds of
thousands of dollars. When a robot handles the most dangerous tasks, it can reduce the risk of human
injury and related costs.
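Combining these savings factors into a payback-period estimate might look like the following sketch; every figure in it is hypothetical.

```python
# Payback-period sketch combining the savings factors above; every figure
# here is hypothetical.

robot_investment = 120_000.0         # robot, accessories, and installation

labour_savings = 45_000.0            # per year
scrap_and_rework_savings = 10_000.0  # per year, reduced cost per part
extra_production_profit = 25_000.0   # per year, added saleable output

annual_savings = (labour_savings + scrap_and_rework_savings
                  + extra_production_profit)
payback_years = robot_investment / annual_savings
print(payback_years)                 # -> 1.5
```

With these invented numbers the payback period lands under the two-year benchmark mentioned above, which is the kind of comparison the consensus discussion is meant to support.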

Step 3: Get consensus on the definition of failure

To calculate the viability of installing a robot, it’s necessary to have all parties agree on an
acceptable tolerance for failure. Often, the potential for negative impact on product quality can
prevent a company from changing over to robots. If there are nuances to the job that are best suited to
a person who can make split-second judgement or manipulate components, a robot may not produce
the same part quality. However, in those cases, the process likely is not consistent enough to be
automated.

When a process is not running efficiently and smoothly without some operator interference to
correct discrepancies, those problems rarely are solved by automating the process. If an operator
makes adjustments to the process parameters using gut-feel and tribal knowledge, these issues must
be resolved before automation is introduced.

Step 4: Prepare the budget

It’s essential to figure out potential savings before figuring out what is affordable to invest.
Determining a budget requires proper evaluation of costs and sales potential.

a. Reduced costs: It may seem straightforward to calculate the cost of a human operator with
just the hourly wage. However, there are many hidden costs to having human operators that are often
overlooked. In fact, those related costs are often higher than the wage itself.

b. Higher profit potential: If a robot can increase production that is saleable right away, the new
volume potential needs to be factored into ROI and considered in the budgeting process.

c. Added costs: On the front end, a robot may require the purchase of some accessories and other
related equipment. Those are up-front capital investments that will be recovered many times over in
the long term.

Step 5: Gather information

Having all the right information ready will make robotic installation faster and easier. A
robotic integrator will need this data to make a proper recommendation and cost estimate. The data to
collect ahead of time includes: 3-D part models; 2-D part prints with tolerances, material specs and
notes; work definition; machine and fixture descriptions, machine manuals, models and drawings;
and pictures and videos that really tell the story. Additional non-technical information can be just as
useful in streamlining the process of robotic implementation. This other information includes TAKT
time, process cycle times, and annual volumes.

Once all success and failure criteria are determined and data is collected, it’s time to bring in
a robotic system integrator. A good integrator will ask a lot of questions and dig deep into the
process before making any recommendations. An experienced integrator that has seen a lot of
automation applications will be best at providing consultation with real-life success examples.
Choose an integrator that wants to build a relationship instead of just doing one-off projects, because
that background knowledge will make future projects easier.

5.9 SAFETY CONSIDERATION FOR ROBOT OPERATIONS

For the planning stage, installation, and subsequent operation of a robot or robot system, one
should consider the following.
 Risk Assessment. At each stage of development of the robot and robot system a risk
assessment should be performed. There are different system and personnel safeguarding
requirements at each stage. The appropriate level of safeguarding determined by the risk
assessment should be applied. In addition, the risk assessments for each stage of development
should be documented for future reference.

 Safeguarding Devices. Personnel should be safeguarded from hazards associated with the
restricted envelope (space) through the use of one or more safeguarding devices: Mechanical
limiting devices, Non-Mechanical limiting devices, Presence-sensing safeguarding devices,
fixed barriers (which prevent contact with moving parts) and Interlocked barrier guards.

 Awareness Devices. Typical awareness devices include chain or rope barriers with
supporting stanchions or flashing lights, signs, whistles, and horns. They are usually used in
conjunction with other safeguarding devices.

 Safeguarding the Teacher. Special consideration must be given to the teacher or person who
is programming the robot. During the teach mode of operation, the person performing the
teaching has control of the robot and associated equipment and should be familiar with the
operations to be programmed, system interfacing, and control functions of the robot and other
equipment. When systems are large and complex, it can be easy to activate improper
functions or to sequence functions improperly. Since the person doing the teaching can be within
the robot's restricted envelope, such mistakes can result in accidents or in
unintended movement or actions with similar results. Several other safeguards are suggested
in the ANSI/RIA R15.06-1992 standard to reduce the hazards associated with teaching a
robotic system.

 Operator Safeguards. The system operator should be protected from all hazards during
operations performed by the robot. When the robot is operating automatically, all
safeguarding devices should be activated, and at no time should any part of the operator's
body be within the robot's safeguarded area.
 Attended Continuous Operation. When a person is permitted to be in or near the robot's
restricted envelope to evaluate or check the robot's motion or other operations, all continuous-
operation safeguards must be in force. During this operation, the robot should be at slow
speed, and the operator should have the robot in the teach mode and be fully in control of all
operations.

 Maintenance and Repair Personnel. Safeguarding maintenance and repair personnel is very
difficult because their job functions are so varied. Troubleshooting faults or problems with
the robot, controller, tooling, or other associated equipment is just part of their job. Program
touchup is another of their jobs as is scheduled maintenance, and adjustments of tooling,
gages, recalibration, and many other types of functions. While maintenance and repair are
being performed, the robot should be placed in the manual or teach mode, and the
maintenance personnel perform their work within the safeguarded area and within the robot's
restricted envelope. Additional hazards are present during this mode of operation because the
robot system safeguards are not operative.

 Maintenance. Maintenance should occur during the regular and periodic inspection program
for a robot or robot system. An inspection program should include, but not be limited to, the
recommendations of the robot manufacturer and manufacturer of other associated robot
system equipment such as conveyor mechanisms, parts feeders, tooling, gages, sensors, and
the like. These recommended inspection and maintenance programs are essential for
minimizing the hazards from component malfunction, breakage, and unpredicted movements
or actions by the robot or other system equipment. To ensure proper maintenance, it is
recommended that periodic maintenance and inspections be documented along with the
identity of personnel performing these tasks.

 Safety Training. Personnel who program, operate, maintain, or repair robots or robot
systems should receive adequate safety training, and they should be able to demonstrate their
competence to perform their jobs safely.

5.10 ECONOMIC ANALYSIS OF ROBOTS

Basic Data Required: To perform the economic analysis of a proposed robot project, certain
basic information is needed about the project. This information includes the type of project being
considered, the cost of the robot installation, the productive cycle time, and the savings and benefits
resulting from the project.

5.10.1 Type of Robot Installation
There are two basic categories of robot installations that are commonly encountered. The first
involves a new application. This is where there is no existing facility. Instead, there is a need for a
new facility, and a robot installation represents one of the possible approaches that might be used to
satisfy that need. In this case, the various alternatives are compared and the best alternative is
selected, assuming it meets the company's investment criteria. The second situation is the robot
installation to replace a current method of operation. The present method typically involves a
production operation that is performed manually, and the robot would be used somehow to substitute
for the human labor. In this situation, the economic justification of the robot installation often
depends on how inefficient and costly the manual method is, rather than the absolute merits of the
robot method. In either of these situations, certain basic cost information is needed in order to
perform the economic analysis. The following subsection discusses the kinds of cost and operating
data that are used to analyze the alternative investment projects.

5.10.2 Cost Data Required for the Analysis


The cost data required to perform the economic analysis of a robot project divide into two
types: Investment costs and Operating costs. The investment costs include the purchase cost of the
robot and the engineering costs associated with its installation in the workcell. In many robot
applications projects, the engineering costs can equal or exceed the purchase cost of the robot. The
points below list the investment costs typically encountered in robot projects.

A. Investment Costs

 Robot purchase cost: The basic price of the robot equipped from the manufacturer with the
proper options (excluding end effector) to perform the application.
 Engineering costs: The costs of planning and design by the user company's engineering staff
to install the robot.
 Installation costs: This includes the labor and materials needed to prepare the installation site
(provision for utilities, floor preparation, etc.)
 Special tooling: This includes the cost of the end effector, parts positioners, and other fixtures
and tools required to operate the workcell.
 Miscellaneous costs: This covers the additional investment costs not included in any of the above
categories (e.g., other equipment needed for the cell).

B. Operating Costs and Savings

 Direct labor cost: The direct labor cost associated with the operation of the robot cell. Fringe
benefits are usually included in the calculation of direct labor rate, but other overhead costs
are excluded.
 Indirect labor cost: The indirect labor costs that can be directly allocated to the operation of
the robot cell. These costs include supervision, setup, programming, and other personnel costs
not included under direct labor above.
 Maintenance: This covers the anticipated costs of maintenance and repair for the robot cell.
These costs are included under this separate heading rather than under indirect labor because the
maintenance costs involve not only indirect labor (the maintenance crew) but also materials
(replacement parts) and service calls by the robot manufacturer. A reasonable “rule of thumb"

in the absence of better data is that the annual maintenance cost for the robot will be
approximately 10 percent of the purchase price.
 Utilities: This includes the cost of utilities to operate the robot cell (e.g., electricity, air
pressure, gas). These are usually minor costs compared to the above items.
 Training: Training might be considered to be an investment cost because much of the
training required for the installation will occur as a first cost of the installation. However,
training should be a continuing activity, and so it is included as an operating cost.
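The investment and operating categories above feed a simple payback calculation: total investment divided by net annual savings. The sketch below uses assumed figures throughout (none are from the text) and applies the 10 percent maintenance rule of thumb:

```python
# Hypothetical payback-period sketch for a robot installation.
# All cost figures are illustrative assumptions, not from the text.

def payback_period(investment_costs, annual_labor_savings, annual_operating_costs):
    """Years to recover the total investment from net annual savings."""
    total_investment = sum(investment_costs.values())
    net_annual_savings = annual_labor_savings - sum(annual_operating_costs.values())
    if net_annual_savings <= 0:
        raise ValueError("Project never pays back: operating costs exceed savings")
    return total_investment / net_annual_savings

robot_price = 80_000
investment = {
    "robot purchase": robot_price,
    "engineering": 30_000,
    "installation": 10_000,
    "special tooling": 15_000,
    "miscellaneous": 5_000,
}
operating = {
    "indirect labor": 8_000,
    "maintenance": 0.10 * robot_price,   # 10% of purchase price: rule of thumb from the text
    "utilities": 1_000,
    "training": 2_000,
}
displaced_labor_savings = 54_000         # one operator, fully burdened (assumed)

years = payback_period(investment, displaced_labor_savings, operating)
print(f"Payback period: {years:.1f} years")   # Payback period: 4.0 years
```

With these assumed numbers the total investment is 140,000 and the net annual saving is 35,000, giving a four-year payback; a real analysis would substitute the company's own cost data and investment criteria.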

5.11 WEATHER MONITORING SYSTEM


Weather is the state of the atmosphere and can be determined by several variables, including
pressure, wind, precipitation, solar radiation, temperature, and humidity. Temperature and
humidity have been shown to be suitable for forecasting weather conditions in the short term. These
factors can be measured to determine the quality of local atmospheric conditions and to forecast the
weather. The components used for designing the weather monitoring system are listed in Table 5.1 below.

5.11.1 List of components used for the design

Table 5.1: Components used for designing the weather monitoring system

Sl.No   Part type         Properties

1       DHT11             Humidity and Temperature Sensor
2       Microcontroller   Arduino UNO (Rev3)
3       Resistor, R1      tolerance ±5%; resistance 220 Ω
4       Resistor, R2      bands 4; tolerance ±5%; pin spacing 400 mil; resistance 10 kΩ
5       Monitor           Liquid Crystal Display (LCD); characters 16x2; 1602A
6       Jumpers           male – male, male – female
7       POT               variant trim; package trim_pot

DHT11 Humidity Temperature Sensor

Varying temperature and humidity information of the environment is captured by the
DHT11 component, as shown in Figure 5.1. It is a temperature and humidity sensor which has a
calibrated digital signal output. The DHT11 ensures a high reliability and long-term stability by
using the exclusive digital-signal-acquisition technique and temperature & humidity sensing
technology. With a resistive-type humidity measurement component and a temperature measurement
component, the DHT11 provides reliable data. Its element is calibrated in the laboratory under
extremely accurate humidity calibration conditions, and the calibration coefficients are stored in
memory for later use. The temperature and humidity sensor used for this study has a coverage
range of up to 20 meters. It complies with the standard reference temperature for industrial
measurement, which is given as 20 °C – 25 °C.

It has low power consumption and an impressively small size suitable for most projects. It is
worthy of note that the DHT11 sensor requires a minimum of one second delay for it to stabilize.

This delay is imperative to guarantee reliable data from the sensor. Besides temperature
measurements, the DHT11 also measures relative humidity – the amount of water vapour in the
atmosphere. At the saturation point, water vapour begins to condense to form dew. The air
temperature largely determines its saturation point: warmer air can hold more water vapour than
colder air. Relative humidity is expressed as a percentage – at 0% RH the air is considered totally
dry, while at 100% RH the air is saturated and water vapour condenses. The value of relative
humidity can be calculated as:

Relative Humidity (RH) = (actual water vapour density / saturation water vapour density) × 100%   - - (1)

The ranges and accuracy of the DHT11 are as follows:

 Humidity Range: 20-90% RH

 Humidity Accuracy: ±5% RH

 Temperature Range: 0-50 °C

 Temperature Accuracy: ±2 °C

 Operating Voltage: 3V to 5.5V
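Equation (1) above can be evaluated directly: relative humidity is the ratio of the actual water-vapour density to the saturation vapour density at the same temperature, expressed as a percentage. A minimal sketch (the saturation value of roughly 23 g/m³ at 25 °C is a standard reference figure, not from the text):

```python
# Relative humidity per Eq. (1): actual water-vapour density divided by
# the saturation vapour density at the same temperature, times 100.
# Densities are in g/m^3; the sample values are illustrative.

def relative_humidity(actual_vapour_density, saturation_vapour_density):
    return 100.0 * actual_vapour_density / saturation_vapour_density

# At 25 °C the saturation vapour density is about 23 g/m^3, so
# 11.5 g/m^3 of water vapour corresponds to 50% RH.
rh = relative_humidity(11.5, 23.0)
print(f"RH = {rh:.0f}%")   # RH = 50%
```

The DHT11 performs this conversion internally and reports RH digitally, so the formula is only needed to interpret what the reported percentage means.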

Fig. 5.1 DHT11 Temperature and Humidity Sensor

Arduino UNO (Rev3)

Arduino is an open-source platform comprising both a physical programmable circuit
board (often referred to as a microcontroller) and a piece of software that can be installed on a
computer and used to write and upload code to the physical board. The Arduino software
runs on all major operating systems. It is an Integrated Development Environment (IDE) that
provides programmers with tools such as a source code editor, automation tools, and a debugger
(Arduino, 2018). There are several variants of the Arduino hardware, including the Arduino Uno,

which is used for this study. The Arduino board is a vital component in this design. It has an inbuilt
Atmel ATmega328P microcontroller which reads and reports signals from the DHT11 sensor.

The Arduino Uno has fourteen digital input/output pins, six analog inputs, a Universal Serial
Bus (USB) connection, a power jack, a reset button and much more.

It contains everything needed to support the in-built microcontroller. It can be powered via an
AC-to-DC adapter or battery. It can also get power supply from the computer when it is connected
with a USB cable. The Arduino platform has become very popular with amateurs and professionals
alike. Amongst other reasons, the Arduino does not need a separate programmer in order to upload
program codes onto the board. It comes pre-burned with a bootloader that allows you to upload new
code to it without the use of an external hardware programmer. A USB cable is used to connect the
Arduino hardware to the computer and the instruction codes are uploaded from the Arduino software.

The Arduino programming software is very easy to learn and to program in. It adopts a modified
C++ programming language structure; however, most of the complexities of the C++ language have
been simplified, allowing programmers to achieve much with less code. Notably, the Arduino
software is free and available online and the Arduino hardware is cheap. Also, the Arduino allows
the use of libraries to extend the functionality of sketches, enhancing the programmer's ability
to develop programs that meet industry standards. The code produced using the Arduino programming
software (IDE) is known as a sketch. This sketch can then be uploaded into the microcontroller as
firmware. Firmware is a series of instructions written and uploaded into an electronic device to
control how it communicates with other hardware devices.
Resistor
A resistor is an electronic component designed to resist the flow of current through a device.
The resistance to current flow results in a voltage drop across the resistor. Resistors
may provide a fixed, variable, or adjustable value of resistance. Resistor values are expressed in
ohms, the unit of electric resistance. Resistors are incorporated within electrical or electronic circuits to
create a known voltage drop or current-to-voltage relationship.
Liquid Crystal Display (LCD)
A Liquid Crystal Display (LCD) was used as a monitor showing messages on the screen. A 16
by 2 LCD, as shown in Fig 2, was used for this study, as it was suitable for the task. This means that
the LCD has two (2) display lines, with each line displaying 16 characters. Although this class of LCD
requires a 16-pin connection, fewer pins can be used if only four (4) data lines are used instead
of the default eight (8) data-line connection.
Jumper
Jumpers are like on/off switches: they may be removed or added to alter component
performance options. A jumper is made of a material that conducts electricity and is sheathed in a
nonconductive plastic covering to prevent accidental short circuits. The jumper's main advantage is its
one-time configuration, which makes it less vulnerable to corruption or power failure than firmware.
Potentiometer
A potentiometer is a three-terminal variable resistor that is mechanically actuated. Two of the
terminals are linked to the ends of the resistive element, while the third is connected to a mobile
contact (the wiper) moving over the resistive track. The resultant output voltage is a function of the
position of this contact. A potentiometer is recommended for use as a voltage divider.
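A potentiometer used as a voltage divider obeys Vout = Vin × (R_lower / R_total), where R_lower is the track resistance between the wiper and the grounded end. A minimal sketch (the 10 kΩ value matches the trim pot in Table 5.1; the supply voltage and wiper position are assumed):

```python
# Potentiometer as a voltage divider: the wiper splits the track resistance
# into a lower and an upper portion, so Vout = Vin * R_lower / R_total.
# Used here (illustratively) to set the LCD contrast from a 5 V supply.

def divider_output(v_in, r_total, wiper_fraction):
    """Output voltage for a wiper at `wiper_fraction` (0.0-1.0) of the track."""
    r_lower = wiper_fraction * r_total
    return v_in * r_lower / r_total

v = divider_output(5.0, 10_000, 0.2)   # 10 kOhm pot, wiper at 20% of travel
print(f"Vout = {v:.1f} V")             # Vout = 1.0 V
```

Turning the trim pot simply changes the wiper fraction, sweeping the output continuously between 0 V and the supply voltage.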

5.11.2 Methodology and System Design
The Arduino Uno Microcontroller Board was used as the main hardware component, while
the Arduino IDE was used in writing the instruction code (known as firmware), which was uploaded
into the microcontroller. Figure 5.2 shows the circuit diagram used to design the Weather Monitoring
System. Figure 5.4 highlights the implementation of the circuit diagram using the selected
components. The connections between components are also shown.

Fig.5.2: Schematic for Arduino-based Weather Monitoring System

5.11.3 Flowchart for program design


Fig 5.3 shows the flowchart for the design of the instructions that drive the microcontroller.
The firmware was developed using the Arduino IDE, which provided the tools needed to debug and
upload to the microcontroller.

The Arduino IDE was used in developing the sketches that were uploaded as firmware
into the microcontroller. Thereafter, the system could work without the user's intervention. Libraries
are required for a robust firmware development using Arduino. In this case, the ‘Liquid Crystal’ and
‘dht’ libraries are used. Next we set the Arduino pins and attached them to the LCD for display.
Arduino pins 9, 10, 4, 5, 6, 7 were attached to the RS, E, D4, D5, D6, D7 pins respectively on the
LCD. The ‘pinMode’ of Arduino pin 12 was set as INPUT. This is the pin that reads the numeric
values from the signal pin of the DHT11 sensor. A delay of at least one second is required to get
reliable readings from the DHT11 sensor. However, we used a three-second delay to ensure that the previous

values have been displayed. It is also important to confirm that the temperature and humidity readings
are within the acceptable range for the sensor. Here the humidity range was 20 – 90% RH, while the
temperature ranged between 0 – 50 °C. Once the read values are within range, they are displayed on
the LCD screen, as seen in Figure 5.4.
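The range check in the flowchart can be modeled in a short sketch. The real firmware is an Arduino sketch; the Python below only mirrors the validation logic described above, using the DHT11 limits quoted earlier:

```python
# Range check mirroring the flowchart: a reading is shown on the LCD only
# when both values fall inside the DHT11's valid spans (20-90% RH, 0-50 C).

HUMIDITY_RANGE = (20.0, 90.0)     # % RH
TEMPERATURE_RANGE = (0.0, 50.0)   # degrees Celsius

def reading_is_valid(humidity, temperature):
    """True if both readings are inside the sensor's specified ranges."""
    h_lo, h_hi = HUMIDITY_RANGE
    t_lo, t_hi = TEMPERATURE_RANGE
    return h_lo <= humidity <= h_hi and t_lo <= temperature <= t_hi

print(reading_is_valid(55.0, 24.0))   # True  -> display on LCD
print(reading_is_valid(95.0, 24.0))   # False -> discard and re-read
```

In the firmware, an invalid reading is simply discarded and the sensor is polled again after the delay, so spurious values never reach the display.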

Fig. 5.3 Flowchart of the developed firmware loaded into the microcontroller

Fig.5.4 Plastic foam casing the components for the weather forecasting system

5.12 MODERN ROBOTS


i. Multiple adaptable robotic arms with modular construction.
ii. Multiple nodes with one controller assisted by temperature gauge, pressure gauge and
position sensors.
iii. Motion control by mechanical couplings (coupled motion control and coordinated
kinematics and dynamics). Large work envelopes and higher payloads managed and controlled
with servo tuning to avoid resonance and vibration.
iv. Usage of micro-controllers and embedded systems for lower power requirements, compact
size, changeable functions, fewer movable parts for longer life, with the chip forming the brain.

Controller Area Network (CAN) connections.

 Efficiently controls the distributed intelligence.


 Good price-performance ratio.
 Gives reliability through error detection and an error-handling system.
 Immunity against electromagnetic interference.
 Dynamic connection and disconnection of nodes for flexibility.
 Real-time capability for better repeatability, accuracy and precision.
 Communication: radio frequency and infrared links for digital communications.

Programmable Automation Controller (PAC) for rapid advancement in capability (for which re-
engineering is needed) and good portability of the control engine.

Robot Vision: Machine vision replaces human vision through video cameras and special computer
hardware and software.

5.12.1 Future Application

The most prominent characteristic of present-day robot applications is that they require
the robot to perform a repetitive motion pattern. Although the motion pattern is sometimes
complicated, variations in the motion pattern are minimal. Also, the level of sensor technology
required in the application is fairly low.

The enhancements in the technological capabilities of future robots will permit the
applications to evolve in new directions. Some of the important characteristics of future robot tasks
that will distinguish them from typical present-day applications are the following:

1. The tasks will be increasingly complicated. In addition to repetitive tasks, robots will perform
semi-repetitive and even non-repetitive operations.

2. The tasks will require higher levels of intelligence and decision-making capabilities from the
robot. Advances in the field of artificial intelligence will be incorporated into the design of robot
controllers.

3. Some of the tasks will require robust mobility: the capability to move about the work area
without relying on rails or moving platforms to execute the move.

4. The robot tasks of the future will commonly make use of a variety of sensor capabilities,
including vision, tactile sensing, and voice communication.

5. Many of the future tasks performed by robots will require a higher level of end effector
technology. The requirements for hand articulation and tactile sensing capabilities will be far in
advance of today's gripper devices. The concept of the universal hand will be much closer to reality.

6. The greater variety of robot applications will require that robot anatomy become more specialized
and differentiated according to the applications. The physical configuration of the robot will be
designed for the specific purpose that the robot is supposed to serve. The economics of this
specialization will be improved by the use of techniques such as flexible automation, modularized
construction, and standardization of components.

7. Tasks that are performed in inaccessible environments will require significant improvements
in robot reliability because of the difficulty in servicing, maintaining, and repairing the machine. The
reliability improvements will also be incorporated into robot designs that are not used in these kinds
of environments.

8. The inaccessible environments may require the use of a telepresence capability, so that
humans can instruct the robot during the task.

Many of these application characteristics correspond to the technological capabilities of the
future robot profile described in Chap.

Future Manufacturing Applications of Robots

Robots are extensively used in manufacturing since, as we have seen in earlier chapters, the cost
of labor is going up while the cost of robots is going down. Some examples of robot use in
industry are given next.

If the projected trends continue, the installed base will increase to approximately one million
units. Not all of these units will be employed in manufacturing operations, but many of them will
raise questions about the performance of the tasks that robots perform today.

This section estimates the application trends to see how robots are likely to be used in manufacturing
operations in the future. It also considers the various problem areas that are presented by these
operations and how future robots might be capable of dealing with the problems.

Non-manufacturing uses of robots are very limited in numbers and dollar value. Examples of
these current uses include research and development applications and teaching robots in colleges and
universities. Teaching robots constitute a growing share of the market in terms of numbers of units;
however, the price per unit for these robots is low compared to the price of an industrial-grade robot.
The non-manufacturing applications will still constitute a relatively small proportion of the total
robot installations.

Assembly Applications
The assembly process represents an important future application for robots. The area of
assembly in which robots are expected to be used is in batch production operations. In the mass
production of relatively simple products (e.g., flashlights, pens, and other mechanical products with
fewer than 10 components), robots will probably never be able to compete with fixed automation in
terms of speed and throughput rates. Even with lower-cost robots in the future, the economics will
favour the use of high-speed specialized machines to accomplish the assembly tasks for these
products. It is in the assembly of medium and small lots (e.g., electric motors, pumps, and many
other industrial products) and in the high production of more complex assembled products (e.g.,
automobiles, televisions, radios, clocks) that robots are most likely to be utilized. However, these
kinds of operations are currently the domain of human workers who possess the intelligence,
dexterity, and adaptability needed for the tasks that go far beyond the capabilities of present-day
robots.

This general area of assembly automation is sometimes referred to by the name
programmable assembly. The present state of the art in programmable assembly is such that
relatively few robots are employed in this technology. An estimated five per cent of the current
systems use robotics technology. But this proportion is expected to grow to 30 per cent in 1990. This
suggests that advances in robot technology directed at the assembly process, combined with a better
understanding of programmable assembly techniques, will occur during the period between 1985 and
1990. Some of the technological improvements needed to introduce robots in greater numbers into
the assembly process include:

 Improvements in sensor technology (especially machine vision)


 Higher accuracy and repeatability
 Higher speeds
 Changes in design concepts and fastening methods for products to permit easier assembly by robots
 More versatile grippers
 Improved off-line programming methods that will permit complex robot programs to be
developed from design data with the aid of advanced CAD/ CAM software and to be
downloaded directly to the assembly workstation for the required assembly tasks

A large field of applications for programmable assembly systems using robot technology is
electronic assembly. The tremendous growth potential in the electronics industry over the next two
decades provides a substantial impetus for the robotics industry to develop new robotic assembly
systems.

Arc Welding Application

Another application expected to grow in importance is arc welding. Most continuous arc-
welding operations are accomplished manually today. The latest state-of-the-art robot arc-welding
installations almost invariably involve the production of medium or high quantities of items. In this

situation, the robot must be programmed to perform the required welding cycle and the parts to be
welded must be placed in fixed locations. Programming the robot to do the welding cycle takes
considerably longer than the actual welding task. The location requirement is satisfied by means of a
special welding fixture to hold the parts and a human fitter who works in the robot cell. The
productivity of these semi-automated welding cells can be two or three times as high as the
corresponding manual cell in which a fitter and a welder work together. This is because of the low
arc-on times usually encountered in manual welding operations. The economics of the application
require that the production quantities must be sufficient so that the productivity gains per unit of
product can overcome the initial cost of the programming time and the special fixture.
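The break-even condition just described can be stated directly: the production quantity must be large enough that the per-unit productivity saving recovers the one-time programming and fixture costs. A minimal sketch with assumed figures (none are from the text):

```python
import math

# Break-even production quantity for a robot arc-welding cell:
# the per-unit saving over the manual method must recover the one-time
# programming and special-fixture costs. All figures are assumed.

def breakeven_quantity(programming_cost, fixture_cost, saving_per_unit):
    """Smallest whole number of units at which the cell pays for itself."""
    return math.ceil((programming_cost + fixture_cost) / saving_per_unit)

q = breakeven_quantity(programming_cost=12_000, fixture_cost=8_000,
                       saving_per_unit=5.0)
print(q)   # 4000 units
```

Below this quantity the manual cell remains cheaper, which is why the text notes that such installations almost invariably involve medium or high production quantities.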

One of the technical problems that arises in using robots for arc welding is the variation in the
part edges that are to be welded. Human welders are capable of compensating for these variations
during the welding operation, but the conventional playback robot cannot. This inability of playback
robots to follow the variations in the welding gap has inhibited their use in the arc-welding process.
Several sensor technologies are being developed to deal with the problem of part edge variations. It
is anticipated that the widespread adoption of these sensor technologies will be an important factor in
the expanded use of robots for arc welding.

Parts Handling and New Robot Applications

Parts handling and machine loading represent a
third large area of future robot applications, although the proportion of these applications compared
to others will probably decline modestly. Perhaps the biggest limitation in using robots for these
functions is the problem of locating and orienting the part so that the robot can fetch it at the
beginning of the work cycle. In the past, there has been only one solution to this problem and that is
to present the parts to the robot in a known position and orientation. This solution requires the parts
to be prepositioned and reoriented for the robot application by means of some form of parts-handling
device. Additional expense is involved to engineer the parts-handling capability and sometimes an
extra manual operation is required to load work parts into the device. The problem of part position
and orientation in robot applications provides reinforcement to a general argument among factory
automation specialists that part orientation must be established when processing of the part initially
begins and should never be lost during subsequent manufacturing and assembly operations. This is
certainly not the common practice in factories geared largely toward manual operations, in which parts
are usually stored in random arrangements in tote pans, bins, and boxes.

Running counter to this part orientation argument are the recently introduced commercial
systems capable of retrieving work parts that are all mixed together in a tote pan. These systems are
called 'bin-picking' systems. They are based on the use of machine vision. It is anticipated that this
bin-picking capability will be an important factor which will allow robots to be used in an increasing
number of parts-handling, machine loading/unloading, and many other factory operations. Robots
equipped with machine vision to accomplish the bin-picking problem might simultaneously perform
visual inspection operations on the parts that are being retrieved.

The advances that are being made in robotics technology and related computer software will
allow new uses of industrial robots that are today only found in the laboratory or not considered.
These possibilities include wire harness assembly, garment manufacturing, shoemaking, product
packaging, food-processing operations, dipping cycles in electrochemical plating, and a host of other
unanticipated operations.

Flexible Manufacturing Systems


Finally, another technology that will spur future robot applications in manufacturing is
computer-integrated flexible manufacturing systems (FMS). A flexible manufacturing system can be

defined as a group of automated machine tools (usually numerically controlled machines) that are
interconnected by means of a material handling and storage system, and which operates as an
integrated system under computer control. Although these systems started appearing approximately
15 years ago, the application of FMS technology is only now beginning to grow significantly. It is
anticipated that flexible manufacturing systems will become less expensive to install and operate as
improvements are made in the technology and as we learn more about them. The improving cost
advantage compared to other forms of production will result in an increase in the share of U.S.
manufacturing activity that is performed by FMS technology. Robots are being used increasingly in
flexible manufacturing systems and machining cells to perform the materials handling function.
Industrial robots with properly designed grippers have been found to be ideal for handling rotational
workparts in this type of application. The conveyors are used to bring parts into and out of the cell
and the robots are used to handle parts between machines in the cell.

5.12.2 Challenges

Robotics research has been making great strides in recent years, but there are still many
hurdles to the machines becoming a ubiquitous presence in our lives. The journal Science Robotics has
now identified 10 grand challenges the field will have to grapple with to make that a reality.

(i) New Materials and Fabrication Scheme


Roboticists are beginning to move beyond motors, gears, and sensors by experimenting with
things like artificial muscles, soft robotics, and new fabrication methods that combine multiple
functions in one material. But most of these advances have been “one-off” demonstrations, which are
not easy to combine.

Multifunctional materials merging things like sensing, movement, energy harvesting, or


energy storage could allow more efficient robot designs. But combining these various properties in a
single machine will require new approaches that blend micro-scale and large-scale fabrication
techniques. Another promising direction is materials that can change over time to adapt or heal, but
this requires much more research.

(ii) Bio-inspired and Bio-Hybrid Robots


Nature has already solved many of the problems roboticists are trying to tackle, so many are
turning to biology for inspiration or even incorporating living systems into their robots. But there are
still major bottlenecks in reproducing the mechanical performance of muscle and the ability of
biological systems to power themselves.

There has been great progress in artificial muscles, but their robustness, efficiency, and
energy and power density need to be improved. Embedding living cells into robots can overcome
challenges of powering small robots, as well as exploit biological features like self-healing and
embedded sensing, though how to integrate these components is still a major challenge. And while a
growing “robo-zoo” is helping tease out nature’s secrets, more work needs to be done on how
animals transition between capabilities like flying and swimming to build multimodal platforms.

(iii) Power and Energy


Energy storage is a major bottleneck for mobile robotics. Rising demand from drones, electric
vehicles, and renewable energy is driving progress in battery technology, but the fundamental
challenges have remained largely unchanged for years.
That means that in parallel to battery development, there need to be efforts to minimize
robots’ power utilization and give them access to new sources of energy. Enabling them to harvest

energy from their environment and transmitting power to them wirelessly are two promising
approaches worthy of investigation.

(iv) Robot Swarms


Swarms of simple robots that assemble into different configurations to tackle various tasks
can be a cheaper, more flexible alternative to large, task-specific robots. Smaller, cheaper, more
powerful hardware that lets simple robots sense their environment and communicate is combining
with AI that can model the kind of behavior seen in nature’s flocks.

But there needs to be more work on the most efficient forms of control at different scales—
small swarms can be controlled centrally, but larger ones need to be more decentralized. They also
need to be made robust and adaptable to the changing conditions of the real world and resilient to
deliberate or accidental damage. There also needs to be more work on swarms of non-homogeneous
robots with complementary capabilities.

(v) Navigation and Exploration


A key use case for robots is exploring places where humans cannot go, such as the deep sea,
space, or disaster zones. That means they need to become adept at exploring and navigating
unmapped, often highly disordered and hostile environments.

The major challenges include creating systems that can adapt, learn, and recover from
navigation failures and are able to make and recognize new discoveries. This will require high levels
of autonomy that allow the robots to monitor and reconfigure themselves while being able to build a
picture of the world from multiple data sources of varying reliability and accuracy.

(vi) AI for Robotics


Deep learning has revolutionized machines’ ability to recognize patterns, but that needs to be
combined with model-based reasoning to create adaptable robots that can learn on the fly.
Key to this will be creating AI that’s aware of its own limitations and can learn how to learn
new things. It will also be important to create systems that are able to learn quickly from limited data
rather than the millions of examples used in deep learning. Further advances in our understanding of
human intelligence will be essential to solving these problems.

(vii) Brain-Computer Interfaces


BCIs will enable seamless control of advanced robotic prosthetics but could also prove a
faster, more natural way to communicate instructions to robots or simply help them understand
human mental states.

Most current approaches to measuring brain activity are expensive and cumbersome, though,
so work on compact, low-power, and wireless devices will be important. They also tend to involve
extended training, calibration, and adaptation due to the imprecise nature of reading brain activity.
And it remains to be seen if they will outperform simpler techniques like eye tracking or reading
muscle signals.

(viii) Social Interaction


If robots are to enter human environments, they will need to learn to deal with humans. But
this will be difficult, as we have very few concrete models of human behavior and we are prone to
underestimate the complexity of what comes naturally to us.
Social robots will need to be able to perceive minute social cues like facial expression or
intonation, understand the cultural and social context they are operating in, and model the mental
states of people they interact with to tailor their dealings with them, both in the short term and as they
develop long-standing relationships with them.

(ix) Medical Robotics


Medicine is one of the areas where robots could have significant impact in the near future.
Devices that augment a surgeon’s capabilities are already in regular use, but the challenge will be to
increase the autonomy of these systems in such a high-stakes environment.

Autonomous robot assistants will need to be able to recognize human anatomy in a variety of
contexts and be able to use situational awareness and spoken commands to understand what’s
required of them. In surgery, autonomous robots could perform the routine steps of a procedure,
giving way to the surgeon for more complicated patient-specific bits.

Micro-robots that operate inside the human body also hold promise, but there are still many
roadblocks to their adoption, including effective delivery systems, tracking and control methods, and
crucially, finding therapies where they improve on current approaches.

(x) Robot Ethics and Security


As the preceding challenges are overcome and robots are increasingly integrated into our
lives, this progress will create new ethical conundrums. Most importantly, we may become over-
reliant on robots.

Over-reliance could cause humans to lose certain skills and capabilities, leaving them unable to take the reins when systems fail. We may end up delegating tasks that should, for ethical reasons, retain some human supervision, and allowing people to pass the buck to autonomous systems when things go wrong. It could also reduce self-determination, as human behaviors change to accommodate the routines and restrictions required for robots and AI to work effectively.

5.13 CASE STUDY

MiR mobile robots enable intralogistics innovation at Bossard Smart Factory Logistics
In the Industry 4.0 era, many industrial manufacturers are taking part in a broad automation revolution. As automated management and efficiency improvement become essential, diverse human-machine collaboration scenarios are emerging, highlighting the value that collaborative robots can generate as a productivity solution.
The Smart Factory Logistics solution practiced by Bossard, one of the biggest names in the global fastener industry, defines its "Proven Productivity" ecosystem together with its Product Solutions and Assembly Technology Expert divisions, and demonstrates Bossard's strength in technology integration. Its "Last Mile Management" solution targets internal logistics as a practical area for improvement, combined with MiR's advanced mobile robotics technologies.
The two MiR100 robots at the Bossard Shanghai Logistics Centre have won clear recognition from both the European and Chinese teams for their outstanding safety, flexibility, and system adaptability. With the robot solution in place, a new picture has opened up of workers collaborating with mobile robots, addressing pain points and bottlenecks that the manual "water spider" (material-runner) model could no longer handle in modern flexible production. At the same time, MiR highlights the importance of intelligent intralogistics by connecting efficiently to other operating systems.
High safety opens up a new chapter for human-machine collaboration
Safety is the cornerstone of collaborative robot design, and the origin of mobile robotics development.
As an old Chinese poem puts it, "only from the top of the highest mountain do the others look small." Built to the most stringent safety standards in the world, MiR mobile robots deliver outstanding safety performance across the industry. Two MiR100 robots support the intralogistics of the Bossard Shanghai Logistics Centre; as the name suggests, each can move loads of up to 100 kilograms in manufacturing or warehousing.
In fact, the Chinese and European teams recognized MiR's promise independently — a case of "great minds think alike," as the Chinese saying goes. According to Ellick Qin, Smart Factory Logistics Manager at Bossard Fastening Solutions (Shanghai) Co., Ltd, the European team selected a MiR mobile robot that strictly meets European industrial safety production standards in July 2018. At almost the same time, the Chinese team encountered MiR's autonomous mobile robot solution at an industry exhibition in Chengdu. After a three-month internal test in Europe, the MiR robot project was officially launched at the Bossard Shanghai Logistics Centre at the end of 2018.
Safety has long been the hallmark of MiR mobile robots. With world-class navigation and sensor technologies, MiR robots can accurately analyze and judge obstacles ahead in different application scenarios. In the Bossard Shanghai Logistics Centre, for example, the robots automatically identify passing workers, forklifts and other devices, maintain a safe distance under dynamic conditions, proactively evade obstacles, predict movement feasibility in real time, and adaptively adjust their routes.
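MiR's actual control logic is proprietary, so as a hedged sketch only, the safe-distance behavior described above can be approximated by scaling commanded speed with the distance to the nearest detected obstacle. Every threshold below is invented for illustration and is not a MiR parameter.

```python
def safe_speed(obstacle_distance, max_speed=1.5, stop_dist=0.3, slow_dist=2.0):
    """Scale commanded speed by distance to the nearest obstacle:
    full stop inside the protective zone, linear slowdown inside the
    warning zone, full speed in open space. Thresholds are illustrative."""
    if obstacle_distance <= stop_dist:
        return 0.0                       # protective stop
    if obstacle_distance >= slow_dist:
        return max_speed                 # open space: full speed
    # Linear interpolation across the warning zone
    frac = (obstacle_distance - stop_dist) / (slow_dist - stop_dist)
    return max_speed * frac

# A worker 0.2 m away forces a stop; a clear aisle allows full speed.
stop = safe_speed(0.2)
full = safe_speed(5.0)
```

Real safety-rated systems implement this kind of zoning in certified hardware and combine it with re-planning around the obstacle, but the speed-versus-distance idea is the same.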
High flexibility matches the flexible requirements of modern manufacturing
Built on this safety foundation, mobile robots are expected to meet actual production requirements: resolving internal logistics bottlenecks, optimizing work processes, saving operating costs, and creating human-machine collaboration scenarios in which each side plays its proper role. The flexibility of the MiR robot makes it a truly efficient internal logistics solution.
Externally, the MiR robot is a capable assistant for flexible production. Unlike AGVs, it requires no heavy up-front investment in laying tracks; this deployment flexibility saves customers considerable time, manpower and fixed cost.
In addition, the MiR robot is easy to operate and program. With a friendly user interface, it can respond quickly and flexibly to the transportation requirements of different batches, matching the market's demand for flexible production and customized service in the Industry 4.0 era.
Internally, the MiR robot excels at optimizing daily management. At Bossard, the robots flexibly handle long-distance, multi-point, high-frequency requests for sample picking, sample delivery and replenishment.

At present, the two MiR100 robots at the Bossard factory play different roles. One is stationed in the exhibition hall as an ambassador for the enterprise, demonstrating the "Last Mile Management" operation mode to customers; it also appears at large industry exhibitions and seminars to introduce advanced internal logistics solutions.
The other is dedicated to operational tasks, shuttling between the warehouse and the laboratory to deliver and collect samples, and assisting with the distribution of supplementary materials between the warehouse and the various workstations.
High adaptability for seamless connection to various operating systems
Bossard offers a full range of high-quality fastener products to clients in precision manufacturing, engineering machinery, construction and transportation systems. Because fasteners come in many types, carry low individual value, and hide relatively high costs in assembly and transport logistics, Bossard provides its customers with the intelligent internal logistics solution "Last Mile Management", optimizing internal logistics from storage to replenishment.
Another reason the MiR robot shines is that its excellent adaptability matches Bossard's various systems. "From the recommendation of the MiR robot to the completion of installation, debugging and personnel training, the whole deployment took only a week, which shows how well the MiR robot adapts to different systems," Ellick Qin said.
In line with the sensor-based storage systems on the customer side, information such as fastener inventory is managed by Bossard's cloud-based operating system and fed back in real time to the ARIMS back end, whose data analysis supports fully or semi-automatic replenishment at the production stations.
When a MiR100 receives task orders from ARIMS, it cooperates with the ARIMS system online to plan an efficient picking route through the warehouse and assembly storage area, adjusting in real time to actual conditions. At the replenishment stage, the MiR100 again works seamlessly with ARIMS, carrying materials and taking part in the water spider operation so that production stations are replenished efficiently and accurately.
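ARIMS's interfaces are not public, so the following is a purely hypothetical sketch of the order-to-route flow described above; every name, data structure and location is invented for illustration, and a real fleet system would optimize routes against a facility map.

```python
from dataclasses import dataclass

@dataclass
class Order:
    station: str   # destination production station (hypothetical ID)
    items: list    # part numbers to replenish

def plan_route(order, pick_locations):
    """Hypothetical route planner: visit the pick location of each item
    in the order, then the destination station."""
    stops = [pick_locations[item] for item in order.items]
    return stops + [order.station]

# Simulated order stream standing in for the ARIMS back end
pick_locations = {"M6-bolt": "rack-A3", "M6-nut": "rack-B1"}
order = Order(station="station-7", items=["M6-bolt", "M6-nut"])
route = plan_route(order, pick_locations)
```

The point of the sketch is the division of labor: the back-end system owns inventory and demand, while the robot owns route execution and real-time adjustment.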
In addition, given the complexity of the working environment, the MiR robot's high adaptability proves advantageous in the varied operating situations of the Bossard Shanghai Logistics Centre, which involve rolling doors, elevators, access control systems and other elements. Bossard plans to build a new industrial park of around 3,700 m2 in Tianjin, where the internal logistics requirements will be even more diversified and complex; the MiR robot's adaptive capability is expected to play an even more important role there.
As the application of collaborative robots continues to expand and deepen across many fields, more dull, repetitive or high-risk mechanical operations will be handed over to safe and flexible collaborative robots. But the value of humans will not be replaced. On the contrary, freed from basic work, human workers can take on higher-value tasks that call on human judgment. With the help of MiR robots, the efficiency of internal logistics will be further improved, setting Bossard up as a role model for internal logistics. The human-machine cooperation mode created by Industry 4.0 can recover the efficiency lost to non-lean manufacturing, and is therefore worth the whole industry's attention.
