Lecture Plan-Unit III
Most robots of today are nearly deaf and blind. Sensors can provide limited feedback to the robot so it can do its job. A sensor sends information, in the form of electronic signals, back to the controller. Sensors also give the robot controller information about its surroundings: they let it know the exact position of the arm and the state of the world around it.

Requirements of Sensors:
1. To provide positional and velocity information concerning the joint, arm and end effector: status, position, velocity and acceleration.
2. To prevent damage to the robot itself, its surroundings and human operators.
3. To provide identification and real-time information indicating the presence of different types of components and the nature of the tasks performed.

Transducers: A transducer is a device that converts one type of physical variable (e.g. force, pressure, temperature, velocity, flow rate) into another type; the most common conversion is to electrical voltage.

Sensors: A sensor is a transducer used to make a measurement of a physical variable of interest, e.g. strain gauges (to measure force and pressure), thermocouples (to measure temperature), speedometers (to measure velocity).

Characteristics of Sensory Devices:
1. Accuracy: The value of the variable is sensed with no systematic +ve or -ve errors.
2. Precision: Random variability or dispersion in the measurement is minimized.
3. Operating range: The minimum and maximum change in input signal to which the sensor can respond.
4. Speed of response: The minimum time required to respond to changes in the sensed variable.
5. Calibration: The time and trouble required to accomplish the calibration procedure should be minimal.
6. Reliability: High reliability is needed; the sensor should not be subject to frequent failures.
7. Cost and ease of operation: The cost to purchase, install and operate the sensor should be low.
8. Sensitivity: The change in output exhibited by the sensor for a unit change in input; this should be as high as possible.
9. Linearity: The sensory device should exhibit the same sensitivity over its entire operating range (see the sketch after this list).
10. Other considerations: i) The device should not disturb the quantity it senses or measures. ii) It should be suitable for its operating environment.
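To make the sensitivity and linearity characteristics concrete, here is a minimal Python sketch (not from the source) that estimates both from invented calibration data: sensitivity as the least-squares slope of output against input, and linearity as the worst-case deviation from that straight line.

```python
# Sketch: estimating sensor sensitivity and linearity from calibration data.
# The (input, output) pairs below are hypothetical, for illustration only.
calibration = [(0.0, 0.02), (1.0, 1.01), (2.0, 2.05), (3.0, 2.98), (4.0, 4.03)]

n = len(calibration)
mean_x = sum(x for x, _ in calibration) / n
mean_y = sum(y for _, y in calibration) / n

# Sensitivity = change in output per unit change in input (least-squares slope).
sensitivity = (sum((x - mean_x) * (y - mean_y) for x, y in calibration)
               / sum((x - mean_x) ** 2 for x, _ in calibration))

# Linearity: how far the measured outputs stray from the best-fit straight line.
offset = mean_y - sensitivity * mean_x
max_dev = max(abs(y - (sensitivity * x + offset)) for x, y in calibration)

print(f"sensitivity = {sensitivity:.3f} output units per input unit")
print(f"worst-case deviation from linear fit = {max_dev:.3f}")
```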
Types of sensors:
The various types of sensors for taking feedback from the robotic system are classified below.
- Force sensors: measure the three orthogonal forces and the three orthogonal torques at the fingertips of the robot.
- Inertial sensors: sense gravity and the reaction torque generated by acceleration.
- Tactile sensors: respond to contact forces arising between themselves and objects; they warn the robot manipulator to avoid collision when the end effector is near an object.
- Visual sensors: help in determining the coordinates of the object before it is grasped.
- Proprioceptors: sensors for measuring the kinematic and dynamic parameters of the robot. Kinematic parameters such as position, velocity and acceleration are fed back.
  Joint position: potentiometer, synchro/resolver, RVDT, Inductosyn. Velocity: tachometers.
In a feedback system, robot sensors provide information such as:
- Recognition data, to understand the shape, size and features of the object
- Orientation data
- Physical interaction data, to understand the contact between the end effector and the object
Sensors are in general broadly classified into two categories: contact and non-contact sensors.
Contact sensors:
i) Touch sensors (binary sensors)
ii) Slip sensors
iii) Force sensors (analog sensors)
iv) Torque sensors
Touch sensors + force sensors = tactile sensors.
Classification according to Mikell P. Groover:
1. Tactile sensors
2. Proximity sensors
3. Miscellaneous sensors and sensor-based systems
4. Machine vision systems
Touch Sensors (binary sensors):
- Indicate whether contact has been made with the object or not.
- Provide tactile information by giving a binary output signal.
- Can form part of an inspection probe.
e.g. limit switches, micro switches
Fig.1. Robot hand with micro switches

Force Sensors (analog sensors):
- Indicate the magnitude of the contact forces in addition to whether contact has been made or not.
- Apply the appropriate force using the following techniques:
1. A force-sensing wrist, consisting of a special load cell mounted between the gripper and the wrist.
2. Measuring the torque exerted between two points.

Position and Displacement Sensors:
Position sensors are used to know the position of the joint, and hence the position of the end effector, to enable programming.
Types:
1. Absolute position sensors, e.g. joint sensors, wrist sensors, tactile array sensors, artificial skins
2. Incremental position sensors (displacement sensors), e.g. strain gauges, potentiometers, encoders, LVDT

Displacement sensors - Potentiometers:
Fig.3. Potentiometer

The wiper contact creates wear and electrical noise.

V₀ = K·θ
where
V₀ = output voltage
K = voltage constant of the potentiometer, in volts per radian (for an angular pot) or volts per mm (for a linear pot)
θ = position of the pot, in radians or mm

Encoders:
1. Absolute encoders
2. Incremental encoders, which give a digital output signal (pulse train)

Incremental encoders:
- By counting the number of pulses, adding or subtracting as appropriate, the encoder can be used for position information.
- Two sets of transmitters and receivers, aligned 90° out of phase, provide direction information (see the sketch after Table 1).

Absolute encoders:
- Position is known in absolute terms, not with respect to a starting position.
- A larger number of tracks of stripes, with a corresponding number of transmitters and receivers, is used.
- The shaft angle is read directly from the encoder without counting.

Table.1. Decimal numbers and their corresponding binary and Gray codes
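How the counting and the 90° phase shift give position and direction, and how a Gray-coded absolute track is decoded, can be shown in a short Python sketch (not from the source; the channel samples are invented, and valid single-step transitions are assumed):

```python
# Sketch: position from an incremental encoder's quadrature channels A and B
# (two pulse trains 90 degrees out of phase), and Gray-to-binary decoding for
# an absolute encoder. Sample values are hypothetical.

def quadrature_count(samples):
    """Net count of up/down steps from successive (A, B) channel samples.

    The 90-degree phase shift makes the state sequence direction-dependent:
    00 -> 01 -> 11 -> 10 -> 00 in one direction, reversed in the other.
    Assumes valid single-step transitions between samples.
    """
    forward = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}
    count = 0
    for prev, curr in zip(samples, samples[1:]):
        if curr != prev:
            count += 1 if forward[prev] == curr else -1
    return count

def gray_to_binary(gray):
    """Decode a Gray-coded track reading into an ordinary binary count."""
    binary = 0
    while gray:
        binary ^= gray
        gray >>= 1
    return binary

# Four steps forward through one full quadrature cycle, then one step back:
samples = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (1, 0)]
print(quadrature_count(samples))          # 3 (net position in pulse counts)
print(gray_to_binary(0b0110))             # Gray 0110 -> binary 0100 = 4
```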
Linear Variable Differential Transformer (LVDT):
- The primary is excited with an AC source.
- The core position determines the voltages induced in secondary 1 and secondary 2.
- The AC output of the LVDT is converted to DC using a rectifier (a displacement-recovery sketch follows the figure).
Fig.5. LVDT construction
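The source does not give a conversion formula for the LVDT; a common scheme, assumed here, is ratiometric demodulation, where displacement is proportional to (V1 − V2)/(V1 + V2). A minimal sketch with invented constants:

```python
# Sketch: recovering core displacement from the two LVDT secondary voltages.
# Assumes ratiometric demodulation x = k * (V1 - V2) / (V1 + V2); the scale
# factor and voltage readings are hypothetical values for illustration.

K_MM_PER_RATIO = 12.5   # mm of core travel per unit voltage ratio (assumed)

def lvdt_displacement(v1, v2, k=K_MM_PER_RATIO):
    """Displacement of the core from the null position, in mm.

    At null the two secondaries produce equal voltages, so x = 0; moving the
    core raises one secondary voltage and lowers the other.
    """
    return k * (v1 - v2) / (v1 + v2)

print(lvdt_displacement(2.0, 2.0))   # 0.0 mm: core at null
print(lvdt_displacement(2.6, 1.4))   # 3.75 mm: core displaced toward secondary 1
```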
Range Sensors:
Fig.7. Range imaging sensor

Range sensors are used to sense and measure the distance between objects and the sensing device. They may be used to locate the workpiece in the robot workcell.

Triangulation method: The distance of the object being sensed can be calculated by this method. A light wave from a laser source (or a radar wave) is emitted at a particular frequency and wavelength. The rays, after reflection, are collected by the receiver. The difference between the emitted and received waves is calculated and used to give information about the distance of the object. Let d be the distance of the object to be measured, b the perpendicular distance between the emitter and the receiver along the normal to the ground, and θ the angle of incidence of the ray on the object (the angle at which it is emitted); then,
d = b · tan θ
Fig.8a. Triangulation method of range sensing

A laser rangefinder is a device which uses a laser beam to determine the distance to an object. The most common form of laser rangefinder operates on the time-of-flight principle: it sends a laser pulse in a narrow beam towards the object and measures the time taken by the pulse to be reflected off the target and returned to the sender. Laser rangefinders are used extensively in 3-D object recognition, 3-D object modelling, and a wide variety of computer-vision-related fields. This technology constitutes the heart of so-called time-of-flight 3D scanners. In contrast to military instruments, these laser rangefinders offer high-precision scanning abilities, with either single-face or 360-degree scanning modes. Laser rangefinders used in computer vision applications often have depth resolutions of tenths of millimetres or less; this can be achieved by using triangulation or refraction measurement techniques, as opposed to the time-of-flight techniques used in LIDAR.

Pulsed Time-of-Flight Method:
These laser ranging systems are used to measure the distance (or range) between the source (where the ranging system is located) and some object, which we will call the target. This is accomplished by:
1. Irradiating the target with a laser pulse from the source transmitter.
2. Detecting a reflection of the beam off the target.
3. Measuring the time required for the laser signal to travel from the source to the target and back to the detector.
A block diagram of this type of laser ranging system is shown in the figure.
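Both rules above reduce to one-line computations: the triangulation formula d = b·tan θ, and the pulsed time-of-flight relation d = c·Δt/2 (the pulse covers the distance twice). A minimal Python sketch with invented values:

```python
# Sketch: the two range-measurement rules described above.
# Baseline, angle and round-trip time are hypothetical values.
import math

C = 3.0e8   # speed of light, m/s

def range_triangulation(b, theta_deg):
    """d = b * tan(theta): baseline b and emission angle theta give range d."""
    return b * math.tan(math.radians(theta_deg))

def range_time_of_flight(round_trip_s):
    """Pulsed time of flight: halve the round trip because the pulse travels
    out to the target and back."""
    return C * round_trip_s / 2.0

print(range_triangulation(0.5, 60.0))    # ~0.866 m
print(range_time_of_flight(66.7e-9))     # ~10.0 m
```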
Proximity Sensors:
The presence of an object can be sensed by a proximity sensor. There are several types of proximity sensors:
1. Optical proximity sensors
2. Hall effect proximity sensors
3. Inductive proximity sensors
4. Ultrasonic proximity sensors
Fig.11. Proximity array sensor

2. Hall effect proximity sensor: When a ferromagnetic body approaches the sensor element, the magnetic field associated with the element changes; this change alters the Hall voltage that appears across the current-carrying sensing element, and the resulting emf in the leads is calibrated to drive the display.
Fig.12. Hall effect sensors

3. Inductive proximity sensors: Proximity sensors based on electromagnetic induction are commercially available. The sensing device, when brought near the object, creates an alternating magnetic field in a small region, and this field induces eddy currents in conducting objects. The eddy currents produce their own magnetic field, which interacts with the primary field; this changes the flux density and indicates the presence of the object.

4. Ultrasonic proximity sensors: A transducer element is placed in a metal casing, which is closed by a resin mould. The metal casing is provided with an acoustic absorber. The transducer is connected through its leads to the cable and then to the display. A transmitter produces ultrasonic waves; these waves strike the object, and the reflected echo indicates its presence.
Wrist Sensors:
- Indicate the magnitude of the contact forces in addition to whether contact has been made or not.
- Provide the capability to grasp parts of different sizes in material handling, machine loading and assembly work.
- Apply the appropriate force using the following techniques:
1. A force-sensing wrist, consisting of a special load cell mounted between the gripper and the wrist.
2. Measuring the torque exerted between two points.
Compliance Sensors:
- Used to find the distance traveled by the arm or end effector per unit force applied.
- Used for assembly purposes.

Slip Sensors:
Fig.16. Slip sensor based robot gripper

The slip sensor components, together with the force sensors and robot controller, are shown in the block diagram. The fingers are crossed and pivoted on the support. One finger is fixed and the other, movable jaw is connected to the strain gauge. The fingers are closed on the object to secure the grip by means of a lever actuated from the wrist of the robot. A full bridge of electrical strain gauges on the lever measures the strain due to the effort required to close the fingers, and with proper calibration the gripping force can be determined. A specially mounted, rubber-padded, spring-loaded wheel in contact with the upper surface of the object measures the degree of slip through the rotation of a potentiometer. Dead weights are placed on the weighing pan to induce slip between the fingers and the object being grasped. The movement of the slider point of the potentiometer varies with the slip, and the analog voltage signal obtained is digitized and fed to a microprocessor. When there is a slip, the microprocessor detects it and sends a high-value signal through an open-collector buffer to the I/O module of the robot controller, as sketched below.
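A minimal sketch of that detection chain, with the hardware replaced by a hypothetical digitized potentiometer trace; the threshold and readings are invented for illustration:

```python
# Sketch of the slip-detection chain described above: the potentiometer on the
# measuring wheel is digitized, the microprocessor watches for wheel rotation
# (slip), and a high signal is raised toward the robot controller's I/O module.

SLIP_THRESHOLD = 5   # change in ADC counts between samples treated as slip (assumed)

def signal_controller():
    """Stand-in for driving the open-collector buffer to the I/O module."""
    print("slip detected -> signalling robot controller")

def monitor(adc_trace):
    for last, current in zip(adc_trace, adc_trace[1:]):
        # Rotation of the spring-loaded wheel appears as a change in the
        # potentiometer reading; a large enough jump between samples is slip.
        if abs(current - last) > SLIP_THRESHOLD:
            signal_controller()

# Steady grip, then the object slides (the wheel turns and the reading jumps):
monitor([512, 512, 513, 512, 540, 561, 561])
```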
Types of Illuminators:
There are several types of illuminators.
1. Diffused light: The light rays are spread by a diffuser. This is used to illuminate larger surfaces.
2. Fluorescent: Light films of different colours can be used to illuminate the surface in all directions.
3. Condenser: A beam of focused light can be produced; two focusing lenses are used.
Fig.17. Basic types of illuminators

4. Flood projector: Circular and spherical surfaces can be illuminated effectively with flood projectors. Convex lenses can be used for this purpose.
5. Collimator: A parallel beam of light can be produced using a collimator. It uses a pinhole and lens arrangement.

3.3.2 Illumination Techniques
The illumination technique is selected based on the features of the work surface and on the position and orientation of the object. There are several illumination techniques:
1. Front lighting
2. Offset illumination
3. Rear illumination
4. Rear offset illumination

1. Front lighting: This is the technique generally used for illuminating most object surfaces. One or more light sources are placed in front of the object surface, and the reflected light is received by a camera kept perpendicular to the surface.
2. Offset illumination: This technique is used where several light sources cannot be used; for example, splines, intricate features and complicated profiles can be illuminated using it.
3. Rear illumination: The rear or back side of an opaque object cannot be illuminated by the front or offset illumination techniques. Internal features of opaque objects can be imaged using rear illumination.
4. Rear offset illumination: Internal features of opaque objects can also be imaged using rear offset illumination. This technique is used where several light sources cannot be used.

Cameras and Frame Grabbers:
Cameras are the most important element in the vision sensor.

Vision Cameras:
The principal imaging devices used for robot vision are television cameras, consisting either of a tube (vidicon camera) or of solid-state cameras (CCD, CID or silicon bipolar sensor cameras) and the associated electronics. The principles of operation of the vidicon camera, a commonly used representative of the tube family of TV cameras, and of the charge-coupled device (CCD), one of the principal exponents of solid-state image sensors, are considered here. Solid-state imaging devices offer a number of advantages over tube cameras, such as lighter weight, smaller size, longer life and lower power consumption. However, the resolution of certain tubes is still beyond the capabilities of solid-state cameras.
Fig.19. Vidicon camera

Figure 19 shows the scheme of a vidicon camera. An image is formed on the glass faceplate, whose inner surface is coated with two layers of material. The first layer is a transparent signal electrode film deposited on the inner surface of the faceplate. The second is a thin layer of photosensitive material deposited over the conducting film, consisting of small areas of high density.

Line scan sensors yield only one line of an input image and are ideally suited for applications in which objects move past the camera, as on conveyor belts. A two-dimensional image is produced by the motion of the object in the direction perpendicular to the camera, as sketched below.
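How successive line scans stack into a two-dimensional image as the part moves past the sensor can be shown in a few lines; grab_line() and its pixel values are hypothetical stand-ins for the sensor interface:

```python
# Sketch: building a 2-D image from a line-scan sensor as the part moves past
# on a conveyor. Each scan contributes one row of the final image.

def grab_line(t):
    """Stand-in for reading one line of pixel intensities at time step t."""
    # A bright part passes the sensor between t = 3 and t = 6.
    return [255 if 3 <= t <= 6 else 20] * 8

def scan_object(num_lines):
    """Stack successive line scans into a 2-D image (a list of rows)."""
    return [grab_line(t) for t in range(num_lines)]

for row in scan_object(10):
    print(row)    # the bright band shows where the part crossed the sensor
```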
ROBOT VISION SYSTEMS:
Robot vision is the process of extracting, characterizing and interpreting information from images of a three-dimensional world. The basic function of a robotic vision system is to identify an object and determine its location (position and orientation). The vision hardware consists of a camera, a TV monitor, an image pre-processor and a conventional microcomputer. The signal is digitized and loaded into image memory.
The operation consists of three functions:
1. Sensing and digitizing
2. Image processing and analysis
3. Applications

3.4.1 Sensing and Digitizing:
- The process that yields a visual image of sufficient contrast, which is typically digitized and stored in the computer memory.
- Requires the following devices: i) cameras ii) digitizer iii) frame grabber iv) vision controller

3.4.2 Image Processing and Analysis:
For data reduction and interpretation of the image, the work on the digitized data may be subdivided into the following sub-functions:
1. Preprocessing 2. Segmentation 3. Description 4. Recognition 5. Interpretation

1. Preprocessing: noise reduction and enhancement of details.
2. Segmentation: partitions an image into objects of interest.
3. Description: computation of various features (size, shape, etc.) suitable for differentiating one object from another.
4. Recognition: identifies the object.
5. Interpretation: assigns meaning to an ensemble of recognized objects in the scene.

3.4.3 Applications of Robotic Vision Systems:
1. Inspection 2. Part identification 3. Location and orientation
(These applications are discussed in detail at the end of this unit.)
LOW LEVEL VISION:
- Concepts and techniques required to implement functions such as sensing and preprocessing.
- Imaging techniques: i) structured-lighting approach ii) stereo imaging
- Visual information is converted into electrical signals based on light intensity.
- The sub-functions in this process are: 1. Sensing 2. Digitizing 3. Image storage and computation 4. Processing and analysis

Sensing:
- The process that yields a visual image of sufficient contrast, which is typically digitized and stored in the computer memory. Requires cameras, a digitizer, a frame grabber and a vision controller.
- Imaging devices such as cameras are used to take images of the objects in the 3-D world.

Digitizing:
Sampling: The given analog signal is approximated by the sampled digital output; this is done by sampling the analog signal periodically at a proper sampling rate.
Quantization: Each sampled voltage is quantized into a finite number of defined amplitude levels, which correspond to the grey scale used in the system. The number of quantization levels is 2^n, where n is the number of bits of the A/D converter.
Encoding: The quantized amplitude levels are converted to a digital code, i.e. each amplitude level is represented by a binary digit sequence.

Image Storage and Computation:
- A combination of row and column counters is used in the frame grabber so that each position on the screen can be uniquely addressed.
- The given pixel array is stored; thresholding, windowing and calculations for histogram modification are carried out using the intensity of the light in the image.

Preprocessing and Analysis:
- The vision controller processes and analyzes the stored images.
- Approaches in preprocessing: i) noise reduction, in which the intensity of each pixel is replaced by the average of the intensities of a predefined neighborhood of that pixel (see the sketch after this section); ii) enhancement of details.
- Sources of noise: i) sampling ii) quantization iii) transmission or disturbances in the environment during image acquisition and digitizing.
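The quantization rule (2^n levels) and the neighborhood-averaging noise reduction described above fit in a short sketch; the sample voltage, bit depth and test image are invented:

```python
# Sketch: quantizing a sampled signal to 2**n gray levels, and reducing noise
# by replacing each pixel with the average of its 3x3 neighborhood.

def quantize(sample, v_max, n_bits):
    """Map an analog sample in [0, v_max] onto one of 2**n gray levels."""
    levels = 2 ** n_bits
    return min(int(sample / v_max * levels), levels - 1)

def neighborhood_average(img):
    """Replace each pixel by the mean of its 3x3 neighborhood (edges clamped)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nbrs = [img[a][b]
                    for a in range(max(0, i - 1), min(h, i + 2))
                    for b in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(nbrs) // len(nbrs)
    return out

print(quantize(3.3, 5.0, 8))        # 8-bit gray level for a 3.3 V sample: 168
noisy = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]   # one noise spike
print(neighborhood_average(noisy))  # the spike is pulled toward its neighbors
```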
Techniques:
i) Neighborhood averaging for noise reduction
ii) Histogram equalization for enhancement of details
iii) Histogram linearization for enhancement of details

Difficulties:
Noise reduction: Neighborhood averaging blurs edges and other sharp details. This is avoided by using median filters, in which the intensity of each pixel is replaced by the median, instead of the average, of the intensities of a predefined neighborhood of that pixel.
Enhancement of details: The system must be able to adapt automatically to changes in illumination; this plays a central role in determining the success of subsequent processing algorithms.

HIGHER LEVEL VISION:
Medium- and higher-level vision is subdivided into four principal areas:
1. Segmentation
2. Object description or feature extraction
3. Object recognition
4. Interpretation of visual information

1. Segmentation:
(i) Similarity approaches:
1. Thresholding: A binary conversion technique in which each pixel is converted into a binary value, either black or white. This is accomplished by utilizing the frequency histogram of the image and establishing which intensity (grey level) is to be the border between black and white.
2. Region growing: A procedure that groups pixels or sub-regions into larger regions based on attribute similarities (a sketch of thresholding and region growing follows this section).
- Pixel aggregation: start with a set of seed points and grow regions by appending to each seed point those neighboring pixels that have similar properties such as intensity, texture or colour.
- Region-growing analysis uses a set of descriptors based on intensity and spatial properties.
(ii) Discontinuity approaches:
1. Edge detection: The features of similar regions show demarcation at the edges, representing a changeover of the attributes. Edge detection is based on a "follow the edge" procedure: starting from a point outside the boundary, scan the pixels and "turn left and step" while within the region, otherwise "turn right and step".

2. Object recognition:
Template matching: A general statistical pattern-recognition technique. The object to be recognized is stored in the computer memory in advance, and properties like area, perimeter, etc. are calculated for the prototype pattern.
Structural technique: Deals with the relationships between features or the boundaries of an object, which is subdivided into primitives, i.e. elements with defined inter-relations. This is also known as syntactic pattern recognition.
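A minimal sketch of the two similarity-based segmentation steps, thresholding and region growing by pixel aggregation; the tiny image, threshold and similarity tolerance are invented:

```python
# Sketch: similarity-based segmentation. threshold() does the binary
# conversion; region_grow() does pixel aggregation from a seed point.

def threshold(img, t):
    """Binary conversion: pixels brighter than t become 1, the rest 0."""
    return [[1 if p > t else 0 for p in row] for row in img]

def region_grow(img, seed, tol):
    """Grow a region from a seed, appending 4-neighbors whose intensity is
    within tol of the seed pixel's intensity."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, frontier = set(), [seed]
    while frontier:
        i, j = frontier.pop()
        if (i, j) in region or not (0 <= i < h and 0 <= j < w):
            continue
        if abs(img[i][j] - base) <= tol:
            region.add((i, j))
            frontier += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return region

img = [[ 12,  15, 200, 210],
       [ 10,  14, 205, 198],
       [ 11, 190, 202, 200]]

print(threshold(img, 100))                   # bright object vs. dark background
print(sorted(region_grow(img, (0, 2), 20)))  # pixels of the bright region
```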
Applications of Robotic Vision Systems:
1. Inspection
2. Part identification
3. Location and orientation

1. Inspection:
- Checking for gross surface defects.
- Discovery of flaws in labeling (during final inspection of the product package).
- Verification of the presence of components in assembly.
- Measuring for dimensional accuracy and for the presence of holes and other features.
- 100% inspection is possible, unlike the sampling used in the manual method.
- Reduces idle time.

2. Part identification:
- Recognize and classify objects.
- Identification involves a recognition process in which the part itself, or its position and orientation, is determined (unlike inspection, where a part is simply accepted or rejected).
- Used in part sorting, palletizing, depalletizing, and picking parts that are randomly oriented from a conveyor or bin.

3. Location and orientation (visual servoing and navigation):
- Part positioning, retrieving parts moving along a conveyor, assembly, bin picking, and seam tracking in continuous arc welding.
- Automatic robot path planning and collision avoidance using visual data.