
RAT292 Sensors and Actuators for Robotics

Dr. Sreepriya S
Dept. of Robotics and Automation

Module III (7 Hours)
Vision-based sensors:
Elements of vision sensor, image acquisition, image processing, edge detection,
feature extraction, object recognition, pose estimation and visual servoing,
hierarchy of a vision system, CCD and CMOS cameras, monochrome, stereovision,
night vision cameras, still vs video cameras, Kinect sensor; block schematic
representations.
Steps in a Vision System
• Vision sensing has two steps, namely,
image acquisition and image processing.
1. Image Acquisition
• In image acquisition, an image is acquired either from a
vidicon camera, whose analog signal is then digitized, or
from a digital camera (CCD or CID).
• The image is stored in computer memory (also
called a frame buffer) in the format such as TIFF,
JPG, Bitmap, etc.
• The buffer may be a part of the frame grabber
card or in the computer itself.
• Image acquisition is primarily a hardware function;
however, software can be used to control light
intensity, focus, camera angle, synchronization,
field of view, read times, and other functions.
• Image acquisition has four principal elements:
• A light source, either controlled or ambient,
• A lens that focuses reflected light from the
object on to the image sensor,
• An image sensor that converts the light image
into a stored electrical image,
• The electronics to read the sensed image from
the image sensing element, and after
processing, transmit the image information to a
computer for further processing.
Image Acquisition Model (figure): the scene is sampled into a
digitized picture; with N-bit encoding, the number of
quantization levels is 2^N.
2. Image Processing
• To enhance, improve, or otherwise alter an image
and to prepare it for image analysis.
• Usually, during image processing, information is
not extracted from the image.
• The intention is to remove faults, trivial
information, or information that is not
needed, and to improve the image.
• The processed data is then examined to locate and
recognize objects within the image field.
• There are many steps in image processing
a. Image Data Reduction
• The objective is to reduce the volume of data.
• As a preliminary step in the data analysis, the
following two schemes have found common usage
for data reduction:
• Digital conversion
• Windowing
• The function of both schemes is to eliminate the
bottleneck that can occur from the large volume
of data in image processing.
• Either scheme can significantly reduce
the magnitude of the image-processing problem.
Digital conversion
• Digital conversion reduces the number of gray
levels used by the vision system.
• For example, an 8-bit register used for each
pixel would have 2^8 = 256 gray levels.
• Depending on the requirements of the
application, digital conversion can be used to
reduce the number of gray levels by using
fewer bits to represent the pixel light intensity.
• Four bits would reduce the number of gray
levels to 2^4 = 16.
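As a minimal sketch of this bit-reduction scheme (the pixel values below are made up for illustration, and the 4-bit target is the example from the text), dropping the four low-order bits of each 8-bit pixel leaves 16 gray levels:

```python
import numpy as np

# Hypothetical 8-bit grayscale image (values 0..255).
img = np.array([[0, 37, 200],
                [90, 128, 255]], dtype=np.uint8)

# Keep only the four high-order bits: 256 gray levels -> 16.
reduced = img >> 4   # each pixel now holds a value in 0..15

print(reduced)
```

Storing 4 bits instead of 8 halves the data volume while preserving the coarse intensity structure.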
Windowing
• It involves using only a portion of the
total image stored in the frame buffer
for image processing and analysis.
• E.g., to inspect a circuit
board, a rectangular window is selected
to surround the component of interest,
and only pixels within that window are
analyzed.
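The windowing idea can be sketched as a simple array slice; the frame size and window coordinates below are arbitrary assumptions:

```python
import numpy as np

# Hypothetical frame buffer: a 6x6 grayscale image.
frame = np.arange(36, dtype=np.uint8).reshape(6, 6)

# Rectangular window around the component of interest:
# rows 1..3, columns 2..4.
window = frame[1:4, 2:5]

# Only these 3x3 = 9 pixels are passed on for analysis,
# instead of all 36 in the frame buffer.
print(window.shape)
```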
Effect of windowing
b. Histogram Analysis
• A histogram is a representation of the total number of
pixels of an image at each gray level.
• Histogram information is used in a number of
different processes, including thresholding.
• For example, histogram information can help in
determining a cut-off point when an image is to be
transformed into binary values.
Histogram
• It is a graph showing the
number of pixels in an
image at each different
intensity value found in that
image.
• For an 8-bit grayscale image
there are 256 different
possible intensities, and so
the histogram will
graphically display 256
numbers showing the
distribution of pixels
amongst those grayscale
values.
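A gray-level histogram can be computed by counting pixels per intensity; the tiny image below is an assumption for illustration:

```python
import numpy as np

# Hypothetical 8-bit image.
img = np.array([[0, 0, 10],
                [10, 10, 255]], dtype=np.uint8)

# One count per possible intensity: 256 bins for an 8-bit image.
hist = np.bincount(img.ravel(), minlength=256)

print(hist[10])   # number of pixels with intensity 10
```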
c. Thresholding
• It is the process of dividing an image into
different portions or levels by picking a certain
grayness level as a threshold.
• Each pixel value is compared with the threshold and
assigned to one of the two portions or levels,
depending on whether the pixel's grayness level is
below the threshold ('off' or 0, not belonging) or
above it ('on' or 1, belonging).
• Thresholding is the simplest method of
segmenting images.
• From a grayscale image, thresholding can be used to
create binary images.
• The simplest thresholding methods replace each pixel
in an image with a black pixel if the image intensity
I_ij is less than some fixed constant T (i.e., I_ij < T), or with a
white pixel if the intensity is greater than that
constant.
Example of a threshold effect
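The rule I_ij < T can be applied elementwise; the image and the threshold T = 128 are assumed values for illustration:

```python
import numpy as np

img = np.array([[12, 200],
                [90, 140]], dtype=np.uint8)

T = 128  # threshold, e.g. chosen from the histogram

# Below T -> 0 ('off', not belonging); otherwise -> 1 ('on', belonging).
binary = np.where(img < T, 0, 1)

print(binary)
```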
d. Masking
• A mask may be used for many different purposes:
• filtering operations
• noise reduction, and others.
• It is possible to create masks that behave like a
low pass filter such that higher frequencies of an
image are attenuated while the lower frequencies
are not changed very much. Thereby, the noise is
reduced.
Masking of an image
• As an example, consider a portion of
an imaginary image which has all its
pixels at a gray value of 20 except one
at a gray level of 100.
• The pixel with value 100 may be considered noise.
• Applying the 3 × 3 mask over the corner of
the image yields the following value:

V = (m1·p1 + m2·p2 + … + m9·p9)/S = (8 × 20 + 100)/9 ≈ 29

• where S = m1 + m2 + … + m9 = 9, each mi = 1, and
pi are the gray levels under the mask.
• The large difference between the noisy
pixel and the surrounding pixels, i.e., 100
vs. 20, becomes much smaller, namely, 29
vs. 20, thus reducing the noise.
• With this characteristic, the mask acts as a
low-pass filter.
• Here reduction of noise has been achieved
using what is referred as neighbourhood
averaging.
• It causes the reduction of the sharpness
of the image as well.
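The neighbourhood-averaging step can be reproduced numerically; the 3 × 3 patch below matches the example in the text (eight pixels at 20, one noisy pixel at 100):

```python
import numpy as np

# 3x3 patch: all pixels at gray level 20 except a noisy one at 100.
patch = np.full((3, 3), 20.0)
patch[1, 1] = 100.0

mask = np.ones((3, 3))   # m1..m9 = 1, so S = 9
S = mask.sum()

# Neighbourhood average at the centre pixel.
value = round(float((mask * patch).sum() / S))

print(value)   # 29: the 100-vs-20 difference shrinks to 29 vs 20
```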
Edge Detection
• Edge detection is a general name for
a class of computer programs and
techniques that operate on an image
and result in a line drawing of the
image.
• The lines represent changes in values
such as cross section of planes,
intersections of planes, textures,
lines, etc.
• Sudden changes or discontinuities in an
image, i.e., significant transitions in
intensity, are called edges.
• Edge detection includes a variety
of mathematical methods that aim at
identifying points in a digital image at
which the image brightness changes
sharply or, more formally, has
discontinuities.
• The points at which image brightness
changes sharply are typically organized into
a set of curved line segments termed edges.
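A minimal sketch of gradient-based edge detection, using the standard Sobel kernel for horizontal intensity changes (the tiny test image is an assumption):

```python
import numpy as np

# Image with a vertical edge between columns 1 and 2.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)

# Sobel kernel responding to horizontal changes (vertical edges).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def gradient(image, kernel):
    """Apply the kernel to every interior 3x3 neighbourhood."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (image[i:i+3, j:j+3] * kernel).sum()
    return out

gx = gradient(img, sobel_x)
print(gx)   # non-zero responses mark the edge
```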
Edge detection
• In many edge-detection techniques, the
resulting edges are not continuous.
• In many applications, continuous edges
are preferred; these can be obtained
using the Hough transform.
• It is a technique used to determine the
geometric relationship between
different pixels on a line, including the
slope of the line.
• Hough space is an alternate way to
represent a line, and is the basis on
which lines are detected.
Hough transform
• Consider a straight line in the xy-plane, which is
expressed as
y = mx + c, where m is the slope and c is the
intercept.
• The line can be transformed into the Hough
plane of m – c: the line maps to a single point (m, c).
• Conversely, a point (x, y) in the xy-plane
transforms into a line in the Hough plane,
namely c = −xm + y.
• If a group of points is collinear, their Hough
transforms all intersect at a single point, so it can be
determined whether a cluster of pixels lies on a
straight line or not.
• The orientation of an object in a plane can be
determined by calculating the orientation of a
line detected on it.
Another notation
• When representing lines in
the form y = mx + c and
the Hough space with the
slope and intercept, the
algorithm cannot
detect vertical lines, because
the slope is
undefined (infinite) for
vertical lines.
• To avoid this issue, a straight line is instead represented by
its normal line, which passes through the origin
perpendicular to that straight line.
• The form of the normal line is ρ = x cos(θ) + y sin(θ), where
ρ is the length of the normal line and θ is the angle
between the normal line and the x-axis.
Note
• Basically, lines are drawn through each point equal to
255 (white pixels in a binary image) in all possible directions (180
degrees), and the corresponding ρ (radius) and θ (angle) are noted
down.
• This is done for each pixel with value 255 in the image.
• Now, if multiple points in the image happen to
lie on a line, they will generate the same values of ρ and θ.
• The count for each repeated (ρ, θ) pair is incremented.
• After going through all the points in the image,
a few combinations of ρ and θ will
have a count greater than 1.
• All the points that share the same ρ and θ can
be joined by a straight line.
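The voting procedure above can be sketched as follows; the three collinear pixels and the 1-degree, 1-pixel bin sizes are assumptions for illustration:

```python
import numpy as np

# Three white (value-255) pixels lying on the line y = x.
points = [(0, 0), (1, 1), (2, 2)]

# For each point, vote for every direction theta in 0..179 degrees
# using the normal form rho = x cos(theta) + y sin(theta).
votes = {}
for x, y in points:
    for deg in range(180):
        t = np.deg2rad(deg)
        rho = round(x * np.cos(t) + y * np.sin(t))
        votes[(deg, rho)] = votes.get((deg, rho), 0) + 1

# The most-voted (theta, rho) cell describes the detected line.
best_cell, count = max(votes.items(), key=lambda kv: kv[1])
print(count)   # 3: all three points agree on one line
```

For y = x, the winning cell is θ = 135°, ρ = 0: the normal through the origin perpendicular to that line.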
Segmentation
• Segmentation is a generic name for a number of
different techniques that divide the image into
segments .
• The purpose of segmentation is to separate the
information contained in the image into smaller
entities that can be used for other purposes.
• Segmentation includes
• edge detection,
• region growing and splitting, and others
A. Region Growing
• Region growing works based on similar
attributes, such as gray-level ranges or other
similarities, and then tries to relate the regions by
their average similarities.

• A simple approach to image segmentation is to
start from some pixels (seeds) representing
distinct image regions and to grow them until
they cover the entire image.
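A minimal seed-growing sketch under assumed values: 4-connected neighbours are accepted when their gray level is within a tolerance of the seed's:

```python
import numpy as np
from collections import deque

# Two regions: gray level 50 on the left, 200 on the right.
img = np.array([[50, 50, 200],
                [50, 50, 200]])

def grow(image, seed, tol=10):
    """Grow a region from a seed pixel by accepting 4-neighbours
    whose gray level is within tol of the seed's gray level."""
    h, w = image.shape
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(int(image[nr, nc]) - int(image[seed])) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

print(len(grow(img, (0, 0))))   # 4 pixels in the left region
```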
B. Region Splitting
• The opposite approach to region growing is region
splitting.
• It is a top-down approach that starts with the
assumption that the entire image is homogeneous.
• If this is not true, the image is split into four
sub-images.
• This splitting procedure is repeated recursively
until the image is split into homogeneous regions.
• Region splitting is carried out based on
thresholding, in which an image is split into closed
areas of neighbouring pixels by comparing them
with a threshold value or range.
Morphology Operations
• These are a family of operations which are applied on the shape of subjects in
an image.
• They include many different operations, both for binary and gray images, such
as
• thickening,
• dilation,
• erosion,
• skeletonization,
• opening,
• closing,
• filling.
• These operations are performed on an image in order to aid in its analysis, as
well as to reduce the ‘extra’ information that may be present in the image
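Erosion and dilation, two of the operations listed, can be sketched with a 3 × 3 structuring element (the 5 × 5 test image is an assumption):

```python
import numpy as np

# Binary image: a 3x3 square of ones inside a 5x5 frame.
img = np.zeros((5, 5), dtype=int)
img[1:4, 1:4] = 1

def erode(image):
    """A pixel survives only if its whole 3x3 neighbourhood is 1."""
    padded = np.pad(image, 1)
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].min()
    return out

def dilate(image):
    """A pixel is set if any pixel in its 3x3 neighbourhood is 1."""
    padded = np.pad(image, 1)
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].max()
    return out

print(erode(img).sum())    # the square shrinks to its centre pixel
print(dilate(img).sum())   # the square grows to fill the frame
```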
3. Image Analysis
• Image analysis is a collection of operations and techniques that
are used to extract information from images.
• Among these are:
• feature extraction;
• object recognition;
• analysis of position, size, and orientation;
• extraction of depth information, etc.
• Some techniques can be used for multiple purposes.
a. Feature Extraction
• Objects in an image may be recognized by their
features that uniquely characterize them.
• These include, but are not limited to,
• gray-level histograms,
• morphological features such as perimeter,
area, diameter, number of holes, etc.,
• eccentricity,
• chord length,
• moments.
As an example
• The perimeter of an object may be found by first applying
an edge-detection routine and then counting the
number of pixels on the perimeter.
• The area can be calculated by region-growing
techniques, whereas the diameter of a noncircular object
is the maximum distance between any
two points on any line that crosses the identified area
of the object.
• The thinness of an object can be calculated
using either of two ratios:
(perimeter)²/area or diameter/area.
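Area and perimeter, two of the features above, can be measured by pixel counting; the square object below is an assumed example:

```python
import numpy as np

# Binary image of an object: a 4x4 filled square.
obj = np.zeros((6, 6), dtype=int)
obj[1:5, 1:5] = 1

area = int(obj.sum())   # number of object pixels

# Perimeter pixels: object pixels with at least one 4-neighbour off
# the object (zero-padded at the image border).
padded = np.pad(obj, 1)
perimeter = 0
for i in range(6):
    for j in range(6):
        if obj[i, j] and min(padded[i, j + 1], padded[i + 2, j + 1],
                             padded[i + 1, j], padded[i + 1, j + 2]) == 0:
            perimeter += 1

print(area, perimeter)   # 16 12
```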
b. Object Recognition
• The next step in image analysis is to
identify the object that the image
represents, based on the extracted
features.
• The recognition algorithm should be
powerful enough to uniquely identify
the object.
• Typical techniques used in
industry are template matching and
structural techniques.
Template matching
• The features of the object in the
image, e.g., its area, diameter, etc.,
are compared to the corresponding
stored values.
• These values constitute the stored
template.
• When a match is found, allowing for
certain statistical variations in the
comparison process, the object has
been properly classified.
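Template matching by feature comparison can be sketched as below; the class names, feature vectors, and 10% tolerance are all hypothetical:

```python
import numpy as np

# Stored templates: (area, diameter) feature vectors per known class.
templates = {
    "washer": np.array([120.0, 14.0]),
    "bolt":   np.array([300.0, 40.0]),
}

def classify(features, templates, tol=0.10):
    """Match measured features against each stored template,
    allowing a fractional tolerance for statistical variation."""
    for name, ref in templates.items():
        if np.all(np.abs(features - ref) <= tol * ref):
            return name
    return "unknown"

measured = np.array([118.0, 14.5])   # features extracted from the image
print(classify(measured, templates))
```

A match within tolerance classifies the object; features outside every template's tolerance fall through to "unknown".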
Structural techniques of pattern
recognition
• These rely on the relationships between
features or edges of an object.
• For example, if the image of an
object can be subdivided into four
straight lines connected at their end
points, and the connected lines are
at right angles, then the object is a
rectangle.
• This kind of technique is known as
syntactic pattern recognition.
Thank You