Digital Image Processing Using LabView
Rubén Posada-Gómez, Oscar Osvaldo Sandoval-González, Albino Martínez Sibaja, Otniel Portillo-Rodríguez and Giner Alor-Hernández
1. Introduction
Digital image processing is a topic of great relevance for practically any project, from basic arrays of photodetectors to complex robotic systems using artificial vision. It is an interesting topic that gives multimodal systems the capacity to see and understand their environment in order to interact in a natural and more efficient way.
The development of new equipment for high-speed, higher-resolution image acquisition requires a significant effort to develop techniques that process images more efficiently. In addition, medical applications use new image modalities and need algorithms for the interpretation of these images, as well as for the registration and fusion of the different modalities, so image processing is a productive area for the development of multidisciplinary applications.
The aim of this chapter is to present different digital image processing algorithms using LabVIEW and the IMAQ Vision toolbox. The IMAQ Vision toolbox provides a complete set of digital image processing and acquisition functions that improve the efficiency of projects and reduce the programming effort, obtaining better results in less time. The IMAQ Vision toolbox of LabVIEW is therefore an interesting tool to analyze in detail, and throughout this chapter different theories about digital image processing are presented, together with applications in the fields of image acquisition and image transformations.
This chapter first covers image acquisition and some of the most common operations that can be applied locally or globally; the statistical information generated by the image histogram is discussed next. Finally, tools for segmenting and filtering the image are described, with special emphasis on pattern recognition and template matching algorithms.
Radiance is the light that travels through space, usually generated by different light sources; the light that arrives at the object corresponds to the irradiance. According to the law of energy conservation, part of the radiant energy that arrives at the object is absorbed by it, another part is refracted and another part is reflected in the form of radiosity:
    φ(λ) = R(λ) + T(λ) + A(λ)        (1)

where φ(λ) represents the incident light on the object, A(λ) the energy absorbed by the material of the object, T(λ) the refracted flux and R(λ) the reflected energy, all of which define the material properties (Fairchild, 2005) at a given wavelength λ. The radiosity represents the light that leaves a diffuse surface (Forsyth & Ponce, 2002). In this way, when an image is acquired, its characteristics are affected by the type of light sources, their proximity and the diffusion of the scene, among other factors.
[Figure: image acquisition chain — RGB filter, R/G/B matrices, analog voltage, ADC, digital signal]
A colour mask (RGB filter) is generally used for the acquisition of colour images. This filter decomposes the light into three bands: red, green and blue. Three matrices are generated, and each one stores the light intensity of one RGB channel (Fig. 2).
The next example (presented in Fig. 3) shows how to acquire video from a webcam using the NI Vision Acquisition Express. This block is located in Vision/Vision Express and is the easiest way to configure all the characteristics of the camera. Inside this block there are four sections: the first one corresponds to the "select acquisition source" option, which shows all the cameras connected to the computer. The next option, "select acquisition type", determines the mode in which the image is displayed; there are four modes: single acquisition with processing, continuous acquisition with inline processing, finite acquisition with inline processing, and finite acquisition with post processing. The third section corresponds to the "configure acquisition settings", which sets the size, brightness, contrast, gamma, saturation, etc. of the image. Finally, in the last option it is possible to select controls and indicators to adjust different parameters of the previous section during execution. In the example presented in Fig. 3, continuous acquisition with inline processing was selected; this option displays the acquired image in continuous mode until the user presses the stop button.
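For readers working outside LabVIEW, an analogous continuous-acquisition loop can be sketched in Python with OpenCV. This is only an illustrative equivalent of the Vision Acquisition Express behaviour, not part of the chapter's LabVIEW implementation; the camera index and the 'q' stop key are arbitrary choices.

import cv2

cap = cv2.VideoCapture(0)                   # first camera found on the computer
while True:
    ok, frame = cap.read()                  # grab one frame
    if not ok:
        break
    cv2.imshow("acquisition", frame)        # continuous display
    if cv2.waitKey(1) & 0xFF == ord('q'):   # stop condition, analogous to the stop button
        break
cap.release()
cv2.destroyAllWindows()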
A digital image with m rows and n columns is represented as a matrix of pixel intensities:

    I = | x11  x12  ...  x1n |
        | x21  x22  ...  x2n |
        | ...  ...       ... |        (2)
        | xm1  xm2  ...  xmn |
Since most devices acquire images with a depth of 8 bits, the typical range of gray levels for an image is from 0 to 255, so each matrix element is represented by xij ∈ [0, 255]. At this point it is convenient to note that even if the images are acquired in RGB format, they are frequently transformed into a gray-scale matrix; to achieve the transformation from RGB to gray levels, the Grassmann law (Wyszecki & Stiles, 1982) is employed.
The example presented in Fig. 4 shows how to acquire a digital image in RGB and grayscale format using the IMAQ toolbox. In this case there are two important blocks. The first one is IMAQ Create, located in Vision and Motion/Vision Utilities/Image Management; this block creates a new image with a specified image type (RGB, Grayscale, HSL, etc.). The second one is IMAQ Read Image, located in Vision and Motion/Vision Utilities/Files; the function of this block is to open an image file, specified previously in the file path of the block, and put all the information of the opened image into the new image created by IMAQ Create. In other words, in the example presented in Fig. 4 (A) the file picture4.png is opened by IMAQ Read Image and its information is saved in a new image called imageColor, which corresponds to an RGB (U32) image type. It is very simple to modify the image type: in Fig. 4 (B) the image type is changed to Grayscale (U8) and the image is placed in imageGray.
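As a point of comparison, the RGB-to-gray transformation can be sketched in Python/NumPy as follows. The 0.299/0.587/0.114 luminance weights are a common choice for this kind of weighting and are an assumption here, not necessarily the exact coefficients used by IMAQ.

import numpy as np

def rgb_to_gray(rgb):
    # rgb: H x W x 3 array of uint8 values; returns an 8-bit grayscale image.
    weights = np.array([0.299, 0.587, 0.114])      # assumed luminance weights
    gray = rgb[..., :3].astype(np.float64) @ weights
    return np.clip(gray, 0, 255).astype(np.uint8)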
Fig. 5. Pixel neighbourhoods: 4-neighbourhood, D-neighbourhood and 8-neighbourhood.
Another important characteristic in the image definition is the pixel neighbourhood, which can be classified into three groups (Fig. 5): if the neighbourhood is limited to the four adjacent pixels it is called the 4-neighbourhood, the one formed by the diagonal pixels is the D-neighbourhood, and the 8 surrounding pixels form the 8-neighbourhood; the last one includes both the 4- and the D-neighbourhood of the pixel.
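As a minimal illustration, the three neighbourhood types can be expressed as coordinate offsets relative to a pixel (i, j); the helper function below is hypothetical, introduced only for this sketch.

# Row/column offsets of the three neighbourhood types.
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # 4-neighbourhood
ND = [(-1, -1), (-1, 1), (1, -1), (1, 1)]    # D-neighbourhood (diagonals)
N8 = N4 + ND                                 # 8-neighbourhood

def neighbours(i, j, offsets, shape):
    # Return the neighbour coordinates of pixel (i, j) that fall inside the image.
    h, w = shape
    return [(i + di, j + dj) for di, dj in offsets
            if 0 <= i + di < h and 0 <= j + dj < w]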
4. Image transformations
Image processing can be seen as the transformation of one image into another: a modified image is obtained from an input image, either to improve human perception or to extract information in computer vision. Two kinds of operations can be identified:
• Punctual operations, where each pixel of the output image depends only on the value of a single pixel in the input image.
• Grouped operations, in which each pixel of the output image depends on multiple pixels of the input image; these operations can be local if they depend on the pixels that constitute the neighbourhood, or global if they depend on the whole image or are applied globally to the intensity values of the original image.
    I' = f(I)        (4)

The most commonly used functions f are the identity, the negative, the threshold (binarization) and combinations of these. For all these operations, each pixel q of the new image I' depends on the value of the pixel p at the same position in the original image I.
    q = p        (5)

    q = 255 - p        (6)
Fig. 6 shows how to invert an image using the Vision and Motion toolbox of LabVIEW. As observed in Fig. 6 b), the block that carries out the inversion of the image is called IMAQ Inverse, located in Vision and Motion/Vision Processing/Processing. This block only receives the source image and automatically performs the inversion.
Fig. 6. a) Gray-scale image and b) inverse gray-scale image.
    q = { 0,    if p ≤ t
        { 255,  if p > t        (7)
Fig. 7 b) shows the result of applying the threshold function to the image in Fig. 7 a) with a t value of 150.
Fig. 7. a) Original image and b) thresholded image (150-255).
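The punctual operations of Eqs. (6) and (7) translate directly into array operations. The following NumPy sketch assumes an 8-bit gray-scale image stored as a uint8 array:

import numpy as np

def negative(img):
    # Negative of an 8-bit grayscale image, q = 255 - p (Eq. 6).
    return 255 - img

def threshold(img, t):
    # Binarization, q = 0 if p <= t else 255 (Eq. 7).
    return np.where(img > t, 255, 0).astype(np.uint8)

# Example: binarize with t = 150, as in Fig. 7.
# binary = threshold(gray_image, 150)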
A variation of the threshold function is the gray-level reduction function; in this case several threshold values are used, and the number of gray levels in the output image is reduced as shown in (8):
    q = { 0,   if p ≤ t1
        { q1,  if t1 < p ≤ t2
        { q2,  if t2 < p ≤ t3
        { ...
        { qn,  if tn-1 < p ≤ 255        (8)
A grouped operation is commonly implemented by convolving the image with a kernel (mask) of coefficients, for example:

    | -1  0  1 |
    | -2  1  2 |        (9)
    | -3  0  3 |

Two widely used kernels are the low-pass (averaging) filter and the high-pass (sharpening) filter:

    Low-pass  ≡  (1/9) | 1  1  1 |
                       | 1  1  1 |        (11)
                       | 1  1  1 |

    High-pass ≡  | -1  -1  -1 |
                 | -1   9  -1 |        (12)
                 | -1  -1  -1 |
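To see the effect of these kernels, a short Python/SciPy sketch applying them by convolution is given below. The clipping of the result back to the 0-255 range is our choice; the IMAQ convolution block offers its own scaling options.

import numpy as np
from scipy import ndimage

# Low-pass (averaging) kernel of Eq. (11) and high-pass (sharpening) kernel of Eq. (12).
low_pass = np.full((3, 3), 1.0 / 9.0)
high_pass = np.array([[-1, -1, -1],
                      [-1,  9, -1],
                      [-1, -1, -1]], dtype=float)

def apply_kernel(img, kernel):
    # Convolve a grayscale image with a 3x3 kernel and clip back to 8 bits.
    out = ndimage.convolve(img.astype(float), kernel, mode='nearest')
    return np.clip(out, 0, 255).astype(np.uint8)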
5. Image histogram
The histogram is a graph that contains the number of pixels of an image at each intensity value. For an 8-bit gray-scale image, the histogram displays 256 values showing the distribution of the pixels versus the gray-scale levels, as shown in Fig. 8. In the block diagram, the output of the IMAQ Read Image block is connected to the input of the IMAQ Histograph block, and a waveform graph is then connected in order to display the obtained results.
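The same information produced by IMAQ Histograph can be computed with NumPy as follows (a minimal equivalent, assuming an 8-bit gray-scale image):

import numpy as np

def gray_histogram(img):
    # Counts of pixels per intensity level 0..255 for an 8-bit grayscale image.
    counts, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    return counts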
6. Image segmentation
Prewitt edge detector: The Prewitt operator uses a 3x3 mask and deals better with the noise effect. The masks are obtained by considering the arrangement of pixels around the pixel (x, y), as given in (15). Fig. 10 shows an example of the Prewitt edge detector.
    Gx = | -1  0  1 |        Gy = |  1   1   1 |
         | -1  0  1 |             |  0   0   0 |        (15)
         | -1  0  1 |             | -1  -1  -1 |
Fig. 10. Prewitt Edge Detector. (a) Grayscale image, (b) Prewitt Transformation.
Sobel edge detector: The partial derivatives of the Sobel operator are calculated as

    Gx = (a2 + 2a3 + a4) - (a0 + 2a7 + a6)        (16)

    Gy = (a6 + 2a5 + a4) - (a0 + 2a1 + a2)        (17)

which correspond to the convolution masks

    Gx = | -1  -2  -1 |        Gy = | -1  0  1 |
         |  0   0   0 |             | -2  0  2 |        (18)
         |  1   2   1 |             | -1  0  1 |
Fig. 11 and Fig. 12 show the image transformation using the Sobel edge detector.
Fig. 11. Sobel Edge Detector. (a) Grayscale image, (b) Sobel Transformation.
Fig. 12. Sobel Edge Detector. (a) Grayscale image, (b) Sobel Transformation.
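The Prewitt detector uses the masks of Eq. (15) in exactly the same way; only the kernels change. The following Python/SciPy sketch implements the Sobel case, combining the two directional responses into the usual gradient magnitude sqrt(Gx² + Gy²), which may differ slightly from the exact IMAQ implementation.

import numpy as np
from scipy import ndimage

SOBEL_GX = np.array([[-1, -2, -1],
                     [ 0,  0,  0],
                     [ 1,  2,  1]], dtype=float)
SOBEL_GY = np.array([[-1, 0, 1],
                     [-2, 0, 2],
                     [-1, 0, 1]], dtype=float)

def sobel_edges(img):
    # Gradient magnitude of a grayscale image using the Sobel masks of Eq. (18).
    gx = ndimage.convolve(img.astype(float), SOBEL_GX, mode='nearest')
    gy = ndimage.convolve(img.astype(float), SOBEL_GY, mode='nearest')
    mag = np.hypot(gx, gy)
    return np.clip(mag, 0, 255).astype(np.uint8)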
7. Smoothing filters
Smoothing filters are applied by moving a filter mask from point to point over the image; this process is known as convolution. Filtering an M×N image with an averaging filter of size m×n is given by:

               Σ(s=-a..a) Σ(t=-b..b) d(s,t) f(x+s, y+t)
    h(x, y) = -------------------------------------------        (19)
                   Σ(s=-a..a) Σ(t=-b..b) d(s,t)
Fig. 13 shows the block diagram of a smoothing filter. Although the IMAQ Vision libraries contain convolution blocks, this function is presented in order to explain the operation of the smoothing filter in detail. The idea is simple: there is a kernel matrix of dimension m×n whose elements are multiplied by the elements of an m×n sub-matrix of the M×N image. Once the multiplication is done, all the products are summed and divided by the sum of the kernel elements (the number of elements, for an averaging kernel of ones). The result is stored in a new matrix, and the kernel window is moved to the next position in the image to carry out a new operation.
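To make the kernel-window mechanism explicit, Eq. (19) can be implemented directly (and without optimization) as in the following sketch; the edge padding is an arbitrary choice introduced here.

import numpy as np

def smooth(img, kernel):
    # Averaging filter of Eq. (19): weighted mean of each m x n window of the image.
    m, n = kernel.shape
    a, b = m // 2, n // 2
    padded = np.pad(img.astype(float), ((a, a), (b, b)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            window = padded[x:x + m, y:y + n]          # m x n window centred on (x, y)
            out[x, y] = np.sum(kernel * window) / np.sum(kernel)
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: 3 x 3 local average, as in Fig. 14.
# smoothed = smooth(gray_image, np.ones((3, 3)))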
Fig. 14. Local average filter: a) Vision Express, b) original image, c) block diagram, d) smoothed image.
Fig. 15. Gaussian filter: a) Vision Express, b) original image, c) block diagram, d) smoothed image.
8. Pattern recognition
Pattern recognition is a common technique that can be applied for the detection and recognition of objects. The idea is quite simple and consists of finding a region of an image that corresponds to a template image. The algorithm not only searches for an exact occurrence of the template but also admits a certain degree of variation with respect to the pattern. The simplest method is template matching, which is expressed in the following way: given an image A (of size W×H) and a pattern P (of size w×h), the result is an image M (of size (W-w+1)×(H-h+1)), where each pixel M(x,y) indicates how well the rectangle [x,y]-[x+w-1,y+h-1] of A matches the pattern. The image M is defined by the difference function between the segments of the image:
    M(x, y) = Σ(a=0..w) Σ(b=0..h) ( P(a, b) - A(x+a, y+b) )²        (20)
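A direct implementation of Eq. (20) is sketched below. Note that M is a dissimilarity map, so the best match is the position where M is smallest; IMAQ's pattern matching uses normalized, pyramid-based searches, so this is only an illustration of the basic idea.

import numpy as np

def match_template(A, P):
    # A: search image (H x W), P: template (h x w).
    # Returns M of size (H - h + 1) x (W - w + 1); the minimum of M is the best match.
    A = A.astype(float)
    P = P.astype(float)
    H, W = A.shape
    h, w = P.shape
    M = np.empty((H - h + 1, W - w + 1))
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            diff = A[y:y + h, x:x + w] - P
            M[y, x] = np.sum(diff * diff)
    return M

# best_y, best_x = np.unravel_index(np.argmin(M), M.shape)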
Different projects using the template matching technique to solve different problems can be found in the literature. Goshtasby presented an invariant template matching technique using normalized invariant moments, based on the idea of two-stage template matching (Goshtasby, 2009). Denning studied a model for a real-time intrusion-detection expert system capable of detecting break-ins, penetrations, and other forms of computer abuse (Denning, 2006). Seong-Min presented a paper about the development of a vision-based automatic inspection system for welded nuts on the support hinge used to support the trunk lid of a car (Seong-Min, Young-Choon, & Lee, 2006). Izák introduced an application of template matching to the analysis of biomedical images (Izák & Hrianka).
Fig. 17. a) Selection of pattern matching, b) creation of a new template.
In the last step of the configuration the Vision Assistant carries out the pattern matching algorithm and identifies the desired object in the whole image (Fig. 18 a). Finally, when the program is executed in real time, the desired object is identified in each frame acquired by the camera (Fig. 18 b).
It is important to remark that, in order to obtain the output parameters of the recognized matches, the Matches checkbox inside the control parameters must be selected. The Matches output is a cluster that contains different information; in order to extract specific parameters from the cluster, it is necessary to place an Index Array block and the Unbundle block, located in Programming/Cluster.
Fig. 18. a) Recognition of the template in the whole image, b) real-time recognition.
The left and right wheels generate specific trajectories around the instantaneous centre of curvature (ICC) with the same angular rate ω = dθ/dt.
    ω · R = Vc        (21)
    ω · (R - D/2) = v1        (22)

    ω · (R + D/2) = v2        (23)
where v1 and v2 are the wheel velocities, D is the distance between the two wheels, and R is the distance from the ICC to the midpoint between the two wheels. Solving (22) and (23) for R and ω gives:
    R = (v2 + v1)/(v2 - v1) · D/2        (24)

    ω = (v2 - v1)/D        (25)
The velocity at point C, the midpoint between the two wheels, can be calculated as the average of the two wheel velocities:
    Vc = (v1 + v2)/2        (26)
According to these equations, if v1 = v2 the robot moves in a straight line. If the motor velocities differ, the mobile robot follows a curved trajectory.
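These relations can be verified with a small Python helper (a sketch; the function and variable names are ours):

import math

def differential_drive(v1, v2, d):
    # Kinematics of Eqs. (24)-(26): v1, v2 are the wheel velocities, d the wheel separation.
    # Returns (R, omega, Vc); R is math.inf when v1 == v2 (straight-line motion).
    omega = (v2 - v1) / d                   # Eq. (25)
    vc = (v1 + v2) / 2.0                    # Eq. (26)
    if v1 == v2:
        return math.inf, omega, vc          # straight line, infinite turning radius
    r = (v2 + v1) / (v2 - v1) * d / 2.0     # Eq. (24)
    return r, omega, vc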
Brief description of the system: A picture of the mobile robot is shown in Fig. 19. The mobile robot is divided into five main groups: the mechanical platform, the grasping system, the vision system, the digital control system and the power electronics system.
• On the mechanical platform two DC motors were mounted; each motor has a high torque and a speed of 80 RPM, giving the mobile robot a high torque and an acceptable speed.
• The grasping system is a mechatronic device with 2 DOF of linear displacement; it uses two servos with a torque of 3.6 kg-cm and a speed of 100 RPM.
• The Microsoft webcam VX-1000 was used for the vision system. This webcam offers high-quality images and good stability.
• The digital control system consists of a Microchip PIC18F4431 microcontroller. The velocity of the motors is controlled by pulse-width modulation (PWM); this microcontroller model was chosen because it provides 8 hardware PWM outputs and 2 software ones (CCP). Moreover, all the sensors of the mobile robot (encoders, pots, force sensors, etc.) are connected to its analog and digital inputs. The interface with the computer is based on the RS232 protocol.
• The power electronics system is built around two L298 H-bridges. Each H-bridge regulates its output voltage according to the PWM input; the motors are connected to this device and their velocity is controlled by the PWM signals of the microcontroller.
The core of the system lies in the fusion of the computer-microcontroller interface and the vision system. On one hand, the computer-microcontroller interface consists of the serial communication between these two devices, so it is important to review how serial communication can be carried out using LabVIEW. On the other hand, the vision system consists of the pattern matching algorithm, which finds the template of a desired object in the acquired image.
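As an illustration only, the computer-to-microcontroller link could be sketched in Python with pyserial as below; the port name, baud rate and frame format are assumptions and do not correspond to the chapter's actual LabVIEW/RS232 implementation.

import serial  # pyserial

def send_match(port, x, y):
    # Hypothetical sketch: send the (x, y) coordinates of the best template match
    # to the microcontroller as a simple ASCII frame over the serial port.
    with serial.Serial(port, 9600, timeout=1) as link:
        link.write(f"{x},{y}\n".encode("ascii"))

# send_match("COM3", 120, 87)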
Fig. 24 shows the complete block diagram of the mobile robot. There is a timed loop where the joystick reading and the serial transmission are performed at a specific time interval, and the vision algorithms are carried out in the same loop. Moreover, there is a selector to switch between manual and automatic control. In the automatic mode, the coordinates of the match are sent to the microcontroller, which runs special routines to orient the robot to the correct position. In the manual mode, the control is carried out by the user through a joystick.
9. Conclusions
Different basic techniques of digital image processing using LabVIEW have been addressed in this chapter. At the beginning, some theoretical concepts about image formation were discussed in order to highlight the effects of the illumination, the scene and the acquisition system on the results of image processing.
Then the stages of a classic image processing system were described, as well as the LabVIEW tools needed to implement each stage, from image acquisition to the control of a mobile robot using template matching. The image transformation section showed how the output image is generated from an input image by means of punctual and grouped operations, and examples of the most common image transformations were presented.
Finally, the pattern recognition section showed how to use an image in a computer vision application through an example of object detection. This example, together with the other functionalities presented, shows that LabVIEW is an excellent platform for developing robotic projects as well as vision and image processing applications. The versatility provided by the LabVIEW software and the capabilities of the IMAQ toolbox increase the possibilities of applying digital image processing in any application area.
10. Acknowledgement
This work is supported by the General Council of Superior Technological Education of
Mexico (DGEST).
Additionally, this work is sponsored by the National Council of Science and Technology
(CONACYT) and the Public Education Secretary (SEP) through PROMEP.
11. References
Chitsaz, H., & La Valle, S. (2006). Minimum Wheel-Rotation Paths for Differential Drive Mobile Robots Among Piecewise Smooth Obstacles. Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006), 1616-1623.
Denning, D. (2006). An Intrusion-Detection Model. IEEE Transactions on Software Engineering, SE-13, 222-232.
(2001). Digital Image Analysis. New York, USA: Springer-Verlag.
Dongkyoung, C. (2003). Tracking Control of Differential-Drive Wheeled Mobile Robots Using a Backstepping-Like Feedback Linearization. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 1-11.
Fairchild, M. (2005). Color Appearance Models. Chichester, UK: Wiley-IS&T.
Forsyth, D., & Ponce, J. (2002). Computer Vision: A Modern Approach. New Jersey: Prentice Hall.
Frery, L. V., Gomes, A., & Levy, S. Image Processing for Computer Graphics and Vision. New York, USA: Springer-Verlag.
Goshtasby, A. (2009). Template Matching in Rotated Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 338-344.
Greenwald, L., & Kopena, J. (2003). Mobile Robot Labs. IEEE Robotics and Automation Magazine, 25-32.
Izák, P., & Hrianka, M. (n.d.). Biomedical Image Analysis by Program "Vision Assistant" and "LabVIEW". Advances in Electrical and Electronic Engineering, 233-236.
Papadopoulos, E., & Misailidis, M. (2008). Calibration and Planning Techniques for Mobile Robots in Industrial Environments. Industrial Robot: An International Journal, 564-572.
Reinders, M. (1997). Eye Tracking by Template Matching Using an Automatic Codebook Generation Scheme. Third Annual Conference of ASCI.
Seong-Min, K., Young-Choon, L., & Lee, S.-C. (2006, October 18). SICE-ICASE International Joint Conference, 1508-1512.
Wyszecki, G., & Stiles, W. (1982). Color Science: Concepts and Methods, Quantitative Data and Formulae. New York: John Wiley & Sons.