
REPORT ON: COMPUTER VISION

Submitted to:
Prof. Samapika Das Biswas
Project Guide, IEM

Submitted by:
Sushmita Mallick
Dept: IT
Roll No: 10400313178
Registration No: 131040110437

DATE- 19-05-2015

1|Page
CERTIFICATE
To Whom It May Concern

This is to certify that the project report entitled “COMPUTER VISION” submitted by

SUSHMITA MALLICK (Roll No: 10400313178, Registration No: 131040110437)

Student of Institute of Engineering & Management, in partial fulfilment of the requirement for
the degree of Bachelor of Information Technology, is a bona fide work carried out by her
under the supervision and guidance of Prof. Samapika Das Biswas during the 4th semester of the
Academic Session 2014-2015. The content of this report has not been submitted to any other
university or institute for the award of any other degree.

I am glad to inform you that the work is entirely original and its performance is found to be
quite satisfactory.

Prof. Samapika Das Biswas
Project Guide
Institute of Engineering & Management

Prof. Moloy Ganguly
HOD, Dept. of ECE
Institute of Engineering & Management

Prof. Dr. A. K. Nayak


Principal
Institute of Engineering & Management
Salt Lake, Sector-V, Electronics Complex
Kolkata-700091

ACKNOWLEDGEMENTS

I would like to express my special thanks of gratitude to my teacher
Samapika ma’am, as well as our principal Dr. A. K. Nayak, who gave me the
golden opportunity to do this wonderful project on the topic Computer Vision,
which also helped me in doing a lot of research. I came to know about so many
new things, and I am really thankful to them.

Secondly, I would also like to thank my parents and friends, who helped
me a lot in finalizing this project within the limited time frame.

I would also like to show my gratitude to the Institute of Engineering and


Management, Kolkata for sharing their pearls of wisdom with me during the
project.

CONTENTS
1. Title Page
2. Certificate
3. Acknowledgements
4. Table of Contents
5. Preface
6. List of Illustrations
7. Abstract
8. Introduction
9. Classification of Images
10. Image Processing and Computer Vision
11. Basic Concepts
12. Image Representation
13. Colour Spaces
14. Typical Tasks of Computer Vision
15. Computer Vision System Methods
16. Applications and Future Prospects
17. Research Areas in Computer Vision
18. Conclusion
19. References
PREFACE
The goal of computer vision is to compute properties of the three-
dimensional world from digital images. Problems in this field include
reconstructing the 3D shape of an environment, determining how
things are moving, and recognizing people and objects and their
activities, all through analysis of images and videos.

This report will provide an introduction to computer vision, with
topics including image formation, feature detection, motion
estimation, image mosaics, 3D shape reconstruction, and object and
face detection and recognition. Applications of these techniques
include building 3D maps, creating virtual characters, organizing
photo and video databases, human-computer interaction, video
surveillance, automatic vehicle navigation, and mobile computer
vision.

LIST OF ILLUSTRATIONS
Fig.1 - Raster Image
Fig.2 - Fields related to computer vision
Fig.3 - Image processing vs Computer Vision
Fig.4 - Binary
Fig.5 - Grayscale
Fig.6 - Colour
Fig.7 - RGB Additive Nature
Fig.8 - HSV
Fig.9 - Object Tracking

ABSTRACT

Today, images and video are everywhere. Online photo-sharing
sites and social networks have them in the billions.
Search engines will produce images of just about any
conceivable query. Practically all phones and computers come
with built-in cameras. It is not uncommon for people to have
many gigabytes of photos and videos on their devices.
Programming a computer and designing algorithms for
understanding what is in these images is the field of computer
vision. Computer vision powers applications like image
search, robot navigation, medical image analysis, photo
management, and many more.

INTRODUCTION

We humans have a sense of vision, due to which we can see
and perceive different objects, traverse and manoeuvre an area,
perform different tasks, and take decisions. This makes us
intelligent, autonomous and self-controlled. But consider a
machine, or a robot. They are man-made, not living things, and
hence do not have a sense of vision unless we impart it to
them.
Inanimate objects like machines and robots are made
artificially; some are manually controlled, whereas others
are autonomous and take decisions on their own. Unlike human
beings, who have a very complex neural network in the brain
that gives us complex decision-making power, robots are
completely blank. It is we who need to construct and
program these robots or computers in such a way that they can
‘see’. This is part of what we call Artificial Intelligence,
commonly known as AI. Computer Vision is, unsurprisingly,
the way and method by which we impart vision capabilities to
machines and robots. As per Wikipedia, computer vision is a
field that includes methods for acquiring, processing,
analysing, and understanding images and, in general, high-
dimensional data from the real world in order to produce
numerical or symbolic information. Most of these concepts
remain the same in artificial intelligence as well; where
there is vision, intelligence and automation are possible as
well.
The thing is that just because something thinks
differently from us doesn’t mean it is not thinking. Let’s say a
camera is mounted on a robot. The robot won’t see the
image the camera captures as we do. All the robot perceives is
some random voltage levels, and an array of binary or decimal
data. It will have absolutely no idea what it is looking at. It
is we, the intelligent humans, who program the robot in such a
way that it identifies different objects.

Classification of Images
Some of the basic concepts related to images and image
processing like types of images, pixel, channel, depth are
discussed here.
 Image - An image is an artefact that depicts or records
visual perception. Images may be two-dimensional, such as a
photograph or screen display, or three-dimensional, such as
a statue or hologram. They may be captured by optical devices,
such as cameras, mirrors, lenses, telescopes, microscopes,
etc., or by natural objects and phenomena, such as the human
eye or water.

 Digital Image - A digital image is a numeric
representation (normally binary) of a two-dimensional image.
It is represented by means of a multidimensional array of
numbers (usually binary). Depending on whether the image
resolution is fixed, it may be of vector or raster type. By
itself, the term digital image usually refers to raster
images or bitmapped images.

 Raster Image - A raster image is one where the image is
stored in matrix form: the pixels are arranged in rows and
columns. This is the most commonly found image type, and can
be created very easily using cameras, software, etc. Most
image processing and editing software supports raster images.

Fig.1
 Vector Image - Images inspired by the concepts of
mathematical geometry (vectors) are called vector images.
In a vector image, each point has a direction and length.
Such images are quite complicated to understand and process;
they are supported by relatively few software packages, and
comparatively little work has been done in this field.

Image Processing and Computer Vision

An image, as it is, is useless to us unless we extract some
useful information from it. Basically, we take an image,
process it by applying some algorithms and procedures to it,
and finally get an output, which can be an image or some other
characteristics.
For example, if we want to enhance an image which has faded or
whose colors are washed out, we would open a photo editing
software like Photoshop or Picasa, and then increase the
brightness, enhance the contrast, fill some light into it,
highlight the shadows a little, and perhaps smoothen the picture
a bit to remove some discontinuities. This is called Image
Processing: we take an image, tweak some of its properties, and
get another, enhanced image.
On the other hand, if we take an image and extract some
features from it, like faces, objects, colors, gestures, etc.,
we call it Computer Vision. We can detect the face of a person
in an image by using Computer Vision.
Thus, in contrast to Image Processing, where we mostly deal
with applying techniques directly to the pixels to enhance the
overall picture, Computer Vision involves working with
higher-level concepts and algorithms related to artificial
intelligence, which involves intense programming so that it
blends with the user’s activities and requirements.

The scope of Computer Vision is much larger, as can be seen in
the following figure —

Fig.2

Many methods in computer vision are based on statistics,
optimization or geometry. The fields most closely related to
computer vision are image processing, image analysis and
machine vision. There is a significant overlap in the range of
techniques and applications that these cover.

The various characterizations which distinguish each of these
fields from the others are presented below:
 Image processing and image analysis tend to focus on 2D
images, and on how to transform one image into another, e.g.,
by pixel-wise operations such as contrast enhancement, local
operations such as edge extraction or noise removal, or
geometrical transformations such as rotating the image.
This characterization implies that image processing/analysis
neither requires assumptions nor produces interpretations
about the image content.

 Computer vision includes 3D analysis from 2D images.
It analyzes the 3D scene projected onto one or several
images, e.g., how to reconstruct structure or other
information about the 3D scene from one or several images.
Computer vision often relies on more or less complex
assumptions about the scene depicted in an image.

 Machine vision is the process of applying a range of
technologies and methods to provide imaging-based automatic
inspection, process control and robot guidance in industrial
applications. Machine vision tends to focus on applications,
mainly in manufacturing, e.g., vision-based autonomous robots
and systems for vision-based inspection or measurement. This
implies that image sensor technologies and control theory are
often integrated with the processing of image data to control
a robot, and that real-time processing is emphasised by means
of efficient implementations in hardware and software. It also
implies that external conditions such as lighting can be, and
often are, more controlled in machine vision than in general
computer vision, which can enable the use of different
algorithms.

 There is also a field called imaging, which primarily
focuses on the process of producing images, but sometimes also
deals with the processing and analysis of images. For example,
medical imaging includes substantial work on the analysis of
image data in medical applications.

 Finally, pattern recognition is a field which uses various
methods to extract information from signals in general, mainly
based on statistical approaches and artificial neural
networks. A significant part of this field is devoted to
applying these methods to image data.

Fig.3

Basic Concepts
Pixels and Resolution

Pixels are tiny dots that form the image. They are the smallest
visual elements that can be seen, and each is physically
located somewhere in a raster image. When an image is stored,
the image file contains the following information:
 Pixel Location
 Pixel Intensity

Resolution basically refers to the total number of pixels in an
image. It is usually represented in m×n (pronounced m-cross-n)
format, where m is the width of the image and n is the height.
An image having a width of 100 px and a height of 100 px has a
resolution of 100-cross-100, or 100×100. Sometimes resolution
is also represented as the product of width and height; a
100×100 image can also be referred to as a 10,000-pixel image.

Aspect Ratio
Aspect Ratio is basically the ratio Width:Height of the image.
For instance, a 256×256 image has an aspect ratio of 1:1. You
must have come across this in several contexts; for example,
while watching movies or TV shows you encounter different
aspect ratio standards. There are basically three:
 Academy Standard – 4:3
 US Digital Standard – 16:9
 Anamorphic Scope Standard – 21:9
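The resolution and aspect-ratio arithmetic above can be sketched in a few lines of Python (an illustrative snippet with hypothetical helper names, not any particular library's API):

```python
from math import gcd

def describe_image(width, height):
    """Return the total pixel count and the aspect ratio reduced to lowest terms."""
    pixels = width * height                 # "resolution" as a pixel count
    d = gcd(width, height)                  # the greatest common divisor reduces the ratio
    return pixels, f"{width // d}:{height // d}"

print(describe_image(100, 100))    # (10000, '1:1')
print(describe_image(1920, 1080))  # (2073600, '16:9') - the US Digital Standard
```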

Image Representation
Images can be represented in three ways —
 Black and White Images (i.e. Binary Images)
 Grayscale Images
 Color Images

Black and White Images


A black and white image is an image which comprises only two
colours: black and white. Black is usually represented as zero
(0) and white as one (1), thus making it a Binary Image.

Fig.4
Grayscale Images
In a Grayscale Image, pixels are represented by several shades
ranging between black and white. Black is usually represented
as 0 and white as 1, but unlike binary images, intermediate
values between 0 and 1 are also possible, resulting in
different shades of gray.

Fig.5
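The relationship between grayscale and binary images can be illustrated by thresholding: every shade at or above a cut-off becomes white (1), everything else black (0). A minimal sketch in plain Python, with nested lists standing in for a real image array:

```python
def to_binary(gray, threshold=0.5):
    """Threshold a grayscale image (values 0.0-1.0) into a binary image of 0s and 1s."""
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in gray]

gray = [[0.1, 0.8],
        [0.5, 0.3]]
print(to_binary(gray))  # [[0, 1], [1, 0]]
```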
Color Images

Color Images are images formed by the combination of the three
primary colors: Red, Green and Blue. Each of these colors has
its own plane of pixel intensities in the form of a separate
channel. The channels correspond to different color spaces as
well.

Fig.6
Depth

Depth represents the number of shades of a particular colour
used in the formation of an image. It applies to grayscale as
well as colour images. For instance, an 8-bit image has 2^8 =
256 shades between black and white, whereas a 16-bit image has
2^16 = 65,536 shades between black and white. Obviously, the
greater the depth of the image, the greater the number of
unique colours/shades used to represent it.
 1-bit : 2^1 = 2 shades (black & white / binary)
 8-bit : 2^8 = 256 shades
 24-bit : 2^24 = 16,777,216 shades (true color)
 64-bit : 2^64 = 18,446,744,073,709,551,616 shades
Depth also represents the space required to store each pixel in
memory: each pixel of an 8-bit image stores 8 bits of
information. But this isn’t always the exact size of an image;
the exact size also depends upon the different colour shades in
the image and the compression technique used.
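The depth arithmetic above is easy to verify in Python (illustrative helpers; the size formula ignores file headers and compression, as noted in the text):

```python
def shades(depth_bits):
    """Number of distinct shades an n-bit depth can represent."""
    return 2 ** depth_bits

def raw_size_bytes(width, height, depth_bits, channels=1):
    """Uncompressed storage: depth_bits per pixel per channel."""
    return width * height * channels * depth_bits // 8

print(shades(8))                    # 256
print(shades(24))                   # 16777216 (true color)
print(raw_size_bytes(100, 100, 8))  # 10000 bytes for an 8-bit grayscale 100x100 image
```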

Relationship between Depth and Intensity

An n-bit image also means that each of its pixels stores its
intensity in n-bit fashion. Each pixel of an image has an
intensity, and this intensity is directly related to the depth
of the image. Above, it was stated that depth represents
different shades of a color; those shades are nothing but
different intensity levels. For example, in an 8-bit grayscale
image there are 2^8 = 256 different shades of gray. This also
means that each pixel can have 256 different intensity levels,
from level 0 to level 255. Here level zero (0) corresponds to
black (0) and level 255 corresponds to white (1). Any value in
between merely represents a gray shade. For instance, a pixel
intensity of 127 corresponds to about 0.5 gray, 56 corresponds
to 56 ÷ 256 = 0.218, 198 corresponds to 198 ÷ 256 = 0.773, etc.
Similarly, in a 16-bit grayscale image there are 2^16 = 65,536
different shades of gray; intensity level zero (0) corresponds
to black (0), level 65535 corresponds to white (1), and any
value in between represents a gray shade.
So each pixel has an intensity value between 0 and 2^n − 1.
Considering an 8-bit image, each pixel can take 256 different
intensities, which means that each pixel is represented in an
8-bit format.
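The normalization worked through above can be sketched directly, following the report's own convention of dividing the level by 2^n (a hypothetical helper for illustration):

```python
def normalize(level, depth_bits=8):
    """Map an integer intensity level onto the 0 (black) to 1 (white) scale,
    using the report's convention of dividing by 2**depth_bits."""
    return level / (2 ** depth_bits)

print(normalize(56))    # 0.21875   (the ~0.218 from the text)
print(normalize(198))   # 0.7734375 (the ~0.773 from the text)
print(normalize(0))     # 0.0, i.e. black
```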

Color Spaces

Every digital color image is represented according to a color
space. There are many types of color spaces, some of which are
RGB, RGBA, HSV, HSL, CMYK, YIQ, YUV, YCbCr, YPbPr, etc. RGB and
HSV are the most widely used.

RGB Color Space

RGB stands for Red-Green-Blue. This color space uses a
combination of the three primary colors, viz. Red (R), Green
(G) and Blue (B), to represent any color in the image. This
makes it the most widely used, intuitive and easy-to-use color
model. It uses the technique of additive color mixing to create
new colors: by mixing different intensities of Red, Green and
Blue, we can get any possible color. For 8-bit images, each
channel (R, G and B) can have an intensity value between 0 and
255, so a mix of these three colors can result in
256 × 256 × 256 = 16,777,216 different colors.

Fig.7
This picture contains several bands of colors. The top band
represents pure red color, below that it shows how it fades
into white color. The same is repeated with pure green and
pure blue, followed by yellow, cyan and magenta (which are a
combination of two of the primary colors) and then two bands
of white and black color.
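The additive mixing behind those bands (yellow, cyan, magenta as sums of two primaries) can be sketched with a hypothetical helper in plain Python:

```python
def additive_mix(c1, c2):
    """Additively mix two RGB triples, clamping each 8-bit channel at 255."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(additive_mix(RED, GREEN))   # (255, 255, 0) -> yellow
print(additive_mix(GREEN, BLUE))  # (0, 255, 255) -> cyan
print(256 ** 3)                   # 16777216 possible 8-bit RGB colors
```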

HSV

HSV stands for Hue-Saturation-Value.

Fig.8

Suppose we want to extract the yellow region of the ball. In
this case, there is a lot of variation in the color intensity due to
the ambient lighting. The top portion of the ball is very bright,
whereas the bottom portion is darker as compared to the other
regions. This is where the RGB color model fails: due to such
a wide range of intensity and color mix, there is no particular
range of RGB values which can be used for extraction. This is
where the HSV color model comes in. Just like the RGB model,
the HSV model has three different parameters.
Hue: In simple terms, this represents the “color”. For
example, red is a color. Green is a color. Pink is a color. Light
Red and Dark Red both refer to the same color red.
Light/Dark Green both refer to the same color green. Thus, in
the above image, to extract the yellow ball, we target the
yellow color, since light/dark yellow refer to yellow.
Saturation: This represents the “amount” of a particular color.
For example we have red (having max value of 255), and we
also have pale red (some lesser value, say 106, etc).
Value: Sometimes represented as intensity, it differentiates
between the light and dark variations of that color. For
example light yellow and dark yellow can be differentiated
using this.
This makes the HSV color space independent of illumination and
makes the processing of images easy. But it isn’t very
intuitive, and some people may have difficulty understanding
its concepts. Each color has a separate hue value, and
different shades of red share the same hue value. Saturation
refers to the amount of color, which is why you can see such
variation in the saturation channel. The value (or intensity)
is the same in a computer-generated image; in real images there
will be variation in the intensity channel as well.
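Python's standard colorsys module can demonstrate the point about hue: light and dark yellow differ in value but share the same hue, which is what makes hue-based extraction robust to lighting.

```python
import colorsys

def to_hsv(r, g, b):
    """Convert an 8-bit RGB triple to HSV (colorsys works on 0.0-1.0 floats)."""
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

bright_yellow = to_hsv(255, 255, 0)
dark_yellow   = to_hsv(128, 128, 0)
print(bright_yellow[0] == dark_yellow[0])  # True: same hue despite different brightness
print(bright_yellow[2], dark_yellow[2])    # the V (value) components differ
```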

The converse is equally true: when you combine three grayscale
images (one per channel), the result is a color image. And this
is no magic either; the three grayscale channels are simply
combined and displayed as a color image.

Channels

In an RGB image, there are three channels: the R, G and B
channels. In an HSV image, there are likewise three channels:
H, S and V. So this is not a new concept.
As per Wikipedia, a channel is a grayscale image comprising
only one of the components (R/G/B or H/S/V) of the image.
Software like OpenCV and MATLAB usually supports images with up
to four channels. Usually we have three channels (like RGB,
HSV), but sometimes there is a fourth channel (as in RGBA or
CMYK).
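Splitting an image into channels is just selecting one component per pixel. A minimal sketch with nested lists standing in for a real image array (in practice a library routine such as OpenCV's split would be used):

```python
def split_channels(image):
    """Split a row-major RGB image into three single-channel (grayscale) planes."""
    return tuple([[pixel[i] for pixel in row] for row in image] for i in range(3))

img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
r, g, b = split_channels(img)
print(r)  # [[255, 0], [0, 255]] - the red plane is itself a grayscale image
```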

Typical tasks of computer vision

Recognition
The classical problem in computer vision, image processing and
machine vision is to determine whether or not the image data
contains some specific object, feature or activity.

 Object recognition –
One or several pre-specified or learned objects or object
classes can be recognized, usually together with their 2D
positions in the image or 3D poses in the scene.
 Identification –
An individual instance of an object is recognized.
Examples include identification of a specific person's face
or fingerprint, identification of handwritten digits, or
identification of a specific vehicle.
 Detection –
The image data are scanned for a specific condition.
Examples include detection of possible abnormal cells or
tissues in medical images or detection of a vehicle in an
automatic road toll system. Detection based on relatively
simple and fast computations is sometimes used for finding
smaller regions of interesting image data which can be
further analyzed by more computationally demanding
techniques to produce a correct interpretation.

Motion analysis
Several tasks relate to motion estimation, where an image
sequence is processed to produce an estimate of the velocity
either at each point in the image or in the 3D scene, or even
of the camera that produces the images. Examples of such
tasks are:
 Egomotion – determining the 3D rigid motion (rotation and
translation) of the camera from an image sequence
produced by the camera.
 Tracking – following the movements of a (usually) smaller
set of interest points or objects (e.g., vehicles or humans) in
the image sequence.
 Optical flow – to determine, for each point in the image,
how that point is moving relative to the image plane, i.e.,
its apparent motion. This motion is a result both of how the
corresponding 3D point is moving in the scene and how the
camera is moving relative to the scene.
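A very crude cousin of these motion tasks is frame differencing: flag pixels whose intensity changed noticeably between two consecutive grayscale frames. This is only an illustrative sketch; real tracking and optical-flow methods are far more sophisticated.

```python
def frame_difference(prev, curr, threshold=30):
    """Return a binary mask marking pixels that changed by more than
    `threshold` between two grayscale frames of the same size."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

prev = [[10, 10, 200]]
curr = [[12, 90, 200]]               # the middle pixel brightened sharply
print(frame_difference(prev, curr))  # [[0, 1, 0]]
```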

Scene reconstruction
Given one or more images of a scene, or a video, scene
reconstruction aims at computing a 3D model of the scene. In
the simplest case the model can be a set of 3D points. More
sophisticated methods produce a complete 3D surface model.
The advent of 3D imaging not requiring motion or scanning,
and related processing algorithms is enabling rapid advances
in this field. Grid-based 3D sensing can be used to acquire 3D
images from multiple angles. Algorithms are now available to
stitch multiple 3D images together into point clouds and 3D
models.
Image restoration
The aim of image restoration is the removal of noise (sensor
noise, motion blur, etc.) from images. The simplest possible
approach for noise removal is various types of filters such as
low-pass filters or median filters. More sophisticated methods
assume a model of what the local image structures look like, a
model which distinguishes them from the noise. By first
analysing the image data in terms of the local image
structures, such as lines or edges, and then controlling the
filtering based on local information from the analysis step, a
better level of noise removal is usually obtained compared to
the simpler approaches.
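The median filter mentioned above is easy to sketch in one dimension: an impulse-noise spike is replaced by the median of its neighbourhood, while smooth regions pass through unchanged (an illustrative pure-Python version; image filters apply the same idea in 2D).

```python
from statistics import median

def median_filter_1d(signal, window=3):
    """Slide a window over the signal, replacing each interior sample with the
    median of its neighbourhood; edge samples are left unchanged."""
    half = window // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        out[i] = median(signal[i - half:i + half + 1])
    return out

noisy = [10, 10, 200, 10, 10]    # a single impulse-noise spike
print(median_filter_1d(noisy))   # [10, 10, 10, 10, 10] - the spike is removed
```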

Computer vision system methods


The organization of a computer vision system is highly
application dependent. Some systems are stand-alone
applications which solve a specific measurement or detection
problem, while others constitute a sub-system of a larger
design which, for example, also contains sub-systems for
control of mechanical actuators, planning, information
databases, man-machine interfaces, etc. The specific
implementation of a computer vision system also depends on
whether its functionality is pre-specified or whether some part of it can be
learned or modified during operation. Many functions are
unique to the application. There are, however, typical
functions which are found in many computer vision systems.
 Image acquisition – A digital image is produced by one or
several image sensors, which, besides various types of
light-sensitive cameras, include range sensors, tomography
devices, radar, ultra-sonic cameras, etc. Depending on the
type of sensor, the resulting image data is an ordinary 2D
image, a 3D volume, or an image sequence. The pixel
values typically correspond to light intensity in one or
several spectral bands (gray images or colour images), but
can also be related to various physical measures, such as
depth, absorption or reflectance of sonic or electromagnetic
waves, or nuclear magnetic resonance.

 Pre-processing – Before a computer vision method can be
applied to image data in order to extract some specific piece
of information, it is usually necessary to process the data in
order to assure that it satisfies certain assumptions implied
by the method. Examples are:
 Re-sampling in order to assure that the image coordinate
system is correct.
 Noise reduction in order to assure that sensor noise does
not introduce false information.
 Contrast enhancement to assure that relevant information
can be detected.
 Scale space representation to enhance image structures at
locally appropriate scales.

 Feature extraction – Image features at various levels of
complexity are extracted from the image data. Typical
examples of such features are:
 Lines, edges and ridges.
 Localized interest points such as corners, blobs or points.
More complex features may be related to texture, shape or
motion.

 Detection/segmentation – At some point in the processing a
decision is made about which image points or regions of the
image are relevant for further processing. Examples are:
 Selection of a specific set of interest points.
 Segmentation of one or multiple image regions which contain
a specific object of interest.

 High-level processing – At this step the input is typically
a small set of data, for example a set of points or an image
region which is assumed to contain a specific object. The
remaining processing deals with, for example:
 Verification that the data satisfy model-based and
application-specific assumptions.
 Estimation of application-specific parameters, such as
object pose or object size.
 Image recognition – classifying a detected object into
different categories.
 Image registration – comparing and combining two
different views of the same object.

 Decision making – making the final decision required for
the application, for example:
 Pass/fail in automatic inspection applications.
 Match / no-match in recognition applications.
 Flag for further human review in medical, military,
security and recognition applications.

Applications and Future Prospects

Well, there are several applications of computer vision, which
makes it a good thing to know and master.

 Computer vision can be used for the navigation of unmanned
vehicles, rovers, submarines, etc. The Curiosity rover, a
car-sized robotic rover exploring the surface of Mars as part
of NASA’s Mars Science Laboratory mission, is one example.

 In industries nowadays, computer-vision-based part
inspection is becoming more and more popular, since it reduces
human influence and makes the overall process faster, more
efficient and less error-prone.

 Computer Vision can also be used for detection tasks, like
counting the number of people, traffic monitoring, detecting
unclaimed objects in public places by means of CCTV, face
detection, text detection, etc.

 Not only detection: it can also track the detected object,
like a robot following an object. There’s an airship which was
implemented with computer vision to detect specific objects on
the ground and to track them from overhead, which is desirable
in places where land traversal is very difficult.

Fig.9

 It can be used to interact with the user. The best examples
are gesture recognition and motion sensing.

 In industries, it is also used to automate a process; in car
manufacturing, for example, fixing of the windshield is
automated using machine vision.

 Another emerging application of computer vision is in the
field of medicine. Medical image processing techniques have
been enhanced a lot these days, like MRI imaging, etc.

 Computer Vision is also used in biometrics, like identifying
fingerprints, iris and face recognition, etc.

 It is also used for information security, like watermarking,
steganography, etc.

 Apart from all these, there are lots of other applications
as well which aren’t mentioned here, like scene
reconstruction, image restoration, robotic control, etc.

Research Areas in Computer Vision

Understanding images
Image understanding with tens of layers, millions of classes,
and billions of images, e.g. Common Objects in Context
(Microsoft COCO).

Understanding humans
Since so much of computer vision is ultimately for humans,
images of humans are an important special case, e.g. human
body pose estimation for Kinect.

Making images better
Pictures are an important part of our lives, and computer
vision gives us the tools to enjoy better pictures, e.g. image
and video editing.

Learning and optimization
Computer vision often requires the solution of especially
large or difficult problems in machine learning and nonlinear
optimization, and researchers innovate in these domains.

Models for video
One view of video is "all of the above, but faster".
Researchers also explore new representations of video and new
modes of interaction, e.g. unwrap mosaics.

Where are we?
Localization problems occur everywhere, from augmented reality
to medical imaging to 3D modelling, e.g. 3D modelling from
images.

Conclusion
There are many kinds of computer vision systems; nevertheless,
all of them contain these basic elements: a power source, at
least one image acquisition device (i.e. camera, CCD, etc.), a
processor, and control and communication cables or some kind
of wireless interconnection mechanism. In addition, a practical
vision system contains software, as well as a display in order
to monitor the system. Vision systems for inner spaces, as most
industrial ones are, contain an illumination system and may be
placed in a controlled environment. Furthermore, a complete
system includes many accessories like camera supports, cables
and connectors. Computer Vision is an emerging field of
research and, if implemented properly, can be used in several
regular operations as well as in disaster management. Computer
Vision is a building block of Artificial Intelligence, which is
being implemented in critical work like diagnosis and
evolutionary robotics.

References
http://moodle.epfl.ch/mod/resource/view.php?id=12423

http://maxembedded.com/2012/12/basic-concepts-of-computer-vision/

http://en.wikipedia.org/wiki/Computer_vision

http://ieeexplore.ieee.org/xpl/topAccessedArticles.jsp?punumber=2200

