A PROJECT REPORT
ON
FACE RECOGNITION SYSTEM WITH FACE DETECTION
Project Group No.: BT3002
Aman Singh (Enrollment No. 19021011990)
Rishika Baranwal (Enrollment No. 19021011707)
Pranav Bhardwaj (Enrollment No. 19021011989)
ABSTRACT
The face is one of the easiest ways to distinguish one individual's identity from another's. Face recognition is a personal identification system that uses the personal characteristics of a person to establish identity. The human face recognition procedure basically consists of two phases: face detection, a process that takes place very rapidly in humans except under conditions where the object is located a short distance away, followed by recognition, which identifies a face as belonging to a particular individual. This staged process has been replicated and developed as a model for facial image recognition (face recognition), one of the most widely studied biometric technologies. Two methods are currently popular in face recognition: the Eigenface method and the Fisherface method. The Eigenface method is based on reducing the dimensionality of face space using Principal Component Analysis (PCA) of facial features. The main purpose of using PCA in face recognition is to form the Eigenface basis (face space) by finding the eigenvectors corresponding to the largest eigenvalues of the face images. The subject area of this face detection and recognition project is image processing, and the software requirement for the project is MATLAB.
Extension: There are a vast number of applications of this face detection project; it can be extended so that the various parts of the face can be detected in various orientations and shapes.
INDEX

CONTENTS
LIST OF FIGURES
ABSTRACT
1. INTRODUCTION
   1.1 FACE RECOGNITION
       1.1.1 GEOMETRIC
       1.1.2 PHOTOMETRIC
   1.2 FACE DETECTION
       1.2.1 PRE-PROCESSING
       1.2.2 CLASSIFICATION
       1.2.3 LOCALIZATION
2. LITERATURE SURVEY
   2.1 FEATURE BASE APPROACH
       2.1.1 DEFORMABLE TEMPLATES
       2.1.2 POINT DISTRIBUTION MODEL (PDM)
   2.2 LOW LEVEL ANALYSIS
   2.3 MOTION BASE
       2.3.1 GRAY SCALE BASE
       2.3.2 EDGE BASE
   2.4 FEATURE ANALYSIS
       2.4.1 FEATURE SEARCHING
   2.5 CONSTELLATION METHOD
       2.5.1 NEURAL NETWORK
   2.6 LINEAR SUB SPACE METHOD
   2.7 STATISTICAL APPROACH
3. DIGITAL IMAGE PROCESSING
   3.1 DIGITAL IMAGE PROCESSING
   3.2 FUNDAMENTAL STEPS IN IMAGE PROCESSING
   3.3 ELEMENTS OF DIGITAL IMAGE PROCESSING SYSTEM
       3.3.1 A SIMPLE IMAGE FORMATION MODEL
4. MATLAB
   4.1 INTRODUCTION
   4.2 MATLAB'S POWER OF COMPUTATIONAL MATHEMATICS
   4.3 FEATURES OF MATLAB
   4.4 USES OF MATLAB
   4.5 UNDERSTANDING THE MATLAB ENVIRONMENT
   4.6 COMMONLY USED OPERATORS AND SPECIAL CHARACTERS
   4.7 COMMANDS
       4.7.1 COMMANDS FOR MANAGING A SESSION
   4.8 INPUT AND OUTPUT COMMANDS
   4.9 M FILES
   4.10 DATA TYPES AVAILABLE IN MATLAB
5. FACE DETECTION
   5.1 FACE DETECTION IN IMAGE
   5.2 REAL TIME FACE DETECTION
   5.3 FACE DETECTION PROCESS
   5.4 FACE DETECTION ALGORITHM
6. FACE RECOGNITION
   6.1 FACE RECOGNITION USING GEOMETRICAL FEATURES
       6.1.1 FACE RECOGNITION USING TEMPLATE MATCHING
   6.2 PROBLEM SCOPE AND SYSTEM SPECIFICATIONS
   6.3 BRIEF OUTLINE OF THE IMPLEMENTED SYSTEM
   6.4 FACE RECOGNITION DIFFICULTIES
       6.4.1 INTER CLASS SIMILARITY
       6.4.2 INTRA CLASS SIMILARITY
   6.5 PRINCIPAL COMPONENT ANALYSIS
   6.6 UNDERSTANDING EIGENFACES
   6.7 IMPROVING FACE DETECTION USING RECONSTRUCTION
   6.8 POSE INVARIANT FACE RECOGNITION
7. CONCLUSION
8. REFERENCES
9. APPENDIX
LIST OF FIGURES
1.2 FACE DETECTION ALGORITHM
2.1 DETECTION METHODS
2.2 FACE DETECTION
3.2 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING
3.3 ELEMENTS OF DIGITAL IMAGE PROCESSING SYSTEM
5.1 A SUCCESSFUL FACE DETECTION IN AN IMAGE WITH A FRONTAL VIEW OF A HUMAN FACE
5.2.1 FRAME 1 FROM CAMERA
5.2.2 FRAME 2 FROM CAMERA
5.2.3 SPATIO-TEMPORALLY FILTERED IMAGE
5.3 FACE DETECTION
5.3.1 AVERAGE HUMAN FACE IN GREY-SCALE
5.3.2 AREA CHOSEN FOR FACE DETECTION
5.3.3 BASIS FOR A BRIGHT INTENSITY INVARIANT SENSITIVE TEMPLATE
5.3.4 SCANNED IMAGE DETECTION
5.4 FACE DETECTION ALGORITHM
5.4.1 MOUTH DETECTION
[Project schedule chart; the time-axis tick marks are not recoverable. The planned tasks were:]
1. Existing system study
2. Planning phase
3. Problem definition
4. Investigation of system requirements
5. Appending knowledge base of system
6. Design of face detection
7. Collection of database
8. Design of face recognition
CHAPTER-1
INTRODUCTION
There are two predominant approaches to the face recognition problem: Geometric
(feature based) and photometric (view based). As researcher interest in face recognition
continued, many different algorithms were developed, three of which have been well studied in
face recognition literature.
Face detection involves separating image windows into two classes: one containing faces (targets), and one containing the background (clutter). It is difficult because, although commonalities exist between faces, they can vary considerably in terms of age, skin colour and facial expression. The problem
is further complicated by differing lighting conditions, image qualities and geometries, as well as
the possibility of partial occlusion and disguise. An ideal face detector would therefore be able to
detect the presence of any face under any set of lighting conditions, upon any background. The
face detection task can be broken down into two steps. The first step is a classification task that
takes some arbitrary image as input and outputs a binary value of yes or no, indicating whether
there are any faces present in the image. The second step is the face localization task that aims to
take an image as input and output the location of any face or faces within that image as some
bounding box with (x, y, width, height).
The face detection system can be divided into the following steps:
1. Pre-Processing: To reduce the variability in the faces, the images are processed before they are fed into the network. All positive examples, that is, the face images, are obtained by cropping images with frontal faces to include only the front view. All the cropped images are then corrected for lighting through standard algorithms.
2. Classification: A neural network (or another classifier) is trained on the pre-processed examples to decide whether a given image window is a face or a non-face.
3. Localization: The trained neural network is then used to search for faces in an image and, if present, localize them in a bounding box.
Features of the face on which the work has been done: position, scale, orientation, illumination.
CHAPTER-2
LITERATURE SURVEY
2.1 FEATURE BASE APPROACH
Active Shape Model: Active shape models focus on complex non-rigid features such as the actual physical and higher-level appearance of features. That is, Active Shape Models (ASMs) are aimed at automatically locating landmark points that define the shape of any statistically modelled object in an image; for faces, these landmarks are facial features such as the eyes, lips, nose, mouth and eyebrows. The training stage of an ASM involves building a statistical shape model from a set of labelled training images.
Snakes: The first type uses a generic active contour called a snake, first introduced by Kass et al. in 1987. Snakes are used to identify head boundaries [8,9,10,11,12]. In order to achieve the task, a snake is first initialized in the proximity of a head boundary. It then locks onto nearby edges and subsequently assumes the shape of the head. The evolution of a snake is achieved by minimizing an energy function E_snake (by analogy with physical systems):

E_snake = E_internal + E_external

where E_internal and E_external are the internal and external energy functions. Internal energy depends on the intrinsic properties of the snake and defines its natural evolution; the typical natural evolution of a snake is shrinking or expanding. The external energy counteracts the internal energy and enables the contour to deviate from this natural evolution and eventually assume the shape of nearby features, such as the head boundary, at a state of equilibrium. There are two main considerations in forming snakes: the selection of energy terms and the energy minimization technique. Elastic energy is commonly used as the internal energy; it varies with the distance between control points on the snake, giving the contour an elastic-band characteristic that causes it to shrink or expand. The external energy, on the other hand, relies on image features. Energy minimization is performed by optimization techniques such as steepest gradient descent, which requires heavy computation; Huang and Chen, and Lam and Yan, both employ fast iteration methods using greedy algorithms. Snakes have some demerits: the contour often becomes trapped on false image features, and snakes are not suitable for extracting non-convex features.
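To make the greedy minimization concrete, the following is a minimal MATLAB sketch of a greedy snake; the function name, the weights alpha and beta, and the 3x3 search neighbourhood are illustrative assumptions, not taken from this report. Each control point moves to the neighbouring pixel that minimizes a weighted sum of elastic (internal) and edge (external) energy.

function pts = greedy_snake(img, pts, iters, alpha, beta)
% img  : grayscale image; pts : Kx2 integer [row col] contour points
% alpha weighs elasticity (point spacing), beta weighs edge attraction
[gx, gy] = gradient(double(img));
gmag = sqrt(gx.^2 + gy.^2);
gmag = gmag / max(gmag(:));              % normalized edge strength
[H, W] = size(img);
K = size(pts, 1);
for it = 1:iters
    meanDist = mean(sqrt(sum(diff(pts([1:K 1],:)).^2, 2)));
    for k = 1:K
        prev = pts(mod(k-2,K)+1,:);      % neighbours on the closed contour
        next = pts(mod(k,K)+1,:);
        best = pts(k,:); bestE = inf;
        for dr = -1:1
            for dc = -1:1
                cand = pts(k,:) + [dr dc];
                if cand(1)<1 || cand(1)>H || cand(2)<1 || cand(2)>W, continue; end
                % internal energy: deviation from the mean point spacing
                eInt = abs(norm(cand-prev)-meanDist) + abs(norm(cand-next)-meanDist);
                % external energy: strong edges lower the energy
                eExt = -gmag(cand(1), cand(2));
                E = alpha*eInt + beta*eExt;
                if E < bestE, bestE = E; best = cand; end
            end
        end
        pts(k,:) = best;
    end
end
end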
2.1.1 Deformable Templates: Deformable templates were then introduced by Yuille et al. to take into account the a priori knowledge of facial features and to improve on the performance of snakes. Locating a facial feature boundary is not an easy task, because the local evidence of facial edges is difficult to organize into a sensible global entity using generic contours. The low brightness contrast around some of these features also makes edge detection difficult. Yuille et al. took the concept of snakes a step further by incorporating global information about the eye to improve the reliability of the extraction process. Deformable template approaches were developed to solve this problem. Deformation is based on local valley, edge, peak, and brightness information. Besides the face boundary, salient feature (eyes, nose, mouth and eyebrows) extraction is a great challenge of face recognition. The deformation is steered by a combined energy function of the form

E = E_v + E_e + E_p + E_i + E_internal

whose terms correspond to the valley, edge, peak, image brightness and internal (shape) energies respectively.
2.2 LOW LEVEL ANALYSIS
Low level analysis is based on low-level visual features such as colour, intensity, edges and motion.
Skin Colour Base: Colour is a vital feature of human faces. Using skin colour as a feature for tracking a face has several advantages: colour processing is much faster than processing other facial features, and under certain lighting conditions colour is orientation invariant. This property makes motion estimation much easier, because only a translation model is needed for motion estimation. Tracking human faces using colour as a feature also has several problems; for example, the colour representation of a face obtained by a camera is influenced by many factors (ambient light, object movement, etc.).
Three different face detection algorithms are available based on the RGB, YCbCr and HSI colour space models, and the implementation of these algorithms involves three main steps. Crowley and Coutaz suggested one of the simplest skin colour algorithms for detecting skin pixels.
The perceived human color varies as a function of the relative direction to the illumination.
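As an illustration, the following is a minimal MATLAB sketch of skin-colour segmentation in the YCbCr space; the input file name and the Cb/Cr thresholds are assumptions (commonly quoted ranges), not values given in this report.

rgb = imread('face.jpg');                 % hypothetical input image
ycbcr = rgb2ycbcr(rgb);                   % convert from RGB to YCbCr
cb = ycbcr(:,:,2);
cr = ycbcr(:,:,3);
skinMask = cb >= 77 & cb <= 127 & cr >= 133 & cr <= 173;   % assumed skin ranges
skinMask = bwareaopen(skinMask, 200);     % discard small noisy regions
imshow(skinMask); title('Candidate skin regions');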
2.3 MOTION BASE
When a video sequence is available, motion information can be used to locate moving objects. Moving silhouettes, such as the face and body parts, can be extracted by simply thresholding accumulated frame differences. Besides face regions, facial features can be located by frame differences.
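A minimal MATLAB sketch of this frame-differencing idea follows, assuming f1, f2 and f3 are consecutive grayscale frames; the threshold and structuring-element size are assumed, tunable values.

d1 = imabsdiff(f2, f1);
d2 = imabsdiff(f3, f2);
acc = double(d1) + double(d2);                       % accumulated frame differences
movingMask = acc > 25;                               % assumed threshold
movingMask = imclose(movingMask, strel('disk', 5));  % merge the moving silhouette
stats = regionprops(movingMask, 'BoundingBox', 'Area');
[~, idx] = max([stats.Area]);                        % largest moving blob (head and body)
faceBodyBox = stats(idx).BoundingBox;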
2.3.1 Gray Scale Base: Gray information within a face can also be treated as an important feature. Facial features such as eyebrows, pupils and lips generally appear darker than their surrounding facial regions. Various recent feature extraction algorithms search for local gray minima within segmented facial regions. In these algorithms, the input images are first enhanced by contrast stretching and gray-scale morphological routines to improve the quality of local dark patches and thereby make detection easier; the extraction of the dark patches is then achieved by low-level gray-scale thresholding. Yang and Huang presented a new approach based on the gray-scale behaviour of faces in pyramid (mosaic) images. This system utilizes a hierarchical face location procedure consisting of three levels: the higher two levels are based on mosaic images at different resolutions, and in the lower level an edge detection method is proposed. Moreover, this algorithm gives a fine response in complex backgrounds where the size of the face is unknown.
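A minimal MATLAB sketch of this enhance-then-threshold idea; the file name, structuring-element size and threshold are assumptions, and the bottom-hat transform is used here as the gray-scale morphological routine that emphasizes local dark patches.

g = im2double(rgb2gray(imread('face.jpg')));   % hypothetical image
g = imadjust(g);                               % contrast stretching
dark = imbothat(g, strel('disk', 5));          % emphasize local dark patches
darkPatches = dark > 0.2;                      % low-level threshold (assumed)
imshow(darkPatches); title('Candidate eyes, brows and lips');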
2.3.2 Edge Base: Face detection based on edges was introduced by Sakai et al. This work was based on analysing line drawings of faces from photographs, aiming to locate facial features. Later, Craw et al. proposed a hierarchical framework based on Sakai et al.'s work to trace a human head outline, and thereafter remarkable works were carried out by many researchers in this specific area. The method suggested by Anila and Devarajan is very simple and fast. They proposed a framework consisting of three steps: initially the images are enhanced by applying a median filter for noise removal and histogram equalization for contrast adjustment; in the second step the edge image is constructed from the enhanced image by applying the Sobel operator; then a novel edge-tracking algorithm is applied to extract sub-windows from the enhanced image based on edges. Finally, they used the Back Propagation Neural Network (BPN) algorithm to classify each sub-window as either face or non-face.
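The enhancement and edge steps of this pipeline map directly onto standard MATLAB functions; the sketch below assumes a hypothetical input file and omits the edge-tracking and BPN classification stages.

g = rgb2gray(imread('face.jpg'));   % hypothetical image
g = medfilt2(g, [3 3]);             % median filter for noise removal
g = histeq(g);                      % histogram equalization for contrast
edges = edge(g, 'sobel');           % edge image via the Sobel operator
imshow(edges);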
2.4 FEATURE ANALYSIS
These algorithms aim to find structural features that exist even when the pose, viewpoint or lighting conditions vary, and then use these to locate faces. These methods are designed mainly for face localization.
Paul Viola and Michael Jones presented an approach for object detection which minimizes computation time while achieving high detection accuracy. Viola and Jones [39] proposed a fast and robust method for face detection which was 15 times quicker than any technique at the time of release, with 95% accuracy at around 17 fps. The technique relies on the use of simple Haar-like features that are evaluated quickly through a new image representation. Based on the concept of an "integral image", it generates a large set of features and uses the AdaBoost boosting algorithm to reduce the over-complete set, and the introduction of a degenerate tree of boosted classifiers provides robust and fast inference. The detector is applied in a scanning fashion on gray-scale images; the scanned window can be scaled, as can the features evaluated.
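The integral image at the heart of this method can be computed in two lines of MATLAB; once built, any rectangular pixel sum needs only four array references. The image file and box coordinates below are assumed examples.

I = double(rgb2gray(imread('face.jpg')));   % hypothetical image
ii = cumsum(cumsum(I, 1), 2);               % integral image
% Sum of pixels inside the box with corners (r1,c1)..(r2,c2), with r1,c1 > 1:
r1 = 10; c1 = 10; r2 = 30; c2 = 30;         % assumed example box
boxSum = ii(r2,c2) - ii(r1-1,c2) - ii(r2,c1-1) + ii(r1-1,c1-1);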
2.5 CONSTELLATION METHOD
All methods discussed so far are able to track faces, but locating faces of various poses in a complex background remains truly difficult. To reduce this difficulty, investigators group facial features into face-like constellations using more robust modelling approaches such as statistical analysis. Various types of face constellations have been proposed by Burl et al., who established the use of statistical shape theory on features detected from a multi-scale Gaussian derivative filter. Huang et al. also apply a Gaussian filter for pre-processing in a framework based on image feature analysis (an image-based approach).
CHAPTER-3
DIGITAL IMAGE PROCESSING
Interest in digital image processing methods stems from two principal application areas:
1. Improvement of pictorial information for human interpretation
2. Processing of scene data for autonomous machine perception
In this second application area, interest focuses on procedures for extracting image information
in a form suitable for computer processing.
Examples include automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, and automatic processing of fingerprints.
Image:
An image refers to a 2D light intensity function f(x, y), where (x, y) denotes spatial coordinates and the value of f at any point (x, y) is proportional to the brightness or gray level of the image at that point. A digital image is an image f(x, y) that has been discretized both in spatial coordinates and in brightness. The elements of such a digital array are called image elements or pixels.
To be suitable for computer processing, an image f(x, y) must be digitized both spatially and in amplitude. Digitization of the spatial coordinates (x, y) is called image sampling, and amplitude digitization is called gray-level quantization.
The storage and processing requirements increase rapidly with the spatial resolution and the
number of gray levels.
Example: A 256 gray-level image of size 256x256 occupies 64k bytes of memory.
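This storage figure is easy to verify in MATLAB; a minimal sketch (the variable name is mine):

img = uint8(zeros(256, 256));   % 256 gray levels = 8 bits = 1 byte per pixel
info = whos('img');
fprintf('Image occupies %d bytes\n', info.bytes);   % prints 65536 (64 KB)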
3.2 FUNDAMENTAL STEPS IN IMAGE PROCESSING
1. Image acquisition: to acquire a digital image.
2. Image pre-processing: to improve the image in ways that increase the chances of success of the other processes.
3. Image segmentation: to partition an input image into its constituent parts or objects.
4. Image representation: to convert the input data to a form suitable for computer processing.
5. Image description: to extract features that result in some quantitative information of interest, or features that are basic for differentiating one class of objects from another.
6. Image recognition: to assign a label to an object based on the information provided by its description.
[Figure 3.2: Fundamental steps in digital image processing: image acquisition, pre-processing, segmentation, representation and description, and recognition, all supported by a knowledge base of the problem domain.]
3.3 ELEMENTS OF DIGITAL IMAGE PROCESSING SYSTEM
A digital image processing system consists of five elements:
1. Acquisition
2. Storage
3. Processing
4. Communication
5. Display
Storage media include optical discs, tape, video tape and magnetic discs, with a communication channel linking the elements.
CHAPTER-4
MATLAB
4.1 INTRODUCTION
The name MATLAB stands for MATrix LABoratory. MATLAB was written originally to provide easy access to matrix software developed by the LINPACK (linear system package) and EISPACK (Eigen system package) projects. MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and a programming environment. MATLAB has many advantages compared to conventional computer languages (e.g., C, FORTRAN) for solving technical problems. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. Specific applications are collected in packages referred to as toolboxes. There are toolboxes for signal processing, symbolic computation, control theory, simulation, optimization, and several other fields of applied science and engineering.
4.2 MATLAB'S POWER OF COMPUTATIONAL MATHEMATICS
MATLAB is used in every facet of computational mathematics. The following are some of the calculations where it is most commonly used:
• Dealing with Matrices and Arrays
• Linear Algebra
• Algebraic Equations
• Non-linear Functions
• Statistics
• Data Analysis
• Integration
• Transforms
• Curve Fitting
4.3 FEATURES OF MATLAB
• It provides an interactive environment for iterative exploration, design and problem solving.
• It provides vast library of mathematical functions for linear algebra, statistics, Fourier
analysis, filtering, optimization, numerical integration and solving ordinary differential
equations.
• It provides built-in graphics for visualizing data and tools for creating custom plots.
• MATLAB's programming interface gives development tools for improving code quality and maintainability, and for maximizing performance.
• It provides functions for integrating MATLAB based algorithms with external applications
and languages such as C, Java, .NET and Microsoft Excel.
4.4 USES OF MATLAB
MATLAB is widely used as a computational tool in science and engineering, encompassing the fields of physics, chemistry, mathematics and all engineering streams. It is used in a range of applications including:
• control systems
• computational finance
• computational biology
4.5 UNDERSTANDING THE MATLAB ENVIRONMENT
Current Folder - This panel allows you to access the project folders and files.
Command Window - This is the main area where commands can be entered at the command line. It is indicated by the command prompt (>>).
Workspace - The workspace shows all the variables created and/or imported from files.
Command History - This panel shows or reruns commands that are entered at the command line.
4.6 COMMONLY USED OPERATORS AND SPECIAL CHARACTERS
MATLAB supports the following commonly used operators and special characters:

Operator   Purpose
+          Plus; addition operator.
-          Minus; subtraction operator.
.^         Array exponentiation operator.
\          Left-division operator.
./         Array right-division operator.
4.7 COMMANDS
MATLAB is an interactive program for numerical computation and data visualization. You
can enter a command by typing it at the MATLAB prompt '>>' on the Command Window.
4.7.1 COMMANDS FOR MANAGING A SESSION

Command    Purpose
clc        Clears the command window.
clear      Removes variables from memory.
exist      Checks for the existence of a file or variable.

4.8 INPUT AND OUTPUT COMMANDS

Command    Purpose
disp       Displays the contents of an array or string.
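A short command-window session illustrating these commands (the variable is mine):

clc              % clear the command window
clear            % remove all variables from memory
x = [1 2 3];
disp(x)          % display the array
exist('x')       % returns 1: 'x' is a variable in the workspace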
4.9 M FILES
MATLAB allows writing two kinds of program files:
Scripts: Script files are program files with a .m extension. In these files, you write a series of commands which you want to execute together. Scripts do not accept inputs and do not return any outputs; they operate on data in the workspace.
Functions: Function files are also program files with a .m extension. Functions can accept inputs and return outputs, and internal variables are local to the function.
To create script files, you need to use a text editor. You can open the MATLAB editor from the command prompt by typing

edit

or

edit <file name>
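For illustration, a minimal script and a minimal function are sketched below as two separate files; the file names, the image file and the statistics chosen are assumptions.

% showgray.m -- a script: no inputs or outputs, works on workspace data
g = rgb2gray(imread('face.jpg'));   % 'face.jpg' is a hypothetical file
imshow(g);

% imstats.m -- a function: accepts an input, returns outputs,
% and its variables (v, m, s) are local to the function
function [m, s] = imstats(img)
v = double(img(:));   % vectorize the pixel values
m = mean(v);          % mean intensity
s = std(v);           % standard deviation
end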
4.10 DATA TYPES AVAILABLE IN MATLAB
The following table shows the most commonly used data types in MATLAB:

Data type   Description
int8        8-bit signed integer
uint8       8-bit unsigned integer
int16       16-bit signed integer
uint16      16-bit unsigned integer
int32       32-bit signed integer
uint32      32-bit unsigned integer
int64       64-bit signed integer
uint64      64-bit unsigned integer
single      Single-precision numerical data
double      Double-precision numerical data
logical     Logical values 1 or 0, representing true and false respectively
CHAPTER-5
FACE DETECTION
5.1 FACE DETECTION IN IMAGE
Figure 5.1 A successful face detection in an image with a frontal view of a human face.
Most face detection systems attempt to extract a fraction of the whole face, thereby eliminating most of the background and other areas of an individual's head, such as hair, that are not necessary for the face recognition task. With static images, this is often done by running a window across the image. The face detection system then judges whether a face is present inside the window (Brunelli and Poggio, 1993). Unfortunately, with static images there is a very large search space of possible locations of a face in an image.
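That scanning procedure amounts to a simple double loop; the sketch below assumes a hypothetical input file, window size and stride, and a classifier function isFace(window), which is not defined in this report, that judges each window.

img = rgb2gray(imread('scene.jpg'));    % hypothetical input image
win = 27;                                % assumed window size
step = 3;                                % assumed scan stride
hits = [];                               % accepted boxes, one [x y w h] per row
for r = 1:step:size(img,1)-win
    for c = 1:step:size(img,2)-win
        window = img(r:r+win-1, c:c+win-1);
        if isFace(window)                % hypothetical face/non-face classifier
            hits(end+1,:) = [c r win win];
        end
    end
end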
Most face detection systems use an example-based learning approach to decide whether or not a face is present in the window at a given instant (Sung and Poggio, 1994 and Sung, 1995). A neural network or some other classifier is trained using supervised learning with 'face' and 'non-face' examples, thereby enabling it to classify an image (the window in a face detection system) as a 'face' or 'non-face'. Unfortunately, while it is relatively easy to find face examples, it is much harder to obtain a representative sample of non-face images.
There is another technique for determining whether there is a face inside the face detection system's window: template matching. The difference between a fixed target pattern (the face) and the window is computed and thresholded. If the window contains a pattern which is close to the target pattern (face), then the window is judged as containing a face. An implementation of template matching called correlation templates uses a whole bank of fixed-size templates to detect facial features in an image (Bichsel, 1991; Brunelli and Poggio, 1993). By using several templates of different (fixed) sizes, faces of different scales (sizes) are detected. The other implementation of template matching uses a deformable template (Yuille, 1992). Instead of using several fixed-size templates, we use a deformable template (which is non-rigid) and thereby change the size of the template, hoping to detect a face in an image.
A face detection scheme related to template matching is image invariants. Here, the fact that the local ordinal structure of the brightness distribution of a face remains largely unchanged under different illumination conditions (Sinha, 1994) is used to construct a spatial template of the face which closely corresponds to facial features. In other words, the average grey-scale intensities in human faces are used as a basis for face detection. For example, an individual's eye region is almost always darker than his forehead or nose. Therefore an image will match the template if it satisfies the 'darker than' and 'brighter than' relationships (Sung and Poggio, 1994).
5.2 REAL TIME FACE DETECTION
Real-time face detection involves detection of a face from a series of frames from a video-capturing device. While the hardware requirements for such a system are far more stringent, from a computer vision standpoint real-time face detection is actually a far simpler process than detecting a face in a static image. This is because, unlike most of our surrounding environment, people are continually moving: we walk around, blink, fidget, wave our hands about, etc.
Since in real-time face detection the system is presented with a series of frames in which to detect a face, by using spatio-temporal filtering (finding the difference between subsequent frames) the area of the frame that has changed can be identified and the individual detected (Wang and Adelson, 1994 and Adelson and Bergen, 1986). Furthermore, as seen in the figure, exact face locations can be easily identified by using a few simple rules, such as:
1) the head is the small blob above a larger blob (the body).
Real-time face detection has therefore become a relatively simple problem and is possible even in unstructured and uncontrolled environments using these very simple image processing techniques and reasoning rules.
5.3 FACE DETECTION PROCESS
Face detection is the process of identifying different parts of human faces, such as the eyes, nose and mouth; this process can be achieved using MATLAB code. In this project the author will attempt to detect faces in still images by using image invariants. To do this it is useful to study the grey-scale intensity distribution of an average human face. The following 'average human face' was constructed from a sample of 30 frontal-view human faces, of which 12 were from females and 18 from males. A suitably scaled colormap has been used to highlight grey-scale intensity differences. Each window examined by the detector is classified into one of two classes:
1. Face
2. Non-face
CHAPTER-6
FACE RECOGNITION
Over the last few decades many techniques have been proposed for face recognition. Many of the techniques proposed during the early stages of computer vision cannot be considered successful, but almost all of the recent approaches to the face recognition problem have been creditable. According to the research by Brunelli and Poggio (1993), all approaches to human face recognition can be divided into two strategies:
(1) geometrical features and
(2) template matching.
6.1 FACE RECOGNITION USING GEOMETRICAL FEATURES
Figure 6.1 Geometrical features (white) which could be used for face recognition
The advantage of using geometrical features as a basis for face recognition is that recognition is possible even at very low resolutions and with noisy images (images with many disorderly pixel intensities). Although the face cannot be viewed in detail, its overall geometrical configuration can be extracted for face recognition. The technique's main disadvantage is that the automated extraction of these geometrical features is very hard.
Figure 6.3.1: Principal Component Analysis transform from 'image space' to 'face space'.
Using Principal Component Analysis, the segmented frontal-view face image is transformed from what is sometimes called 'image space' to 'face space'. All faces in the face database are transformed into face space. Then face recognition is achieved by transforming any given test image into face space and comparing it with the training set vectors. The closest matching training set vector should belong to the same individual as the test image. Principal Component Analysis is of special interest because the transformation to face space is based on the variation of human faces (in the face database).
Face recognition and detection is a pattern recognition approach for personal identification, used in addition to other biometric approaches such as fingerprint recognition, signature, retina and so forth. The face is the most common biometric used by humans; applications range from static mug-shot verification to identification in a cluttered background.
6.5 PRINCIPAL COMPONENT ANALYSIS
Principal Component Analysis (or Karhunen-Loeve expansion) is a suitable strategy for face recognition because it identifies variability between human faces, which may not be immediately obvious. Principal Component Analysis (hereafter PCA) does not attempt to categorise faces using familiar geometrical differences, such as nose length or eyebrow width. Instead, a set of human faces is analysed using PCA to determine which 'variables' account for the variance of faces. In face recognition, these variables are called eigenfaces because when plotted they display an eerie resemblance to human faces. Although PCA is used extensively in statistical analysis, the pattern recognition community started to use PCA for classification only relatively recently. As described by Johnson and Wichern (1992), 'principal component analysis is concerned with explaining the variance-covariance structure through a few linear combinations of the original variables.' Perhaps PCA's greatest strengths are in its ability for data reduction and interpretation. For example, a 100x100 pixel area containing a face can be very accurately represented by just 40 eigenvalues. Each eigenvalue describes the magnitude of each eigenface in each image. Furthermore, all interpretation (i.e. recognition) operations can now be done using just the 40 eigenvalues that represent a face, instead of manipulating the 10000 values contained in a 100x100 image. Not only is this computationally less demanding, but the recognition information of several thousand pixel values is also compressed into just a few numbers.
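A minimal MATLAB sketch of this computation follows, assuming X is a 10000 x P matrix whose columns are P vectorized 100x100 training faces and x is a vectorized test face; the variable names and the P x P eigendecomposition shortcut (the standard Turk-Pentland trick) are assumptions, not taken from this report.

A = mean(X, 2);                         % average face (10000 x 1)
Phi = X - A;                            % mean-subtracted training faces
[V, D] = eig(Phi' * Phi);               % P x P problem instead of 10000 x 10000
[~, order] = sort(diag(D), 'descend');  % strongest variation first
V = V(:, order(1:40));                  % keep 40 components, as in the text
U = Phi * V;                            % eigenfaces as the columns of U
U = U ./ vecnorm(U);                    % normalize each eigenface
w = U' * (x - A);                       % the 40 weights representing face x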
6.6 UNDERSTANDING EIGENFACES
An image of size N x N can be considered as a vector of dimension N^2; for example, a typical 100x100 image used in this project becomes a vector of dimension 10000. When an image is transformed into face space, four cases can arise:
1. The projected image is a face and is transformed near a face in the face database.
2. The projected image is a face and is not transformed near a face in the face database.
3. The projected image is not a face but is transformed near a face in the face database.
4. The projected image is not a face and is not transformed near a face in the face database.
While it is possible to find the closest known face to the transformed image face by calculating the Euclidean distance to the other vectors, how does one know whether the image being transformed actually contains a face? Since PCA is a many-to-one transform, several vectors in the image space (images) will map to a point in face space; the problem is that even non-face images may transform near a known face image's face space vector. Turk and Pentland (1991a) described a simple way of checking whether an image is actually of a face: transform the image into face space and then transform it back (reconstruct it) into image space. Using the previous notation,

I' = U * U^T * (I - A)

where the columns of U are the eigenfaces, A is the average face, and I' is the reconstruction of the mean-subtracted image.
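A sketch of this check, reusing U and A from the equation above; the threshold is an assumption that would be chosen empirically.

phi = I - A;                    % mean-subtracted input image (vectorized)
recon = U * (U' * phi);         % into face space and back into image space
err = norm(phi - recon);        % small for faces, large for non-faces
isFaceImage = err < threshold;  % 'threshold' assumed, chosen empirically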
[Figure 6.6.3: A hippo image and a face image, each shown in image space, as a column of face-space coefficients, and reconstructed back in image space. The Euclidean distance between a face image and its reconstruction will be lower than that of a non-face image.]
6.8 POSE INVARIANT FACE RECOGNITION
If an image of the same individual is submitted within a 30° angle from the frontal view, he or she can be identified.
[Figure: Nine images in the face database from a single known individual.]
When an individual's frontal view and 30° left view are known, even the individual's 15° left view can be recognised.
APPENDIX
FACE DETECTION:
BB =

    52    38    73    73
   379    84    71    71
   198    57    72    72
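The report lists only this output matrix; a sketch of code that would produce output of this form, written in the same style as the detectors below, is:

%To detect Face
FaceDetect = vision.CascadeObjectDetector;   % default frontal-face model
BB = step(FaceDetect, I);                    % each row of BB is [x y width height]
figure, imshow(I); hold on
for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');
end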
NOSE DETECTION:
%To detect Nose
NoseDetect = vision.CascadeObjectDetector('Nose','MergeThreshold',16);
BB = step(NoseDetect,I);

EXPLANATION:
To denote the object of interest as 'nose', the argument 'Nose' is passed to vision.CascadeObjectDetector. Based on the input image, we can modify the default values of the parameters passed to vision.CascadeObjectDetector; here the default value for 'MergeThreshold' is 4.
MOUTH DETECTION:
%To detect Mouth
MouthDetect = vision.CascadeObjectDetector('Mouth','MergeThreshold',16);
BB = step(MouthDetect,I);
figure, imshow(I); hold on
for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');
end
EYE DETECTION:
%To detect Eyes (the detector's construction is missing in the report;
%an eye-pair model such as 'EyePairBig' is assumed here)
EyeDetect = vision.CascadeObjectDetector('EyePairBig');
BB = step(EyeDetect,I);
figure, imshow(I);
rectangle('Position',BB,'LineWidth',4,'LineStyle','-','EdgeColor','b');
title('Eyes Detection');
Eyes = imcrop(I,BB);
figure, imshow(Eyes);
FACE RECOGNITION
function closeness = recognition(input_image, U, R)
% By L.S. Balasuriya
% input_image : 100x100 test image; U : eigenfaces as columns (10000 x M);
% R : projected training set, one row per known face image
vinput = reshape(input_image,[10000 1]);    % vectorize the test image
facespace = vinput'*U;                      % project into face space
%********Euclidean distance*****************************************
[p,ignor] = size(R);
distance_vecs = R - repmat(facespace,[p 1]);
distance = sqrt(sum(distance_vecs.^2, 2));  % Euclidean distance to each known face
[ignor,closeness] = sort(distance);         % closeness(1) = index of best match
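A hypothetical call, assuming U holds the eigenfaces and R the projected training set:

closeness = recognition(test_image, U, R);
best_match = closeness(1);   % row of R (known face) nearest to the test image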