Project Report
(Documentation Work; for Study Purposes Only)

FACE RECOGNITION USING BIOMETRICS ON JAVA

PROJECT
1. MINOR PART
2. MAJOR PART

1. MINOR PART
CONTENTS

Chapter No.  Title                                     Page no.
             Abstract
1.           Introduction
2.           Principal Component Analysis
3.           How to Work with PCA
4.           Face Recognition                          12
5.           Feedforward Neural Network                15
6.           Results                                   17
7.           Performance Analysis and Discussions      25
8.           Conclusion                                27
             References
             Appendix                                  29
ABSTRACT
CHAPTER:1
INTRODUCTION
Face recognition has been studied extensively for more than 20 years. Since
the beginning of the 1990s the subject has become a major research area, mainly
due to its important real-world applications in fields like video surveillance,
smart cards, database security, and internet and intranet access.
The face plays a major role in our social intercourse in conveying identity and
emotion. The human ability to recognize faces is remarkable. We can recognize
thousands of faces learned throughout our lifetime and identify familiar faces at a
glance even after years of separation. The skill is quite robust, despite large
changes in the visual stimulus due to viewing conditions, expression, aging, and
distractions such as glasses or changes in hairstyle.
Computational models of faces have been an active area of research since the
late 1980s, for they can contribute not only to theoretical insights but also to
practical applications, such as criminal identification, security systems, image
and film processing, and human-computer interaction. However, developing a
computational model of face recognition is quite difficult, because faces are
complex, multidimensional, and subject to change over time. The basic task, given
as input the visual image of a face, is to compare the input face against models of
faces stored in a library and report a match if one is found. The problem of
locating the face, distinguishing it from a cluttered background, is usually
avoided by imaging the face against a uniform background.
Face recognition is difficult for two major reasons. First, faces form a class of
similar objects; all faces consist of the same facial features in roughly the same
geometrical configuration, which makes recognition a fine discrimination task.
The second source of difficulty lies in the wide variation in the appearance of a
particular face due to changes in pose, lighting, and facial expression.
Face representation is performed using two categories of approach. The first
category is the global or appearance-based approach, which uses holistic texture
features and is applied to the whole face or a specific region of it. The second
category is the feature-based or component-based approach, which uses the
geometric relationships among facial features like the mouth, nose, and eyes.
Wiskott et al. (1997) implemented a feature-based approach with a geometrical
model of the face as a 2-D elastic graph.
The principal component analysis (PCA) method (Sirovich & Kirby, 1987; Kirby &
Sirovich, 1990), also called eigenfaces (Turk & Pentland, 1991; Pentland &
Moghaddam, 1994), is an appearance-based technique widely used for
dimensionality reduction that has achieved strong performance in face
recognition. PCA is also known as eigenspace projection: it linearly projects the
image space to a low-dimensional feature space known as the eigenspace. It finds
the eigenvectors of the covariance matrix that correspond to the directions of
the principal components of the original data.
PCA-based approaches typically include two phases:
1) Training
2) Classification
In the training phase, an eigenspace is established from the training samples using
PCA and the training face images are mapped to the eigenspace for classification.
In the classification phase, an input face is projected to the same eigenspace and
classified by an appropriate classifier.
CHAPTER:2
PRINCIPAL COMPONENT ANALYSIS
1) Mean: If we denote a set of data by X = (x1, x2, ..., xn), then the mean is

   mean(X) = (x1 + x2 + ... + xn) / n,   where n = number of data points.

4) Eigenvalue: A scalar lambda is called an eigenvalue of a square matrix A if
there exists a nonzero vector x such that

   A x = lambda x.

The nonzero vector x is called an eigenvector of A associated with the
eigenvalue lambda.
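These two definitions can be checked numerically. The following Java sketch (the matrix and data values are made-up examples, not from the project code) computes the mean of a data set and multiplies a matrix by a vector, so that A x = lambda x can be verified by hand.

```java
// Small numeric checks of the two definitions above.
// All values used with this class are illustrative examples.
class Definitions {
    // Mean of a data set X = (x1, ..., xn).
    static double mean(double[] x) {
        double sum = 0.0;
        for (double v : x) sum += v;
        return sum / x.length;
    }

    // Multiply a square matrix A by a vector x, to check A x = lambda x.
    static double[] multiply(double[][] a, double[] x) {
        double[] y = new double[a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < x.length; j++)
                y[i] += a[i][j] * x[j];
        return y;
    }
}
```

For example, for A = {{2, 0}, {0, 3}}, the vector x = (0, 1) is an eigenvector with eigenvalue 3, since A x = (0, 3) = 3 x.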
CHAPTER:3
HOW TO WORK WITH PCA
2) Subtract the mean: For PCA to work properly, we have to subtract the mean
from each of the data dimensions. The mean subtracted is the average across
each dimension. So all the x values have the mean of the x values of all the data
points subtracted, and all the y values have the mean of the y values subtracted
from them. This produces a data set whose mean is zero.
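This centering step can be sketched in Java for a 2-D data set (the class and method names are illustrative, not from the project's actual code):

```java
// Centering a 2-D data set: subtract the x-mean from every x value and the
// y-mean from every y value, so the centered data has mean zero.
class Centering {
    static double[][] subtractMean(double[][] points) {
        double mx = 0.0, my = 0.0;
        for (double[] p : points) { mx += p[0]; my += p[1]; }
        mx /= points.length;
        my /= points.length;
        double[][] centered = new double[points.length][2];
        for (int i = 0; i < points.length; i++) {
            centered[i][0] = points[i][0] - mx;
            centered[i][1] = points[i][1] - my;
        }
        return centered;
    }
}
```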
When we get the eigenvectors and eigenvalues from the previous section, we will
notice that the eigenvalues are quite different values. In fact, it turns out that the
eigenvector with the highest eigenvalue is the principal component of the data
set. In general, once eigenvectors are found from the covariance matrix, the next
step is to order them by eigenvalue, highest to lowest. This gives the components
in order of significance. We can decide to ignore the components of lesser
significance. We do lose some information, but if the eigenvalues are small, we
don't lose much. If we leave out some components, the final data set will have
fewer dimensions than the original. The feature vector is constructed by taking
the eigenvectors that we want to keep from the list of eigenvectors and forming
a matrix with these eigenvectors in the columns.
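Ordering the eigenpairs and keeping only the most significant eigenvectors can be sketched in Java as follows; this is a minimal sketch with illustrative names, assuming the eigenvalues and eigenvectors have already been computed:

```java
// Ordering eigenpairs by eigenvalue (highest first) and keeping the top k
// eigenvectors for the feature matrix, as described above.
class FeatureVector {
    // eigenvalues[i] belongs to eigenvectors[i] (each row is one eigenvector).
    // Returns the k eigenvectors with the largest eigenvalues, most
    // significant first.
    static double[][] topEigenvectors(double[] eigenvalues,
                                      double[][] eigenvectors, int k) {
        Integer[] order = new Integer[eigenvalues.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        // Sort indices so the largest eigenvalue comes first.
        java.util.Arrays.sort(order,
                (a, b) -> Double.compare(eigenvalues[b], eigenvalues[a]));
        double[][] kept = new double[k][];
        for (int i = 0; i < k; i++) kept[i] = eigenvectors[order[i]];
        return kept;
    }
}
```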
This gives us the original data solely in terms of the vectors we chose. Our
original data set had two axes, x and y, so our data was expressed in terms of
them. It is possible to express data in terms of any two axes that we like. If these
axes are perpendicular, the expression is the most efficient. This is why it is
important that eigenvectors are always perpendicular to each other. We have
changed our data from being in terms of the axes x and y to being in terms of our
two eigenvectors. When the new data set has reduced dimensionality, i.e. we
have left some of the eigenvectors out, the new data is only in terms of the
vectors that we decided to keep.

We have transformed our data so that it is expressed in terms of the patterns
between the points, where the patterns are the lines that most closely describe
the relationships in the data. This is helpful because we have now classified each
data point as a combination of the contributions from each of those lines.
Initially we had the simple x and y axes. This is fine, but the x and y values of
each data point don't really tell us exactly how that point relates to the rest of
the data. Now, the values of the data points tell us exactly where (i.e.
above/below) the trend lines the data point sits. When the transformation uses
both eigenvectors, we have simply altered the data so that it is in terms of those
eigenvectors instead of the usual axes.
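The re-expression step is just a set of dot products: each new coordinate of a point is its dot product with one chosen eigenvector. A minimal Java sketch (names and values illustrative):

```java
// Expressing a centered data point in terms of the chosen eigenvectors:
// each new coordinate is the dot product with one eigenvector (one row).
class Projection {
    static double[] project(double[][] eigenvectors, double[] point) {
        double[] coords = new double[eigenvectors.length];
        for (int i = 0; i < eigenvectors.length; i++)
            for (int j = 0; j < point.length; j++)
                coords[i] += eigenvectors[i][j] * point[j];
        return coords;
    }
}
```

With a single eigenvector (0, 1), the point (3, 4) is reduced to the one-dimensional coordinate 4, its component along that axis.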
[Figure: PCA approach for face recognition. The face database is split into a
training set and a testing set; PCA (feature extraction) produces feature vectors
for the training images and for the projection of the test image; a classifier
(Euclidean distance) then performs the decision making.]
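The Euclidean-distance classifier at the end of this pipeline can be sketched as follows; the feature vectors here are toy values, while in the real system they would come from the PCA projection:

```java
// Euclidean-distance classifier: a projected test image is assigned to the
// training feature vector it is closest to.
class NearestNeighbor {
    static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++)
            sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    // Returns the index of the closest training feature vector.
    static int classify(double[][] trainingFeatures, double[] testFeature) {
        int best = 0;
        for (int i = 1; i < trainingFeatures.length; i++)
            if (distance(trainingFeatures[i], testFeature)
                    < distance(trainingFeatures[best], testFeature))
                best = i;
        return best;
    }
}
```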
CHAPTER:4
FACE RECOGNITION
Face recognition is performed by principal component analysis (PCA). PCA is a
method of identifying patterns in data; it is mainly useful for expressing the
data in a way that highlights their similarities and differences.

A small database is created with images. Each of these images is m pixels high
and n pixels wide. For each image in the database an image vector is created, and
the vectors are put into a matrix, which is the starting point for PCA. The
covariance is found from the matrix of images, and from the covariance the
eigenvectors are found for the original set of images. The algorithm works by
treating face recognition as a two-dimensional recognition problem, taking
advantage of the fact that faces are normally upright and thus may be described
by a small set of 2-D characteristic views. Face images are projected onto a
feature space ("face space") that best encodes the variation among known face
images. The face space is defined by the eigenfaces, which are the eigenvectors
of the set of faces; they do not necessarily correspond to isolated features such
as eyes, ears, and noses. So when a new image is passed in from the blob-detected
image, the algorithm measures the difference between the new image and the
original images, not along the original axes, but along the new axes derived from
the PCA. It turns out that these axes work much better for recognizing faces,
because the PCA has given us the original images in terms of the differences and
similarities between them.
The eigenfaces approach for face recognition involves the following initialization
operations:
1. Acquire a set of training images.
2. Calculate the eigenfaces from the training set, keeping only the best M
images with the highest eigenvalues. These M images define the face
space. As new faces are experienced, the eigenfaces can be updated.
3. Calculate the corresponding distribution in M-dimensional weight space for
each known individual (training image), by projecting their face images onto
the face space.
Having initialized the system, the following steps are used to recognize new face
images:
1. Given an image to be recognized, calculate a set of weights of the M
eigenfaces by projecting it onto each of the eigenfaces.
2. Determine if the image is a face at all by checking to see if the image is
sufficiently close to the face space.
3. If it is a face, classify the weight pattern as either a known person or as
unknown.
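The decisions in steps 2 and 3 can be sketched as follows. The two distance thresholds are hypothetical tuning parameters, not values from this project:

```java
// Decision logic for steps 2 and 3 above: the distance to face space decides
// whether the input is a face at all, and the distance to the nearest known
// weight pattern decides known vs. unknown.
class FaceDecision {
    static final double FACE_SPACE_THRESHOLD = 100.0;  // assumed value
    static final double KNOWN_FACE_THRESHOLD = 10.0;   // assumed value

    // distanceToFaceSpace: reconstruction error of the image from eigenfaces.
    // distanceToNearestClass: distance of its weights to the closest person.
    static String decide(double distanceToFaceSpace,
                         double distanceToNearestClass) {
        if (distanceToFaceSpace > FACE_SPACE_THRESHOLD) return "not a face";
        if (distanceToNearestClass > KNOWN_FACE_THRESHOLD) return "unknown face";
        return "known face";
    }
}
```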
The system consists of two phases: the PCA feature-extraction phase and the
neural-network classification phase. The introduced system improves recognition
performance over conventional PCA face recognition systems.

Neural networks are among the most successful decision-making systems that can
be trained to perform complex functions in various fields of application,
including pattern recognition, optimization, identification, classification,
speech, vision, and control systems.

PCA followed by a feedforward neural network (FFNN) is called PCA-NN.
CHAPTER:5
FEEDFORWARD NEURAL NETWORK
The projection vectors of the images in the training set are calculated and then
used to train the neural network. This architecture is called PCA-NN for
eigenfaces. In this type of network, connections to neurons in the same or
previous layers are not permitted.
[Figure: feedforward neural network with an input layer, a hidden layer, and an
output layer.]
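A minimal forward pass for such a three-layer network can be sketched as follows; the layer sizes, weights, and names are illustrative only, not the project's trained network:

```java
// Minimal feedforward pass: input -> hidden -> output, with no connections
// within a layer or backwards, matching the architecture sketched above.
class FeedForward {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // weights[i][j] connects input j to neuron i; computes one layer.
    static double[] layer(double[][] weights, double[] biases, double[] input) {
        double[] out = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            double sum = biases[i];
            for (int j = 0; j < input.length; j++)
                sum += weights[i][j] * input[j];
            out[i] = sigmoid(sum);
        }
        return out;
    }

    // Full forward pass: hidden layer followed by output layer.
    static double[] forward(double[][] w1, double[] b1,
                            double[][] w2, double[] b2, double[] x) {
        return layer(w2, b2, layer(w1, b1, x));
    }
}
```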
CHAPTER:6
RESULTS
The face recognition system was tested using a set of face images. All the training
and testing images are grayscale images of size 120x128. There are 16 persons in
the face image database, each having 27 distinct pictures taken under different
conditions (illuminance, head tilt, and head scale).
The training images are chosen to be those of full head scale, with head-on
lighting, and upright head tilt. The initial training set consists of 12 face images of
12 individuals, i.e. one image for one individual (M=12). These training images are
shown in Figure 1. Figure 2 is the average image of the training set.
After principal component analysis, 11 eigenfaces are constructed based on the
M=12 training images. The eigenfaces are demonstrated in Figure 3. The
associated eigenvalues of these eigenfaces are 119.1, 135.0, 173.9, 197.3, 320.3,
363.6, 479.8, 550.0, 672.8, 843.5, and 1281.2, in order. The eigenvalues
determine the relative significance of the eigenfaces.

Figure 3. Eigenfaces
Figure 4. Training image and test images with different head tilts.
a. training image; b. test image 1; c. test image 2
If the system correctly relates the test image with its correspondence in the
training set, we say it conducts a true-positive identification (Figures 5 and 6);
if the system relates the test image with a wrong person (Figure 7), or if the
test image is from an unknown individual while the system recognizes it as one
of the persons in the database, a false-positive identification is performed; if
the system identifies the test image as unknown while there does exist a
correspondence between the test image and one of the training images, the
system conducts a false-negative detection.
The experiment results are illustrated in Table 1:

Table 1: Recognition with different head tilts
  Number of test images:                    24
  Number of true-positive identifications:  11
  Number of false-positive identifications: 13
  Number of false-negative identifications:  0
Recognition with varying illuminance:
Each training image (with head-on lighting) has two corresponding test images:
one with the light moved by 45 degrees and the other with the light moved by 90
degrees. Other conditions, such as head scale and tilt, remain the same as in the
training image. The experiment results are shown in Table 2.
Table 2: Recognition with varying illuminance
  Number of test images:                    24
  Number of true-positive identifications:  21
  Number of false-positive identifications:  3
  Number of false-negative identifications:  0
Figure 8 shows the difference between the training image and test images.
[Figures 8-10: training and test images under varying illuminance.]

Recognition with varying head scale:

Table 3: Recognition with varying head scale
  Number of test images:                    24
  Number of true-positive identifications:   7
  Number of false-positive identifications: 17
Figure 11. Training image and test images with varying head scale.
a. training image; b. test image 1: medium head scale; c. test image 2: small head scale
Figures 12 and 13 illustrate a true-positive example and a false-positive one
respectively.
CHAPTER:7
PERFORMANCE ANALYSIS AND DISCUSSIONS
After calculating the eigenfaces using PCA the projection vectors are calculated for
the training set and then used to train the neural network. This architecture is called
PCA-NN.
When a new image from the test set is considered for recognition, the image is
mapped to the eigenspace and is thereby assigned a feature vector. Each feature
vector is fed to the neural network and the network outputs are compared.
[Figure: PCA followed by a neural network, forming the PCA-NN architecture.]
CHAPTER:8
CONCLUSIONS
An eigenfaces-based face recognition approach was implemented in MATLAB. This
method represents a face by projecting the original images onto a
low-dimensional linear subspace, the "face space", defined by the eigenfaces. A
new face is compared to known face classes by computing the distance between
their projections onto face space.
CODING PART
1. Coding for loading the database.
2. Coding for taking a picture into the database.
3. Coding for running all the mathematical functions on the face, i.e. the face
recognition coding.
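A minimal sketch of item 1 might look like the following. The folder name, the .jpg extension, and the class names are assumptions for illustration, not the project's actual code; real code would go on to read each file's pixels (e.g. with javax.imageio) into image vectors:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Sketch of loading a face database: collect the image files from a folder
// so they can later be turned into image vectors for PCA.
class FaceDatabase {
    // Keep only entries that look like face images (assumed .jpg extension).
    static List<File> imageFiles(File[] entries) {
        List<File> images = new ArrayList<>();
        for (File f : entries)
            if (f.getName().toLowerCase().endsWith(".jpg")) images.add(f);
        return images;
    }

    static List<File> load(String folder) {
        File[] entries = new File(folder).listFiles();
        return imageFiles(entries == null ? new File[0] : entries);
    }
}
```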