Facial Recognition and Detection
▪ In this project we present facial detection and facial recognition, which are accomplished using the different techniques described below.
▪ We explain the algorithms used, such as Haar cascades, Eigenface, Fisherface and Local Binary Pattern Histogram (LBPH).
▪ Next, the methodology and the results of the project are described.
▪ A discussion of the challenges and their resolutions is provided.
▪ Finally, a conclusion is provided.
SYSTEM ANALYSIS
• Existing System
• Proposed System
EXISTING SYSTEM
• Previously, security guards had to identify each person manually, which took too much time and was not highly accurate. This approach is not cost-efficient and is prone to human error, which can put valuable assets and data at risk.
PROPOSED SYSTEM
• A facial recognition system performs identification completely independently; it not only takes seconds but is also highly accurate.
• We want a system that can easily be integrated and does not require spending additional money.
• This provides a secure and reliable means of automated identification that is faster than pre-existing methods.
SOFTWARE AND HARDWARE REQUIREMENTS
Software Requirements:
• Operating Systems: Windows 10
• Programming Language: Python (PyCharm IDE)
• Database: Microsoft SQL
Hardware Requirements:
• Hard Disk: 1TB
• RAM: 8GB
• Processor: Intel Core i5
• Accessories: Webcam
SYSTEM DESIGN
• Architecture
• Entity-Relationship(E-R) diagram
• UML Diagrams
ARCHITECTURE:
E-R Diagram
UML Diagrams
In UML-Diagrams we are going to discuss the following:
• Use-Case Diagram
• Activity Diagram
• Class Diagram
• Sequence Diagram
• State Chart Diagram
• Component Diagram
Use-Case (Webcam-Database)
Use-Case (Admin-Database)
Class Diagram
Activity Diagram
Sequence Diagram
State-chart Diagram
Component Diagram
IMPLEMENTATION
Types of Modules:
Local Binary Pattern Histogram (LBPH) Algorithm
• The combination of LBP with histograms of oriented gradients (HOG) was introduced, which improved its performance on certain datasets. The image is divided into cells (4 x 4 pixels). Moving in a clockwise or counter-clockwise direction, the surrounding pixel values are compared with the central pixel.
• The intensity (luminosity) value of each neighbour is compared with the centre pixel. Depending on whether the difference is greater than or less than 0, a 1 or a 0 is assigned to that location. The result is an 8-bit value for the cell.
• The advantage of this technique is that even if the luminosity of the image changes, the result remains the same.
• Histograms are used in larger cells to find the frequency of occurrences of values, making the process faster. By analysing the results in a cell, edges can be detected where the values change.
• By computing the values of all cells and concatenating the histograms, feature vectors can be obtained (see the sketch after this list). Training images are processed and stored with an ID attached. Input images are processed in the same way and compared with the dataset, and a distance is obtained. By setting a threshold, it can be determined whether the face is known or unknown.
• Eigenface and Fisherface compute the dominant features of the whole training set, while LBPH analyses each image individually.
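Below is a minimal NumPy sketch of the per-cell LBP computation described above. The helper names, the 8-neighbour ordering, the 8 x 8 grid of cells and the 256-bin histograms are illustrative assumptions, not values taken from the project; in practice OpenCV's cv2.face.LBPHFaceRecognizer_create() (from the opencv-contrib-python package) performs the encoding, training and distance-based prediction internally.

```python
import numpy as np

def lbp_codes(gray):
    """Compute the basic 8-neighbour LBP code for each interior pixel.

    gray: 2-D uint8 array (grayscale image). Border pixels are skipped
    for simplicity, so the output is 2 pixels smaller in each dimension.
    """
    g = gray.astype(np.int16)
    center = g[1:-1, 1:-1]
    # 8 neighbours visited in a fixed (clockwise) order around the centre
    neighbours = [
        g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:],  # top-left, top, top-right
        g[1:-1, 2:],                            # right
        g[2:, 2:], g[2:, 1:-1], g[2:, :-2],     # bottom-right, bottom, bottom-left
        g[1:-1, :-2],                           # left
    ]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # assign 1 where the neighbour is >= the centre pixel, else 0
        codes |= (n >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray, grid=(8, 8)):
    """Concatenate per-cell histograms of LBP codes into one feature vector."""
    codes = lbp_codes(gray)
    feats = []
    for row in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist)
    return np.concatenate(feats).astype(np.float32)
```

Feature vectors produced this way can be compared with those of the enrolled images (e.g. by chi-square or Euclidean distance), with a threshold deciding between a known and an unknown face.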
EigenFace Algorithm
• Eigenface is based on Principal Component Analysis (PCA), which extracts features by analysing a set of training images. It is important that the images are taken under the same lighting conditions and that the eyes are aligned in each image.
• Images used in this method must contain the same number of pixels and should be
in grayscale.
• All the images in the dataset are stored in a single matrix, with one column per image.
• The matrix is averaged (normalised) to obtain an average human face. By subtracting the average face from each image vector, the features unique to each face are computed.
• In the resulting matrix, each column represents the difference between one face and the average human face (see the sketch after this list).
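A minimal NumPy sketch of the Eigenface computation described above, assuming equally sized grayscale images; the function names and the number of components kept are illustrative choices, not values taken from the project.

```python
import numpy as np

def train_eigenfaces(images, num_components=20):
    """Compute eigenfaces from a list of equally sized grayscale images.

    images: list of 2-D uint8 arrays, all the same shape.
    Returns the mean face and the top `num_components` eigenfaces
    (rows of a matrix), onto which new faces can be projected.
    """
    # stack each flattened image as one column of the data matrix
    X = np.stack([im.astype(np.float64).ravel() for im in images], axis=1)
    mean_face = X.mean(axis=1, keepdims=True)
    A = X - mean_face                       # subtract the average face
    # SVD gives the principal directions (eigenfaces) as columns of U
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    eigenfaces = U[:, :num_components].T
    return mean_face.ravel(), eigenfaces

def project(face, mean_face, eigenfaces):
    """Project a face onto the eigenface space to get its feature weights."""
    return eigenfaces @ (face.astype(np.float64).ravel() - mean_face)
```

New faces are projected into this space and compared with the projections of the training faces, for example by Euclidean distance, to classify them.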
FisherFace Algorithm
• The Fisherface technique builds upon Eigenface and is based on Linear Discriminant Analysis (LDA), a linear discriminant technique used for pattern recognition. Unlike PCA, it uses class labels as well as the data points themselves.
• When reducing dimensions, PCA looks for the greatest variance, while LDA, using the labels, looks for a dimension such that projecting onto it maximises the difference between the class means, normalised by their variance.
• LDA maximises the ratio of the between-class scatter to the within-class scatter. Because of this, different lighting conditions in the images have a limited effect on the classification process when the LDA technique is used.
• Eigenface maximises the total variation, while Fisherface maximises the mean distance between different classes and minimises the variation within each class.
• Furthermore, it requires less space and is the fastest algorithm in this project. Because of this, PCA is more suitable for representing a set of data, while LDA is better suited for classification (a minimal training sketch follows this list).
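A minimal sketch of training an LDA-based Fisherface recognizer via OpenCV's contrib module (opencv-contrib-python). The random synthetic images and labels are placeholders for the project's actual, equally sized grayscale face crops.

```python
import cv2
import numpy as np

# Tiny synthetic stand-in data so the sketch runs end-to-end; in the real
# project these would be cropped, equally sized grayscale face images.
rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, (100, 100), dtype=np.uint8) for _ in range(6)]
labels = np.array([0, 0, 0, 1, 1, 1])  # one class id per image

# Fisherface (LDA-based) recognizer from opencv-contrib-python
recognizer = cv2.face.FisherFaceRecognizer_create()
recognizer.train(faces, labels)

# predict() returns the closest class label and a distance-like confidence
# (lower means a better match); a threshold separates known from unknown faces.
label, confidence = recognizer.predict(faces[0])
print(label, confidence)
```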
HaarCascade Algorithm
• Haar Cascade is a machine learning object-detection algorithm used to identify objects in an image or video. It is based on the concept of features proposed by Paul Viola and Michael Jones in their 2001 paper "Rapid Object Detection using a Boosted Cascade of Simple Features".
• It is a machine learning based approach where a cascade function is trained from a lot of
positive and negative images. It is then used to detect objects in other images.
• The algorithm has four stages:
• Haar Feature Selection
• Creating Integral Images
• Adaboost Training
• Cascading Classifiers
• It is well known for being able to detect faces and body parts in an image, but it can be trained to identify almost any object (a minimal detection sketch follows).
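A minimal OpenCV sketch of frontal-face detection using the pre-trained Haar cascade that ships with the library; the webcam index, scaleFactor and minNeighbors values are typical starting points, not parameters taken from the project.

```python
import cv2

# Load the pre-trained frontal-face Haar cascade bundled with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Read one frame from the webcam (device 0).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    # The cascade operates on single-channel images, so convert to grayscale.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Draw a green rectangle around each detected face and save the result.
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.png", frame)
```

The detected face regions can then be cropped, resized and passed to the LBPH, Eigenface or Fisherface recognizer for identification.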
TESTING
Software testing is the process of executing a program or application with the intent of finding errors.
We are testing this project using:
• Unit Testing (a minimal sketch is shown after this list)
• Integration Testing
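As an illustration of the unit-testing level, a minimal unittest sketch is shown below; the test cases (the cascade file loads, no detections on a blank frame) are hypothetical examples, not the project's actual test suite.

```python
import unittest
import cv2
import numpy as np

class HaarCascadeUnitTest(unittest.TestCase):
    """Illustrative unit tests for the face-detection component."""

    def setUp(self):
        # Load the bundled frontal-face cascade before each test.
        self.cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def test_cascade_loads(self):
        # The classifier should not be empty if the XML file loaded correctly.
        self.assertFalse(self.cascade.empty())

    def test_no_faces_on_blank_frame(self):
        # An all-black frame should produce zero detections.
        blank = np.zeros((240, 320), dtype=np.uint8)
        faces = self.cascade.detectMultiScale(blank)
        self.assertEqual(len(faces), 0)

if __name__ == "__main__":
    unittest.main()
```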
SCREENSHOTS
CONCLUSION
• The computational models implemented in this project were chosen after extensive research, and the successful testing results confirm that these choices were reliable.
• The system was tested under very robust conditions in this experimental study, and it is envisaged that real-world performance will be far more accurate. The fully automated frontal-view face detection system displayed virtually perfect accuracy, and further work need not be conducted in this area.
• The most suitable real-world applications for face detection and recognition systems are mugshot matching and surveillance.
THANK YOU