A PROJECT REPORT
on
“Face Recognition”
Submitted to
KIIT Deemed to be University
BACHELOR’S DEGREE IN
COMPUTER SCIENCE & ENGINEERING
BY
Siddharth Himanshee
Priyanshu Shekhar
Harsh Kumar Sharma
Aayushman Attreya
TABLE OF CONTENTS
Acknowledgments .................................................. 2
Table of Contents ................................................ 3
Abstract ......................................................... 4
I. Introduction .................................................. 4
D. Application ................................................... 6
E. Objectives .................................................... 6
F. Motivation .................................................... 6
B. Proposed Workflow ............................................. 8
A. Dataset Details ............................................... 10
IV. Results ...................................................... 11
V. Conclusion .................................................... 13
ABSTRACT
Imagine unlocking your phone with a glance or having your favorite store greet you by name as you walk in. These are just a few possibilities brought to life by facial recognition technology. Facial recognition is a rapidly evolving field in computer vision that allows computers to identify or verify a person's identity based on their face. Just as humans effortlessly recognize friends and family, facial recognition systems aim to achieve the same feat using sophisticated algorithms and vast datasets. This technology works in two stages:
Detection: First, the system locates and isolates faces within an image or video frame.
Recognition: Unique facial features are then extracted and compared against a database of known faces for identification. These features can be distances between the eyes, nose shape, or even patterns learned from training data.
Key application areas include:
Security: Access control, surveillance, and criminal identification.
Consumer Technology: Unlocking devices, personalized experiences in stores, and photo tagging on social media.
Law Enforcement: Identifying missing persons or suspects from video footage.
Keywords: Face recognition, face detection, feature extraction, computer vision
I. INTRODUCTION
A. FACE RECOGNITION SYSTEM:
Face recognition technology has become a prominent area in computer vision due to its wide range of applications.
This project explores the development of a face recognition system capable of identifying individuals from images or
videos.
The system typically involves two key stages: face detection and recognition. In the first stage, the system locates and
isolates faces within the input image or video frame. Various techniques, such as skin tone detection or feature
extraction, can be employed for this purpose.
Once a face is detected, the recognition stage extracts unique features that represent the individual. These features can
be geometric distances between facial landmarks or patterns learned from a training dataset using machine learning
algorithms. The extracted features are then compared against a database of known faces to identify the individual in
the image.
This project will detail the chosen methods for face detection, feature extraction, and recognition. The implemented
system will be evaluated on a benchmark dataset to assess its accuracy and performance. The impact of factors like
pose variation, illumination changes, and occlusion on recognition accuracy will also be discussed.
While facial recognition offers undeniable benefits, it's crucial to acknowledge the ongoing discussions surrounding
its use. Concerns regarding privacy, potential bias in algorithms, and the possibility of misuse necessitate careful
consideration and regulations.
This overview provides a foundational understanding of facial recognition systems. The sections that follow explore the technical intricacies, the evolving landscape of applications, and the critical discussions shaping the responsible use of this powerful technology.
Facial Recognition: Once a face is pinpointed, the system extracts unique features to create a digital fingerprint of the face. These features can be:
Geometric: Distances between facial landmarks like eyes, nose, and mouth.
Learned Patterns: Machine learning algorithms analyze training datasets to identify patterns specific to each face.
With a digital representation in hand, the system then performs a comparison against a database of known faces. This
matching process determines whether the individual is recognized or not. It's like comparing a fingerprint scan to a
database of registered individuals.
Infrared Facial Recognition: Facial recognition systems often use infrared cameras to capture images, especially in low-light conditions. Infrared light is invisible to the human eye but allows the camera to "see" heat patterns, which can be helpful for facial recognition.
3D Scanning: Some facial recognition systems use 3D scanning techniques to create a more detailed map of a person's face. This non-invasive process projects light patterns onto the face and analyzes the reflection to create a depth map.
Infrared Facial Imaging: This is a broader term encompassing any use of infrared cameras to capture facial
images.
Depth Sensing in Facial Recognition: This refers to techniques that go beyond a simple 2D image and capture
depth information for more accurate recognition.
D. APPLICATION:
Diagnosing genetic disorders
Facilitating mental therapy
Checkout-free software solutions
Loyalty programs
Personalized shopping experience
E. OBJECTIVES:
Major objectives:
Save images to the database
Detect faces
Match detected faces against the database
Recognize faces
Provide accurate information about the recognized individuals
F. MOTIVATION:
Face detection is a computer technology that relies on artificial intelligence (AI) and machine learning (ML). It is used to detect human faces in images or videos. Thanks to face detection algorithms, it is possible to detect faces in an image or video regardless of the camera angle, the position of the subject's head, lighting, or skin color.
When this technology is combined with biometric security systems (especially facial recognition), it makes it possible
to track people's faces in real time. Face detection is usually the first step in apps that use facial tracking, analysis, and
recognition, and it dramatically affects how the next steps in the app will work.
Face detection helps with facial analysis as well. It helps to figure out which parts of a video or picture should be
focused on to determine gender, age, or feelings. In the same way, face detection data is built into the algorithms of
facial recognition systems, which create "faceprint" maps of facial features. Face detection assists in identifying the
elements of the video or image that are necessary to generate a faceprint.
Facial recognition systems are like detectives for faces. They analyze an image or video to identify a person based on
their unique facial characteristics. Here's a breakdown of how they work:
Imagine searching for a specific person in a crowded room. Facial recognition does something similar: it first needs to locate and isolate the faces themselves. Here are some common techniques used for face detection:
Skin Tone Detection: The system might look for pixels with colors within the typical range of human skin tones. This
is a simple but less robust approach.
Haar Feature Cascade Classifiers: These are machine learning models trained to identify specific features like edges
and corners that often form patterns around eyes, nose, and mouth.
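The skin tone detection approach above can be sketched in a few lines. The RGB thresholds used here are illustrative rule-of-thumb values, not figures from this report:

```python
import numpy as np

def skin_tone_mask(image_rgb, r_min=95, g_min=40, b_min=20):
    """Return a boolean mask of pixels whose colors fall in a simple
    RGB rule-of-thumb range for human skin tones (thresholds are
    illustrative assumptions)."""
    r = image_rgb[..., 0].astype(int)
    g = image_rgb[..., 1].astype(int)
    b = image_rgb[..., 2].astype(int)
    spread = (np.maximum(np.maximum(r, g), b)
              - np.minimum(np.minimum(r, g), b))
    # Skin pixels tend to be red-dominant with some color spread
    return ((r > r_min) & (g > g_min) & (b > b_min)
            & (r > g) & (r > b) & (spread > 15))

# Toy 2x2 image: left column skin-like, right column blue/green
img = np.array([[[200, 120, 90], [10, 10, 200]],
                [[180, 110, 80], [0, 255, 0]]], dtype=np.uint8)
print(skin_tone_mask(img))
```

As the report notes, this is a simple but less robust approach: lighting shifts and non-skin objects in the same color range defeat it easily.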
Once a face is isolated, the system extracts a unique "facial signature" to distinguish it from others. Here's where
things get interesting:
Facial Landmark Detection: The system identifies key points on the face like the corners of the eyes, tip of the nose,
and edge of the mouth. Measuring the distances and ratios between these landmarks creates a basic descriptor of the
face.
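A minimal sketch of such a landmark-based descriptor follows; the landmark names and coordinates are hypothetical placeholders, and the ratio is normalized by the inter-eye distance for scale invariance:

```python
import math

def landmark_descriptor(landmarks):
    """Build a simple geometric descriptor from named landmark points
    (hypothetical keys): the nose-to-mouth distance normalized by the
    inter-eye distance, so the descriptor is invariant to image scale."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    eye = dist(landmarks["left_eye"], landmarks["right_eye"])
    nose_mouth = dist(landmarks["nose_tip"], landmarks["mouth_center"])
    return (1.0, nose_mouth / eye)  # ratios relative to eye distance

face = {
    "left_eye": (30, 40), "right_eye": (70, 40),
    "nose_tip": (50, 60), "mouth_center": (50, 80),
}
print(landmark_descriptor(face))  # (1.0, 0.5)
```

Real systems measure many more such ratios, but the principle is the same: distances between landmarks, normalized to remove scale.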
Feature Extraction: More sophisticated techniques go beyond landmarks. Deep learning algorithms can analyze the
entire face image and extract complex patterns specific to each individual.
Facial Recognition Database: The system compares the extracted features against a database of known faces. This
database can be local (for unlocking your phone) or stored on a server (for security systems).
Matching Algorithm: A sophisticated algorithm calculates the similarity between the extracted features and faces in
the database. The face with the closest match is considered the identified person.
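The matching step above can be sketched with cosine similarity over a toy database of feature vectors. The names, vectors, and acceptance threshold here are illustrative assumptions, not details from the report:

```python
import numpy as np

def identify(probe, database, threshold=0.8):
    """Return the name of the closest database entry by cosine
    similarity, or None if no entry reaches the threshold."""
    best_name, best_score = None, -1.0
    for name, vec in database.items():
        score = np.dot(probe, vec) / (np.linalg.norm(probe)
                                      * np.linalg.norm(vec))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

db = {
    "alice": np.array([0.9, 0.1, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.5]),
}
print(identify(np.array([0.88, 0.15, 0.25]), db))  # prints "alice"
```

The threshold is what separates "recognized" from "not recognized": a probe that matches nothing in the database well enough returns None instead of the nearest name.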
Facial recognition is impressive, but it's not perfect. Factors like lighting variations, pose changes, and even facial
expressions can affect accuracy. Additionally, bias in training data can lead to unfair recognition across different
demographics.
B. PROPOSED WORKFLOW:
The recognition stage typically uses an intensity (grayscale) representation of the image, compressed by the 2D-DCT, for further processing. This grayscale version contains the intensity values of the skin pixels. A block diagram illustrates the proposed face recognition technique. The second stage uses a self-organizing map (SOM) with an unsupervised learning technique to classify vectors into groups, determining whether the subject in the input image is "present" or "not present" in the image database. If the subject is classified as present, the best-match image found in the training database is displayed as the result; otherwise, the result shows that the subject is not found in the image database.
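The 2D-DCT compression step of this workflow can be sketched with a naive orthonormal DCT-II; a real system would call an optimized library routine, and the 8×8 block size and the choice to keep only the top-left 4×4 low-frequency coefficients are illustrative:

```python
import numpy as np

def dct2(block):
    """Naive 2D DCT-II of a square grayscale block via the
    orthonormal DCT basis matrix (a sketch of the compression step)."""
    n = block.shape[0]
    k = np.arange(n)
    # 1D DCT-II basis: row = frequency index, column = sample index
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)   # DC row gets the smaller scale
    d = scale[:, None] * basis
    return d @ block @ d.T

block = np.random.default_rng(0).uniform(0, 255, (8, 8))
coeffs = dct2(block)
# Energy compaction: most information sits in the low frequencies,
# so keeping the top-left 4x4 coefficients compresses the block
compressed = coeffs[:4, :4]
print(compressed.shape)  # (4, 4)
```

Because the transform is orthonormal, it preserves the block's total energy while concentrating it in the low-frequency corner, which is what makes discarding the remaining coefficients a reasonable compression.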
A. DATASET DETAILS:
The CASIA-WebFace dataset is used for face verification and face identification tasks. It contains 494,414 face images of 10,575 real identities collected from the web.
The MS-Celeb-1M dataset is a large-scale face recognition dataset consisting of 100K identities, each with about 100 facial images.
• Jupyter Notebook: Jupyter Notebook is a popular open-source web application used for creating and
sharing documents that include live code, equations, visualizations, and text. It is commonly used for
tasks such as data cleaning, statistical modeling, and machine learning.
• Noise Removal and Sharpening: Unwanted elements in the data can be removed using filters, and
images can be sharpened. Grayscale images are often used as input for this process.
• Erosion and Dilation: These operations are typically applied to binary images but can also be used with grayscale images, depending on the variant. On binary images, erosion shrinks the boundaries of foreground regions, while dilation expands them.
• Negation: Negation involves creating a negative image, where the lightest areas in the original image
appear darkest and vice versa. This technique is commonly used in photography.
• Subtraction: Image subtraction involves taking the digital pixel values of one image and subtracting them from another. This process can, for example, help to isolate a bright region of interest from the rest of the image.
• Threshold: Thresholding is a method of segmenting images by converting grayscale images into binary images.
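The thresholding and erosion operations described in the bullets above can be sketched in pure numpy; the threshold value and the 3×3 structuring element are illustrative choices:

```python
import numpy as np

def threshold(gray, t=128):
    """Binarize a grayscale image: True where intensity exceeds t."""
    return gray > t

def erode(binary):
    """Erode a binary image with a 3x3 square structuring element:
    a pixel survives only if its full 3x3 neighborhood is set."""
    padded = np.pad(binary, 1, constant_values=False)
    out = np.ones_like(binary)
    for di in range(3):
        for dj in range(3):
            out &= padded[di:di + binary.shape[0],
                          dj:dj + binary.shape[1]]
    return out

gray = np.zeros((5, 5), dtype=np.uint8)
gray[1:4, 1:4] = 200          # bright 3x3 square on a dark background
binary = threshold(gray)
print(erode(binary).sum())    # prints 1: only the center pixel survives
```

Dilation is the dual operation: replace the `&=` accumulation with `|=` (starting from all-False) and the same loop grows regions instead of shrinking them.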
IV. RESULTS
A. MODEL BUILDING:
B. MODEL COMPILATION:
C. MODEL TRAINING:
In order to test the system, some face images are required. Many standard face databases exist for testing and rating face detection algorithms. A standard database of face imagery is essential to supply consistent imagery to algorithm developers and a sufficient number of images for testing those algorithms. Without such databases and standards, there is no way to accurately evaluate or compare facial recognition algorithms. All the experiments described here were executed mainly on the faces provided by the ORL face database.
G. PLOT PERFORMANCE:
V. CONCLUSION
In conclusion, the facial recognition system has successfully demonstrated the potential of facial recognition technology. By achieving accurate face detection, robust feature extraction, and reliable recognition, this project has contributed to the advancement of the field. As facial recognition continues to evolve, responsible development and ethical considerations will be paramount. This system serves as a stepping stone towards a future where facial recognition technology can be a powerful tool for identification and verification, while ensuring privacy and fairness for all.
Effective Face Detection: The system successfully employed techniques such as infrared and 3D imaging to isolate faces within images or videos, paving the way for recognition.
Accurate Feature Extraction: The project explored techniques such as the 2D-DCT to extract unique facial signatures, creating a digital representation for identification.
Reliable Recognition: The system compared these features against a database of known faces, achieving good recognition accuracy under controlled conditions.
We would like to thank our respected teacher, Professor Ramakant Parida, for his valuable guidance.
Thank you.