Real Time Face Attendance System Project
DEEP LEARNING
REAL TIME FACE ATTENDANCE SYSTEM
A project report submitted in partial fulfilment of the requirements for the award
of Bachelor of Engineering ________
Engineering
I hereby declare that this project report is based on my original work except for citations and
quotations which have been duly acknowledged. I also declare that it has not been previously
and concurrently submitted for any other degree or award at UTAR or other institutions.
Signature :
Name :
ID No. :
Date :
APPROVAL FOR SUBMISSION
I certify that this project report entitled “REAL TIME FACE ATTENDANCE
SYSTEM” was prepared by __________ and has met the required standard for
submission in partial fulfilment of the requirements for the award of Bachelor of
Engineering _______________________________
Approved by,
Signature :
Supervisor :
Date :
ACKNOWLEDGEMENTS
I would like to thank everyone who has contributed to the successful completion of this
project. First, I would like to express my utmost gratitude to my research supervisor,
_______________, who, in spite of being extraordinarily busy with his/her duties, took
time to give invaluable advice and guidance throughout the development of the research.
Last but not least, I am grateful for the unselfish cooperation and assistance that my
friends have given me to complete this task.
ABSTRACT
A face is the representation of one’s identity. Hence, we have proposed an automated
student attendance system based on face recognition. Face recognition systems are very
useful in real-life applications, especially in security control systems: airport protection
systems use face recognition to identify suspects, and the FBI (Federal Bureau of
Investigation) uses face recognition for criminal investigations. In our proposed approach,
video framing is first performed by activating the camera through a user-friendly
interface. The face ROI is detected and segmented from the video frame by using the
Viola-Jones algorithm. In the pre-processing stage, the images are scaled if necessary in
order to prevent loss of information. Median filtering is applied to remove noise, followed
by conversion of colour images to grayscale images. After that, contrast-limited adaptive
histogram equalization (CLAHE) is applied to enhance the contrast of the images. In the
face recognition stage, enhanced local binary pattern (LBP) and principal component
analysis (PCA) are applied respectively in order to extract the features from the facial
images. In our proposed approach, the enhanced local binary pattern outperforms the
original LBP by reducing the illumination effect and increasing the recognition rate.
Next, the features extracted from the test images are compared with the features extracted
from the training images. The facial images are then classified and recognized based on
the best result obtained from the combination of the two algorithms, enhanced LBP and
PCA. Finally, the attendance of the recognized student is marked and saved in an Excel
file. A student who is not registered is able to register on the spot, and a notification is
given if a student signs in more than once. The average recognition accuracy is 100 % for
good-quality images, 94.12 % for low-quality images, and 95.76 % for the Yale face
database when two images per person are trained.
TABLE OF CONTENTS

CHAPTER

1 INTRODUCTION
1.1 Background
1.2 Problem Statement
1.3 Aims and Objectives
1.4 Thesis Organization

2 LITERATURE REVIEW
2.1 Student Attendance System
2.2 Face Detection
2.2.1 Viola-Jones Algorithm
2.3 Pre-Processing
2.4 Feature Extraction
2.4.1 Types of Feature Extraction
2.5 Feature Classification and Face Recognition
2.6 Evaluation

3 METHODOLOGY
3.1 Methodology Flow
3.2 Input Images
3.2.1 Limitations of the Images
3.3 Face Detection
3.3.1 Pre-Processing
3.3.1.1 Scaling of Image
3.3.1.2 Median Filtering
3.3.1.3 Conversion to Grayscale Image
3.3.1.4 Contrast Limited Adaptive Histogram Equalization
3.4 Feature Extraction
3.4.1 Working Principle of Original LBP
3.4.2 Working Principle of Proposed LBP
3.4.3 Working Principle of PCA
3.4.4 Feature Classification
3.4.5 Subjective Selection Algorithm and Face Recognition

REFERENCES
LIST OF SYMBOLS

χ²    Chi-square statistic
d     distance
x     input feature points
y     trained feature points
m_x   mean of x
S_x   covariance matrix of x
CHAPTER 1
INTRODUCTION
The main objective of this project is to develop a face recognition-based automated
student attendance system. In order to achieve better performance, the test images and
training images of this proposed approach are limited to frontal and upright facial
images that contain a single face only. The test images and training images have to
be captured with the same device to ensure there is no quality difference. In addition,
the students have to be registered in the database to be recognized. The enrolment can
be done on the spot through the user-friendly interface.
1.1 Background
Face recognition is crucial in daily life in order to identify family, friends or someone
we are familiar with. We might not realize that several steps are actually taken in
order to identify human faces. Human intelligence allows us to receive information
and interpret it in the recognition process. We receive information through the image
projected into our eyes, specifically by the retina, in the form of light. Light is a form
of electromagnetic wave which is radiated from a source onto an object and projected
to human vision. Robinson-Riegler, G., & Robinson-Riegler, B. (2008) mentioned that
after visual processing done by the human visual system, we actually classify the shape,
size, contour and texture of the object in order to analyse the information. The analysed
information is then compared to other representations of objects or faces that exist in
our memory for recognition. In fact, it is a hard challenge to build an automated system
with the same capability as a human to recognize faces. Humans, however, have limited
memory; in universities, for example, there are many students of different races and
genders, and it is impossible to remember every individual face without making
mistakes. In order to overcome human limitations, computers with almost limitless
memory, high processing speed and power are used in face recognition systems.
Work on face recognition began in the 1960s. Woody Bledsoe, Helen Chan Wolf
and Charles Bisson introduced a system which required the administrator to locate the
eyes, ears, nose and mouth in images. The distances and ratios between the located
features and common reference points were then calculated and compared. The
studies were further enhanced by Goldstein, Harmon, and Lesk in 1970 by using other
features such as hair colour and lip thickness to automate the recognition. In 1988,
Kirby and Sirovich first suggested principal component analysis (PCA) to solve the face
recognition problem. Many studies on face recognition have been conducted
continuously until today (Ashley DuVal, 2012).
1.2 Problem Statement

Traditional student attendance marking techniques often cause a lot of trouble. The
face recognition student attendance system emphasizes simplicity by eliminating
classical attendance marking techniques such as calling student names or checking
identification cards. These methods not only disturb the teaching process but also cause
distraction for students during exam sessions. Apart from calling names, an attendance
sheet may be passed around the classroom during the lecture session. A class with a
large number of students might find it difficult to have the attendance sheet passed
around the class. Thus, a face recognition student attendance system is proposed in
order to replace the manual signing of attendance, which is burdensome and distracts
students. Furthermore, the face recognition-based automated student attendance system
is able to overcome the problem of fraudulent sign-ins, and lecturers do not have to
count the number of students several times to ensure their presence.
The paper by Zhao, W. et al. (2003) has listed the difficulties of facial
identification. One of these difficulties is the discrimination between known and
unknown images. In addition, the paper by Pooja G.R. et al. (2010) found that the
training process for a face recognition student attendance system is slow and
time-consuming. Furthermore, the paper by Priyanka Wagh et al. (2015) mentioned
that different lighting and head poses are often the problems that degrade the
performance of face recognition-based student attendance systems.
[Image acquisition from video → Face detection → Feature extraction → Face recognition → Attendance]
Figure 1.1 Block Diagram of the General Framework
Chapter 2 includes a brief review of the approaches and studies that have been done
previously by other researchers, whereas Chapter 3 describes the proposed methods and
approaches used to obtain the desired output. The results of the proposed approach are
presented and discussed in Chapter 4. The conclusion, as well as some
recommendations, is included in Chapter 5.
CHAPTER 2
LITERATURE REVIEW
The difference between face detection and face recognition is often misunderstood. Face
detection determines only the face segment or face region in an image, whereas face
recognition identifies the owner of the facial image. S. Aanjanadevi et al. (2017) and
Wei-Lun Chao (2007) presented a few factors which cause face detection and face
recognition to encounter difficulties. These factors consist of background, illumination,
pose, expression, occlusion, rotation, scaling and translation. The definition of each
factor is tabulated in Table 2.2.
Table 2.2 Factors Causing Face Detection Difficulties (S. Aanjanadevi et al., 2017)

Background: Variation of the background and environment around people in the image, which affects the efficiency of face recognition.
Illumination: Variation caused by different lighting environments, which degrades facial feature detection.
There are a few face detection methods that previous researchers have worked on.
However, most of them used frontal upright facial images which consist of only one
face. The face region is fully exposed without obstacles and free from spectacles.
Akshara Jadhav et al. (2017) and P. Arun Mozhi Devan et al. (2017)
suggested the Viola-Jones algorithm for face detection in a student attendance system.
They concluded that, out of methods such as face geometry-based methods, feature
invariant methods and machine learning-based methods, the Viola-Jones algorithm is
not only fast and robust, but also gives a high detection rate and performs better in
different lighting conditions. Rahul V. Patil and S. B. Bangar (2017) also agreed that
the Viola-Jones algorithm gives better performance in different lighting conditions. In
addition, in the paper by Mrunmayee Shirodkar et al. (2015), they mentioned that the
Viola-Jones algorithm is able to eliminate the issues of illumination as well as scaling
and rotation. Moreover, Naveed Khan Balcoh (2012) proposed that the Viola-Jones
algorithm is the most efficient among algorithms such as the AdaBoost algorithm, the
FloatBoost algorithm, neural networks, the S-AdaBoost algorithm, Support Vector
Machines (SVM) and the Bayes classifier.
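Part of why Viola-Jones is so fast is the integral image (summed-area table), which lets any rectangular pixel sum, and therefore any Haar-like feature, be evaluated in constant time. The sketch below is an illustrative reconstruction of that core idea in NumPy, not the full cascade described in the papers above; the function names are our own.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of pixels in the h x w rectangle whose top-left corner is (r, c),
    obtained from only four table lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)
```

Once the table is built, every feature costs the same few lookups regardless of its size, which is what makes scanning thousands of windows per frame feasible.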
Varsha Gupta and Dipesh Sharma (2014) studied Local Binary Pattern (LBP),
the AdaBoost algorithm, local successive mean quantization transform (SMQT)
features, the sparse network of winnows (SNOW) classifier method and neural
network-based face detection methods in addition to the Viola-Jones algorithm. They
concluded that the Viola-Jones algorithm has the highest speed and highest accuracy
among all the methods. Other methods, for instance Local Binary Pattern and SMQT
features, are computationally simple and able to deal with the illumination problem,
but their overall performance is weaker than the Viola-Jones algorithm for face
detection. The advantages and disadvantages of the methods are studied and tabulated
in Table 2.3.
Table 2.3 Advantages & Disadvantages of Face Detection Methods (Varsha Gupta and Dipesh Sharma, 2014)

Viola-Jones algorithm
Advantages: 1. High detection speed. 2. High accuracy.
Disadvantages: 1. Long training time. 2. Limited head pose. 3. Not able to detect dark faces.
Burak Ozen (2017) and Chris McCormick (2013) mentioned that AdaBoost,
also known as ‘Adaptive Boosting’, is a well-known boosting technique in which
multiple “weak classifiers” are combined into a single “strong classifier”. The training
set for each new classifier is weighted according to the results of the previous classifier,
and each classifier is assigned a weight that determines how much it contributes to the
final decision.
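The reweighting loop just described can be sketched with decision stumps as the weak classifiers. This is a generic toy AdaBoost for illustration, assuming nothing about the cited papers' implementations; the stump search and all names are our own.

```python
import numpy as np

def adaboost_train(X, y, n_rounds=10):
    """Minimal AdaBoost with decision stumps (threshold on one feature).
    y must be in {-1, +1}. Returns a list of (feature, threshold, polarity, alpha)."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # sample weights, updated each round
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # exhaustive search for the stump with the lowest weighted error
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, f] - t) >= 0, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, pred)
        err, f, t, pol, pred = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # classifier weight
        w *= np.exp(-alpha * y * pred)          # emphasize misclassified samples
        w /= w.sum()
        ensemble.append((f, t, pol, alpha))
    return ensemble

def adaboost_predict(ensemble, X):
    """Weighted vote of all weak classifiers."""
    score = np.zeros(len(X))
    for f, t, pol, alpha in ensemble:
        score += alpha * np.where(pol * (X[:, f] - t) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)
```

The two lines updating `w` are the heart of the method: samples the current stump gets wrong gain weight, so the next stump is forced to focus on them.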
However, false detections may occur, and they have to be removed manually
based on human inspection. Figure 2.3 shows an example of a false face detection
(circled in blue).
2.3 Pre-Processing
Subhi Singh et al. (2015) suggested cropping the detected face and converting the
colour image to grayscale for pre-processing. They also proposed applying an affine
transform to align the facial image based on the coordinates of the middle of the eyes,
and scaling the image. Arun Katara et al. (2017), Akshara Jadhav et al. (2017), and
Shireesha Chintalapati and M.V. Raghunadh (2013) all proposed applying histogram
equalization to the facial image and scaling the images for pre-processing.
Figure 2.4 Images Show Checkerboard Effect Significantly Increasing from Left to
Right (Gonzalez, R. C., & Woods, 2008)
There are a few methods to improve the contrast of images other than
histogram equalization. Neethu M. Sasi and V. K. Jayasree (2013) studied Histogram
Equalization and Contrast Limited Adaptive Histogram Equalization (CLAHE) in
order to enhance myocardial perfusion images. Aliaa A. A. Youssif (2006) studied
contrast enhancement together with illumination equalization methods to segment
retinal vasculature. In addition, in the paper by A., I. and E.Z., F. (2016), image
contrast enhancement techniques and their performance were studied. Unlike
histogram equalization, which operates on the data of the entire image, CLAHE
operates on small regions throughout the image. Hence, Contrast Limited Adaptive
Histogram Equalization is believed to outperform conventional histogram
equalization. A summary of the literature review for contrast improvement is
tabulated in Table 2.4.
Table 2.4 Summary of Contrast Improvement

Histogram equalization (HE)
Concept: Contrast enhancement is performed by transforming the intensity values, resulting in a uniformly distributed histogram.
Advantages: 1. Less sensitive to noise.
Disadvantages: 1. It depends on the global statistics of the image. 2. It causes over-enhancement in some parts, while peripheral regions need more enhancement.

Contrast Limited Adaptive Histogram Equalization (CLAHE)
Concept: Unlike HE, which works on the entire image, CLAHE works on small data regions. Each tile's contrast is enhanced to obtain a uniformly distributed histogram; bilinear interpolation is then used to merge the neighbouring tiles.
Advantages: 1. It prevents over-enhancement as well as noise amplification.
Disadvantages: 1. More sensitive to noise compared to histogram equalization.
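The HE-versus-CLAHE distinction of Table 2.4 can be made concrete with a small sketch: plain histogram equalization builds one look-up table from the global histogram, while CLAHE clips each tile's histogram before equalizing it locally. This is a simplified illustration; `clahe_like` deliberately skips the bilinear blending between tiles that real CLAHE (e.g. OpenCV's `cv2.createCLAHE`) performs, and all names here are our own.

```python
import numpy as np

def hist_equalize(gray):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # map intensities so the cumulative histogram becomes roughly linear
    lut = np.clip(np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

def clahe_like(gray, tiles=4, clip=0.02):
    """Very simplified CLAHE: clip each tile's histogram (redistributing the
    excess) before equalizing that tile independently."""
    h, w = gray.shape
    out = np.empty_like(gray)
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            tile = gray[i*th:(i+1)*th, j*tw:(j+1)*tw]
            hist = np.bincount(tile.ravel(), minlength=256).astype(float)
            excess = np.maximum(hist - clip * tile.size, 0)
            hist = np.minimum(hist, clip * tile.size) + excess.sum() / 256
            cdf = hist.cumsum()
            lut = np.round(cdf / cdf[-1] * 255).astype(np.uint8)
            out[i*th:(i+1)*th, j*tw:(j+1)*tw] = lut[tile]
    return out
```

The clipping step is what limits noise amplification: bins that would dominate a tile's histogram are capped, and the clipped mass is spread evenly over all intensities.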
2.4 Feature Extraction

A feature is a set of data that represents the information in an image. Extraction of
facial features is essential for face recognition. However, selection of features can be
an arduous task. A feature extraction algorithm has to be consistent and stable over a
variety of changes in order to give highly accurate results.
There are a few feature extraction methods for face recognition. In the papers of
Bhuvaneshwari et al. (2017), Abhishek Singh and Saurabh Kumar (2012), and Liton
Chandra Paul and Abdulla Al Sumam (2012), PCA was proposed for face recognition.
D. Nithya (2015) also used PCA in a face recognition-based student attendance system.
PCA is known for its robust and high-speed computation. Basically, PCA retains the
variation of the data and removes unnecessary correlations among the original features.
PCA is essentially a dimension reduction algorithm. It compresses each facial image,
represented as a matrix, into a single column vector. Furthermore, PCA removes the
average value from each image to centralize the image data. The principal components
of the distribution of facial images are known as eigenfaces. Every facial image in the
training set contributes to the eigenfaces. As a result, the eigenfaces encode the best
variation among the known facial images. Training images and test images are then
projected onto the eigenface space to obtain projected training images and projected
test images respectively. The Euclidean distance between the projected training images
and the projected test image is computed to perform the recognition. The PCA feature
extraction process includes all trained facial images. Hence, the extracted features
contain correlations between the facial images in the training set, and the recognition
result of PCA depends highly on the training set images.
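The eigenface pipeline just described (centre the training images, compute principal components, project, and match by Euclidean distance) can be sketched as follows. This is a generic reconstruction for illustration, using the standard small-matrix trick for the eigendecomposition; the thesis's actual image sizes and component counts are not assumed, and the function names are our own.

```python
import numpy as np

def train_pca(train_imgs, k):
    """Eigenface training. train_imgs: (n_images, h*w), one flattened image per row."""
    mean = train_imgs.mean(axis=0)
    A = train_imgs - mean                    # centre the data
    # trick: eigenvectors of the small n x n matrix A A^T yield those of A^T A
    vals, vecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(vals)[::-1][:k]       # keep the k largest components
    eigenfaces = A.T @ vecs[:, order]        # back-project to image space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    return mean, eigenfaces

def project(imgs, mean, eigenfaces):
    """Project (already flattened) images onto the eigenface space."""
    return (imgs - mean) @ eigenfaces

def recognize(test_img, mean, eigenfaces, train_proj, labels):
    """Nearest neighbour in eigenface space using Euclidean distance."""
    w = project(test_img[None, :], mean, eigenfaces)
    d = np.linalg.norm(train_proj - w, axis=1)
    return labels[int(np.argmin(d))], float(d.min())
```

Because the comparison happens in the k-dimensional projected space rather than on raw pixels, recognition cost is independent of image resolution once training is done.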
Figure 2.6 PCA Dimension Reduction (Liton Chandra Paul and Abdulla Al Sumam,
2012)
Figure 2.7 Class Separation in LDA (Suman Kumar Bhattacharyya and Kumar Rahul,
2013)
The original LBP (Local Binary Pattern) operator was introduced in the paper
of Timo Ojala et al. (2002). In the paper by Md. Abdur Rahim et al. (2013), LBP was
proposed to extract both texture details and contour to represent facial images.
LBP divides each facial image into smaller regions, and the histogram of each region is
extracted. The histograms of all regions are concatenated into a single feature vector.
This feature vector is the representation of the facial image, and the Chi-square statistic
is used to measure similarities between facial images. The smallest window size of each
region is 3 by 3. The code is computed by thresholding each pixel in a window against
the middle pixel, which acts as the threshold value. A neighbouring pixel larger than the
threshold value is assigned 1, whereas a neighbouring pixel lower than the threshold
value is assigned 0. The resulting binary digits then form a byte value representing the
center pixel.
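The 3 × 3 thresholding just described can be written directly in NumPy. One common convention, assumed here, is to treat a neighbour equal to the centre as 1 and to read the bits clockwise starting from the top-left neighbour; a different starting point or direction only permutes the codes.

```python
import numpy as np

def lbp_3x3(img):
    """Original LBP: threshold the 8 neighbours of each interior pixel at the
    centre value and read them off as an 8-bit code (clockwise, top-left first)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # clockwise offsets starting at the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dr, dc) in enumerate(offsets):
        neigh = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        out |= (neigh >= img[1:h - 1, 1:w - 1]).astype(np.uint8) << (7 - bit)
    return out
```

Each interior pixel thus collapses its local neighbourhood into one byte, which is what makes the subsequent histogram representation so compact.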
Neural networks were initially used only in face detection, and were then further
studied for face recognition. In the paper by Manisha M. Kasar et al. (2016), the
Artificial Neural Network (ANN) was studied for face recognition. An ANN consists of
a network of artificial neurons known as "nodes". The nodes act like a human brain in
order to perform recognition and classification. The nodes are interconnected, and
values are assigned to determine the strength of their connections; a high value indicates
a strong connection. Neurons are categorized into three types of nodes or layers: input
nodes, hidden nodes, and output nodes. Input nodes are given weights based on their
impact. Hidden nodes consist of mathematical and thresholding functions that perform
predictions, determine probabilities and block unnecessary inputs, and the result is
yielded at the output nodes. There can be more than one layer of hidden nodes. Multiple
inputs generate one output at the output node.
Figure 2.9 Artificial Neural Network (ANN) (Manisha M. Kasar et al., 2016)
According to Divyarajsinh N. Parmar and Brijesh B. Mehta (2013), face recognition
systems can be categorized into Holistic-based methods, Feature-based methods and
Hybrid methods. Holistic-based methods, also known as appearance-based methods,
use the entire information of a face patch and perform some transformation to obtain a
complex representation for recognition. Examples of holistic-based methods are PCA
(Principal Component Analysis) and LDA (Linear Discriminant Analysis). On the other
hand, feature-based methods directly extract detail from specific points, especially
facial features such as the eyes, nose and lips, whereas other information, considered
redundant, is discarded. An example of a feature-based method is LBP (Local Binary
Pattern). These methods are often combined into Hybrid methods, for example a
holistic-based method combined with a feature-based method, in order to increase
efficiency.
2.5 Feature Classification and Face Recognition

Classification involves the process of identifying a face. A distance classifier computes
the distance between the test image and each training image based on the extracted
features. The smaller the distance between the input feature points and the trained
feature points, the higher the similarity of the test image and the training image. In other
words, the facial image with the smallest distance is classified as the same person.
Deepesh Raj (2011) mentioned several types of distance classifiers such as Euclidean
distance, city block distance and Mahalanobis distance for face recognition. Md. Abdur
Rahim et al. (2013) implemented the Chi-square statistic as the distance classifier for
the LBP operator. The equation of each classification method is defined below.
χ² = Σ [(observed frequency − expected frequency)² / expected frequency]    (2.2)

d(x, y) = |x − y|²    (2.3)

where x is the input feature points and y is the trained feature points.

The Mahalanobis distance is defined in equation (2.4):

d(x, y) = (y − m_x)^T S_x⁻¹ (y − m_x)    (2.4)

where m_x is the mean of x and S_x is the covariance matrix of x.
According to Md. Abdur Rahim et al. (2013), after performing LBP feature extraction,
the Chi-square statistic is suggested as a dissimilarity measure for histograms to
compute the distance between two images. Abhishek Singh and Saurabh Kumar (2012)
proposed the Euclidean distance to compute the distance between two images after
PCA feature extraction is performed. A threshold can be set for the distance calculated
by the classifier: a face is classified as belonging to a class only if its distance is below
the chosen threshold; otherwise the face is classified as unknown.
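The distance classifiers above reduce to a nearest-neighbour rule with an optional rejection threshold, which can be sketched as below. This is an illustrative reconstruction; the chi-square form used here is the symmetric histogram-comparison variant commonly paired with LBP (the denominator differs from the textbook observed/expected form), and all names are our own.

```python
import numpy as np

def euclidean_sq(x, y):
    """Squared Euclidean distance |x - y|^2, as in Eq. (2.3)."""
    return float(np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def chi_square(x, y, eps=1e-10):
    """Chi-square distance between two histograms (LBP-style variant)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum((x - y) ** 2 / (x + y + eps)))

def classify(test_feat, train_feats, labels, dist=euclidean_sq, threshold=None):
    """Nearest-neighbour classification; returns 'unknown' when the best
    distance exceeds the chosen threshold."""
    d = [dist(test_feat, t) for t in train_feats]
    i = int(np.argmin(d))
    if threshold is not None and d[i] > threshold:
        return "unknown", d[i]
    return labels[i], d[i]
```

The threshold is what lets the attendance system reject unregistered faces instead of forcing every input onto the closest enrolled student.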
2.6 Evaluation
Different databases are used in order to evaluate the system performance. Databases
provided by previous researchers, with different variable conditions such as lighting
and expression, are used to evaluate the system and for study purposes. Furthermore,
our own database is used to analyse the system for the real-time application. From the
literature review, the common method to justify the performance of the system is to
compute the recognition accuracy.
accuracy = (total matched images / total tested images) × 100
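As plain arithmetic, the accuracy formula is a one-liner (the helper name is our own); for instance, 16 matches out of 17 tested images come out to roughly 94.12 %.

```python
def accuracy(matched, tested):
    """Recognition accuracy: (total matched images / total tested images) x 100."""
    return matched / tested * 100.0
```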
Table 2.5 Summary of Feature Extraction; Accuracy Obtained from Handbook of Research on Emerging Perspectives in Intelligent Pattern Recognition (NK Kamila, 2015)

Eigenface / Kernel PCA (Principal Component Analysis)
Advantages: High speed in training and recognition.
Disadvantages: Face recognition depends on the training database.
Accuracy (ATT database): 77.97 %

Fisherface / LDA (Linear Discriminant Analysis)
Advantages: Images of an individual with different illumination and expressions can be recognized if more samples are trained.
Disadvantages: 1. A bigger database is required, because facial images with different expressions of the individual have to be trained in the same class. 2. It depends more on the database compared to PCA.
Accuracy (ATT database): 82.45 %

LBP (Local Binary Pattern)
Advantages: It is able to overcome a variety of facial expressions, varying illumination, image rotation and aging.
Disadvantages: Training time is longer than PCA and LDA.
Accuracy (ATT database): 90.93 %
CHAPTER 3
METHODOLOGY
The approach implements a face recognition-based student attendance system. The
methodology flow begins with the capture of images through a simple and handy
interface, followed by pre-processing of the captured facial images, then feature
extraction from the facial images, subjective selection and, lastly, classification of the
facial images to be recognized. Both LBP and PCA feature extraction methods are
studied in detail and computed in this proposed approach in order to make comparisons.
LBP is enhanced in this approach to reduce the illumination effect. An algorithm to
combine enhanced LBP and PCA is also designed for subjective selection in order to
increase the accuracy. The details of each stage are discussed in the following sections.

The flow chart of the proposed system is divided into two parts, the training of
images followed by the testing of images (recognizing the unknown input image),
shown in Figure 3.1 and Figure 3.2 respectively.
[Flow chart residue. Recoverable steps of the training flow: start; read face images from the training database file; crop the face; scale to a standard size; if the image is colour (RGB), perform median filtering on the three channels and convert the colour image to grayscale, otherwise median-filter the grayscale image directly; apply Contrast Limited Adaptive Histogram Equalization; perform enhanced LBP and PCA feature extraction; compare the extracted features of the captured image with the trained features; perform subjective selection; recognized students have their attendance written to the record, while unrecognized faces are rejected.]
Figure 3.1 and Figure 3.2 Flow Charts of the Proposed System (Training and Recognition)
The Yale face database is used as both the training set and the testing set to evaluate
the performance. The Yale face database contains one hundred and sixty-five grayscale
images of fifteen individuals. There are eleven images per individual, each captured
under a different condition: centre-light, with glasses, happy, left-light, without glasses,
normal, right-light, sad, sleepy, surprised and wink. These variations provided by the
database help ensure that the system operates consistently in a variety of situations and
conditions.
For our own database, the images of students are captured by using a laptop built-in
camera and a mobile phone camera. Each student provided four images, two for the
training set and two for the testing set. The images captured with the laptop built-in
camera are categorized as low-quality images, whereas the images captured with the
mobile phone camera are categorized as high-quality images. The high-quality set
consists of seventeen students, while the low-quality set consists of twenty-six students.
The recognition rates of low-quality and high-quality images are compared in Chapter 4
to draw a conclusion in terms of performance between image sets of different quality.
3.2.1 Limitations of the Images

The input image for the proposed approach has to be frontal, upright and contain a
single face only. Although the system is designed to recognize students both with and
without glasses, students should provide facial images both with and without glasses
to be trained, in order to increase the accuracy of being recognized without glasses.
The training images and testing images should be captured with the same device to
avoid quality differences. The students have to register in order to be recognized. The
enrolment can be done on the spot through the user-friendly interface.
3.3.1 Pre-Processing
The testing set and training set images are captured using a camera. Unwanted noise
and uneven lighting exist in the images. Therefore, several pre-processing steps are
necessary before proceeding to feature extraction.

The pre-processing steps carried out include scaling of the image, median
filtering, conversion of colour images to grayscale images and adaptive histogram
equalization. The details of these steps are discussed in the following sections.
3.3.1.1 Scaling of Image

Scaling of images is one of the frequent tasks in image processing. The size of the
images has to be carefully manipulated to prevent loss of spatial information
(Gonzalez, R. C., & Woods, 2008). In order to perform face recognition, the sizes of
the images have to be equalized. This is crucial especially in the feature extraction
process, where the test images and training images have to be of the same size and
dimension to ensure a precise outcome. Thus, in this proposed approach, test images
and training images are standardized at 250 × 250 pixels.
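To make the 250 × 250 standardization concrete, here is a minimal nearest-neighbour scaling sketch in NumPy. In practice a library resizer with proper interpolation (e.g. `cv2.resize`) would be preferred; this only illustrates the index mapping, and the function name is ours.

```python
import numpy as np

def resize_nearest(img, size=(250, 250)):
    """Nearest-neighbour scaling to a fixed output size: each output pixel
    simply copies the source pixel its coordinates map back to."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]
```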
3.3.1.2 Median Filtering

Median filtering is a robust noise reduction method. It is widely used in various
applications due to its capability to remove unwanted noise while retaining useful
detail in images. Since the colour images captured by a camera are RGB images,
median filtering is performed separately on the three channels of the image. Figure 3.3
shows the image before and after noise removal by median filtering in three channels.
If the input image is a grayscale image, the median filtering can be performed directly
without separating the channels.
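A per-channel median filter can be sketched as follows. The nested loops are deliberately naive for clarity (a real implementation would use `cv2.medianBlur` or `scipy.ndimage.median_filter`), and the function names are our own.

```python
import numpy as np

def median_filter_gray(img, k=3):
    """k x k median filter on a single channel (border pixels left unfiltered)."""
    img = np.asarray(img)
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

def median_filter_rgb(img, k=3):
    """Apply the median filter to each of the R, G, B channels separately,
    as described for colour input images."""
    return np.dstack([median_filter_gray(img[..., c], k) for c in range(3)])
```

Replacing each pixel by its neighbourhood median removes isolated impulse ("salt-and-pepper") noise while preserving edges far better than a mean filter would.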
Figure 3.9 Contrast Improvement (CLAHE vs HE)
3.4 Feature Extraction

Different facial images mean there are changes in textural or geometric information.
In order to perform face recognition, these features have to be extracted from the facial
images and classified appropriately. In this project, enhanced LBP and PCA are used
for face recognition. The idea comes from the nature of human visual perception, which
performs face recognition depending on both local and global statistical features.
Enhanced LBP extracts the local grayscale features by performing feature extraction
on small regions throughout the entire image. On the other hand, PCA extracts the
global grayscale features, which means feature extraction is performed on the whole
image.
3.4.1 Working Principle of Original LBP

LBP is basically a texture-based descriptor which encodes local primitives into a
binary string (Timo Ojala et al., 2002). The original LBP operator works on a 3 × 3
mask containing 9 pixels. The center pixel is used as a threshold to convert the
neighbouring pixels (the other 8 pixels) into binary digits. If a neighbouring pixel value
is larger than the center pixel value, it is assigned 1; otherwise it is assigned 0. After
that, the neighbouring pixel bits are concatenated into a binary code to form a byte
value representing the center pixel. Figure 3.6 shows an example of LBP conversion.
LBP = Σ_{n=0}^{7} f(P_n − P_c) · 2^n    (3.1)

where P_c indicates the centre pixel and P_n (n = 0, …, 7) are its 8 neighbouring pixels.
The starting point of the encoding process can be any of the neighbouring pixels,
as long as the binary string is formed in a consistent order, either clockwise or
anticlockwise. The thresholding function f(y) can be written as follows:

f(y) = { 0, y < 0;  1, y ≥ 0 }    (3.2)
The original LBP operator is composed of a 3 × 3 filter with 9 pixels; instead of a
circular pattern, it is rectangular in shape. The 9 pixels being adjacent to each other
means every detail is taken as a sampling point, even non-essential details. It is more
affected by uneven lighting conditions because the small filter size emphasizes
small-scale detail (Lee and Li, 2007), including the shadows created by non-uniform
lighting. In our proposed approach, a larger radius R is implemented in the LBP
operator. In the paper of Md. Abdur Rahim et al. (2013), an equation for modifying the
radius size was introduced. However, the paper did not mention the effect of changing
the radius size. In the proposed approach, an analysis is done on different radius sizes
in order to enhance the system and reduce the illumination effect. By increasing the
radius size, the filter size is increased. R indicates the radius from the centre pixel,
θ indicates the angle of the sampling point with respect to the centre pixel, and P
indicates the number of sampling points on the edge of the circle compared with the
centre pixel. Given the neighbourhood notation (P, R, θ), the coordinates of the centre
pixel (Xc, Yc) and the coordinates of the P neighbours (Xp, Yp) on the edge of the
circle with radius R can be computed with sines and cosines as shown in equation (3.3)
(Md. Abdur Rahim et al., 2013):
X_p = X_c + R cos(2πp/P)
Y_p = Y_c + R sin(2πp/P)    (3.3)

where p = 0, …, P − 1 indexes the sampling points, so that θ = 2πp/P.
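Equation (3.3), combined with interpolation for sampling points that fall between pixels, can be sketched as below. The 2πp/P angles and the bilinear interpolation are standard circular-LBP conventions assumed here (interpolation generalizes the "average of diagonal neighbours" idea for in-between samples); the thesis's exact encoding order may differ, and the names are our own.

```python
import numpy as np

def circular_neighbours(img, xc, yc, R=2, P=8):
    """Sample P points on a circle of radius R around (xc, yc).
    Off-grid points are bilinearly interpolated from the 4 surrounding pixels."""
    img = np.asarray(img, float)
    samples = []
    for p in range(P):
        theta = 2 * np.pi * p / P
        x = xc + R * np.cos(theta)
        y = yc - R * np.sin(theta)          # image rows grow downward
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        fx, fy = x - x0, y - y0
        val = (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
               + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)
        samples.append(val)
    return np.array(samples)

def enhanced_lbp_code(img, xc, yc, R=2, P=8):
    """Threshold the circular samples at the centre pixel to get the LBP byte."""
    s = circular_neighbours(img, xc, yc, R, P)
    bits = (s >= img[yc, xc]).astype(int)
    return int(sum(b << (P - 1 - i) for i, b in enumerate(bits)))
```

With R = 2 the operator still produces an 8-bit code per pixel, but the samples sit farther from the centre, which is what damps the small-scale shadow detail the text attributes to the original 3 × 3 mask.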
Although the radius is increased, a total of 8 sampling points are taken, the same
as in the original LBP operator. In the approach, CLAHE is performed on the grayscale
input facial images to improve the contrast; the contrast-improved images remain
grayscale. The proposed LBP operator extracts the grayscale features from the
contrast-improved grayscale images, which requires only 8-bit computation. After that,
the pixels at the sampling points are encoded as an 8-bit binary string in the same way
as in the original LBP encoding process. Enhanced LBP with radius size two performs
better than the original LBP and has a more consistent recognition rate than other radius
sizes. Hence, enhanced LBP with radius size two is used in the proposed approach. The
proposed LBP operator will be further explained in Chapter 4 (results and discussion).
[Figure: enhanced LBP circular patterns with radius R = 4 and R = 5.]
Basically, increasing the radius extends the circular pattern of LBP outwards. The
green spots within the blocks indicate the sampling pixels to be encoded into the
binary string. For a sampling pixel located in between blocks, the average pixel
value is computed from the adjacent (diagonal) pixels.
Figure 3.12 Proposed LBP Operator with Radius 2 and Its Encoding Pattern.
The feature vector of the image is constructed after the Local Binary Pattern of
every pixel is calculated. The histogram of the LBP image is computed so that it can
be classified by a distance classifier. However, a histogram representation loses
spatial information, since it captures only the frequency of values and not where
they occur (Gonzalez, R. C., & Woods, 2008). To overcome this problem, the LBP image
is divided into blocks and a histogram is constructed for each region. Every bin in a
histogram represents a pattern and contains the frequency of its appearance in that
region. The feature vector of the entire image is then constructed by concatenating
the regional histograms into one histogram (Md. Abdur Rahim et al., 2013). This
histogram retains regional spatial information and represents the identity of a
single image, which is then classified to perform the recognition.
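The regional-histogram construction can be sketched in a few lines (an illustrative sketch; `regional_histogram`, the 2 × 2 grid, and the bin count are hypothetical choices, not values fixed by the thesis):

```python
def regional_histogram(lbp_image, grid=(2, 2), bins=256):
    """Divide an LBP-coded image into grid blocks and concatenate the
    per-block histograms, preserving regional spatial information."""
    rows, cols = len(lbp_image), len(lbp_image[0])
    gr, gc = grid
    bh, bw = rows // gr, cols // gc          # block height and width
    feature = []
    for i in range(gr):
        for j in range(gc):
            hist = [0] * bins
            for r in range(i * bh, (i + 1) * bh):
                for c in range(j * bw, (j + 1) * bw):
                    hist[lbp_image[r][c]] += 1   # count pattern occurrences
            feature.extend(hist)              # concatenate regional histograms
    return feature
```

The resulting vector has grid-rows × grid-cols × bins entries, so identical global histograms that differ in where patterns occur still produce different feature vectors.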
In this proposed approach, PCA face recognition is also studied, as it is one of the
popular face recognition methods suggested and used by previous researchers. The
accuracy of PCA is computed in order to compare with the enhanced LBP.

PCA involves a few steps, which are briefly described in the following
paragraphs. For PCA, the image scale, i.e. the length (M) and height (M), is not as
important, because PCA mostly deals with the total number of images N rather than M.
However, the test image and training images must be the same size for the PCA
computation. Equal length and height are assumed in the following equations for
illustration. Given a training set of N images of size 𝑀 × 𝑀, the first step of PCA
is to convert each two-dimensional image into a one-dimensional vector, which can be
either a column vector or a row vector. In this approach, column vectors are used:
each 𝑀 × 𝑀 facial image is converted to a column vector Γi of dimension 𝑀² × 1. With
N facial images, each face is represented by a column vector Γ1, Γ2, Γ3, …, ΓN, and
the feature vector of each face is stored in its column vector. The dimension-reduced
face matrix is constructed by concatenating the column vectors.
PCA is briefly explained by the equations in the following steps.

Step 1: Prepare the data. The N column vectors are concatenated into the
dimension-reduced matrix:

[Γ1 Γ2 ⋯ ΓN],  (𝑀² × N)    (3.4)

Step 2: Compute the mean face:

𝜑 = (1/N) ∑_{i=1}^{N} Γi    (3.6)

Step 3: Subtract the mean face from each face vector to obtain the centred vectors:

Φi = Γi − 𝜑,  i = 1, 2, …, N    (3.8)

Step 4: Compute the covariance matrix:

C = (1/N) ∑_{i=1}^{N} Φi Φi^T = A A^T,  (𝑀² × 𝑀²)

where A = [Φ1 Φ2 ⋯ ΦN], (𝑀² × N), is the matrix constructed from the concatenation
of the column vectors after removing the mean face.

Step 5: Compute the Eigenfaces. The matrix A A^T has dimension 𝑀² × 𝑀², which is
extremely large to be calculated. A A^T and A^T A have the same non-zero eigenvalues,
so the eigenvectors of the much smaller N × N matrix A^T A are computed instead and
mapped back through A to obtain the Eigenfaces; at most N − 1 of them are meaningful
(i = 1, 2, …, N − 1).

Step 6: The facial image is projected on the Eigenfaces to obtain the projected
image Ω, where Γi − 𝜑 is the centred vector with the mean face removed.

Steps 1 to 6 are used to train the training image set. For a test image, only steps
1, 2, 3 and 6 are required; steps 4 and 5 are not needed because the Eigenfaces only
have to be computed once, during training. The Euclidean distance is then used as the
distance classifier to calculate the shortest distance between the projected training
images and the projected test image for recognition.
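The steps above can be sketched with NumPy (an illustrative sketch, not the thesis's MATLAB code; `train_pca` and `project` are hypothetical helper names, and the A^T A trick from Step 5 is used to avoid forming the 𝑀² × 𝑀² matrix):

```python
import numpy as np

def train_pca(images):
    """Steps 1-5: `images` holds N flattened faces, one per row (N, M*M).
    Returns the mean face phi and the Eigenface matrix U, computed via
    the small N x N matrix A^T A instead of the huge M^2 x M^2 A A^T."""
    X = np.asarray(images, dtype=float).T          # (M^2, N) column vectors Gamma_i
    mean_face = X.mean(axis=1, keepdims=True)      # phi, eq. (3.6)
    A = X - mean_face                              # centred vectors Phi_i, eq. (3.8)
    vals, vecs = np.linalg.eigh(A.T @ A)           # same nonzero eigenvalues as A A^T
    order = np.argsort(vals)[::-1][: A.shape[1] - 1]   # keep the N-1 meaningful ones
    U = A @ vecs[:, order]                         # map back to M^2-dim Eigenfaces
    U /= np.linalg.norm(U, axis=0)                 # normalise each Eigenface
    return mean_face, U

def project(image, mean_face, U):
    """Step 6: projected image Omega = U^T (Gamma - phi)."""
    gamma = np.asarray(image, dtype=float).reshape(-1, 1)
    return (U.T @ (gamma - mean_face)).ravel()
```

Recognition then reduces to finding the training projection nearest (in Euclidean distance) to the projected test image.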
A Chi-square statistic is used as the dissimilarity measure for LBP to determine the
shortest distance between a training image and the testing image. On the other hand,
Euclidean distance is used to compute the shortest distance between trained and test
images after PCA feature extraction. Both classifiers, the Chi-square statistic and
the Euclidean distance, determine the closest possible training image to the testing
image for face recognition. However, the nearest result might not always be correct.
Therefore, an algorithm combining enhanced LBP and PCA is applied in order to
increase the accuracy of the system.
The feature classification performed in the previous part gives the closest result,
but not an absolute one. In order to increase the accuracy and suppress the false
recognition rate, an algorithm combining enhanced LBP and PCA is designed in this
proposed approach.

In this proposed approach, the best five results are obtained from enhanced LBP
and from PCA; that is, the five individuals whose distances to the input image are
the smallest are identified by each method. LBP and PCA are two different algorithms
with different working principles, so they will not identify exactly the same five
individuals. In order to ensure the system's capability to suppress false
recognition, an individual is classified as recognized if and only if he or she is
the first common individual identified by both LBP and PCA. From Chapter 2, LBP shows
higher accuracy compared to PCA; thus, LBP is given higher priority than PCA. As
shown in Figure 3.14, Student_1 is recognized instead of Student_3 because LBP is
prioritized: the first common individual is selected from PCA with respect to the LBP
ordering and classified as recognized. If there is no common term between LBP and
PCA, the system will not recognize any subject. This subjective selection algorithm
is designed to be automated in the system.
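The subjective selection rule reduces to a short function (an illustrative sketch; `subjective_selection` is a hypothetical name, and the student labels below mirror Figure 3.14):

```python
def subjective_selection(lbp_top5, pca_top5):
    """Return the first individual common to both top-5 candidate lists.

    Scanning in LBP order gives LBP the higher priority; if the lists
    share no candidate, None is returned and no subject is recognized.
    """
    pca_set = set(pca_top5)
    for candidate in lbp_top5:
        if candidate in pca_set:
            return candidate
    return None
```

With the lists from Figure 3.14, the LBP-ordered scan returns Student_1 even though PCA ranks Student_3 first.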
[Figure content: LBP top-five candidates (Student_1, Student_2, Student_3,
Student_4, …) and PCA top-five candidates (Student_3, Student_1, Student_7,
Student_4, …), with Student_1 as the first common term. The input image will be
recognized as Student_1.]

Figure 3.14 Subjective Selection Algorithm
CHAPTER 4
4.1 Result
In this proposed approach, a face recognition student attendance system with a
user-friendly interface is designed by using the MATLAB GUI (Graphical User
Interface). A few buttons are designed in the interface, each providing a specific
function: the start button initializes the camera and performs face recognition
automatically on the detected face; the register button allows enrolment or
registration of students; and the update button trains the latest images that have
been registered in the database. Lastly, the browse button and recognize button are
used to browse facial images from a selected database and to recognize the selected
image, respectively, in order to test the functionality of the system.
In this part, enhanced LBP with radius two is chosen and used as the proposed
algorithm. The analysis behind choosing this radius size is further explained in the
discussion.
4.2 Discussion
This proposed approach provides a method to perform face recognition for the student
attendance system based on the texture-based features of facial images. Face
recognition is the identification of an individual by comparing his/her real-time
captured image with the stored images of that person in the database. Thus, the
training set has to be chosen based on the latest appearance of an individual, in
addition to taking important factors such as illumination into consideration.
The proposed approach is trained and tested on different datasets. The Yale face
database, which consists of one hundred and sixty-five images of fifteen individuals
under multiple conditions, is implemented. However, this database consists of only
grayscale images. Hence, our own database of colour images is also used; it is
further categorized into a high-quality set and a low-quality set, as the images
differ in quality: some images are blurred while some are clearer. The statistics of
each dataset have been discussed in the earlier chapter.
Some pre-processing steps are performed on the input facial image before the
features are extracted. Median filtering is used because it is able to preserve the
edges of the image while removing image noise. The facial image is scaled to a
suitable size for standardization and converted to grayscale if it is not already a
grayscale image, because CLAHE and the LBP operator work on grayscale images.
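The median-filtering step can be sketched as follows (an illustrative pure-Python version; production code would typically use a library routine, and border handling here simply copies the edge pixels unchanged):

```python
def median_filter3(image):
    """3x3 median filter: replaces each interior pixel with the median of
    its neighbourhood, removing impulse noise while preserving edges."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]           # borders copied unchanged
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = sorted(image[r + dr][c + dc]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = window[4]             # median of the 9 values
    return out
```

Because the output is an order statistic rather than an average, an isolated noise spike is replaced entirely instead of being smeared into its neighbours.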
One of the factors that is usually a stumbling block for face recognition
performance is uneven lighting. Hence, several measures are taken in this proposed
approach in order to reduce the effect of non-uniform lighting conditions. Before
feature extraction takes place, pre-processing is performed on the cropped face image
(ROI) to reduce the illumination problem.

In the previous chapters, Contrast Limited Adaptive Histogram Equalization
(CLAHE) was proposed for pre-processing in order to improve the image contrast and
reduce the illumination effect. Most of the previous researchers implemented
histogram equalization in their approaches. In order to study the difference between
CLAHE and histogram equalization, a comparison is made and tabulated in Table 4.2.
For the comparison, our own database and the Yale face database are used. From the
tabulated results, CLAHE appears to perform better than histogram equalization. In
the image from our own database, the left-hand side of the original image appears
darker than the right-hand side. Histogram equalization does not improve the contrast
effectively, so the image remains darker on the left-hand side. Unlike histogram
equalization, CLAHE improves the contrast more evenly throughout the entire facial
image, which helps to reduce uneven illumination. On the Yale face database, CLAHE
prevents some regions from appearing washed out and reduces over-enhancement of
noise. Besides, CLAHE shows clearer edges and contours compared to histogram
equalization. In addition, referring to the histograms, the pixel values span widely
over the intensity axis from 0 to 255 for CLAHE, whereas for histogram equalization
the pixels span from 0 to only about 200. Hence, it can be said that, based on the
results obtained, the contrast of the image is improved more evenly by CLAHE than by
histogram equalization.
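The key difference from plain histogram equalization is CLAHE's contrast-limiting step: each histogram bin is clipped at a limit and the excess counts are redistributed before the equalization mapping is built. A minimal sketch of that clipping step (hypothetical `clip_histogram` helper; the uniform integer redistribution is a simplification of what real CLAHE implementations do):

```python
def clip_histogram(hist, clip_limit):
    """Core CLAHE step: clip each bin at `clip_limit` and redistribute
    the excess uniformly, which limits contrast amplification and the
    over-enhancement of noise in near-homogeneous regions."""
    excess = sum(max(0, h - clip_limit) for h in hist)
    clipped = [min(h, clip_limit) for h in hist]
    bonus = excess // len(hist)              # uniform share per bin
    return [h + bonus for h in clipped]
```

Applying this per tile, then interpolating between tile mappings, is what lets CLAHE raise contrast evenly across the face instead of letting one dominant peak drive the whole mapping.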
For evaluation purposes, the Yale face database with its different conditions is
used for comparison. The normal facial image of each individual in the Yale face
database is trained, and the facial images with varying conditions are input as test
images. The recognition rate of the LBP operator with different radius sizes is
computed and tabulated in Table 4.4.
From Table 4.4, when the radius size increases, only facial images under the
right-light, left-light and centre-light conditions are affected, whereas for the
other conditions the recognition rate remains constant. This shows that by increasing
the radius, the uneven lighting effect can be reduced without distorting the detail
of the image. From Figure 4.6, the line graph shows that the accuracy under different
light conditions increases when the radius increases. In addition, it shows that
among the different lighting conditions, the system works best under the left-light
condition, followed by the centre-light condition, and lastly the right-light
condition.
The recognition rate of the LBP operator with different radii is then computed using
our own database. However, the LBP operator with different radii does not give
significant results because no critical illumination problem exists in the images of
our own database. Hence, the pixels of the good-quality images of our own database
are modified to generate illumination effects in order to determine the impact of
different LBP operator sizes. Figure 4.7 shows conditions I, II, III and IV, which
illustrate different illumination effects.
By increasing the radius size, the detail information is simplified and the
contour or shape of the face is emphasized. This illustrates that some of the useless or
redundant information is removed and more emphasis is on the critical details for
recognition.
However, a larger radius is not necessarily better: a larger radius, with its
correspondingly larger filter size, emphasizes information complementary to
small-scale detail, but at the same time it loses discriminative information. The
discriminative information is important, for instance to recognize students in the
glasses-free condition.

Nevertheless, the results do show that the enhanced LBP operator with an
increased radius performs better than the original LBP in terms of illumination
effect reduction. Hence, the radius size of the LBP operator has to be selected
wisely in order to reduce the illumination effect without sacrificing much of the
recognition rate.
From the results, condition II appears to have lower accuracy compared to the
others. This is due to the lighting of the training images, which have their left
side relatively darker than their right side, directly opposite to the test image
(condition II).
From the results of the proposed LBP in Table 4.6, the database with good-quality
colour images achieves the highest accuracy (100 %) whether one image or two images
per individual are trained, whereas the database with poor-quality colour images has
an average accuracy of 86.54 % when only one image per individual is trained and
88.46 % when two images per individual are trained. It can be said that the approach
works best with good-quality images, and that poor-quality images can degrade the
performance of the algorithm. The poor-quality images were captured using a laptop
camera and include relatively dark images, blurred images, and images with too much
unwanted noise. Unwanted noise can be reduced by applying median filtering, but for
blurred images there is no suitable way to get rid of the blur.
In this proposed approach, PCA face recognition is performed in order to identify
the differences with respect to LBP using the same databases. From the results
obtained in Table 4.7, PCA would be expected to work better with high-quality images,
similarly to enhanced LBP. However, it gives slightly lower recognition accuracy on
high-quality images than on low-quality images. This is because different database
sizes are used in the proposed approach: the high-quality set contains only seventeen
students, whereas the low-quality set involves twenty-six students, almost ten more.
It is in PCA's nature to be more affected by the size of the database than LBP.
Hence, the larger the database, i.e. the more students included in it, the lower the
recognition rate of PCA.
Table 4.10 Comparison of the proposed algorithm with previous researchers' systems.

Paper/difference | Proposed algorithm | Automated Class Attendance System based on face recognition using PCA Algorithm (D. Nithya, 2015) | Automated Attendance Management System Based On Face Recognition Algorithms (Shireesha Chintalapati, M.V. Raghunadh, 2013)
Image enhancement | Contrast Limited Adaptive Histogram Equalization | None | Histogram equalization
Feature based | Enhanced LBP and PCA | PCA | PCA/LDA/LBPH
Database | Own database and Yale face database | Own database | NITW-database
Attendance | Subjective selection by enhanced LBP and PCA, and write attendance to Excel file | Write attendance to Excel file | Write attendance to Excel file
From Table 4.10, the proposed algorithm is compared with the face recognition
student attendance systems proposed by previous researchers. The techniques used by
the previous researchers to process the images are compared with this proposed
approach.
The research published in the year 2015 used PCA for feature extraction, while the
paper published in the year 2013 used multiple feature extraction algorithms: PCA,
LDA and LBPH. In this proposed approach, besides the enhanced LBP algorithm, PCA is
also computed in order to make comparisons and to understand their respective
properties and performance. In the paper of year 2013, only one of the feature
extraction methods PCA, LDA or LBPH is used at a time. In this proposed approach,
enhanced LBP and PCA are used in combination to ensure consistent results.

The previous researchers who published the paper in 2015 used their own
database of images in the study. The paper published in year 2013 used an image database
of 80 individuals (NITW-database) with 20 images of each person, while the paper in
year 2015 did not mention the size of the image database used. The proposed
algorithm uses multiple image databases, including the Yale face database, with
different lighting and expressions, for training and testing. In fact, the Yale face
database allows the performance of the proposed algorithm to be studied under uneven
lighting and a variety of expression conditions. However, the Yale face database
consists of only grayscale images without background; thus, our own database of
colour images is also used to perform face recognition in the real-time application.
In addition, neither paper applied a technique for the removal of image noise. In
the proposed algorithm, median filtering is used to filter out noise in the images.
If the noise in the images is not removed, the algorithm might recognize the noise as
part of the crucial features, which would probably affect the overall performance of
the algorithm.
Luxand Face Recognition (Luxand.com, 2018) is an app used to perform real-time face
recognition. The Luxand Face Recognition demo version was installed on the laptop so
that it could be compared with the proposed algorithm using the same camera device.
Five individuals used both Luxand Face Recognition and the proposed algorithm to
recognize their faces for comparison.

From Table 4.11, both algorithms are able to recognize all five individuals.
The proposed algorithm has to wait for the database to update whenever a new
individual is registered and added; the waiting time is about 30 seconds for each
training. On the other hand, the Luxand Face Recognition app allows a new individual
to click on the face detected in the video frame and add their name for registration,
a process that lasts about 10 seconds. Hence, the Luxand Face Recognition app has a
faster training time compared to the proposed algorithm.
The proposed algorithm can only work with a single face. When multiple faces appear
in the same image, each of them becomes small; a small face region gives inaccurate
features, which decreases the performance of the system. Hence, whenever more than
one face is detected, the system will not perform the recognition.

The LBP algorithm is highly sensitive to image quality and is strongly
affected by blurred images. LBP is a texture-based descriptor which extracts local
grayscale features by performing feature extraction on small regions throughout the
entire image. Hence, the test image and the training image have to be of the same
quality and captured by the same device in order to achieve high accuracy.
The laptop's built-in webcam is the default capture device in this proposed
approach. The webcam and the laptop's lighting source have low performance, which
causes the captured images to appear darker and blurred. As a result, the system only
functions at its best if the test image and training image are both captured at the
same place under approximately the same illumination.
Besides, false recognition occurs when the facial image is blurred. Blur caused by
the after-image created by movement degrades the performance: the facial features
extracted from a blurred image are totally different from those of the training
image, resulting in false recognition.
Figure 4.12 shows images with different intensities, obtained by adding different
constants to the pixels. The performance of the proposed algorithm is tabulated in
Table 4.9. From Table 4.9, the proposed algorithm functions best when the intensity
is increased by a constant in the range of 25 to 50. Increasing or decreasing the
intensity beyond this range causes the recognition rate to drop to 94.12 %. Hence, it
can be said that the system works better on a relatively brighter image than on a
darker one.
4.7 Problems Faced and Solutions Taken

One of the problems in real-time face recognition is the difficulty of obtaining
sufficient and suitable images for training and testing purposes. It is hard to
obtain real-time databases with a variety of variables, and publicly available
databases are also hard to obtain. The Yale face database is one of the databases
that can be downloaded by the public; hence, it is adopted and used in this proposed
approach. However, the Yale face database consists of only grayscale images without
any background. Hence, our own database, consisting of colour images categorized into
high-quality and low-quality images, is also used.
Besides, it is very difficult to obtain open-source or free face recognition
software against which to make comparisons. In this proposed approach, the Luxand
Face SDK Windows demo version was downloaded and installed on the laptop. By using
the laptop's built-in webcam to recognize faces, the proposed algorithm and the
Luxand Face SDK demo can be compared.
The Viola-Jones algorithm can produce false face detections. This can be solved by
increasing the detection threshold (Mathworks.com, 2018). The threshold indicates the
number of detections needed to declare a final detection around an object. By using
the MergeThreshold property of the MATLAB built-in cascade object detector, the
detection threshold can be adjusted to reduce false face detections.
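The idea behind the merge threshold can be illustrated with a small sketch (a hypothetical `merge_detections` helper, not MATLAB's actual implementation): a final detection is declared only when at least `threshold` overlapping raw detections agree, so an isolated spurious hit is discarded.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def merge_detections(raw_boxes, threshold=4, iou_min=0.5):
    """Group overlapping raw detections; keep only groups with at least
    `threshold` members. Raising the threshold suppresses single-hit
    false detections at the cost of missing weakly supported faces."""
    groups = []
    for box in raw_boxes:
        for g in groups:
            if iou(box, g[0]) >= iou_min:
                g.append(box)
                break
        else:
            groups.append([box])
    return [g[0] for g in groups if len(g) >= threshold]
```

With threshold 1 every group survives; with threshold 4 only well-supported clusters of detections are reported.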
CHAPTER 5
5.1 Conclusion
In fact, a better camera with a better lighting source would reduce the illumination
problem and avoid the capture of blurred images. In this proposed approach, the
laptop's built-in camera is the default device; however, its lighting source is very
dim, which causes the system to be unstable. For future work, a better camera and
lighting source can be used in order to obtain better results. This would reduce the
dependency on the brightness of the environment, especially at the places where test
and training images are captured. Furthermore, a face recognition system that handles
more than a single facial image can be designed, which would increase the efficiency
of the system. The test images and training images in this approach are highly
related to each other and highly dependent on the capture device, which has to be the
same for this approach to perform well. Thus, other algorithms can be used instead of
LBP; for example, an A.I. (artificial intelligence) algorithm can be implemented to
perform the face recognition. A CNN (Convolutional Neural Network), a recently
popular deep learning approach, is able to perform recognition with less dependency
on a particular training image given a large database. However, CNNs require an
extremely large database, or a relatively small class size, to achieve high
performance.
Solon, O. (2017). Facial recognition database used by FBI is out of control, House
committee hears. [online] the Guardian. Available at:
https://www.theguardian.com/technology/2017/mar/27/us-facial-recognition-
database-fbi-drivers-licenses-passports [Accessed 25 Mar. 2018].
Robert Silk. (2017). Biometrics: Facial recognition tech coming to an airport near you:
Travel Weekly. [online] Available at:
http://www.travelweekly.com/Travel-News/Airline-News/Biometrics-Facial-
recognition-tech-coming-airport-near-you [Accessed 25 Mar. 2018].
Sidney Fussell. (2018). NEWS Facebook's New Face Recognition Features: What
We Do (and Don't) Know. [online] Available at:
https://gizmodo.com/facebooks-new-face-recognition-features-what-we-do-an-
1823359911 [Accessed 25 Mar. 2018].
deAgonia, M. (2017). Apple's Face ID [The iPhone X's facial recognition tech
explained]. [online] Computerworld. Available at:
https://www.computerworld.com/article/3235140/apple-ios/apples-face-id-the-
iphone-xs-facial-recognition-tech-explained.html [Accessed 25 Mar. 2018].
Jesse Davis West. (2017). History of Face Recognition - Facial recognition software.
[online] Available at: https://www.facefirst.com/blog/brief-history-of-face-
recognition-software/ [Accessed 25 Mar. 2018].
Wagh, P., Thakare, R., Chaudhari, J. and Patil, S. (2015). Attendance system based on
face recognition using eigen face and PCA algorithms. International Conference on
Green Computing and Internet of Things.
Arun Katara, Mr. Sudesh V. Kolhe, Mr. Amar P. Zilpe, Mr. Nikhil D. Bhele, Mr.
Chetan J. Bele. (2017). “Attendance System Using Face Recognition and Class
Monitoring System”, International Journal on Recent and Innovation Trends in
Computing and Communication, V5 (2).
P. Arun Mozhi Devan et al., (2017). Smart Attendance System Using Face Recognition.
Advances in Natural and Applied Sciences. 11(7), Pages: 139-144
Rahul V. Patil and S. B.Bangar. (2017). Video Surveillance Based Attendance system.
IJARCCE, 6(3), pp.708-713.
Naveed Khan Balcoh. (2012). Algorithm for Efficient Attendance Management: Face
Recognition based approach. International Journal of Computer Science Issues, V9
(4), No 1.
Varsha Gupta, Dipesh Sharma. (2014), “A Study of Various Face Detection Methods”,
International Journal of Advanced Research in Computer and Communication
Engineering), vol.3, no. 5.
Mekha Joseph et al. (2016). Children's Transportation Safety System Using Real Time
Face Recognition. International Journal of Advanced Research in Computer and
Communication Engineering V5 (3).
Srushti Girhe et al. (2015). Computer Vision Based Semi-automatic Algorithm for face
detection. International Journal on Recent and Innovation Trends in Computing
and Communication, V3(2).

Burak Ozen. (2017). Introduction to Boosting Methodology & Adaboost algorithm.
[online] Available at: https://www.linkedin.com/pulse/introduction-boosting-
methodology-adaboost-algorithm-burak-ozen [Accessed 12 Apr. 2018].
Gonzalez, R. C., & Woods, R. E. (2002). Digital image processing. Upper Saddle
River, N.J., Prentice Hall.
Abhishek Singh and Saurabh Kumar. (2012). Face Recognition Using PCA and Eigen
Face Approach. [online] Available at: http://ethesis.nitrkl.ac.in/3814/1/Thesis.pdf
[Accessed 10 Apr. 2018].
LC Paul and Abdulla Al Sumam. (2012). Face Recognition Using Principal
Component Analysis Method. IJARCET, V1 (9).

D. Nithya (2015). Automated Class Attendance System based on Face Recognition
using PCA Algorithm. International Journal of Engineering Research and
Technology, V4 (12).
Suman Kumar Bhattacharyya & Kumar Rahul. (2013), “Face Recognition by Linear
Discriminant Analysis”, International Journal of Communication Network Security,
V2(2), pp 31-35.
Md. Abdur Rahim (2013), Face Recognition Using Local Binary Patterns. Global
Journal of Computer Science and Technology Graphics & Vision V13 (4) Version
1.0.
Kasar, M., Bhattacharyya, D. and Kim, T. (2016). Face Recognition Using Neural
Network: A Review. International Journal of Security and Its Applications, 10(3),
pp.81-100.
Deepesh Raj (2011), A Realtime Face Recognition system using PCA and various
Distance Classifiers. CS676 : Computer Vision and Image Processing. Available
at: http://home.iitk.ac.in/~draj/cs676/project/index.html [Accessed March 25,
2018].
Kalyan Sourav Dash. (2014). Face recognition using PCA - File Exchange -
MATLAB Central. [online] Mathworks.com.
Available at:
https://www.mathworks.com/matlabcentral/fileexchange/45750-face-recognition-
using-pca [Accessed 11 Apr. 2018].