Face Recognition with Gabor Filters & RF
Abstract: Research on face recognition has been evolving for decades, and numerous approaches achieve highly desirable outcomes in constrained environments. In contrast, face recognition in an unconstrained environment, where varied facial poses, occlusion, aging, and image quality pose vast challenges, remains an unresolved problem. Many current techniques do not perform well when evaluated on unconstrained databases, yet most real-world applications need good face recognition performance in the unconstrained environment. This paper presents a comprehensive process aimed at enhancing the performance of face recognition in an unconstrained environment. The proposed system fuses Gabor filters and Maximum Response (MR) filters with a Random Forest classifier. The Gabor filters are a hybrid of Gabor magnitude filters and Oriented Gabor Phase Congruency (OGPC) filters: the Gabor magnitude filters produce the magnitude response, while the OGPC filters produce the phase response of the Gabor filters. The MR filters contain edge- and bar-anisotropic filter responses and isotropic filter responses. In the feature selection process, Monte Carlo Uninformative Variable Elimination Partial Least Squares Regression (MC-UVE-PLSR) selects the optimal face features to minimize computational cost without compromising recognition accuracy. Random Forest is used to classify the generated feature vectors. The algorithm's performance is evaluated on two unconstrained facial image databases: Labelled Faces in the Wild (LFW) and Unconstrained Facial Images (UFI). The proposed technique produces encouraging results on these databases, recording face recognition rates comparable with other state-of-the-art algorithms.
Keywords: Face recognition, labelled faces in the wild, unconstrained facial images.
approach allows flexible deformation at the feature points, which is good for face images with pose variation. Local Binary Pattern (LBP) is one of the methods to discriminate the textures and edges within an image [5, 28]. The LBP kernel operates on the change in intensity in the neighbourhood of a pixel. Ahonen et al. [1] used the histogram of LBP values as the facial feature representation. However, LBP-based feature extraction carries out histogram computation on a uniform and predetermined grid in the facial image and does not consider the properties of the image. Therefore, Lenc and Král [18] proposed an automatic face recognition system based on the LBP with Gabor wavelets and a k-means clustering approach to solve the problem of feature position in the conventional LBP approach. The same authors [20] further enhanced the system to recognise fixed coordinates and facial fiducial points even when there are large differences in the positions of facial features between images with large pose variations.

There have been many improvements made to the LBP. Vu et al. [46] proposed the Patterns of Oriented Edge Magnitudes (POEM) approach, where the LBP-based structure is applied to the oriented magnitudes. Vu [45] later improved the POEM approach and proposed a novel feature set called Patterns of Orientation Difference (POD), which can acquire information on the self-similarity of an image. Lin and Chiu [23] used an LBP edge-mapped descriptor that utilizes the maxima of gradient magnitude points [12] for face recognition. This approach can exhibit facial contours at a low computational cost. To overcome the problem of noise and illumination changes in the face image, Král and Vrba [17] enhanced the LBP by measuring the features from point-sets instead of isolated points. Juefei-Xu et al. [15] proposed a technique capable of generating a highly discriminative matching score without precise face image alignment for unconstrained face images. The authors used only one image per subject for training, and with a wide range of Three-Dimensional (3-D) rotations, a set of new face images is generated. The periocular regions of these images are segmented out, and Walsh Local Binary Patterns (WLBP) are used to extract the periocular features. They called the proposed technique Spartans. Zhang et al. [51] found that by encoding the (n-1)th-order local derivative direction variations, more information is captured than with the first-order local pattern as in LBP. The authors named this approach the Local Derivative Pattern (LDP). Based on the concept of LBP, Ylioinas et al. [50] developed a high-dimensional feature representation for face recognition by computing histograms of Binarized Statistical Image Features (BSIF) codes. This approach was claimed to provide an optimal discriminative vector representation of the face. Barkan et al. [4] modified the original LBP and introduced the Diffusion Maps technique for dimensionality reduction. By fusing this technique with PCA, the authors showed further improvement in classification accuracy.

Another family of local feature descriptors, histogram-of-gradients descriptors such as the Scale Invariant Feature Transform (SIFT) [25] and the Histogram of Oriented Gradients (HOG) [10], has shown encouraging results in face verification. Simonyan et al. [36] proposed to use Fisher vectors on densely sampled SIFT features, which achieved good face verification performance on large-scale databases like Labeled Faces in the Wild. Seo and Milanfar [33] invented a new descriptor called the Locally Adaptive Regression Kernel (LARK) that can determine the self-similarity between the centre pixel and surrounding pixels.

Štruc and Pavešić [37] integrated Gabor magnitude and phase feature information with LDA to devise a new method known as the complete Gabor-Fisher classifier. The authors claimed that the method outperformed Principal Component Analysis. Yi et al. [49] utilized the Gabor filter for feature extraction and proposed the Pose Adaptive Filter (PAF), which converted the Gabor filter based on the pose variation of the face images. This was achieved using a 3-D deformable model created according to the face image. A 3-D deformable model or manual annotation of the respective face image was needed for facial landmark localization. Sagonas et al. [32] utilized a statistical model based on hundreds of frontal images to perform landmark localization. With this technique, the frontal view of a face image in unconstrained situations can be reconstructed. They named the technique Robust Statistical Frontalization (RSF). Based on the concept of the Gabor filter, Pinto et al. [29] introduced modern multiple kernel learning (MKL) techniques using V1-like features. This technique is insensitive to lighting and image variations. Arashloo and Kittler [2, 3] proposed a nonlinear binary class-specific kernel discriminant analysis classifier fused with the Markov Random Fields (MRF) approach. In this approach, the input image is projected into a discriminative subspace.

In recent years, researchers have explored the neural network approach to facial recognition, and numerous mathematical models were introduced under this approach. The deep convolutional neural network approach has produced encouraging results for face recognition in the unconstrained environment [26, 40]. Sun et al. [39] proposed a hybrid neural network model, using a Convolutional Neural Network (CNN) as the feature extractor and a Restricted Boltzmann Machine (RBM) as the classifier. The proposed model achieved good results on the LFW database [14] under the unrestricted protocols with label-free outside data and with labelled outside data. Xi et al. [48] introduced an approach called the Local Binary Pattern Network (LBPNet), based on unsupervised deep learning, which adapts the concept of CNN. This model has two sections: the deep network, which
Gabor and Maximum Response Filters with Random Forest Classifier for ... 799
utilized the LBP and PCA for feature extraction, and the regular network for classification. Devi and Hemachandran [11] proposed a system that utilized PCA, Wavelet Transformation, and Gabor wavelets for feature extraction. A modular neural network is used for image retrieval, and a Support Vector Machine (SVM) is used as the classifier. With this combination, the training time for large databases is reduced.

Based on the literature discussed earlier, face recognition in the wild continues to be a challenging task due to pose, illumination variations, occlusion, etc. Researchers are still seeking the balance between computational complexity and recognition accuracy [2, 3, 4, 15, 23, 40, 45, 46, 50]. In the recent literature, deep learning techniques [11, 39, 48] give promising results in face recognition, but they need millions of parameters, which leads to high requirements for processing power, memory size, etc.

In this paper, the following contributions are made:
- Two filters are used in the feature extraction process. The Gabor filter extracts features in the spatial and frequency domains. A Maximum Response (MR) filter is used to respond to oriented image patches and anisotropic textures. The MR filter uses both Gaussian and Laplacian of Gaussian (LoG) filters. Compared with the Gabor filter, the MR filter has an additional LoG element that detects edges by looking for zero crossings in the image. The combination of Laplacian and Gaussian functions assists in smoothing the image and in edge detection.
- Feature selection using Monte Carlo Uninformative Variable Elimination Partial Least Squares Regression (MC-UVE-PLSR) is included to trim the large dimensionality of the feature sets, saving computation costs while maintaining accuracy.

a) The training process.

In the training process, the feature vectors generated are fed into the classifier, using Random Forest algorithms to produce a trained and learned model. The feature vectors are utilized to grow a Random Forest. In the testing process, the prediction is based on the feature vectors obtained from the testing images, using the trained decision trees. The prediction function generates a matrix of matching scores that state the probability of a testing image belonging to a specific class. The class candidate with the highest matching score suggests that the testing image most probably belongs to that class. The performance of the constructed Random Forest is evaluated by analysing the Receiver Operating Characteristic (ROC) curve, the evaluation graph that investigates the interaction between the true-positive rate and the false-positive rate. The recognition rate is also measured to assess classification accuracy.
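The training and prediction flow described above can be sketched with an off-the-shelf Random Forest. The data below are random stand-ins for the selected feature vectors, and scikit-learn's `RandomForestClassifier` stands in for whatever Random Forest implementation the paper used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy stand-ins for the selected feature vectors (2000-D in the paper;
# 20-D here to keep the sketch fast). 5 subjects, 8 images each.
n_classes, n_per_class, n_features = 5, 8, 20
X = rng.normal(size=(n_classes * n_per_class, n_features))
y = np.repeat(np.arange(n_classes), n_per_class)
X += y[:, None] * 0.5  # shift each class so they are separable

# Training: the feature vectors are used to grow a Random Forest.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Testing: the prediction function yields a matrix of matching scores,
# one row per test image, one column per class, entries = probability
# of the test image belonging to that class.
X_test = rng.normal(size=(3, n_features)) + 4 * 0.5
scores = forest.predict_proba(X_test)
predicted = scores.argmax(axis=1)  # class with the highest matching score
```

Each row of `scores` sums to one, so the "class candidate with the highest matching score" rule is simply an arg-max over columns.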
The parameters of the Gabor filter in Equation (1) are: Gamma (γ), the ratio between the centre frequency and the size of the Gaussian envelope; s, the size of the Gaussian envelope; f_max, the maximum frequency of the filter; and θ, the Gaussian orientation. Štruc and Pavešić [37] recommend suitable values for γ, s, and f_max. A filter bank of five scales and eight orientations is constructed, with m ∈ {0, ..., 4} and n ∈ {0, ..., 7}. The filter bank has the real and imaginary terms of the Gabor wavelet; the real term is used in the facial feature extraction process. The input image is a greyscale face image of size p × q pixels. The Gabor filter, with centre frequency f_m and orientation θ_n, is convolved with the greyscale image B(x, y); this convolution is the transfer function of the filter [38, 52]:

H_{m,n}(x, y) = B(x, y) * G_{m,n}(x, y)  (2)

H_{m,n}(x, y) refers to the complex-valued output of the filter function and is broken down into a real term, R_{m,n}(x, y), and an imaginary term, I_{m,n}(x, y). The Gabor magnitude response is denoted A_{m,n}(x, y).

In the OGPC computation, ε is a constant that prevents division by zero; m indexes the scales and n the orientations; p is the total number of scales; and ΔΦ_{m,n}(x, y) is the phase deviation, given by

ΔΦ_{m,n}(x, y) = cos(φ_{m,n}(x, y) − φ̄_{u,v}(x, y)) − |sin(φ_{m,n}(x, y) − φ̄_{u,v}(x, y))|  (7)

where φ̄_{u,v}(x, y) is the mean phase angle at the u-th orientation and φ_{m,n}(x, y) is the phase angle of the Gabor filter, given as

φ_{m,n}(x, y) = tan⁻¹(I_{m,n}(x, y) / R_{m,n}(x, y))  (8)

The output of OGPC is an illumination- and contrast-independent facial feature representation. The OGPCs are then downsized by the down-sampling factor. Z-score normalization is applied to the downsized
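As a concrete sketch of this filtering stage, the block below builds a small Gabor filter bank and computes the magnitude response and the Equation-(8) phase angle from the real and imaginary terms of the Equation-(2) convolution. The kernel formula, the scale spacing, and all parameter values are assumptions (Equation (1) itself did not survive extraction), so treat this as illustrative rather than the paper's exact filter:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(f, theta, gamma=np.sqrt(2), eta=np.sqrt(2), size=15):
    # Complex Gabor kernel at centre frequency f and orientation theta.
    # gamma/eta shape the Gaussian envelope (assumed symbols and values).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(f ** 2) * (xr ** 2 / gamma ** 2 + yr ** 2 / eta ** 2))
    return (f ** 2 / (np.pi * gamma * eta)) * env * np.exp(2j * np.pi * f * xr)

fmax, p, q = 0.25, 5, 8                               # 5 scales, 8 orientations
image = np.random.default_rng(0).random((32, 32))     # stand-in grey face B(x,y)

responses = {}
for m in range(p):
    for n in range(q):
        g = gabor_kernel(fmax / (np.sqrt(2) ** m), n * np.pi / q)
        H = convolve2d(image, g, mode="same")         # Eq. (2): H = B * G
        R, I = H.real, H.imag                         # real / imaginary terms
        responses[(m, n)] = (np.abs(H),               # magnitude response
                             np.arctan2(I, R))        # Eq. (8): phase angle
```

`np.arctan2(I, R)` implements tan⁻¹(I/R) while keeping the correct quadrant; the loop yields the 40 (five scales × eight orientations) magnitude/phase pairs that feed the OGPC computation.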
OGPCs, and the OGPC feature column vectors are concatenated together, becoming augmented OGPC feature vectors.

The MR filter has a total of 38 filters. These include one Gaussian filter and one LoG isotropic filter at scale 10. The Gaussian filter helps to smooth the image to reduce noise before the Laplacian filter is used for edge detection [6]. There are edge (first-derivative) anisotropic filters with six orientations and three scales. Similarly, there are bar (second-derivative) anisotropic filters with six orientations and three scales. Figure 2 shows the filter responses at different orientations and scales of the anisotropic and isotropic filters. The isotropic filter responses are used without any further processing, but for the anisotropic filters, only the maximum filter responses at every scale across all orientations are chosen. This generates eight filter responses, and they are rotationally invariant. The respective equations are as follows [13, 42]:

R̄_edge(x, y; s) = max_θ R_edge(x, y; s, θ)  (9)

R̄_bar(x, y; s) = max_θ R_bar(x, y; s, θ)  (10)

The Monte Carlo (MC) method and the Uninformative Variable Elimination (UVE) method are used to select the features generated by the Gabor filters. Usually, UVE employs the leave-one-out procedure; here, the Monte Carlo method is used instead [7, 30]. The samples are divided randomly into a training set, an evaluation set, and a prediction set. The Monte Carlo procedure randomly chooses subsamples from the training set (at 75%) to build the Partial Least Squares (PLS) model, and this process repeats 1000 times. The PLS regression model and its coefficients are determined as:

ŷ = Xβ + e  (12)

where ŷ is the prediction; X is the information of the feature sets; β is the vector of regression coefficients; and e is the offset. The reliability of a feature is determined by its stability level, given in Equation (13), where β_i is the regression coefficient that contributes the selected feature i to the prediction model:

s_i = mean(β_i) / standard deviation(β_i)  (13)
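The per-scale maximum over orientations in Equations (9) and (10) can be sketched as follows. The derivative-of-Gaussian kernels, sizes, and scale values here are simplified stand-ins for the MR bank's actual edge and bar filters:

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def gaussian_derivative(size=15, sigma=2.0, order=1):
    # 1st- (edge) or 2nd- (bar) derivative-of-Gaussian kernel along x,
    # elongated along y -- a simplified stand-in for the MR bank's filters.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x ** 2 / (2 * sigma ** 2) + y ** 2 / (2 * (3 * sigma) ** 2)))
    d = -x / sigma ** 2 if order == 1 else (x ** 2 - sigma ** 2) / sigma ** 4
    k = d * g
    return k - k.mean()  # zero-mean so flat regions give no response

image = np.random.default_rng(0).random((32, 32))
scales = [1.0, 2.0, 4.0]                 # three scales (values assumed)
orientations = np.arange(6) * 30.0       # six orientations, in degrees

max_responses = []
for order in (1, 2):                     # edge filters, then bar filters
    for s in scales:
        stack = [convolve(image,
                          rotate(gaussian_derivative(sigma=s, order=order),
                                 angle, reshape=False))
                 for angle in orientations]
        # Eqs. (9)-(10): keep only the maximum response over all
        # orientations at each scale -> rotationally invariant.
        max_responses.append(np.max(np.abs(stack), axis=0))
```

The six anisotropic maxima (two filter types × three scales), together with the two isotropic responses, give the eight rotationally invariant channels described in the text.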
Figure 3 shows the fusion process of the testing phase, which utilizes the Gabor magnitude, OGPC, and MR filters as feature extractors.

Figure 3. Block diagram of the fusion of the Gabor magnitude, OGPC, and MR filters in the testing phase.

3. Experimental Results and Discussion

3.1. Datasets

The proposed hybrid technique is evaluated by piloting face recognition experiments on two popular face databases, namely LFW [14] and UFI [19].

The LFW dataset consists of 13,233 face images of 5,749 people collected in the wild. Sample images of the LFW database are shown in Figure 4-a). The words "in the Wild" mean that the face images are obtained without any parameter adjustments and are taken in "natural" conditions from daily-life images. The UFI dataset is a real-world dataset containing cropped images of size 128 x 128 pixels from 605 people. Reporters of the Czech News Agency collected these images. Sample images of the UFI database are shown in Figure 4-b).

a) Labeled Faces in the Wild (LFW).

3.2. The Effect of Gabor Magnitude Filter Parameters on Recognition Rate

Referring to Equation (1), Gamma (γ) determines the ellipticity of the Gaussian function and the width of the Gaussian window, and s specifies the linear size of the visual receptive field simulated by the Gabor filter. It is found that setting lower values of Gamma and s yields finer discrimination of the texture of the facial region. It is also found that a higher value of the maximum central frequency improves the recognition rate. A lower Gamma value gives a smaller Gaussian bandwidth and a sharper filter, so that the tails of the two Gaussians do not overlap much at the origin, which produces only a few non-zero DC components. A lower value of Gamma also helps to obtain maximal spatial localization of the frequency information of the facial image.

A higher value of the maximum central frequency, f_max, moves the two Gaussian functions further apart so that they do not overlap excessively. This restrains the frequency value within the scope of the Nyquist frequency. Excessive overlapping and a higher frequency bandwidth (due to a higher Gamma) will cause smaller coverage of the spectrum in the spatial domain. Since 40 Gabor filters are used, excessive overlapping will cause a narrower spectrum of detected and extracted features, thus lowering the recognition rate. In our design, the maximum frequency of the filter, f_max, is set to 2 Hz.

The number of features generated by the Gabor filters and MR filters is still large, and it causes high computation costs during the classification process. Thus, the 2000 most useful features (best features) are selected through the Monte Carlo Uninformative Variable Elimination PLS Regression (MC-UVE-PLSR) method. The selection of the 2000 highest-importance features is decided to strike an optimal balance between maintaining the classification accuracy level and controlling the computation costs incurred during the classification process. The MC-UVE-PLSR process ranks the features according to their information importance, and a subset of features with the highest information importance is selected. The reason for using feature selection is as follows: if too many features are fed into the classifier, the classification process consumes a disproportionately long time and becomes unfeasible in terms of limited computation resources. The performance of the proposed algorithm is compared with full feature selection, where all 4000 features are selected.
The algorithms are implemented using MATLAB R2016. The default MATLAB precision is used to report the results, and three decimal places are considered in reporting the verification performance. Four algorithms are proposed and tested on the LFW and UFI databases: Gabor-Random Forest (full feature selection, 4000 features), Gabor-Random Forest (best feature selection, 2000 features), Gabor-MR-Fusion-Random Forest (full feature selection, 4000 features), and Gabor-MR-Fusion-Random Forest (best feature selection, 2000 features).

In the first experiment, the image-restricted, no-outside-training-data protocol results are used for comparison with our proposed algorithms. Based on Table 1 (in bold text), the full feature selection performs second-best among the compared algorithms in terms of verification accuracy, standing at 95.872%. If the best features are selected, the accuracy is reduced to 92.285%. In the second experiment, the MR filter is omitted, and only the Gabor magnitude response and OGPC are used in the feature extraction process; the reason is to determine the impact of the MR filter on the verification rate. Based on the tabulated results in Table 1, the proposed Gabor-Random Forest algorithm performs worse than the proposed Gabor-MR-Fusion-Random Forest algorithm in terms of verification accuracy, standing at 90.746% versus 95.872%. If the best features are selected, the accuracy is reduced to 88.983% for Gabor-Random Forest and 92.285% for Gabor-MR-Fusion-Random Forest. These results support our hypothesis that the MR filter contributes to the verification rate. Although full feature selection gives the highest accuracy in both experiments, the computation cost is high. For the Gabor-MR-Fusion-Random Forest, the best features consume 14471.72 seconds for the feature extraction, feature selection, and feature classification processes. On the other hand, the Gabor-MR-Fusion-Random Forest (full feature selection) consumes 37496.72 seconds for the feature extraction and feature classification processes, an increment of 159.1% for full feature selection.

Table 1. Mean verification accuracy on the LFW database (no outside training data used).

Algorithm | Mean Verification Accuracy (%) ± SE | Computation Time (second)
MRF-Fusion-CSKDA [3] | 95.891 ± 0.0194 | N/A
ConvNet-RBM [39] | 93.831 ± 0.0052 | N/A
CVPR13' high-dim LBP + JB [9] | 93.182 ± 0.0107 | N/A
DM+PCA fusion [4] | 92.051 ± 0.0045 | N/A
Hierarchical-PEP (layers fusion) [21] | 91.106 ± 0.0147 | N/A
Joint Bayesian [8] | 90.908 ± 0.0148 | N/A
Eigen-PEP [22] | 88.972 ± 0.0132 | N/A
RSF [32] | 88.812 ± 0.0078 | N/A
Spartans [15] | 87.553 ± 0.0021 | N/A
BMVC13' Fisher vector faces [36] | 87.471 ± 0.0149 | N/A
V1-like/MKL, funneled [29] | 79.351 ± 0.0055 | N/A
MRF-MLBP [2] | 79.082 ± 0.0014 | N/A
Hybrid descriptor-based, funnelled [47] | 78.47 ± 0.0051 | N/A
Proposed Method: Gabor-Random Forest (full features selection)*1 | 90.746 ± 0.0602 | 33087.575*3
Proposed Method: Gabor-Random Forest (best features selection)*2 | 88.983 ± 0.0243 | 12933.979*4
Proposed Method: Gabor-MR-Fusion-Random Forest (full features selection)*1 | 95.872 ± 0.0197 | 37496.72*3
Proposed Method: Gabor-MR-Fusion-Random Forest (best features selection)*2 | 92.285 ± 0.0198 | 14471.72*4

*1 4000 features are selected. *2 2000 features are selected. N/A = not available.
*3 Computation time for full features = feature extraction time + classification time.
*4 Computation time for best features = extraction time + selection time + classification time.

In the third experiment, the proposed algorithm is compared with methods under the unsupervised protocol. The comparison results are shown in Table 2; these results are obtained from the respective cited papers. Gabor-MR-Fusion with Random Forest (full features selection) performs second-best among the compared algorithms, with 0.9887 Area Under the ROC Curve (AUC). If only half of the features are selected, the proposed algorithm performs third-best among the existing algorithms in terms of AUC, at 0.9865. Although full feature selection gives the highest accuracy in both experiments, the computation cost is high.

Table 2. Comparison of AUC among algorithms on the LFW database (unsupervised setting).

Algorithm | AUC
MRF-Fusion-CSKDA [3] | 0.9894
Spartans [15] | 0.9428
Pose Adaptive Filter (PAF) [49] | 0.9405
LBPNet [48] | 0.9404
SA-BSIF, WPCA, aligned [50] | 0.9318
MRF-MLBP [2] | 0.8994
LHS, aligned [34] | 0.8107
LARK unsupervised, aligned [33] | 0.7830
H-XS-40, 81x150 [31] | 0.7547
GJD-BC-100, 122x225 [31] | 0.7392
Proposed Method: Gabor-MR-Fusion-Random Forest (full features selection)*1 | 0.9887
Proposed Method: Gabor-MR-Fusion-Random Forest (best features selection)*2 | 0.9865

*1 4000 features are selected. *2 2000 features are selected.

Based on Table 3, the Gabor-MR-Fusion-Random Forest (full features selection) obtains the highest recognition accuracy, at 74.961%, compared with the other reported existing algorithms for the UFI database. In Table 4, the implemented Gabor-MR-Fusion-Random Forest algorithm obtains 96.01% Area Under ROC (AUC), which is higher than the Hybrid of CBIR.
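The ROC/AUC evaluation used throughout these tables can be reproduced from the matching scores alone. A minimal sketch with made-up scores (1 = genuine, same-identity pair):

```python
import numpy as np

def roc_curve_points(scores, labels):
    # Sweep a decision threshold down the sorted matching scores and record
    # (false-positive-rate, true-positive-rate) pairs: the ROC curve.
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def area_under_curve(fpr, tpr):
    # Trapezoidal area under the ROC curve (the AUC reported in Table 2).
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]   # hypothetical matching scores
labels = [1, 1, 0, 1, 0, 0]               # 1 = genuine (same identity)
fpr, tpr = roc_curve_points(scores, labels)
print(round(area_under_curve(fpr, tpr), 3))  # 0.889
```

The curve starts at (0, 0) and ends at (1, 1); a perfect verifier would reach TPR = 1 at FPR = 0 and score AUC = 1.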
804 The International Arab Journal of Information Technology, Vol. 18, No. 6, November 2021
[10] Déniz O., Bueno G., Salido J., and Torre F., "Face Recognition using Histograms of Oriented Gradients," Pattern Recognition Letters, vol. 32, no. 12, pp. 1598-1603, 2011.
[11] Devi N. and Hemachandran K., "Content Based Feature Combination Method for Face Image Retrieval using Neural Network and SVM Classifier for Face Recognition," Indian Journal of Science and Technology, vol. 10, no. 24, pp. 1-11, 2017.
[12] Faraji M., Shanbehzadeh J., Nasrollahi K., and Moeslund T., "Extremal Regions Detection Guided by Maxima of Gradient Magnitude," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5401-5415, 2015.
[13] Geusebroek J., Smeulders A., and Van-De-Weijer J., "Fast Anisotropic Gauss Filtering," IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 938-943, 2003.
[14] Huang G., Ramesh M., Berg T., and Learned-Miller E., "Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments," Technical Report, University of Massachusetts, 2007.
[15] Juefei-Xu F., Luu K., and Savvides M., "Spartans: Single-Sample Periocular-Based Alignment-Robust Recognition Technique Applied to Non-Frontal Scenarios," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 4780-4795, 2015.
[16] Kovesi P., "Phase Congruency: A Low-Level Image Invariant," Psychological Research, vol. 64, no. 2, pp. 136-148, 2000.
[17] Král P. and Vrba A., "Enhanced Local Binary Patterns for Automatic Face Recognition," arXiv preprint arXiv:1702.03349, 2017.
[18] Lenc L. and Král P., "Automatically Detected Feature Positions for LBP Based Face Recognition," in Proceedings of IFIP International Conference on Artificial Intelligence Applications and Innovations, Rhodos, pp. 246-255, 2014.
[19] Lenc L. and Král P., "Unconstrained Facial Images: Database for Face Recognition under Real-World Conditions," in Proceedings of Mexican International Conference on Artificial Intelligence, Cuernavaca, pp. 349-361, 2015.
[20] Lenc L. and Král P., "Local Binary Pattern Based Face Recognition with Automatically Detected Fiducial Points," Integrated Computer-Aided Engineering, vol. 23, no. 2, pp. 129-139, 2016.
[21] Li H. and Hua G., "Hierarchical-PEP Model for Real-World Face Recognition," in Proceedings of the Conference on Computer Vision and Pattern Recognition, Boston, pp. 4055-4064, 2015.
[22] Li H., Hua G., Shen X., Lin Z., and Brandt J., "Eigen-PEP for Video Face Recognition," in Proceedings of Asian Conference on Computer Vision, Singapore, pp. 17-33, 2014.
[23] Lin J. and Chiu C., "LBP Edge-Mapped Descriptor Using MGM Interest Points for Face Recognition," in Proceedings of International Conference on Acoustics, Speech and Signal Processing, New Orleans, pp. 1183-1187, 2017.
[24] Liu W. and Wang Z., "Facial Expression Recognition Based on Fusion of Multiple Gabor Features," in Proceedings of 18th International Conference on Pattern Recognition, Hong Kong, pp. 536-539, 2006.
[25] Lowe D., "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[26] Lu C. and Tang X., "Surpassing Human-Level Face Verification Performance on LFW with GaussianFace," in Proceedings of 29th AAAI Conference on Artificial Intelligence, pp. 3811-3819, 2015.
[27] Muruganantham S. and Jebarajan T., "A Comprehensive Review of Significant Researches on Face Recognition Based on Various Conditions," International Journal of Computer Theory and Engineering, vol. 4, no. 1, pp. 7-15, 2012.
[28] Ojala T., Pietikainen M., and Maenpaa T., "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002.
[29] Pinto N., DiCarlo J., and Cox D., "How Far Can You Get with a Modern Face Recognition Test Set using Only Simple Features?," in Proceedings of Conference on Computer Vision and Pattern Recognition, Miami, pp. 2591-2598, 2009.
[30] Quah K. and Quek C., "MCES: A Novel Monte Carlo Evaluative Selection Approach for Objective Feature Selections," IEEE Transactions on Neural Networks, vol. 18, no. 2, pp. 431-448, 2007.
[31] Ruiz-del-Solar J., Verschae R., and Correa M., "Recognition of Faces in Unconstrained Environments: A Comparative Study," EURASIP Journal on Advances in Signal Processing, vol. 2009, no. 1, pp. 1-19, 2009.
[32] Sagonas C., Panagakis Y., Zafeiriou S., and Pantic M., "Robust Statistical Frontalization of Human and Animal Faces," International Journal of Computer Vision, vol. 122, no. 2, pp. 270-291, 2017.
[33] Seo H. and Milanfar P., "Face Verification Using the LARK Representation," IEEE Transactions on Information Forensics and Security, vol. 6, no. 4, pp. 1275-1286, 2011.
[34] Sharma G., Hussain ul S., and Jurie F., "Local Higher-Order Statistics (LHS) for Texture Categorization and Facial Analysis," in Proceedings of European Conference on Computer Vision, Florence, pp. 1-12, 2012.
[35] Shen L. and Bai L., "A Review on Gabor Wavelets for Face Recognition," Pattern Analysis and Applications, vol. 9, no. 2, pp. 273-292, 2006.
[36] Simonyan K., Parkhi O., Vedaldi A., and Zisserman A., "Fisher Vector Faces in the Wild," in Proceedings of British Machine Vision Conference, pp. 1-13, 2013.
[37] Štruc V. and Pavešić N., "Gabor-Based Kernel Partial-Least-Squares Discrimination Features for Face Recognition," Informatica, vol. 20, no. 1, pp. 115-138, 2009.
[38] Štruc V. and Pavešić N., "The Complete Gabor-Fisher Classifier for Robust Face Recognition," EURASIP Journal on Advances in Signal Processing, vol. 2010, no. 1, pp. 1-26, 2010.
[39] Sun Y., Wang X., and Tang X., "Hybrid Deep Learning for Face Verification," in Proceedings of International Conference on Computer Vision, Sydney, pp. 1489-1496, 2013.
[40] Taigman Y., Yang M., Ranzato M., and Wolf L., "DeepFace: Closing the Gap to Human-Level Performance in Face Verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, pp. 1701-1708, 2014.
[41] Tan X. and Triggs B., "Enhanced Local Texture Feature Sets for Face Recognition under Difficult Lighting Conditions," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635-1650, 2010.
[42] Tan Y., Qi J., and Ren F., "Real-Time Cloud Detection in High Resolution Images using Maximum Response Filter and Principle Component Analysis," in Geoscience and Remote Sensing Symposium, Beijing, pp. 6537-6540, 2016.
[43] Turhan C. and Bilge H., "Class-Wise Two-Dimensional PCA Method for Face Recognition," IET Computer Vision, vol. 11, no. 4, pp. 286-300, 2016.
[44] Turk M. and Pentland A., "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[45] Vu N., "Exploring Patterns of Gradient Orientations and Magnitudes for Face Recognition," IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, pp. 295-304, 2013.
[46] Vu N., Dee H., and Caplier A., "Face Recognition using the POEM Descriptor," Pattern Recognition, vol. 45, no. 7, pp. 2478-2488, 2012.
[47] Wolf L., Hassner T., and Taigman Y., "Descriptor Based Methods in the Wild," in Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, 2008.
[48] Xi M., Chen L., Polajnar D., and Tong W., "Local Binary Pattern Network: A Deep Learning Approach for Face Recognition," in Proceedings of International Conference on Image Processing, Phoenix, pp. 3224-3228, 2016.
[49] Yi D., Lei Z., and Li S., "Towards Pose Robust Face Recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, pp. 3539-3545, 2013.
[50] Ylioinas J., Kannala J., Hadid A., and Pietikäinen M., "Face Recognition using Smoothed High-Dimensional Representation," in Proceedings of Scandinavian Conference on Image Analysis, Copenhagen, pp. 516-529, 2015.
[51] Zhang B., Gao Y., Zhao S., and Liu J., "Local Derivative Pattern versus Local Binary Pattern: Face Recognition with High-Order Local Pattern Descriptor," IEEE Transactions on Image Processing, vol. 19, no. 2, pp. 533-544, 2010.
[52] Zhang B., Shan S., Chen X., and Gao W., "Histogram of Gabor Phase Patterns (HGPP): A Novel Object Representation Approach for Face Recognition," IEEE Transactions on Image Processing, vol. 16, no. 1, pp. 57-68, 2007.

Yuen-Chark See is an Assistant Professor in the Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Sungai Long Campus. He received his PhD from Universiti Teknologi Malaysia. His research interests include machine learning, embedded systems, and wireless sensor networks.

Eugene Liew received [Link] in Electrical and Electronic Engineering from Universiti Tunku Abdul Rahman.

Norliza Mohd Noor is a Professor in the Razak Faculty of Technology and Informatics, Universiti Teknologi Malaysia (UTM), Kuala Lumpur Campus. She received her [Link]. in Electrical Engineering from Texas Tech University in Lubbock, Texas, and her Master's (by research) and PhD, both in Electrical Engineering, from UTM. Her research is in machine learning and image analysis for medical and industrial applications.