
IAES International Journal of Artificial Intelligence (IJ-AI)

Vol. 12, No. 4, December 2023, pp. 1644~1653


ISSN: 2252-8938, DOI: 10.11591/ijai.v12.i4.pp1644-1653

New approach to similarity detection by combining technique three-patch local binary patterns (TP-LBP) with support vector machine

Ahmed Chater, Hicham Benradi, Abdelali Lasfar


Laboratory of System Analysis, Information Processing Management and Industry, University Mohammed V, Rabat, Morocco

Article history: Received May 15, 2022; Revised Jan 1, 2023; Accepted Jan 10, 2023

Keywords: Facial recognition; Feature extraction; Minimum-distance measure; Processing time; Recognition rate; Support vector machine

ABSTRACT
Recognition systems have received a lot of attention because of their many uses in people's daily lives, for example in robotic intelligence, smart cameras, security surveillance, and criminal identification. Determining the similarity of faces under different face variations relies on robust algorithms. The validation of our experiment is done on two datasets. In this paper, we compare two facial recognition techniques according to recognition rate and average authentication time, in order to increase the accuracy rate and decrease the processing time. The first technique extracts features with two algorithms, principal component analysis scale-invariant feature transform (PCA-SIFT) and speeded up robust features (SURF), then uses the random sample consensus (RANSAC) technique to discard outliers; face recognition is finally established on the basis of proximity determination. The second technique associates a support vector machine (SVM) classifier with the key-point extraction technique. The results obtained by the second technique are better for both databases: the recognition rate on the Olivetti Research Laboratory (ORL) base reaches 98.1258% and on the Grimace base 97.28515%. The average authentication time of the second technique does not exceed 300 ms.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Ahmed Chater
Laboratory of System Analysis, Information Processing Management and Industry
University Mohammed V Rabat
Rabat, Morocco
Email: [email protected]

1. INTRODUCTION
Facial recognition is a technology that identifies or verifies a subject using a facial image, video, or any audiovisual element of the subject's face. It is used in many applications, such as security, alongside other biometric modalities like voiceprints. Each face is unique and has inimitable characteristics. Facial recognition systems, programs, or software compare facial biometrics, extracted by recognition algorithms, against stored references. Machine learning approaches to this problem are based on two steps: feature extraction and classification.
Feature extraction is the first step of the authentication process and is performed by robust techniques such as principal component analysis scale-invariant feature transform (PCA-SIFT), speeded up robust features (SURF), and three-patch local binary patterns (TP-LBP); the second step is performed by classifiers (distance measures and the support vector machine (SVM)). Over the past few decades, many features have been developed; the best known and most popular are local binary patterns (LBP) [1], three- and four-patch local binary patterns (TP-LBP, TF-LBP) [2], complete local binary patterns (CLBP) [3], [4], scale-invariant feature transform principal component analysis (PCA-SIFT) [5], and speeded up robust features (SURF) [6]. These techniques are characterized by a simple computation that enables real-time facial analysis in real applications, and by robustness to grayscale changes caused, for example, by variations in lighting.

Journal homepage: http://ijai.iaescore.com
This work builds on our publications [7], [8]. We use three techniques to extract the descriptor vector, TP-LBP, PCA-SIFT [9], and SURF [6], and classify the descriptors to determine the similarity between the training and test bases using distance metrics and a linear SVM [10], to avoid parameter sensitivity when measuring the recognition rate. To validate our experiment, we use two facial image databases, ORL [11] and Grimace [12]. The TP-LBP with SVM method gives good results in terms of recognition rate: the similarity rate reaches 98.1258% and the processing time does not exceed 300 ms, which makes it applicable to real-time applications, e.g. in the fields of security and robotics.

2. FEATURE EXTRACTION
In this section, we deal with feature extraction by three techniques: three-patch LBP, PCA-SIFT, and SURF. The choice of key-point extraction technique is based on statistical measurements of key points. We use key points because they describe regions of the image where the information is important. This approach is generally used to recognize objects [13] and in facial and biometric recognition algorithms [6]. Many techniques exist for computing the descriptor vector in a neighborhood, such as the scale-invariant feature transform (SIFT) [14], shape contexts [15], and speeded up robust features (SURF) [6].
Among these techniques, the SIFT technique proposed by Lowe [14] is retained for two main reasons. First, the SIFT algorithm is robust to scaling and 2D rotation. Second, a comparative study [16] of different descriptors shows that SIFT is the most efficient. The SIFT algorithm was also used by Berretti et al. [17] for 3D facial recognition. We then use the TP-LBP descriptor, which is based on the comparison of square patches, as described in the next section [18].

2.1. Three-patch local binary patterns (TP-LBP)


Three-patch local binary patterns (TP-LBP) extend the LBP operator by allowing multi-scale (multi-resolution) processing of an image [4]. The TP-LBP code of a pixel is computed by comparing three patches at a time to determine each binary value of the code assigned to it. For a given pixel, the detector considers a patch centered on the pixel and S additional patches of size w × w distributed on a circle of radius r pixels around it, whereas the LBP detector compares single neighboring pixels on a circle of a chosen radius. TP-LBP compares, for each patch on the ring, its distance to the central patch with that of the patch α positions further along the ring; each comparison activates a single bit. Specifically, we apply the TP-LBP algorithm to determine the relevant features on the basis of (1),

TPLBP_{r,S,w,α}(p) = Σ_{i=0}^{S−1} f( d(C_i, C_p) − d(C_{i+α mod S}, C_p) ) 2^i     (1)

Equation (1) is illustrated in Figure 1.

Figure 1. The three-patch LBP code


The TP-LBP technique determines the important descriptors in the images. Their extraction is based on a number of parameters: the patch size w, the deviation angle α, and the position of the patch center, as shown in Figure 1; the calculation of the descriptors is based on (2).

TP-LBP_{r,8,3,2}(p) = f(d(C_0, C_p) − d(C_2, C_p)) 2^0 + f(d(C_1, C_p) − d(C_3, C_p)) 2^1 +
f(d(C_2, C_p) − d(C_4, C_p)) 2^2 + f(d(C_3, C_p) − d(C_5, C_p)) 2^3 +
f(d(C_4, C_p) − d(C_6, C_p)) 2^4 + f(d(C_5, C_p) − d(C_7, C_p)) 2^5 +
f(d(C_6, C_p) − d(C_0, C_p)) 2^6 + f(d(C_7, C_p) − d(C_1, C_p)) 2^7     (2)

where C_i and C_{i+α mod S} are two patches along the ring and C_p is the central patch. The function d(·, ·) is any distance function between two patches (for example, the Euclidean norm of their grayscale differences), and f(x) is defined by (3). We set the threshold τ slightly above zero to obtain better stability in homogeneous areas [9]. In practice, we use nearest-neighbor sampling to retrieve patches instead of interpolating them, which speeds up processing with little or no loss of performance. Finally, the selected features are concatenated into a histogram.

f(x) = { 1 if x ≥ τ
         0 if x < τ     (3)
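As an illustration, the per-pixel code of (1)–(3) can be sketched in a few lines; this is a minimal sketch under assumed defaults (S = 8 patches of size w = 3 on a ring of radius r = 2, α = 2, τ = 0.01), with illustrative names, not the authors' implementation:

```python
import numpy as np

def tplbp_code(img, x, y, r=2, s=8, w=3, alpha=2, tau=0.01):
    """Three-Patch LBP code of pixel (x, y): one bit per ring patch."""
    half = w // 2

    def patch(cx, cy):
        # nearest-neighbour sampling of a w x w patch, as in the text
        cx, cy = int(round(cx)), int(round(cy))
        return img[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)

    cp = patch(x, y)                                   # central patch C_p
    angles = 2 * np.pi * np.arange(s) / s
    ring = [patch(x + r * np.cos(a), y + r * np.sin(a)) for a in angles]

    code = 0
    for i in range(s):
        d_i = np.linalg.norm(ring[i] - cp)             # d(C_i, C_p)
        d_j = np.linalg.norm(ring[(i + alpha) % s] - cp)  # d(C_{i+alpha mod S}, C_p)
        if d_i - d_j >= tau:                           # f(x) of the threshold rule
            code |= 1 << i
    return code
```

On a perfectly homogeneous patch every distance difference is zero, so no bit exceeds τ and the code is 0, which is exactly the stability in flat areas that the small positive τ provides.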

2.2. Detector principal component analysis scale-invariant feature transform (PCA-SIFT)
The PCA-SIFT detector [5], [9], like SIFT [14], measures the similarity of descriptors with the Euclidean distance metric. It is based on four essential steps: detection of scale-space extrema, localization of key points, orientation assignment, and descriptor computation. The scale space of an image is defined as a function L(x, y, σ), generated from the convolution of a variable-scale Gaussian G(x, y, σ) with an input image I(x, y), written as

L(x, y, σ) = G(x, y, σ) ∗ I(x, y)     (4)

where ∗ is the convolution operator in x and y, and G(x, y, σ) is the Gaussian kernel. To determine stable key points, we apply the difference of Gaussians between two neighboring levels separated by a factor k, written as

𝐷(𝑥, 𝑦, 𝜎) = 𝐿(𝑥, 𝑦, 𝑘𝜎) − 𝐿(𝑥, 𝑦, 𝜎) (5)

The Hessian matrix is then used to determine a threshold that retains only the relevant key points. The Hessian matrix is defined as (6),

H = [ Dxx  Dxy ]
    [ Dxy  Dyy ]     (6)

The downside of this step is that many points of interest are obtained. From the Hessian matrix, the threshold criterion in (7) can be determined,

Tr(H)² / Det(H) = (λ1 + λ2)² / (λ1 λ2) = (rλ2 + λ2)² / (rλ2²) = (r + 1)² / r     (7)

where Det(H) is the determinant of the Hessian matrix, Tr(H) its trace, and r = λ1/λ2 the ratio of its eigenvalues; the threshold on r ensures that a retained point of interest has high intensity variation in all directions. A key point is characterized by five parameters (x, y, θ, σ, u): the pair (x, y) corresponds to its position in the original image; the pair (σ, θ) describes its scale and orientation; and u is its descriptor vector, obtained from its neighborhood. The neighborhood is split by a 4 × 4 grid; the gradient is then calculated at each of the 16 grid locations and quantized into an 8-orientation histogram. Concatenating these features yields a descriptor vector of 128 elements. Figure 2 represents the concatenation of descriptor vectors and the extraction of key points by the SIFT technique.
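To make (4)–(7) concrete, the following sketch builds a difference of Gaussians and applies the curvature-ratio test; the function names, the pure-NumPy separable blur, and the default r = 10 are our illustrative choices, not the authors' code:

```python
import numpy as np

def _gauss1d(sigma):
    """1D Gaussian kernel for a separable approximation of G(x, y, sigma)."""
    rad = int(3 * sigma + 0.5)
    x = np.arange(-rad, rad + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def _blur(img, sigma):
    k = _gauss1d(sigma)
    tmp = np.apply_along_axis(np.convolve, 0, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 1, tmp, k, mode='same')

def dog_hessian_mask(img, sigma=1.6, k=np.sqrt(2), r=10.0):
    """D = L(k*sigma) - L(sigma), then the Tr^2/Det < (r+1)^2/r test."""
    img = img.astype(float)
    D = _blur(img, k * sigma) - _blur(img, sigma)   # difference of Gaussians
    Dy, Dx = np.gradient(D)                         # first derivatives
    Dyy, _ = np.gradient(Dy)
    Dxy, Dxx = np.gradient(Dx)                      # second derivatives of D
    tr2 = (Dxx + Dyy) ** 2                          # Tr(H)^2
    det = Dxx * Dyy - Dxy ** 2                      # Det(H)
    # keep pixels with positive determinant and a curvature ratio below (r+1)^2/r
    mask = (det > 0) & (tr2 < (r + 1) ** 2 / r * det)
    return D, mask
```

A full SIFT implementation would search for extrema across several such D levels; the mask here only shows how the ratio test of (7) prunes edge-like responses.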
The SIFT detector is then associated with the principal component analysis (PCA) technique. PCA [19], [20] is a well-known dimensionality-reduction technique that transforms 2D images into 1D column vectors and projects the high-dimensional data into an affine subspace. First, the 2D image is transformed into a 1D vector, written in the form of (8).


Figure 2. Example of feature extraction by the PCA-SIFT algorithm

b^i = [b_1^i, b_2^i, ..., b_N^i]^T     (8)

The second step is the normalization of the input images by subtracting the average of all the training images from each image element, according to (9),

b̄^i = b^i − m,  with  m = (1/P) Σ_{i=1}^{P} b^i     (9)

Third, the vectors are combined side by side to obtain a matrix of size P × N (where P is the number of training images and N the vector size of an image). Fourth, the covariance matrix is calculated according to (10).

C = X̄ X̄^T     (10)

The eigenvalues and eigenvectors of the covariance matrix are then calculated. The different steps are summarized in the following algorithm.

Algorithm: Determination of PCA
Input: matrix X
Outputs: mean value, eigenvectors, and eigenvalues
Normalize the data using (9)
If the data are high-dimensional then
    determine the eigenvectors e_i and eigenvalues δ_i of the small matrix M
    recover the final elements by:
        u_i = X e_i,  λ_i = δ_i
        U = [u_1, ..., u_i],  λ = [λ_1, ..., λ_i]
Otherwise
    determine the elements of the matrix C directly:
        U = [u_1, ..., u_i],  λ = [λ_1, ..., λ_i]
Return x̄, U, λ
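The algorithm above, including the small-sample branch (eigen-decompose the small P × P matrix, then recover u_i = X e_i), can be sketched as follows; the function and variable names are ours, not from the paper:

```python
import numpy as np

def pca_eigenfaces(images, n_components=5):
    """PCA over flattened face images: centre, eigen-decompose, return basis."""
    X = np.stack([im.ravel().astype(float) for im in images])  # P x N matrix
    m = X.mean(axis=0)
    Xc = X - m                                                 # normalization step
    # small-sample trick: eigen-decompose the P x P surrogate of C = X X^T
    S = Xc @ Xc.T / len(images)
    vals, vecs = np.linalg.eigh(S)                 # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]  # keep the largest ones
    U = Xc.T @ vecs[:, order]                      # u_i = X e_i, back to pixel space
    U /= np.linalg.norm(U, axis=0)                 # unit-norm eigenfaces
    return m, U, vals[order]
```

Projecting a new face as `U.T @ (face.ravel() - m)` then gives the low-dimensional descriptor that PCA-SIFT-style pipelines compare.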

2.3. Detector speeded up robust features (SURF)
The SURF detector [6] proposes a new method for the local description of points of interest. Strongly influenced by the SIFT approach [14], it couples the selection of an analysis area with the construction of a histogram of oriented gradients for each local interest-point filtering window. The authors propose to apply Haar wavelets to the integral image in order to decrease processing time. The technique computes derivatives along the horizontal and vertical axes; the wavelet responses are used to plot the gradient distribution and determine the orientation angle from the initial image.
Compared with other techniques, SURF is robust to different face changes and gives good results in terms of processing time [21]–[23]. SURF extracts the points of strong variation based on the following steps:
− detection of points of interest based on the Hessian matrix;
− localization of points of interest;
− description of the points of interest;
− computation of the descriptor components.
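The integral-image trick that lets SURF evaluate its Haar-like box filters in constant time can be sketched as follows (function names are illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.astype(float).cumsum(0).cumsum(1)

def box_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0:y1+1, x0:x1+1] in O(1) via four table lookups."""
    s = ii[y1, x1]
    if x0 > 0:
        s -= ii[y1, x0 - 1]
    if y0 > 0:
        s -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s
```

A horizontal Haar wavelet response is then just `box_sum` of the right half minus `box_sum` of the left half, at any filter size, which is why the integral image makes the multi-scale search cheap.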

3. CLASSIFICATION
3.1. Support vector machine (SVM)
This section describes how two feature vectors are compared and gives a brief overview of the classifiers used: SVM and distance measurement. After the extraction of the characteristics of each face image by SIFT-PCA, SURF, and TPLBP, the classification is carried out in the last step, as indicated in our proposed method in Figure 3. SVM is a powerful statistical learning technique generally used to solve pattern detection problems. Initially, SVM is a binary classification technique based on a two-class problem. The binary SVM optimizes the hyperplane dividing the set into two subsets, maximizing the margin between the hyperplane and the two sets labeled −1 and +1.
Suppose B is a dataset, x_i (i = 1, ..., K) are the features extracted from the faces, which are k-dimensional, and y_i are the labels of the learning feature set; the set is written as (11),

B = {(x_i, y_i) | x_i ∈ R^K, y_i ∈ {−1, +1}}     (11)

The separating function between the two sets in the linear case can be expressed by (12),

f(x) = (w · x) + b     (12)

The reliability of the SVM classifier is linked to the choice of the kernel function, as explained in [24]. Figure 3 illustrates the classification of our data by SVM into two classes (x1 and x2): the linear method in Figure 3(a) and the nonlinear method in Figure 3(b).


Figure 3. Linear and nonlinear SVM classification techniques: (a) linear SVM and (b) nonlinear SVM

In the nonlinear case, classification by (12) is not valid. The input characteristics are therefore transformed into a high-dimensional space using a kernel function, which helps improve classification accuracy [25], [26]. The linear and polynomial kernels, as well as the radial basis function (RBF) kernel, give good results in face recognition applications [27]. The SVM classifier is then used to address the multi-class challenge. Multi-class SVM techniques are divided into two families: one-versus-all and one-versus-one [28]. In our work, we used the Gaussian kernel to match the test and training bases. To evaluate the performance of our approach and compare it with other existing techniques, it is tested on two databases; each database is divided into two sets with different percentages. The test of our approach uses (13),

recognition rate = (no. of images classified correctly) / (total no. of test images)     (13)
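A hedged sketch of this evaluation protocol, using scikit-learn's SVC with the Gaussian (RBF) kernel (which handles the multi-class case one-versus-one internally); the random matrix stands in for real TP-LBP descriptors, so the resulting rate is only illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 64))          # stand-in descriptors: 20 subjects x 10 images
y = np.repeat(np.arange(20), 10)   # subject labels

# split the base into training and test sets with a chosen percentage
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2,
                                      stratify=y, random_state=0)

clf = SVC(kernel='rbf', gamma='scale')   # Gaussian-kernel SVM
clf.fit(Xtr, ytr)

# recognition rate: correctly classified test images / total test images
recognition_rate = float((clf.predict(Xte) == yte).mean())
```

With real, class-separable descriptors in place of the random matrix, `recognition_rate` is the quantity reported in Tables 1 and 2.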

3.2. Classification by distance measurement


The similarity between vectors is based on a distance measure, which serves to compute the minimum distance between two vectors. Among these distances are the Euclidean distance and the Manhattan distance. A distance on a set E is a mapping [29], [30]

d : E × E → R+ such that
∀p, q ∈ E, d(p, q) = d(q, p)
∀p, q ∈ E, d(p, q) = 0 ⇔ (q = p)

d(p, q) designates a function that computes a scalar value representing the similarity of two vectors p, q ∈ R^N. Usual distances include the Manhattan distance (or 1-distance) and the Euclidean distance (or 2-distance); similarity is measured by (14) and (15). The classification rate is determined by measuring the minimum distance between two vectors, using the 1-distance and the 2-distance.

d1(p, q) = Σ_{i=1}^{N} (|x_{p,i} − x_{q,i}| + |y_{p,i} − y_{q,i}|)     (14)

d2(p, q) = [ Σ_{i=1}^{N} ((x_{p,i} − x_{q,i})² + (y_{p,i} − y_{q,i})²) ]^{1/2}     (15)
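A minimal sketch of minimum-distance classification with the 1- and 2-distances; here each descriptor is one flat vector (the x and y components of the equations are assumed concatenated), and the function name is ours:

```python
import numpy as np

def nearest_match(query, gallery, metric='euclidean'):
    """Return (index, distance) of the gallery vector closest to `query`."""
    diffs = gallery - query                  # broadcast over gallery rows
    if metric == 'manhattan':                # 1-distance
        d = np.abs(diffs).sum(axis=1)
    else:                                    # 2-distance
        d = np.sqrt((diffs ** 2).sum(axis=1))
    return int(np.argmin(d)), float(d.min())
```

Classifying a test face then amounts to assigning it the label of the gallery row returned by `nearest_match`; running the same loop with both metrics reproduces the Euclidean-versus-Manhattan comparison of the tables.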

4. ORL AND GRIMACE DATABASES OF FACIAL EXPRESSIONS


The choice of database to evaluate our contribution to the facial expression recognition or detection system is important. For the validation of our contribution, we use two databases: ORL [11] and Grimace [31]. The ORL database contains 40 subjects with 10 variations each, and the Grimace database contains 18 subjects with 20 poses and variations in image lighting and expression, as shown in Figure 4.

Figure 4. Some variations of the faces belonging to the ORL and Grimace databases


4.1. Proposed method


Our technique is based on the work referred to in [7], [8]. First, we split the main database into two bases, a training base and a test base, with different percentages (for example, 80%/20% or 60%/40%). In the second step, we extract the key points using three techniques (PCA-SIFT, SURF, and TP-LBP). Finally, classification by the statistical methods (SVM, Manhattan, and Euclidean distances) determines the recognition rate. The steps of our technique are summarized in the flowchart of Figure 5.
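The flowchart's three stages (split, extract, classify) can be sketched end to end as follows; `extract` and `classify` are hypothetical stand-ins for the feature extractors (TP-LBP, PCA-SIFT, SURF) and classifiers (SVM, minimum distance) discussed above:

```python
import numpy as np

def run_pipeline(faces, labels, extract, classify, train_frac=0.8, seed=0):
    """Split the base, extract features, classify, return the recognition rate."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(faces))
    cut = int(train_frac * len(faces))
    tr, te = idx[:cut], idx[cut:]                      # training / test split
    labels = np.asarray(labels)
    Ftr = np.stack([extract(faces[i]) for i in tr])    # descriptor vectors
    Fte = np.stack([extract(faces[i]) for i in te])
    preds = classify(Ftr, labels[tr], Fte)             # predicted labels
    return float((preds == labels[te]).mean())         # recognition rate
```

Swapping different `extract`/`classify` pairs into the same skeleton is exactly the comparison carried out between the two techniques in the experiments.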
The figures show some results of our simulation applied to the ORL database. Figure 6 shows the determination of similar points by the PCA-SIFT and SURF detectors combined with RANSAC. The result of our simulation shows face authentication under different variations based on distance measurement.

Figure 5. Proposed method for face recognition

Figure 6. Descriptor similarity measurement by our technique: application on the base (ORL) and
minimum distance


Figure 7 shows some simulations of the results reported in Table 1, applied to the ORL database [11]. The results of our simulations show that the second technique is more accurate in terms of similarity rate than the first technique; our results exceed those of the literature.
Tables 2 and 3 report the simulation results of the two techniques applied to the Grimace database [31], evaluated on two parameters: the similarity rate and the processing time. The simulated results show that the second technique is more accurate in calculating the similarity rate than the first, and its authentication time does not exceed 0.560 s on the ORL database and 0.675 s on the Grimace database.

Figure 7. Descriptor similarity measurement by our technique: Application on the base (Grimace)

Table 1. Performance evaluation on the ORL database

                            Local descriptor   Classifier   Classification rate in %
The first technique [22]    PCA-SIFT           Euclidean    94.50
                            PCA-SIFT           Manhattan    92.85
                            SURF               Euclidean    96.6575
                            SURF               Manhattan    93.125
The second technique        TPLBP              SVM          98.1258

Table 2. Performance evaluation on the Grimace database

                            Local descriptor   Classifier   Classification rate in %
The first technique [22]    PCA-SIFT           Euclidean    93.30
                            PCA-SIFT           Manhattan    91.05
                            SURF               Euclidean    95.2575
                            SURF               Manhattan    92.228
The second technique        TPLBP              SVM          97.28515

Table 3. Estimation of the average processing time (s) for each variation: application to the ORL and Grimace databases

Database   PCA-SIFT   SURF    The proposed method
ORL        0.69       0.360   0.560
Grimace    0.90       0.712   0.675

The proposed technique offers good results in terms of recognition rate with an acceptable processing time. The first technique, on the other hand, gives good results in terms of processing time, but its disadvantage is a lower recognition rate. In future work, we will test our technique on the public CK and Oulu-CASIA databases [31], which contain more variation than the other databases.

5. CONCLUSION
In this article, we propose two approaches and measure their classification rate and average processing time. The first technique (PCA-SIFT and SURF with RANSAC) is evaluated in terms of recognition rate; the simulations applied to the two databases show that the Euclidean distance metric gives better results than the Manhattan metric. The second technique extracts the characteristics by TP-LBP with SVM and then measures the decision rate. The validation of our technique is performed on two databases with several changes, repeating the experiment on test and training bases with different percentages. Simulation results show that the second technique performs better than the first one on both databases, with a recognition rate of 98.1258% and an average processing time of 0.6175 s.

REFERENCES
[1] L. Zhang, R. Chu, S. Xiang, S. Liao, and S. Z. Li, “Face detection based on multi-block LBP representation,” Lecture Notes in
Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 4642
LNCS, pp. 11–18, 2007, doi: 10.1007/978-3-540-74549-5_2.
[2] L. Liu, P. Fieguth, G. Zhao, M. Pietikäinen, and D. Hu, “Extended local binary patterns for face recognition,” Information Sciences,
vol. 358–359, pp. 56–72, 2016, doi: 10.1016/j.ins.2016.04.021.
[3] F. Ahmed and E. Hossain, “Automated facial expression recognition using gradient-based ternary texture patterns,” Chinese Journal
of Engineering, vol. 2013, pp. 1–8, 2013, doi: 10.1155/2013/831747.
[4] M. Roschani, “Evaluation of local descriptors on the labeled faces in the wild dataset,” Institute for Anthropomatics-German, 2009.
[5] P. O. Ladoux, C. Rosenberger, and B. Dorizzi, “Palm vein verification system based on SIFT matching,” Lecture Notes in Computer
Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 5558 LNCS,
pp. 1290–1298, 2009, doi: 10.1007/978-3-642-01793-3_130.
[6] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features (SURF),” Computer Vision and Image Understanding,
vol. 110, no. 3, pp. 346–359, 2008, doi: 10.1016/j.cviu.2007.09.014.
[7] M. Kang, K. Ji, X. Leng, X. Xing, and H. Zou, “Synthetic aperture radar target recognition with feature fusion based on a stacked
autoencoder,” Sensors, vol. 17, no. 12, p. 192, Jan. 2017, doi: 10.3390/s17010192.
[8] Q. Meng, S. Zhao, Z. Huang, and F. Zhou, “MagFace: a universal representation for face recognition and quality assessment,”
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14225–14234, 2021.
[9] S. Kiranyaz, O. Avci, O. Abdeljaber, T. Ince, M. Gabbouj, and D. J. Inman, “1D convolutional neural networks and applications:
A survey,” Mechanical Systems and Signal Processing, vol. 151, 2021, doi: 10.1016/j.ymssp.2020.107398.
[10] J. Yang and J. Y. Yang, “From image vector to matrix: A straightforward image projection technique-IMPCA vs. PCA,” Pattern
Recognition, vol. 35, no. 9, pp. 1997–1999, 2002, doi: 10.1016/S0031-3203(02)00040-7.
[11] A. D. of Faces, “ORL face database,” Http://Www.Uk.Research.Att.Com/Facedatabase.Html, 2021, [Online]. Available:
http://www.cl.cam.ac.uk/research/dtg/attarchive/faceda.
[12] L. Spacek, “Face recognition data,” 2009, [Online]. Available: https://cmp.felk.cvut.cz/$~$spacelib/faces/.
[13] J. Lamarca, S. Parashar, A. Bartoli, and J. M. M. Montiel, “DefSLAM: Tracking and mapping of deforming scenes from monocular
sequences,” IEEE Transactions on Robotics, vol. 37, no. 1, pp. 291–303, 2021, doi: 10.1109/TRO.2020.3020739.
[14] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2,
pp. 91–110, Nov. 2004, doi: 10.1023/B:VISI.0000029664.99615.94.
[15] H. Benradi, A. Chater, and A. Lasfar, “A hybrid approach for face recognition using a convolutional neural network combined with
feature extraction techniques,” IAES International Journal of Artificial Intelligence, vol. 12, no. 2, pp. 627–640, 2023,
doi: 10.11591/ijai.v12.i2.pp627-640.
[16] K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” Proceedings of the IEEE Computer Society
Conference on Computer Vision and Pattern Recognition, vol. 2, 2003, doi: 10.1109/cvpr.2003.1211478.
[17] S. Berretti, A. Del Bimbo, P. Pala, B. Ben Amor, and M. Daoudi, “A set of selected SIFT features for 3D facial expression
recognition,” in 2010 20th International Conference on Pattern Recognition, Aug. 2010, vol. 76, no. 6, pp. 4125–4128,
doi: 10.1109/ICPR.2010.1002.
[18] Y. Kortli, M. Jridi, A. Al Falou, and M. Atri, “Face recognition systems: a survey,” Sensors, vol. 20, no. 2, p. 342, 2020,
doi: 10.3390/s20020342.
[19] Kwang In Kim, Keechul Jung, and Hang Joon Kim, “Face recognition using kernel principal component analysis,” IEEE Signal
Processing Letters, vol. 9, no. 2, pp. 40–42, Feb. 2002, doi: 10.1109/97.991133.
[20] V. L. Deringer, A. P. Bartók, N. Bernstein, D. M. Wilkins, M. Ceriotti, and G. Csányi, “Gaussian process regression for materials
and molecules,” Chemical Reviews, vol. 121, no. 16, pp. 10073–10141, 2021, doi: 10.1021/acs.chemrev.1c00022.
[21] A. Chater and A. Lasfar, “Robust Harris detector corresponding and calculates the projection error using the modification of the
weighting function,” International Journal of Machine Learning and Computing, vol. 9, no. 1, pp. 62–66, 2019,
doi: 10.18178/ijmlc.2019.9.1.766.
[22] A. Chater and A. Lasfar, “New approach to the identification of the easy expression recognition system by robust techniques (SIFT,
PCA-SIFT, ASIFT and SURF),” TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 18, no. 2, p. 695,
Apr. 2020, doi: 10.12928/telkomnika.v18i2.13726.
[23] A. Chater and A. Lasfar, “New approach to calculating the fundamental matrix,” International Journal of Electrical and Computer
Engineering, vol. 10, no. 3, pp. 2357–2366, 2020, doi: 10.11591/ijece.v10i3.pp2357-2366.
[24] C. C. Chang and C. J. Lin, “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and
Technology, vol. 2, no. 3, 2011, doi: 10.1145/1961189.1961199.
[25] A. B. Moreno, Á. Sánchez, J. F. Vélez, and F. J. Díaz, “Face recognition using 3D local geometrical features: PCA vs. SVM,”
Image and Signal Processing and Analysis, 2005. ISPA 2005. Proceedings of the 4th International Symposium, vol. 2005,
pp. 185–190, 2005, doi: 10.1109/ispa.2005.195407.
[26] Y. Lei, M. Bennamoun, and A. A. El-Sallam, “An efficient 3D face recognition approach based on the fusion of novel local low-
level features,” Pattern Recognition, vol. 46, no. 1, pp. 24–37, 2013, doi: 10.1016/j.patcog.2012.06.023.
[27] S. Li and W. Deng, “Deep facial expression recognition: A survey,” IEEE Transactions on Affective Computing, vol. 13, no. 3,
pp. 1195–1215, 2022, doi: 10.1109/TAFFC.2020.2981446.
[28] A. Rocha and S. K. Goldenstein, “Multiclass from binary: Expanding one-versus-all, one-versus-one and ECOC-based approaches,”
IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 2, pp. 289–302, 2014,
doi: 10.1109/TNNLS.2013.2274735.
[29] Y. Ma, S. Lao, E. Takikawa, and M. Kawade, “Discriminant analysis in correlation similarity measure space,” ACM International
Conference Proceeding Series, vol. 227, pp. 577–584, 2007, doi: 10.1145/1273496.1273569.
[30] M. Paci, L. Nanni, A. Lahti, K. Aalto-Setala, J. Hyttinen, and S. Severi, “Non-binary coding for texture descriptors in sub-cellular
and stem cell image classification,” Current Bioinformatics, vol. 8, no. 2, pp. 208–219, 2013, doi: 10.2174/1574893611308020009.


[31] B. C. Ko, “A brief review of facial emotion recognition based on visual information,” Sensors (Switzerland), vol. 18, no. 2, 2018,
doi: 10.3390/s18020401.

BIOGRAPHIES OF AUTHORS

Ahmed Chater was born on December 30, 1986, in Taounate. He holds a degree in Engineering Sciences and Techniques, specialty image processing, from the Laboratory of Systems Analysis, Information Processing and Industrial Management, Mohammed V University, Rabat (Mohammadia School). His research focuses on the segmentation and restoration of different types of color and grayscale images, the classification and recognition of facial expressions, and machine learning. He can be contacted at email: [email protected].

Hicham Benradi was born on October 7, 1985. He holds a bachelor's degree in Mobile
Application Engineering from the Ecole Supérieure de Technologie in Salé, and a master's
degree in Data Engineering and Software Development from the Faculty of Science at Mohamed
V University. He is a Ph.D. student at the Mohammadia School of Engineering in Rabat. His
research focuses on facial recognition methods and image processing. He can be contacted at
the following email address: [email protected].

Abdelali Lasfar was born on January 10, 1971 in Salé. He is a Professor of Higher
Education at Mohammed V Agdal University, Salé Higher School of Technology, Morocco.
His research focuses on compression methods, indexing by image content and image indexing,
and knowledge extraction from images. He can be contacted at email: [email protected].

New approach to similarity detection by combining technique three-patch local binary … (Ahmed Chater)
