Local Texture Description Framework For Texture Based Face Recognition
R REENA ROSE et.al: LOCAL TEXTURE DESCRIPTION FRAMEWORK FOR TEXTURE BASED FACE RECOGNITION
for nth order LDPs. This operator has been successfully applied for face recognition. Guo et al. [8] proposed Local Binary Pattern Variance (LBPV), which characterizes local contrast information in a one-dimensional LBP histogram. Lei et al. [16] introduced a method that merges information obtained from image space, scale and orientation, and proved that in face recognition their method outperforms one that considers an individual domain alone. In our earlier work [27], the performance of LBP, Multivariate Local Binary Pattern (MLBP) [19], LBPV, DLBP, Local Texture Pattern (LTP) and LDP was evaluated for different face recognition issues, and it was found that LTP and LDP outperform the other descriptors. Subrahmanyam Murala et al. [25] proposed Local Tetra Patterns (LTrPs) for content based image retrieval and proved that their method has high discrimination power.

Except for the Multi-scale Local Binary Pattern Histogram (MLBPH) [5], almost all existing local texture descriptors describe patterns by relating the closest neighbors around a pixel. When pixels at a certain distance apart are considered, it is likely that the features of different facial components such as the eyes, nose and mouth are acquired. This is the idea behind developing a general framework for describing a texture pattern over a local region with pixels at a certain distance apart. Both the face and the components of the face can be either circular or elliptical in nature. Hence, a new texture description that captures features along a circular or elliptical neighborhood is expected to have high discrimination power even when all the face recognition challenges are considered. Justified by these facts, a framework LTDF is proposed for either circular or elliptical neighborhoods.

1.2 OUTLINE OF THE PROPOSED APPROACH

The overall process of face recognition is illustrated in Fig.1. At first, all the images are converted into gray-scale images. Then they are preprocessed to align them into the same canonical position. Subsequently a certain region of interest is cropped from the images so as to prevent processing of unnecessary details. The system is then trained by extracting texture features from the gallery images by the proposed LTDF, which are stored separately for every image in the database. While testing a probe image, texture features are extracted from that image and are matched against all the images in the database using a nearest neighborhood classifier with the chi-square dissimilarity metric.

1.3 ORGANIZATION OF THE PAPER

The latter part of the paper is organized as follows. A brief review of the texture descriptors LBP, LTP, and LTrPs is reported in section 2. In section 3 the proposed LTDF is presented. Section 4 gives the face recognition algorithm in detail. Section 5 is devoted to the experimental results and discussion of the proposed LTDF model for five different conditions: expression variation, illumination variation, partial occlusion with spectacles, pose variation and general recognition. Finally, the conclusion is given in section 6.

2. RELATED WORK

2.1 TEXTURE DESCRIPTION

Texture is a term that characterizes the contextual property of an image. A texture descriptor can characterize an image as a whole; the Grey Level Co-occurrence Matrix (GLCM) [21] belongs to this category. Alternatively, it can characterize an image locally at the micro level and, by global texture description, at the macro level. In local description, the relationship between a pixel and its neighborhood is expressed in terms of local texture patterns. The occurrence frequency of such patterns (PTN) is collected in a histogram (H) using Eq.(1), which describes the global feature of the image. The texture descriptors LBP, LTP, and LTrPs follow the second approach.

H(p) = Σ_{i=1}^{N} Σ_{j=1}^{M} f(PTN(i, j), p)  (1)

p ∈ [1, P]  (2)

f(x, y) = 1 if x = y;  0 otherwise  (3)

where N × M represents the size of the input image and P the total number of patterns.

2.2 LOCAL BINARY PATTERN

Ojala et al. introduced the LBP operator [23] for texture classification, by which a texture pattern around a pixel in an image can be computed by comparing its gray value with those of its neighbors, as demonstrated in Fig.2.
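As a concrete illustration of the histogram construction in Eq.(1), the following sketch counts pattern occurrences over a pattern map; the toy pattern map and P = 3 are assumptions for illustration only.

```python
def pattern_histogram(ptn, num_patterns):
    """Build H per Eq.(1): H(p) counts how often pattern code p
    occurs in the N x M pattern map PTN, for p = 1..P."""
    hist = [0] * num_patterns
    for row in ptn:
        for code in row:
            # f(PTN(i,j), p) = 1 only when the codes match, Eq.(3)
            hist[code - 1] += 1
    return hist

# Toy 2 x 3 pattern map with P = 3 possible pattern codes.
ptn = [[1, 2, 2],
       [3, 1, 2]]
print(pattern_histogram(ptn, 3))  # [2, 3, 1]
```

The concatenation of such histograms over sub-regions then serves as the global feature of the image.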
ISSN: 0976-9102(ONLINE) ICTACT JOURNAL ON IMAGE AND VIDEO PROCESSING, FEBRUARY 2014, VOLUME: 04, ISSUE: 03
Fig.2. Pattern string computation from a sample image

LBP is calculated as follows,

LBP_{P,R} = Σ_{p=1}^{8} t(g_p, g_c), if U ≤ 2;  9, otherwise  (4)

where,

t(g_p, g_c) = 0 if g_p < g_c;  1 if g_p ≥ g_c  (5)

In the above equations g_c and g_p are the grey levels of the center pixel c and a vicinity pixel p respectively, P is the number of neighbors and R is the radius of the neighborhood. The pattern can be classified as either uniform or non-uniform. It is said to be uniform if the pattern string contains at most two transitions from 0 to 1 or vice versa. Usage of uniform patterns reduces the total number of bins required: image analysis requires only 9 bins for the uniform patterns and one extra bin for all non-uniform patterns, thus a total of 10 bins. The uniformity measure U is computed as follows,

U = s(t(g_8, g_c), t(g_1, g_c)) + Σ_{i=2}^{8} s(t(g_i, g_c), t(g_{i-1}, g_c))  (6)

where,

s(x, y) = 1 if x ≠ y;  0 otherwise  (7)

Subrahmanyam et al. [25] introduced the Local Tetra Pattern model, in which a local texture description of a pixel can be obtained using two things: 1) the direction of the pixel with respect to its horizontal and vertical neighbors, and 2) the magnitudes of the horizontal and vertical first-order derivatives. For every direction, a tetra pattern is first obtained using Eq.(14), which is further divided into three binary patterns using Eq.(16). Therefore the total number of binary patterns that an LTrP can give is 13, including the magnitude information. The detailed explanation with an example is available in [25].

Given an image I, the first-order derivatives at a pixel g_c for direction one can be calculated as,

I¹_{0°}(g_c) = I(g_h) − I(g_c)  (10)

I¹_{90°}(g_c) = I(g_v) − I(g_c)  (11)

where g_h and g_v are the grey values of the horizontal and vertical neighbors. The direction of the center pixel can be written as,

I¹_Dir(g_c) = 1, if I¹_{0°}(g_c) ≥ 0 and I¹_{90°}(g_c) ≥ 0
              2, if I¹_{0°}(g_c) < 0 and I¹_{90°}(g_c) ≥ 0
              3, if I¹_{0°}(g_c) < 0 and I¹_{90°}(g_c) < 0
              4, if I¹_{0°}(g_c) ≥ 0 and I¹_{90°}(g_c) < 0    (12)

The second-order LTrP²(g_c) is expressed as,

LTrP²(g_c) = { f(I¹_Dir(g_c), I¹_Dir(g_1)), f(I¹_Dir(g_c), I¹_Dir(g_2)), ..., f(I¹_Dir(g_c), I¹_Dir(g_P)) } | P = 8
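A minimal sketch of the labelling rules above, assuming the 8 neighbors are supplied in circular sampling order: lbp_label follows Eqs.(4)-(7) and tetra_direction follows Eq.(12); the sample grey values are illustrative only.

```python
def t(gp, gc):
    # Eq.(5): threshold a neighbor against the center pixel
    return 1 if gp >= gc else 0

def uniformity(neighbors, gc):
    # Eqs.(6)-(7): count bit transitions around the circular string,
    # including the wrap-around comparison of g_8 with g_1
    bits = [t(gp, gc) for gp in neighbors]
    return sum(1 for i in range(len(bits)) if bits[i] != bits[i - 1])

def lbp_label(neighbors, gc):
    # Eq.(4): sum of thresholded bits for uniform patterns (U <= 2),
    # a single shared label 9 for all non-uniform patterns
    if uniformity(neighbors, gc) <= 2:
        return sum(t(gp, gc) for gp in neighbors)
    return 9

def tetra_direction(dh, dv):
    # Eq.(12): quadrant of the (horizontal, vertical) derivative pair
    if dh >= 0 and dv >= 0:
        return 1
    if dh < 0 and dv >= 0:
        return 2
    if dh < 0 and dv < 0:
        return 3
    return 4

print(lbp_label([9, 9, 9, 1, 1, 1, 1, 1], 5))  # 3 (uniform, U = 2)
print(lbp_label([9, 1, 9, 1, 9, 1, 9, 1], 5))  # 9 (non-uniform, U = 8)
print(tetra_direction(-2, 3))                  # 2
```

Note that the circular transition count folds the s(t(g_8,g_c), t(g_1,g_c)) term of Eq.(6) into the same loop via Python's negative indexing.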
of binary patterns are 12 (4 × 3). The 13th binary pattern (LP) is obtained by using the magnitudes of the horizontal and vertical first-order derivatives as,

M_{I¹(g_p)} = √( (I¹_{0°}(g_p))² + (I¹_{90°}(g_p))² )  (17)

where,

R_i = √( (hr)²(vr)² / ( (hr)² sin²θ_i + (vr)² cos²θ_i ) )  (20)

and
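Eq.(20) is the standard polar radius of an ellipse at angle θ_i, so each neighbor i is sampled on the elliptical boundary; a quick numeric sketch, assuming hr and vr denote the horizontal and vertical radii:

```python
import math

def ellipse_radius(hr, vr, theta):
    # Eq.(20): distance from the centre to the elliptical
    # neighborhood boundary at sampling angle theta
    return math.sqrt((hr ** 2 * vr ** 2) /
                     (hr ** 2 * math.sin(theta) ** 2 +
                      vr ** 2 * math.cos(theta) ** 2))

# Sanity checks: at theta = 0 the radius is hr; at theta = pi/2 it is vr.
print(ellipse_radius(3.0, 2.0, 0.0))           # 3.0
print(ellipse_radius(3.0, 2.0, math.pi / 2))   # 2.0
```

With hr = vr the expression reduces to a constant radius, recovering the circular neighborhood as a special case.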
LTDFm using LBP alone. Fig.5 shows the LTDF coded faces of a sample subject from the AT&T database for the local texture descriptors LBP, LTP and LTrPs. Sample images used from different databases are displayed in Fig.6.

Testing Phase:

For a probe image, do the following,

a) Determine the global texture description of the image using steps a to e in the training phase.

b) Find the dissimilarity between the texture feature of the probe image and the texture features of the gallery images stored in the database using the Chi-square statistic as defined below,

χ²(H_G, H_P) = Σ_{i=1}^{n²m} (H_G(i) − H_P(i))² / (H_G(i) + H_P(i))  (26)

where H_G(i) is the ith feature value of the gallery image, H_P(i) is the ith feature value of the probe image, m is the number of patterns and n² is the number of sub-regions.

c) The gallery image which yields the least dissimilarity measure with the probe image is considered as the recognized one.

Fig.5. Feature maps obtained for an image from the AT&T database

(a) Sample images used for the expression variation experiment from the JAFFE database
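Steps (b) and (c) can be sketched as below, assuming the histograms are already concatenated over the n² sub-regions; the subject names and the small eps term (a guard against empty bins, not part of Eq.(26)) are illustrative assumptions.

```python
def chi_square(hg, hp, eps=1e-10):
    # Eq.(26): sum over all n^2 * m bins of the concatenated histograms;
    # eps guards against bins that are empty in both histograms
    return sum((g - p) ** 2 / (g + p + eps) for g, p in zip(hg, hp))

def recognize(probe_hist, gallery):
    # Step (c): nearest-neighbour rule -- the gallery image with the
    # smallest dissimilarity to the probe is the recognised identity
    return min(gallery, key=lambda item: chi_square(item[1], probe_hist))[0]

# Toy gallery of (label, feature histogram) pairs.
gallery = [("subject_A", [4, 0, 2, 1]),
           ("subject_B", [1, 3, 1, 2])]
print(recognize([1, 3, 2, 1], gallery))  # subject_B
```

The chi-square statistic weights each bin difference by the bin mass, so rarely occurring patterns contribute proportionally more than in a plain Euclidean comparison.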
5.1 RESULTS ON EXPRESSION VARIATION

Robustness of face recognition under different facial expressions is a most challenging issue. Facial expressions result in temporally deformed facial features that lead to false recognition. In order to test the effectiveness of the proposed model, an experiment is conducted on expression variation images from the JAFFE database [20]. The database contains 213 frontal face images of 10 Japanese female models with seven different expressions. Two types of experiments are conducted on this database: one to recognize faces with varying expression and another to understand facial expression.

Initially the performance of the proposed LTDFc is experimented for the standard LBP, by varying the number of rings and the difference in radius between the rings. The experiment is conducted by setting one neutral expression image per subject in the gallery set and the rest of the images in the probe set. The results are tabulated in Table.1(a). From the results it is evident that LTDFcs is capable of achieving a recognition accuracy of 94.08%, which is greater than that of LTDFcm with three rings, which yields an accuracy of 93.59% when d is 3. The higher the number of rings, the greater the total number of patterns represented by the model. This shows the effectiveness of the LTDFcs model.

Comparison with the tested base models is reported in Table.1(b) in terms of mean and standard deviation error. In 6-class face recognition, neutral expression images are not included, whereas in 7-class face recognition all the expressions are considered.

The results are evidence of the effectiveness of the proposed framework in recognizing faces with different expressions. The performance of the base models is enhanced when the proposed framework is applied to them. The reason behind this might be that expression variations affect local regions, and so when pixels that lie at a certain distance apart are used to form a texture pattern, it has high discrimination power.

To analyze the ability of LTDFs in recognizing different expressions, confusion matrices are obtained for the base models, LTDFcs and LTDFes, and are given in Table.1(c), Table.1(d) and Table.1(e) respectively.

It is observed from the table results that both LTDFcs and LTDFes outperform their base models in identifying facial expression. Moreover, LTDFes performs better in distinguishing different expressions, especially fear, happiness and surprise.

Table.1(b). Recognition rate (%) on the JAFFE database for several methods
Table.1(c). Confusion Matrix of 7-Class Facial Expression Recognition using LBP, LTP and LTrPs on JAFFE database
Table.1(d). Confusion Matrix for Facial Expression Recognition obtained by LTDFcs for JAFFE database
Table.1(e). Confusion Matrix for Facial Expression Recognition using LTDFes on JAFFE database
Table.2(b). Recognition rate (%) on the Essex-illu dataset for the one sample training problem for several methods

Methods               Recognition accuracy
LBP                   77.77
LTP                   93.82
LTrPs                 93.41
LTDFcs_LBP(5)         98.35
LTDFcs_LTP(5)         100
LTDFcs_LTrPs(3)       97.53
LTDFes_LBP(4,8)       98.35
LTDFes_LTP(2,6)       100
LTDFes_LTrPs(2,5)     97.53

Table.3(b). Recognition rate (%) on the Indian Face database for the one sample training problem (one frontal image per subject) for several methods

Methods               Recognition accuracy
LBP                   19.69
LTP                   36.36
LTrPs                 29.54
LTDFcs_LBP(6)         36.36
LTDFcs_LTP(6)         47.72
LTDFcs_LTrPs(6)       43.18
LTDFes_LBP(9,7)       42.42
LTDFes_LTP(5,1)       49.24
LTDFes_LTrPs(4,5)     45.45

It can be understood from Table.3 that the proposed LTDF is
more suited for pose variant images. For instance, the proposed LTDFcs using LBP yields an accuracy of 36.36%, whereas the base model LBP gives an accuracy of 19.69%. This shows that the proposed model performs better, producing an accuracy about 17 percentage points greater than that of LBP, and hence it has higher discrimination power.

It is also noticed that LTDFcm and LTDFes using LBP produce accuracies of 47.92% and 42.42% respectively. LTDFcm gives its highest result with four rings when d is 5 for all the rings. Both models seem to be more efficient when compared with LTDFcs for pose variant images. This is due to the fact that for pose variant images certain information can be lost, and so facial features are captured differently when these models are used. By this analysis it is very well understood that LTDFes performs better for pose variant images. The results prove the effectiveness of LTDFes.

Table.3(a). Performance evaluation of LTDFcm (loosely coupled DAISY) using LBP on pose variation
[Table.3(a): recognition rate (%) for 1-6 rings (10-60 bins) against the distance between rings (d); individual cell values not recovered in this copy]

5.4 RESULTS ON PARTIAL OCCLUSION WITH OBJECTS

Occlusions appear as local distortions away from a common face representing the human population [9]. In order to study the capability of the model in recognizing faces occluded with objects, frontal face images of 13 persons with spectacles are collected from the Essex database [7], and the image set is referred to in this paper as Essex-po. One image per individual is randomly chosen as the gallery set and 12 images per person are kept in the probe set. Table.4 gives the experimental results.

From the experimental results in Table.4(a), it is observed that the proposed LTDFcs is able to achieve a highest recognition accuracy of 97.43% for faces partially occluded with spectacles. Knowing the effectiveness of LTDFcs, the experiment is conducted with LTDFes. Experimental results reveal that the recognition accuracy obtained by LTDFes is very similar to that of LTDFcs. In addition, it is noticed that the result produced by the circular model using LBP is about 21% greater than that of its base model, which produces an accuracy of 76.28%. This shows the efficiency of LTDFs in recognizing images partially occluded with spectacles.

5.5 RESULTS ON GENERAL RECOGNITION

After observing the effects of the proposed LTDFcs and
Table.4(a). Performance evaluation of LTDFcm (loosely coupled DAISY) using LBP on partial occlusion with spectacle

Table.5(b). N-fold Cross-Validation result on AT&T database for several methods
number of bins. Moreover, it is observed that for LTDFs the elliptical shape performs better than the circular shape. Computation of patterns using eight pixels at a certain distance apart from a pixel causes many patterns to fall outside the boundary of an image. This decreases the number of patterns in every bin, which results in higher speed.

The proposed texture descriptor can be viewed as a new approach to describing texture patterns and hence is applicable to all texture descriptors which use the nearest neighbors of a pixel to describe a texture. In this paper the proposed method is experimented with for face recognition, but it can also be suitable for other pattern recognition applications such as fingerprint analysis, iris recognition, etc. In this work, a nearest neighborhood classifier with the chi-square distance metric is used, but the performance of the model can be improved by using other classifiers and distance metrics.

REFERENCES

[1] T. Ahonen, A. Hadid and M. Pietikainen, “Face description with local binary patterns: Application to face recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 12, pp. 2037-2041, 2006.
[2] A. Materka and M. Strzelecki, “Texture Analysis Methods – A Review”, Institute of Electronics, Technical University of Lodz, COST B11 report, pp. 1-33, 1998.
[3] P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman, “Eigenfaces vs. fisherfaces: recognition using class specific linear projection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 711-720, 1997.
[4] H. Cevikalp, M. Neamtu, M. Wilker and A. Barkana, “Discriminative common vectors for face recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 1, pp. 4-13, 2005.
[5] C. H. Chan, J. Kittler and K. Messer, “Multi-scale Local Binary Pattern Histograms for face recognition”, Proceedings of the International Conference on Advances in Biometrics, pp. 809-818, 2007.
[6] C. Geng and X. Jiang, “Fully Automatic Face Recognition Framework Based on Local and Global Features”, Machine Vision and Applications, Vol. 24, No. 3, pp. 537-549, 2013.
[7] Face Recognition Data, University of Essex, UK, The Data Archive, http://cswww.essex.ac.uk/mv/allfaces/index.html.
[8] Z. Guo, L. Zhang and D. Zhang, “Rotation invariant texture classification using LBP variance (LBPV) with global matching”, Pattern Recognition, Vol. 43, No. 3, pp. 706-719, 2010.
[9] H. Rashidy Kanan and K. Faez, “Recognizing faces using Adaptively Weighted Sub-Gabor Array from a single sample image per enrolled subject”, Image and Vision Computing, Vol. 28, No. 3, pp. 438-448, 2010.
[10] M. Heikkila and M. Pietikainen, “A texture based method for modeling the background and detecting the moving objects”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 4, pp. 657-662, 2006.
[11] M. Heikkila, M. Pietikainen and C. Schmid, “Description of interest regions with local binary patterns”, Pattern Recognition, Vol. 42, No. 3, pp. 425-436, 2009.
[12] http://www.anefian.com/research/face_reco.htm.
[13] X. Jiang, B. Mandal and A. Kot, “Eigenfeature regularization and extraction in face recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 3, pp. 383-394, 2008.
[14] X. Jiang, “Asymmetric principal component and discriminant analyses for pattern classification”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 5, pp. 931-937, 2009.
[15] M. Kirby and L. Sirovich, “Application of the Karhunen-Loeve procedure for the characterization of human faces”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 1, pp. 103-108, 1990.
[16] Z. Lei, S. Liao, M. Pietikainen and S. Z. Li, “Face recognition by exploring information jointly in space, scale and orientation”, IEEE Transactions on Image Processing, Vol. 20, No. 1, pp. 247-256, 2011.
[17] S. Liao, W. K. Law and A. C. S. Chung, “Combining microscopic and macroscopic information for rotation and histogram equalization invariant texture classification”, Proceedings of the 7th Asian Conference on Computer Vision – Volume Part I, pp. 100-109, 2006.
[18] S. Liao, M. W. K. Law and A. C. S. Chung, “Dominant Local Binary Patterns for texture classification”, IEEE Transactions on Image Processing, Vol. 18, No. 5, pp. 1107-1118, 2009.
[19] A. Lucieer, A. Stein and P. Fisher, “Multivariate texture-based segmentation of remotely sensed imagery for extraction of objects and their uncertainty”, International Journal of Remote Sensing, Vol. 26, No. 14, pp. 2917-2936, 2005.
[20] M. Lyons, S. Akamatsu, M. Kamachi and J. Gyoba, “Coding Facial Expressions with Gabor Wavelets”, Proceedings of the 3rd International Conference on Automatic Face and Gesture Recognition, pp. 200-205, 1998.
[21] M. Tuceryan and A. K. Jain, “Texture Analysis”, The Handbook of Pattern Recognition and Computer Vision, 2nd Edition, World Scientific Publishing Co., pp. 207-248, 1998.
[22] T. Ojala, M. Pietikainen and D. Harwood, “A comparative study of Texture Measures with Classification based on Featured Distributions”, Pattern Recognition, Vol. 29, No. 1, pp. 51-59, 1996.
[23] T. Ojala, M. Pietikainen and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, pp. 971-987, 2002.
[24] F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification”, Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 138-142, 1994.
[25] S. Murala, R. P. Maheshwari and R. Balasubramanian, “Local Tetra Patterns: A New Feature Descriptor for Content-Based Image Retrieval”, IEEE Transactions on Image Processing, Vol. 21, No. 5, pp. 2874-2886, 2012.
[26] A. Suruliandi and K. Ramar, “Local Texture Patterns – A Univariate Texture Model for Classification of Images”, Proceedings of the International Conference on Advanced Computing and Communications, pp. 32-39, 2008.
[27] A. Suruliandi, K. Meena and R. Reena Rose, “Local binary pattern and its derivatives for face recognition”, IET Computer Vision, Vol. 6, No. 5, pp. 480-488, 2012.
[28] E. Tola, V. Lepetit and P. Fua, “DAISY: An efficient dense descriptor applied to Wide-Baseline Stereo”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 5, pp. 815-830, 2010.
[29] T. Maenpaa and M. Pietikainen, “Texture Analysis with Local Binary Patterns”, Handbook of Pattern Recognition and Computer Vision, 3rd Edition, World Scientific, pp. 197-216, 2005.
[30] V. Jain and A. Mukherjee, The Indian Face Database, 2002, http://vis_www.cs.umass.edu/~vidit/IndianFaceDatabase/.
[31] X. Wang and X. Tang, “A unified framework for subspace face recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 9, pp. 1222-1228, 2004.
[32] S. Yan, D. Xu, B. Zhang, Q. Yang, H. Zhang and S. Lin, “Graph embedding and extensions: a general framework for dimensionality reduction”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 1, pp. 40-51, 2007.
[33] J. Yang, A. Frangi, J. Yang, D. Zhang and Z. Jin, “KPCA plus LDA: a complete kernel Fisher discriminant framework for feature extraction and recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 2, pp. 230-244, 2005.
[34] J. Ye, R. Janardan, C. Park and H. Park, “An optimization criterion for generalized discriminant analysis on undersampled problems”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 8, pp. 982-994, 2004.
[35] Y. Ben Jemaa and S. Khanfir, “Automatic Local Gabor features extraction for face recognition”, International Journal of Computer Science and Information Security, Vol. 3, No. 1, 2009.
[36] B. Zhang, Y. Gao, S. Zhao and J. Liu, “Local derivative pattern versus local binary pattern: Face recognition with high-order local pattern descriptor”, IEEE Transactions on Image Processing, Vol. 19, No. 2, pp. 533-544, 2010.
[37] G. Zhao and M. Pietikainen, “Dynamic texture recognition using local binary patterns with an application to facial expressions”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 6, pp. 915-928, 2007.
[38] W. Zheng and X. Tang, “Fast algorithm for updating the discriminant vectors of dual-space LDA”, IEEE Transactions on Information Forensics and Security, Vol. 4, No. 3, pp. 418-427, 2009.