13 Face Anti-spoofing in Biometric Systems

Z. Boulkenafet, Z. Akhtar, X. Feng, and A. Hadid
13.1 Introduction
Over the past decade, automated face recognition systems have been adopted in various applications because the face provides rich features and a strong biometric cue for recognizing individuals [8, 53]. In fact, facial recognition systems are already being used at large scale. For instance, the UIDAI program provides identity to all residents of India using the face, and Microsoft Kinect employs face recognition to access the dashboard and to sign in automatically to an Xbox Live profile. Likewise, face biometrics based access control is now a ubiquitous feature on mobile devices as an alternative to passwords, e.g., Android KitKat mobile OS, Lenovo VeriFace, Asus Smart-Logon, and Toshiba SmartFace. Since deployments of face recognition systems are growing year after year, people are also becoming more familiar with their use in daily life. Consequently, the security weaknesses of face recognition systems are becoming better known to the general public.
Fig. 13.1 The main modules of a face recognition system (camera, feature extractor, matcher, template database, and application device, e.g., a smartphone) and examples of points where it can be attacked, such as overriding the feature extractor (point 3) or modifying the stored template (point 6)
Fig. 13.2 In 2010, a passenger boarded a plane in Hong Kong with an old man mask and arrived
in Canada to claim asylum
However, the vulnerabilities of face systems to attacks are largely overlooked [7, 17, 18], even though it is not difficult nowadays to find web sites or even tutorial videos giving detailed guidance on how to attack face systems and gain unauthorized access. Eight different points at which the security of a face recognition system can be compromised are shown in Fig. 13.1.
In particular, existing systems are vulnerable to facial spoofing attacks [26]. A facial spoof attack is a process in which a fraudulent user subverts or attacks a face recognition system by masquerading as a registered user, thereby gaining illegitimate access and advantages [3, 6, 10, 15, 34, 42, 53]. A face spoofing attack is also known as a "direct attack" or "presentation attack". Face spoofing is also a major issue for companies selling face biometric-based identity management solutions [10]. For example, in 2010 a young passenger boarded a plane in Hong Kong while wearing an old-man mask and arrived in Canada to claim asylum, as shown in Fig. 13.2. It is worth mentioning that face spoofing does not require advanced technical skills, which increases the potential number of attackers. Moreover, as illustrated in Fig. 13.3, face images captured from spoofing attacks can look very similar to images captured from real faces, which makes face spoofing attacks very difficult to detect.
Fig. 13.3 Example of images captured from real faces (upper row) and from photo spoofing
attacks (lower row)
In [53], the authors reported experimental results showing that the probability of spoofed faces being accepted as a genuine user can be up to 70%, even when a state-of-the-art Commercial Off-The-Shelf (COTS) face recognition system is employed. Hence, one can extrapolate that current COTS face systems are generally not designed to effectively counteract spoofed faces. This vulnerability is now listed in the National Vulnerability Database of the National Institute of Standards and Technology (NIST) in the USA.
The quintessential face anti-spoofing technique is face liveness detection, which aims at disambiguating live human face samples from spoof artifacts [10, 45]. A large number of face liveness detection methods have been proposed in the literature [10, 12, 13, 36, 39, 40, 46, 49, 56]. Besides liveness detection, multibiometrics is also considered a natural countermeasure against face spoofing attacks [2, 4, 5, 22].
Face liveness detection is an onerous task that imposes several demanding requirements [10, 19]: (1) non-invasiveness, the method should not be harmful to the user or require excessive contact with the user; (2) speed, the outcome has to be generated in a very short interval; (3) performance, besides having a good spoof detection rate, the anti-spoofing module must not decrease the recognition performance of the main face recognition system; and (4) integrability, the method should be easy to embed in already deployed face recognition systems without requiring new hardware.
Although great advances have been achieved in face anti-spoofing over the last decade, face spoofing techniques have also evolved and become more and more sophisticated. Inevitably, many of the existing face anti-spoofing techniques are still vulnerable to spoofing, including various commercial systems that claim to have some degree of face spoof detection embedded. A comprehensive survey on face spoofing and anti-spoofing [33] clearly points out that face spoofing remains a huge challenge for existing face recognition systems. Namely, there are many issues to be addressed in the detection of face spoofing attacks. Specifically, existing face anti-spoofing techniques suffer from two main drawbacks: (1) lack of generalization: current approaches are spoof material- and/or trait-dependent, such that feature descriptors proposed for face spoofing may not function effectively if employed for iris or fingerprint spoofing and vice versa; likewise, the performance of face liveness detectors drops drastically when they are presented with novel fabrication materials (not used during the system design/training stage); and (2) high error rates: none of the methods has yet been shown to reach acceptably low error rates.
In this chapter, after a thorough review of the state of the art in face spoofing and anti-spoofing, we present a novel software-based face anti-spoofing method based on color texture analysis. The color Local Binary Patterns (LBP) descriptor [27] is utilized to extract joint color-texture information from the face images. Specifically, uniform LBP histograms are extracted from the individual image bands and subsequently concatenated to form the final descriptor. To gain insight into which color space is more discriminative for distinguishing real faces from fake ones, three color spaces, namely RGB, HSV, and YCbCr, are considered independently. Extensive experiments on two challenging benchmark datasets, namely the CASIA face anti-spoofing and Replay-Attack databases, clearly indicate that the color texture based method outperforms its gray-scale counterparts in detecting various types of spoofing attacks. Moreover, an inter-database evaluation shows very promising generalization capabilities of the proposed method compared to state-of-the-art techniques.
The rest of the chapter is organized as follows. In Sect. 13.2, an exhaustive
overview of published works in the field of face spoofing and anti-spoofing is
outlined. A brief description of large and publicly available face spoofing databases
is presented in Sect. 13.2.3. A case study based on the use of color texture analysis
as face anti-spoofing tool is introduced in Sect. 13.3. Future research directions and
conclusions are described in Sects. 13.4 and 13.5, respectively.
13.2 Face Spoofing and Anti-spoofing

In this section, we present an overview of face spoofing attacks and of the liveness detection methods proposed to counter them.
Fig. 13.4 Examples of face spoofing using (a) a photograph, (b) a video, (c) a 3D mask, (d) a sketch, (e) a reverse-engineered face image, (f) make-up (skillful application of make-up to look like Michael Jackson), (g) plastic surgery (this boy underwent excessive plastic surgery to look like Justin Bieber), and (h) a face generated using computer graphics
Despite a great deal of progress in face recognition systems, face spoofing still poses a serious threat. As pointed out by Akhtar [2] and Akhtar et al. [10], most of the existing academic and commercial facial recognition systems may be spoofed by (see Fig. 13.4): (1) a photo of a genuine user; (2) a video of a genuine user; (3) a 3D face model (mask) of a genuine user; (4) a face image reverse-engineered from the template of a genuine user; (5) a sketch of a genuine user; (6) an impostor wearing specific make-up to look like a genuine user; (7) an impostor who underwent plastic surgery to look like a genuine user; or (8) a photo or a video of a genuine user generated using computer graphics. The easiest and most common face spoofing attack is to present a photograph or a video of a legitimate user to the face recognition system.
Fig. 13.5 Spoofing attacks might be detected by hardware- or software-based presentation attack
detection methods. Software-based methods are inexpensive and non-invasive because they use
signal-processing algorithms
To counter face spoofing, numerous presentation attack detection methods have been proposed [26, 33, 53]. The published face anti-spoofing techniques can be broadly grouped into four categories: (1) motion analysis based methods, (2) texture analysis based methods, (3) image quality analysis based methods, and (4) hardware based methods (Fig. 13.5).
Methods in this category are mainly based on spontaneous movement cues that help distinguish real faces from two-dimensional spoofing attacks such as photographs and videos. They analyze the motion features of the input to determine whether the face sample presented to the system is real. For instance, it is well known that human eye-blinks occur roughly once every 2-4 s. Pan et al. [45] therefore exploited this spontaneous eye-blinking cue to devise a liveness detection method against photo spoofing. In particular, the proposed algorithm utilizes an undirected conditional random field framework to model eye blinking, while relaxing the independence assumption of generative modelling and the state dependence limitations of hidden Markov models. Likewise, Tan et al. [49] exploited another motion cue, namely that a real 3D human face moves significantly differently from a planar object, and such deformation patterns can be used for face anti-spoofing. The features are extracted using variational retinex-based and difference-of-Gaussians (DoG) [39] based approaches and are then used for live or spoof classification. However, the above-mentioned methods are very vulnerable to spoofing attacks using videos. To overcome this limitation, the authors of [36] designed an algorithm that relies on a short sequence of images and a binary detector. The method captures and tracks the subtle movements of different selected facial parts using a simplified optical flow analysis followed by a heuristic classifier. The same authors also presented another technique that fuses the different expert systems introduced in their former works as liveness attributes, e.g., eye-blinks and mouth movements. Bao et al. [13] proposed a system that estimates face motion via optical flow to detect attacks produced with planar media such as prints or screens.
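To make the motion-cue idea concrete, the following sketch (our own illustration, not the implementation of any of the cited works) compares dense optical-flow magnitudes inside and outside a detected face box over a short frame sequence: a planar photo or screen tends to move rigidly with its background, whereas a live face moves somewhat independently of it. The face box, scoring rule, and threshold are assumptions.

```python
# Illustrative sketch only: a simple optical-flow motion cue for planar-attack
# detection, loosely in the spirit of the methods discussed above. The face box,
# threshold, and scoring rule are assumptions for illustration.
import cv2
import numpy as np

def flow_liveness_score(frames, face_box):
    """frames: list of gray-scale images; face_box: (x, y, w, h)."""
    x, y, w, h = face_box
    ratios = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        face_mag = mag[y:y + h, x:x + w].mean()
        bg = mag.copy()
        bg[y:y + h, x:x + w] = np.nan
        bg_mag = np.nanmean(bg)
        # A planar spoof moves coherently with its background, so the ratio
        # between face and background motion stays close to 1.
        ratios.append(face_mag / (bg_mag + 1e-6))
    return float(np.mean(ratios))

# Example decision rule (threshold chosen arbitrarily for illustration):
# score = flow_liveness_score(gray_frames, detected_face_box)
# label = "live" if abs(score - 1.0) > 0.5 else "possible planar spoof"
```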
The methods in this category assume that the surface properties (e.g., pigments) of real faces differ from those of spoof media; thereby, examining skin properties such as skin texture and skin reflectance can be helpful for spoof detection. The most commonly detectable texture patterns due to such artifacts are printing failures and blurring effects. Contrary to motion analysis based methods, techniques in this category need only a single static image rather than video data. Consequently, these algorithms are generally faster and more user-friendly. Li et al. [40] described a method for detecting print-attack face spoofing by exploiting differences in the 2-D Fourier spectra of live and spoofed images. The method presumes that photographs are normally smaller in size and contain fewer high-frequency components than real faces. It only works well for down-sampled photos of the attacked identity and is likely to fail for higher-quality samples. The authors of [9, 11, 12, 35, 37, 55] developed micro-texture analysis based methods to detect printed photo attacks using the "bidirectional reflectance distribution function (BRDF)", "local binary patterns (LBP)", the "modified census transform (MCT)", "local phase quantization (LPQ)", "vector quantization (VQ)", and the "CENsus TRansform hISTogram (CENTRIST)", respectively. One drawback of these methods is the requirement of reasonably sharp input images.
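As a concrete illustration of the frequency-domain cue behind Li et al. [40], the sketch below measures the fraction of 2-D Fourier spectrum energy located above a radial frequency cutoff; the exact descriptor and thresholds used in [40] differ, so this is only an assumption-laden approximation.

```python
# Illustrative approximation of a 2-D Fourier high-frequency energy cue for
# print-attack detection; the radius threshold is a hypothetical parameter.
import numpy as np

def high_frequency_ratio(gray_face, radius_fraction=0.25):
    """gray_face: 2-D float array; returns energy fraction outside a low-pass disk."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_face))
    energy = np.abs(spectrum) ** 2
    h, w = gray_face.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    cutoff = radius_fraction * min(h, w) / 2
    high = energy[dist > cutoff].sum()
    return float(high / (energy.sum() + 1e-12))

# Recaptured photos tend to yield a lower ratio than live faces of the same size,
# but the cue degrades for high-quality, high-resolution spoofs.
```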
Hardware based methods, on the other hand, rely on additional sensing technologies. The majority of the solutions in this category employ light spectra outside the visible spectrum (e.g., 3D depth [52], complementary infrared (CIR) or near infrared (NIR) images [56]) to compare the reflectance information of real faces and spoof materials. To this aim, a specific setup of LEDs and photodiodes operating at two different wavelengths is used. Lately, the authors of [30] carried out a study on thermal imaging for face anti-spoofing by collecting a large database of thermal face images of real and spoofed access attempts. Also, multimodal biometric systems are commonly believed to be a natural anti-spoofing technique [2]. To this end, Chetty et al. [22] combined face and voice and analyzed the correlation between the lip movements and the speech being produced; in particular, the system used a microphone and a speech analyzer for anti-spoofing. On the whole, although hardware-based solutions tend to provide better results and performance, they require an extra piece of hardware, thereby increasing the cost of the system.
A summary with relevant features of the most representative works in face liveness
detection is presented in Table 13.1.
The Replay-Attack Database [23] consists of video recordings of real accesses and attack attempts corresponding to 50 clients (see Fig. 13.7 for cropped and normalized example images). Using the built-in camera of a 13-inch MacBook Air laptop, a number of videos were recorded of each person in the database under two illumination conditions: controlled, i.e., a uniform background with a fluorescent lamp illuminating the scene, and adverse, i.e., a non-uniform background with daylight as the only source of illumination. Under the same conditions, high resolution pictures and videos were taken of each person using a Canon PowerShot SX150 IS camera and an iPhone 3GS camera. These recordings were used to generate the fake face attacks.
Fig. 13.6 Cropped and normalized example face images from the CASIA FASD. From top to
bottom: low, normal, and high quality images. From the left to the right: real faces and the
corresponding warped photo, cut photo, and video replay attacks
Three types of attacks were designed: (1) print attacks, i.e., high resolution pictures were printed on A4 paper and displayed to the camera; (2) mobile attacks, i.e., high resolution pictures and videos were displayed on the iPhone 3GS screen; and (3) high definition attacks, i.e., the pictures and videos were displayed on an iPad screen with a resolution of 1024 by 768 pixels. According to the support used to present the fake faces in front of the camera, two types of attacks were defined: hand-based attacks, i.e., the attack devices were held by the operator, and fixed-support attacks, i.e., the attack devices were set on a fixed support. For the evaluation, the 50 subjects were divided into three subject-disjoint subsets for training, development, and testing.
Texture analysis of gray-scale face images can provide sufficient means to reveal the
recapturing artifacts of fake faces if the image resolution (quality) is good enough
to capture the fine details of the observed face. However, if we take a close look
at the cropped facial images of a genuine human face and corresponding fake ones
in Fig. 13.8, it is basically impossible to explicitly name any textural differences
between them because the input image resolution is not high enough.
Fig. 13.7 Cropped and normalized example face images from the Replay-Attack Database. The
first row presents images taken from the controlled scenario, while the second row corresponds to
the images from the adverse scenario. From the left to the right: real faces and the corresponding
high definition, mobile and print attacks
Fig. 13.8 Example of a genuine face and corresponding print and video attacks in RGB, gray-
scale and YCbCr colour space
To emulate the color perception properties of the human visual system, color mapping algorithms give great importance to preserving the spatially local luminance variations at the cost of the chroma information [16]. The human eye is indeed more sensitive to luminance than to chroma; thus, fake faces still look very similar to genuine ones when the same facial images are shown in color (see Fig. 13.8). However, if only the corresponding chroma components are considered, some characteristic differences can already be noticed. While the gamut mapping and other artifacts cannot be observed clearly in the gray-scale or color images, they are very distinctive in the chrominance channels. Thereby, color texture analysis of the chroma images can be used for detecting these gamut mapping and other (color) reproduction artifacts.
Inspired by the aforementioned observations, we propose in this work a new face anti-spoofing method based on color texture analysis. The color Local Binary Patterns (LBP) descriptor proposed in [27] is used to extract the joint color-texture information from the face images. In this descriptor, uniform LBP histograms are extracted from the individual image bands and subsequently concatenated to form the final descriptor. To gain insight into which color space is more discriminative for distinguishing real faces from fake ones, we considered three color spaces, namely RGB, HSV, and YCbCr.
RGB is the most widely used color space for sensing, representing, and displaying color images. However, its application in image analysis is quite limited due to the high correlation between the three color components (red, green, and blue) and its imperfect separation of the luminance and chrominance information.
In this work, we considered two other color spaces in addition to RGB to explore the color texture information: HSV and YCbCr. Both of these color spaces are based on the separation of the luminance and the chrominance information. In the HSV color space, the hue and saturation dimensions define the chrominance of the image, while the value dimension corresponds to the luminance. The YCbCr space separates the RGB components into luminance (Y), chrominance blue (Cb), and chrominance red (Cr). More details about these color spaces can be found, e.g., in [41]. Since the HSV and YCbCr color spaces represent the luminance and chrominance components differently, they can provide complementary facial color texture descriptions for spoofing detection.
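For reference, obtaining the HSV and YCbCr representations of a face crop is straightforward with a standard image library; the snippet below is a minimal sketch using OpenCV (which loads images in BGR order and labels the second space YCrCb), with a hypothetical input file name.

```python
# Minimal sketch: obtaining the HSV and YCbCr representations of a face crop
# with OpenCV (which reads images in BGR order and uses the name "YCrCb").
import cv2

face_bgr = cv2.imread("face_crop.png")          # hypothetical input file
face_bgr = cv2.resize(face_bgr, (64, 64))       # normalized face size used in this work

face_hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
face_ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)  # channels: Y, Cr, Cb
face_gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)    # gray-scale baseline
```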
The LBP descriptor proposed by Ojala et al. [44] is a highly discriminative gray-scale texture descriptor. For each pixel in an image, a binary code is computed by thresholding a circularly symmetric neighborhood of P pixels on a circle of radius R with the value of the central pixel. A code is called uniform if it contains at most two bitwise transitions when traversed circularly; for an image band i, the number of transitions is given by the uniformity measure

U^{(i)} = \left| \delta\bigl(r_{P-1}^{(i)} - r_{c}^{(i)}\bigr) - \delta\bigl(r_{0}^{(i)} - r_{c}^{(i)}\bigr) \right| + \sum_{n=1}^{P-1} \left| \delta\bigl(r_{n}^{(i)} - r_{c}^{(i)}\bigr) - \delta\bigl(r_{n-1}^{(i)} - r_{c}^{(i)}\bigr) \right|,    (13.2)

where r_{n}^{(i)} is the value of the n-th neighboring pixel, r_{c}^{(i)} is the value of the central pixel, and \delta(x) = 1 if x \geq 0 and 0 otherwise.
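A minimal sketch of the color LBP descriptor described above, built on scikit-image's LBP implementation: 59-bin uniform (P = 8, R = 1) histograms are computed per channel and concatenated. The histogram normalization is an assumption on our part.

```python
# Sketch of the color LBP descriptor: 59-bin uniform LBP (P=8, R=1) histograms
# computed per channel and concatenated. Normalization is an assumption here.
import numpy as np
from skimage.feature import local_binary_pattern

N_BINS = 59  # number of non-rotation-invariant uniform patterns for P = 8

def uniform_lbp_hist(channel, P=8, R=1):
    codes = local_binary_pattern(channel, P, R, method="nri_uniform")
    hist, _ = np.histogram(codes.ravel(), bins=N_BINS, range=(0, N_BINS))
    return hist / max(hist.sum(), 1)  # L1 normalization (assumed)

def color_lbp_descriptor(image_3ch):
    """image_3ch: H x W x 3 array in a chosen color space (e.g., HSV or YCbCr)."""
    return np.concatenate([uniform_lbp_hist(image_3ch[:, :, c]) for c in range(3)])

# For the HSV + YCbCr fusion discussed later, the two 177-dimensional descriptors
# are simply concatenated into a 354-dimensional feature vector:
# feat = np.concatenate([color_lbp_descriptor(face_hsv), color_lbp_descriptor(face_ycrcb)])
```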
To detect spoofing attacks, the color LBP features extracted from face images are
fed into a Support Vector Machine (SVM) classifier as shown in Fig. 13.9.
In our experiments, we followed the predefined protocols of the two databases, which allows a fair comparison against the state of the art. On the CASIA-FA database, the model parameters are trained and tuned using fourfold subject-disjoint cross-validation on the training set, and the results are reported in terms of the Equal Error Rate (EER) on the test set. The Replay-Attack database also provides a separate validation set for tuning the model parameters; thus, the results are given in terms of the EER on the development set and the Half Total Error Rate (HTER) on the test set.
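For clarity, the sketch below shows one way to compute the EER threshold and the HTER from classifier scores, assuming that higher scores indicate a more genuine-looking sample; this is a generic illustration, not the exact evaluation code of the benchmarks.

```python
# Sketch of the evaluation metrics used above: the EER threshold is found on the
# tuning set, and the HTER is computed on the test set at that threshold.
# Convention assumed here: higher score = more genuine-like.
import numpy as np

def far_frr(genuine_scores, attack_scores, threshold):
    far = np.mean(np.asarray(attack_scores) >= threshold)   # attacks accepted
    frr = np.mean(np.asarray(genuine_scores) < threshold)   # genuine rejected
    return far, frr

def eer_threshold(genuine_scores, attack_scores):
    thresholds = np.sort(np.concatenate([genuine_scores, attack_scores]))
    gaps = [abs(np.subtract(*far_frr(genuine_scores, attack_scores, t)))
            for t in thresholds]
    return thresholds[int(np.argmin(gaps))]

def hter(genuine_scores, attack_scores, threshold):
    far, frr = far_frr(genuine_scores, attack_scores, threshold)
    return 0.5 * (far + frr)
```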
In all our experiments, we used the LBP_{8,1} operator (i.e., P = 8 and R = 1) to extract the textural features from the normalized (64 × 64) face images. To capture both the appearance and the motion variations of the face images, we average the features within time windows of 3 and 4 s on the CASIA-FA and Replay-Attack databases, respectively. In order to get more training data, these time windows are taken with a 2 s overlap in the training stage. In the test stage, only the average of the features within the first time window is used to classify each video. The classification was done using a Support Vector Machine (SVM) [21] with an RBF kernel.
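The sketch below mirrors this pipeline: per-frame color LBP descriptors are averaged over overlapping time windows, and the averaged vectors are fed to an RBF-kernel SVM. The frame rate and SVM hyper-parameters are placeholders rather than the exact values used in our experiments.

```python
# Sketch of the classification pipeline: average per-frame descriptors over a
# time window, then train an RBF-kernel SVM. Hyper-parameters are placeholders.
import numpy as np
from sklearn.svm import SVC

def window_average(frame_features, fps=25, window_s=3, step_s=2):
    """frame_features: (n_frames, dim) array of per-frame color LBP descriptors."""
    win, step = int(fps * window_s), int(fps * step_s)
    return np.array([frame_features[s:s + win].mean(axis=0)
                     for s in range(0, len(frame_features) - win + 1, step)])

# Training: overlapping windows from all training videos (X: features, y: 0/1 labels)
# X_train = np.vstack([window_average(f) for f in training_videos_features])
# clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# Testing: only the first window of each video is scored, as in the protocol above
# test_feat = window_average(test_video_features)[0]
# label = clf.predict(test_feat.reshape(1, -1))
```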
To compare the holistic LBP descriptions of genuine and fake faces in Fig. 13.10, we use the chi-square distance

d_{\chi^2}(H_x, H_y) = \sum_{i=1}^{N} \frac{\bigl(H_x(i) - H_y(i)\bigr)^2}{H_x(i) + H_y(i)},    (13.4)
where H_x and H_y are two LBP histograms with N bins. In addition to its simplicity, the chi-square distance has been shown to be effective for measuring the similarity between two LBP histograms. From Fig. 13.10, we can observe that the chi-square distance between the gray-scale LBP histograms of a genuine face and a printed fake face is smaller than the distance between two genuine face images. Moreover, the chi-square distance between the genuine face and the video attack does not differ significantly from the distance between the texture descriptions of two genuine faces. It is worth noting, however, that similarity measured with the pure chi-square distance does not necessarily indicate that there are no intrinsic disparities in the gray-scale texture representation that could be exploited for face spoofing detection.
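For completeness, the chi-square distance of Eq. (13.4) can be implemented in a few lines; the small epsilon guarding against empty bins is our own addition.

```python
# Direct implementation of the chi-square distance between two LBP histograms,
# as defined in Eq. (13.4).
import numpy as np

def chi_square_distance(hx, hy, eps=1e-12):
    hx, hy = np.asarray(hx, dtype=float), np.asarray(hy, dtype=float)
    return float(np.sum((hx - hy) ** 2 / (hx + hy + eps)))  # eps avoids 0/0 on empty bins
```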
Tables 13.2 and 13.3 present the results of the different LBP based color texture descriptions and their gray-scale counterparts.
Fig. 13.10 The similarity between the holistic LBP descriptions extracted from genuine faces and
fake ones. The original RGB images are shown on the left. In the middle, the similarity between
the LBP descriptions extracted from the gray-scale images is presented. The similarity between the
LBP descriptions extracted from the different color channels in the YCbCr color space is presented
on the right
From these results, we can clearly see that the color texture features significantly improve the performance compared to the gray-scale LBP-based countermeasure. When comparing the different color spaces, the YCbCr based representation yields the best overall performance. The color LBP features extracted from the YCbCr space improve the performance on the CASIA-FA and Replay-Attack databases by 64.5% and 81.4%, respectively, compared to the gray-scale LBP features.
From Table 13.2, we can also observe that the features extracted from the HSV color space seem to be more effective against video attacks than those extracted from the YCbCr color space. Thus, we studied the benefits of combining the two color texture representations by fusing them at the feature level. The color LBP descriptions from the two color spaces were concatenated, so the size of the resulting histogram is 59 × 3 × 2 = 354 bins. The results in Tables 13.2 and 13.3 indicate that a significant performance enhancement is obtained, thus confirming the benefits of combining the different facial color texture representations.
Table 13.4 compares the performance of our proposed countermeasure against state-of-the-art face anti-spoofing methods. From this table, we can notice that our method outperforms the state-of-the-art results on the challenging CASIA-FA database and yields very competitive results on the Replay-Attack database.
To gain insight into the generalization capabilities of our proposed method, we conducted a cross-database evaluation. In these experiments, the countermeasure was trained and tuned on one database (CASIA-FA or Replay-Attack) and then tested on the other. The results of these experiments are summarized in Table 13.5.
In the first experiment, we evaluated the performance on the CASIA-FA database while training and tuning the countermeasure on the Replay-Attack database. Table 13.5 reports HTER values of 47.5% and 43.9% on the training and the testing sets, respectively. In the second experiment, when the countermeasure is trained and tuned on the CASIA-FA database and then tested on the Replay-Attack database, the HTER values on the development and the test sets are 22.5% and 20.6%, respectively. Although these results are very competitive with those of state-of-the-art methods, especially on the Replay-Attack database, they are still degraded compared to the intra-test results (when the countermeasure is trained and tested on the same database).
Complex classifiers, like SVM-RBF, might be more sensitive to over-fitting than simpler classification schemes. The two face anti-spoofing benchmark datasets are rather small, and the variations in the provided data are also limited, which increases the chance of over-fitting with powerful texture features and complex classification schemes. Inspired by the observations, e.g., in [23, 29], we propose to mitigate this problem by using a linear SVM instead of an SVM-RBF.
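In a typical scikit-learn based implementation, this amounts to swapping the classifier object; the regularization values below are placeholders.

```python
# Replacing the RBF-kernel SVM with a linear SVM to reduce over-fitting in the
# cross-database setting; regularization values are placeholders.
from sklearn.svm import SVC, LinearSVC

rbf_clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # intra-database baseline
linear_clf = LinearSVC(C=1.0, max_iter=10000)       # simpler model for cross-database tests

# Both expose the same fit/predict interface:
# rbf_clf.fit(X_train, y_train);    rbf_clf.predict(X_test)
# linear_clf.fit(X_train, y_train); linear_clf.predict(X_test)
```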
The experiments using linear SVM models show very interesting results compared to those of the SVM-RBF models. On the CASIA-FA database, the HTER values on the training and the testing sets are reduced to 38.6% and 37.6%, respectively. On the Replay-Attack database, the HTER values are reduced to 17.7% and 16.7% (on the development and the test sets, respectively), which are comparable to those obtained with the gray-scale LBP descriptor in the intra-test evaluation (15.3% and 15.6%).
The model optimized on the Replay-Attack dataset is not able to generalize as well as the model trained on CASIA-FA. The reason is that the CASIA-FA dataset contains more variations in the collected data (e.g., imaging quality and proximity between the camera and the face) compared to the Replay-Attack database. Therefore, the model optimized for the Replay-Attack database has difficulty performing well under the new environmental conditions. One way to deal with this problem is to train the countermeasure on a joint training set obtained by combining the training sets of both databases, as described in [29].
13.4 Open Issues and Future Research Directions

In this section, we discuss some open issues and research directions for face anti-spoofing.
Many visual cues for non-intrusive spoofing detection have already been explored, and impressive results have been reported on individual databases. However, the varying nature of spoofing attacks and acquisition conditions makes it impossible to predict how well a single anti-spoofing technique, e.g., facial texture analysis, will generalize in real-world applications. Moreover, we cannot foresee all possible attack scenarios and cover them in databases, because the human imagination always finds new tricks to fool existing systems. As one obviously cannot foresee all possible types of fake faces, a one-class approach modeling only the genuine facial texture distribution could be a promising direction; such an approach has been successfully applied in voice anti-spoofing [1], for instance.
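One way to realize such a one-class approach is to fit a model on genuine-face descriptors only and flag anything outside the learned distribution as a potential attack. The sketch below uses a one-class SVM purely as an illustrative choice; it is not the method used in [1], and the hyper-parameters are assumptions.

```python
# Illustrative one-class formulation: model only the genuine-face feature
# distribution and treat outliers as potential spoofing attacks. The choice of
# a one-class SVM and its hyper-parameters are assumptions for illustration.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_genuine_model(genuine_features, nu=0.05):
    """genuine_features: (n_samples, dim) descriptors from genuine faces only."""
    return OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(genuine_features)

def is_attack(model, probe_feature):
    # OneClassSVM.predict returns +1 for inliers (genuine-like) and -1 for outliers.
    return model.predict(np.asarray(probe_feature).reshape(1, -1))[0] == -1
```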
Face images captured from face spoofs may look visually very similar to images captured from live faces. Thus, face spoofing detection may be difficult to perform based only on a single face image or a relatively short video sequence. Depending on the imaging and fake face quality, it is nearly impossible, even for humans, to tell the difference between a genuine face and a fake one without any scene information or unnatural motion or facial texture patterns. However, we can immediately notice if there is something suspicious in the view, e.g., if someone is holding a video display or a photograph in front of the camera. Therefore, scenic cues can be exploited to determine whether a display medium is present in the observed scene, as shown in Fig. 13.11.
Liveness and motion analysis based spoofing detection is rather difficult to perform by observing only spontaneous facial motion during short video sequences. The problem can be simplified by prompting the user to perform a specific random action or challenge (such as smiling or moving the head to the right). The user's response (if any) provides liveness evidence. This is called the challenge-response approach to spoofing detection. The drawback of such an approach is that it requires user cooperation, thus making the authentication process more time-consuming. A further advantage of non-intrusive techniques is that, with challenge-response based countermeasures, it is rather easy for an attacker to deduce which liveness cues need to be fooled. For instance, a request to utter words suggests that analysis of synchronized lip movement and lip reading is utilized, whereas rotating the head in a certain direction reveals that the 3D geometry of the head is measured. For non-intrusive approaches, it is usually not known which countermeasures are used; thus, the system might be harder to deceive [46].
13.5 Conclusions
Many methods and approaches have been proposed to discriminate between real and fake face images. These techniques can be grouped into four categories: motion analysis based methods, texture analysis based methods, image quality analysis based methods, and hardware based methods. In this work, we proposed to approach the problem of face anti-spoofing from the color texture analysis point of view. Instead of the gray-scale texture features used in previous works, we investigated the importance of joint color texture features in discriminating between real and fake images. The LBP texture features were extracted from the individual image channels of the RGB, HSV, and YCbCr color spaces and then concatenated to form the final descriptors. Extensive experiments on two challenging spoofing databases, CASIA-FA and Replay-Attack, showed excellent results. On the CASIA-FA database, the face representation based on the combination of the HSV and YCbCr color spaces beat the state of the art. Furthermore, in our inter-database evaluation, the proposed approach showed very promising generalization capabilities. As future work, more experiments should be conducted in order to gain more insight into color texture based face anti-spoofing and to derive problem-specific facial color representations. Besides further improving the performance of the color texture features, we have also discussed some open issues and research directions that should be investigated in the future to enhance the robustness of biometric systems against spoofing attacks.
References
27. J.Y. Choi, K.N. Plataniotis, Y.M. Ro, Using colour local binary pattern features for face
recognition, in Proceedings of IEEE International Conference on Image Processing (ICIP)
(2010), pp. 4541–4544
28. T. de Freitas Pereira, A. Anjos, J.M. De Martino, S. Marcel, LBP-TOP based countermeasure
against face spoofing attacks, in Asian Conference on Computer Vision Workshops (2012),
pp. 121–132
29. T. de Freitas Pereira, A. Anjos, J.M. De Martino, S. Marcel, Can face anti-spoofing counter-
measures work in a real world scenario? in International Conference on Biometrics (ICB),
2013 (2013), pp. 1–8
30. T.I. Dhamecha, A. Nigam, R. Singh, M. Vatsa, Disguise detection and face recognition in
visible and thermal spectrums, in Proceedings of IEEE International Conference on Biometrics
(ICB), (2013), pp. 1–6
31. N. Erdogmus, S. Marcel, Spoofing face recognition with 3D masks. IEEE Trans. Inf. Forensics
Secur. 9(7), 1084–1097 (2014)
32. J. Galbally, S. Marcel, Face anti-spoofing based on general image quality assessment, in
International Conference on Pattern Recognition (ICPR) (2014), pp. 1173–1178
33. J. Galbally, S. Marcel, J. Fierrez, Biometric antispoofing methods: a survey in face recognition.
IEEE Access 2, 1530–1552 (2014)
34. J. Galbally, S. Marcel, J. Fierrez, Image quality assessment for fake biometric detection:
application to iris, fingerprint and face recognition. IEEE Trans. Image Process. 23, 710–724
(2014)
35. D. Gragnaniello, G. Poggi, C. Sansone, L. Verdoliva, An investigation of local descriptors for
biometric spoofing detection. IEEE Trans. Inf. Forensics Secur. 10(4), 849–863 (2015)
36. K. Kollreider, H. Fronthaler, J. Bigun, Non-intrusive liveness detection by face images. Image
Vis. Comput. 27(3), 233–244 (2009)
37. J. Komulainen, A. Hadid, M. Pietikainen, A. Anjos, S. Marcel, Complementary countermea-
sures for detecting scenic face spoofing attacks, in Proceedings of International Conference on
Biometrics (2013), pp. 1–7
38. N. Kose, J.L. Dugelay, Mask spoofing in face recognition and countermeasures. Image Vis.
Comput. 32(10), 779–789 (2014)
39. Y. Li, X. Tan, An anti-photo spoof method in face recognition based on the analysis of Fourier
spectra with sparse logistic regression, in Chinese Conference on Pattern Recognition (CCPR)
(2009)
40. J. Li, Y. Wang, T. Tan, A.K. Jain, Live face detection based on the analysis of Fourier spectra,
in Biometric Technology for Human Identification (2004), pp. 296–303
41. R. Lukac, K.N. Plataniotis, Color Image Processing: Methods and Applications (CRC Press,
Boca Raton, 2006)
42. J. Maatta, A. Hadid, M. Pietikainen, Face spoofing detection from single images using texture
and local shape analysis. IET Biom. 1(1), 3–10 (2012)
43. D. Menotti et al., Deep representations for iris, face, and fingerprint spoofing detection. IEEE
Trans. Inf. Forensics Secur. 10(4), 864–879 (2015)
44. T. Ojala, M. Pietikäinen, T. Mäenpää, Multiresolution gray-scale and rotation invariant texture
classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24(7),
971–987 (2002)
45. G. Pan, L. Sun, Z. Wu, S. Lao, Eyeblink-based anti-spoofing in face recognition from a generic
webcamera, in International Conference on Computer Vision (ICCV) (2007), pp. 1–8
46. G. Pan, L. Sun, Z. Wu, Y. Wang, Monocular camera-based face liveness detection by combining
eye-blink and scene context. J. Telecommun. Syst. 47, 215–225 (2009)
47. N.K. Ratha, J.H. Connell, M.R. Bolle, An analysis of minutiae matching strength, in Inter-
national Conference on Audio- and Video-based Biometric Person Authentication (AVBPA)
(2001), pp. 223–228
48. W.R. Schwartz, A. Rocha, H. Pedrini, Face spoofing detection through partial least squares and
low-level descriptors, in International Joint Conference on Biometrics (IJCB) (2011), pp. 1–8
49. X. Tan, Y. Li, J. Liu, L. Jiang, Face liveness detection from a single image with sparse low
rank bilinear discriminative model, in 11th European Conference on Computer Vision (ECCV),
vol. 6316 (2010), pp. 504–517
50. L. Thalheim, J. Krissler, P.M. Ziegler, Biometric access protection devices and their programs
put to the test. C’T (2002)
51. R. Tronci, D. Muntoni, G. Fadda, M. Pili, N. Sirena, G. Murgia, F. Roli, Fusion of multiple
clues for photo-attack detection in face recognition systems, in International Joint Conference
on Biometrics (IJCB) (2011), pp. 1–6
52. T. Wang, J. Yang, Z. Lei, S. Liao, S.Z. Li, Face liveness detection using 3d structure recovered
from a single camera, in International Conference on Biometrics (ICB) (2013), pp. 1–6
53. D. Wen, H. Han, A.K. Jain, Face spoof detection with image distortion analysis. IEEE Trans.
Inf. Forensics Secur. 10(4), 746–761 (2015)
54. J. Yan, Z. Zhang, Z. Lei, D. Yi, S.Z. Li, Face liveness detection by exploring multiple scenic
clues, in 12th International Conference on Control Automation Robotics and Vision (ICARCV)
(2012), pp. 188–193
55. J. Yang, Z. Lei, S. Liao, S.Z. Li, Face liveness detection with component dependent descriptor,
in Proceedings of International Conference on Biometrics (ICB), (2013) pp. 1–6
56. Z. Zhang, D. Yi, Z. Lei, S.Z. Li, Face liveness detection by learning multispectral reflectance
distributions, in International Conference on Face and Gesture (2011), pp. 436–441
57. Z. Zhang, J. Yan, S. Liu, Z. Lei, D. Yi, S.Z. Li, A face anti-spoofing database with diverse
attacks, in International Conference on Biometrics (ICB) (2012), pp. 26–31