3-D Face Morphing Attacks: Generation, Vulnerability and Detection
Abstract—Face Recognition Systems (FRS) have been found to be vulnerable to morphing attacks, where the morphed face image is generated by blending the face images of contributory data subjects. This work presents a novel direction for generating face-morphing attacks in 3D. To this extent, we introduce a novel approach based on blending the 3D face point clouds corresponding to the contributory data subjects. The proposed method generates the 3D face morph by projecting the input 3D face point clouds onto depth maps and 2D color images, followed by image blending and warping operations performed independently on the color images and depth maps. The 2D morphed color map and depth map are then back-projected to a point cloud using the canonical (fixed) view. Given that the generated 3D face morphing models will contain holes owing to the single canonical view, we propose a new hole-filling algorithm that results in a high-quality 3D face morphing model. Extensive experiments were conducted on a newly generated 3D face dataset comprising 675 3D scans corresponding to 41 unique data subjects and on a publicly available database (FaceScape) with 100 data subjects. Experiments were performed to benchmark the vulnerability of the proposed 3D morph-generation scheme against automatic 2D and 3D FRS and through human observer analysis. We also present a quantitative assessment of the quality of the generated 3D face-morphing models using eight different quality metrics. Finally, we propose three different 3D face Morphing Attack Detection (3D-MAD) algorithms to benchmark the performance of 3D face morphing attack detection techniques.

Index Terms—Biometrics, face recognition, vulnerability, 3D morphing, point clouds, image morphing, morphing attack detection.

Manuscript received 30 May 2022; revised 26 December 2022 and 24 April 2023; accepted 7 October 2023. Date of publication 16 October 2023; date of current version 8 March 2024. This article was recommended for publication by Associate Editor A. Ross upon evaluation of the reviewers' comments. (The authors contributed equally to this work.) (Corresponding author: Jag Mohan Singh.) The authors are with the Norwegian Biometrics Laboratory, Department of Information Security and Communication Technology, Norwegian University of Science and Technology, 2816 Gjøvik, Norway (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/TBIOM.2023.3324684. Published in IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, VOL. 6, NO. 1, JANUARY 2024. © 2023 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/

I. INTRODUCTION

Face Recognition Systems (FRS) are being widely deployed in numerous applications related to security settings, such as automated border control (ABC) gates, and commercial settings, such as e-commerce and e-banking scenarios. The rapid evolution of FRS can be attributed to advances in deep learning [1], [2], which have improved accuracy in real-world and uncontrolled scenarios. These factors have accelerated the use of 2D face images in electronic machine-readable travel documents (eMRTD), which are exclusively used to verify the owner of a passport at various ID services, including border control (both automatic and human). Because most countries still use a printed passport image for the passport application process, face morphing attacks have demonstrated the vulnerability of both humans and automatic FRS [3], [4]. Face morphing is the process of blending multiple face images, based on either facial landmarks [5] or Generative Adversarial Networks [6], to generate a morphed face image. The extensive analysis reported in the literature [7], [8], [9], [10] demonstrates the vulnerability of both deep-learning-based and commercial off-the-shelf FRS to 2D face morphing images.

There exist several techniques to detect 2D face morphing attacks, which can be classified as [11]: (a) Single-image-based Morphing Attack Detection (S-MAD), where the Morphing Attack Detection (MAD) technique uses a single face image to arrive at the final decision, and (b) Differential Morphing Attack Detection (D-MAD), where a pair of 2D face images is used to arrive at the final decision. S-MAD and D-MAD have been extensively studied, resulting in several MAD techniques; the reader is referred to the recent survey by Venkatesh et al. [11] for a comprehensive overview of existing 2D MAD techniques. Despite the rapid progress in 2D MAD, a recent evaluation report from the NIST FRVT MORPH [12] indicated degraded detection of 2D face morphing attacks. Thus, 2D morphing attacks, particularly in the S-MAD scenario, still present significant challenges for reliable detection. These factors motivated us to explore 3D face morphing, where depth information may provide a reliable cue that makes morphing detection easier.

Over the past several decades, 3D face recognition has been widely studied, resulting in several real-life security applications: 3D face photo-based national ID cards [13], [14], [15], 3D face photo-based driving license cards [15], and 3D face-based automatic border control (ABC) gates [16]. A real case reported in [17] demonstrated the use of a 2D rendered face image from a 3D face model, instead of a real 2D face photo, to obtain an ID card while bypassing the human observers in the ID card issuing protocol. Although most real-life 3D face applications are based on comparing 3D face models with 2D face images for verification, this is mainly because e-passports use 2D face images.

However, 3D-to-3D comparison will be realistic, especially in the border control scenario, as both the ICAO 9303 [18] and ISO/IEC 19794-5 [19] standards are well defined to accommodate a 3D face model in the third-generation e-passport. 3D face ID cards are already a reality, being deployed in countries such as the UAE [13], which can facilitate both human observers and automatic
FRS to achieve accurate, secure, and reliable ID verification. Further, evolving technology has made 3D face imaging possible on handheld devices and smartphones (e.g., Apple Face ID [20] uses 3D face recognition), which can further enable remote ID verification based on 3D face verification. These factors motivated us to investigate the feasibility of generating 3D face morphs and to study their vulnerability and detection. An early attempt in [21] employed the 3DMM [22] technique to generate a 3D face morphing model. However, the reported results indicate low vulnerability of conventional FRS, indicating a limitation of the 3DMM.

This work presents a novel method for generating 3D face morphs using 3D point clouds. Given the 3D scans of the accomplice and the malicious actor, the proposed method projects the 3D point clouds to depth maps and 2D color images, which are independently blended, warped, and back-projected to 3D to obtain the 3D face morph. The motivation for projecting to 2D for morphing is to effectively sidestep non-rigid registration, especially with the high volume of points (≈85K) that would need to be registered between two unique data subjects. Further, using canonical-view generation to project from 3D to 2D and back-project to 3D assures high-quality depth even for the morphed face images, thus indicating the high vulnerability of FRS. Therefore, this is the first framework to address the generation of a 3D face morph from the 3D scans of two unique subjects that results in vulnerability of FRS. More specifically, we aim to answer the following research questions, which are systematically answered in this study:
• RQ#1: Does the proposed 3D face morphing generation technique yield a high-quality 3D morphed model?
• RQ#2: Does the generated 3D face morphing model indicate vulnerability for both automatic 3D FRS and human observers?
• RQ#3: Are the generated 3D face morphing models more vulnerable than 2D face morphing images for both automatic 3D FRS and human observers?
• RQ#4: Can the 3D point cloud information be used to reliably detect 3D face morphing attacks?

We systematically address these research questions through the following contributions:
• We present a novel 3D face morphing generation method based on point clouds obtained by fusing depth maps and 2D color images to generate the 3D face morphing model.
• Extensive analysis of the vulnerability of the generated 3D face morphs is performed by quantifying the attack success rate against 3D FRS. In addition, a vulnerability analysis is performed using 2D FRS (deep learning and COTS).
• A human observer analysis for detecting 3D and 2D face morphs is presented to study the significance of depth information in detecting the morphing attack.
• A quantitative analysis of the generated 3D morphed face models is presented using eight different quality features representing color and geometry.
• We present three different 3D MAD techniques based on deep features from point clouds to benchmark 3D face MAD.
• A new 3D face dataset with bona fide and morphed models is developed, corresponding to 41 unique data subjects and resulting in 675 3D scans. We collected a new 3D face dataset because we were interested in capturing high-resolution (suitable for ID enrolment) inner-face data [23]. Our 3D face dataset consists of raw 3D scans (between 31,289 and 201,065 3D vertices) and processed 3D scans (between 35,950 and 121,088 3D vertices), which is much higher than existing 3D face datasets.¹
• The proposed method is benchmarked on both a publicly available dataset (FaceScape) and the newly constructed 3D face dataset.

¹The reader is referred to Table I of 3D face datasets (inner face data only) in the survey by Egger et al. [23].

In the rest of the paper, we introduce the proposed method in Section II and the experiments and results in Section III. This is followed by a discussion of different aspects of the proposed method in Section IV, limitations and potential future work in Section V, and finally conclusions in Section VI.

II. PROPOSED METHOD

Figure 1 shows a block diagram of the proposed 3D face-morphing generation framework based on 3D point clouds. We are motivated to employ 3D point clouds over traditional 3D triangular meshes for two main reasons. The first is that the connectivity information in a 3D triangular mesh leads to overhead in the storage, processing, management, and manipulation of the meshes; thus, 3D triangular meshes significantly increase computing and memory requirements, making them less suitable for low-compute devices. The second is that commodity scanning devices (for example, the Artec sensor) can reproduce detailed colored point clouds that capture both appearance and geometry, allowing us to generate high-quality 3D face morphing attacks.

However, 3D face morphing generation using point clouds introduces numerous challenges: (a) establishing a dense 3D correspondence between the two bona fide 3D point clouds to be morphed is difficult, because 3D face point clouds from two different subjects are affected by various factors, such as differences in input point density, reliable detection of 3D facial key points, and estimation of affine/perspective warping; (b) the locally affine deformation between two different 3D point clouds to be morphed is difficult to estimate [24], [25], [26]; and (c) the misalignment of the dense 3D correspondence between the two 3D point clouds to be morphed increases with non-rigid deformation [27].

A crucial part of 3D morphing using point clouds is reliable alignment before performing the morphing operation. Given the 3D face point clouds of the source and target faces, point cloud registration can be defined as aligning the source point cloud to the target point cloud. Point cloud registration methods can be grouped into three broad categories [28], namely 1) Deformation-Field, 2) Extrinsic
SINGH AND RAMACHANDRA: 3-D FACE MORPHING ATTACKS: GENERATION, VULNERABILITY AND DETECTION 105
Methods, and 3) Learning-based methods. Deformation-field-based techniques compute the deformation between two point clouds, which can be achieved either by assuming pointwise positions [29] or pointwise affine transformations [30]. Pointwise-position methods are simplistic because they do not model deformations as well as pointwise affine transformations, which model local rotations. However, because the local transformations must be stored and computed at a per-point level, they incur high computational and memory costs. This limitation was overcome by deformation-field methods that use a deformation-graph embedding over the initial point set, consisting of fewer nodes than the underlying point set [25], [31]. Extrinsic methods are based on optimizing an energy function to compute the point-set correspondence, which usually includes an alignment term and a regularization term [25]. However, optimization-based methods compute a deterministic model of the transformation. Probabilistic modeling of the transformation was introduced by Myronenko and Song [32] in their algorithm, Coherent Point Drift (CPD), which assumes that the source points are the centroids of equally weighted Gaussians with isotropic covariance in a Gaussian Mixture Model (GMM). CPD includes alignment and regularization terms in the transformation computation and performs non-rigid registration, but has high memory and computation costs. The main limitation of optimization-based methods is that they produce good results only when the input surfaces are close. Furthermore, they require good initialization of the correspondences; lacking this, they converge to local minima. This was overcome by learning-based, data-driven methods of two types: (1) supervised and (2) unsupervised. Supervised methods require ground-truth data for training [33] but can work with varying point cloud density and underlying geometry. Unsupervised methods do not require ground-truth data and can be trained using a CNN-based deformation module followed by an alignment module that computes the deformation [34].

However, using existing point cloud registration for the specific application of 3D face morphing point cloud generation poses the following challenges. Same-individual registration: point cloud registration has mainly focused on the non-rigid registration of two point clouds from the same individual [28], primarily because high-quality registration aims to produce a globally consistent 3D mesh; the registration methods have thus not been tested when point clouds from two different individuals are registered. Vertex-accurate correspondence: 3D face morphing requires perfect vertex correspondence between the source and target point clouds, which is challenging and has not been extensively evaluated. Low vertex-count point clouds: point cloud registration, especially with learning-based methods, uses network architectures based on point clouds with a low number of vertices (~1024); registering point clouds with many vertices (~75K) has not been extensively evaluated, so these methods suit only low-resolution face images. To address these challenges effectively, the proposed method consists of four stages: (1) point cloud reconstruction and cleanup, (2) 3D morph generation,
(3) hole-filling algorithm, and (4) final cleanup. These steps are discussed in detail in the following subsections.

A. Point Cloud Reconstruction & Cleanup

We captured a sequence of raw 3D scans using the Artec Eva sensor [35] from the two data subjects to be morphed (S1 and S2). In this work, we consider the case of morphing two data subjects at a time because of its real-life applications, as demonstrated in several 2D face morphing studies [3], [11]. We processed both S1 and S2 with a series of preprocessing operations, such as noise filtering, texturing, and fusion of the input depth maps, to generate the corresponding point clouds P1 and P2. These operations were performed using Artec Eva Studio SDK filters together with Meshlab filters [36]. The cleaned and processed point clouds are shown qualitatively in Figure 1.

B. 3D Morph Generation Pipeline

In the next step, we process the point clouds P1 and P2 to generate the 3D face morphing point cloud through the following series of operations:

1) Point-Cloud Centering & Scaling: First, we compute the minimum enclosing spheres using the algorithm of Gärtner [37] to obtain two bounding spheres with centers and radii (C1, r1) and (C2, r2) corresponding to the point clouds P1 and P2, respectively. Note that P1 = (v1^1, ..., vn1^1), where vi^1 is the i-th 3D vertex and n1 is the number of points in P1, and P2 = (v1^2, ..., vn2^2), where vi^2 is the i-th 3D vertex and n2 is the number of points in P2. We then subtract the sphere center C1 from each 3D vertex of P1 and repeat the same operation on P2 with C2. Finally, the centered point clouds are scaled to a common radius, normalizing the 3D point clouds to a common scale. The resulting centered and scaled point clouds corresponding to P1 and P2 are denoted PC1 and PC2, respectively. Figure 1 shows the qualitative results of this operation, i.e., the centered and scaled 3D point clouds.

2) Canonical View Generation: This step performs fine alignment by projecting the 3D face point clouds PC1 and PC2 onto the canonical (fixed) view, keeping the view and projection matrices identical for PC1 and PC2. We then project PC1 and PC2 to generate 2D color images and depth maps using the canonical-view parameters. The generated 2D color images and depth maps are denoted by (I1, D1) and (I2, D2), corresponding to the point clouds PC1 and PC2, respectively. In particular, we choose the canonical view for fine alignment because traditional alignment schemes, such as Iterative Closest Point (ICP) [27], do not provide good alignment when used on such point clouds [25]. This can be attributed to the limitation of ICP when a locally affine/non-rigid deformation exists between the point clouds [38]. The qualitative results of the canonical view transformation are shown in Figure 1, with the aligned 2D color images and depth maps magnified in the inset image.

3) 3D Morph Generation: Given the 2D face color images (I1, I2) and depth maps (D1, D2) corresponding to PC1 and PC2, we perform the morphing operation as explained in Algorithm 1. The primary idea is to perform the morphing in 2D and back-project to 3D. The primary motivation for using a 2D morph generation method is to address the challenge of finding correspondences between PC1 and PC2. The underlying idea is to perform the steps of morphing (facial landmark detection, Delaunay triangulation, and warping) on the 2D color images and re-use the same facial landmark locations, triangulation, and warping on the depth maps. In this work, we use a blending (morphing) factor α = 0.5, as it is well demonstrated to be highly vulnerable in earlier works on 2D face morphing [6]. The morphing is carried out as follows:

IM = α × I1(K̂1) + (1 − α) × I2(K̂2)
K̂1 = wM1(K1)
K̂2 = wM2(K2)
KM = α × K1 + (1 − α) × K2    (1)

where α is the blending factor, K1 denotes the 2D facial landmark locations corresponding to I1, K2 those corresponding to I2, KM is generated by blending K1 and K2, wM1 denotes the warping function from K1 to KM, wM2 the warping function from K2 to KM, I1(K̂1) and I2(K̂2) denote the images warped to the blended landmarks (I1M and I2M in Algorithm 1), and IM is the morphed 2D color image. Similarly, the same operations are carried out on the depth maps:

DM = α × D1(K̂1) + (1 − α) × D2(K̂2)    (2)

where DM is the morphed depth map.

Algorithm 1 3D Face Morphing Algorithm
Input: (I1, I2, D1, D2, CV)
Output: (PM)
1: Detect facial keypoints K1 on I1 and K2 on I2 using Dlib [42], and generate the keypoints of the morph using Equation (1).
2: Perform Delaunay triangulation on KM, which is obtained by blending K1 and K2 using Equation (1).
3: Estimate the affine warping between corresponding triangles of K1 and KM, denoted wM1, and of K2 and KM, denoted wM2.
4: Apply the affine warping wM1 on I1 to obtain I1M, and on D1 to obtain D1M.
5: Apply the affine warping wM2 on I2 to obtain I2M, and on D2 to obtain D2M.
6: Obtain the morphed color image IM from the warped keypoints of the color images I1 and I2 using Equation (1), and the morphed depth map DM using Equation (2).
7: Obtain the morphed point cloud by back-projecting IM and DM: the colored 3D point cloud PM has 3D coordinates (xi, yi, zi) = (xi, yi, DM(xi, yi)) and color Color(xi, yi, zi) = IM(xi, yi) for all i ∈ {1, ..., n3}, where n3 = min(n1, n2).
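As an illustrative sketch of Equations (1)–(2) and the back-projection in step 7 of Algorithm 1: once the pre-warped color images and depth maps (I1M, I2M, D1M, D2M from steps 4–5) are available, the morph is a pixel-wise blend, and the point cloud is recovered from the pixel grid plus the morphed depth. The array names and the toy inputs below are our own; this is a minimal numpy sketch, not the authors' implementation:

```python
import numpy as np

def blend_maps(i1m, i2m, d1m, d2m, alpha=0.5):
    """Pixel-wise blend of pre-warped color images and depth maps,
    per Equations (1) and (2)."""
    im = alpha * i1m + (1.0 - alpha) * i2m   # morphed color image I_M
    dm = alpha * d1m + (1.0 - alpha) * d2m   # morphed depth map D_M
    return im, dm

def back_project(im, dm):
    """Back-project the morphed maps to a colored point cloud:
    each pixel (x, y) yields the vertex (x, y, D_M(x, y)) with
    color I_M(x, y), per step 7 of Algorithm 1."""
    h, w = dm.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), dm.ravel()], axis=1)
    cols = im.reshape(-1, im.shape[-1])
    return pts, cols

# Toy 2x2 example with RGB color and constant depth per subject
i1m = np.zeros((2, 2, 3)); i2m = np.ones((2, 2, 3))
d1m = np.full((2, 2), 2.0); d2m = np.full((2, 2), 4.0)
im, dm = blend_maps(i1m, i2m, d1m, d2m, alpha=0.5)
pts, cols = back_project(im, dm)   # 4 vertices, all at depth 3.0
```

With α = 0.5 the blend is a simple average, which is why, as the text notes, the landmark detection, triangulation, and warping can be computed once on the color images and reused verbatim on the depth maps.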
Fig. 2. Qualitative results of the hole-filling algorithms: (a) input point cloud with holes; (b) point cloud with normals, which contain noise; (c) screened Poisson reconstruction [39], with artifacts shown in the inset; (d) point cloud reconstructed with APSS [40]; (e) point cloud reconstructed with RIMLS [41]; (f) point cloud hole-filled using the proposed method.
In the next step, we back-project IM and DM to obtain the 3D face morphing point cloud PM = (v1^M, ..., vn3^M), where n3 = min(n1, n2) is the number of vertices. Each 3D vertex is obtained as (xi, yi, zi) = (xi, yi, DM(xi, yi)) for i = 1, ..., n3, and the qualitative results are shown in Figure 1. However, the generated 3D face morph will exhibit multiple holes due to the single canonical view. These holes are visible from other views. Therefore, we present a novel hole-filling algorithm to further improve the perceptual visual quality of the 3D face morph.

C. Hole Filling Algorithm

In this step, we propose a new hole-filling algorithm tailored to this specific 3D face morphing generation problem. Because the holes are visible from different views, filling the holes in these views is necessary to improve perceptual visual quality. Note that holes arise when the subject is viewed from a direction different from the canonical camera, especially in high-curvature regions such as the nose, as such areas are not completely visible from a single canonical view. Therefore, we transform the 3D face-morphing point cloud PM multiple times independently to generate PM^j, where j = 1, ..., n4, n4 is the number of transformations, and each transformation is a 3D translation [45]. In this work, we empirically set the number of 3D translations to 7 to balance computational cost against the visual quality achieved after hole filling; using more 3D translations significantly increases the computational cost without improving the visual quality.

We first tried the conventional approach of hole filling using 3D triangulation of point clouds, as proposed in [39], [40], [41]. Figure 2 shows the qualitative results of the three SOTA triangulation algorithms, which are unsatisfactory: the 3D orientation (3D normal) estimation introduces artifacts into the 3D triangulated mesh. Filling holes directly in the 3D point cloud is challenging because the underlying surface (manifold) is not known in advance, and errors in 3D orientation estimation make it difficult to employ conventional 3D hole-filling approaches.

This motivated us to devise a new approach for achieving effective hole filling. To this extent, we project each point cloud PM^j onto a 2D face morphing color image (Cj) and its corresponding depth map (Dj). We fill the holes in Cj and Dj using Steps 2 to 9 of Algorithm 2.

Algorithm 2 Hole Filling Point Cloud
Input: (n4 views)
Output: (Chf, Dhf, Phf)
1: Generate n4 pairs of color maps and depth maps {(C1, D1), (C2, D2), ..., (Cj, Dj), ..., (Cn4, Dn4)}, translated from the canonical view.
2: for j ← 1 to n4 do
3:   Perform image in-painting [43] on Cj and Dj.
4:   Perform image registration of Cj with the canonical-view color map CCV using the following steps:
5:     Compute features using the Oriented FAST and Rotated BRIEF (ORB) descriptor [44].
6:     Match features by brute force using the Hamming distance.
7:     Compute the homography using the inlier features.
8:   Perspectively warp the color and depth maps using the computed homography.
9: end for
10: Average all the registered color maps (Chf) and depth maps (Dhf).
11: Back-project the averaged color map and depth map from 2D to 3D using the canonical view parameters to generate the hole-filled point cloud (Phf).

Finally, we obtain the hole-filled 3D face morphing point cloud (Phf), as indicated in Steps 10 and 11 of Algorithm 2. Figure 2(f) shows the qualitative results of the proposed hole-filling method, which indicate superior visual quality compared with the existing methods.

D. Final Cleanup Algorithm

The final cleanup uses a clipping region outside a portion of the bounding sphere. The final result of the proposed 3D face morphing point cloud is shown in Figure 3 for an example pair of data subjects.² The main advantages of the proposed method are as follows.

²A supporting video is available at https://folk.ntnu.no/jagms/SupportingVideo.mp4
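Step 10 of Algorithm 2 reduces to a mask-aware average over the registered views. A minimal numpy sketch, under our own assumption that hole pixels are marked with 0 (the paper does not specify a hole marker):

```python
import numpy as np

def average_registered_maps(maps, hole_value=0.0):
    """Step 10 of Algorithm 2: average the n4 registered maps,
    counting only valid (non-hole) pixels at each location.
    Pixels that are holes in every view remain 0."""
    stack = np.stack(maps)                       # (n4, H, W)
    valid = (stack != hole_value).astype(float)  # 1 where a view saw the pixel
    counts = valid.sum(axis=0)
    summed = (stack * valid).sum(axis=0)
    # Avoid division by zero where no view covers the pixel
    return np.divide(summed, counts,
                     out=np.zeros_like(summed), where=counts > 0)

# Toy example: two registered 1x2 depth maps;
# pixel 1 is a hole (0.0) in the first view only
d1 = np.array([[2.0, 0.0]])
d2 = np.array([[4.0, 6.0]])
dhf = average_registered_maps([d1, d2])   # -> [[3.0, 6.0]]
```

Step 11 then back-projects the averaged maps (Chf, Dhf) to the hole-filled point cloud Phf using the canonical-view parameters, exactly as the morphed maps were back-projected in step 7 of Algorithm 1.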
Fig. 3. Illustration of 2D color image and depth maps for bona fide and morphs generated using the proposed method.
TABLE I: VULNERABILITY OF SOTA ON THE COMPARISON DATASET

• The proposed method performs the alignment based on 2D facial key points, which preserves the identity in the generated 3D face morphing attack sample.
• The proposed method incurs low computation and memory costs compared with existing 3D-3D techniques by avoiding explicit 3D registration.
• The proposed method results in a high vulnerability of FRS, as the identity features of the contributing data subjects used to generate the morphing attack are preserved. Therefore, the proposed method can produce high-quality 3D face-morphing attacks, resulting in vulnerability of both 2D and 3D face recognition systems.
• The proposed method can handle wide variations in 3D pose.

E. Qualitative and Quantitative Comparison of Proposed Method With SOTA

To illustrate the effectiveness of the proposed method, we selected a few SOTA methods based on non-rigid point cloud registration (NRPCR) and methods that generate a 3D face model from a 2D face image. Our evaluation of SOTA NRPCR methods includes CPD by Myronenko and Song [32] and Corrnet3D by Zeng et al. [34]. CPD is optimization-based and was the earlier SOTA method for NRPCR, whereas Corrnet3D is a more recent unsupervised deep-learning-based method. Furthermore, to evaluate methods for generating a 3D face model from a 2D face image, we selected 3DMM by Blanz and Vetter [22] and a more recent deep-learning-based method, FLAME, by Li et al. [46]. 3DMM introduced the concept of a morphable model, where parameters such as shape and texture can be controlled during 3D face synthesis; furthermore, 3DMM provided earlier SOTA results for 3D face generation from a 2D face image. FLAME enhances the quality of the generated 3D face model by using more controllable parameters, such as pose, expression, shape, and texture, during the 3D face synthesis process.

1) Qualitative Comparison and Analysis: The results of the qualitative comparison with SOTA are shown in Figure 4, and the quantitative vulnerability computed using MMPMR [7] and FMMPMR [47] (refer to Section III-C for the definitions of these metrics) is reported in Table I. It can be noticed from Figure 4 that the SOTA methods do not retain the identity features of the 3D face morphing model to a large extent. CPD does retain the identity features of the 3D face morphing model but fails to align the two input point clouds, which results in doubled features such as eyebrows. Corrnet3D produces lower-quality results, which can be attributed to the fact that its authors have not focused exclusively on face registration.
Fig. 4. Illustration of the SOTA comparison showing bona fide samples and morphs generated using (a) CPD [32], (b) Corrnet3D [34], (c) 3DMM [22], (d) FLAME [46], and (e) the proposed method. Note that both 3DMM and FLAME take a single image as input; in the current evaluation, we pass a 2D rendering generated using the proposed method. Note that the proposed method shows high-quality rendering and the identity features of the 2D face morphing image.
TABLE II
Further, 3DMM and FLAME generate a 3D face model S TATISTICS OF N EWLY C OLLECTED 3D M ORPHING DATASET (3DMD)
from a 2D face image. Thus, we passed the rendering (2D
face image) of the 3D face morphing model as an input.
However, these methods fail to preserve the identity features
during the 3D face model generation, as seen from Figure 4.
The generated 3D model has a low resemblance to the identity
features of the face morphing image.
2) Quantitative Comparison and Analysis: The results of
the quantitative comparisons are shown in Figure 5, where we have evaluated two 3D point feature extraction methods, namely LED3D [49] and PointNet++ [48]. It can be seen that the 3D comparison results in lower values for the SOTA methods than for the proposed method. This can be attributed to the low resolution of the identity-specific depth generated by the SOTA methods, which is also shown in Figure 6.

A sample implementation of the proposed method is available online.3

III. EXPERIMENTS AND RESULTS

In this section, we discuss the extensive experiments carried out on the newly acquired 3D face dataset. We present the quantitative results of the various experiments, including a vulnerability study on automatic FRS, a human observer study, quantitative quality estimation based on the color and geometry of the generated 3D face morphing models, and automatic detection of 3D MAD attacks.

A. 3D Face Data Collection

In this study, we constructed a new 3D face dataset using the Artec Eva 3D scanner [35]. Data collection was conducted in an indoor lighting environment. The subjects were asked to sit on a chair with their eyes closed to avoid strong reflection of light from the 3D scanner. The 3D scanner was moved vertically to capture the 3D sequence.

Artec Studio Professional 14 was used for 3D data collection and processing. We collected 3D facial data from 41 subjects, including 28 males and 13 females. We captured nine to ten samples for each data subject in three different sessions over three days. The statistics of the whole 3D face dataset are summarized in Table II. We name our newly collected dataset the 3D Morphing Dataset (3DMD).

We could have used existing 3D face datasets such as FRGC [50] and BU-3DFE [51]. However, the FRGC dataset provides only a single depth map and color image, so a high-quality point cloud cannot be generated. Furthermore, the dataset has a few misaligned color images and depth maps [52], which results in low-quality 3D morphing generation. The BU-3DFE [51] dataset provides 3D models, but these are perfectly registered and the capture conditions are identical for

3 Proposed Method Implementation: https://github.com/jagmohaniiit/3DFaceMorph
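For intuition, the core projection step of the generation pipeline (point clouds are projected onto a canonical-view depth map and color image, blended, and back-projected, leaving holes where no point landed) can be sketched in a few lines of NumPy. The function names, the orthographic camera model, and the fixed resolution here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def project_to_maps(points, colors, res=256):
    """Orthographically project a colored point cloud onto a canonical
    (fixed) frontal view, producing a depth map and a color image.
    points: (N, 3) float array; colors: (N, 3) uint8 array."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    px = ((xy - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int)
    depth = np.full((res, res), -np.inf)          # -inf marks empty pixels
    color = np.zeros((res, res, 3), dtype=np.uint8)
    for (u, v), z, c in zip(px, points[:, 2], colors):
        if z > depth[v, u]:                       # keep the frontmost point
            depth[v, u] = z
            color[v, u] = c
    return depth, color, (lo, hi)

def back_project(depth, color, bounds, res=256):
    """Lift the (possibly blended) depth and color maps back to a point
    cloud; pixels never hit by a point are skipped -- these are the holes
    that a hole-filling step must later close."""
    lo, hi = bounds
    v, u = np.nonzero(np.isfinite(depth))
    xy = lo + np.stack([u, v], axis=1) / (res - 1) * (hi - lo)
    points = np.column_stack([xy, depth[v, u]])
    return points, color[v, u]
```

Blending two subjects then amounts to averaging the two depth maps and the two (warped) color images before calling `back_project`.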
110 IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, VOL. 6, NO. 1, JANUARY 2024
Fig. 5. Scatter plots of comparison scores for bona fide samples and morphs generated using the proposed method with (a) LED3D [49] and (b) PointNet++ [48], where the SOTA algorithms are 3DMM [22], FLAME [46], and CPD [32].
Fig. 6. Illustration showing depth maps using SOTA and proposed method (a) 3DMM [22], (b) CPD [32], (c) FLAME [46] and (d) Proposed Method.
directions to make their decisions effectively. Furthermore, opportunities to zoom in and out of the 3D face model were also provided. We chose to present the 2D and 3D face images simultaneously for human evaluation to check whether the 3D information might help detect morphing attacks. Owing to the time factor, we used 19 bona fide and 19 morph samples independently from 2D and 3D for the human observer study. Thus, each human observer spent approximately 20 min on average completing the study. Detailed step-wise instructions on using the Web portal were available to every participant beforehand.

The human observer study used 36 observers with and without face morphing experience. The quantitative results of the human observer study are shown in Figure 8. We summarize the human observers' results from the survey as follows.
• The average detection accuracy of human observers on bona fide samples is 55.83% for 2D faces and 42.5% for 3D faces. The average detection accuracy of human observers on morphs is 58.33% in 2D and 51.85% in 3D. Thus, the detection accuracy is similar for bona fides and morphs in 2D. However, the detection accuracy in 3D is lower for bona fide than for morph.
• The average detection accuracy is similar for observers without morphing experience and those with basic morphing experience. Human observers with advanced morphing experience had the highest average detection accuracy. That observers without morphing experience perform similarly to observers with basic morphing experience can be attributed to the innate human capacity to distinguish between bona fide and morphed faces.
• The survey further validates that the generated 3D morphs are challenging for human observers to detect. The average detection accuracy of human observers does not exceed 63.15%, which shows that the 2D and 3D morphs developed in this study are of high quality and difficult to detect.

The average detection accuracy for a 2D face is higher than that for a 3D face, which can be attributed to the following reasons:

Fig. 8. Illustration of the average accuracy in the human observer study; note that 2D accuracy is always higher than 3D accuracy.

C. Vulnerability Study

In this work, we benchmarked the performance of automatic FRS on both 2D and 3D face models. The 2D face vulnerability was computed using the color image, and the 3D face vulnerability was calculated based on the depth map/point cloud. We used two different metrics for the vulnerability assessment: the Mated Morphed Presentation Match Rate (MMPMR) [7] and the Fully Mated Morphed Presentation Match Rate (FMMPMR) [47]. MMPMR can be defined as the percentage of morph samples that can be verified against all contributing data subjects [47]. However, MMPMR does not consider the number of attempts made during the score computation. This is rectified in FMMPMR [47], where the morphed image sample must be verified across all attempts. Higher values of MMPMR and FMMPMR indicate higher vulnerability of the FRS. Vulnerability analysis was performed by enrolling the morphed image (2D/3D) and then obtaining the comparison score by probing the facial images (2D/3D) of both contributory data subjects. To compute the vulnerability to 2D face morphing images, we used two different FRS: ArcFace [2] and a commercial off-the-shelf (COTS) FRS.5 The 3D face vulnerability analysis uses deep-learning-based FRS, namely Led3D [49] and PointNet++ [48]. The thresholds for all FRS used in this study were set at FAR = 0.1%, following the Frontex guidelines for border control [54].

1) Quantitative Vulnerability Results on the 3D Morphing Dataset: The results are summarized in Table III, and the vulnerability plots are presented in Figure 9. Based on the obtained results, it can be noted that (1) both 2D and 3D FRS are vulnerable to the generated face morphing attacks; (2) among the 2D FRS, COTS indicates the highest vulnerability compared with ArcFace; and (3) among the 3D FRS, PointNet++ [48] indicates the highest vulnerability. Thus, the quantitative results of the vulnerability analysis indicate the effectiveness of the generated 3D face morphing attacks.

2) Quantitative Vulnerability Results on the Facescape Dataset: We used 100 unique data subjects comprising 56 males and 44 females. For each subject, we selected two 3D face scans: one was used to generate the 3D face morphing, and the other was used as the probe image to obtain the comparison score for computing the vulnerability metrics. The proposed method was then used to obtain 3D morphing models, resulting in 2486 morphing models. Figure 11 shows examples of the proposed 3D morphing generation together with bona fide 3D scans from

5 The name of the COTS FRS is not indicated to respect confidentiality.
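The two vulnerability metrics used in this section can be summarized in code. This is a minimal sketch of the definitions above (the strict greater-than comparison and tie handling at the threshold are our assumptions); `threshold_at_far` shows one common way to derive the FAR = 0.1% operating point from non-mated (impostor) comparison scores.

```python
import numpy as np

def threshold_at_far(non_mated_scores, far=0.001):
    """Operating threshold giving the requested false accept rate (FAR)
    on non-mated comparison scores, e.g., far=0.001 for FAR = 0.1%."""
    s = np.sort(np.asarray(non_mated_scores, dtype=float))
    k = int(np.ceil((1.0 - far) * len(s))) - 1
    return s[min(max(k, 0), len(s) - 1)]

def mmpmr(scores, threshold):
    """MMPMR: percentage of morphs whose comparison score exceeds the
    threshold for *all* contributing subjects.
    scores: (n_morphs, n_subjects) best score per morph/subject pair."""
    ok = (np.asarray(scores, dtype=float) > threshold).all(axis=1)
    return 100.0 * ok.mean()

def fmmpmr(scores, threshold):
    """FMMPMR: as MMPMR, but every individual attempt must succeed.
    scores: (n_morphs, n_subjects, n_attempts)."""
    ok = (np.asarray(scores, dtype=float) > threshold).all(axis=(1, 2))
    return 100.0 * ok.mean()
```

Because FMMPMR requires success across all attempts, it can never exceed MMPMR computed on the per-pair best scores.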
Fig. 9. Vulnerability plots using 2D and 3D FRS on the 3D Morphing Dataset (3DMD): (a) 2D face FRS using ArcFace [2], (b) 2D face FRS using COTS, (c) 3D face FRS using Led3D [49], and (d) 3D face FRS using PointNet++ [48].
Fig. 10. Vulnerability plots using 2D and 3D FRS on the Facescape Dataset: (a) 2D face FRS using ArcFace [2], (b) 2D face FRS using COTS, (c) 3D face FRS using Led3D [49], and (d) 3D face FRS using PointNet++ [48].
Fig. 11. Illustration of the Color Images and Depth Maps of Bona fide Samples and Face Morphs generated using the proposed method on Facescape
Dataset [53].
the Facescape Dataset [53]. The quantitative vulnerability results for the Facescape dataset are listed in Table IV, and the vulnerability plots are shown in Figure 10. It can be observed that the proposed 3D face morphing generation samples exhibit high vulnerability with both 2D and 3D FRS. Among the 2D FRS, both COTS and ArcFace indicate similar vulnerabilities with MMPMR = 100%. Among the 3D FRS, PointNet++ [48] shows the highest vulnerability.

Thus, based on the vulnerability analysis reported on the 3DMD and Facescape datasets with 2D and 3D FRS, the proposed 3D face morphing technique indicates consistently high vulnerability. The vulnerability is noted to be higher with the Facescape dataset than with the 3D Morphing Dataset. The variation in the vulnerability performance across different FRS can be attributed to the type of feature extraction and classification techniques employed in individual
SINGH AND RAMACHANDRA: 3-D FACE MORPHING ATTACKS: GENERATION, VULNERABILITY AND DETECTION 113
TABLE III
VULNERABILITY ANALYSIS OF 2D AND 3D FRS ON 3D MORPHING DATASET
TABLE IV
VULNERABILITY ANALYSIS OF 2D AND 3D FRS ON FACESCAPE DATASET
TABLE V
QUANTITATIVE VALUES OF QUALITY FEATURES FOR 3D FACE POINT CLOUDS CORRESPONDING TO 3D BONA FIDE AND MORPH BASED ON COLOR AND GEOMETRY

geometry, indicate the near-complete overlap between 3D bona fide and 3D morph. Thus, the proposed 3D face morphing generation did not degrade the depth quality. Instead, it achieved geometric quality comparable to that of the bona fide 3D models used for the morphing operation. A similar observation can also be noted for the color image quality estimation.
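The near-complete overlap between the bona fide and morph quality-score distributions can be quantified, for example, with a simple histogram overlap coefficient (this particular statistic is our illustration, not necessarily the analysis behind Table V and Figure 12):

```python
import numpy as np

def overlap_coefficient(a, b, bins=32):
    """Histogram overlap between two sets of quality scores:
    1.0 means identical distributions, 0.0 means disjoint support."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    ha, _ = np.histogram(a, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(b, bins=bins, range=(lo, hi))
    ha = ha / ha.sum()          # normalize counts to probabilities
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())
```

A value near 1.0 for a given quality feature supports the claim that morphing did not degrade that aspect of the 3D model.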
Fig. 12. Box plots of the eight different 3D model quality measures for 3D bona fide and 3D morph based on color and geometry.
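As a schematic of how the point-cloud-based 3D MAD techniques evaluated in Table VII operate (a pretrained point-based CNN produces an embedding per scan, and a classifier separates bona fide from morph), the following deliberately simplified stand-in replaces both the CNN and the trained classifier with a nearest-centroid rule over precomputed embeddings; every name and design choice here is illustrative, not the paper's implementation:

```python
import numpy as np

class CentroidMAD:
    """Toy stand-in for a 3D MAD classifier: average the training
    embeddings of each class into a centroid, then label a probe by
    the nearer centroid. In practice the embeddings would come from a
    pretrained point-based CNN (e.g., a PointNet++-style backbone)."""

    def fit(self, bona_fide_emb, morph_emb):
        self.c_bf = np.mean(bona_fide_emb, axis=0)
        self.c_mo = np.mean(morph_emb, axis=0)
        return self

    def predict(self, emb):
        d_bf = np.linalg.norm(emb - self.c_bf, axis=1)
        d_mo = np.linalg.norm(emb - self.c_mo, axis=1)
        return np.where(d_mo < d_bf, 1, 0)   # 1 = morph attack detected
```

Any stronger classifier (SVM, shallow MLP) can be swapped in without changing the embedding-then-classify structure.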
TABLE VII
QUANTITATIVE PERFORMANCE OF THE PROPOSED 3D MAD TECHNIQUES

• RQ#1: Does the proposed 3D face morphing generation technique yield a high-quality 3D morphed model?
– Yes. The proposed method generates a high-quality morphed model almost similar to the original 3D bona fide. The quality analysis reported in Figure 12 and Table V also justifies the quality of the generated 3D morphs quantitatively, as the quality values from 3D morphing show large overlap with those of the 3D bona fide. In addition, the human observer analysis reported in Section III-B confirms the quality of the proposed 3D face morphing generation method, as the morphs were found reasonably difficult to detect based on artefacts.
• RQ#2: Does the generated 3D face morphing model indicate vulnerability for both automatic 3D FRS and human observers?
– Yes. Based on the analysis reported in Section III-C, the generated 3D face morphing model indicates a high degree of vulnerability for both automatic 3D FRS and human observers.
• RQ#3: Are the generated 3D face morphing models more vulnerable than 2D face images for both automatic 3D FRS and human observers?
– With automatic FRS, the 3D face morphing models are more vulnerable than their 2D counterparts, as shown in Figure 9.
– However, the vulnerability is almost comparable when evaluated in the human observer study (see Section III-B), where one of the main reasons could be the greater prevalence of 2D morphs, which makes human observers sensitive to which artifacts to look for.
• RQ#4: Can the 3D point cloud information be used to detect 3D face morphing attacks reliably?
– Yes. Using the proposed 3D face morphing attack detection approaches (see Section III-E), the point cloud information can be used for reliable 3D morphing detection.

V. LIMITATIONS OF CURRENT WORK AND POTENTIAL FUTURE WORKS

Although this work presents a new dimension for face morphing attack generation and detection, especially in 3D, it has a few limitations. First, in the current scope of work, 3D morph generation and detection were carried out on high-quality 3D scans collected using the Artec Eva sensor. We employed high-quality 3D face scans to achieve good enrolment-quality scans that may reflect real-life ID enrolment scenarios. Thus, future studies should investigate the proposed 3D morphing generation and detection techniques using low-quality (depth) 3D scans. Furthermore, extending the study to in-the-wild capture can also be considered in future work. Second, the analysis was conducted using 41 data subjects owing to the pandemic outbreak at the time. However, we also present results on the publicly available 3D face dataset, Facescape, with 100 unique IDs. Future work could benchmark the proposed method on large-scale datasets with different 3D resolutions. Third, cleaning the noise from 3D scans is tedious and sometimes requires manual intervention. Thus, future work can develop fully automated noise-removal methods for 3D point clouds to easily generate 3D morphs.

VI. CONCLUSION

This work presented a new dimension for face morphing attack generation and detection, particularly in 3D. We introduced a novel algorithm to generate high-quality 3D face morphing models using point clouds. To validate the attack potential of the newly generated 3D face morphing attacks, a vulnerability analysis was performed using 2D and 3D FRS. Furthermore, a human observer analysis was presented to investigate the usefulness of 3D information in morph detection. The obtained results justify the high vulnerability of the proposed 3D face morphing models. We also presented an automatic quality analysis of the generated 3D morphing models, which indicated a quality similar to that of the bona fide 3D scans. Finally, we proposed three different 3D MAD algorithms to detect 3D morphing attacks using pretrained point-based CNN models. Extensive experiments indicated the efficacy of the proposed 3D MAD algorithms in detecting 3D face morphing attacks.

REFERENCES

[1] F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A unified embedding for face recognition and clustering," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2015, pp. 815–823.
[2] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, "ArcFace: Additive angular margin loss for deep face recognition," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2019, pp. 4685–4694.
[3] M. Ferrara, A. Franco, and D. Maltoni, "The magic passport," in Proc. IEEE Int. Joint Conf. Biometr., 2014, pp. 1–7.
[4] R. Raghavendra, K. B. Raja, and C. Busch, "Detecting morphed face images," in Proc. IEEE 8th Int. Conf. Biometr. Theory, Appl. Syst. (BTAS), 2016, pp. 1–7.
[5] M. Ferrara, A. Franco, and D. Maltoni, "Face demorphing," IEEE Trans. Inf. Forensics Security, vol. 13, no. 4, pp. 1008–1017, Apr. 2018.
[6] H. Zhang, S. Venkatesh, R. Ramachandra, K. Raja, N. Damer, and C. Busch, "MIPGAN—Generating strong and high quality morphing attacks using identity prior driven GAN," IEEE Trans. Biometr. Behav. Ident. Sci., vol. 3, no. 3, pp. 365–383, Jul. 2021.
[7] U. Scherhag et al., "Biometric systems under morphing attacks: Assessment of morphing techniques and vulnerability reporting," in Proc. Int. Conf. Biometr. Special Interest Group (BIOSIG), 2017, pp. 1–7.
[8] R. Raghavendra, K. Raja, S. Venkatesh, and C. Busch, "Face morphing versus face averaging: Vulnerability and detection," in Proc. IEEE Int. Joint Conf. Biometr. (IJCB), 2017, pp. 555–563.
[9] N. Damer, A. M. Saladie, A. Braun, and A. Kuijper, "MorGAN: Recognition vulnerability and attack detectability of face morphing attacks created by generative adversarial network," in Proc. IEEE 9th Int. Conf. Biometr. Theory, Appl. Syst. (BTAS), 2018, pp. 1–10.
[10] S. Venkatesh, H. Zhang, R. Ramachandra, K. Raja, N. Damer, and C. Busch, "Can GAN generated morphs threaten face recognition systems equally as landmark based morphs?—Vulnerability and detection," in Proc. 8th Int. Workshop Biometr. Forensics (IWBF), 2020, pp. 1–6.
[11] S. Venkatesh, R. Ramachandra, K. Raja, and C. Busch, "Face morphing attack generation and detection: A comprehensive survey," IEEE Trans. Technol. Soc., vol. 2, no. 3, pp. 128–145, Sep. 2021.
[12] M. Ngan, P. Grother, K. Hanaoka, and J. Kuo, "Part 4: MORPH - performance of automated face MORPH detection." 2021. Accessed: Oct. 16, 2021. [Online]. Available: https://pages.nist.gov/frvt/reports/morph/frvt_morph_report.pdf
[13] A. A. Deeb. "UAE reviews features of new ID card, 3D photo included." 2020. Accessed: Oct. 16, 2021. [Online]. Available: https://www.gulftoday.ae/news/2021/08/05/uae-reviews-features-of-new-id-card-3d-photo-included
[14] Stereo Laser Image, IDEMIA, Courbevoie, France, 2020. Accessed: Oct. 18, 2021. [Online]. Available: https://www.idemia.com/wp-content/uploads/2021/02/stereo-laser-image-idemia-brochure-202007.pdf
[15] J. W. J. Ter Hennepe. "3D photo ID." 2010. Accessed: Oct. 16, 2021. [Online]. Available: https://www.icao.int/Meetings/AMC/MRTD-SEMINAR-2010-AFRICA/Documentation/11_Morpho-3DPhotoID.pdf
[16] "3D face enrolment for ID cards, 3D face based ABC systems." 2021. Accessed: Oct. 18, 2021. [Online]. Available: http://cubox.aero/cubox/php/en_product01-2.php?product=1/
[17] S. Dent. "Using a 3D render as a french ID card 'photo'." 2017. Accessed: Oct. 16, 2021. [Online]. Available: https://engt.co/3EiPnQv
[18] Machine Readable Travel Documents. Part 11: Security Mechanisms for MRTDs, ICAO, Montreal, QC, Canada, Rep. 9303, 2021.
[19] Information Technology—Extensible Biometric Data Interchange Formats—Part 5: Face Image Data, IEC, Geneva, Switzerland, ISO/IEC 39794-5:2019, 2019.
[20] "Apple face ID." 2017. [Online]. Available: https://en.wikipedia.org/wiki/Face_ID
[21] S. P. Vardam. "Vulnerability of 3D face recognition systems of morphing attacks." Aug. 2021. [Online]. Available: http://essay.utwente.nl/88470/
[22] V. Blanz and T. Vetter, "A morphable model for the synthesis of 3D faces," in Proc. 26th Annu. Conf. Comput. Graph. Interact. Techn., 1999, pp. 187–194.
[23] B. Egger et al., "3D morphable face models—Past, present, and future," ACM Trans. Graph., vol. 39, no. 5, pp. 1–38, 2020.
[24] Y. Yao, B. Deng, W. Xu, and J. Zhang, "Quasi-Newton solver for robust non-rigid registration," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2020, pp. 7597–7606.
[25] H. Li, R. W. Sumner, and M. Pauly, "Global correspondence optimization for non-rigid registration of depth scans," Comput. Graph. Forum, vol. 27, no. 5, pp. 1421–1430, 2008.
[26] N. Gelfand, N. J. Mitra, L. J. Guibas, and H. Pottmann, "Robust global registration," in Proc. Symp. Geom. Process., 2005, pp. 197–206.
[27] P. J. Besl and N. D. McKay, "Method for registration of 3-D shapes," in Sensor Fusion IV: Control Paradigms and Data Structures (International Society for Optics and Photonics 1611). Bellingham, WA, USA: SPIE, 1992, pp. 586–606.
[28] B. Deng, Y. Yao, R. M. Dyke, and J. Zhang, "A survey of non-rigid 3D registration," 2022, arXiv:2203.07858.
[29] M. Liao, Q. Zhang, H. Wang, R. Yang, and M. Gong, "Modeling deformable objects from a single depth camera," in Proc. IEEE 12th Int. Conf. Comput. Vis., 2009, pp. 167–174.
[30] J. Yang, D. Guo, K. Li, Z. Wu, and Y.-K. Lai, "Global 3D non-rigid registration of deformable objects using a single RGB-D camera," IEEE Trans. Image Process., vol. 28, no. 10, pp. 4746–4761, Oct. 2019.
[31] K. Zampogiannis, C. Fermüller, and Y. Aloimonos. "Topology-aware non-rigid point cloud registration." 2018. [Online]. Available: http://arxiv.org/abs/1811.07014
[32] A. Myronenko and X. Song, "Point set registration: Coherent point drift," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 12, pp. 2262–2275, Dec. 2010.
[33] G. Trappolini, L. Cosmo, L. Moschella, R. Marin, S. Melzi, and E. Rodolà, "Shape registration in the time of transformers," in Proc. Adv. Neural Inf. Process. Syst., vol. 34, 2021, pp. 5731–5744.
[34] Y. Zeng, Y. Qian, Z. Zhu, J. Hou, H. Yuan, and Y. He, "CorrNet3D: Unsupervised end-to-end learning of dense correspondence for 3D point clouds," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 6048–6057.
[35] "Artec Eva sensor." 2021. Accessed: Oct. 16, 2021. [Online]. Available: https://bit.ly/3BiGnJ1
[36] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia, "MeshLab: An open-source mesh processing tool," in Eurographics Italian Chapter Conference, V. Scarano, R. D. Chiara, and U. Erra, Eds. Vienna, Austria: Eurograph. Assoc., 2008.
[37] B. Gärtner, "Fast and robust smallest enclosing balls," in European Symposium on Algorithms. Berlin, Germany: Springer, 1999, pp. 325–338.
[38] D. Haehnel, S. Thrun, and W. Burgard, "An extension of the ICP algorithm for modeling nonrigid objects with mobile robots," in Proc. IJCAI, vol. 3, 2003, pp. 915–920.
[39] M. Kazhdan and H. Hoppe, "Screened Poisson surface reconstruction," ACM Trans. Graph., vol. 32, no. 3, pp. 1–13, Jul. 2013. [Online]. Available: https://doi.org/10.1145/2487228.2487237
[40] G. Guennebaud and M. Gross, "Algebraic point set surfaces," ACM Trans. Graph., vol. 26, no. 3, Jul. 2007, Art. no. 23-es. [Online]. Available: https://doi.org/10.1145/1276377.1276406
[41] A. C. Öztireli, G. Guennebaud, and M. Gross, "Feature preserving point set surfaces based on non-linear kernel regression," Comput. Graph. Forum, vol. 28, no. 2, pp. 493–501, 2009.
[42] D. E. King, "Dlib-ml: A machine learning toolkit," J. Mach. Learn. Res., vol. 10, pp. 1755–1758, Dec. 2009.
[43] A. Telea, "An image inpainting technique based on the fast marching method," J. Graph. Tools, vol. 9, no. 1, pp. 23–34, 2004.
[44] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in Proc. Int. Conf. Comput. Vis., 2011, pp. 2564–2571.
[45] J. D. Foley, A. Van Dam, S. K. Feiner, J. F. Hughes, and R. L. Phillips, Introduction to Computer Graphics. Reading, MA, USA: Addison-Wesley, 1994, vol. 55.
[46] T. Li, T. Bolkart, M. J. Black, H. Li, and J. Romero, "Learning a model of facial shape and expression from 4D scans," ACM Trans. Graph., vol. 36, no. 6, p. 194, 2017. [Online]. Available: https://doi.org/10.1145/3130800.3130813
[47] S. Venkatesh, H. Zhang, R. Ramachandra, K. Raja, N. Damer, and C. Busch, "Can GAN generated morphs threaten face recognition systems equally as landmark based morphs?—Vulnerability and detection," in Proc. 8th Int. Workshop Biometr. Forensics (IWBF), 2020, pp. 1–6.
[48] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, "PointNet++: Deep hierarchical feature learning on point sets in a metric space," 2017, arXiv:1706.02413.
[49] G. Mu, D. Huang, G. Hu, J. Sun, and Y. Wang, "Led3D: A lightweight and efficient deep approach to recognizing low-quality 3D faces," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2019, pp. 5773–5782.
[50] P. Phillips et al., "Overview of the face recognition grand challenge," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR'05), vol. 1, 2005, pp. 947–954.
[51] L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato, "A 3D facial expression database for facial behavior research," in Proc. 7th Int. Conf. Automat. Face Gesture Recognit. (FGR06), 2006, pp. 211–216.
[52] T. Maurer et al., "Performance of Geometrix ActiveID TM 3D face recognition engine on the FRGC data," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR)-Workshops, 2005, p. 154.
[53] H. Yang et al., "FaceScape: A large-scale high quality 3D face dataset and detailed riggable 3D face prediction," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2020, pp. 598–607.
[54] Best Practice Technical Guidelines for Automated Border Control ABC Systems, FRONTEX, Warsaw, Poland, 2015.
[55] Z. Zhang, "No-reference quality assessment for 3D colored point cloud and mesh models," 2021, arXiv:2107.02041.
[56] A. Goyal, H. Law, B. Liu, A. Newell, and J. Deng, "Revisiting point cloud shape classification with a simple and effective baseline," in Proc. Int. Conf. Mach. Learn., 2021, pp. 3809–3820.
[57] Z. Wu, S. Song, A. Khosla, X. Tang, and J. Xiao. "3D ShapeNets for 2.5D object recognition and next-best-view prediction." 2014. [Online]. Available: http://arxiv.org/abs/1406.5670
[58] Information Technology—Biometric Presentation Attack Detection—Part 3: Testing and Reporting, IEC, Geneva, Switzerland, ISO/IEC IS 30107-3, ISO/IEC JTC1 SC37 Biometrics, 2017.
Jag Mohan Singh (Member, IEEE) received the B.Tech. (Hons.) and M.S. by research degrees in computer science from the International Institute of Information Technology, Hyderabad, in 2005 and 2008, respectively. He is currently pursuing the Ph.D. degree with the Norwegian Biometrics Laboratory, Norwegian University of Science and Technology, Gjøvik. He worked with the industrial research and development departments of Intel, Samsung, Qualcomm, and Applied Materials, India, from 2010 to 2018. He has published several papers at international conferences focusing on presentation attack detection, morphing attack detection, and ray-tracing. His current research interests include generalizing classifiers in the cross-dataset scenario and neural rendering.

Raghavendra Ramachandra (Senior Member, IEEE) received the Ph.D. degree in computer science and technology from the University of Mysore, Mysore, India; Institute Telecom; and Telecom Sudparis, Evry, France (conducted as collaborative work) in 2010. He is currently a Full Professor with the Institute of Information Security and Communication Technology, Norwegian University of Science and Technology, Gjøvik, Norway. He is also working as a Research and Development Chief with MOBAI AS. He was a Researcher with the Istituto Italiano di Tecnologia, Genoa, Italy, where he worked on video surveillance and social signal processing. He has authored several papers and is a reviewer for several international conferences and journals. He also holds several patents for biometric presentation attack detection and morphing attack detection. He has participated (as a PI, a Co-PI, or a contributor) in several EU projects, IARPA USA, and other national projects. His main research interests include deep learning, machine learning, data fusion schemes, and image/video processing, with applications to biometrics, multimodal biometric fusion, human behaviour analysis, and crowd behaviour analysis. He has received several best paper awards. He has also been involved in various conference organizing and program committees and has served as an associate editor for various journals. He is serving as an Editor for the ISO/IEC 24722 standards on multimodal biometrics and an Active Contributor for ISO/IEC SC 37 standards for biometrics. He is a VP of Finance at the IEEE Biometric Council.