
Volume 9, Issue 1, January – 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Innovative Face Detection using Artificial Intelligence
Hira Khalid
Department of Software Engineering
HITEC University
Taxila, Pakistan

Abstract:- With the growth of technology, various kinds of fraud are becoming common, especially in domains such as face detection and other biometric systems. It is hard for service providers to keep data private, and systems must also be protected from spoofing. Hackers could use fake eyes or snapshots of a face to get themselves authenticated. Such face recognition can also be defeated through video streaming or by capturing specific moments of an individual, and these frauds are easy to commit because our systems cannot distinguish a real, live face from a face extracted from photos and videos. Moreover, such photos and videos are freely available on the Internet and from other sources. Nowadays, many ideas are being implemented to detect face liveness for the purpose of authentication. This paper presents an innovative approach to face detection based on feature fusion and machine learning classifiers.

Keywords:- Face Detection, Artificial Intelligence, Classification, Feature Extraction.

I. INTRODUCTION

Biometrics is the common term for technologies used to identify a person based on the characteristics of that individual. The first biometric technology was fingerprint identification. With the growth of the field, further classes of systems appeared, i.e., palm-print identification, iris recognition, speech recognition, face recognition, DNA, and gesture recognition. The performance of identification relies on the security, accuracy, and robustness of the technology. In every biometric system, the features of an individual must first be saved in a database. Each time the individual comes for identification, the freshly captured features are matched with the saved features; if both sets of features match, the person is accepted as an approved individual.

With the advancement of technology, biometric systems are widely used for authentication. However, many applications are unable to maintain the security of the information saved in their databases (DBs). These DBs can be effortlessly accessed and copied through the Internet because the information is not secured, and since nobody can stop the distribution of stolen information, the benefits of biometric technology are turning into disadvantages. One such technology is face detection, widely used to identify a legitimate individual based on his or her physical and behavioral characteristics. Here, identification is performed by comparing existing pictures of the individual in the DB with the live features of that individual [1].

However, such systems can be fooled effortlessly by presenting saved images to the identification system without notifying the legitimate individual. Images or videos of the identified individual, or pictures taken from his or her social media accounts, can easily be used for spoofing. Thus, to maintain the privacy of these pictures and to check the liveness of a picture presented for identification, many investigators are applying various techniques.

In the proposed work, features such as LuminanceR and Luminance are extracted from an image dataset of 12,146 pictures. Mean values of the recorded frames are passed to various classification algorithms, and the result is a model trained on the dataset. For video, a live face is first detected as a preprocessing step; the features are then extracted, and the trained model is used for prediction.

II. LITERATURE SURVEY

Biometric identification has acquired much significance nowadays. Biometric authorization can be done through face recognition, palm-print recognition, fingerprint recognition, iris recognition, etc. Face recognition is the most widely used biometric application, as it requires little human involvement. However, precision is the most significant factor for face recognition, and this type of application must recognize the identified individual in minimal time. Among the different techniques applied by researchers to recognize faces is Multi-level Block Truncation Coding [1].

That experiment was implemented using 4 levels of Block Truncation Coding to extract a feature vector for a DB of 100 pictures. Algorithm performance is determined by observing the Genuine Acceptance Ratio and the False Acceptance Ratio. In conclusion, it was noticed that the precision of the result increases with the level of Block Truncation Coding.

Similarly, the security of the DB becomes more significant as face recognition becomes more widespread; these days a DB can easily be viewed and spoofed. Dynamic texture can be used to counter spoofing in face detection [2]. In that research, the first grey-scaled frame obtained from the original frame is transformed with a modified census transform. Detected faces, considered at 50 pixels of height and width, were normalized to 64 x 64 matrices, and noise was reduced using LBP-TOP computations. LBP operators were applied to every plane and used to calculate and concatenate histograms. After feature extraction, the next step is binary classification, which differentiates a real person from a spoofing attack that uses pictures obtained from DBs. With this procedure, the best results are attained using a non-linear Support Vector Machine classifier.

Recaptured pictures can also be used for real face detection to counter spoofing attacks [3]. The researchers considered differentiating spoofed and real pictures from the NUAA DB, using Hue Channel Distribution, blurriness, and the specular ratio to check the originality of an image [3]. The method is considered effective at recognizing the originality of images, and a detection rate of 20 fps can be achieved on a personal computer.

Live face detection can also be done by 3D face shape analysis [4]. In this technique, 3D data is captured to simulate a spoof attack; the experiment places either a real face or a previously taken picture in front of the camera. The authors extract 3D features from a 2D photographic source [4]. The lack of surface variation makes it straightforward to conclude that the scanned picture originates from a 2D image and is not an actual person's face.

Pupil tracking can also be used to detect spoofing attacks [5]. The eye area is extracted with a Haar cascade classifier [5]; the eye portion is cropped from the camera frame and rotated into a constant eye region, after which the pupil is extracted from that specific eye area.

After some frames, the algorithm selects a direction and sends a signal to an Arduino to activate one of eight LEDs in the chosen direction. The direction of the eye is then observed to check whether the pupil direction matches the LED; a match results in live-face detection.

III. PROPOSED METHOD

For face liveness detection, this method extracts the features Luminance, RGB-Grey, and LuminanceR. Fig 1 shows each step.

Fig 1 ML Model for Face Detection

The proposed method is divided into two phases. In the first phase, the pictures from the NUAA database are traversed incrementally. During traversal, the red, blue, and green planes are extracted from each image; then the Luminance, LuminanceR, and Grey features are calculated and saved in a feature table together with their class. After completion, training of the system starts using machine learning algorithms (see the sketch after the feature definitions below).

To calculate the Grey, Luminance, and LuminanceR features, the following formulas are used:

Grey = Average(Image(a, b)), where a, b index the rows and columns of the image, with

Rows = rows of the images in the DB
Columns = columns of the images in the DB

Luminance = (0.299 x Red) + (0.587 x Green) + (0.114 x Blue)

Here,

Red = red-plane values of the image
Green = green-plane values of the image
Blue = blue-plane values of the image

LuminanceR = (0.2126 x Red) + (0.7152 x Green) + (0.0722 x Blue)

where LuminanceR stands for Relative Luminance.
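The first-phase feature extraction described above (mean Grey, Luminance, and Relative Luminance per image, collected into a feature table with a class label) can be sketched in Python with OpenCV and NumPy, the tools named in Section IV. This is a minimal sketch, not the authors' code: the folder names ("client"/"imposter"), the dataset path "NUAA", and the helper names are assumptions, and the exact definition of the Grey feature (read here as the mean pixel value) is our interpretation of the formula above.

```python
# Sketch of the phase-1 feature extraction, assuming the NUAA images sit in
# class-labelled folders; folder names and paths are illustrative only.
import os
import cv2
import numpy as np

def extract_features(image_bgr):
    """Return mean Grey, Luminance and LuminanceR for one image."""
    # OpenCV loads images as BGR, so split the planes accordingly.
    blue, green, red = cv2.split(image_bgr.astype(np.float64))
    grey = image_bgr.mean()                                        # Grey = Average(Image(a, b))
    luminance = (0.299 * red + 0.587 * green + 0.114 * blue).mean()
    luminance_r = (0.2126 * red + 0.7152 * green + 0.0722 * blue).mean()
    return grey, luminance, luminance_r

def build_feature_table(root_dir):
    """Traverse the dataset and collect one feature row per image."""
    rows = []
    for label, class_dir in enumerate(["imposter", "client"]):     # 0 = spoof, 1 = real
        folder = os.path.join(root_dir, class_dir)
        for name in os.listdir(folder):
            image = cv2.imread(os.path.join(folder, name))
            if image is None:
                continue                                           # skip non-image files
            rows.append((*extract_features(image), label))
    return np.array(rows)          # columns: Grey, Luminance, LuminanceR, class

# feature_table = build_feature_table("NUAA")   # hypothetical dataset path
```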

In the second phase, the system fetches videos and images and extracts face frames. From each face frame, the red, blue, and green planes are extracted, and the features are computed from these color planes together with Luminance and Relative Luminance. Similar features are then fused, and all the calculated values are used as input to the trained machine learning model for face detection. Finally, the specific face is classified as live or not, as in the sketch below.
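A minimal sketch of this second phase, assuming the trained classifier `model` and the `extract_features` helper from the earlier sketch; the Haar cascade face detector is a standard OpenCV component and stands in for whatever face-frame extraction the authors used.

```python
# Sketch of the phase-2 prediction loop: detect a face in each webcam frame,
# compute the same fused features, and ask the trained classifier whether the
# face is live. `model` and `extract_features` are assumed from the earlier sketch.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def predict_liveness(model, camera_index=0, n_frames=30):
    capture = cv2.VideoCapture(camera_index)      # e.g. the 2 MP Logitech webcam used in the paper
    votes = []
    for _ in range(n_frames):
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            face = frame[y:y + h, x:x + w]
            features = np.array([extract_features(face)])    # Grey, Luminance, LuminanceR
            votes.append(model.predict(features)[0])
    capture.release()
    # Majority vote over the captured frames: 1 = live, 0 = spoof.
    return int(np.mean(votes) > 0.5) if votes else None
```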
Several machine learning classifiers are used, i.e., Support Vector Machine, Random Forest, Random Tree, Decision Table, Naïve Bayes, MLP, and J48. Accuracy is calculated as a percentage for every feature using each of these algorithms. Out of all the images in the database, 50 percent are used for training and the rest for testing and accuracy calculation. This 50-50 split is then changed to 60-40 and 80-20 to observe the behaviour of the fused features, i.e., Grey, Luminance, and Relative Luminance (a training sketch follows).
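A sketch of this comparison, again assuming `feature_table` comes from the first sketch. The paper's classifier list includes Weka-style learners (Decision Table, Random Tree, J48); the scikit-learn models below are approximate stand-ins, not the authors' exact configurations.

```python
# Sketch: train several classifiers on the fused features and report accuracy
# for the 50-50, 60-40 and 80-20 train/test splits mentioned above.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

CLASSIFIERS = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(),
    "Decision Tree (J48-like)": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000),
}

def compare_classifiers(feature_table):
    X, y = feature_table[:, :3], feature_table[:, 3]
    for train_fraction in (0.5, 0.6, 0.8):                 # 50-50, 60-40, 80-20 splits
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_fraction, random_state=0, stratify=y)
        for name, clf in CLASSIFIERS.items():
            clf.fit(X_tr, y_tr)
            acc = accuracy_score(y_te, clf.predict(X_te)) * 100
            print(f"{int(train_fraction * 100)}% train  {name:<25s} {acc:5.1f}%")
```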

IV. EXPERIMENTATION ENVIRONMENT

For this research, a 2-megapixel Logitech webcam is used. The OpenCV library and the Python language are used on Windows 8. The NUAA dataset, with 12,146 images in total, is used as the image collection.

V. PERFORMANCE MEASURES

The performance of the features Luminance, Relative Luminance, and Grey is measured with the Average Classification Accuracy:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP = True Positive, TN = True Negative, FP = False Positive, FN = False Negative.
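For completeness, the Average Classification Accuracy above can be computed directly from a confusion matrix; `y_true` and `y_pred` are assumed to be the held-out labels and predictions from the comparison sketch.

```python
# Minimal illustration of Accuracy = (TP + TN) / (TP + TN + FP + FN).
from sklearn.metrics import confusion_matrix

def average_classification_accuracy(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return (tp + tn) / (tp + tn + fp + fn)
```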

VI. RESULTS & COMPARISON

Below are the results obtained by the algorithm proposed for face liveness detection. When the NUAA DB is divided 50-50, i.e., 50 percent of the images for training and the rest for testing, the accuracies in Table 1 are obtained.

Table 1 Accuracy for Face Detection for a 50%-50% training and testing image dataset

It is noticed from the results that performance is far better after the fusion of all the features using the machine learning algorithms than when each feature is considered separately.

Table 2 Accuracy for Face Detection for a 60% training and 40% testing image dataset

It is again noticed that performance is better when working with the fusion of all the features, using the machine learning algorithms, than when taking the features separately.

Table 3 Accuracy for Face Detection for an 80% training and 20% testing image dataset

VII. CONCLUSION

Face detection has become very important nowadays because many biometric devices use the face for identification. When many images are saved in a database, the chances of hacking and spoofing increase: all the saved images can be stolen from the database and reused for later authentication. Many systems today grant access to users from images alone, and these machines are not able to detect the liveness of a human being. In this method, the NUAA database is used to train and test images using several machine learning algorithms, i.e., Naïve Bayes, Support Vector Machine, MLP, Decision Table, Decision Trees, Random Forest, and J48. In the proposed method, the Luminance, Relative Luminance, and Grey features are extracted from the images of the NUAA database. Videos are then processed, and the features are evaluated both separately and as a fusion of all the features; these values are taken as input for model training. After the whole procedure, it is deduced that performance is better with the fusion of the features, i.e., Grey, Luminance, and LuminanceR. The number of images used in the observations is also varied for further performance testing: 50-50%, 60-40%, and 80-20% of the images are taken for training and testing. The developed system could therefore be used to detect face liveness and avoid spoofing attacks through the fusion of features.

REFERENCES

[1]. Kekre, H.B., Thepade, S. and Khandelwal, S. (2011) 'Face Recognition using Multilevel Block Truncation Coding', International Journal of Computer Applications (0975-8887), 36(11), December 2011.
[2]. de Freitas Pereira, T., Komulainen, J., Anjos, A., De Martino, J.M., Hadid, A., Pietikäinen, M. and Marcel, S. (2014) 'Face liveness detection using dynamic texture', EURASIP Journal on Image and Video Processing, Article No. 2 (2014).
[3]. Luan, X., Wang, H., Ou, W. and Liu, L. (2019) 'Face Liveness Detection with Recaptured Feature Extraction', IEEE International Conference on Security, Pattern Analysis and Cybernetics.
[4]. Lagorio, A., Tistarelli, M., Cadoni, M., Fookes, C. and Sridharan, S. (2013) 'Liveness Detection Based on 3D Face Shape Analysis', International Workshop on Biometrics and Forensics, IEEE.
[5]. Galbally, J. et al. (2012) 'Iris liveness detection based on quality related features', 5th IAPR International Conference on Biometrics (ICB), IEEE.
[6]. Rehman, Y.A., Po, L.M. and Liu, M. (2018) 'LiveNet: Improving features generalization for face liveness detection using convolution neural networks', Expert Systems with Applications, 108, pp. 159–169. doi:10.1016/j.eswa.2018.05.004.
[7]. Rehman, Y.A., Po, L.M. and Liu, M. (2018) 'LiveNet: Improving features generalization for face liveness detection using convolution neural networks', Expert Systems with Applications, 108, pp. 159–169. doi:10.1016/j.eswa.2018.05.004.
[8]. Seo, J. and Chung, I.-J. (2019) 'Face liveness detection using thermal face-CNN with external knowledge', Symmetry, 11(3), p. 360. doi:10.3390/sym11030360.
[9]. Mohamed, A.A. et al. (2021) 'Face liveness detection using a sequential CNN technique', 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC). doi:10.1109/ccwc51732.2021.9376030.
[10]. Policepatil, S. and Hatture, S.M. (2021) 'Face liveness detection: An overview', International Journal of Scientific Research in Science and Technology, pp. 22–29. doi:10.32628/ijsrst21843.
[11]. Sengur, A. et al. (2018) 'Deep feature extraction for face liveness detection', 2018 International Conference on Artificial Intelligence and Data Processing (IDAP). doi:10.1109/idap.2018.8620804.
[12]. Khairnar, S. et al. (2023) 'Face liveness detection using artificial intelligence techniques: A systematic literature review and future directions', Big Data and Cognitive Computing, 7(1), p. 37. doi:10.3390/bdcc7010037.
[13]. Farrukh, H. et al. (2020) 'FaceRevelio', Proceedings of the 26th Annual International Conference on Mobile Computing and Networking. doi:10.1145/3372224.3419206.
[14]. Fourati, E., Elloumi, W. and Chetouani, A. (2019) 'Anti-spoofing in face recognition-based biometric authentication using image quality assessment', Multimedia Tools and Applications, 79(1–2), pp. 865–889. doi:10.1007/s11042-019-08115-w.
