Proceedings of the International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME 2023)
19-20 July 2023, Tenerife, Canary Islands, Spain
Abstract—Society's reliance on social media for authentic information has only increased over the past years, raising the potential consequences of the spread of misinformation. One method growing in popularity is to deceive users through the use of a deepfake. A deepfake is a recent invention, enabled by the latest technological advancements, that allows nefarious online users to replace a person's face with a computer-generated, synthetic face, often that of a powerful member of society. Deepfake images and videos now provide the means to mimic important political and cultural figures and to spread massive amounts of false information. Models that can detect these deepfakes and prevent the spread of misinformation are therefore of tremendous necessity. In this paper, we propose a new deepfake detection schema utilizing two deep learning algorithms: long short-term memory and multilayer perceptron. We evaluate our model on a publicly available dataset named 140k Real and Fake Faces, detecting images altered by a deepfake with accuracies as high as 74.7%.

Keywords—Deepfake, Machine Learning, Fake Image Detection, Long Short-Term Memory, Multilayer Perceptron

I. INTRODUCTION

In the modern world, digital media has a large impact on the opinions of the public, particularly media that originates from well-known people such as politicians or celebrities. Deepfakes can take advantage of this influence and use it for malicious purposes. A deepfake is a digitally created photo or video of a person that depicts not the real person but an altered version of them. Deepfake technology has progressed to the point that almost anyone can easily impersonate someone else without their permission. This has allowed many people to maliciously create fake photos and videos of well-known public figures, painting them in a negative light or making it seem as if they are saying or doing something that they have not. Such media can spread rapidly and cause public outrage or confusion when the deepfake is realistic enough to trick the average person. This is a prominent reason why research is needed to develop ways of detecting deepfakes accurately, helping to stop the spread of malicious media and to create a more informed public.

This paper provides a new method of deepfake image detection that uses two different machine learning algorithms. Machine learning has been shown to be effective for image classification [1], user authentication [2-13], and other security functions [14-16], suggesting that it can also be an effective method for detecting deepfakes. The algorithms tested in this study are the Long Short-Term Memory network (LSTM) and the Multilayer Perceptron (MLP), both of which have been shown to produce accurate results when used for deepfake detection [17-19]. The dataset we use to test these algorithms is 140k Real and Fake Faces [20], a publicly available dataset retrieved from Kaggle. It consists of 70,000 real and 70,000 fake images drawn from two different datasets: Flickr-Faces-HQ [21], which contains entirely real faces, and the Deepfake Detection Challenge dataset [22], which contains deepfake faces created using style-based Generative Adversarial Networks (GANs). The novel contribution of this study is the evaluation of both algorithms on their ability to classify real and fake images on the same dataset.

II. RELATED WORK

A. LSTM

In one study [17], researchers use a convolutional LSTM-based residual network, CLRNet, to detect deepfakes. This method focuses on deepfake videos rather than images, detecting the inconsistencies between frames of a video. It uses a convolutional LSTM to overcome the lack of spatial information recorded by other LSTM methods, employing 3D tensors that record two dimensions of spatial information. Sets of five frames are taken from videos in multiple datasets, resized, and put through data augmentation before being evaluated by the algorithm. CLRNet is compared to several current methods on three different tests of transfer learning, performing best with an accuracy of 97.18% when a single source dataset and a single target dataset are used. CLRNet has been shown to be a superior architecture compared to previous baseline models, and provides a step towards improved future deepfake detection.

B. CNN