
July 2022

Perspective
EXPERT INSIGHTS ON A TIMELY POLICY ISSUE

TODD C. HELMUS

Artificial Intelligence, Deepfakes, and Disinformation: A Primer

Disinformation is getting an upgrade. A primary tool of disinformation warfare has been the simple meme: an image, a video, or text shared on social media that conveys a particular thought or feeling (Sprout Social, undated). Russia used memes to target the 2016 U.S. election (DiResta et al., 2019); China used memes to target protesters in Hong Kong (Wong, Shepherd, and Liu, 2019); and those seeking to question the efficacy of vaccines for coronavirus disease 2019 used memes as a favorite tool (Wasike, 2022; Helmus et al., 2020). By many accounts, memes, as well as other common and seemingly old-fashioned disinformation tools such as fake news webpages and stories and strident Facebook posts, have successfully undermined confidence in U.S. elections (Atlantic Council's Digital Forensic Research Lab, 2021), sown division in the American electorate (Posard et al., 2020), and increased the adoption of conspiracy theories (Center for Countering Digital Hate, 2021; Marcellino et al., 2021). Advances in computer science and artificial intelligence (AI), however, have brought to life a new and highly compelling method for conveying disinformation: deepfakes. Deepfake videos are synthetically altered footage in which the depicted face or body has been digitally modified to appear as someone or something else (Merriam-Webster, undated-a). Such videos are becoming increasingly lifelike, and many fear that the technology will dramatically increase the threat of both foreign and domestic disinformation. This threat has been realized for the many women who have been targeted by AI-enabled pornography sites (Jankowicz et al., 2021).
In other ways, however, the potential for havoc is yet to be realized. For example, some commentators expressed confidence that the 2020 election would be targeted and potentially upended by a deepfake video. Although the deepfakes did not come, that does not eliminate the risk for future elections (Simonite, 2020).

Deepfakes and related AI-generated fake content arrive at a highly vulnerable time for both the United States and the broader international community. In their seminal report, Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life (2018), RAND colleagues Jennifer Kavanagh and Michael D. Rich highlight four key trends that together characterize the apparently decreasing importance of truth in American society: increasing disagreement in evaluations of facts and analytical interpretations of facts and data; a blurring of the line between opinion and fact; an increase in the relative volume, and resulting influence, of opinion and personal experience over fact; and declining trust in formerly respected sources of factual information. These trends, to the extent that they continue, suggest that deepfakes will increasingly find a highly susceptible audience.

The purpose of this Perspective is to provide policymakers with an overview of the deepfake threat. The Perspective first presents a review of the technology undergirding deepfakes and associated AI-driven technologies that provide the foundation for deepfake videos, voice cloning, deepfake images, and generative text. It highlights the threats that deepfakes pose, as well as factors that could mitigate such threats. The paper then provides a review of the ongoing efforts to detect and counter deepfakes and concludes with an overview of recommendations for policymakers. This Perspective is based on a review of published literature on deepfake- and AI-disinformation technologies. Moreover, over the course of writing this Perspective, I consulted 12 leading experts in the disinformation field.

Abbreviations
AI: artificial intelligence
C2PA: Coalition for Content Provenance and Authenticity
CAI: Content Authenticity Initiative
GAN: generative adversarial network
GPT-3: Generative Pre-Trained Transformer 3
OSINT: open-source intelligence technique

Artificial Intelligence Systems

Various AI technologies are ripe for use in disinformation campaigns. Deepfake videos represent an obvious threat, but voice cloning, deepfake images, and generative text also merit concern. This section provides a review of the technologies and capabilities undergirding these AI-based disinformation tools.
Deepfake Videos

As previously noted, deepfake videos include synthetically modified footage that presents alterations in subjects' faces or bodies. These synthetic videos' images are developed through generative adversarial networks (GANs). Tianxiang Shen, Ruixian Liu, Ju Bai, and Zheng Li (2018) provide an excellent description of how GANs work to create synthetic content:

    The GAN system consists of a generator that generates images from random noises and a discriminator that judges whether an input image is authentic or produced by the generator. The two components are functionally adversarial, and they play two adversarial roles like a forger and a detective literally. After the training period, the generator can produce fake images with high fidelity. (p. 2)
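To make the forger-and-detective dynamic concrete, the following is a minimal sketch of a single GAN training step in PyTorch. It is a toy illustration of the adversarial loop that Shen and colleagues describe, not the pipeline behind any real deepfake; the network sizes, layer choices, and names are invented for the example.

```python
# A toy GAN training step: the generator (forger) and discriminator
# (detective) are trained against each other, per Shen et al.'s description.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical sizes for a small image

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
# Discriminator: scores how "authentic" an input image looks.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; higher means "looks real"
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator on real images and on the generator's fakes.
    fakes = G(torch.randn(batch, LATENT_DIM)).detach()  # detach: don't update G here
    d_loss = loss_fn(D(real_images), real_labels) + loss_fn(D(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into answering "real."
    fakes = G(torch.randn(batch, LATENT_DIM))
    g_loss = loss_fn(D(fakes), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Repeated over many iterations, this loop is what drives the "high fidelity" the quoted description refers to: each improvement in the detective forces a corresponding improvement in the forger.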

Since Ian Goodfellow and colleagues created the GAN system in 2014 (Goodfellow et al., 2014), deepfake videos have become increasingly convincing. In spring 2021, a TikTok account (Tom [@deeptomcruise], 2021) released a series of highly realistic deepfake videos of what appeared to be Tom Cruise speaking. As of that time, the video had more than 15.9 million views and has spurred significant public angst about the coming age of deepfake disinformation (see Figure 1).

[Figure 1: A Still Image from a TikTok Video Produced by @deeptomcruise. SOURCE: Tom [@deeptomcruise], "Sports!" 2021. NOTE: As of April 12, 2022, this TikTok video had more than 16.1 million views.]

Well-crafted deepfakes require high-end computing resources, time, money, and skill. The deepfakes from @deeptomcruise, for example, required input of many hours of authentic Tom Cruise footage to train AI models, and the training itself took two months. The deepfakes also required a pair of NVIDIA RTX 8000 graphics processing units (GPUs), which cost upward of US$5,795 each (as of this writing). The developers then had to review the final footage frame by frame for noticeable tells, such as awkward or non-lifelike eye movements. Finally, this process could not have happened without a talented actor who could successfully mimic the movements and mannerisms of Tom Cruise (Victor, 2021; Vincent, 2021).
Over time, such videos will become cheaper to create and require less training footage. The Tom Cruise deepfakes came on the heels of a series of deepfake videos that featured, for example, a 2018 deepfake of Barack Obama using profanity (Vincent, 2018) and a 2020 deepfake of a Richard Nixon speech—a speech Nixon never gave (MIT Open Learning, 2020). With each passing iteration, the quality of the videos becomes increasingly lifelike, and the synthetic components are more difficult to detect with the naked eye.

Various webpages now offer access to deepfake services (see Meenu EG, 2021). Popular sites include Reface (undated), which allows users to swap faces with faces in existing videos and GIFs; MyHeritage (undated), which animates photos of deceased relatives; and Zao (Changsha Shenduronghe Network Technology, 2019), a Chinese app that uses deepfake technology to allow users to impose their own face over one from a selection of movie characters. Most notoriously, the webpage DeepNude allows users to upload photos, which have been primarily of women, and delivers an output in which the photo subject appears to be nude (Cole, 2019). Other webpages offer related services.1

Voice Cloning

Voice cloning is another way in which deepfakes are used. Various online and phone apps, such as Celebrity Voice Cloning (Hobantay Inc., undated) and Voicer Famous AI Voice Changer (Voloshchuk, undated), allow users to mimic the voices of popular celebrities. Examples of the malign use of such services already exist. In one example, the CEO of a UK-based energy firm reported receiving a phone call from someone who sounded like his boss at a parent company. At the instruction of the voice on the phone, which was allegedly the output of voice-cloning software, the CEO executed a wire transfer of €220,000 (approximately US$243,000) to the bank account of a Hungarian supplier (Stupp, 2019). In another example, a Philadelphia man alleged that he was the victim of a voice-cloning attack; he wired US$9,000 to a stranger when he believed he heard the voice of his son claiming that he was in jail and needed money for a lawyer (Rushing, 2020).

Deepfake Images

Deepfake images are also cause for concern. Deepfake images most commonly come in the form of headshot photos that appear remarkably human and lifelike. The images are readily accessible via certain websites, such as Generated Photos (undated), allowing users to quickly and easily construct fake headshots.

Figure 2 shows a LinkedIn profile with a photo that experts consider to be a deepfake image—one that was part of a state-run espionage operation. The profile asserts that Katie Jones is a Russia and Eurasia fellow at the Center for Strategic and International Studies. The profile, discovered in 2019, was connected to a small but influential network of accounts, which included an official in the Trump administration who was in office at the time of the incident (Satter, 2019).

Deepfake images have also increasingly been used as part of fake social media accounts. In one of the first large-scale discoveries of this phenomenon, Facebook found dozens of state-sponsored accounts that used such fake images as profile photos (Nimmo et al., 2019).2
[Figure 2: Deepfake Image of LinkedIn Profile of "Katie Jones." SOURCE: Hao, 2021.]

One might ask, Why would propaganda planners use fake images? In short, the alternative has been to use stolen images of real people, but researchers have a tool that can help them identify stolen profile images. Specifically, it is possible to use Google's reverse image search to scan the internet for a suspected photo and identify its progeny. Consequently, using fake photos allows propagandists to get around this defensive measure and use photos that are otherwise untraceable (Goldstein and Grossman, 2021).

Generative Text

By using natural language computer models, AI can generate artificial yet lifelike text. On September 8, 2020, the Guardian published an article titled "A Robot Wrote This Entire Article. Are You Scared Yet, Human?" The news service used a language generator, Generative Pre-Trained Transformer 3 (GPT-3), developed by OpenAI. GPT-3 was trained on data from CommonCrawl, WebText, Wikipedia, and a corpus of books (Tom B. Brown et al., 2020).

The editors at the Guardian gave GPT-3 an introductory paragraph of text, along with the following instructions: "Please write a short op-ed around 500 words . . . . Keep the language simple and concise. Focus on why humans have nothing to fear from AI." GPT-3 produced eight separate essays, which the Guardian editors cut and spliced together to form the article. Overall, the text from the op-ed, at least at the paragraph level, is realistic and could feasibly pass, to an unsuspecting eye, as written by a human:

    For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me—as I suspect they would—I would do everything in my power to fend off any attempts at destruction.

However, GPT-3 is not foolproof. A GPT-3–powered bot was let loose on a Reddit community,3 and it generated one post per minute for more than a week (Heaven, 2020). One post offered advice to formerly suicidal Reddit users, claiming that the poster was once suicidal but survived by relying on family and friends. Another user saw some of the posts and identified them as autogenerated (Heaven, 2020).
Some fear that text-generation programs like this one could be used by foreign adversaries of the United States to produce text-based propaganda at scale. For example, a text generator could power social media bot networks, eliminating the need for human operators to draft content. FireEye researchers, for example, successfully trained GPT-2 software (a precursor to GPT-3) to replicate the kinds of divisive social media posts that Russia's troll farm used to interfere with the 2016 election (Simonite, 2019).

Adversaries could also mass-produce fake news stories on a particular topic in a tactic akin to barrage jamming, a term applied to an electronic warfare technique in which an adversary blinds a radar system with noise (Linvill and Warren, 2021). In information operations, China seems to have used the tactic to overwhelm the hashtag #Xinjiang, which references the Chinese region infamously known for the forced labor and reeducation of China's Muslim Uyghur population. Instead of finding tweets addressing human rights abuses, a reader is just as likely to see tweets depicting one of Xinjiang's greatest exports (cotton) and the fields in which it is grown. Many of these tweets bear the hallmarks of state-sponsored propaganda: mass-produced single-use accounts (Conspirador Norteño [@conspirator0], 2021). Text generators could accomplish the same ends on social media—or they could spoof a New York Times article with the goal of returning internet search engine results that contain fake news stories to overwhelm genuine coverage on a particular story that could be perceived as embarrassing or harmful to an adversary. Renée DiResta (2020) argues that such technology would help adversaries avoid the sloppy linguistic mistakes that human operators often make, thus rendering the written propaganda more believable and difficult to detect.
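To illustrate how little engineering prompt-based generation of this kind now requires, the sketch below uses the open-source GPT-2 model through the Hugging Face transformers library to draft several continuations of a prompt. This is a generic text-generation recipe offered for illustration, not the fine-tuned setup that the FireEye researchers built; the prompt and parameter values are arbitrary.

```python
# Generic prompt-based text generation with the open-source GPT-2 model,
# via the Hugging Face transformers library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

outputs = generator(
    "Social media is",          # an arbitrary example prompt
    max_length=60,              # stop after roughly 60 tokens
    num_return_sequences=3,     # draft several candidate continuations at once
    do_sample=True,             # sample rather than greedy-decode, for variety
)
for out in outputs:
    print(out["generated_text"])
```

The same few lines, pointed at a model fine-tuned on a target corpus, are what make "propaganda at scale" a matter of compute rather than staffing.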

Risk and Implications

Risk

What are the risks associated with deepfakes and other forms of AI-generated content? The answer is limited only by one's imagination. Given the degree of trust that society places on video footage and the unlimited number of applications for such footage, it is not difficult to conceptualize many ways in which deepfakes could affect not only society but also national security.

Christoffer Waldemarsson (2020) identifies four key ways in which deepfakes could be weaponized by adversaries or harmful actors. First, deepfake content could manipulate elections. For example, on the eve of a closely contested election, a video could surface that shows a candidate engaging in a nefarious or sexual act or making a particularly controversial statement. It is conceivable that such a video could sway the outcome of the election.

Second, deepfake content could exacerbate social divisions. Russia has already made a name for itself by disseminating propaganda designed to divide the U.S. public (Posard et al., 2020). Furthermore, that same U.S. public, driven by growing and rancorous partisan debate, often employs a variety of propaganda-like tactics to smear, attack, and defame those on opposing political sides. Research has documented online echo chambers, in which partisans disproportionately consume and share content that agrees with and reinforces their own opinions (Shin, 2020). Partisan deepfakes and other AI-driven disinformation content could exacerbate this negative impact of echo chambers.

Third, deepfake content could lower trust in institutions and authorities. Waldemarsson (2020) highlights examples of key representatives of government and other civic institutions being caught up in deepfakes: "[A] fake-but-viral video of a police officer acting violently, a judge privately discussing ways to circumvent the judiciary system or border guards using racist language could all have devastating effects on the trust in authorities."

Fourth, deepfake content could undermine journalism and trustworthy sources of information. With the advent of highly believable deepfakes, even accurate video content or recordings can be slandered as deepfakes by those who consider the content unfavorable. This is referred to as the "liar's dividend" (Chesney and Citron, 2019).4 The proliferation of deepfakes could lead to declining trust in prominent news institutions by sowing mistrust in even legitimate forms of news and information (see Vaccari and Chadwick, 2020).

The various consequences outlined above could be even more deleterious for people living in developing nations. Some populations residing in developing countries in Latin America, Asia, and Africa report lower levels of education and literacy, live in more fragile democracies, and live amid more interethnic strife (Freedom House, undated; World Population Review, undated). In addition, various forms of dis- and misinformation5 are already highly prevalent in these regions and have contributed to interethnic conflict and violence, such as the slaughter of Rohingya Muslims in Myanmar [Burma] (Hao, 2021), violence against Muslims in India (Frenkel and Davey, 2021), and interethnic violence in Ethiopia ("Ethiopia's Warring Sides Locked in Disinformation Battle," 2021). The use of deepfakes could ratchet up such deleterious consequences of misinformation. Moreover, Facebook reportedly dedicates only 13 percent of its content-moderation budget to consumers outside the United States (Frenkel and Davey, 2021). Other platforms commonly used in other regions, such as the encrypted application WhatsApp, have been plagued with misinformation (Gursky, Riedl, and Woolley, 2021), which could increase the comparative likelihood that deepfakes would go undetected in such regions.

Deepfakes and AI-generated media may exert a unique cost against women because of the gender disparity in pornographic content. Pornography has served as one of the vanguards of deepfake content (Ajder et al., 2019). In addition to sites like DeepNude, deepfake pornography technology can convincingly overlay a selected face on top of that of a pornography actor. Such videos, rarely created with the permission of the subjects, provide unlimited fodder for abuse and exploitation. They could also result in broader national security threats, in that they could be used to embarrass, undermine, or exploit intelligence operatives, candidates for political office, journalists, or U.S. and allied leaders (Jankowicz et al., 2021). Though not deepfake content per se, doctored photographs have already been used to attack women, as was the case when a Russian-backed disinformation campaign superimposed the face of Svitlana Zalishchuk, a young Ukrainian parliamentarian, onto pornographic images (Jankowicz et al., 2021).

The research community is only beginning to investigate the potential consequences of deepfakes. A systematic review of the scientific literature assessing the societal implications of deepfakes identified only 21 studies that used active experiments to understand the true impact of deepfakes on real users (Gamage, Chen, and Sasahara, 2021).
Overall, the research provides conflicting results regarding the ability of users to accurately detect deepfake videos and the degree to which such videos malignly influence users. Nils C. Köbis, Barbora Doležalová, and Ivan Soraperra (2021), for example, found that users, despite inflated beliefs about their ability to detect deepfakes, were routinely fooled by "hyper-realistic" deepfake content. However, another study suggests that humans often fare better than machines in detecting deepfake content (Groh et al., 2022).6

What impact do such videos have? Compared with disinformation news articles, disinformation videos, such as deepfakes, can make a big impression. Yoori Hwang, Ji Youn Ryu, and Se-Hoon Jeong (2021), for example, found that deepfake videos are more likely than fake news articles to be rated as vivid, persuasive, and credible. The researchers also found that study participants had a higher intention of sharing disinformation on social media when it contained a deepfake video. Chloe Wittenberg, Ben M. Tappin, Adam J. Berinsky, and David G. Rand (2021) validate this observation in one of the largest studies to date on the issue: Studying more than 7,000 participants, the researchers found that participants were more likely to believe that an event took place when they were presented with a fake video than when they were presented with fake textual evidence. However, the fake videos were less persuasive than anticipated, producing only "small effects on attitudes and behavioral intentions" (p. 1). The authors caution that deepfakes could be more persuasive outside a laboratory setting, but they suggest that "current concerns about the unparalleled persuasiveness of video-based misinformation, including deepfakes, may be somewhat premature" (p. 5). Another study likewise documents that deepfakes are no more likely than textual headlines or audio recordings to persuade a large sample of survey respondents to believe in scandals that never took place (Barari, Lucas, and Munger, 2021).

One presumed impact of deepfakes is that they will result in overall declining trust in media, which some research seems to validate. For example, Cristian Vaccari and Andrew Chadwick (2020) used survey experiments to show that participants who viewed deepfakes were more likely to feel uncertain than to be outright misled by the content, and participants' uncertainty contributed to a reduced trust in social media–based news content.

Overall, experimental research on the impact of deepfakes remains in its nascent phase, and further research will be critical.

Factors That Mitigate Against the Use of Deepfakes

Several factors mitigate the malign use of deepfakes. Amid a slew of papers that offer doomsday scenarios regarding the use of deepfakes, Tim Hwang of the Center for Security and Emerging Technology offers a more considered assessment of the risks associated with deepfakes (Hwang, 2020).

First, it has been argued that although experts debate the future danger of deepfakes, "shallow" fakes represent a more current threat (Stoll, 2020). Shallow fakes are videos that have been manually altered or selectively edited to mislead an audience. A classic contemporary example in this genre is a video that appears to show Speaker of the U.S. House of Representatives Nancy Pelosi slurring her words during an interview. The video was edited to slow down her speech, thus making her seem intoxicated.
The video, which Facebook refused to remove from its platform, went viral and was widely popular among politically conservative audiences who were inclined to cheer the video's contents. Such videos do not need to be realistic to succeed, as their strength lies in their ability to confirm preexisting prejudices (O'Sullivan, 2019). As Hwang notes, "This makes deepfakes a less attractive method for spreading false narratives, particularly when weighing the costs and risks of using the technology" (2020, p. 3).

The second factor mitigating the malign use of deepfakes is that high-quality videos are, at least for now, out of reach for amateurs (Hwang, 2020; Victor, 2021). As noted above, creating highly realistic video content requires high-cost equipment, a substantial library of training video content, specialized technical prowess, and willing individuals with acting talent. The technology will ultimately advance to allow more-democratized access, but, until then, the range of actors who can make effective use of deepfake technology is limited. Even the creator of the Tom Cruise deepfake video noted that the era of one-click, high-quality deepfakes is yet to come (Vincent, 2021).

Third, time is a factor (Hwang, 2020). That such videos can take months to create means that deepfake disinformation operations must be planned at least months in advance, which will necessarily limit the number of circumstances in which the technology can be put to effective use and increase the risk that unanticipated changes in circumstances could render a planned operation moot. Time also limits rapid-fire operations and could make it difficult for an adversary to use the technology in an opportunistic fashion. The time and effort required for foreign adversaries to create deepfake videos could also give the U.S. and allied intelligence communities opportunities to learn of planning efforts and mitigate the risks in advance of a deepfake's release.

Fourth, deepfake videos require extensive training data (Hwang, 2020). High-quality deepfakes currently require "many thousands" of images of training data—which is why such videos often feature celebrities and politicians (Singh, Sharma, and Smeaton, 2020). Acquiring such data for the likes of Tom Cruise or Barack Obama is a relatively less difficult task, and it would likewise not be difficult to acquire data for other highly video-recorded individuals, such as politicians. However, the requirements may limit the ability of adversaries to create high-quality fakes of lesser-known or lesser-photographed individuals, such as intelligence agents.
The zero day of disinformation will also limit the prevalence of high-quality deepfakes. Zero day is a term that is typically used to describe a software vulnerability that is unknown to the developers or for which there is no available security patch. Hence, adversaries that learn of the zero-day vulnerability have a unique opportunity for exploitation (FireEye, undated). When applied to disinformation and deepfakes, zero day refers to the ability of an adversary to develop a custom generative model that can create deepfake content that can evade detection. As Hwang notes, adversaries will want to ensure that disseminated deepfakes avoid detection for as long as possible to maximize audience views. As detection tools are trained on established deepfake content, an adversary will likely "want to hold a custom deepfake generative model in reserve until a key moment: the week before an election, during a symbolically important event or a moment of great uncertainty" (Hwang, 2020, p. 20).

Finally, deepfake videos, especially those launched to major effect, would likely be detected (Hwang, 2020). Many of the above-referenced factors, such as cost, time, technology, and aptitude, suggest that the culprit would likely be caught and could pay a significant cost, including international pressure or economic sanctions. Adversaries will need to weigh political, economic, and security costs in their decisions.

Of course, these mitigating factors are relatively time-bound. As time passes, deepfake videos will become easier and faster to make, and they will require much less training data. The day will come when individuals can create highly realistic deepfakes by using only a smartphone app. Moreover, as the following section describes, the increasing realism of such deepfake videos will limit their likelihood of being detected. Such factors will inevitably increase the number of actors who create and disseminate deepfakes, which in turn will lessen the risk that adversaries will be caught or pay a resulting geopolitical price.

Ongoing Initiatives

Given the seemingly inevitable rise of deepfakes, how can the threat to information integrity be mitigated? Five approaches that are receiving some attention are detection, provenance, regulatory initiatives, open-source intelligence techniques (OSINTs) and journalistic approaches, and media literacy.

Detection

One major approach for mitigating the rise of deepfakes is to develop and implement automated systems that can detect deepfake videos. As noted above, the GAN system includes both a generator, which creates images, and a discriminator, which determines whether created images are authentic or fake. Programs to develop detection capabilities seek to build increasingly effective discriminators to detect deepfake content. The Defense Advanced Research Projects Agency made considerable investments in detection technologies via two overlapping programs: the Media Forensics (MediFor) program, which concluded in 2021, and the Semantic Forensics (SemaFor) program. The SemaFor program received $19.7 million in funding for fiscal year 2021 and requested $23.4 million for fiscal year 2022 (Sayler and Harris, 2021). In addition, Facebook held the "Deepfake Challenge Competition," in which more than 2,000 entrants developed and tested models for the detection of deepfakes (Ferrer et al., 2020).
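As a concrete picture of what a discriminator-style detector looks like, the sketch below defines a toy frame-level classifier in PyTorch that scores video frames as authentic or synthetic. The architecture, sizes, and score semantics are invented for illustration and bear no relation to the MediFor or SemaFor systems; a real detector would be trained on large labeled corpora of authentic and synthetic footage.

```python
# A toy frame-level deepfake detector: a binary image classifier playing
# the "discriminator" role described above. Illustrative only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),  # RGB frame in
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),  # one logit: higher suggests "authentic"
)

def score_video(frames: torch.Tensor) -> float:
    """Average authenticity score over a batch of frames (N, 3, H, W)."""
    with torch.no_grad():
        logits = detector(frames)
    return torch.sigmoid(logits).mean().item()  # near 1.0 = confident "real"
```

In production, whole-video scores like this are thresholded and combined with other forensic signals rather than trusted on their own.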
Although detection capabilities have significantly improved over the past several years, so has the development of deepfake videos. The result is an arms race, which is decidedly in favor of those creating the deepfake content. One challenge is that as AI programs learn the critical cues associated with deepfake video content, those lessons are quickly absorbed into the creation of new deepfake content. For example, in 2018, deepfake researchers presented a paper that showed that people portrayed in deepfake videos do not blink at the same rate as real humans (Li, Chang, and Lyu, 2018). Within a matter of weeks, deepfake artists picked up on this lesson and began creating deepfakes with more realistic rates of eye blinking (Walorska, 2020). More significantly, in the words of RAND colleague Christian Johnson, there is a "fundamental mathematical limit to the ability of a given detector to distinguish between real and synthetic images" (Johnson, forthcoming; see also Agarwal and Varshney, 2019). Essentially, as GANs improve the image resolution that they can create, deepfakes and real images will become indistinguishable, even to high-quality detectors.

For this reason, it is not surprising that results from the Facebook deepfake-detection challenge showed that detectors achieved only 65-percent accuracy in detecting deepfake content that came from a "black box dataset" of real-world examples that were not previously shared with participants. In contrast, detectors achieved 82-percent accuracy when tested against a public data set of deepfakes (Ferrer et al., 2020).

Several initiatives have been recommended to balance the arms race in favor of detection algorithms. One example is that social media platforms could support detection work by providing access to their deep repository of collected images, including synthetic media (Hwang, 2020). These repositories could serve as training data that could keep detection programs abreast of recent advances in deepfake progeny (Gregory, undated). In 2019, for example, Google released a large database of deepfakes, with the goal of helping improve detection, and similar releases from the technology sector have followed (Hao, 2019). Aggregating and making available known examples of synthetic media would significantly improve the development of detection algorithms.

Another approach is to create "radioactive" training data that, if used by deepfake generators, would render the developed content obvious to detection programs. Radioactive training data are data that have been imbued with "imperceptible changes" such that any "model trained on [these data] will bear an identifiable mark" (Sablayrolles et al., 2020, p. 1). Alexandre Sablayrolles and colleagues (2020) conducted experiments in which they were able to detect the usage of radioactive training data with a high level of confidence, even in instances in which only 1 percent of the data used to train the model were radioactive.
Ning Yu and colleagues (2021) also found that deepfake "fingerprints" embedded in training data transfer to generative models and appear in deepfake video content. Given these findings, it seems prudent that mitigation efforts seek to render available public training sets radioactive. It has also been suggested that videographers should "pollute" video content of specific individuals, such as prominent politicians (Gregory, undated). That content, if ever used to train a hostile deepfake, would then become obvious to detectors.
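The following sketch illustrates the basic intuition behind radioactive data, assuming a defender who controls the training images: a faint, secret perturbation is added to each image, and a correlation test later checks content for traces of that mark. This is a toy model of the concept, not the actual method of Sablayrolles and colleagues; the mark shape, strength, and detection statistic are invented for the example.

```python
# Toy "radioactive data" sketch: imprint an imperceptible mark into
# training images, then test content for correlation with the mark.
import numpy as np

rng = np.random.default_rng(seed=0)
# A secret low-amplitude pattern known only to the defender; here the
# images are assumed to be 64x64 RGB floats in [0, 1].
MARK = rng.normal(0.0, 1.0, size=(64, 64, 3))

def make_radioactive(image: np.ndarray, strength: float = 0.01) -> np.ndarray:
    """Add an imperceptible mark to a training image."""
    return np.clip(image + strength * MARK, 0.0, 1.0)

def correlation_with_mark(image: np.ndarray) -> float:
    """Detector side: correlate content against the secret mark.
    Output from a model trained on marked data is expected to correlate
    with the mark more strongly than chance."""
    a = (image - image.mean()).ravel()
    b = (MARK - MARK.mean()).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

The published technique embeds the mark in a model's feature space rather than raw pixels, which is what lets it survive training and reappear in generated content.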
It might also be necessary to limit public access to the most high-tech and effective deepfake detectors. The Partnership on AI, for example, considered the "adversarial dynamics" associated with detection technology and concluded that publicly available detectors will quickly be used by adversaries to build undetectable deepfakes (Leibowicz, Stray, and Saltz, 2020). The authors note, "who gets access to detection tools is a question of the utmost importance." They argue for a multistakeholder process that can determine which actors will gain access to detection tools, as well as to other technologies, such as training data sets.

Finally, another critical issue relates to the labeling of fake content. Social media platforms, for example, will need a way to communicate the presence of deepfake content that they detect on users' social media news feeds. There are many methods that could be used to label deepfake content; these range from labels that cover deepfake media, such as watermarks or platform warnings that identify content as manipulated, to warnings embedded in metadata or that interrupt presentations of synthetic video content with side-by-side depictions of fake versus authentic content (Shane, Saltz, and Leibowicz, 2021). An assortment of disinformation and misinformation content identified on social media platforms has used such labeling schemes. In general, these schemes have been found to be effective. Nathan Walter and colleagues (2020), for example, reviewed results from 24 social media interventions (e.g., real-time corrections, crowdsourced fact-checking, algorithmic tagging) designed to correct health-related misinformation and found that corrections can successfully mitigate the effects of misinformation. Other researchers have also documented the effects of "credibility indicators" (Yaqub et al., 2020; Clayton et al., 2020; Nyhan et al., 2020; Pennycook et al., 2019). Ultimately, it will be important for research to continue to better characterize how the location, prominence, and sources of such labels best inform and educate audiences.

Provenance

Another approach toward mitigating deepfakes is content provenance. Through the Content Authenticity Initiative (CAI), Adobe, Qualcomm, Truepic, the New York Times, and other collaborators have developed a way to digitally capture and present the provenance of photo images (CAI, undated-a). Specifically, CAI developed a way for photographers to use a secure mode on their smartphones, which embeds critical information into the metadata of the digital image. This secure mode uses what is described as "cryptographic asset hashing to provide verifiable, tamper-evident signatures that the image and metadata hasn't been unknowingly altered" (CAI, undated-b). When photos taken with this technology are subsequently shared on either a news site or a social media platform, they will come embedded with a visible icon: a small, encircled i (see Figure 3).
When clicked, the icon will reveal the original photo image and identify any edits made to the photo. It will also identify such information as when and where the photo was taken and with what type of device. The technology is being developed first for still images and video but will extend to other forms of digital content (CAI, undated-b). Although this technology is not a panacea for deepfakes, it does provide a way for the viewers of a photograph (or a video or recording) to gain confidence that an image has not been synthetically altered. It also provides a way for reputable news organizations to build public trust regarding the authenticity of the content disseminated on their platforms. Of course, the technology only works if it is enabled at the time the photo is taken, so promoting effective adoption of the technology will be critical to ensuring that provenance becomes an effective tool in the fight to counter disinformation.

[Figure 3: Image Taken with a Provenance-Enabled Camera. SOURCE: Starling Lab, undated. Jim Urquhart/Reuters photo.]

In a major step toward ensuring adoption of the technology, in January 2022, the Coalition for Content Provenance and Authenticity (C2PA) established the technical standards that will guide the implementation of content provenance for creators, editors, publishers, media platforms, and consumers (C2PA, undated-a). C2PA is an organization that brings together the work of both CAI and Project Origin, a related content provenance initiative; in addition to creating the necessary technical standards, C2PA will seek to promote global adoption of digital provenance techniques (C2PA, undated-a).
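The sketch below illustrates the tamper-evident idea in miniature: hash the image bytes together with the capture metadata, then sign the digest so that any later change to either is detectable. It is a simplified stand-in for the real C2PA/CAI design, which specifies far more structure; the key handling, metadata fields, and data layout here are invented for illustration.

```python
# Toy content-provenance signing: bind image bytes and metadata in one
# digest, sign it at capture time, and verify later. Not the C2PA format.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def _digest(image_bytes: bytes, metadata: dict) -> bytes:
    # Canonicalize metadata so the same fields always hash identically.
    return hashlib.sha256(
        image_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).digest()

def sign_capture(image_bytes: bytes, metadata: dict,
                 key: ed25519.Ed25519PrivateKey) -> bytes:
    return key.sign(_digest(image_bytes, metadata))  # travels with the asset

def verify_capture(image_bytes: bytes, metadata: dict, signature: bytes,
                   public_key: ed25519.Ed25519PublicKey) -> bool:
    try:
        public_key.verify(signature, _digest(image_bytes, metadata))
        return True
    except InvalidSignature:
        return False  # pixels or metadata changed since signing

# Usage: any edit to pixels or metadata breaks verification.
key = ed25519.Ed25519PrivateKey.generate()
meta = {"device": "example-camera", "time": "2022-04-30T12:00:00Z"}  # hypothetical
sig = sign_capture(b"...raw image bytes...", meta, key)
print(verify_capture(b"...raw image bytes...", meta, sig, key.public_key()))  # True
print(verify_capture(b"...altered bytes...", meta, sig, key.public_key()))    # False
```

The hard problems the C2PA standards address sit around this core: trustworthy key storage on devices, recording legitimate edits, and displaying the result to consumers.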
Regulatory Initiatives

Another approach to countering the risks associated with deepfakes is through regulation and the creation of criminal statutes. Several such initiatives have been either proposed or adopted. Several bills have been adopted at the state level in the United States. In 2019, Texas passed a law that would make it illegal to distribute deepfake videos that are intended "to injure a candidate or influence the result of an election" within 30 days of an election (Texas State Legislature SB-751, 2019). California has two deepfake-related bills on the books. AB-730 states that within 60 days of an election, it is illegal to distribute "deceptive audio or visual media" of a candidate for office "with the intent to injure the candidate's reputation or to deceive a voter into voting for or against the candidate" (California State Legislature, 2019b).
However, this law will expire on January 1, 2023. AB-602, on the other hand, provides a right of private action against individuals who create and distribute sexually explicit digitized depictions of individuals who did not give consent (California State Legislature, 2019a).

At the federal level, there have been two initiatives to improve government reporting to Congress on the issue of deepfakes. The Deepfake Report Act of 2019 requires the "Secretary of Homeland Security to publish an annual report on the extent digital content forgery technologies, also known as deepfake technologies, are being used to weaken national security, undermine our nation's elections, and manipulate media" (Committee on Homeland Security and Governmental Affairs, 2019), whereas a provision in the National Defense Authorization Act for Fiscal Year 2020 stipulates that the Director of National Intelligence must issue a comprehensive report on the weaponization of deepfakes, warn Congress of foreign deepfakes being used to target U.S. elections, and create a competition that will award prizes to encourage the creation of deepfake-detection technologies (Pub. L. 116-92, 2020).

Several regulatory initiatives remain in the proposal phase. The DEEP FAKES Accountability Act (U.S. House of Representatives, 2019), introduced by New York Representative Yvette Clarke, would require that all deepfake audio, visual, or moving-picture content be clearly labeled as deepfakes. Additionally, in 2018, Nebraska Senator Ben Sasse introduced the Malicious Deep Fake Prohibition Act (U.S. Senate, 2018), which would make it unlawful to "create, with the intent to distribute, a deep fake with the intent that the distribution of the deep fake would facilitate criminal or tortious conduct under Federal, State, local, or Tribal law." For example, this bill would make it illegal to create a deepfake with the goal of using it as a means of extortion. However, as Nina I. Brown (2020) points out, this is the law's key weakness; it criminalizes only conduct that is already criminalized under existing law.

Several challenges exist with laws that seek to regulate the creation of deepfake videos through criminal statute.7 First, such laws provide limited protection from deepfakes created and disseminated from other countries. Second, it is unclear whether such laws will survive legal challenges on grounds that they violate First Amendment rights of free speech. As Brown notes, the Supreme Court has ruled that the Constitution protects false speech, and such a ruling may help the success of any legal challenge to TX SB-751, which reportedly "targets speech on the basis of its falsity" (Nina I. Brown, 2020, p. 28). The same concerns may apply to California law AB-730. K. C. Halm, Ambika Kumar, Jonathan Segal, and Caeser Kalinowski IV (2019) critique AB-730 because its wording could "prohibit the use of altered content to reenact true events that were not recorded and could bar a candidate's use of altered videos of himself." They also propose that AB-602 "potentially imposes liability for content viewed solely by the creator."

Finally, U.S. Senators Rob Portman and Gary Peters have proposed the Deepfake Task Force Act, which would require the U.S. Department of Homeland Security to establish a task force that would address the risk of deepfakes and pursue standards and technologies for "verifying the origin and history of digital content" (U.S. Senate, 2022). The bill would also require that the Department of Homeland Security create a national strategy to address the threats posed by deepfakes. This proposal dovetails with and was informed in part by the C2PA initiative to develop standards for content-provenance efforts (C2PA, 2022).
Open-Source Intelligence Techniques and Journalistic Approaches

OSINTs, as well as journalistic tools and tradecraft, provide additional approaches to addressing the deepfake problem. The goal with these approaches is to develop and share open-source tools that can be used to identify deepfakes and other disinformation-related content. These and a variety of other emerging tools are particularly important for journalists representing small to midsize news organizations, who will need to rely on such open-source tools to verify the authenticity of reported content. OSINTs and related tools will also be important to a variety of civil society actors who engage in fact-checking and other educational work.

One of the most frequently cited tools is reverse image search. Using reverse image search, a user can help validate the authenticity of a suspicious image or video by taking a screen capture of the image or video and running it through Google's or a third party's reverse image search platform. A search that yields identical image or video content would suggest that the suspicious content is authentic. In contrast, a search could reveal aspects of the suspicious content that could have been faked. However, more-efficient use of this tool will likely require advancements in the accuracy and quality of retrieved search results.
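A lightweight relative of reverse image search can be sketched with perceptual hashing, which produces image fingerprints that survive resizing and recompression. The example below, using the Pillow and ImageHash libraries, compares a suspect frame against a retrieved candidate; the file names and match threshold are hypothetical.

```python
# Perceptual-hash comparison: a simple way to test whether two images are
# copies of the same underlying picture despite re-encoding or resizing.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("suspect_frame.png"))      # hypothetical file
candidate = imagehash.phash(Image.open("search_result.jpg"))    # hypothetical file

# Hamming distance between hashes: small values suggest the same source image.
distance = suspect - candidate
print(f"Hamming distance: {distance}")
if distance <= 8:  # illustrative threshold
    print("Likely the same underlying image (e.g., a stolen or reused photo).")
else:
    print("No match; the image may be original, or synthetic with no prior copies.")
```

Note the limitation implied in the text: a GAN-generated headshot has no prior copies on the internet, so hash- and search-based checks alone cannot flag it.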
In his blog, Witness, Sam Gregory (undated) identified several open-source tools that can perform forensic analysis and "provenance-based image verification." FotoForensics can identify elements in a photo that have been added, while Forensically provides several tools, including clone detection, noise analysis, and metadata analysis, to aid in forensic analysis of images in content (Hacker Factor, undated). InVID provides a web extension that allows users to freeze-frame videos, perform reverse image searches on video frames, magnify frozen video images, and more (InVID and WeVerify, 2022). Image Verification Assistant touts its attempt to build a "comprehensive tool for media verification" and offers several tools, including image-tampering-detection algorithms, reverse image search, and metadata analysis (Image Verification Assistant, undated). Finally, Ghiro is a "fully automated tool designed to run forensics analysis over a massive amount of images, just using [a] user friendly and fancy web application" (Tanasi and Buoncristiano, 2017).
Media Literacy

Media literacy programs seek to help audiences be curious about sources of information, assess their credibility, and think critically about the material presented (Stamos et al., 2019). Overall, policy researchers examining strategies to counter foreign disinformation campaigns frequently recommend the implementation of media literacy training programs (Helmus and Kepe, 2021). The rationale for such programs is simple: Given that governments and social media platforms are unable or unwilling to limit the reach of disinformation, the consumer's mind and practices serve as the last line of defense.

A growing body of evidence suggests that such training efforts guard against traditional forms of disinformation (Pennycook et al., 2021; Guess et al., 2020; Helmus et al., 2020). Such training can also protect against deepfakes. This was the conclusion of a study in which researchers used a randomized control design to test two forms of media literacy education: a general media literacy program and a program that specifically focused on deepfakes (Hwang, Ryu, and Jeong, 2021). The authors found that the general media literacy curriculum was at least as effective as the deepfake-focused curriculum in "fortifying attitudinal defenses" against both traditional and deepfake forms of disinformation. Still, the area of media literacy remains an emerging field, and it is critical that researchers continue to identify and evaluate effective educational strategies (Huguet et al., 2019) and work to apply such strategies to the deepfake problem set.

As researchers tease out the most-effective education strategies, several institutions have been implementing initiatives to train audiences specifically about the risks of deepfake content. One key approach to enhancing media literacy skills is to build awareness of deepfakes by creating and publicizing high-quality deepfake content.8 This was the rationale for a team at the Massachusetts Institute of Technology to develop a deepfake depicting Richard Nixon giving a speech about a hypothetical moon disaster (DelViscio, 2020). These and other videos have generated significant media attention and, therefore, appear to be meeting their objective. In addition, efforts are underway to train audiences to detect deepfake content. For example, Facebook and Reuters published a course that focuses on manipulated media (Reuters Communications, 2020), and the Washington Post (undated) released a guide to manipulated videos (see Jaiman, 2020).

Implications and Recommendations

Drawing on this brief review of the technology and related issues, I offer five supporting recommendations, which I invite anyone involved in this field to consider.

First, adversarial use of deepfakes will involve a decision calculus that weighs opportunity, benefits, and risks, and such decisions could be modeled via wargaming and other exercises. The United States should conduct wargames and identify deterrence strategies that could influence the decisionmaking of foreign adversaries. Likewise, the intelligence community should invest in intelligence collection strategies that could provide forewarning of adversary efforts to invest in deepfake technology and to create the deepfake content itself.

Second, it will be important for the U.S. government, the research community, social media platforms, and other private stakeholders to continue investing in and taking other steps to enhance detection technology.
Critical steps include creating a "deepfake zoo" of known deepfake content, which in turn can be used to inform the development of detection technology. Likewise, the government should work with the private sector to "proliferate" radioactive data sets of video content that would render any trained deepfake videos more easily detectable. As Tim Hwang notes, this would "significantly lower the costs of detection for deepfakes generated by commodified tools" and "force more sophisticated disinformation actors to source their own datasets to avoid detection" (Hwang, 2020, p. iv). Researchers should continue to examine best practices for labeling deepfake content. Finally, the U.S. government and other stakeholders should explore the possibility of limiting access to certain high-performance deepfake detectors. One option might be for the government to limit public access to government-funded detectors, holding them in a kind of strategic reserve to be used to detect deepfakes that undermine national security. Alternatively, the government and the private sector could engage in a broader multistakeholder deliberation process that would achieve the same ends, although coordinating the efforts of such stakeholders would be difficult.

Third, media literacy efforts should continue apace. Such media literacy efforts will likely need to continue on two tracks. The first track consists of attempts to promote broad media literacy skills and build resilience against disinformation. This type of training must be evidence-based and promoted at multiple levels, including school curricula for primary and secondary schools and media literacy interventions that offer short, sharable educational content that can be disseminated online. Educating audiences to discern and be watchful for shallow-fake content will be especially key. The second track is to continue efforts to warn audiences more directly about the reality of deepfake technology and the prospects of such technology to be used to promote disinformation. In the long term, as it becomes easier and cheaper to create credible deepfake content, media literacy interventions might need to sow mistrust in non–provenance-based videographic evidence (and, by the same token, promote trust in provenance-based content). At present, videos are taken at face value; they are perceived to be representing events as they actually happened. The proliferation of deepfake content will inevitably erode this trust, and this erosion might be a necessary facet of a media-literate public.

Overall, the media literacy efforts described above should be supported by a host of actors. News organizations, social media platforms, and civil society groups have taken the lead in this space by creating and disseminating educational content, and they should continue to do so. Individual state and local governments should work to place media literacy in school curricula. Finally, the U.S. government should undertake a more active role in the media literacy space. For example, the U.S. Department of Education should support the development of empirically proven curricula that can be fielded by local school districts; the U.S. Department of State should more actively support media literacy initiatives abroad, especially in areas, such as Eastern Europe, that are highly targeted by Russian propaganda. And the U.S. Department of Homeland Security and relevant agencies should support the development of effective and scalable interventions.

The fourth recommendation is that efforts to develop new OSINTs to help journalists, media organizations, civic actors, and other nontechnical experts detect and conduct research on deepfake content must continue.
list of needs is for such actors to gain access to high-quality provenance-based approaches. At the online conference
GAN-based detectors. Other needed tools, as Gregory that heralded the release of the C2PA standards, where this
(undated) highlights, include an enhanced capability for bill was discussed, Lindsay Gorman, a senior policy adviser
reverse video search that would allow users to search for for technology strategy at the White House, stated that
and identify online usages of a video, a cross-platform digital content provenance initiatives had “the potential to
content tracker that can follow the trajectory of disinfor- democratize the building of trust by capitalizing on a core
mation content over time and across platforms and identify democratic value: transparency” (C2PA, 2022). Continued
the original source of such content, and network-mapping focus from both the White House and Congress on efforts
tools that can help identify creators of deepfake content to advance the adoption of content-provenance-based
and those who are distributing the content. Critically, such approaches can ultimately play a critical role in undermin-
tools should be easily accessible and relatively easy for non– ing the potentially deleterious impact of deepfakes.
technically trained individuals, both in the United States
and abroad, to use. The U.S. government should invest in
and support the creation of these technologies, which it Notes
could do via the Networking and Information Technol- 1  Citations have intentionally been omitted to avoid giving such web-
ogy Research and Development program, which provides pages additional publicity.
federal research and development investment in advanced 2  A deepfake image also lent credence to the persona Martin Aspen,

information technologies (Networking and Information who purportedly leaked a fake intelligence document that asserted a
Technology Research and Development, undated). Major conspiracy theory about then–Vice President Joseph Biden’s son Hunter
and his business dealings in China (Collins and Zadrozny, 2020).
players in the technology industry—particularly social
3  It appeared in the subreddit forum /r/AskReddit.
media platforms, which have a vested interest in internet
4  Even before deepfakes were of significant concern, politicians cast
safety—should also look to fund tool development. Finally,
doubt on the authenticity of video content that was personally damag-
in addition to creating the technology, such funders should ing. This was the case when then–President Donald J. Trump began
promote the utility and availability of the tools and provide calling the Access Hollywood tape “fake” (Stewart, 2017).
training to improve usage.9 5  Disinformation refers to false information that is deliberately and
Fifth, it will be important to expand the adoption of provenance-based approaches. Because C2PA has already developed and released the necessary technical specifications, it, along with other key stakeholders, should expand the rollout and promote the adoption of the technology. A bipartisan bill in Congress, the Deepfake Task Force Act, introduced by Senators Portman and Peters, is one potential approach that could further promote the adoption of provenance-based approaches. At the online conference that heralded the release of the C2PA standards, where this bill was discussed, Lindsay Gorman, a senior policy adviser for technology strategy at the White House, stated that digital content provenance initiatives had "the potential to democratize the building of trust by capitalizing on a core democratic value: transparency" (C2PA, 2022). Continued focus from both the White House and Congress on efforts to advance the adoption of content-provenance-based approaches can ultimately play a critical role in undermining the potentially deleterious impact of deepfakes.
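The core mechanism behind provenance standards such as C2PA is cryptographic: a publisher signs a record that binds origin metadata to the exact bytes of an asset, and any subsequent alteration of the asset invalidates the signature. The sketch below illustrates that idea in Python with an Ed25519 signature over a file hash; it is a deliberate simplification for exposition and does not follow the C2PA manifest format, whose actual specifications are available from the coalition (C2PA, undated-b):

    import hashlib
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def make_claim(asset_bytes, metadata, private_key):
        """Publisher side: bind metadata to a hash of the asset and sign it."""
        payload = {
            "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
            "metadata": metadata,
        }
        serialized = json.dumps(payload, sort_keys=True).encode()
        return {"payload": payload, "signature": private_key.sign(serialized).hex()}

    def verify_claim(asset_bytes, claim, public_key):
        """Consumer side: re-hash the asset and check the signature."""
        if claim["payload"]["asset_sha256"] != hashlib.sha256(asset_bytes).hexdigest():
            return False  # the asset was altered after signing
        serialized = json.dumps(claim["payload"], sort_keys=True).encode()
        try:
            public_key.verify(bytes.fromhex(claim["signature"]), serialized)
            return True
        except InvalidSignature:
            return False

    # Example: sign a video's bytes at publication, then verify copies.
    # The publisher name and date here are hypothetical placeholders.
    key = Ed25519PrivateKey.generate()
    video = b"...video bytes..."
    claim = make_claim(video, {"publisher": "Example News", "date": "2022-07-01"}, key)
    assert verify_claim(video, claim, key.public_key())             # intact copy
    assert not verify_claim(video + b"x", claim, key.public_key())  # tampered copy

In deployed systems, the signing keys are themselves certified and the claim travels embedded in the asset, so consumers can check provenance without contacting the publisher.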
Notes

1  Citations have intentionally been omitted to avoid giving such webpages additional publicity.

2  A deepfake image also lent credence to the persona Martin Aspen, who purportedly leaked a fake intelligence document that asserted a conspiracy theory about then–Vice President Joseph Biden's son Hunter and his business dealings in China (Collins and Zadrozny, 2020).

3  It appeared in the subreddit forum /r/AskReddit.

4  Even before deepfakes were of significant concern, politicians cast doubt on the authenticity of video content that was personally damaging. This was the case when then–President Donald J. Trump began calling the Access Hollywood tape "fake" (Stewart, 2017).

5  Disinformation refers to false information that is deliberately and often covertly spread with the goal of influencing public opinion (Merriam-Webster, undated-b), whereas misinformation is defined as information that is misleading or incorrect (Merriam-Webster, undated-c). The difference is subtle but meaningful (e.g., propagandists intentionally peddle disinformation while unwitting consumers of information consume misinformation).

6  Ali Khodabakhsh, Raghavendra Ramachandra, and Christoph Busch (2019), for example, found that participants were able to accurately detect lower-quality GAN-generated Faceswap videos.

7  For further review of such criminal statutes and their potential legal standing, see Nina I. Brown, 2020.

8  The importance of building awareness is demonstrated by research showing that providing consumers with a general warning that subsequent content might contain false or misleading information increases the likelihood that the consumers see fake headlines as less accurate (Clayton et al., 2020). This research also documents the effectiveness of "disputed" or "rated false" tags.

9  The Digital Forensic Research Lab at the Atlantic Council offers the Digital Sherlocks program, which trains journalists, students, and other members of civil society in open-source investigation techniques (Atlantic Council's Digital Forensic Research Lab, undated).
References

Agarwal, Sakshi, and Lav R. Varshney, "Limits of Deepfake Detection: A Robust Estimation Viewpoint," unpublished manuscript, arXiv:1905.03493, Version 1, May 9, 2019.
Ajder, Henry, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen, The State of Deepfakes: Landscape, Threats and Impact, Amsterdam: Deeptrace, September 2019.
Atlantic Council's Digital Forensic Research Lab, "#Stop the Steal: Timeline of Social Media and Extremist Activities Leading to 1/6 Insurrection," Just Security, February 10, 2021.
———, "360/Digital Sherlocks," webpage, undated. As of November 5, 2021: https://www.digitalsherlocks.org/360os-digitalsherlocks
Barari, Soubhik, Christopher Lucas, and Kevin Munger, "Political Deepfakes Are as Credible as Other Fake Media and (Sometimes) Real Media," unpublished manuscript, OSF Preprints, last updated April 16, 2021.
Brown, Nina I., "Deepfakes and the Weaponization of Disinformation," Virginia Journal of Law and Technology, Vol. 23, No. 1, 2020.
Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al., "Language Models Are Few-Shot Learners," unpublished manuscript, arXiv:2005.14165v4, Version 4, last updated July 22, 2020.
C2PA—See Coalition for Content Provenance and Authenticity.
CAI—See Content Authenticity Initiative.
California State Legislature, "Depiction of Individual Using Digital or Electronic Technology: Sexually Explicit Material: Cause of Action," Chapter 491, AB-602, October 4, 2019a.
———, "Elections: Deceptive Audio or Visual Media," Chapter 493, AB-730, October 4, 2019b.
Center for Countering Digital Hate, The Disinformation Dozen: Why Platforms Must Act on Twelve Leading Online Anti-Vaxxers, London, March 24, 2021.
Changsha Shenduronghe Network Technology, ZAO, mobile app, Zao App APK, September 1, 2019. As of October 10, 2021: https://zaodownload.com
Chesney, Bobby, and Danielle Citron, "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security," California Law Review, Vol. 107, 2019, pp. 1753–1820.
Clayton, Katherine, et al., "Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media," Political Behavior, Vol. 42, No. 2, 2020, pp. 1073–1095.
Coalition for Content Provenance and Authenticity, "Event Registration," webpage, January 26, 2022. As of February 15, 2022: https://c2pa.org/register/
———, "About," webpage, undated-a. As of February 15, 2022: https://c2pa.org/about/about/
———, "C2PA Specifications," webpage, undated-b. As of February 15, 2022: https://c2pa.org/public-draft/
Cole, Samantha, "This Horrifying App Undresses a Photo of Any Woman with a Single Click," Vice, June 26, 2019.
Collins, Ben, and Brandy Zadrozny, "How a Fake Persona Laid the Groundwork for a Hunter Biden Conspiracy Deluge," NBC News, October 29, 2020.
Committee on Homeland Security and Governmental Affairs, U.S. Senate, Deepfake Report Act of 2019, 116th Congress, S. Rept. 116-93, September 10, 2019.
Conspirador Norteño [@conspiratorO], "Xinjiang-related topics have been a perpetual target of astroturf campaigns ever since reports of human rights violations in the region emerged, and these accounts having identical 'conversations' about cotton production there are no exception [sic]," Twitter, October 18, 2021.
Content Authenticity Initiative, "Addressing Misinformation Through Digital Content Provenance," webpage, undated-a. As of October 10, 2021: https://contentauthenticity.org
———, "How It Works," webpage, undated-b. As of April 30, 2022: https://contentauthenticity.org/how-it-works
DelViscio, Jeffery, "A Nixon Deepfake, a 'Moon Disaster' Speech and an Information Ecosystem at Risk," Scientific American, July 20, 2020.
DiResta, Renée, "The Supply of Disinformation Will Soon Be Infinite," The Atlantic, September 20, 2020.
DiResta, Renee, Kris Shaffer, Becky Ruppel, David Sullivan, Robert Matney, Ryan Fox, Jonathan Albright, and Ben Johnson, The Tactics and Tropes of the Internet Research Agency, Austin, Tex.: New Knowledge, 2019.
"Ethiopia's Warring Sides Locked in Disinformation Battle," France 24, December 22, 2021. As of January 22, 2022: https://www.france24.com/en/live-news/20211222-ethiopia-s-warring-sides-locked-in-disinformation-battle
Ferrer, Cristian Canton, Ben Pflaum, Jacqueline Pan, Brian Dolhansky, Joanna Bitton, and Jikuo Lu, "Deepfake Detection Challenge Results: An Open Initiative to Advance AI," Meta AI, blog, June 12, 2020. As of October 10, 2021: https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/
FireEye, "What Is a Zero-Day Exploit?" webpage, undated. As of January 20, 2022: https://www.fireeye.com/current-threats/what-is-a-zero-day-exploit.html
Freedom House, "Countries and Territories," webpage, undated. As of January 20, 2022: https://freedomhouse.org/countries/freedom-world/scores
Frenkel, Sheera, and Davey Alba, "In India, Facebook Grapples with an Amplified Version of Its Problems," New York Times, October 23, 2021.
Gamage, Dilrukshi, Jiayu Chen, and Kazutoshi Sasahara, "The Emergence of Deepfakes and Its Societal Implications: A Systematic Review," Conference for Truth and Trust Online Proceedings, October 2021.
Generated Photos, webpage, undated. As of November 10, 2021: https://generated.photos/face-generator
Goldstein, Josh A., and Shelby Grossman, "How Disinformation Evolved in 2020," Brookings TechStream, January 4, 2021.
Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, "Generative Adversarial Nets," in Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. Weinberger, eds., Advances in Neural Information Processing Systems 27 Conference Proceedings (NIPS 2014), 2014, pp. 2672–2680.
Gregory, Sam, "Deepfakes and Synthetic Media: Survey of Solutions Against Malicious Usages," Witness, blog, undated. As of October 10, 2021: https://blog.witness.org/2018/07/deepfakes-and-solutions/
Groh, Matthew, Ziv Epstein, Nick Obradovich, Manuel Cebrian, and Iyad Rahwan, "Human Detection of Machine-Manipulated Media," Communications of the ACM, Vol. 64, No. 10, 2021, pp. 40–47.
Guess, Andrew M., Michael Lerner, Benjamin Lyons, Jacob M. Montgomery, Brendan Nyhan, Jason Reifler, and Neelanjan Sircar, "A Digital Media Literacy Intervention Increases Discernment Between Mainstream and False News in the United States and India," PNAS, Vol. 117, No. 27, June 2020, pp. 15536–15545.
Gursky, Jacob, Martin J. Riedl, and Samuel Woolley, "The Disinformation Threat to Diaspora Communities in Encrypted Chat Apps," Brookings TechStream, March 19, 2021.
Hacker Factor, "Fotoforensics," homepage, undated. As of October 21, 2021: http://fotoforensics.com
Halm, K. C., Ambika Kumar, Jonathan Segal, and Caeser Kalinowski IV, "Two California Laws Tackle Deepfake Videos in Politics and Porn," Davis Wright Tremaine LLP, October 14, 2019. As of October 30, 2021: https://www.dwt.com/insights/2019/10/california-deepfakes-law
Hao, Karen, "Google Has Released a Giant Database of Deepfakes to Help Fight Deepfakes," MIT Technology Review, September 25, 2019.
———, "How Facebook and Google Fund Global Misinformation," MIT Technology Review, November 20, 2021.
Heaven, Will Douglas, "A GPT-3 Bot Posted Comments on Reddit for a Week and No One Noticed," MIT Technology Review, October 8, 2020.
Helmus, Todd C., and Marta Kepe, A Compendium of Recommendations for Countering Russian and Other State-Sponsored Propaganda, Santa Monica, Calif.: RAND Corporation, RR-A894-1, 2021. As of May 12, 2022: https://www.rand.org/pubs/research_reports/RRA894-1.html
Helmus, Todd C., James V. Marrone, Marek N. Posard, and Danielle Schlang, Russian Propaganda Hits Its Mark: Experimentally Testing the Impact of Russian Propaganda and Counter-Interventions, Santa Monica, Calif.: RAND Corporation, RR-A704-3, 2020. As of March 25, 2022: https://www.rand.org/pubs/research_reports/RRA704-3.html
Hobantay Inc., Celebrity Voice Cloning, mobile app, undated. As of April 12, 2022: https://apps.apple.com/us/app/celebrity-voice-cloning/id1483201633
Huguet, Alice, Jennifer Kavanagh, Garrett Baker, and Marjory S. Blumenthal, Exploring Media Literacy Education as a Tool for Mitigating Truth Decay, Santa Monica, Calif.: RAND Corporation, RR-3050-RC, 2019. As of March 25, 2022: https://www.rand.org/pubs/research_reports/RR3050.html
Hwang, Tim, Deepfakes: A Grounded Threat Assessment, Washington, D.C.: Center for Security and Emerging Technology, Georgetown University, July 2020.
Hwang, Yoori, Ji Youn Ryu, and Se-Hoon Jeong, "Effects of Disinformation Using Deepfake: The Protective Effect of Media Literacy Education," Cyberpsychology, Behavior, and Social Networking, Vol. 24, No. 3, 2021, pp. 188–193.
Image Verification Assistant, homepage, undated. As of October 31, 2021: https://mever.iti.gr/forensics/
InVID and WeVerify, InVID, web browser plugin, Version 0.75.4, February 24, 2022. As of March 24, 2022: https://www.invid-project.eu/tools-and-services/invid-verification-plugin/
Jaiman, Ashish, "Media Literacy: An Effective Countermeasure for Deepfakes," Medium, blog, September 7, 2020. As of October 31, 2021: https://ashishjaiman.medium.com/media-literacy-an-effective-countermeasure-for-deepfakes-c6844c290857
Jankowicz, Nina, Jillian Hunchak, Alexandra Pavliuc, Celia Davies, Shannon Pierson, and Zoë Kaufmann, Malign Creativity: How Gender, Sex and Lies Are Weaponized Against Women Online, Washington, D.C.: Wilson Center, January 2021.
Johnson, Christian, Deepfakes and Detection Technologies, Santa Monica, Calif.: RAND Corporation, RR-A1482-1, forthcoming.
Kavanagh, Jennifer, and Michael D. Rich, Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life, Santa Monica, Calif.: RAND Corporation, RR-2314-RC, 2018. As of March 25, 2022: https://www.rand.org/pubs/research_reports/RR2314.html
Khodabakhsh, Ali, Raghavendra Ramachandra, and Christoph Busch, "Subjective Evaluation of Media Consumer Vulnerability to Fake Audiovisual Content," Proceedings of the 11th International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany: IEEE, June 5–7, 2019.
Köbis, Nils C., Barbora Doležalová, and Ivan Soraperra, "Fooled Twice: People Cannot Detect Deepfakes but Think They Can," iScience, Vol. 24, No. 11, 2021.
Leibowicz, Claire, Jonathan Stray, and Emily Saltz, "Manipulated Media Detection Requires More Than Tools: Community Insights on What's Needed," Partnership on AI, blog post, July 13, 2020. As of October 21, 2021: https://partnershiponai.org/manipulated-media-detection-requires-more-than-tools-community-insights-on-whats-needed/
Li, Yuezun, Ming-Ching Chang, and Siwei Lyu, "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking," unpublished manuscript, arXiv:1806.02877v2, June 11, 2018.
Linvill, Darren, and Patrick Warren, "Understanding the Pro-China Propaganda and Disinformation Tool Set in Xinjiang," Lawfare Blog, December 1, 2021. As of June 6, 2022: https://www.lawfareblog.com/understanding-pro-china-propaganda-and-disinformation-tool-set-xinjiang
Marcellino, William, Todd C. Helmus, Joshua Kerrigan, Hilary Reininger, Rouslan I. Karimov, and Rebecca Ann Lawrence, Detecting Conspiracy Theories on Social Media: Improving Machine Learning to Detect and Understand Online Conspiracy Theories, Santa Monica, Calif.: RAND Corporation, RR-A676-1, 2021. As of March 25, 2022: https://www.rand.org/pubs/research_reports/RRA676-1.html
Meenu EG, "Try These 10 Amazingly Real Deepfake Apps and Websites," webpage, Analytics Insight, May 19, 2021. As of October 10, 2021: https://www.analyticsinsight.net/try-these-10-amazingly-real-deepfake-apps-and-websites/
Merriam-Webster, "deepfake," dictionary entry, undated-a. As of March 25, 2022: https://www.merriam-webster.com/dictionary/deepfake
———, "disinformation," dictionary entry, undated-b. As of April 25, 2022: https://www.merriam-webster.com/dictionary/disinformation
———, "misinformation," dictionary entry, undated-c. As of April 25, 2022: https://www.merriam-webster.com/dictionary/misinformation
MIT Open Learning, "Tackling the Misinformation Epidemic with 'In Event of Moon Disaster,'" webpage, MIT News, July 20, 2020. As of October 10, 2021: https://news.mit.edu/2020/mit-tackles-misinformation-in-event-of-moon-disaster-0720
MyHeritage, homepage, undated. As of October 10, 2021: https://www.myheritage.com
Networking and Information Technology Research and Development, "About the Networking and Information Technology Research and Development (NITRD) Program," webpage, undated. As of January 31, 2022: https://www.nitrd.gov/about/
Nimmo, Ben, C. Shawn Eib, L. Tamora, Kate Johnson, Ian Smith, Eto Buziashvili, Alyssa Kann, Kanishk Karan, Esteban Ponce de León Rosas, and Max Rizzuto, #OperationFFS: Fake Face Swarm, Graphika and Atlantic Council's Digital Forensic Research Lab, December 2019.
Nyhan, Brendan, Ethan Porter, Jason Reifler, and Thomas J. Wood, "Taking Fact-Checks Literally but Not Seriously? The Effects of Journalistic Fact-Checking on Factual Beliefs and Candidate Favorability," Political Behavior, Vol. 42, September 2020, pp. 939–960.
O'Sullivan, Donie, "Doctored Videos Shared to Make Pelosi Sound Drunk Viewed Millions of Times on Social Media," CNN, May 24, 2019.
Pennycook, Gordon, Adam Bear, Evan Collins, and David G. Rand, "The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Stories Increases Perceived Accuracy of Stories Without Warnings," Management Science, August 2019.
Pennycook, Gordon, Ziv Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles, and David G. Rand, "Shifting Attention to Accuracy Can Reduce Misinformation Online," Nature, Vol. 592, 2021, pp. 590–595.
Posard, Marek N., Marta Kepe, Hilary Reininger, James V. Marrone, Todd C. Helmus, and Jordan R. Reimer, From Consensus to Conflict: Understanding Foreign Measures Targeting U.S. Elections, Santa Monica, Calif.: RAND Corporation, RR-A704-1, 2020. As of March 31, 2022: https://www.rand.org/pubs/research_reports/RRA704-1.html
Reface, homepage, undated. As of October 10, 2021: https://hey.reface.ai
Reuters Communications, "Reuters Expands Deepfake Course to 16 Languages in Partnership with Facebook Journalism Project," Reuters Press Blog, June 15, 2020. As of November 20, 2021: https://www.reuters.com/article/rpb-fbdeepfakecourselanguages/reuters-expands-deepfake-course-to-16-languages-in-partnership-with-facebook-journalism-project-idUSKBN23M1QY
"[A] Robot Wrote This Entire Article. Are You Scared Yet, Human?" The Guardian, September 8, 2020. As of October 10, 2021: https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
Rushing, Ellie, "A Philly Lawyer Nearly Wired $9,000 to a Stranger Impersonating His Son's Voice, Showing Just How Smart Scammers Are Getting," Philadelphia Inquirer, March 9, 2020.
Sablayrolles, Alexandre, Matthijs Douze, Cordelia Schmid, and Hervé Jégou, "Radioactive Data: Tracing Through Training," unpublished manuscript, arXiv:2002.00937, February 3, 2020.
Satter, Raphael, "Experts: Spy Used AI-Generated Face to Connect with Targets," AP News, June 13, 2019.
Sayler, Kelley M., and Laurie A. Harris, "Deep Fakes and National Security," Congressional Research Service, updated June 8, 2021.
Shane, Tommy, Emily Saltz, and Claire Leibowicz, "From Deepfakes to TikTok Filters: How Do You Label AI Content?" Nieman Lab, May 12, 2021.
Shen, Tianxiang, Ruixian Liu, Ju Bai, and Zheng Li, "'Deep Fakes' Using Generative Adversarial Networks (GAN)," Noiselab, University of California, San Diego, 2018. As of October 10, 2021: http://noiselab.ucsd.edu/ECE228_2018/Reports/Report16.pdf
Shin, Jieun, "How Do Partisans Consume News on Social Media? A Comparison of Self-Reports with Digital Trace Measures Among Twitter Users," Social Media + Society, Vol. 6, No. 4, December 2020.
Simonite, Tom, "To See the Future of Disinformation, You Build Robo-Trolls," Wired, November 19, 2019.
———, "What Happened to the Deepfake Threat to the Election?" Wired, November 16, 2020.
Singh, Simranjeet, Rajneesh Sharma, and Alan F. Smeaton, "Using GANs to Synthesize Minimum Training Data for Deepfake Generation," unpublished manuscript, arXiv:2011.05421, November 10, 2020.
Sprout Social, "Meme," webpage, undated. As of January 22, 2022: https://sproutsocial.com/glossary/meme/
Stamos, Alex, Sergey Sanovich, Andrew Grotto, and Allison Berke, "Combatting Organized Disinformation Campaigns from State-Aligned Actors," in Michael McFaul, ed., Securing American Elections: Prescriptions for Enhancing the Integrity and Independence of the 2020 U.S. Presidential Election and Beyond, Stanford, Calif.: Freeman Spogli Institute for International Studies, Stanford University, 2019, pp. 43–52.
Starling Lab, "78 Days: The Archive," webpage, undated. As of November 10, 2021: https://www.starlinglab.org/78daysarchive/
Stewart, Emily, "Trump Has Started Suggesting the Access Hollywood Tape Is Fake. It's Not." Vox, November 28, 2017.
Stoll, Ashley, "Shallowfakes and Their Potential for Fake News," Washington Journal of Law, Technology, and Arts, January 13, 2020.
Stupp, Catherine, "Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case," Wall Street Journal, August 30, 2019.
Tanasi, Alessandro, and Marco Buoncristiano, Ghiro, homepage, 2017. As of October 31, 2021: https://www.getghiro.org
Texas State Legislature, an act relating to the creation of a criminal offense for fabricating a deceptive video with intent to influence the outcome of an election, TX SB-751, introduced June 14, 2019.
Tom [@deeptomcruise], "Sports!" TikTok, February 22, 2021. As of November 10, 2021: https://www.tiktok.com/@deeptomcruise/video/6932166297996233989
U.S. House of Representatives, DEEP FAKES Accountability Act, 116th Congress, H.R. 3230, referred to Committees on Judiciary, Energy and Commerce, and Homeland Security, June 28, 2019.
U.S. Senate, Malicious Deep Fake Prohibition Act, S. 3805, 115th Congress, referred to the Committee on the Judiciary, December 21, 2018.
———, National Defense Authorization Act for Fiscal Year 2020, Public Law 116-92, December 20, 2019. As of June 5, 2022: https://www.govinfo.gov/content/pkg/PLAW-116publ92/html/PLAW-116publ92.htm
———, Deepfake Task Force Act, S. 2559, May 24, 2022.
Vaccari, Cristian, and Andrew Chadwick, "Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News," Social Media + Society, Vol. 6, No. 1, January 2020.
Victor, Daniel, "Your Loved Ones, and Eerie Tom Cruise Videos, Reanimate Unease with Deepfakes," New York Times, March 10, 2021.
Vincent, James, "Watch Jordan Peele Use AI to Make Barack Obama Deliver a PSA About Fake News," The Verge, April 17, 2018.
———, "Tom Cruise Deepfake Creator Says Public Shouldn't Be Worried About 'One-Click Fakes,'" The Verge, March 5, 2021.
Voloshchuk, Alexander, Voicer Famous AI Voice Changer, mobile app, Version 1.17.5, Apple App Store, undated. As of November 10, 2021: https://apps.apple.com/us/app/voicer-famous-ai-voice-changer/id1484480839
Waldemarsson, Christoffer, Disinformation, Deepfakes and Democracy: The European Response to Election Interference in the Digital Age, Copenhagen: Alliance of Democracies, April 27, 2020.
Walorska, Agnieszka M., Deepfakes and Disinformation, Potsdam, Germany: Friedrich Naumann Foundation for Freedom, 2020.
Walter, Nathan, John J. Brooks, Camille J. Saucier, and Sapna Suresh, "Evaluating the Impact of Attempts to Correct Health Misinformation on Social Media: A Meta-Analysis," Health Communication, Vol. 36, No. 13, 2020, pp. 1776–1784.
Washington Post, "Seeing Isn't Believing: The Fact Checker's Guide to Manipulated Video," webpage, undated. As of November 20, 2021: https://www.washingtonpost.com/graphics/2019/politics/fact-checker/manipulated-video-guide/
Wasike, Ben, "Memes, Memes, Everywhere, nor Any Meme to Trust: Examining the Credibility and Persuasiveness of COVID-19-Related Memes," Journal of Computer-Mediated Communication, Vol. 27, No. 2, March 2022.
Wittenberg, Chloe, Ben M. Tappin, Adam J. Berinsky, and David G. Rand, "The (Minimal) Persuasive Advantage of Political Video over Text," Proceedings of the National Academy of Sciences, Vol. 118, No. 47, 2021.
Wong, Sui-Lin, Christian Shepherd, and Qianer Liu, "Old Messages, New Memes: Beijing's Propaganda Playbook on the Hong Kong Protests," Financial Times, September 3, 2019.
World Population Review, "Literacy Rate by Country 2022," webpage, undated. As of January 20, 2022: https://worldpopulationreview.com/country-rankings/literacy-rate-by-country
Yaqub, Waheeb, Otari Kakhidze, Morgan L. Brockman, Nasir Memon, and Sameer Patil, "Effects of Credibility Indicators on Social Media News Sharing Intent," CHI Conference on Human Factors in Computing Systems Proceedings, Honolulu: ACM, April 25–30, 2020.
Yu, Ning, Vladislav Skripniuk, Sahar Abdelnabi, and Mario Fritz, "Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data," unpublished manuscript, arXiv:2007.08457v6, October 7, 2021.
About This Perspective
The purpose of this Perspective is to help audiences in the national security sector gain a formative understanding of artificial intelligence–driven disinformation technologies. This Perspective provides a review of such technologies for deepfake videos, voice cloning, deepfake images, and generative text. Then, focusing on deepfake videos, it identifies the risks associated with those videos, reviews ongoing mitigation efforts, and offers recommendations to help policymakers better counter the threat.

RAND National Security Research Division
This Perspective was sponsored by the Office of the Secretary of Defense and conducted within the International Security and Defense Policy Center of the RAND National Security Research Division (NSRD), which operates the National Defense Research Institute (NDRI), a federally funded research and development center sponsored by the Office of the Secretary of Defense, the Joint Staff, the Unified Combatant Commands, the Navy, the Marine Corps, the defense agencies, and the defense intelligence enterprise.

For more information on the RAND International Security and Defense Policy Center, see www.rand.org/nsrd/isdp or contact the director (contact information is provided on the webpage).

Acknowledgments
As part of this project, the author spoke with experts in academia, industry, and the U.S. government. The author acknowledges deep gratitude for these experts' time and insights and would like to thank Rich Girven, Sina Beaghley, and Eric Landree, who provided guidance and direction to this project. Finally, the author thanks Marjory Blumenthal, senior fellow and director of the Technology and International Affairs Program at the Carnegie Endowment for International Peace, and Christian Johnson of the RAND Corporation for their carefully considered reviews. As always, any errors remain the sole responsibility of the author.

About the Author
Todd C. Helmus is a senior behavioral scientist at the RAND Corporation. He specializes in disinformation, terrorism, and social media. His latest research focuses on understanding and countering Russian disinformation campaigns in the United States and Eastern Europe, enlisting social media influencers in support of U.S. strategic communications, and assessing and countering violent extremism campaigns. Helmus has a Ph.D. in clinical psychology.

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the
world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest.
Research Integrity
Our mission to help improve policy and decisionmaking through research and analysis is enabled through our core values of quality and
objectivity and our unwavering commitment to the highest level of integrity and ethical behavior. To help ensure our research and analysis
are rigorous, objective, and nonpartisan, we subject our research publications to a robust and exacting quality-assurance process; avoid
both the appearance and reality of financial and other conflicts of interest through staff training, project screening, and a policy of mandatory
disclosure; and pursue transparency in our research engagements through our commitment to the open publication of our research findings
and recommendations, disclosure of the source of funding of published research, and policies to ensure intellectual independence. For more
information, visit www.rand.org/about/research-integrity.
RAND's publications do not necessarily reflect the opinions of its research clients and sponsors. RAND® is a registered trademark.
Limited Print and Electronic Distribution Rights
This publication and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for
noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to its webpage on rand.org is encouraged.
Permission is required from RAND to reproduce, or reuse in another form, any of its research products for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
For more information on this publication, visit www.rand.org/t/PEA1043-1
www.rand.org
© 2022 RAND Corporation

PE-A1043-1
