
Political psychology in the digital (mis)information age:


A model of news belief and sharing

Jay J. Van Bavel ([email protected])


Department of Psychology & Center for Neural Science, New York University

Elizabeth A. Harris
Department of Psychology, New York University

Philip Pärnamets
Department of Psychology, New York University
Department of Clinical Neuroscience, Karolinska Institutet

Steve Rathje
Department of Psychology, Cambridge University

Kimberly C. Doell
Department of Psychology, New York University

Joshua A. Tucker ([email protected])


Department of Politics & Center for Social Media and Politics, New York University

CITATION: Van Bavel, J. J., Harris, E. A., Pärnamets, P., Rathje, S., Doell, K. C., & Tucker, J. A. (in
press). Political psychology in the digital (mis)information age: A model of news belief and
sharing. Social Issues and Policy Review.

Acknowledgements: This work was supported in part by the John Templeton Foundation to
JVB, the Swiss National Science Foundation to KCD (grant number: P400PS_190997), the Gates
Cambridge Scholarship to SR (supported by the Gates Cambridge Trust) and the Swedish
Research Council (2016-06793) to PP.

Contribution Statement: All authors contributed to the writing of this paper

Contact Information: [email protected]



Abstract

The spread of misinformation, including “fake news,” propaganda, and conspiracy theories,
represents a serious threat to society, as it has the potential to alter beliefs, behavior, and
policy. Research is beginning to disentangle how and why misinformation is spread and identify
processes that contribute to this social problem. We propose an integrative model to
understand the social, political, and cognitive psychology risk factors that underlie the spread of
misinformation and highlight strategies that might be effective in mitigating this problem.
However, the spread of misinformation is a rapidly growing and evolving problem; thus scholars
need to identify and test novel solutions, and work with policy makers to evaluate and deploy
these solutions. Hence, we provide a roadmap for future research to identify where scholars
should invest their energy in order to have the greatest overall impact.

Keywords: misinformation, fake news, conspiracy theories, social psychology, political psychology, personality psychology

“Anyone who has the power to make you believe absurdities has the power to make you
commit injustices.” --Voltaire

In 2017, “fake news” was named the Collins Dictionary word of the year (Hunt, 2017).
This dubious honor reflects the large impact fake news--false information distributed as if it is
real news--has had on economic, political and social discourse in the last few years. But fake
news is just one form of misinformation, which also includes disinformation, rumors,
propaganda, and conspiracy theories (see Guess & Lyons, 2020). Misinformation poses a
serious threat to democracy because it can make it harder for citizens to make informed
political choices and hold politicians accountable for their actions, foster social conflict, and
undercut trust in important institutions. Moreover, misinformation exacerbates a number of
other global issues, including beliefs about the reality of climate change (Hornsey & Fielding,
2020), the safety of vaccinations (Kata, 2010), and the future of liberal democracy in general
(Persily, 2017; Persily & Tucker, 2020). It has also proven deadly during the 2020 global
pandemic, leading the Director-General of the World Health Organization to declare that
“we're not just fighting a pandemic; we're fighting an infodemic” (Ghebreyesus, 2020).
Therefore, understanding what drives belief and spread of misinformation has far reaching
consequences for human welfare. Our paper presents a model that explains the psychological
factors that underlie the spread of misinformation and strategies that might be effective in
reducing this growing problem.
The issue of misinformation is compounded by the rapid growth of social media. Over
3.6 billion people now actively use social media around the world and as social media has
become the main source of news for many (Shearer & Gottfried, 2017), it has also become
easier to create and spread misinformation. Indeed, an analysis of rumors spread by over 3
million people online found that misinformation spread significantly more than truth—and this
was greatest for political misinformation (Vosoughi, Roy & Aral, 2018). Moreover, misinformation
appears to be amplified around major political events. For instance, people may have engaged
with (e.g., “liked”, “shared”, etc.) fake news more than real news in the few months leading up
to the 2016 US election (Silverman, 2016). As such, there is a dangerous potential for a cycle in
which political division feeds both the belief in, and sharing of, partisan misinformation and
this, in turn, increases political division (Sarlin, 2018; Tucker et al. 2018). However, it is still
unclear how this might impact political behavior (e.g., voting).
In the past few decades, the study of misinformation has grown rapidly (see Douglas,
Sutton, & Cichocka, 2017; Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012; Persily & Tucker
2020; Tucker et al., 2018). We integrate a host of insights gained from previous research in
order to propose a novel theoretical model (see Figure 1 & 2) to explain the psychological
processes underlying the belief and spread of misinformation. To date, most work on the topic
has examined a specific psychological factor underlying the spread of misinformation (e.g.,
partisan bias, analytic thinking, or the need for chaos). Our model integrates several of these
distinct theoretical approaches to provide an overarching framework for scholars and to inform
policy-makers who are tasked with combating misinformation. Specifically, our model
incorporates research from personality psychology, cognitive psychology, political psychology,
and political science and explains how users, media outlets, online platforms, policymakers and
institutions might design different interventions to reduce belief in and sharing of false news.

Our model of misinformation belief and sharing is presented in Figure 1. According to the model, exposure to misinformation increases belief (Path 1) which, in turn, increases
sharing (Path 2). However, exposure to misinformation can increase sharing directly (Path 3)
even if it does not increase belief. We describe how various psychological risk factors can
increase exposure to misinformation (Path A) as well as the impact of misinformation on belief
(Path B) and sharing (Path C). We describe each of these paths below. We also speculate on
reverse pathways from sharing to belief and exposure (i.e. the light grey pathways in Figure 1)
in the future directions section. According to the model, when one individual shares
misinformation it increases exposure to misinformation among other people in their social
network (shown in Figure 2). This, in turn, increases the likelihood that these new individuals
will believe and share the information with their own social networks. In online environments,
this spread can unfold rapidly and have far-reaching consequences for exposure to
misinformation.
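
To make the structure of these pathways concrete, the sketch below simulates a toy version of the model in which a single piece of misinformation spreads through a small random network. This is purely an illustration on our part, not an implementation from the paper: the probabilities attached to Paths 1-3, the network size, and the number of contacts per person are hypothetical values chosen for demonstration.

```python
import random

random.seed(1)

# Hypothetical parameters chosen only for illustration; they are not estimates
# reported in the paper.
P_BELIEF_GIVEN_EXPOSURE = 0.3   # Path 1: exposure -> belief
P_SHARE_GIVEN_BELIEF = 0.5      # Path 2: belief -> sharing
P_SHARE_WITHOUT_BELIEF = 0.1    # Path 3: exposure -> sharing without belief

N_PEOPLE = 200
N_CONTACTS = 5  # each person reaches 5 randomly chosen contacts

# Build a toy directed social network: person -> contacts they can expose
network = {
    person: random.sample([p for p in range(N_PEOPLE) if p != person], N_CONTACTS)
    for person in range(N_PEOPLE)
}

exposed, believers, sharers = {0}, set(), set()  # person 0 sees the item first
frontier = [0]

while frontier:
    next_frontier = []
    for person in frontier:
        believes = random.random() < P_BELIEF_GIVEN_EXPOSURE
        if believes:
            believers.add(person)  # Path 1: exposure increases belief
        # Sharing can follow belief (Path 2) or occur without belief (Path 3)
        p_share = P_SHARE_GIVEN_BELIEF if believes else P_SHARE_WITHOUT_BELIEF
        if random.random() < p_share:
            sharers.add(person)
            for contact in network[person]:  # sharing exposes contacts (Figure 2)
                if contact not in exposed:
                    exposed.add(contact)
                    next_frontier.append(contact)
    frontier = next_frontier

print(f"exposed: {len(exposed)}, believed: {len(believers)}, shared: {len(sharers)}")
```

Running this sketch reproduces the qualitative pattern shown in Figure 2: each act of sharing exposes new people, some of whom believe and share in turn, so even modest sharing probabilities can yield far-reaching exposure.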

Figure 1. A model of (mis)information belief and spread. According to the model, exposure to misinformation
increases belief (Path 1) which, in turn, increases sharing (Path 2). However, exposure to misinformation can
increase sharing directly (Path 3) even if it does not increase belief. Psychological risk factors can increase exposure
to misinformation (Path A) as well as the impact of misinformation on belief (Path B) and sharing (Path C). We
describe each of these paths in our paper. We also speculate on reverse pathways from sharing to belief and
exposure in the future directions section (exemplified by the light grey arrows).

Understanding the factors driving the spread of misinformation, outside of belief (i.e.
Path 3), is also important for developing interventions and solutions to the fake news and
misinformation problem. Interventions designed to detect and tag falsehoods (i.e. fact-
checking) in a way that decreases belief may not be sufficient (discussed below; see also
Mourão & Robertson, 2019). If people are motivated to share fake news in order to signal their
political identity, increase out-party derogation, generate chaos, or make money, then they will
likely place less value on whether stories are true or false, as long as the stories further their
political agenda (see Osmundsen et al., 2020). Further, these motives are not necessarily
uniquely related to misinformation. For example, partisans and elites likely share information
that is factually correct in order to derogate the outgroup party. However, given that
misinformation tends to be particularly negative, spreads quickly, and is false (Brady et al. 2017;
Vosoughi, Roy, & Aral, 2018), it may contribute to social conflict even more so than factual
information. In the following sections, we review the individual-level risk factors for
susceptibility to believe and share misinformation and recent efforts that have been made to
reduce the spread of misinformation. Specifically, we review the role of partisan bias,
polarization, political ideology, cognitive styles, memory, morality, and emotion. By evaluating
potential interventions, we aim to provide a useful resource for scholars as well as policy
makers. However, far more work needs to be done in this area and we identify critical gaps in
knowledge and provide a roadmap for future research on misinformation.

Psychological Risk Factors


Partisan bias
When partisans are exposed to information relevant to a cherished social identity or
ideological worldview, that information is often interpreted in a biased manner that reinforces
original predispositions--a phenomenon known as partisan bias (Kahan, Peters, Dawson &
Slovic, 2017; Meffert, Chung, Joiner, Waks & Garst, 2006; Van Bavel & Pereira, 2018). For
example, someone who believes in the death penalty might give less weight to information
suggesting that the death penalty does not reduce crime. This partisan bias in belief can be due
to selective exposure to partisan news or motivated cognition (see Festinger et al., 1956;
Kunda, 1990)--although these two factors are often hard to disentangle. Political division can
lead partisans to believe misinformation or dismiss real news as fake (Schulz, Wirth, & Müller,
2018)--making it an important risk factor for believing misinformation (Path B). This partisan
bias has been observed across a wide variety of contexts and tasks. Studies provide evidence of
motivated cognition across the political spectrum (Ditto et al., 2018; Mason, 2018), such as
updating opinions on policy issues in a way that affirms one’s preferred political party. There is
evidence of partisan bias in both the United States (Campbell et al. 1960, Druckman 2001, Kam,
2005) and abroad (Coan et al. 2008, Brader, Tucker, & Duell 2012, Brader et al. 2020). We
suspect it emerges from a more basic cognitive tendency to divide the world into groups (Tajfel,
1970) and develop a sense of shared social identity with fellow in-group members (Tajfel &
Turner, 1986; see Cikara, Van Bavel, Ingbretsen, & Lau, 2017).
When people process information, they are influenced not only by the information itself
but by a variety of motives, such as self-, group-, and system-serving goals (Jost, Hennes &
Lavine, 2013; Van Bavel & Pereira, 2018). Theories that incorporate these goals, referred to as
motivational models, posit that when individuals encounter false information that is identity-
congruent (i.e., untrue positive information about the ingroup or untrue negative information
about an outgroup) their identity-based motives (e.g., the desire to believe information that
makes us feel positive about our group) will conflict with accuracy motives (e.g., the desire to
have accurate beliefs about the world; Tomz & Van Houweling, 2008; Van Bavel & Pereira, 2018).
In other words, when a Republican in the United States encounters a fake online news story
from an unknown source with the headline “Pope Francis shocks World, endorses Donald
Trump for President”, their accuracy goals might motivate them to dismiss the story (because it
is false or from an untrustworthy source) but their social identity goals would motivate them to
believe that it is true (because it is positive and about their ingroup leader). 1 When accuracy
and identity goals are pitted against each other in this way, the goal that is most salient or most
“valued” by the individual often determines which belief to endorse and thus how to engage
with the content. The impact of identity extends to all kinds of judgments. For instance,
American partisans assessed the economy more positively when political power shifted in their
favor and more negatively when it shifted in the outgroup’s favor (Gerber & Huber, 2010).
When someone is faced with evidence that contradicts their beliefs, rational individuals
should update or change those beliefs. Therefore, the finding that people maintain beliefs that have been discredited as false (Path B) has been of central interest to psychologists (e.g.,
McGuire, 1964; Ross, Lepper & Hubbard, 1975). Although backfire effects--where people
strengthen their initial beliefs in the face of disconfirming evidence--appear to be rare (see
Wood & Porter, 2019), politically relevant beliefs are nevertheless quite resistant to
disconfirming evidence (see Aslett et al. 2020; Batailler, Brannon, Teas & Gawronski, 2020;
Pereira, Harris & Van Bavel, 2020). For instance, both Democrats and Republicans were more
likely to believe in and willing to share negative fake news stories featuring politicians from the
other party (Pereira et al., 2020). In both cases, partisans from both parties believed fake (or
real) news when it portrayed the outgroup negatively. Similarly, the most important variable in
predicting whether an individual was likely to assess an article (which professional fact-checkers
believed was false) as misleading was the alignment of the partisan slant of the news source
with their own identity (Aslett et al., 2020). Worse yet, one study found that partisans
continued to believe information that aligned with their partisan identity, even when that
information was unambiguously labeled as false (Bullock, 2007). This suggests that not only are
beliefs affected by our identity and goals, but the mutability of those beliefs is as well.
Politically biased reasoning is also found outside of the United States, with similar
findings in Brazil (Samuels & Zucco Jr, 2014) and Uganda (Carlson, 2016). There is also good
reason to think it might motivate people to share misinformation even if they believe it is false
(Path C). This is analogous to propaganda, where political actors promote false or misleading
information to advance a political cause or point of view. Sharing misinformation can also serve
an identity signaling function--letting others know exactly where one stands on an issue, leader,
or party. In any event, the impact of identity appears to be a feature of human nature rather
than a single political system. However, there are features of the political context that may
amplify the impact of identity on misinformation belief and sharing. We discuss one such
feature in the next section: political polarization.
Polarization
Another risk factor that both impacts and is impacted by misinformation is political
polarization. Political polarization refers to the divergence of political attitudes and beliefs
towards ideological extremes, although a more pernicious form of polarization focuses less on
the triumphs of ingroup party members than on dominating opposing party members (Finkel et
al., 2020). Over the past few decades, political polarization has become more extreme in many
countries (Kevins & Soroka, 2018; Pew Research Center, 2014; Zimerman & Pinheiro, 2020).

1 According to Buzzfeed, this was the most engaging Fake News story in the 2016 US Presidential election, with over 960,000 engagements (comments, likes, reactions and shares; Silverman, 2016).

A polarized political system likely motivates people to spread misinformation for partisan gain
(Path C), which can, in turn, increase exposure to misinformation for a broad segment of the
population (Path A and Figure 2). Likewise, by making political identity salient it may also
motivate people to select hyper-partisan information sources and increase their belief in
misinformation (Path B). As we noted above, the spread of misinformation appears to increase
as an election draws near (Silverman, 2016), which is often a moment of heightened
polarization. As such, polarization is an overarching risk factor for all aspects of our model.
Polarization is likely higher in two-party systems (or where two parties dominate the
political environment). From a social identity perspective, a two-party system can lead to an “us
vs them” mentality resulting in increased prejudice, intergroup conflict, and general out-group
or out-party derogation (Abramowitz & Webster, 2018; Iyengar, Sood, & Lelkes, 2012; Johnson,
Rowatt, & Labouff, 2012; Tajfel, 1970; Tajfel & Turner, 1986). This mentality can make it difficult
to change political opinions because the simple act of fact checking a false claim can be seen as
supporting a partisan agenda. In the US recently, many right-wing partisans left mainstream
social media platforms (e.g., Twitter) and migrated to a new platform (i.e., Parler) to avoid
being fact-checked (Bond, 2020). Rather than updating their beliefs, partisans may see fact-
checkers as biased (Walker & Gottfried, 2019) or gravitate towards other social media platforms
that will not correct their false beliefs (Isaac & Browning, 2020). Polarization may not only
amplify the impacts of identity on belief (Path B; see Van Bavel & Pereira, 2018) but also motivate people to share misinformation as a way of signaling their political identity (Path C; see Brady,
Crockett & Van Bavel, 2020). In a polarized environment, relatively neutral information may be
seen as politically relevant, information offered by one’s ingroup is more likely to be believed as
true, and information offered by the outgroup is likely to be dismissed as false.
Another negative consequence of the “us vs them” mentality is affective polarization.
Negative feelings towards the out-group are heightened by polarization, and this acts to
increase the divide between parties (Abramowitz & Webster, 2018; Banda & Cluverius, 2018;
Finkel et al., 2020). Partisans, especially extreme partisans, may be motivated to spread fake
news to bolster the ingroup or foster negative feelings toward the opposition--even if
they do not believe the information (Path C). One recent Twitter-based study investigated this
hypothesis in more than 2,000 Americans by combining surveys with real-world news sharing of
more than 2 million tweets (Osmundsen, Bor, Vahlstrup, Bechmann, & Petersen, 2020). Not
only did the strength of partisan identity predict the likelihood of sharing at least one story
from a fake news source, but negative affect directed toward the out-group party, more so than
positive affect toward the in-group party, appeared to drive sharing behaviors (see also Rathje,
Van Bavel, & van der Linden, 2020). Although correlational, these results are in line with the
notion that sharing fake news may be motivated by affective polarization. Importantly, this
would suggest that people care less about accuracy and more about whether or not
information aligns with their partisan identity (i.e. if it effectively derogates the out-group; see
Osmundsen et al., 2020) in polarized contexts.
Polarization varies widely across nations. Some countries (e.g. Norway, Sweden and
Germany) have exhibited long-term trends of decreasing polarization, while others (e.g.,
Canada, Switzerland and the US) have long-term trends of increasing polarization. However,
since 2000, many countries have experienced greater polarization (regardless of long-term
trends). This latter finding is consistent with conclusions drawn from recent investigations in
Sweden where both a nationally representative survey (Renström, Bäck & Schmeisser, 2020)
and linguistic analysis of speeches in parliament (Bäck & Carroll, 2018) have found affective
polarization. Analysis of multi-party systems is complicated since it is not always entirely clear
who the “out-group party” is. In these systems, the motivation for sharing misinformation to
bolster one’s in-group may be less relevant. To the extent that there are clear coalitions
dividing multiple parties into “blocks” or in countries that have recently elected unprecedented
parties into governments (e.g. Italy and Greece have both recently elected far right-leaning
parties), it is probable that the spread of misinformation may also reinforce (and be reinforced
by) polarization itself, in a manner similar to the US.
Polarization amongst political elites contributes to political polarization in the general
population, and here, fake news sharing can be especially dangerous. Within a few clicks,
political elites can post information which can reach millions of people within hours or
minutes and these social media posts drive further mainstream media coverage (Wells et al.
2016). To increase support within and amongst their in-group party, political elites often post
information intended to derogate the opposition (i.e., utilizing affective polarization), and their
supporters respond in kind, by expressing more negative evaluations toward the out-group
(Banda & Cluverius, 2018). If the information is false, or the party endorses a misinformed
policy, then it can alter the opinions and beliefs of citizens (Dietz, 2020). For example, American
conservatives tend not to believe in human-caused climate change (e.g., McCright & Dunlap,
2011). President Trump has repeatedly expressed his skepticism of climate change on social
media (Matthews, 2017). Following his election in 2016, belief in climate change actually
decreased in the United States, for both Republicans and Democrats, suggesting that partisans
update their beliefs based on cues from political leaders (Hahnel, Mumenthaler, & Brosch,
2020; see also Zawadzki, Bouman, Steg, Bojarskich, & Druen, 2020). Interestingly, the changes
in climate change belief from pre- to post-election were mediated by increased positive feelings
towards the Republican party (by both Republicans and Democrats), and those with the most
pronounced increases were the ones who most strongly reduced their climate change beliefs.
These results are worrisome because increased climate skepticism, especially among political
elites, can harm mitigation efforts (Dietz, 2020).
Polarization is an overarching risk factor that affects all aspects of our model (i.e. Path A,
B, and C), and strongly contributes to a vicious cycle of polarization reinforcement. However, it
should be noted that there are various barriers and moderators that are likely relevant for
stemming this cycle. For example, research suggests that many people are reluctant to share
fake news because doing so could be harmful to their reputation (Altay et al., 2019). Thus, one likely moderator is the set of social norms operating within different partisan communities. Some
work has found similar patterns of misinformation belief among both Democrats and
Republicans (Path B), while Republicans are more willing to share misinformation (Path C;
Guess et al., 2019; Pereira et al., 2020). This would suggest that within the Republican
community, the sharing of misinformation is normative or there is no norm that would select
against misinformation sharing. These party differences may also stem from personality factors,
which we discuss in the next section.
Political ideology
Although there is a body of work finding symmetrical patterns of misinformation belief
and sharing among people on the left and right, there is also evidence of some important
differences. Political ideology, which refers to the set of organizing beliefs individuals hold
about the world generally and the polity specifically (Path B; Jost, Federico & Napier, 2009),
may lead some people to be more susceptible to misinformation. Recent research on online
misinformation has generally focused on contrasting people with more conservative (right-
wing) versus liberal (left-wing) political beliefs. In one study of users who agreed to share the
content of their Facebook feed, conservatives, and particularly extreme conservatives, were
nearly ten times more likely to share fake news than liberals (Guess, Nagler & Tucker, 2019).
Other work conducted during the COVID-19 pandemic found an association between political
conservatism and willingness to believe misinformation about the pandemic in a large US
sample (Calvillo, Ross, Garcia, Smelter & Rutchick, 2020). A large international study observed a similar, albeit weaker, relationship when analyzing the relationship between political ideology
and conspiracy theory beliefs in 67 countries (Van Bavel et al., 2020). These studies raise the
question of why these associations between conservatism and online misinformation are
found.
One possibility is that there is something about conservative ideology that makes its adherents
susceptible to misinformation and other false beliefs (Baron & Jost, 2019; van der Linden,
Panagopoulos, Azevedo & Jost 2020). However, other more proximate causes may also account
for these findings. For instance, the majority of fake news online (e.g., during the 2016 US
presidential election) had conservative or anti-liberal content, meaning that exposure to
shareable news was much larger for the average conservative compared to the average liberal
(Path A; Guess, Nyhan & Reifler, 2018). In the limited cases of pro-Clinton fake news, liberals
were actually more likely to share links to these stories than conservatives (Path C; Guess,
Nagler & Tucker, 2019). Indeed, ideological congruence--or partisan alignment--was the most
important demographic predictor of believing news stories that were labeled as false or
misleading by professional fact checkers. This behavior was prevalent for both liberals
(believing false stories from liberal sources to be true) and conservatives (believing false stories
from conservative sources to be true; Aslett et al., 2020; see also Pereira et al., 2020). As such,
it is currently difficult to disentangle the effects of identity from political ideology in many of
these studies.
In the US, where conservative national leadership and news media have at times
promoted false information, increased susceptibility to false news among conservatives was
mediated by participants’ approval of the current Republican president and correlated with Fox
News media consumption (Calvillo et al. 2020). The relationship with Presidential approval
suggests that identity leadership--the tendency of individuals to take cues for beliefs and actions from leaders of their identified groups (Hogg & Reid, 2001)--might motivate people to believe fake news. However, when non-political fake news (e.g., JFK and Marilyn Monroe having
an unborn child) was presented to partisans, Republicans were more likely to believe it than
Democrats (Pereira et al., 2020). Thus, there may be ideological differences in baseline levels of
susceptibility to misinformation, but aspects of partisan identity might dominate belief and
sharing once the information is clearly aligned with their identity. As such, we believe there are both
ideological differences driving beliefs as well as aspects of social identity (i.e., identity-
congruence and identity leadership) driving misinformation belief and sharing.
Cognitive style
Differences in the tendency to engage in analytical thinking are related to differences in the belief in misinformation (Pennycook & Rand, 2019). Analytical thinking is usually gauged using
the Cognitive Reflection Test (CRT; Frederick, 2005), which involves asking people to respond to simple mathematical questions that have an intuitive but incorrect answer and a correct answer that can be reached with a few moments of reflection (e.g., “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”--the intuitive answer is 10 cents, whereas the correct answer is 5 cents). High scores on this task require
not only the capacity to answer these items correctly but also the motivation to override initial
intuitions to compute an accurate response (i.e., accuracy motives). One paper found that
Americans who scored higher on the CRT were better at discerning real from fake news
(Pennycook & Rand, 2019; see also Bronstein, Pennycook, Bear, Rand & Cannon, 2019). And a
recent re-analysis of this research found that both partisanship and cognitive reflection
independently contributed to fake news detection, with partisanship predicting biased
judgments of fake news and cognitive reflection predicting accurate discernment of fake news
(correct detections minus false alarms; Batailler et al., 2020). The CRT is also negatively correlated with “bullshit receptivity”--the tendency to perceive pseudo-profound statements as genuinely profound (Pennycook, Cheyne, Barr, Koehler & Fugelsang, 2015). More recent
work has linked the two constructs to a common factor capturing lack of skepticism and
reflexive open-mindedness (Pennycook & Rand, 2020).
Some research suggests that individual differences in analytical thinking may better be
characterized as numeracy (and insight; see Patel, Baker & Scherer, 2019). As such, individual
differences in the belief of some misinformation may hinge more on the capacity of people to
understand quantitative information and statistics, which are integral parts of scientific and
political communication. This has been particularly pronounced during the coronavirus pandemic, which required an understanding of nonlinear growth curves at a time when misinformation about the virus alleged that it was less deadly than the common flu. Indeed, a recent study with
representative national samples in five nations (the UK, Ireland, USA, Spain and Mexico) found
that increased numeracy skills, along with trust in scientists, were related to lower susceptibility
to COVID-19 misinformation across all countries (Roozenbeek et al., 2020). Moreover, factors
like age, gender, education, and political ideology were not consistent predictors of belief.
Therefore, a lack of numeracy skills might underlie some of the findings from the CRT
literature and provide a risk factor for the belief in misinformation (Path B). Some studies
suggest that higher scores on the CRT, numeracy, or education also correlate with more
polarized beliefs about contentious political issues. Under this framework, better reasoning
ability ironically leads to more motivated reasoning and a greater ability to justify identity-
congruent beliefs (Kahan, 2012, Kahan et. al, 2017). However, this finding does not appear to
apply to misinformation, where CRT and numeracy ability predict reduced susceptibility to
misinformation for individuals across the political spectrum. This has led some to claim that
“lack of reasoning” better explains susceptibility to fake news than motivated reasoning
(Pennycook et al., 2018; but see Batailler et al., 2020).
One motive for spreading misinformation--even when people know it is false--is an anti-social
mindset known as need for chaos (Path C; Petersen, Osmundsen & Arceneaux, 2020). The need
for chaos is captured by agreement to statements such as “sometimes I just feel like destroying
beautiful things.” Hence, the need for chaos represents the idea that “some men just want to
watch the world burn” (Nolan, 2008). As such, they may be willing to share misinformation
even if they do not believe it (indeed, sharing false information might be especially appealing to
these individuals). This need is hypothesized to arise in individuals high in status-seeking dominance tendencies (such as psychopathy and social dominance orientation) when they find
themselves in socially and economically marginalized situations (including social isolation). In a
representative sample of Americans, the need for chaos was correlated with a willingness to
share hostile online rumors (Petersen, Osmundsen & Arceneaux, 2020). Importantly, the need
for chaos appears to transcend partisan motivations. In the US, individuals low in need for
chaos usually prefer to share rumors about their partisan outgroup (Democrats about
Republicans and vice versa), while individuals high in need for chaos indicated greater
willingness to share rumors about both party groups indiscriminately.
The existence of such individuals who are willing to share fake news and hostile rumors
with different motivations (i.e., non-partisan motivations) is troubling, as those individuals may not be
susceptible to the same interventions as those targeting reductions in polarization or partisan
motivation. Thus, interventions to combat these actors should aim to decrease exposure to
misinformation (Path A) or reduce their capacity to share it with others (e.g., by removing
repeat offenders from social media platforms). A minority of users may produce the majority of
misinformation, but this motivated minority can have significant destabilizing effects on the
majority of users in informational networks (Juul & Porter, 2019; Törnberg, 2018). For example,
these individuals might generate a disproportionate amount of information that is then
amplified by people with the sorts of partisan or ideological motives we mentioned above
leading to a shift in the majority opinion (e.g. Figure 2).
Another difference in thinking style that appears to be a determinant of misinformation
beliefs is intellectual humility. In many ways, intellectual humility is opposed to the need for
chaos. It is a multi-faceted construct characterized by virtues of reasoning such as open-
mindedness, modesty and corrigibility (Alfano et al., 2017). Several studies have found that
individuals higher in intellectual humility are less likely to believe common conspiracy theories
or endorse fake news (Meyer, 2019), as well as being less likely to endorse conspiracy beliefs
related to the COVID-19 pandemic (Meyer, Alfano & de Bruin, 2020). Importantly, the latter
finding was recently replicated in a large, global study involving over 40,000 participants from 61 countries, which found that open-mindedness, a facet of intellectual humility, was one of the strongest negative predictors of COVID misinformation beliefs among a wide range of predictors drawn from the social psychological literature (Pärnamets et al., 2020). This
construct might therefore provide a buffer against misinformation, but it is not known how to
significantly increase this trait.
Memory
There are multiple cognitive risk factors that affect when and why individuals might
believe misinformation, and memory appears to be a central factor (Path B). One well-
established effect in the cognitive psychology literature is known as the illusory truth effect (see
Dechêne, Stahl, Hansen & Wänke, 2010). The illusory truth effect occurs when statements that
are seen repeatedly are more likely to be recalled as true, regardless of whether they are true
or false (Hasher, Goldstein & Toppino, 1977). For instance, an early study on wartime rumor
spreading found that rumors people had heard before were more likely to be believed (Allport
& Lepkin, 1945). Similarly, fake news that is viewed multiple times is more likely to be
incorrectly remembered as, and believed to be, true compared to fake news only viewed once
(Aslett et al. 2020; Berinsky & Wittenberg 2020). Even one additional exposure to a particular
fake news headline increases the perception of accuracy (both on the same day and a week
later)--regardless of whether the headline had been flagged by fact-checkers or not
(Pennycook, Cannon & Rand, 2018).
There are also cognitive changes that individuals undergo as they age that affect their
belief in fake news. A recent examination of large-scale Facebook data found that elderly
individuals shared almost seven times more links to fake news websites than the youngest
individuals (Guess, Nagler & Tucker, 2019). Why might this be the case? The authors of the
study point to digital literacy as the most obvious potential culprit, with the over-65 generation being less familiar with Facebook (see also Brashier & Schacter, 2020). However, another
potential culprit is the cognitive decline that occurs with age: elderly adults have reduced
source memory (Spencer & Raz, 1995) and recollection abilities (Prull et al., 2006). These
deficits appear to lead to increased difficulty in rejecting misinformation, even when they
initially knew it was false (Paige et al., 2019) or when it was initially tagged by a fact checker
(Brashier & Schacter, 2020). Indeed, elderly individuals were more susceptible to the illusory
truth effect (Law, Hawkins & Craik, 1998). Brashier and Schacter (2020) also point to the
increase in trust associated with aging (Poulin & Haase, 2015). A third potential explanation is
that older people tend to be more polarized than younger people and have consumed a
lifetime of potentially partisan news (Converse 1969). Another potential explanation could be
that if there are potential costs to be borne in the labor market from being identified as having
shared fake news, older retirees would not have to worry about the consequences of such
sanctions. As such, it is too early to know how age-related changes in cognition account for
their tendency to share misinformation and if this is mediated by shifts in belief. We do know,
however, that this is a very important risk factor and more research should explore this topic.
Morality and Emotion
One critical factor involved in the belief and spread of misinformation is emotion. A
large project analyzed over 100,000 tweets posted between 2006 and 2017, looking for factors
that are related to greater diffusion of tweets (Vosoughi, Roy, & Aral, 2018). Their main finding
was that misinformation, as compared to true information, diffused wider and faster. This was
true of misinformation on a variety of topics, but especially of political misinformation
(compared to false information on urban legends, business, terrorism, science, entertainment,
etc.). However, there were two other interesting factors. First, misinformation tended to be
more novel (compared to tweets the users had previously seen). This suggests that, in contrast
to people being more likely to believe news they have seen previously, people are more likely to
share novel news (Path C). Second, misinformation elicited greater surprise than true
information, as well as more fear and disgust. Other recent studies have found that relying on
emotion as opposed to reason increases belief in fake news (Path B; Martel, Pennycook &
Rand, 2020).
Morality--specifically, moral violations--is associated with negative emotions (e.g., contempt, anger, and disgust; Rozin, Lowery, Imada & Haidt, 1999). Due to this connection
between emotions and morality, another study of over 500,000 tweets explored the
relationship between the type of language in the tweet and its retweet count (Brady, Wills, Jost,
Tucker & Van Bavel, 2017). They found moral-emotional language (but not distinctly emotional
or distinctly moral language) was associated with increased retweet count in political
conversations--each additional moral emotional word was associated with a 20% increase in the
probability of it being retweeted (the same pattern was found among hundreds of political
leaders; see Brady et al., 2019). Importantly, the use of this language was associated with
increased polarization in the retweet network as people were more likely to share content that
aligned with their political beliefs. These correlational studies capture huge real-world samples
and suggest that when fake news elicits surprise and moral-emotions, that piece of fake news
may be more likely to be shared online. Vosoughi and colleagues (2018) suggest that
misinformation elicits greater negative, potentially moral, emotions (such as disgust). As such,
one potential solution may be to use moral emotional language when delivering true
information, allowing it to compete more efficiently against misinformation in virality.
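
To illustrate how the reported estimate scales, the short sketch below compounds a 20% per-word increase in retweet rate. Treating the estimate as multiplying across words is our simplifying assumption for illustration, not an analysis from Brady and colleagues (2017).

```python
# Illustrative compounding of the reported ~20% increase in retweet rate per
# additional moral-emotional word (simplifying assumption: effects multiply).
RATE_INCREASE_PER_WORD = 0.20

for n_words in range(5):
    multiplier = (1 + RATE_INCREASE_PER_WORD) ** n_words
    print(f"{n_words} moral-emotional words -> ~{multiplier:.2f}x the expected retweet rate")
```

Under this reading, a tweet with three moral-emotional words would be expected to be retweeted roughly 1.7 times as often as an otherwise similar tweet with none.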

Figure 2. The spread of misinformation in social networks. According to the model, when one individual shares
misinformation it increases exposure to misinformation among other people in their social network. This, in turn,
increases the likelihood that these new individuals will believe and share the information with their own social
networks. In online environments, this spread can unfold rapidly and have far-reaching consequences for exposure
to misinformation.

Potential Solutions
The literature reviewed thus far clarifies the psychological factors underlying
susceptibility to the belief and spread of misinformation. It is complemented by a growing body of work examining potential interventions for mitigating these issues. In the section
below, we will discuss four potential solutions for the misinformation problem: (1) fact-
checking, (2) equipping people with the psychological resources to better spot fake news (e.g.
fake news inoculation), (3) eliminating bad actors, and (4) fixing the incentive structures that
promote fake news. In Table 1, we show how these interventions target the pathways outlined
in our model (Figure 1) and how they may be useful to address the psychological risk factors we
presented above.
The traditional approach to reducing misinformation has been fact-checking, and there are
now numerous dedicated websites for exactly this task (e.g., Snopes, Politifact). Multiple meta-
analyses suggest that fact-checks can successfully reduce belief in misinformation (Chan et al.,
2017; Clayton et al., 2019; Walter & Murphy, 2018). However, fact-checking is much less
effective in political contexts, where people have strong prior beliefs, partisan identities, and
ideological commitments (Path B; Walter et al., 2020), or when the motivations underlying
sharing misinformation (e.g. out-group derogation, need for chaos) are unrelated to the
accuracy of the information (Path C). People are sensitive to the source of the fact-checks
(Schwarz et al., 2016), and are skeptical of fact-checks from political outgroup members
(Berinsky, 2017). Some work suggests that politically incongruent fact-checks can lead to a
“backfire effect,” whereby a fact-check causes people to believe more strongly in that
misinformation (Nyhan & Reifler, 2010). However, recent research suggests that this backfire
effect is rare, and that, for the most part, fact-checks successfully correct misperceptions
(Wood & Porter, 2019). As such, fact checking may have a limited range of utility. Specifically, it
might be most useful when the fact checker is seen as neutral or the domain of misinformation
is unrelated to partisan identities. It is less useful when applied to bad actors who are
indifferent to the factual basis for their claims.
There are some challenges, however, to implementing fact-checking. For instance,
introducing certain warning labels on misinformation can have unwanted spillovers leading
people to decrease their credence in true news headlines as well (Clayton et al., 2019;
Pennycook & Rand, 2017). This example illustrates the complexities involved in designing
effective interventions. Moreover, fact-checking takes time, as fact-checks must be completed and published, which leaves open the question of how to address misinformation when it first appears online (Aslett et al. 2020). For example, one study found an average lag of 10-20 hours between the first spread of fake news and fact-checking, whereby fake news was more often spread by a few highly active accounts while the spread of fact-checking information relied on grass-roots user activity (Shao, Ciampaglia, Flammini, & Menczer, 2016). It is also unclear if
fact-checks reach their intended audience (Guess, Nyhan, et al., 2020) since the type of people
who tend to visit untrustworthy websites are not necessarily exposed to fact-checks.
Additionally, the “continued influence effect” of misinformation suggests that people continue
to rely on misinformation even after it is debunked (Berinsky & Wittenberg 2020; Lewandowsky
et al., 2012).2 For fact-checks to be effective, they should provide detailed alternative
explanations that fill in knowledge gaps (Lewandowsky et al., 2020).

2 However, contrary to popular belief that it is harmful to repeat misinformation when debunking it, one study has shown that repeating misinformation together with a retraction was more effective in reducing reliance on misinformation than simply showing the retraction alone (Ecker, Hogan & Lewandowsky, 2017).

An effective complement to fact-checking or de-bunking is to “pre-bunk,” or better prepare people against deception and manipulation (Path B). Like vaccination, “pre-bunking” follows the logic that it is better to prevent than to cure (van der Linden, 2019). For instance, playing an interactive game in which people try to create fake news can successfully “inoculate” people against misinformation and make them better at identifying false headlines (Roozenbeek & van der Linden, 2019). Importantly, the effects of this intervention were not moderated by cognitive reflection, education, or political orientation, though it was slightly less effective among males and older individuals. Additionally, this inoculation-based intervention had positive effects up to several months after the intervention was complete--suggesting that the lessons were sustained (Martins et al., 2020). Similarly, a media literacy intervention in
which Facebook and WhatsApp users were given “tips” for spotting misinformation increased
discernment between mainstream and untrustworthy headlines for people across the political
spectrum (Guess, Lerner, et al., 2020). Thus, inoculation or media literacy-based interventions
can be used to reduce susceptibility to misinformation for individuals with a range of
demographic factors, cognitive styles, and political orientations.
In addition to teaching media literacy, another scalable psychological solution to the
fake news problem is to encourage more reflective thinking. As we noted above, people who
lack the motivation to generate accurate responses may rely more on intuitive responses and
believe or share misinformation (Pennycook & Rand, 2019). One recent study (Bago et al.,
2020), showed that encouraging people to deliberate about whether a headline is true or false
can improve accuracy in detecting fake news. Participants were required to give fast intuitive
responses to a series of headlines, and then subsequently given an opportunity to rethink their
responses, free from a time constraint (thus permitting more deliberation). Deliberation
corrected intuitive mistakes for the false headlines (and the true headlines were unaffected).
Additionally, giving people a brief “accuracy nudge” in which they are asked about whether a
single headline is true can lead to decreased sharing of fake news (Pennycook et al., 2019,
2020). While these effects are modest, they are not moderated by various important factors,
such as cognitive reflection, science knowledge, or partisanship. Thus, scalable “nudges” can be
implemented by social media platforms to encourage analytical thinking and a focus on
accuracy (e.g., Twitter recently implemented a reminder to read any articles before retweeting
them); however, further research is necessary to ascertain the long-term effectiveness of
repeated exposure to these nudges.
While media literacy interventions may work for people who are already motivated to be accurate, and accuracy nudges may increase that motivation (Path B), other
solutions must address bad actors and trolls who willingly share fake news or try to manipulate
public opinion (Path C). Bots and influence campaigns have attempted to sow discord by
spreading false and hostile claims (Stukal et al., 2017) and by using polarizing rhetoric and
content (Simchon, Brady & Van Bavel, 2020). To resolve these issues, companies may need to
implement and enforce regulations. For instance, Twitter and Facebook have reportedly made a
number of changes to their platform to remove bots or conspiracy theory groups (Mosseri,
2017; Roth & Pickles, 2020). Most recently, Facebook and Twitter have removed all content
from the conspiracy theory group QAnon (Wong, 2020). However, social media platforms often
do not make the details behind these changes transparent, making it difficult for researchers or
policymakers to evaluate the efficacy of these solutions. Nonetheless, observational research
has found that interactions with fake news websites fell by about 60% between 2015 and 2018
on Facebook and Twitter, likely because of stricter content moderation policies (Allcott et al.,
2019). Thus, it appears that social media companies have significant power to stem the spread
of fake news through internal content moderation strategies.3
3 One potential challenge is that misinformation purveyors may then migrate to other platforms that ignore or embrace the spread of false content (e.g., recent reports suggest that people are migrating from Facebook & Twitter to Parler, a platform funded by conservative activists in the United States).

Lastly, one can shift the incentive structure that contributes to the misinformation
problem on social media platforms. Currently, the majority of these platforms are structured
such that the “goal” of posting is to garner likes, shares, and/or followers, and thus, false news
(Vosoughi et al., 2018) or posts expressing moral outrage (Brady et al., 2019) are more likely to
go “viral” online. Further, platforms like YouTube can offer substantial monetary incentives for
viral content.4 This is especially problematic because the algorithms that regulate and moderate
content (both video creation and community engagement in the comment section) have been
the subject of controversy, especially in relation to the role of partisanship (see Jing, Robertson,
& Wilson, 2019). Thus, because of these incentive structures, social media platforms may
inadvertently be providing incentives for people to create fake news and encourage its rapid
spread. This would suggest that some of the most important steps to reduce exposure (Path A)
and sharing (Path C) of fake news may need to come from the platforms themselves.5
People also respond to incentives for accuracy. For instance, paying people to give more
accurate answers to factual questions about politics reduces polarized responses to politically
contentious facts (Bullock et al., 2013; Prior et al., 2015). However, paying people to correctly
assess the veracity of news that had appeared in the past 24 hours did not increase accuracy
(Aslett et al. 2020), so it is unclear whether these incentives work in all contexts. Regarding the
creation of fake news (Path A and C), sending state legislators a series of letters about the
reputational costs of making false claims reduced the number of false claims made by these
politicians (Nyhan & Reifler, 2015). This could be especially powerful since political elites often
have large, receptive audiences for misinformation. In sum, shifting the incentive structure to disincentivize the creation and sharing of false claims, and to incentivize the detection of misleading claims, may help prevent the spread of misinformation.
Social media companies can also address this incentive structure. As we noted above,
they can change the platform design to “nudge” people toward truth and away from falsehoods
(Brady et al., 2020; Lewandowsky et al., 2012; Lorenz-Spreen et al., 2020), or change the
algorithmic structure of the website to downrank false or hyper-partisan content (Pennycook &
Rand, 2019b). For instance, they can make it harder to see (Path A) or share false information
(Path C), or add qualifiers that discredit misinformation (Path B). Policy-makers can also
address the broader incentive structure surrounding fake news by creating relevant laws.
However, these should be implemented with caution, as a number of previous laws “banning”
fake news have had unclear definitions of what fake news is, and have raised questions about
government censorship. For example, China’s fake news laws cast a broad definition of fake news, giving the government broad authority to imprison individuals who spread rumors undermining the government (Tambini, 2017).

4 One YouTube content creator reported making more than $10,000 USD for a single video with 3 million views (https://youtu.be/0EEZ4ECr9iU?t=403). It should be noted that some videos can reach several billion views on that platform.

5 The platforms are of course aware of this possibility; see for example YouTube’s January 2019 attempt to remove videos with misinformation from its recommendation algorithm (https://blog.youtube/news-and-events/continuing-our-work-to-improv), Facebook’s June 2020 decision to label misleading posts from politicians (https://www.latimes.com/business/technology/story/2020-06-26/facebook-following-twitter-will-label-posts-that-violate-its-rules-including-trumps), and Twitter’s November 2020 decision to test a warning label when someone attempts to like a Tweet with misinformation in it (https://www.usatoday.com/story/tech/2020/11/10/twitter-tests-add-misleading-information-label-when-liking-tweets/6231896002/).

In each of the proposed solutions above, we have highlighted their benefits, potential
limitations, and how each solution interacts with potential risk factors. Table 1 demonstrates
how exactly each intervention strategy might be utilized, depending on which risk factors are
being targeted, with the ultimate goal being to reduce misinformation sharing. Overall, we
recommend a multifaceted approach that combines aspects of fact checking, psychological
interventions (inoculation, accuracy nudges, and media literacy interventions), more
sophisticated content moderation strategies, and an adjustment of incentive structures that
lead to the creation and propagation of misinformation. We also suggest implementing
interventions with caution, and highlight the importance of rigorous pilot tests before scaling
solutions, ideally involving scholars outside of the platforms in addition to platforms’ internal
research teams: since many of these solutions have been tested only through in-lab or online experiments, the precise effects of large-scale misinformation-reduction strategies (for example, those implemented by social media platforms or governments) are still unclear.
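To illustrate the kind of ranking-based intervention described above, the sketch below (in Python) shows how a feed could, in principle, combine a baseline engagement score with the output of a misinformation classifier to reduce the visibility of likely-false content (Path A) and attach discrediting labels (Path B). This is a minimal illustration under assumed inputs; the function names, fields, thresholds, and weights are hypothetical and do not describe any platform’s actual system.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float        # platform's baseline ranking signal (hypothetical)
    misinformation_prob: float     # e.g., output of a fact-check classifier, 0 to 1 (hypothetical)
    label: str = ""                # optional warning label shown to users
    adjusted_score: float = 0.0

def rerank_feed(posts, downrank_weight=0.8, label_threshold=0.7):
    """Sort posts by a score that penalizes likely misinformation (illustrative only)."""
    for post in posts:
        # Path A: reduce exposure by shrinking the ranking score in proportion
        # to the estimated probability that the post is misinformation.
        post.adjusted_score = post.engagement_score * (1 - downrank_weight * post.misinformation_prob)
        # Path B: attach a qualifier that discredits high-risk posts.
        if post.misinformation_prob >= label_threshold:
            post.label = "Disputed by independent fact-checkers"
    return sorted(posts, key=lambda p: p.adjusted_score, reverse=True)

feed = [Post("a1", engagement_score=9.2, misinformation_prob=0.85),
        Post("a2", engagement_score=4.1, misinformation_prob=0.05)]
for post in rerank_feed(feed):
    print(post.post_id, round(post.adjusted_score, 2), post.label)

In this toy example, the highly engaging but likely-false post ("a1") drops below the accurate post ("a2"), and only the former receives a warning label; real systems would of course depend on the accuracy of the underlying classifier.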

Table 1: Potential interventions, and the risk pathways and risk factors that they target.

(1) Fact-checking
● Multiple meta-analyses suggest that fact-checks successfully reduce belief in misinformation.
● Caveats: Fact-checks may be less effective in polarized contexts.
Risk pathway: Exposure → Belief → Sharing
Risk factors (addressed and unaddressed): Social identity, polarization, ideology, thinking styles, cognition; may be less effective when sharing is not driven by belief (e.g., need for chaos and trolling).

(2) Providing psychological resources (e.g., media literacy, inoculation, etc.)
● “Nudging” people to think about accuracy can reduce belief in misinformation.
● “Pre-bunking,” or preemptively inoculating people against misinformation strategies.
● Caveats: Effective when people are motivated to be accurate.
Risk pathways: Exposure → Belief → Sharing; Exposure → Sharing
Risk factors (addressed and unaddressed): These interventions have been shown across thinking styles (analytic vs. reflective) and political orientations; may be slightly less effective for males and older adults.

(3) Removing Bad Actors
● Removing producers of misinformation by deleting their accounts or using algorithms to downrate their content.
● Caveats: A lack of transparency behind the rationale for content moderation from social media platforms can lead to backlash.
Risk pathways: Exposure → Belief → Sharing; Exposure → Sharing
Risk factors (addressed and unaddressed): Need for chaos and trolling behavior; reduces exposure to misinformation.

(4) Accuracy Incentives
● Incentivizing accurate responses reduces polarized responses to politically-charged facts.
● Reminding politicians about reputational incentives decreased the amount of false news shared by politicians.
● Caveats: Only works when people can accurately distinguish between true and false news.
Risk pathways: Exposure → Belief → Sharing; Exposure → Sharing
Risk factors (addressed and unaddressed): Social identity, polarization, ideology, thinking styles; improves motivation (but not necessarily ability) to distinguish between true and false news.

A roadmap for future research


Our review of the literature has revealed important areas where more work should be
done to advance our theoretical understanding of these processes as well as provide greater
utility to policy makers. For instance, more work is needed not only to investigate the underlying psychology, but also to develop richer theoretical frameworks. To date, many studies have focused on single (and often narrow) theoretical explanations. For instance, many papers have manipulated the key variable of interest while controlling for other factors and grounded their explanation in a single theoretical framework. As such, the evidence for many theoretical claims often explains a relatively small proportion of variance, which offers less value for policy interventions. What is largely missing from the literature is a concerted effort to
account for multiple explanatory factors in a single framework. We have made such an attempt
here.
As Kurt Lewin famously noted, there is “nothing so practical as a good theory.” Our
model provides a framework for understanding how different factors might influence belief and
sharing in social networks. For instance, our model suggests that minimizing exposure or
limiting sharing might be more effective than altering beliefs (e.g., fact-checking) since exposure
and sharing are fundamental to all forms of misinformation dissemination, whereas belief is
not. More conceptual work in this vein would not only provide a more comprehensive theory,
but also allow scholars to predict the belief and spread of misinformation with greater
accuracy. This model also lays out potential directions for future work. For instance, we
included two light grey paths in our model (from sharing to belief and from sharing to
exposure). The link from sharing to exposure reflects the need to understand how these
dynamics unfold in large, online social networks (see Figure 2), whereas the link from sharing to belief connects to central questions in social psychology (e.g., does sharing misinformation reinforce belief in that information due to self-perception or consistency needs?). We
encourage future work that both tests and expands on this model.
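As a concrete starting point for such formal extensions, the minimal simulation sketch below (in Python) instantiates the model’s pathways on a toy random network: exposure probabilistically produces belief, belief (and, less often, exposure alone) produces sharing, and each share exposes a user’s neighbors, closing the sharing-to-exposure loop. All parameter values, network assumptions, and function names are arbitrary placeholders for illustration, not empirical estimates.

import random

def simulate_spread(n_users=1000, n_neighbors=8, p_believe=0.3,
                    p_share_belief=0.4, p_share_no_belief=0.05,
                    n_seeds=5, n_steps=10, seed=42):
    rng = random.Random(seed)
    # Toy random network: each user is linked to a fixed number of other users.
    neighbors = {u: rng.sample(range(n_users), n_neighbors) for u in range(n_users)}
    exposed = set(rng.sample(range(n_users), n_seeds))   # initial exposure to the false claim
    believed, shared = set(), set()
    for _ in range(n_steps):
        new_shares = set()
        for user in exposed - shared:
            # Path 1 / B: exposure can produce belief.
            if user in believed or rng.random() < p_believe:
                believed.add(user)
                p_share = p_share_belief       # Path 2: belief increases sharing
            else:
                p_share = p_share_no_belief    # Path 3: sharing without belief
            if rng.random() < p_share:
                new_shares.add(user)
        shared |= new_shares
        # Feedback path: each share exposes the sharer's neighbors.
        for user in new_shares:
            exposed.update(neighbors[user])
    return len(exposed), len(believed), len(shared)

print("exposed, believed, shared:", simulate_spread())

Even in this stripped-down form, the simulation illustrates why limiting exposure or sharing can constrain overall spread more than shifting belief alone, because sharing continually regenerates exposure downstream.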
Another important issue in the literature is the possibility of alternative explanations or limited domains of generalizability. For instance, many of the findings on motivated reasoning could be equally explained by differences in prior information exposure. For example, people who spend 30 years watching the same partisan news sources may be unlikely to update their beliefs in the face of a single new piece of information presented during a study. This failure to update one’s beliefs in light of a novel piece of information might even be considered “rational”. Of course, it hardly seems rational for people to believe and spread false claims. As such, it might be more fruitful to understand how people are motivated to attend to and read hyper-partisan or low-quality news sources, since these motives might underlie differences in exposure that precede motivated reasoning (see Xiao, Coppin & Van Bavel, 2017).
Although our paper has focused on political psychology, there are many other fields
tackling this issue. As such, future research should incorporate greater interdisciplinary
collaboration to harness the methods and insights at the intersection of psychology and other
fields. As we mentioned above, we need to better understand how political psychology
interfaces with incentives to create and spread misinformation (e.g., economic and political
incentives for individuals, nations and platforms) as well as the design features and algorithms
that amplify this problem. One promising interdisciplinary approach is computational social
science, where social scientists model human behavior using large real-world datasets.
However, these datasets are often correlational in nature, making it difficult to infer causation. As such, we encourage these scholars to combine the rich data and ecological validity of this field with the careful experiments necessary to assess causal relationships. This
approach will likely reveal the most promising targets for field interventions and policy change.
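As a simple illustration of this workflow, the sketch below fits a descriptive model of sharing behavior to a hypothetical observational dataset; the file name and variables are placeholders we introduce purely for illustration, and, as emphasized above, such estimates remain associational unless paired with experimental designs.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per user-article pair with a binary "shared" outcome.
# The file and column names are illustrative placeholders, not a real dataset.
df = pd.read_csv("sharing_events.csv")  # columns: shared, copartisan_source, prior_exposures, age

# Logistic regression: association between sharing and source co-partisanship,
# adjusting for prior exposure and age. Coefficients describe associations only;
# causal claims would require experimental or quasi-experimental designs.
model = smf.logit("shared ~ copartisan_source + prior_exposures + age", data=df).fit()
print(model.summary())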
Moving forward, it is crucial that we begin to expand our understanding of exposure to,
sharing of, and belief in misinformation beyond just a single online platform (e.g., using Twitter
data). The vast majority of research on social media to date has been conducted on studies of
Twitter data (see Tucker et al. 2018), but Twitter is far from the only -- or most popular -- social
media platform, either globally or in the United States. According to data from the Pew
Research Center, YouTube and Facebook are vastly more popular than Twitter, and platforms
such as Instagram, Snapchat, and LinkedIn have similar numbers of users, to say nothing of
TikTok, which claims as of the summer of 2020 to have over 100 million monthly users in the
United States, surpassing Twitter.6 Likewise, encrypted messaging platforms like WhatsApp and
Telegram are extremely popular outside North America and may be the primary platforms for
misinformation in those contexts. Moreover, we also know that many people actually “live”
their virtual lives on multiple platforms, and thus work focusing on a single platform incorrectly
characterizes the information environment that many inhabit in the digital age. Finally, these
platforms interface with mainstream media, meaning that a holistic understanding of
misinformation will require understanding how these different mediums work together to
spread misinformation.
Besides simply using different platforms as venues for conducting research, we believe
one promising area for future research is to actively theorize about the differences between
platforms that we would expect to interact with the various political, psychological, and
personality based factors we have identified above (e.g. Bossetta, 2018 for a platform-based
analysis applied to political campaigning). We propose the following three distinctions, all of
which could affect theory, research design, and policy implications:
6 www.pewresearch.org/internet/fact-sheet/social-media/; https://fortune.com/2020/08/24/tiktok-claims-user-numbers-snapchat-twitter/

1) Audience size and composition: Different platforms -- especially those with smaller
audiences -- may be inhabited by, or specifically targeted towards, different people or
specific subgroups of the population. For instance, platforms that are inhabited
primarily by younger users may be subject to different patterns of behavior than those
inhabited by a more equitable spread of users across generations. Similarly, the
consequences of sharing misinformation among small, homogeneous groups of people
(think platforms organized by online bulletin boards divided by topics such as Reddit or
4chan) may be different than when the audience is more heterogeneous, possibly
increasing the likelihood of information being fact checked.
2) Platform Affordances: Social media platforms differ not only in terms of their audience,
but also in terms of the media format by which information is shared (Bossetta, 2018;
Kreiss, Lawrence, and McGregor 2018; DeVito, Birnholtz, and Hancock 2017). Twitter and
Reddit rely heavily on text, TikTok and YouTube almost exclusively on video, Instagram
primarily on images, and Facebook contains a mix of all of these. Theorizing about the
relationships between the types of political and psychological factors we have
highlighted above and the media format should yield important insights about the
nature of information spread across these different platforms (Brady et al., 2020).
3) Platform Norms: One under-researched source of differences across platforms that can
be both related and orthogonal to platform affordances are the norms of different
platforms. In the 2014 Ukrainian Revolution of Dignity, anti-regime activists tended to
rely more on Twitter whereas regime supporters were more likely to use vKontakte, a
Facebook-like platform popular in the former Soviet space (Metzger & Tucker, 2017). In
the United States, users of LinkedIn and Facebook will notice a similar type of format to
the presentation of posts, but a very different tone and approach to discussing politics in
particular. Specifically, LinkedIn is used primarily for career social networking (with
personal profiles that look like resumes with links to employers) and the tone is more
professional. Theorizing about the causes and effects of these differences and norms
may be another area in which political psychology can inform our understanding of the
spread and consumption of misinformation online.

What all of these different research agendas have in common, though, is that they
require access to data that is currently locked up inside of social media platforms. It is difficult
to make theoretical progress without access to the most relevant data sources. Persily and
Tucker (2020) conclude their edited volume on Social Media and Democracy by noting that for
any study of human behavior, the digital information age is both the best of times -- thanks to
digital trace data, we have more data that could be used to study people’s attitudes and
behavior than any previous moment in human history -- and the worst of times, because the
vast majority of that data is only available for analysis if huge, powerful companies constrained
by complex legal regulations decide to make that data available. As such, scholars should work with platforms when possible, pursue data-gathering strategies that do not require the cooperation of platforms, and advocate for government regulations to make more of the data
“owned” by social media platforms available for analysis. Advancing research and theory on
misinformation is ultimately dependent on obtaining access to these data.

Conclusion
The current paper provides an overview of the psychology involved in the belief and
spread of misinformation, provides a model of these processes, and outlines some potential
solutions and interventions. With the massive growth of social media over the past decade, we
have seen the emergence of new forms of misinformation ranging from fake news purveyors to
troll farms to deep fakes. It is impossible to anticipate the role that misinformation will take in
the coming years--and how it might escalate in the hands of new technologies, like artificial
intelligence--but it seems certain that this issue will continue to grow and evolve. This provides
a level of urgency for scholars to study the impacts of different platforms, design features, and
types of misinformation and how these interact with basic elements of human psychology, and
then inform policy makers on how to integrate suitable strategies. We believe that more work is critically needed in this area and should be a priority for funding agencies, alongside more applied work in collaboration with organizations and policy makers.
We have provided a model for both understanding the psychology of misinformation
and where we believe future work should proceed on these issues. By highlighting key elements
of social, political and cognitive psychology involved in the belief and spread of misinformation,
we have provided an overview of the empirical work in this area. However, there is a need for a
cross-discipline unifying theory that operates at multiple levels of analysis and engages with
scholarship in other disciplines. As we note above, this is an area where excellent work is
emerging from multiple disciplines, including communications, computer science, political
science, data science, and sociology, and psychologists will benefit from building theoretical
frameworks that align with insights and scholarship in these adjacent fields.
From a policy perspective, we believe the insights of the current model will prove useful
in understanding why regular people are susceptible to misinformation and might share it as
well as specific strategies that policy makers can implement. There is growing evidence that a
small number of bad actors (e.g., propagandists, conspiracy theorists, foreign agents) produce a
huge volume of false information. More puzzling is why millions of people might be drawn to a
sloppy YouTube documentary full of misinformation about the pandemic, or to a fake news
story about the Pope endorsing Donald Trump. In many cases, this content clearly lacks the
professional veneer or quality reporting of traditional media coverage and yet people not only
believe it but actively share the misinformation with their own family, friends, and colleagues.
Understanding the psychological factors that we outlined in the current paper offers insights
into these motives.
Looking under the hood into the mental operations guiding the belief and spread of
misinformation may also be instrumental to minimizing the infodemic. If social media
companies want to alter their design to minimize the spread of conspiracy theories, for
example, it likely helps to understand if increasing deliberation or hiding political identity cues
might be more or less effective in addressing this issue. For this reason, we encourage policy
makers to use the current work to guide intervention creation, but also to test these interventions and share their data openly before scaling them. Through this process, we will
likely have more success in fending off the current tsunami of misinformation flowing across
the internet.
References

Abramowitz, A. I., & Webster, S. W. (2018). Negative partisanship: Why Americans dislike
parties but behave like rabid partisans. Political Psychology, 39, 119–135.

Alfano, M., Iurino, K., Stey, P., Robinson, B., Christen, M., Yu, F., & Lapsley, D. (2017).
Development and validation of a multi-dimensional measure of intellectual humility.
PloS one, 12, e0182950.

Allcott, H., Gentzkow, M., & Yu, C. (2019). Trends in the diffusion of misinformation on
social media. Research & Politics, 6, 2053168019848554.

Allport, F. H., & Lepkin, M. (1945). Wartime rumors of waste and special privilege: why
some people believe them. The Journal of Abnormal and Social Psychology, 40, 3-36.

Altay, S., Hacquin, A. S., & Mercier, H. (2019). Why do so few people share fake news? It
hurts their reputation. PsyArXiv.

Aslett, K., Godel, W., Sanderson, Z., Persily, N., Tucker, J. A., Nagler, J., & Bonneau, R.
(2020). The truth about fake news: Measuring vulnerability to fake news online. Paper
presented at the 2020 Annual Meeting of the American Political Science Association.

Associated Press (2020). “Facebook to label misleading politicians’ posts, including Trump’s
—Los Angeles Times”. Retrieved from https://www.latimes.com/business/technology/story/2020-06-26/facebook-following-twitter-will-label-posts-that-violate-its-rules-including-trumps

Bäck, H., & Carroll, R. (2018). Polarization and Gridlock in Parliamentary Regimes. The
Legislative Scholar, 3, 2-5.

Bago, B., Rand, D. G., & Pennycook, G. (2020). Fake news, fast and slow: Deliberation
reduces belief in false (but not true) news headlines. Journal of Experimental
Psychology: General, 149, 1608-1613.

Banda, K. K., & Cluverius, J. (2018). Elite polarization, party extremity, and affective
polarization. Electoral Studies, 56, 90–101.

Baron, J., & Jost, J. T. (2019). False equivalence: Are liberals and conservatives in the United
States equally biased?. Perspectives on Psychological Science, 14, 292-303.

Batailler, C., Brannon, S. M., Teas, P. E., & Gawronski, B. (in press). A Signal Detection
Approach to Understanding the Identification of Fake News. Perspectives on
Psychological Science.

Berinsky, A. and Wittenberg, C. (2020). Misinformation and its Corrections. In N. Persily & J.
A. Tucker (Eds.), Social Media and Democracy (pp. 163-198). Cambridge: Cambridge
University Press.

Berinsky, A. J. (2017). Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science, 47, 241–262.

Bond, S. (2020). Conservatives flock to Mercer-funded Parler, claim censorship on Facebook and Twitter. NPR.org. Retrieved from www.npr.org/2020/11/14/934833214/conservatives-flock-to-mercer-funded-parler-claim-censorship-on-facebook-and-twi

Bossetta, M. (2018). The digital architectures of social media: Comparing political campaigning on Facebook, Twitter, Instagram, and Snapchat in the 2016 US election. Journalism & Mass Communication Quarterly, 95, 471-496.

Brader, T., De Sio, L., Paparo, A., & Tucker, J.A., (2020). “Where you lead, I will follow”:
Partisan cueing on high‐salience issues in a turbulent multiparty system. Political
Psychology.

Brader, T. A., Tucker, J. A., & Duell, D. (2012). Which parties can lead opinion? Experimental
evidence on partisan cue taking in multiparty democracies. Comparative Political
Studies, 46, 1485–1517.

Brady, W. J., Crockett, M., & Van Bavel, J. J. (2020). The MAD Model of Moral Contagion:
The role of motivation, attention and design in the spread of moralized content online.
Perspectives on Psychological Science.

Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes
the diffusion of moralized content in social networks. Proceedings of the National
Academy of Sciences, 114, 7313-7318.

Brashier, N. M., & Schacter, D. L. (2020). Aging in an era of fake news. Current Directions in
Psychological Science, 29, 316-323

Bronstein, M. V., Pennycook, G., Bear, A., Rand, D. G., & Cannon, T. D. (2019). Belief in fake
news is associated with delusionality, dogmatism, religious fundamentalism, and
reduced analytic thinking. Journal of Applied Research in Memory and Cognition, 8,
108-117.

Bullock, J. (2007). Experiments on partisanship and public opinion: Party cues, false beliefs,
and Bayesian updating. Ph.D. dissertation, Stanford University

Bullock, J. G., Gerber, A. S., Hill, S. J., & Huber, G. A. (2013). Partisan bias in factual beliefs
about politics. National Bureau of Economic Research.

Burdein, I., Lodge, M., & Taber, C. (2006). Experiments on the automaticity of political
beliefs and attitudes. Political Psychology, 27, 359-371.

Calvillo, D. P., Ross, B. J., Garcia, R. J., Smelter, T. J., & Rutchick, A. M. (2020). Political
Ideology Predicts Perceptions of the Threat of COVID-19 (and Susceptibility to Fake
News About It). Social Psychological and Personality Science.

Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American Voter. New
York: Wiley.

Carlson, E. (2016). Finding partisanship where we least expect it: Evidence of partisan bias
in a new African democracy. Political Behavior, 38, 129-154.

Chan, M. S., Jones, C. R., Hall Jamieson, K., & Albarracín, D. (2017). Debunking: A meta-
analysis of the psychological efficacy of messages countering misinformation.
Psychological Science, 28, 1531–1546.

Cikara, M., Van Bavel, J., J., Ingbretsen, Z. A., & Lau, T. (2017). Decoding “Us” and “Them”:
Neural representations of generalized group concepts. Journal of Experimental
Psychology: General, 146, 621-631.

Clayton, K., Blair, S., Busam, J. A., Forstner, S., Glance, J., Green, G., ... & Sandhu, M. (2019).
Real solutions for fake news? Measuring the effectiveness of general warnings and
fact-check tags in reducing belief in false stories on social media. Political Behavior, 1-
23.

Coan, T. G., Merolla, J. L., Stephenson, L. B., & Zechmeister, E. J. (2008). It’s not easy being
green: Minor party labels as heuristic aids. Political Psychology, 29, 389–405.

Converse, P. E. (1969). Of time and partisan stability. Comparative political studies, 2(2),
139-171.

Coppock, A. E. (2016). Positive, small, homogeneous, and durable: Political persuasion in response to information. Doctoral dissertation, Columbia University.

Cunningham, W. A., Zelazo, P. D., Packer, D. J., & Van Bavel, J. J. (2007). The iterative
reprocessing model: A multilevel framework for attitudes and evaluation. Social
Cognition, 25, 736-760.

Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-
analytic review of the truth effect. Personality and Social Psychology Review, 14, 238-
257.

Devito, M. A., Birnholtz, J., & Hancock, J. T. (2017). Platforms, people, and perception:
Using affordances to understand self-presentation on social media. Proceedings of the
ACM Conference on Computer Supported Cooperative Work, CSCW, 740–754.

Dietz, T. (2020). Political events and public views on climate change. Climatic Change, 161,
1–8.

Ditto, P. H., Liu, B. S., Clark, C. J., Wojcik, S. P., Chen, E. E., Grady, R. H., ... & Zinger, J. F.
(2018). At least bias is bipartisan: A meta-analytic comparison of partisan bias in
liberals and conservatives. Perspectives on Psychological Science, 14, 273-291.

Douglas, K. M., Sutton, R. M., & Cichocka, A. (2017). The psychology of conspiracy theories.
Current Directions in Psychological Science, 26(6), 538–542.

Druckman, J. N. (2001). Using credible advice to overcome framing effects. Journal of Law,
Economics, and Organization, 17, 62–82.

Dunlap, R. E. (2017). Bayesian versus politically motivated reasoning in human perception of climate anomalies. Environmental Research Letters, 12, 114004.

Ecker, U. K., Hogan, J. L., & Lewandowsky, S. (2017). Reminders and repetition of
misinformation: Helping or hindering its retraction?. Journal of Applied Research in
Memory and Cognition, 6, 185-192.

Farhall, K., Gibbons, A., & Lukamto, W. (2019). Political elites’ use of fake news discourse across communications platforms. International Journal of Communication, 13, 4353–4375.

Fazio, L. K., Rand, D. G., & Pennycook, G. (2019). Repetition increases perceived truth
equally for plausible and implausible statements. Psychonomic Bulletin & Review, 26,
1705-1710.

Finkel, E. J., et al. (2020). Political sectarianism in America. Science, 370, 533–536.



Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19, 25-42.

Gerber, A. S., & Huber, G. A. (2010). Partisanship, political control, and economic
assessments. American Journal of Political Science, 54, 153-173.

Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of
fake news spread on Facebook. Science Advances, 5, eaau4586.

Guess, A., Nyhan, B., & Reifler, J. (2018). Selective exposure to misinformation: Evidence
from the consumption of fake news during the 2016 US presidential campaign.
European Research Council, 9, 4.

Guess, A., Nyhan, B., & Reifler, J. (2020). Exposure to untrustworthy websites in the 2016
US election. Nature Human Behaviour, 4, 472–480.

Guess, A. M., & Lyons, B. A. (2020). Misinformation, disinformation, and online propaganda. In N. Persily & J. A. Tucker (Eds.), Social Media and Democracy (pp. 2–33). Cambridge: Cambridge University Press.

Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N.
(2020). A digital media literacy intervention increases discernment between
mainstream and false news in the United States and India. Proceedings of the National
Academy of Sciences, 117, 15536–15545.

Hahnel, U. J. J., Mumenthaler, C., & Brosch, T. (2020). Emotional foundations of the public
climate change divide. Climatic Change, 161, 9–19.

Hartman, T. K., & Newmark, A. J. (2012). Motivated reasoning, political sophistication, and
associations between President Obama and Islam. PS: Political Science & Politics, 45,
449-455.

Hasell, A., & Weeks, B. E. (2016). Partisan provocation: The role of partisan news use and
emotional responses in political information sharing in social media. Human
Communication Research, 42, 641–661.

Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential
validity. Journal of Verbal Learning and Verbal Behavior, 16, 107-112.

Hogg, M. A., & Reid, S. A. (2001). Social identity, leadership, and power. In A. Y. Lee-Chai &
J. A. Bargh (Eds.), The use and abuse of power: Multiple perspectives on the causes of
corruption (p. 159–180). Psychology Press.

Hornsey, M. J., & Fielding, K. S. (2020). Understanding (and Reducing) Inaction on Climate
Change. Social Issues and Policy Review, 14(1), 3–35.

Hunt, J. (2017, November). ‘Fake news’ named Collins Dictionary’s official word of the year for 2017. The Independent.

Iyengar, S., Sood, G., & Lelkes, Y. (2012). Affect, not ideology: A social identity perspective
on polarization. Public Opinion Quarterly, 76, 405–431.

Jiang, S., Robertson, R. E., & Wilson, C. (2019). Bias misperceived: The role of partisanship
and misinformation in YouTube comment moderation. Proceedings of the 13th
International Conference on Web and Social Media, ICWSM 2019, (Icwsm), 278–289.

Johnson, M. K., Rowatt, W. C., & Labouff, J. P. (2012). Religiosity and prejudice revisited: In-
group favoritism, out-group derogation, or both? Psychology of Religion and
Spirituality, 4, 154–168.

Jost, J. T., Federico, C. M., & Napier, J. L. (2009). Political ideology: Its structure, functions,
and elective affinities. Annual Review of Psychology, 60, 307-337.

Jost, J. T., Hennes, E. P., & Lavine, H. (2013). “Hot” political cognition: Its self-, group-, and
system-serving purposes. Oxford Handbook of Social Cognition, 851-875.

Juul, J. S., & Porter, M. A. (2019). Hipsters on networks: How a minority group of
individuals can lead to an antiestablishment majority. Physical Review E, 99, 022313.

Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. (2017). Motivated numeracy and
enlightened self-government. Behavioural Public Policy, 1, 54-86.

Kahan, D. M. (2012). Ideology, motivated reasoning, and cognitive reflection: An experimental study. Judgment and Decision Making, 8, 407-424.

Kata, A. (2010). A postmodern Pandora’s box: Anti-vaccination misinformation on the Internet. Vaccine, 28(7), 1709–1716. https://doi.org/10.1016/j.vaccine.2009.12.022

Kevins, A., & Soroka, S. N. (2018). Growing apart? Partisan sorting in Canada, 1992–2015.

Kreiss, D., Lawrence, R. G., & McGregor, S. C. (2017). In their own words: Political
practitioner accounts of candidates, audiences, affordances, genres, and timing in
strategic social media use. Political Communication, 35, 8–31.

Law, S., Hawkins, S. A., & Craik, F. I. (1998). Repetition-induced belief in the elderly:
Rehabilitating age-related memory deficits. Journal of Consumer Research, 25, 91-107.

Lawson, A., & Kakkar, H. (2020). Of pandemics, politics, and personality: The role of
conscientiousness and political ideology in sharing of fake news. PsyArXiv.

Leist, A. K. (2013). Social media use of older adults: A mini-review. Gerontology, 59, 378-
384.

Lelkes, Y., & Westwood, S. J. (2017). The limits of partisan prejudice. The Journal of Politics,
79, 485–501

Lewandowsky, S., Cook, J., Ecker, U. K. H., ... & Newman, E. J. (2020). The Debunking Handbook 2020: A consensus-based handbook of recommendations for correcting or preventing misinformation.

Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation
and its correction: Continued influence and successful debiasing. Psychological Science
in the Public Interest, 13, 106–131.

Lodge, M., & Taber, C. S. (2005). The automaticity of affect for political leaders, groups, and
issues: An experimental test of the hot cognition hypothesis. Political Psychology, 26,
455-482.

Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., & Hertwig, R. (2020). How behavioural
sciences can promote truth, autonomy and democratic discourse online. Nature
Human Behaviour, 1–8.

Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2020). Long-term
effectiveness of inoculation against misinformation: Three longitudinal experiments.
Journal of Experimental Psychology: Applied.

Martel, C., Pennycook, G., & Rand, D. G. (2020). Reliance on emotion promotes belief in
fake news. Cognitive research: principles and implications, 5, 1-20.

Matthews, D. (2017). Donald Trump has tweeted climate change skepticism 115 times.
Here’s all of it. Retrieved from

www.vox.com/policy-and-politics/2017/6/1/15726472/trump-tweets-global-warming-
paris-climate-agreement

McCright, A. M., & Dunlap, R. E. (2011). The politicization of climate change and
polarization in the American public’s views of global warming, 2001-2010. Sociological
Quarterly, 52, 155–194.

McGuire, W. (1964). Inducing resistance to persuasion. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology.

McNamara, J., & Houston, A. (1980). The application of statistical decision theory to animal
behaviour. Journal of Theoretical Biology, 85, 673-690.

Meffert, M. F., Chung, S., Joiner, A. J., Waks, L., & Garst, J. (2006). The effects of negativity
and motivated information processing during a political campaign. Journal of
Communication, 56, 27-51.

Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an
argumentative theory. Behavioral and Brain Sciences, 34, 57-111.

Metzger, M. M., & Tucker, J. A. (2017). Social media and EuroMaidan: A review essay.
Slavic Review, 76(1), 169-191.

Meyer, M., Alfano, M., & De Bruin, B. (2020). Epistemic vice predicts acceptance of Covid-
19 misinformation. Available at SSRN 3644356.

Meyer, M. (2019). Fake News, Conspiracy, and Intellectual Vice. Social Epistemology
Review and Reply Collective, 8, 9-19.

Mosseri, A. (2017). Working to stop misinformation and false news. Facebook Newsroom.

Mourão, R. R., & Robertson, C. T. (2019). Fake news as discursive integration: An analysis of
sites that publish false, misleading, hyperpartisan and sensational information.
Journalism Studies, 20, 2077–2095.

Murphy, C. (2020). “Twitter Tests 'Misleading Information' Label When Users Try to 'Like' a Tweet with Misinformation”. USA Today, Gannett Satellite Information Network. Retrieved from www.usatoday.com/story/tech/2020/11/10/twitter-tests-add-misleading-information-label-when-liking-tweets/6231896002/.

Nolan, C. (Director). (2008). The Dark Knight [Film]. Warner Bros.

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political
misperceptions. Political Behavior, 32, 303–330.

Nyhan, B., & Reifler, J. (2015). The effect of fact-checking on elites: A field experiment on
US state legislators. American Journal of Political Science, 59, 628–640.

Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A., & Petersen, M. B. (2020). Partisan
polarization is the primary psychological motivation behind “fake news” sharing on
Twitter. Unpublished manuscript.

Paige, L. E., Fields, E. C., & Gutchess, A. (2019). Influence of age on the effects of lying on
memory. Brain and Cognition, 133, 42-53.

Pennycook, G., & Rand, D. G. (2019). Fighting misinformation on social media using
crowdsourced judgments of news source quality. Proceedings of the National Academy
of Sciences, 116, 2521–2526.

Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news
is better explained by lack of reasoning than by motivated reasoning. Cognition, 188,
39-50

Pennycook, G., & Rand, D. G. (2020). Who falls for fake news? The roles of bullshit
receptivity, overclaiming, familiarity, and analytic thinking. Journal of personality, 88,
185-200.

Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The implied truth effect:
Attaching warnings to a subset of fake news headlines increases perceived accuracy of
headlines without warnings. Management Science.

Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived
accuracy of fake news. Journal of Experimental Psychology: General, 147, 1865-1880.

Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the
reception and detection of pseudo-profound bullshit. Judgment and Decision making,
10, 549-563.

Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A. A., Eckles, D., & Rand, D. G. (2019).
Understanding and reducing the spread of misinformation online. PsyArXiv.

Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19
misinformation on social media: Experimental evidence for a scalable accuracy-nudge
intervention. Psychological Science, 31, 770–780.

Pereira, A., Harris, E., & Van Bavel, J. J. (2020). Identity concerns drive belief: The impact of
partisan identity on the belief and spread of true and false news. PsyArXiv.

Persily, N., & Tucker, J. (Eds.). (2020). Social Media and Democracy: The State of the Field,
Prospects for Reform (SSRC Anxieties of Democracy). Cambridge: Cambridge University
Press.

Persily, N. (2017). The 2016 US Election: Can democracy survive the internet?. Journal of
democracy, 28, 63-76.

Petersen, M., Osmundsen, M., & Arceneaux, K. (2020). The “need for chaos” and
motivations to share hostile political rumors. Unpublished manuscript.

Pew Research Center (2020). Demographics of Social Media Users and Adoption in the
United States. Retrieved from
https://www.pewresearch.org/internet/fact-sheet/social-media

Poulin, M. J., & Haase, C. M. (2015). Growing to trust: Evidence that trust increases and
sustains well-being across the life span. Social Psychological and Personality Science, 6,
614-621.

Prull, M. W., Dawes, L. L. C., Martin III, A. M., Rosenberg, H. F., & Light, L. L. (2006).
Recollection and familiarity in recognition memory: adult age differences and
neuropsychological test correlates. Psychology and Aging, 21, 107.

Renström, E. A., Bäck, H. & Schmeisser, Y. (2020). Vi ogillar olika. Om affektiv polarisering
bland svenska väljare in Ulrika Andersson, Anders Carlander & Patrik Öhberg (Ed.)
Regntunga skyar. Göteborgs universitet: SOM-institutet.

Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L. J., Recchia, G., … van
der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the
world. Royal Society Open Science, 7, 201199.

Roozenbeek, J. & van der Linden, S. (2019). Fake news game confers psychological
resistance against online misinformation. Palgrave Communications, 6, 65.

Ross, L., Lepper, M. R., & Hubbard, M. (1975). Perseverance in self-perception and social
perception: biased attributional processes in the debriefing paradigm. Journal of
Personality and Social Psychology, 32, 880.

Roth, Y., & Pickles, N. (2020). Updating our approach to misleading information. Twitter
Blog.

Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis: a mapping
between three moral emotions (contempt, anger, disgust) and three moral codes
(community, autonomy, divinity). Journal of Personality and Social Psychology, 76, 574.

Samuels, D., & Zucco Jr, C. (2014). The power of partisanship in Brazil: Evidence from
survey experiments. American Journal of Political Science, 58, 212-225.

Sarlin, B. (2018). ‘Fake news’ went viral in 2016. This expert studied who clicked. NBC.

Schulz, A., Wirth, W., & Müller, P. (2020). We are the people and you are fake news: A
social identity approach to populist Citizens’ false consensus and hostile media
perceptions. Communication Research, 47, 201–226.

Schwarz, N., Newman, E., & Leach, W. (2016). Making the truth stick & the myths fade:
Lessons from cognitive psychology. Behavioral Science & Policy, 2, 85–95.

Shao, C., Ciampaglia, G. L., Flammini, A., & Menczer, F. (2016, April). Hoaxy: A platform for
tracking online misinformation. In Proceedings of the 25th international conference
companion on world wide web (pp. 745-750).

Shearer, E., & Gottfried, J. (2017). News use across social media platforms 2017. Pew
Research Center, Journalism and Media.

Silverman, C. (2016). This analysis shows how viral fake election news stories outperformed
real news on Facebook. BuzzFeed News.

Spencer, W. D., & Raz, N. (1995). Differential effects of aging on memory for content and
context: a meta-analysis. Psychology and Aging, 10, 527.

Stukal, D., Sanovich, S., Bonneau, R., & Tucker, J. A. (2017). Detecting bots on Russian
political Twitter. Big Data, 5, 310–324.

Taber, C. S., Cann, D., & Kucsova, S. (2009). The motivated processing of political
arguments. Political Behavior, 31, 137-155.

Tajfel, H., & Turner, J. C. (1986). The social identity theory of intergroup behaviour. In Key Readings in Social Psychology (pp. 276–293). Psychology Press.

Tajfel, H. (1970). Experiments in intergroup discrimination. Scientific American, 223, 96–102.

Tambini, D. (2017). Fake news: Public policy responses. LSE Media Policy Project Series.

Tetlock, P. E. (1985). Accountability: A social check on the fundamental attribution error. Social Psychology Quarterly, 48, 227-238.

The Partisan Divide on Political Values Grows Even Wider. (2017). Pew Research Center,
Washington, D.C. Retrieved from www.people-press.org/2017/10/05/the-partisan-
divide-on-political-values-grows-even-wider/.

Tomz, M., & Van Houweling, R. P. (2008). Candidate positioning and voter choice.
American Political Science Review, 303-318.

Törnberg, P. (2018). Echo chambers and viral misinformation: Modeling fake news as
complex contagion. PLoS One, 13, e0203958.

Torrey, N. L. (1961). Les Philosophes: The Philosophers of the Enlightenment and Modern Democracy. Capricorn Books, pp. 277-278.

Tucker, J. A., Guess, A., Barbera, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D., & Nyhan, B. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. Available at SSRN: https://ssrn.com/abstract=3144139

Van Bavel, J. J., & Pereira, A. (2018). The partisan brain: An identity-based model of
political belief. Trends in Cognitive Sciences, 22, 213-224.

van der Linden, S., Panagopoulos, C., & Roozenbeek, J. (2020). You are fake news: political
bias in perceptions of fake news. Media, Culture and Society, 42, 460–470.

van der Linden, S., Panagopoulos, C., Azevedo, F., & Jost, J. T. (2020). The paranoid style in
American politics revisited: An ideological asymmetry in conspiratorial thinking.
Political Psychology.

van der Linden, S. (2019). Countering science denial. Nature Human Behaviour, 3, 889–890.

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science,
359, 1146-1151.

Walker, M., & Gottfried, J. (2019). Republicans far more likely than democrats to say fact
checkers favor one side. Retrieved from
https://www.pewresearch.org/fact-tank/2019/06/27/republicans-far-more-likely-than-democrats-to-say-fact-checkers-tend-to-favor-one-side/

Walter, N., & Murphy, S. T. (2018). How to unring the bell: A meta-analytic approach to
correction of misinformation. Communication Monographs, 85, 423–441.

Walter, N., Cohen, J., Holbert, R. L., & Morag, Y. (2020). Fact-checking: A meta-analysis of
what works and for whom. Political Communication, 37, 350–375.

Wells, C., Shah, D. V., Pevehouse, J. C., Yang, J., Pelled, A., Boehm, F., ... & Schmidt, J. L.
(2016). How Trump drove coverage to the nomination: Hybrid media campaigning.
Political Communication, 33(4), 669-676.

Wong, J. C. (2020, October 6). Facebook to ban QAnon-themed groups, pages and
accounts in crackdown. The Guardian.
www.theguardian.com/technology/2020/oct/06/qanon-facebook-ban-conspiracy-
theory-groups

Wood, T., & Porter, E. (2019). The elusive backfire effect: Mass attitudes’ steadfast factual
adherence. Political Behavior, 41, 135–163.

Yazdany, J., & Kim, A. H. J. (2020). Use of Hydroxychloroquine and Chloroquine during the
COVID-19 Pandemic: What every clinician should know. Annals of Internal Medicine,
172, 754–755.

Zawadzki, S. J., Bouman, T., Steg, L., Bojarskich, V., & Druen, P. B. (2020). Translating
climate beliefs into action in a changing political landscape. Climatic Change, 161, 21–
42.

Zimerman, A., & Pinheiro, F. (2020). Appearances Can Be Deceptive: Political Polarization,
Agrarian Policy, and Coalitional Presidentialism in Brazil. Politics & Policy, 48, 339-371.
