Abstract
In the last decade there has been a proliferation of research on misinformation. One important aspect of this work that receives less attention than it should is exactly why misinformation is a problem. To adequately address this question, we must first look to its speculated causes and effects. We examined different disciplines (computer science, economics, history, information science, journalism, law, media, politics, philosophy, psychology, sociology) that investigate misinformation. The consensus view points to advancements in information technology (e.g., the Internet, social media) as a main cause of the proliferation and increasing impact of misinformation, with a variety of illustrations of the effects. We critically analyzed both issues. As to the effects, misbehaviors have not yet been reliably demonstrated empirically to be the outcome of misinformation; mistaking correlation for causation may have a hand in that perception. As to the cause, advancements in information technologies enable, as well as reveal, multitudes of interactions that represent significant deviations from ground truths through people’s new way of knowing (intersubjectivity). This, we argue, is illusory when understood in light of historical epistemology. Both doubts we raise are used to consider the costs to established norms of liberal democracy that come from efforts to target the problem of misinformation.
Keywords: misinformation and disinformation, intersubjectivity, correlation versus causation, free speech
The aim of this review is to answer the question, (Why) is misinformation a problem? We begin the main review with a discussion of definitions of “misinformation” because this, in part, motivated our pursuit to answer this question. Incorporating evidence from many disciplines helps us to examine the speculated effects and causes of misinformation, which give some indication of why it might be a problem. Answers in the literature reveal that advancements in information technology are the commonly suspected primary cause of misinformation. However, the reviewed literature shows considerable divergence regarding the assumed outcomes of misinformation. This may not be surprising given the breadth of disciplines involved; researchers in different fields observe effects from different perspectives. The fact that so many effects of misinformation are reported is not a concern as long as the direct causal link between misinformation and the aberrant behaviors it generates is clear. We emphasize that the evidence provided by studies investigating this relationship is weak. This exposes two issues: one that is empirical, as to the effects of misinformation, and one that is conceptual, as to the cause of the problem of misinformation. We argue that the latter issue has been oversimplified. Uniting the two issues, we propose that the alarm regarding the speculated relationship between misinformation and aberrant societal behaviors appears to be rooted in the increased opportunities, afforded by advancements in information technology, for people to “go it alone”—that is, to establish their own ways of knowing that increase deviations from ground truths.
We therefore propose our own conceptual lens from which to understand the cause of the concern that misinformation poses. It acknowledges modern information technology (e.g., Internet, social media) but goes beyond it to understand the roots of knowledge through historical epistemology—the study of (primarily scientific) knowledge, specifically concepts (e.g., objectivity, belief), objects (e.g., statistics, DNA), and the development of science (Feest & Sturm, 2011). The current situation is triggering alarm because it appears that the process of knowing is undergoing a transition whereby objectivity is secondary to intersubjectivity. Intersubjectivity, as we define it, is a coordination effort by two or more people to interpret entities in the world through social interaction. Owing to progress in information technologies, intersubjectivity is seen as being in opposition to the way ground truths are established by traditional institutions. However, the two processes are not necessarily at odds: first, the scientific endeavor is not itself devoid of intersubjective mechanisms; second, there is no clear evidence that intersubjectivity as a means for establishing truth is on the rise, beyond the fact that the Internet facilitates and exposes more clearly the interactions people are having. Critically, the causal connection between beliefs established in an intersubjective manner and aberrant behavior is also not established. In our concluding section, we discuss whether the efforts to reduce misinformation are proportionate to the actual or rather the perceived scale of the problem of misinformation. We also propose possible methodological avenues for future exploration that might help to better expose causal relations between misinformation and behavior.
Defining Misinformation and Other Associated Terms
What is misinformation? Several definitions of “misinformation” refer to it as “false,” “inaccurate,” or “incorrect” information (e.g., Qazvinian et al., 2011; Tan et al., 2015; Van der Meer & Jin, 2019) and as the antonym to information. By contrast, disinformation is false information that is also described as being intentionally shared (e.g., Levi, 2018). Information shared for malicious ends—to cause harm to an individual, organization, or country (Wardle, 2020)—is malinformation and can be either true (e.g., as with doxing, when private information is publicly shared) or false. A close cousin of the term disinformation is fake news (Anderau, 2021; Zhou & Zafarani, 2021), a term made popular by former U.S. President Donald Trump and a catchall under which a variety of examples fall (Tandoc et al., 2018). Similar to disinformation, “fake news” is defined as information presented as news that is intentionally and verifiably false (Allcott & Gentzkow, 2017; Anderau, 2021). This is different from satire, parody, and propaganda. However, rumor, often discussed alongside hearsay, gossip, or word of mouth, is perhaps the oldest relative in the misinformation family, with research dating back decades (for discussion, see Allport & Postman’s The Psychology of Rumor, 1947). A companion of humankind for millennia, a rumor is commonly defined as “unverified and instrumentally relevant information statements in circulation” (DiFonzo & Bordia, 2007, p. 31).
Yet these terms are used differently (e.g., Habgood-Coote, 2019; Karlova & Lee, 2011; Scheufele & Krause, 2019). For example, Wu et al. (2019) employed “misinformation” as an umbrella term to include all false or inaccurate information (unintentionally spread misinformation, intentionally spread misinformation, fake news, urban legends, rumor). This is arguably why Krause et al. (2022) concluded that misinformation has become a catchall term with little meaning. It is perhaps not surprising that these definitions are also contentious and that debates continue among scholars. For example, a point of discussion is the demarcation between information and misinformation, with some scholars arguing that this is a false dichotomy (e.g., Ferreira et al., 2020; Haiden & Althuis, 2018; Marwick, 2018; Tandoc et al., 2018). The problem with drawing a line between the two is that it ignores what it means to ascertain the truth. Krause et al. (2020) illustrated this in their real-world example of COVID-19, in which the efficacy of measures such as masks was initially unknown. Osman et al. (2022) demonstrated this with reference to the origins of the COVID-19 virus and the problems with prematurely labeling what was conspiracy and what was a viable scientific hypothesis. A continuum approach is an alternative to reductionist classifications into true or false information (e.g., Hameleers et al., 2021; Tandoc et al., 2018; Vraga & Bode, 2020), but the problem of how exactly truth is or ought to be established remains.
Some define truth by what it is not, rather than what it is. Paskin (2018) argued that fake news has no factual basis, which implies that truth is equated simply to facts. Other scholars make reference to ideas such as accuracy and objectivity (Tandoc et al., 2018) as well as to evidence and expert opinion (Nyhan & Reifler, 2010). Regarding what is false, many scholars define “disinformation” as intentional sharing of false information (e.g., Fallis, 2015; Hernon, 1995; Shin et al., 2018; Søe, 2017), but the problem is how to determine intent (e.g., Karlova & Lee, 2011; Shu et al., 2017). Given how fundamental the conceptual problems are, some have proposed that it is too early to investigate misinformation (Avital et al., 2020; Habgood-Coote, 2019).
For the purposes of this review, a commonly agreed-upon definition is not necessary (or likely possible). Instead, what we have done here is graphically represent various conceptions of information, misinformation, and other related phenomena (Fig. 1) with reference to some examples. We also distill what we view as a shared essential property of most of the definitions we have discussed here: information that departs from established ground truths. Three things are of note: First, we make no assertion about whether the information is unintentionally or deliberately designed to depart from ground truth; we note only that it does. Second, criteria for determining ground truth are evidently problematic given the conceptual obstacles already mentioned, so we do not attempt this. Rather, we return to discussing the issues around this later in the review. Third, in our view the essential property described is dynamic—what we refer to as dynamic lensing of information. This is necessary to reflect that, just as lensing is an optical property that can distort and magnify light, the status of information interpreted through various means (lenses) is liable to shift and over time to diverge from, or converge to, ground truth (for an example, see Fig. 1d).
Fig. 1.
Different ways of conceptualizing and contextualizing information and misinformation. In (a), we show dichotomous distinctions between information and misinformation using commonly discussed criteria from the literature (e.g., Levi, 2018; Qazvinian et al., 2011; Tan et al., 2015; Van der Meer & Jin, 2019). In (b) we show Hameleers, van der Meer, and Vliegenthart’s (2021) continuum “space of (un)truthfulness” that characterizes mis- or disinformation by degrees of truth and falsehood. In (c) we illustrate Tandoc et al.’s (2018) two-dimensional space, which is used to map what the authors refer to as examples of fake news, with additional examples of terms (in yellow) that we have attempted to map onto the described dimensions. In (d), Vraga and Bode’s (2020) expertise and evidence are used as criteria for contextualizing misinformation, with two examples to illustrate claims that were controversial at one time but over time became settled (either because they were verifiably true or verifiably false): Galileo’s 1632 challenge of the geocentric astronomical model in favor of the heliocentric model (the Catholic Church did not officially pardon him for heresy until 1992), and Charles Dawson’s (1913) discovery of the “missing link” from jaw and tooth remains in Piltdown (England) that were shown by Weiner, Oakley, and Clark (1953) to have been fabricated to simulate a fossilized skull. “Piltdown Man” was deemed a hoax by the scientific community.
The “Problem” of Misinformation
A review of the kind presented here has not yet been conducted, and so we present this as a starting point for future interdisciplinary reviews and meta-analyses of the causes and effects of misinformation that could extend or challenge the claims we are making here. To address the title question, we concentrated on theoretical and empirical articles that explicitly reference the effects of misinformation. We have tried to be comprehensive (if not exhaustive) by drawing on and synthesizing research across a wide range of disciplines (computer science, economics, history, information science, journalism, law, media, politics, philosophy, psychology, sociology). Our strategy involved searching for articles on Google Scholar containing the terms “misinformation” and “fake news.” However, many of these articles contained related terms such as “disinformation,” “rumor,” “posttruth,” “hoax,” “satire,” and “parody.” Because the effects of “misinformation” and related terms were rarely referred to as a problem explicitly, we initially did not add “problem” or its synonyms to our search terms and instead selected the most cited articles. To capture any specific references to the problem of misinformation, however, in a second step we examined both “misinformation” (and related terms) and “problem” (and related terms, such as “crisis,” “trouble,” “obstacle,” “dilemma,” and “challenge”).
The time frame chosen depended on the term. Interest in “fake news” boomed after 2016. Articles with more than 50 citations served as a starting point (albeit an arbitrary one) for our search between 2016 and 2022 (from a total of 25,300 search results); we allowed some exceptions, such as the highly cited Conroy et al. (2015) article on automatic detection of fake news. Research on the term “misinformation” traditionally focused on memory (e.g., Ayers & Reder, 1998; Frost, 2000) with applications to legal settings, specifically eyewitness studies (Wright et al., 2000). There are instances of misinformation in the more general sense, as applied to the news and the Internet, as early as Hernon’s (1995) exploratory study. To make things manageable, we limited our search by examining the most cited articles between 2000 and 2022 (from a total of 116,000 search results). Most studies made reference to the effects of misinformation or fake news in their introduction as motivation for their research, or in the conclusion to explain why their work has important implications. For others, examining the effects of misinformation was precisely the aim of the research. In total, we carefully inspected 149 articles that made reference to the causes or the effects of misinformation.
We observed that scholars broadly characterize the effects of misinformation in two ways: societal-level effects, which we group into four domains (media, politics, science, economics), and individual-level effects, which are psychologically informed (cognitive, behavioral). We reserve critical appraisals for the literature on the psychological effects of misinformation because they are fundamental to, and have direct implications for, societal-level effects.
Societal-level effects of misinformation
We will begin with those articles in which the effects of misinformation can be classified as general topical areas that impact society: media, politics, science, and economics.
Media
Determining the impact of misinformation for media, particularly news media, depends on whether readers can reliably distinguish between true and false news and between biased (left- or right-leaning) and objective content. Although some empirical studies show that readers can distinguish between true and false news (Barthel et al., 2016; Burkhardt, 2017; De Beer & Matthee, 2021; Posetti & Matthews, 2018; Shu et al., 2018; Waldman, 2018), Bryanov and Vziatysheva’s (2021) review indicates that overall the evidence is mixed. Luo et al. (2022) showed that news classified as misinformation can garner increased attention, gauged by the number of social media “likes” a post receives. This does not in turn imply that the false content is judged to be credible; in fact, there is a tendency to disbelieve both fake and true messages (Luo et al., 2022). Possible spillover effects such as this one have been used to explain disengagement with news media in general and distrust in traditional news institutions (Altay et al., 2020; Axt et al., 2020; Fisher et al., 2021; Greifeneder et al., 2021; Habgood-Coote, 2019; Levi, 2018; Lewis, 2019; Robertson & Mourão, 2020; Shao et al., 2018; Shu et al., 2018; Steensen, 2019; Tandoc et al., 2021; Tornberg, 2018; Van Heekeren, 2019; Waldman, 2018; Wasserman, 2020). For example, Axios, an American news website, reported that between 2021 and 2022 there was a 50% drop in social-media interactions with news articles, an 18% drop in unique visits to the five top news sites, and a 19% drop in cable news prime-time viewing (Rothschild & Fischer, 2022). Survey work, such as the Edelman Trust Barometer (2019) and Gallup’s Confidence in Institutions survey (2018), has shown declining trust in news media and journalists. Consistent with this, Wagner and Boczkowski’s (2019) in-depth interviews indicated negative perceptions of the quality of current news and distrust of news circulated on social media. Duffy et al. (2019) suggested a positive outcome may be that mistrusting news on social media will drive the public back to traditional news sources. Along similar lines, Wasserman (2020) claimed that misinformation provides traditional news institutions with an opportunity to rebuild their position as authoritative by emphasizing verification of claims communicated to the public.
In short, declining trust in media, doubtful source credibility, and the blurring of the dichotomy between false and true news are suspected effects of misinformation that worry some (Kaul, 2012; Posetti & Matthews, 2018; Steensen, 2019). A remedy for these effects is a more nuanced appreciation of truth and a greater sense of how claims are communicated in ways that allow for healthy skepticism (Godler, 2020).
Politics
The overarching negative effect of misinformation comes from the argument that without an accurately informed public, democracy cannot function (Kuklinski et al., 2000), although there is some discussion about whether this falls squarely back on the shoulders of news media. 1 Much of the evidence base is designed to show how beliefs in misinformed claims impact the evaluation of and support for particular policies (e.g., Allcott & Gentzkow, 2017; Benkler et al., 2018; Dan et al., 2021; Flynn et al., 2021; Fowler & Margolis, 2013; Garrett et al., 2013; Greifeneder et al., 2020; Levi, 2018; Lewandowsky & van der Linden, 2021; Marwick et al., 2022; Metzger et al., 2021; Monti et al., 2019; Shao et al., 2018; Waldman, 2018). However, there is as yet no consensus on how to precisely measure misinformation to determine its direct effects on democratic processes (e.g., election voting, public discourse on policies; Watts et al., 2021).
One of the strongest claimed effects of misinformation is that it leads voters toward advocating policies that are counter to their own interests, for example, the 2016 U.S. election and the Brexit vote in the United Kingdom (Bastos & Mercea, 2017; Cooke, 2017; Humprecht et al., 2020; Levi, 2018; Monti et al., 2019; A. S. Ross & Rivers, 2018; Shu et al., 2018; Wagner & Boczkowski, 2019; Weidner et al., 2019). Silverman and Alexander (2016) found that the 20 top fake election-news stories generated more engagement on Facebook than the 20 top election stories from 19 major news outlets combined. Participants in Wagner and Boczkowski’s (2019) study indicated that the reporting of the 2016 U.S. election reduced their trust in media because of false news stories associated with both political parties. Political polarization is also said to be affected because false stories deepen divisions between parties as well as reinforce support for one’s own party (e.g., Axt et al., 2020; European Commission, 2018; Ribeiro et al., 2017; Sunstein, 2017; Vargo et al., 2017; Waldman, 2018). However, evidence showing a direct causal relationship between viewing fake news and switching positions—and in turn changing the outcome of elections and referenda—is lacking, and consequently these claims have also been questioned (Allcott & Gentzkow, 2017; Grinberg et al., 2019; Guess et al., 2019). For the same reasons, drawing connections between misinformation and political polarization has been difficult, and similar challenges emerge for a variety of other alternative explanations that have been proposed (Canen et al., 2021; Hebenstreit, 2022).
Although there currently is a strong interest in the effects of misinformation on political issues, debates surrounding the effects of misinformation are certainly not new. There are documented examples as far back as ancient Rome of how misinformation as rumor can impact the reputations of political figures. As a result, misinformation has been used as a form of social control to produce political outcomes, albeit with varying success (e.g., Chirovici, 2014; Guastella, 2017). For example, when Nicolae Ceaușescu led Romania in the 1960s and 1970s, positive rumors were used to help establish support, but negative rumors—for example, that Ceaușescu had periodic blood transfusions from children—later had the opposite effect. Attempts by mass media to censor such negative rumors failed, and as resistance intensified, the government’s response was harsh, culminating in the rumored genocide of 60,000 civilians in Timișoara. The executions of Ceaușescu and his wife in 1989 were based on charges of genocide, among other things (Chirovici, 2014).
As well as rumor, there are several examples of state propaganda (e.g., Pomerantsev, 2015; Snyder, 2021), alternatively referred to as disinformation campaigns. These are also designed as a form of social control. This runs counter to the “Millian” marketplace of ideas, based on Mill’s (1859) work proposing that only in a free market of ideas are we able to arrive at the truth (Cohen-Almagor, 1997). In fact, these examples suggest that deliberate as well as inadvertent processes disrupt the possibility of the best ideas surfacing to the point of consensus (Cantril, 1938). State propaganda and rumor attempt to suppress public discourse and have been shown to negatively impact the populace, particularly by instigating violence and influencing voting behavior (e.g., Posetti & Matthews, 2018).
Thus far, it is not clear what the remedies are for addressing the effects of misinformation in the domain of politics. What is clear is that there are strong claims regarding the effects (e.g., polarization, disengagement with democratic processes, hostility to political figures, and violence) that underscore why misinformation is perceived as a threat to democracy.
Science
Misinformation in science communication (Kahan, 2017) and around science policymaking (Farrell et al., 2019) is purported to have an effect on public understanding of science (Lewandowsky et al., 2017). Two topics that are frequently brought up in connection with the negative effects of misinformation are health and anthropogenic climate change.
COVID-19 has been at the epicenter of many misinformation studies examining how it has impacted attitudes, beliefs, intentions, and behavior (Ecker et al., 2022; Kouzy et al., 2020; Pennycook et al., 2020; Roozenbeek et al., 2020; Vraga et al., 2020). For instance, Tasnim et al. (2020) referred to reported chloroquine overdoses in Nigeria following claims on the news that it effectively treats the virus (chloroquine is a pharmacological treatment for malaria). Misinformation was also claimed to have resulted in hoarding and panic buying (G. Chua et al., 2021) as well as avoiding nonpharmacological measures (e.g., handwashing, social distancing; Shiina et al., 2020).
Researchers have also considered the effect of misinformation on the Zika (Ghenai & Mejova, 2017; Valecha et al., 2020) and Ebola viruses (Jin et al., 2014). Negative health behaviors include vaccine hesitancy, which has been most notably related to claimed associations between the measles, mumps, and rubella (MMR) vaccine and autism (Dixon et al., 2015; Flynn et al., 2021; Kahan, 2017; Kirkpatrick, 2020; Lewandowsky et al., 2012; Pluviano et al., 2022). The actual effects that are measured vary from self-reported behaviors in response to inaccurate health claims to views on trust in the news media, politics, and general damage to democracy. Studies examining whether changes are possible suggest that belief updating can be achieved when contrary information is presented in textual form (Desai & Reimers, 2018) or through an interactive game (Roozenbeek & van der Linden, 2019).
The effects of misinformation also extend to the topic of anthropogenic climate change. It has been proposed that misinformation causes differences of opinion on the urgency of the issue (Bolsen & Shapiro, 2016; Cook, 2019; Farrell, 2019; McCright & Dunlap, 2011). Misinformation is also used to explain the stalling of political action, either because of a lack of public consensus on the issue (claimed to be informed by misinformation) or because of resistance to addressing it, again attributed to apparent false claims (Benegal & Scruggs, 2018; Conway & Oreskes, 2012; Cook et al., 2018; Flynn et al., 2021; Lewandowsky et al., 2017; Maertens et al., 2020; van der Linden et al., 2017; Y. Zhou & Shen, 2021). Moreover, the effects of misinformation do not have an equal impact on recipients because anthropogenic climate change has been perceived to be connected with particular ideological and political positions (Cook et al., 2018; Elsasser & Dunlap, 2013; Farrell et al., 2019; Kormann, 2018; McCright et al., 2016).
If misinformation is indeed the sole contributor to these effects, rather than one of several factors, then fears around the impact of misinformation on generating false beliefs and motivating aberrant behaviors are justified given the potential impact on human well-being. Some of the empirical findings presented here have also included interventions designed to address the effects of misinformation, primarily centered on refocusing the way people encounter false claims and improving their ability to scrutinize claims to determine their truth status.
Economics
For some researchers, the economic cost of misinformation is considered significant enough to warrant analysis (e.g., Howell et al., 2013). Financial implications are also considered: for example, how misinformation can disrupt the stability of markets (Kogan et al., 2021; Levi, 2018; Petratos, 2021), the cost of attempting to debunk misinformation (Southwell & Thorson, 2015), and the cost of policing misinformation online to develop measures that limit public exposure (Gradón, 2020). For instance, Canada has spent CAD$7 million to increase public awareness of misinformation (Funke, 2021) in an attempt, among other aims, to stem its economic impact. Burkhardt (2017) focused on consumer behavior, how brands inadvertently propagate misinformation, and how that can present opportunities to increase profits. Because the main goal is to capture consumers’ attention, Burkhardt proposed that advertisements appearing alongside a piece of information, whether true or false, can impact purchasing behavior. Given the appeal of gossip and scandal, as illustrated by media outlets such as the National Enquirer or Access Hollywood on TV, advertisers can thus profit from sensationalized and potentially misinformed claims (e.g., Han et al., 2022). Another fundamental concern is that advertisements themselves contain misinformation (Baker, 2018; Braun & Loftus, 1998; Glaeser & Ujhelyi, 2010; Hattori & Higashida, 2014; Rao, 2022; Zeng et al., 2020, 2021).
There are also real-world examples of the economic effects of misinformation, such as how brands fall victim to unsubstantiated claims. Berthon et al. (2018) discussed how Pepsi’s stock fell by about 4% after a story went viral alleging that Pepsi’s CEO, Indra Nooyi, had told Trump supporters to “take their business elsewhere.” This story has been cited by other marketing researchers to emphasize the adverse effects of misinformation (e.g., Talwar et al., 2019) on reputation management—an industry on its own estimated to be worth $9.5 billion (Cavazos, 2019), excluding indirect costs for increasing trust and transparency. Another popular example, which aligns with Levi’s (2018) observation on market stability, is a tweet broadcast by the Associated Press in 2013 claiming that President Obama had been injured in an explosion, which reportedly caused the Dow Jones stock market index to drop 140 points in 6 min (e.g., Liu & Wu, 2020; Zhou & Zafarani, 2021). Further examples can be found in Cavazos’s (2019) report for the software company Cheq, which concluded that misinformation is a $78 billion problem.
In combination, these examples often tie in with effects reported elsewhere: For instance, increased reputation management is a response to the fragile trust that we noted in relation to the media. Thus, misinformation has been claimed to impact market behavior, consumer behavior, and brand reputation, which in turn has economic and financial effects on business. When advertising sits alongside misinformation in news stories, or when misinformation is embedded in advertisements, both are claimed to facilitate profits, but it is also possible that such a juxtaposition limits profits because of the reputational damage to businesses.
Individual-level effects of misinformation
Establishing the fundamental effects of the way misinformation is processed psychologically, and establishing its influence on behavior, have core implications for research proposing how the general effects are expressed in society (e.g., engagement with scientific concepts, democratic processes, trust in news media, and economic factors). Therefore, we critically consider the evidential support for the effects of misinformation on cognition and behavior. We review these effects separately also because a tacit inference underlying much of the current debate seems to go like this: 1) Experimental evidence shows that misinformation affects beliefs in various ways. 2) Although there are few experimental studies examining the direct behavioral consequences of beliefs influenced by misinformation, we generally “know” that beliefs affect behavior. Hence, we can conclude that misinformation is a cause of aberrant behavior. We are of the opinion that this line of reasoning is an oversimplification that does not do justice to the complexity of the problem and its serious implications for policymaking. The presumed causal chain is also at odds with research on the more complicated relations between beliefs, behavior, and different cognitive factors (e.g., Ajzen, 1991), which we discuss below in more detail.
Cognitive effects of misinformation
A large proportion of research on misinformation has been dedicated to examining the effects on cognition. One example is the difficulty in revising beliefs when false claims are retracted (debunking or continued influence effect; Chan et al., 2017; Desai et al., 2020; Desai & Reimers, 2018; Ecker et al., 2011; Garrett et al., 2013; Lewandowsky et al., 2012; Newport, 2015; Nyhan & Reifler, 2010; Southwell & Thorson, 2015; Walter & Murphy, 2018). Findings show that presenting counterexamples (e.g., via causal explanations) corrects false beliefs (e.g., Desai & Reimers, 2018; Guess & Coppock, 2018; Wood & Porter, 2018), irrespective of group differences (e.g., age, gender, education level, political affiliation; Roozenbeek & van der Linden, 2019). Another option is to “inoculate” people from misinformation (Lewandowsky & van der Linden, 2021; Roozenbeek & van der Linden, 2019). This involves warning people in advance that they might be exposed to misinformation and giving them a “weak dose,” allowing people to produce their own cognitive “antibodies.” However, for this and other debunking efforts, there is also evidence of backfiring (increased skepticism to all claims that are presented, or increased acceptance and sharing of misinformation, as well as reduced scrutiny and correction; e.g., Courchesne et al., 2021; Nyhan & Reifler, 2010; Trevors & Duffy, 2020). In short, intervention attempts to reduce belief in misinformed claims, as defined by the researchers, can have unintended perverse effects that spill over in all manner of directions. This likely indicates problems to do with the interventions themselves as well as the nature of the claims that are the subject of interventions.
Misinformation is claimed to have a competitive advantage over accurate information in the attention economy because it is not constrained by truth (Acerbi, 2019; Hills, 2019), so it is framed in sensationalist ways to maximally capture attention (Acerbi, 2019). Hills (2019) emphasized that the unprecedented quantities of information available today require increased cognitive selection, which in turn can lead to adverse outcomes. Because people’s information acquisition is constrained by their selection processes, they tend to seek out information that is belief-consistent, negative, social, and predictive. Selecting and sharing information in such a manner can in turn lead to adverse effects. For example, preferentially seeking out and sharing negative information can lead to the social amplification of risks, and belief-consistent selection of information can lead to polarization. These processes in turn shape the information ecosystem, leading, for instance, to a proliferation of misinformation.
In addition, frequent exposure to misinformation is claimed to hinder people’s general ability to distinguish between true and false information (Barthel et al., 2016; Burkhardt, 2017; Grant, 2004; Newman et al., 2019; Shu et al., 2018; Tandoc et al., 2018). However, methodological concerns have been raised because of overinterpretation of questionnaire responses as indicators of stable beliefs informed by misinformation, as well as the likelihood of pseudo-opinions (Bishop et al., 1980)—especially as responses can reflect bad guesses by participants in response to unfamiliar content (e.g., Pasek et al., 2015).
Truth, lies, and objectivity
Studies of deception and lie detection provide important insights into the relationship between truth and misinformation, as well as people’s ability to detect the difference. Meta-analyses show that people’s accuracy in lie detection is barely above chance level (Bond & DePaulo, 2006; Hartwig & Bond, 2011, 2014). Even if people can use the appropriate behavioral markers to make judgments about what is true, objective relations between deception and behavior tend to be weak. In other words, catching liars is hard because the validity of behavioral cues to lying is so low.
Not only are people bad at telling the difference between truth and lies in others, they also have a distorted sense of their own immunity to bias and falsehoods, referred to as the objectivity illusion (Robinson et al., 1995; L. Ross, 2018). Essentially the illusion is expressed in such a way that if “I” take a particular stance on a topic (including beliefs, preferences, choices), I will view this position as one that is objective. I can then appeal to objectivity as a persuasive mechanism to convince others of my position, along with supporting evidence that is supposedly unbiased. If disagreement with me ensues, and my proposed position is rejected, the rationalization is that the other side is both unreasonable and irrational. This is a powerful expression of a bias that is centered on justifying a position with reference to objectivity without accepting that it may also be liable to bias. Pronin et al. (2002) called this the “bias blind spot.” Regardless of political affiliation (e.g., Schwalbe et al., 2020), and even one’s profession (e.g., scientists expert in reasoning from evidence; Ceci & Williams, 2018), no one is immune to the objectivity illusion.
Attributing the effects of misinformation to distorted cognition comes with a problem. The fundamental issue is how to establish a normative rule defining criteria that reliably distinguish between truth and falsehood, in order to determine in turn whether or not people can adequately distinguish between the two. But in a world in which no obvious diagnostic truth criteria exist, clichés about how truth and deception, or objectivity and bias, can be discriminated are orders of magnitude stronger than any normative rule.
Behavioral effects of misinformation
The most commonly referenced behavioral effects pertain to health behaviors in response to false claims (e.g., antivaccine movements, the speculated vaccine-autism link, genetically modified mosquitos and the Zika virus, COVID-19; Bode & Vraga, 2017; Bronstein et al., 2021; Galanis et al., 2021; Gangarosa et al., 1998; Greene & Murphy, 2021; Joslyn et al., 2021; Kadenko et al., 2021; Loomba et al., 2021; Muğaloğlu et al., 2022; van der Linden et al., 2020; Van Prooijen et al., 2021; Xiao & Wong, 2020). The same association has also been made between misinformation associated with anthropogenic climate change and resistance to adopting proenvironmental behaviors (Gimpel et al., 2020; Soutter et al., 2020). The effects of misinformation on behavior extend to the rise in far-right platforms (Z. Chen et al., 2021), religious extremism impacting voting behavior (Das & Schroeder, 2021), disengagement in political voting (Drucker & Barreras, 2005; Finetti et al., 2020; Galeotti, 2020), intended voting behavior (Pantazi et al., 2021), and advertising aligned with fake news reports leading to increased consumer spending (Di Domenico et al., 2021; Di Domenico & Visentin, 2020). A further study examined the unconscious influence of misinformation (specifically fake news) using a priming paradigm, demonstrating direct effects on the speed of tapping responses (Bastick, 2021).
Another behavioral effect of misinformation is more interpersonal and centers on information sharing. First, there are those studies that examine the extent of sharing behavior. Some researchers propose that this is highly prevalent: Chadwick et al. (2018) found that 67.7% of respondents admitted sharing problematic news on social media during the general election campaign in the United Kingdom in June 2017. However, others argue against claims that sharing is a cause for concern (Altay et al., 2020; Grinberg et al., 2019; Guess et al., 2019; Nelson & Taneja, 2018). Grinberg et al. (2019) found that just 0.1% of Twitter users shared 80% of misinformation during the 2016 U.S. election, whereas Allen et al. (2020) estimated that misinformation comprises only 0.15% of Americans’ daily media diet. Such estimates should be taken with a grain of salt, because of the problematic basis on which these estimates are made, but they do nevertheless imply that for many citizens the signal-to-noise ratio is fairly high.
But why would people not share misinformation? Altay et al. (2020) showed that people do not share misinformation because it hurts their reputation and that they would do so only if they were paid. This aligns with the work of Duffy et al. (2019): They found that respondents expressed regret when they shared news that later turned out to be misinformation. In explaining sharing behavior, studies suggest there are message-based characteristics, such as its ability to spark discussion (X. Chen et al., 2015), emotional appeal (Berger & Milkman, 2012; Valenzuela et al., 2019), and thematic relevance to the recipient (Berger & Milkman, 2012). Researchers have also argued for social reasons to share misinformation, that is, to entertain or please others (Chadwick et al., 2018; X. Chen et al., 2015; Duffy et al., 2019; Pennycook et al., 2021c), to express oneself (X. Chen et al., 2015), to inform or help others (Duffy et al., 2019; Herrero-Diz et al., 2020), to signal group membership (Osmundsen et al., 2021), to achieve social validation (Waruwu et al., 2021), and to address a fear of missing out (Talwar et al., 2019). Studies show that these motivations can lead people to pay less attention to the accuracy of information because other factors play more of a salient role in the sharing process beyond the content itself (Pennycook et al., 2021). This reaffirms the fundamental misinformation/disinformation distinction: Sharing misinformation involves unintentional deception driven by interactional motivations, whereas disinformation stems from intentional deception. Research indicates that there are several motivations for sharing misinformation beyond the goal of deliberately spreading false information to influence others.
Last, there are individual differences that increase the likelihood of sharing behavior and therefore lead to adverse effects of misinformation. For instance, those who believe that knowledge is stable and easy to acquire (i.e., epistemologically naive) are more likely to share online health rumors than those who believe knowledge is fluid and hard to acquire (i.e., epistemologically robust; Chua & Banerjee, 2017). Another factor that impacts one’s likelihood of sharing misinformation is the need to instill chaos, which arises from social marginalization and an antisocial disposition (Arceneaux et al., 2021). From an ideological and age perspective, conservatives are more likely to share misinformation than liberals, and older generations are more likely to share misinformation than younger age groups (Grinberg et al., 2019; Guess et al., 2019).
Beliefs, attitudes, intentions, and behavior
The logic behind studies drawing a causal connection between misinformation and behavior is that misinformation is pivotal to motivating negative behaviors. In other words, if claims that were factually inaccurate had not been encountered, then the harmful behaviors would not have occurred. There are two ways in which this presumed causal relationship between misinformation and behavior can be theorized. Either misinformation introduces new false beliefs and attitudes, and these in turn motivate a particular aberrant behavior, or misinformation reinforces preexisting false beliefs and attitudes and strengthens them enough to motivate a particular aberrant behavior (Imhoff et al., 2022; Pennycook & Rand, 2021b; Van Bavel et al., 2021).
Both of these hypotheses rest on a reliable relationship between beliefs and attitudes and behavior. Since Fishbein and Ajzen’s (1975) seminal work, psychologists have been interested in belief and attitude formation and in showing how beliefs and attitudes are associated with intention and behavior. According to Ajzen’s theory of planned behavior (Ajzen, 1991, 2012, 2020), the principle of compatibility (Ajzen, 1988) requires an explicit definition of the behavior, the target, the context in which the behavior appears, and the time frame. From this, it is then possible to apply an analysis that determines how each factor (behavior, target, context, time frame) contributes to the target of interest. If this approach is applied to the problem of misinformation, we can examine its influence on a behavior of interest. For example, after people encounter some misinformation on social media regarding climate change, when it comes to food consumption (behavior), we could predict that more meat is eaten (target), in a lunchtime canteen (context), and observed within a few days of encountering the misinformation (time frame). The determinants of the intention to act in accordance with the consumptive behavior involve beliefs and attitudes, which in this case are negatively valenced. One can then derive a belief index through the application of an expectancy-value model to calculate the strength of the belief (e.g., climate-change denial) multiplied by the subjective evaluation (e.g., negative attitudes to eating sustainably) and the outcome (e.g., not eating sustainably in a lunchtime canteen setting). Critically, there is a requirement to show how misinformation is instrumental in generating the beliefs and attitudes that can then lead to misbehaviors.
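To make the structure of this index explicit, the standard expectancy-value formulation can be sketched as follows (the notation is ours, and the weighted intention equation follows the general form of the theory of planned behavior rather than any calculation reported in the cited work):

```latex
% Attitude toward a behavior as an expectancy-value sum (Fishbein & Ajzen, 1975):
%   b_i = strength of belief i (subjective probability that the behavior
%         leads to outcome i, e.g., "eating meat has no climate impact")
%   e_i = evaluation (valence) of outcome i
A_B \propto \sum_{i=1}^{n} b_i \, e_i

% In the theory of planned behavior (Ajzen, 1991), intention I is a
% weighted function of attitude (A_B), subjective norm (SN), and
% perceived behavioral control (PBC), with empirically estimated weights w:
I = w_1 A_B + w_2\,\mathit{SN} + w_3\,\mathit{PBC}
```

On this account, misinformation would first have to shift the belief terms (b) or evaluations (e) strongly enough to move intention, and intention would in turn have to translate into observed behavior; as we argue below, both links are empirically underdetermined in the misinformation literature.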
With the exception of the unconscious priming study (Bastick, 2021), none of the cited work examining the association between misinformation (and fake news) and behavior shows a causal link between the two. 2 None of the evidence as yet has been able to reveal the kind of relationships needed to reliably establish the cause of behavior via changes in belief. Why are we making this strong critique? Much of the empirical work relies on self-reports of intentions or on judged willingness or judged resistance to behave in particular ways, or else demonstrates correlations between the circulation of misinformation and aberrant behaviors. In other words, these are subjective judgments about behavior, not actual direct indicators of behavior. There are some recent meta-analyses that have examined the impact of misinformation on sharing intentions (Pennycook & Rand, 2021b), people’s beliefs (Walter & Murphy, 2018), and people’s worldviews (Walter & Tukachinsky, 2020), as well as the impact of fact-checking on political beliefs (Walter et al., 2020) and people’s misunderstanding and behavioral intentions regarding health misinformation (Walter et al., 2021). Again, as yet, none of this work has been able to draw a direct connection between misinformation and specific measurable behavioral effects, aside from intentions to and judgments about willingness to behave in a particular way.
More generally, several meta-analytic studies have examined the effects of different types of beliefs, attitudes, and intentions on behavior (e.g., Glasman & Albarracín, 2006; Kim & Hunter, 1993; Kraus, 1995; Sheeran et al., 2016; Webb & Sheeran, 2006; Zebregs et al., 2015). On the whole, the reported effects of belief on behavior suggest that there is a relationship, but many authors note that their analyses are limited by the fact that they measured behavioral intentions, not behavior itself (Gimpel et al., 2020; Kim & Hunter, 1993; Xiao & Wong, 2020). In addition, there are weak relationships between beliefs and intentions (Zebregs et al., 2015), and when intentions and behaviors are examined the effects can also be weak, with many other moderating or intervening factors (e.g., personality, incentives, goals, persuasiveness of communication) explaining this weakness (Soutter et al., 2020; Webb & Sheeran, 2006). The observed weak relations are consistent with the results of research on more applied issues. For instance, in health and risk communication it is widely accepted that the mere provision of accurate information is typically not sufficient to induce behavioral change—raising the question of why perceiving false information should be sufficient to induce aberrant behavior.
We return to the issue regarding evidence for the association between misinformation and aberrant behaviors in the concluding section, and we will address how combinations of experimental methods could be used to better locate potential directional relationships between misinformation and behavior.
Causes of the Problem of Misinformation
The digital age seems rife with misinformation, which in turn is alleged to lead to several profound societal and individual problems. A comprehensive approach to understanding the causes of these reported effects is therefore required. Why exactly is misinformation a problem, given all these apparent effects?
One of the most common explanations of the causes of misinformation in its various forms relates to the advances in technologies that produce and distribute information. The information ecosystem (i.e., the technological infrastructure that enables the flow of information across individuals and groups) is assumed to be driving the problem of misinformation, because it is the critical means by which people now source information, and it has itself been contaminated by misinformation (Pennycook & Rand, 2021b; Shin et al., 2018; Shu et al., 2016). There is nothing like the digital landscape for quick and wide dissemination of misinformation (Celliers & Hattingh, 2020; Lazer et al., 2018; Moravec et al., 2018; Tambuscio et al., 2015), and it is said to have transformed consumers into producers of information (Ciampaglia et al., 2015; Greifeneder et al., 2020; Kaul, 2012; Marwick, 2018) and misinformation (Bufacchi, 2020; Levi, 2018). Others emphasize that the sheer volume of information that is now available encourages sharing behavior through online networks (Bessi et al., 2015) and leads to biased information-selection processes with potentially adverse consequences (Hills, 2019).
Also seen as facilitating the proliferation of misinformation are technological tools such as recommender systems (Fernandez & Bellogín, 2020, 2021), Web platforms (e.g., Han et al., 2022), and social media (Allcott et al., 2019; Chowdhury et al., 2021; Durodolu & Ibenne, 2020; Pennycook & Rand, 2021a, 2021b; Valenzuela, Halpern, et al., 2019). In addition, social-media environments allow swarms of bots to disseminate or obscure information (Bradshaw & Howard, 2017). These can also be used to mount Sybil attacks (Asadian & Javadi, 2018), in which a single entity emulates the behaviors of multiple users and attempts to create problems both for other users and for the network itself. In some cases, Sybils are used to make individuals appear influential, giving the impression that their opinions are highly endorsed by other social-media users when in fact this endorsement is artificially generated (Ferrara et al., 2016). Thus, not only is the content artificially generated, but the features used to judge its reliability, popularity, and interest from others are also artificially manipulated.
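The mechanics of this kind of artificial endorsement are simple to simulate. The following toy sketch (all account names and parameters are hypothetical illustrations, not drawn from the cited studies) shows how a single controller operating many puppet accounts inflates the engagement signals that other users rely on:

```python
import random

def sybil_amplify(post_id: str, n_sybils: int, organic_likes: int) -> dict:
    """Toy model of a Sybil attack: one entity posing as many users
    inflates a post's apparent endorsement."""
    # A single controller creates many puppet identities...
    sybils = [f"user_{i:05d}" for i in range(n_sybils)]
    # ...each of which "likes" the target post; some also reshare it.
    likes = organic_likes + len(sybils)
    reshares = sum(1 for _ in sybils if random.random() < 0.6)
    return {
        "post": post_id,
        "likes": likes,
        "reshares": reshares,
        # How much larger the endorsement looks than its organic baseline.
        "inflation_factor": likes / max(organic_likes, 1),
    }

# A post with 50 genuine likes appears 21 times more endorsed after the attack.
print(sybil_amplify("post_42", n_sybils=1000, organic_likes=50))
```

The point of the sketch is that every signal a reader might use to gauge popularity (likes, reshares, the apparent number of distinct endorsers) originates from one entity, which is why such manipulation is difficult to detect from engagement metrics alone.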
The digitization of media also enables producers of (mis)information to access sophisticated and extremely convincing tools of digital forgery, such as deepfakes, digital alterations of an image or video that convincingly replace the voice, face, and body of one individual with another’s (Levi, 2018; Steensen, 2019; Whyte, 2020). Even accurate news of actual events can be distorted as successive users add their own contexts and interpretations (Peck, 2020). Opaque algorithmic curation that takes humans out of the loop aims to maximize consumption and interactions, possibly causing viral appeal to take precedence over truthfulness (Rader & Gray, 2015). However, technology also offers ways to tackle misinformation: Bode and Vraga (2017) showed that corrective comments on social media can be as effective as algorithms in correcting misperceptions, and technological advances have contributed to the development of tools to track misinformation (Shao et al., 2016).
The media landscape has certainly been fundamentally changed by the Internet and social media, but historians have also argued that although gossip is a permanent and widespread feature of social exchanges online, its transmission and diffusion still retain the same core characteristics as in the pre-Internet, pre-telecommunications, pre-printing-press eras (Darnton, 2009; Guastella, 2017; Shibutani, 1966). For instance, Guastella (2017) argued that information has always been diffused through open-ended, chainlike transmission. This hinders the ability to verify any single item of information, because the source responsible is by its very nature obscured. In other words, communication is frequently disorderly and untraceable, whether the issue at hand is a rumor in antiquity or in the present day. Cultural historian Darnton (2009) offered a different perspective but drew similar conclusions regarding the role of technology in misinformation. He claimed that we are not witnessing a change in the information landscape but a continuation of the long-standing instability of texts. Information is no more unreliable today than in the past, because news sources have never corresponded to actual events.
According to these views, although new channels for misinformation have emerged, the conditions in which it is created and spread have not changed as much as one might think. In fact, viewing misinformation through a historical as well as a philosophical lens not only sheds light on shifts regarding the medium of information transmission but also informs how to view the problem of misinformation in the main. We now turn our attention to this research that explores why misinformation may not be cause for alarm and the scholars from different disciplines who share this perspective.
Misinformation: unsounding the alarm?
Misinformation is not a new phenomenon (Allcott & Gentzkow, 2017; De Beer & Matthee, 2021; Kopp et al., 2018; Scheufele & Krause, 2019; Waldman, 2018). For instance, fake news was recorded and disseminated through early forms of writing on clay, stone, and papyrus so that leaders could maintain power and control (Burkhardt, 2017).
Altay et al. (2021) deconstructed several alarmist misconceptions about the problem of misinformation. One is that misinformation engulfs the Internet, specifically social media, and as a result falsehoods spread faster than the truth. For example, according to a BuzzFeed report (2016), the top 20 fake news stories on Facebook resulted in nine million instances of engagement (likes, comments, shares) between August and November 2016. However, if all 1.5 billion Internet users engaged with one piece of content a week, these 9 million instances would represent only 0.042% of all engagements (Watts & Rothschild, 2017). Also, fact-checking and science-based evidence were found to be retweeted more than false information during the pandemic (Pulido et al., 2020). If, as some have claimed, misinformation has always existed as an inherent feature of human society (Acerbi, 2019; Allcott & Gentzkow, 2017; Nyhan, 2020; Pettegree, 2014), then the current focus on its presence online may reflect methodological convenience (i.e., it can be measured more easily; Tufekci, 2014), overlooking misinformation on television, in newspapers, and on the radio.
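The arithmetic behind the 0.042% figure is worth spelling out (a back-of-envelope reconstruction under Watts and Rothschild’s stated assumption of one engagement per user per week, taking the August–November window as roughly 14 weeks):

```latex
% Total engagement pool: 1.5e9 users x 1 engagement per user per week x ~14 weeks
% Fake-news engagements reported by BuzzFeed: 9e6
\frac{9 \times 10^{6}}{1.5 \times 10^{9} \times 14}
\approx 4.3 \times 10^{-4} \approx 0.04\%
```

This is in line with the 0.042% reported; the exact value depends on how many weeks one counts within the window.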
Another misconception is conflating the volume of content engagement with content belief. Reasons for engaging with misinformation are numerous, from expressing sarcasm (Metzger et al., 2021) to informing others (Duffy et al., 2019); thus, inferring acceptance from consumption can exaggerate the negative effects of misinformation (Wagner & Boczkowski, 2019). Consumption of information is informed by prior beliefs (Guess et al., 2019, 2021), which suggests that it is not strictly instrumental in the generation of new false beliefs. People are not necessarily misinformed but may simply be uninformed; this is also the point at which pseudo-opinions can emerge (Bishop et al., 1980), so this is an important distinction (Scheufele & Krause, 2019). Luskin and Bullock (2011) observed that 90% of surveys lack a “don’t know” response option, which increases the likelihood of an incorrect answer by 9%.
These misperceptions suggest that concerns around misinformation exceed what can be inferred from the available evidence, and that the causal link between misinformation and behaviors may be exaggerated (Scheufele et al., 2021). This is also why, as Krause et al. (2022) argued, we are not in an “infodemic.” This term, frequently used during the COVID-19 pandemic, refers to an alleged surplus of false information (World Health Organization, 2022). Our complex information ecology presents a challenge for disentangling the relationship between misinformation and behavior, and science itself grapples with the volatility of the evidence base as it develops (Osman et al., 2022); determining what constitutes misinformation is therefore frequently akin to shifting sand.
From a more historical perspective, Scheufele et al. (2021) showed that the worry regarding the current circulation of misinformation neglects examples predating the advent of digitization of information. The emergence of communication studies in the United States in the 1920s is perceived to be the result of concerns over the aberrant influence of the media (Wilson, 2015). New media technologies were seen as responsible for the growing disconnect between what people believed and the real world in that era (Lippman, 1922). Panics also arose in reaction to the arrival of telegraphy in the early 19th century (Van Heekeren, 2019). 3 Before that, after the invention of the printing press in 1440 granted open access to knowledge, concerns from the Catholic Church resulted in the 1560 publication of the Index of Forbidden Books. All such attempts to sound the alarm, and then to address it by restricting the technologies that offered the populace greater access to information, failed, because ideas continue to circulate even once speech is restricted (Berkowitz, 2021). Only when we consider misinformation through a historical lens can we learn that the current situation is arguably preferable, although admittedly still problematic, compared with previous information eras (Van Heekeren, 2019), because, as explained earlier, traditional media can learn from the past and focus on verification and fact-checking as a way to reassert their authority. Furthermore, digital media has devised means of increasing the likelihood of self-correction (e.g., debunking and fact-checking websites).
The discussion in this section presents obstacles for the argument that technological advances lead to misinformation, which, in turn, has a range of negative ramifications. If these obstacles are taken seriously, how, then, can we make sense of why misinformation is perceived to be a problem? We address this question with reference to historical epistemology, which is concerned with the process of knowing. We argue that knowledge is currently guided by the concept of objectivity and its associated objects, such as statistics and DNA, but that this is a recent development; knowledge was previously guided by other concepts, such as subjectivity relying on rhetoric. Thus, how people arrive at ground truth is not as consistent as one might assume. This suggests that the worries around misinformation are rooted in a perceived shift away from norms of objectivity and empirical facts.
It is helpful to begin with the emergence of rhetorike, the art of public speaking (Sloane, 2001). As Aristotle’s Art of Rhetoric sets out, rhetoric was believed to communicate, rather than discover, truth through ethos (trustworthiness), logos (logic), and pathos (emotion). Thus, the primary vehicle for both ancient philosophy and rhetoric was orality, which, according to Guastella (2017), meant that information was engulfed by the disorderly and unstable flow of time. Chirovici’s (2014) work on rumor throughout the ages also revealed people’s tendency to favor stirring the listener’s imagination over veracity. This is exemplified in Grant’s (2004) research, which showed that historical writing in antiquity contained numerous instances of misinformation. For example, the Roman historian Titus Livy embellished his accounts to such an extent that scholars debated whether he should be regarded as a novelist or a historian; according to Grant, Livy was motivated to build his depictions of events on blatantly fictitious legends. The upshot of the oral tradition was a series of historical accounts that were incomplete, untrustworthy, or entirely false.
During the Middle Ages, the process of establishing knowledge transformed with the emergence of the modern fact (e.g., Poovey, 1998; Shapiro, 2000; Wootton, 2015). Before the 13th century, written records of one’s financial affairs were kept secret and stored away in locked chests with other important documents such as heirlooms, prayers, and IOUs (Poovey, 1998). However, Poovey (1998) observed that merchants’ double-entry bookkeeping was a catalyst for the development of a new “epistemological unit,” otherwise known as the modern fact. This had two repercussions: (a) private information became a vehicle for public knowledge, and (b) the status of numbers was elevated such that they were now positively associated with accuracy and precision rather than negatively associated with supernaturalism and symbolism. Wootton (2015) also referred to the critical role of double-entry bookkeeping in the mathematization of the world. The value of numerical descriptions produced new standards of evidence that focused on accurately measuring natural phenomena and precisely characterizing objects for practical use (Winchester, 2018).
The arrival of the printing press solidified a shift in the meaning of knowing (Wootton, 2015): Facts became the new cornerstone of knowledge and synonymous with truth. The assumption that facts have always been integral to knowledge is evidenced by their use as a reference point for ground truth among those attempting to manage and detect misinformation (e.g., Hui et al., 2018; Rashkin et al., 2017; Shao et al., 2016; Tambuscio et al., 2015). Once facts became embedded in public discourse, the role of rhetoric faded. This is best reflected in the Royal Society’s new motto of 1663, nullius in verba, meaning “take nobody’s word for it” (Wootton, 2015). As Bender and Wellbery (1990) succinctly observed, “Rhetoric drowned in a sea of ink.” By the 18th century, knowledge stemmed from an objective relationship between an individual and the natural world in the form of measurements and facts. Crucially, knowledge could now be widely communicated to an increasingly literate public.
This historical and philosophical research on misinformation, together with studies by researchers from other disciplines (e.g., Altay et al., 2021; Berkowitz, 2021; Krause et al., 2022; Van Heekeren, 2019), is distinct from the work reviewed earlier. This alternative position regards misinformation as neither new nor especially concerning; it is treated as part and parcel of the variety of linguistic and communication styles that shape every aspect of the way people encounter information. If we take the position that the problem is not novel and that the way we use information to establish ground truth has never been entirely stable, then what else could be causing the current concern over misinformation? The work reviewed in this section (particularly Altay et al., 2021; Krause et al., 2022) lays the foundations for us to propose why we believe misinformation is perceived to be a problem.
Intersubjectivity
The heart of our argument
The critical issue of misinformation can be condensed as follows: Academic literature, traditional news media (e.g., BBC, Reuters, CNN), and public policy bodies (e.g., the European Commission, the World Health Organization, the World Economic Forum) are sounding the alarm because the digital information ecosystem allows everyone to generate and distribute misinformation. On this view, technological advances have not only increased access to better quality information but also created a corrupt ecosystem that enables a greater supply of poor information, which in turn has even greater potential for negative impact on behavior offline.
We pose the following argument: Advances in information technology over the past few decades have increased the transparency of interactions between people. Simply observing people interact with each other online is not an issue in and of itself. However, by exposing these interactions, current information systems pave the way for a simple and erroneous inference: The vast volume of interactions must also mean that there are more opportunities for people to deviate from objectivity, because they can more easily coordinate beliefs about the world with one another (i.e., intersubjectively). In other words, there is a perception that intersubjectivity is emerging as a new way of knowing, and this threatens established (though historically fairly young) norms of objectivity. This creates a tension between ground truths established through traditional institutions and agents typically considered authorities presiding over ground truths, and the processes (e.g., via social media) used to establish knowledge outside of these institutions. The problem of misinformation, as conceptualized by those sounding the alarm, is therefore the perceived misalignment between laypeople’s coordination efforts for interpreting entities and the mechanisms for objectively establishing those entities.
We argue that the currently available evidence does not support a clear link between beliefs generated or reinforced through misinformation and aberrant behavior; the dynamics of belief formation are far more complicated. In addition, even if intersubjectivity does replace objectivity as the primary way of knowing, research demonstrates that our relationship with epistemic concepts (e.g., objectivity, belief, probability) and objects (e.g., metrology, statistics, rhetoric) has been dynamic throughout history. Longing for a pre-Internet era implies that recovery from current shifts in knowing would return us to some point in time when misinformation was less of a problem, but when was that? Last, some coordination within the scientific community is required to determine what evidence meets the criteria of objectivity, because this does not happen on its own outside of group consensus. In the next section we expand on what we mean by intersubjectivity and how it can be a useful conceptual device for understanding why there is such alarm associated with misinformation.
Tensions between intersubjectivity and objectivity
We define intersubjectivity as a coordination effort between two or more people to interpret entities in the world (ideas, events, people, observations) through social interaction. Our definition is informed by prior definitions proposed in philosophy (Schuetz, 1942), economics (Kaufmann, 1934), psychology (W. James, 1908), psychoanalysis (Bernfeld, 1941), psycholinguistics (Rommetveit, 1979), and rhetoric (Brummett, 1976). The core idea of intersubjectivity, according to Brummett (1976), is that meaning arises from our interactions. Specifically, he argued that there is no objective reality in the sense that sensations, perceptions, beliefs, and experiences are, in isolation, meaningless. Rather, it is our experiences with other people that imbue these sensations, perceptions, beliefs, and other constructs of knowing with meaning, namely emotional valence and moral judgments.
How does this contrast with objectivity? According to Reiss and Sprenger (2020), observations are objective if they are (a) based on publicly observable phenomena, (b) free from bias, and (c) accurate representations of the world. There is not only consensus on how these observations should be made and interpreted, but this information and the ways by which it is obtained are also made visible for scrutiny (e.g., in the form of academic publications). This signals that the information has met the necessary standards while simultaneously reinforcing them. Crucially, if these observations do meet the criteria, they are regarded as contributing to knowledge. Objectivity converts observations into facts, which have become synonymous with ground truth, and this functions to reduce uncertainty about the world. Indeed, the fact-as-truth sense developed from the fact-as-occurrence sense (Poovey, 1998).
Intersubjectivity as a candidate for why misinformation is a big problem
So, why is intersubjectivity supposedly replacing objectivity, and why is this a cause for concern with respect to misinformation? Regarding the first question, social media is inherently about connecting people, so the number of visible interactions is potentially vast; anyone can now participate, because the affordances of these technologies encourage intersubjective knowledge production and sharing. Regarding the second question, the concern is that intersubjectivity is free from the formal rules of generating objective truths. In short, it looks as though more people are going it alone in establishing knowledge, and this has now been better exposed through advanced information technologies. On this view, intersubjectivity hampers the influence of facts in helping people accurately understand the world, creating an uncontrollable breeding ground for misinformation, which then produces more aberrant behaviors.
In addition, intersubjectivity can appear to be a good candidate for generating misinformation, and there is research suggesting that social relationships and interactions, rather than objective methods, are at the foreground of knowledge. Kahan (2017) drew on Sherman and Cohen’s (2002) notion of identity-protective cognition, which holds that culture is both cognitively and normatively prior to fact, to explain his findings that people are more likely to hold misconceptions if those misconceptions are consistent with their values. Similarly, Oyserman and Dawson (2020) argued that during the 2016 United Kingdom referendum on European Union membership, people used simple identity-based reasoning rather than complex information-based reasoning to inform their voting decision. According to Margolin’s (2021) theory of informative fictions, there are two types of information in any exchange: property information (object-focused) and character information (agent-focused). Misinformation occurs when people prioritize character information at the expense of property information. These ideas are also reflected in Pennycook et al.’s (2021c) study, which found that although people care about the accuracy of the information they share with others, signaling political affiliation matters more in the social-media context.
Perceived intersubjectivity as a candidate for why misinformation is a big problem
We each have a vast network of social ties, each with its own strength (strong/weak) and valence (positive/negative). This complicates which claims are believed and for how long, for two reasons. First, our relationships with others are not static but are susceptible to transformation during an interaction. Second, if a claim is circulating widely, whether in a community or on a news website, then it becomes available for negotiation across a multitude of interactions with individuals. There are two consequences: First, an individual’s likelihood of believing a claim is an aggregate of their interactions; second, the interpretation of an entity (e.g., an observation of the world) is constantly in flux. These consequences make intersubjectivity a candidate for explaining why there is so much alarm about the proliferation and consequences of misinformation. On the other hand, intersubjectivity also helps to expose why this perception is potentially illusory.
Knowledge can emerge from an interaction, but an interaction involving, or the sharing of, false information does not equate to evidence that knowledge has been negatively impacted in a fundamental way. Conflating the diffusion of information with its adoption is problematic because engagement, through likes or shares, does not automatically mean belief (Altay et al., 2021). The billions of interactions we see online (Boyd & Crawford, 2012), whether they contain misinformation or not, may seem concerning but do not necessarily reflect the agents’ inner states of mind (Bryanov & Vziatysheva, 2021; Fletcher & Nielsen, 2019; Wagner & Boczkowski, 2019).
Intersubjectivity lacks the transparency of objectivity: Whereas scientific methods are explicitly designed to distinguish between competing hypotheses and, ideally, between truth and falsehood, intersubjectivity has no such normative appeal. However, this does not mean that social processes of negotiation, generation, and transmission cannot have their own corrective mechanisms, and these may draw attention to consequential, and potentially false, claims that require scrutiny. An example is the Reddit forum ChangeMyView, where individuals post their argument about a claim and ask users to change their minds (e.g., “Influencers are not only pointless, but causing active harm to society”; “Statistics is much more valuable than trigonometry and should be the focus in schools”). On occasion, this appears successful, as indicated by individuals’ responses to the proposed counterarguments, and research has examined the language of persuasion in this forum (e.g., Musi, 2018; Priniski & Horne, 2018; Wei et al., 2016).
Intersubjectivity in public debate, especially in the absence of simple ground truths, can resemble features of scientific discourse (Brummett, 1976; Trafimow & Osman, 2022). Statistics, which are widely regarded today as a window to ground truth, were initially doubted in the early 19th century precisely because they were treated as another tool of rhetoric masquerading as irrefutable, legitimate evidence (Coleman, 2018). Thus, the emergence and eventual domination of statistics incorporated an element of coordination and negotiation among scientists to establish statistics as a valid representation of, and inference mechanism for, observations of the world.
Although one might be tempted to consider objectivity and intersubjectivity as mutually exclusive, they both feature in the scientific selection of ideas (Heylighen, 1997). A recent example occurred when various hypotheses regarding the origins of the COVID-19 pandemic were ruled out because they were initially regarded by many in the scientific community as conspiratorial; these theories were later recognized as valid hypotheses (Osman et al., 2022). This demonstrates how knowledge and claims can be in flux when they are interpreted by different agents. It also highlights the difficulties of being objective on a topic when the evidence is accumulating in real time and when political factors try to steer what constitutes legitimate and illegitimate investigation. Moreover, there is a historical precedent for censorship when authorities or experts (scholars, journalists, politicians) assert themselves in order to address the destabilization of epistemological foundations, especially when it presents an existential as well as a political threat (Berkowitz, 2021). Such strategies therefore serve to maintain not just power but perceived order in the world. Intersubjectivity and objectivity, although different, can and have coexisted in the past, which offers reassurance to those who are concerned that intersubjectivity is causing chaos in the form of misinformation (Krause et al., 2022).
Last, there needs to be a consistent way in which deviations from ground truth are managed. If researchers see the present situation as more problematic than ever before, this implies that, to them, pre-Internet society was preferable and had recovered from past shifts in the value of epistemic concepts. In other words, if the present is contrasted with the past to highlight the current problem of misinformation, then there needs to be an acknowledgment that past shifts, which caused great concern at the time, turned out to be less problematic than those of the present day. Of course, that past shifts proved unproblematic does not mean that future shifts will be unproblematic too. Rather, there is a lack of evidence that objectivity has been displaced by intersubjectivity to the extent that the latter is the current default mode of knowing, and that misinformation arising from this way of knowing impacts behavior. Although we do not deny that misinformation may become a serious problem at some point, it seems premature to reach this diagnosis at the moment.
Conclusion
In reviewing the perceived problem of misinformation, we found that many disciplines agree that the issue is not novel but that modern technology generates unprecedented quantities of misinformation, exacerbating its potential to cause harm at both the individual and the societal level. The hyperconnectivity of today’s information and (social) media landscape is seen as facilitating the generation and distribution of misinformation. In turn, this leads to a perceived increase in establishing worldviews intersubjectively, which deviates from the epistemic objectivity upheld by traditional institutions and gatekeepers of knowledge and truth. We have proposed our own analysis of the fundamentals of the problem of misinformation, and through the lens of intersubjectivity we have provided a conceptual framework to argue two essential points.
First, today’s informational ecosystem is the mechanism by which we can transparently observe the multitude of interactions that it hosts. Seeing the volume of daily interactions in, for instance, social-media networks leads to the inference that more people are deviating from truth because they can better coordinate their own subjective knowledge of the world outside of established facts, that is, intersubjectively. This, we propose, explains why misinformation is viewed as an existential threat. We argue, however, that technology increasing the number of visible interactions does not necessarily imply that there is anything profoundly new about the status of epistemic objects: The interactions people have when sharing what they think and feel are not evidence that epistemic objects in and of themselves have changed. Further, in our view, the current evidence is not sufficient to justify the alarmist conclusion, because correlation is often mistaken for causation.
Second, historical epistemology exposes what is often ignored: The generation of knowledge is a process of establishing conventions for the best way to arrive at ground truths. We make this point from a realist, not a relativist, perspective; there are objective facts. Nonetheless, historical epistemology shows that, in the absence of stable diagnostic criteria for ground truth, the mechanisms by which objective facts are acquired, characterized, and communicated have changed over the course of human history. This, we propose, explains why misinformation cannot be examined without recognizing that distortion affects any act of communicating truth.
What does future work on misinformation need to consider?
Future work needs to address three key issues. First, it must show that misinformation in a particular context of interest has the potential to establish or significantly strengthen related beliefs on a wide scale. This is important because the Internet and social media may seem rife with misinformation, but reliable estimates of its prevalence and impact on recipients are hard to come by (e.g., Allen et al., 2020; Altay et al., 2021). Thus, precisely mapping the (social) media landscape to gauge the extent of the problem is an important first step in understanding its potential impact on individuals’ beliefs and public discourse. Of course, even if the sheer amount of misinformation is small compared with legitimate information, it might still have a severe impact on people’s beliefs and attitudes, as well as on their evaluation of evidence. Again, however, it is debatable to what extent misinformation influences the public’s beliefs on a large scale (e.g., British Royal Society, 2022). Thus, the jury is still out regarding the severity of the problem.
A second key issue concerns the role of misinformation in societally harmful behaviors and whether misinformation constitutes a major factor in individuals’ actions. The assumption is that there is a direct causal link between the prevalence and consumption of misinformation and subsequent harmful behaviors. To date, however, this link has not been sufficiently demonstrated, and further research is required into this issue (see below).
Third, if evidence that misinformation leads to harmful behavior at the individual and societal level can be reliably shown, a critical question is what tools and strategies are best suited to address the problem. This issue resonates with broader discussions surrounding behavior-change techniques in general (Hertwig & Grüne-Yanoff, 2017; Osman et al., 2020), as well as in digital environments (British Royal Society, 2022; Kozyreva et al., 2020), and the limited efficacy of these techniques in generating reliable behavioral change (e.g., Osman et al., 2020; Trafimow & Osman, 2022). In addition, organizations can resort to regulatory mechanisms as more substantial approaches to addressing misinformation; we discuss these in more detail later in this section.
Approaches to investigating causal links between misinformation and behavior
A key question for science, policymaking, and public debate is the nature and strength of the relation between misinformation and behavior. Sometimes a simple causal chain of the form misinformation → beliefs → aberrant behavior is presumed. This is despite (a) a weak empirical basis; (b) deep conceptual problems arising from neglecting the dynamics of belief generation (as well as the changing status of what is deemed, by consensus of various authorities, a legitimate or illegitimate claim; e.g., Osman et al., 2022); and (c) complex relations between beliefs and behavior. Another line of argument used to support the presumed causal relation between misinformation and aberrant behavior refers to major events (e.g., the speculated role of mis- and disinformation spread before the January 6, 2021, riots at the U.S. Capitol). We emphasize the importance of documenting and carefully analyzing such cases, as well as their relevance to academic, political, and public debate. At the same time, we think that such events do not constitute, on their own, a sufficiently robust evidence base for guiding policymaking. Just as cases of adverse side effects after a vaccination require further examination and call for large-scale studies to assess the magnitude of the problem, we call for more systematic research on the relation between the consumption of misinformation and resulting behaviors. Such research is also important for gauging the severity of the problem, identifying key mechanisms and vulnerable areas, and taking appropriate measures to limit the consequences.
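To illustrate why correlational findings can overstate this causal chain, consider the simulation below. It is our sketch, not a reanalysis of any cited study: Misinformation exposure is built in to have zero causal effect on behavior, yet because a shared prior disposition drives both exposure and behavior, the raw correlation between them is substantial.

```python
# Sketch: a confounder (prior disposition) produces a misinformation-behavior
# correlation even when misinformation has no causal effect at all.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

disposition = rng.normal(size=n)             # latent prior beliefs/attitudes
exposure = disposition + rng.normal(size=n)  # disposition drives consumption...
behavior = disposition + rng.normal(size=n)  # ...and drives behavior; note that
                                             # exposure does NOT enter here

print(np.corrcoef(exposure, behavior)[0, 1])  # ~0.5: spurious association

# Conditioning on the confounder removes the association: subtract the
# disposition component from both variables and correlate the residuals.
res_exposure = exposure - disposition
res_behavior = behavior - disposition
print(np.corrcoef(res_exposure, res_behavior)[0, 1])  # ~0: no causal signal
```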
For misinformation to be the cause of aberrant behavior, it needs to either reinforce or introduce false beliefs. Consequently, future investigations could draw from established methods examining belief conviction (especially of false beliefs) and choice blindness (Hall et al., 2013). In the clinical domain, work has shown how individual differences account for the strength with which false and delusional beliefs are maintained in light of contrary evidence (e.g., Combs et al., 2006). This work deconstructs measures of beliefs to examine the association between a belief and the conviction in it. Beliefs are treated as multifactorial, which, in combination with how they are used, helps researchers to develop measurement tools of belief conviction (Abelson, 1988). Items examine the strength of beliefs, the length of time the beliefs have been held, the frequency of thoughts about them, their personal importance, the personal concern invested in them, and the willingness to commit personal time in pursuit of them (e.g., Conviction of Delusional Beliefs Scale, Combs et al., 2006; Brown Assessment of Beliefs Scale, Eisen et al., 1998). Work has demonstrated that combining these items into metrics of conviction yields useful predictors of individual differences in the durability of attitudes and beliefs over time, whether those beliefs are false or true (Lecci, 2000). Why is this important? Using measurement tools that expand the way beliefs are characterized is one important way to gauge the extent to which encountering misinformation should be a cause for worry. Even if people come to hold false beliefs, they may have next to no conviction in them; when they do have conviction, this is a predictive factor as to whether they will later act on the false beliefs they hold.
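As a concrete illustration of how such multifactorial items might be combined into a single conviction metric, consider the sketch below. The item names, scale range, and equal weighting are hypothetical, only loosely modeled on the dimensions listed above; this is not the scoring rule of any published instrument.

```python
# Hypothetical sketch: aggregating multifactorial belief items into a
# conviction index. Item names, the 1-7 scale, and equal weighting are our
# assumptions, not the scoring rules of the scales cited in the text.
from statistics import mean

ITEMS = (
    "strength",         # how strongly the belief is endorsed
    "duration",         # how long the belief has been held
    "preoccupation",    # frequency of thoughts about the belief
    "importance",       # personal importance of the belief
    "concern",          # personal concern invested in the belief
    "time_commitment",  # willingness to spend time pursuing the belief
)

def conviction_index(responses: dict[str, int], scale_max: int = 7) -> float:
    """Mean of item responses rescaled to 0-1; higher = greater conviction."""
    return mean(responses[item] / scale_max for item in ITEMS)

# A respondent who endorses a belief but invests little in it:
participant = {"strength": 6, "duration": 5, "preoccupation": 2,
               "importance": 3, "concern": 2, "time_commitment": 1}
print(round(conviction_index(participant), 2))  # 0.45: held, but with
                                                # modest conviction
```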
By extension, the choice-blindness paradigm is another way of examining conviction, and it has often been used to examine the strength of people’s positions on policies that persuade them to affiliate with and vote for particular political parties (e.g., Hall et al., 2013; Strandberg et al., 2019). The paradigm presents people with various political issues (e.g., gun control, abortion, immigration, tax reform) and asks them to indicate where on a scale they align (e.g., from strongly for to strongly against tax increases on gas). Then, through sleight of hand, some responses on the scale are changed to the opposite of the participant’s original position. Participants are then asked who they would vote for while also reviewing their (doctored) positions on the political issues. The paradigm reveals that people often do not correct the change, and that when they do, correction is predicted by the extremity of their initial positions. The paradigm thus offers a way to show that some beliefs that appear significant are in fact fragile and can easily be flipped by a simple manipulation, whereas others, particularly extreme ones, are stable; it can thereby detect the extent to which people are discerning about their own beliefs and the conviction in them. Taken together with work on belief conviction in the clinical domain, this suggests that efforts to examine the link between misinformation and behavior should concentrate on groups that already hold extreme beliefs, because such beliefs are likely to be stable over time (e.g., Stille et al., 2017) and to be held with conviction—that is, they are already liable to be acted on. A strong test of the causal link between misinformation and behavior is to expose the extent to which new misinformation that elaborates on or reinforces extreme false beliefs then generates aberrant behaviors. From this it would be possible to determine whether a new unit of misinformation is the tipping point that instigates aberrant actions, independent of the propensity to act on already-held beliefs with conviction—beliefs that by necessity need to be held stably over time to motivate behavior.
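A minimal simulation of the pattern described above is sketched below. The logistic link between position extremity and detecting the manipulation is our assumption, chosen only to reproduce the qualitative finding that extreme positions are corrected more often; the parameters are not estimates from Hall et al. (2013).

```python
# Sketch of the choice-blindness pattern: the probability of correcting a
# covertly flipped response rises with the extremity of the initial position.
# The detection model and its parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

position = rng.uniform(-5, 5, size=n)   # stance on a bipolar issue scale
extremity = np.abs(position)            # distance from the neutral midpoint
p_detect = 1 / (1 + np.exp(-(-2.0 + 0.8 * extremity)))  # assumed logistic link
detected = rng.random(n) < p_detect     # did the participant correct the flip?

for lo, hi in [(0.0, 1.67), (1.67, 3.33), (3.33, 5.0)]:
    mask = (extremity >= lo) & (extremity < hi)
    print(f"extremity {lo:.2f}-{hi:.2f}: "
          f"corrected {detected[mask].mean():.0%} of flipped trials")
```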
Social-level approaches to investigating the problem of misinformation focus on large-scale units of behavior (e.g., markets, distrust in traditional news media, polarization, voting behavior) to show how misinformation has aggregate effects. At such a broad level it is even harder to avoid overinterpreting correlation as causation. One approach would be multidisciplinary, building on historical, sociological, linguistic, and philosophical work to consider how the accumulation of particular claims shapes the collective narratives of the populace, some of which could be misinformed. Correspondingly, methodological and theoretical approaches in cultural evolution, which investigate the mechanisms of cultural transmission (Cavalli-Sforza & Feldman, 1981), could simulate the evolution of beliefs across populations over time (Efferson et al., 2020; McKay & Dennett, 2009; Norenzayan & Atran, 2004). Adopting such approaches would offer a first pass at investigating the group-level mechanisms that shape and change how some beliefs (false or otherwise) are preserved and strengthened over time and why others fail to survive. An agreed set of behaviors would need to be mapped onto this to determine the relationship between the transmission of beliefs (false or otherwise) across populations and corresponding changes in behavior over the same period, isolated from other potential contributing factors (e.g., economic, geopolitical). Again, this would not determine how any single misinformation claim directly causes a change in behavior, but it would help to show how, on aggregate, claims of a particular kind become popular and contribute to the formation of the narratives that people use in their day-to-day lives to frame their beliefs about the world. The analysis of narrative formation could compare narratives formed in news media or on social-media platforms with those formed in day-to-day, face-to-face interactions in social gatherings. It is likely that political, economic, and social narratives will, over time, diverge or converge across news media, social media, and offline social interactions.
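The sketch below illustrates what such a simulation might look like in its simplest form: agents copy beliefs from randomly chosen cultural models, with a content bias that makes one belief variant slightly more transmissible. The population size, bias strength, and copying rule are all illustrative assumptions on our part; the frameworks in the cited cultural-evolution literature are far richer.

```python
# Minimal cultural-transmission sketch: a belief variant with a small
# transmission advantage (content bias) spreads through a population.
# All parameters are illustrative assumptions.
import random

random.seed(42)
N = 1_000          # population size
BIAS = 0.55        # probability of adopting variant A when models disagree
GENERATIONS = 50

pop = [random.random() < 0.10 for _ in range(N)]  # True = holds variant A (10%)

for gen in range(GENERATIONS):
    new_pop = []
    for _ in range(N):
        a, b = random.choice(pop), random.choice(pop)  # two cultural models
        if a == b:
            new_pop.append(a)                       # unbiased copy if they agree
        else:
            new_pop.append(random.random() < BIAS)  # content-biased adoption
    pop = new_pop
    if gen % 10 == 0:
        print(f"generation {gen}: variant A held by {sum(pop) / N:.0%}")
```

Even a weak per-interaction bias (0.55 vs. 0.50) carries the initially rare variant toward fixation over a few dozen generations, which is the aggregate-level dynamic such models would be used to probe.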
Regulatory responses to misinformation
Possible tools to address misinformation include restricting freedom of speech through governmental or private institutions (e.g., removing content or banning people and institutions from social-media platforms), using fact-checking tools and related means to debunk misinformation, and implementing strategies to “inoculate” people against the influence of misinformation. Many of these tactics, such as content moderation, nonetheless require interpretation of, and agreement on, what distinguishes legitimate information from misinformation (Heldt, 2019). Moreover, to what extent are such methods justifiable? Whether measures are adequate and effective is of particular relevance because the foundation of an open and free society is diversity in thought, worldviews, and values. Naturally, this implies mutual respect for opinions and actions even if one disagrees with them and even if they conflict with scientific consensus: If people hold the belief that the Earth is flat, or if they avoid walking under a leaning ladder to prevent bad luck, they have the right to do so. So what, outside of reference to criminal acts, are the criteria for deeming beliefs illegitimate?
Freedom of speech and expression is a universal human right in many countries, but it is rarely, if ever, absolute. Societies operate within common limitations on free speech and expression that cover defamation, threat, incitement, libel, and matters of public order or national security. Typically, justifications of these limitations refer to the harm principle (Mill, 1859): the premise that the freedom of expression and action of an individual can be rightfully restricted only to prevent harm to others. Tacitly, the harm principle also underscores much of the current debate on misinformation, via the assumption that the surge of misinformation is, at least probabilistically, a cause of actions that are harmful to others and society. Thus, a critical question is to what extent the generation and distribution of misinformation, when linked to societally harmful actions, justifies regulatory measures and legislative actions that limit freedom of speech beyond the restrictions that already exist.
When it comes to dealing with scientific misinformation (as opposed to strictly illegal content, such as child pornography or violent extremist propaganda), there is both doubt about the effectiveness of such measures and worry that they could backfire, for instance by further increasing distrust toward governmental regulations or scientific institutions (British Royal Society, 2022). Indeed, prohibiting the expression of an idea does not eliminate its darker influence on knowledge, and such prohibition signals that some authorities consider themselves worthier of expressing themselves than others (Berkowitz, 2021). Controlling the flow of information assumes that traditional institutions are generally immune from making errors, and this in turn can have negative effects on citizens’ level of trust in them.
Outside of our own analysis of the problem of misinformation, even if we subscribe to the view that it is an existential threat, any regulatory action that is taken to address the problem still needs to be informed by robust evidence. For this reason, we considered the issues that research needs to address and which methodological approaches could offer new insights into the problem of misinformation. Our position is that any regulatory interventions should aim at empowering people and helping them to navigate both traditional and online information landscapes without posing the risk of eroding the foundations of an open and democratic society.
Finally, an issue not addressed in this review is the moral imperative that underpins much of the concern around misinformation. The logic goes something like this: There is truth, however defined, and deviating from truth is morally reprehensible, given the potential such deviation has for creating aberrant behaviors. A deeper analysis is required of the new moral landscape we face, which encourages simple categorizations as to who is virtuous for believing a particular view and who is bad for believing otherwise. More to the point, if the moral imperative is used to justify punitive actions, then are the grounds for this the holding of a false belief or the acting in an immoral way? Aberrant behaviors may indeed be informed by false beliefs (among many other factors), but we maintain that false beliefs are often not stable, and even when they are, they do not inevitably lead to aberrant behavior.
Footnotes
1. On the basis of Jones’s (2016) findings, Axt et al. (2020) claimed that democratic damage stems from media distrust, which suggests that political consequences are perceived as secondary. The logic of the argument is that misinformation leads to distrust in news media, which in turn leads to disengagement from democratic and political processes. Indeed, even the terms “misinformation” and “fake news” have been claimed to promote antidemocratic ideology (e.g., Habgood-Coote, 2019; Levi, 2018), because merely introducing the idea that false claims circulate in traditional news media is sufficient to generate mistrust and potential disengagement from democratic processes.
2. It is worth highlighting that, in the priming paradigm Bastick (2021) used, any potential association between misinformation (in that case, fake news) and behavior amounted to changes in finger tapping. It would be hard to infer from this what the effect of fake news is on aberrant behaviors, given that the behavioral measure was simply finger tapping, which in and of itself is not aberrant.
3. The arrival of a global telegraph network using cables, wires, and relay stations connected people from around the world. However, in 1858, three days after the first successful test of the cable that linked North America and Europe, an article was published in the New York Times: “So far as the influence of the newspaper upon the mind and morals of the people is concerned, there can be no rational doubt that the telegraph has caused vast injury.” In 1861, an article in the Times argued that people “mourn the good old times when mails came by steamer twice a month” (LaFrance, 2014).
ORCID iD: Magda Osman
https://orcid.org/0000-0003-1480-6657
Transparency
Action Editor: Klaus Fiedler
Editor: Klaus Fiedler
The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.
References
- Abelson R. P. (1988). Conviction. American Psychologist, 43(4), 267–275.
- Acerbi A. (2019). Cognitive attraction and online misinformation. Palgrave Communications, 5(1), 1–7.
- Ajzen I. (1988). Attitudes, personality, and behavior. Open University Press.
- Ajzen I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211.
- Ajzen I. (2012). The theory of planned behavior. In Lange P. A. M., Kruglanski A. W., Higgins E. T. (Eds.), Handbook of theories of social psychology (Vol. 1, pp. 438–459). SAGE.
- Ajzen I. (2020). The theory of planned behavior: Frequently asked questions. Human Behavior and Emerging Technologies, 2(4), 314–324.
- Allcott H., Gentzkow M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.
- Allcott H., Gentzkow M., Yu C. (2019). Trends in the diffusion of misinformation on social media. Research & Politics, 6(2), Article 2053168019848554.
- Allen J., Howland B., Mobius M., Rothschild D., Watts D. J. (2020). Evaluating the fake news problem at the scale of the information ecosystem. Science Advances, 6(14), Article eaay3539.
- Allport G., Postman L. (1947). The psychology of rumor. Henry Holt and Company.
- Altay S., Berriche M., Acerbi A. (2021). Misinformation on misinformation: Conceptual and methodological challenges. https://psyarxiv.com/edqc8
- Altay S., Hacquin A.-S., Mercier H. (2020). Why do so few people share fake news? It hurts their reputation. New Media & Society, 24(6), 1303–1324. https://doi.org/10.1177/1461444820969893
- Anderau G. (2021). Defining fake news. KRITERION–Journal of Philosophy, 35(3), 197–215.
- Arceneaux K., Gravelle T. B., Osmundsen M., Petersen M. B., Reifler J., Scotto T. J. (2021). Some people just want to watch the world burn: The prevalence, psychology and politics of the ‘Need for Chaos.’ Philosophical Transactions of the Royal Society B, 376(1822), Article 20200147.
- Asadian H., Javadi H. H. S. (2018). Identification of Sybil attacks on social networks using a framework based on user interactions. Security and Privacy, 1(2), Article e19.
- Avital M., Baiyere A., Dennis A., Gibbs J., Te’eni D. (2020). Fake news: What is it and why does it matter? In The Academy of Management Annual Meeting 2020: Broadening Our Sight.
- Axt J., Landau M., Kay A. (2020). The psychological appeal of fake-news attributions. Psychological Science, 31(7), 848–857.
- Ayers M. S., Reder L. M. (1998). A theoretical review of the misinformation effect: Predictions from an activation-based memory model. Psychonomic Bulletin & Review, 5(1), 1–21.
- Baker S. (2018, December 12). Subscription traps and deceptive free trials scam millions with misleading ads and fake celebrity endorsements. Better Business Bureau. https://www.bbb.org/globalassets/local-bbbs/st-louis-mo-142/stlouismo142/studies/bbbstudy-free-trial-offers-and-subscription-traps.pdf
- Edelman. (2019, February). Edelman Trust Barometer global report. Retrieved November 2, 2021.
- Barthel M., Mitchell A., Holcomb J. (2016). Many Americans believe fake news is sowing confusion. Pew Research Center’s Journalism Project. https://policycommons.net/artifacts/618138/many-americans-believe-fake-news-is-sowing-confusion/1599054/
- Bastick Z. (2021). Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation. Computers in Human Behavior, 116, Article 106633.
- Bastos M. T., Mercea D. (2017). The Brexit botnet and user-generated hyperpartisan news. Social Science Computer Review, 37(1), 38–54.
- Bender J., Wellbery D. (Eds.). (1990). The ends of rhetoric: History, theory, practice. Stanford University Press.
- Benegal S. D., Scruggs L. A. (2018). Correcting misinformation about climate change: The impact of partisanship in an experimental setting. Climatic Change, 148, 61–80.
- Benkler Y., Faris R., Roberts H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.
- Berger J., Milkman K. L. (2012). What makes online content viral? Journal of Marketing Research, 49(2), 192–205.
- Berkowitz E. (2021). Dangerous ideas: A brief history of censorship in the West, from ancients to fake news. Westbourne Press.
- Bernfeld S. (1941). The facts of observation in psychoanalysis. The Journal of Psychology, 12(2), 289–305.
- Berthon P., Treen E., Pitt L. (2018). How truthiness, fake news and post-fact endanger brands and what to do about it. Brands and Fake News, 10(1), 18–24.
- Bessi A., Coletto M., Davidescu G. A., Scala A., Caldarelli G., Quattrociocchi W. (2015). Science vs conspiracy: Collective narratives in the age of misinformation. PLOS ONE, 10(2), Article e0118093.
- Bishop G. F., Oldendick R. W., Tuchfarber A. J., Bennett S. E. (1980). Pseudo-opinions on public affairs. Public Opinion Quarterly, 44(2), 198–209.
- Bode L., Vraga E. (2017). See something, say something: Correction of global health misinformation on social media. Health Communication, 33(9), 1131–1140.
- Bolsen T., Shapiro M. A. (2016). The US news media, polarization on climate change, and pathways to effective communication. Environmental Communication, 12(2), 149–163.
- Bond C. F., Jr., DePaulo B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10, 214–234.
- Boyd D., Crawford K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662–679.
- Bradshaw S., Howard P. N. (2017). Troops, trolls and troublemakers: A global inventory of organized social media manipulation (Working Paper No. 2017.12). Project on Computational Propaganda. http://comprop.oii.ox.ac.uk/2017/07/17/troops-trolls-and-trouble-makers-a-globalinventory-of-organized-social-media-manipulation/
- Braun K. A., Loftus E. F. (1998). Advertising’s misinformation effect. Applied Cognitive Psychology, 12(6), 569–591.
- British Royal Society. (2022). The online information environment: Understanding how the internet shapes people’s engagement with scientific information. https://royalsociety.org/topics-policy/projects/online-information-environment
- Bronstein M., Kummerfeld E., MacDonald A., III, Vinogradov S. (2021). Investigating the impact of anti-vaccine news on SARS-CoV-2 vaccine intentions (SSRN 3936927).
- Brummett B. (1976). Some implications of “process” or “intersubjectivity”: Postmodern rhetoric. Philosophy & Rhetoric, 9(1), 21–51.
- Bryanov K., Vziatysheva V. (2021). Determinants of individuals’ belief in fake news: A scoping review. PLOS ONE, 16(6), Article e0253717.
- Bufacchi V. (2020). Truth, lies and tweets: A consensus theory of post-truth. Philosophy & Social Criticism, 47(3), 347–361.
- Burkhardt J. (2017). History of fake news. Library Technology Reports, 53(8), 5–9.
- BuzzFeed. (2016). This analysis shows how viral fake election news stories outperformed real news on Facebook. https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook
- Canen N. J., Kendall C., Trebbi F. (2021). Political parties as drivers of US polarization: 1927-2018 (No. w28296). National Bureau of Economic Research. [Google Scholar]
- Cantril H. (1938). Propaganda analysis. The English Journal, 27(3), 217–221. [Google Scholar]
- Cavalli–Sforza L. L., Feldman M. W. (1981). Cultural transmission and evolution: A quantitative approach. Princeton University Press. [PubMed] [Google Scholar]
- Cavazos R. (2019). The economic cost of bad actors on the internet. CHEQ. https://s3.amazonaws.com/media.mediapost.com/uploads/EconomicCostOfFakeNews.pdf
- Ceci S. J., Williams W. M. (2018). Who decides what is acceptable speech on campus? Why restricting free speech is not the answer. Perspectives on Psychological Science, 13(3), 299–323. [DOI] [PubMed] [Google Scholar]
- Celliers M., Hattingh M. (2020). A systematic review on fake news themes reported in the literature. In Hattingh M., Matthee M., Smuts H., Pappas I., Dwivedi Y. K., Mäntymäki M. (Eds.), Responsible design, implementation and use of information and communication technology. I3E 2020. Lecture Notes in Computer Science (Vol. 12067, pp. 223–234). Springer. 10.1007/978-3-030-45002-1_19 [DOI] [Google Scholar]
- Chadwick A., Vaccari C., O’Loughlin B. (2018). Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing. New Media & Society, 11, 4255–4274. [Google Scholar]
- Chan M.-P. S., Jones C. R., Hall Jamieson K., Albarracín D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen X., Sin S.-C. J., Theng Y.-L., Lee C. S. (2015). Why students share misinformation on social media: Motivation, gender, and study-level differences. The Journal of Academic Librarianship, 41, 583–592. [Google Scholar]
- Chen Z., Meng X., Yu W. (2021). Depolarization in the rise of far-right platforms? A moderated mediation model on political identity, misinformation belief and voting behavior in the 2020 US Presidential Election. In 2021 International Association for Media and Communication Research Conference (IAMCR 2021): Rethinking Borders and Boundaries. [Google Scholar]
- Chirovici E. (2014). Rumors that change the world: A history of violence and discrimination. Lexington Books. [Google Scholar]
- Chowdhury N., Khalid A., Turin T. C. (2021). Understanding misinformation infodemic during public health emergencies due to large-scale disease outbreaks: A rapid review. Journal of Public Health, 1–21. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chua A. Y. K., Banerjee S. (2017). To share or not to share: The role of epistemic belief in online health rumours. International Journal of Medical Informatics, 108, 36–41. [DOI] [PubMed] [Google Scholar]
- Chua G., Yuen K. F., Wang X., Wong Y. D. (2021). The determinants of panic buying during COVID-19. International Journal of Environmental Research and Public Health, 18(6), Article 3247. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ciampaglia G. L., Flammini A., Menczer F. (2015). The production of information in the attention economy. Scientific Reports, 5, Article 9542. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cohen-Almagor R. (1997). Why tolerate? Reflections on the Millian truth principle. Philosophia, 25(1–4), 131–152. [Google Scholar]
- Coleman S. (2018). The elusiveness of political truth: From the conceit of objectivity to intersubjective judgment. European Journal of Communication, 33(2), 157–171. [Google Scholar]
- Combs D. R., Adams S. D., Michael C. O., Penn D. L., Basso M. R., Gouvier W. D. (2006). The conviction of delusional beliefs scale: Reliability and validity. Schizophrenia Research, 86(1–3), 80–88. [DOI] [PubMed] [Google Scholar]
- Conroy N. K., Rubin V. L., Chen Y. (2015). Automatic deception detection: Methods for finding fake news. Proceedings of the Association for Information Science Technology, 52(1), 1–4. [Google Scholar]
- Conway E. M., Oreskes N. (2012). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. Bloomsbury. [Google Scholar]
- Cook J. (2019). Understanding and countering misinformation about climate change. In Chiluwa I. E., Samoilenko S. A. (Eds.), Handbook of research on deception, fake news, and misinformation online (pp. 281–306). Hershey, PA: IGI Global. [Google Scholar]
- Cook J., Ellerton P., Kinkead D. (2018). Deconstructing climate misinformation to identify reasoning errors. Environmental Research Letters, 13(2), Article 024018. [Google Scholar]
- Cooke N. (2017). Posttruth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age. The Library Quarterly: Information, Community, Policy, 87(3), 211–221. [Google Scholar]
- Courchesne L., Ilhardt J., Shapiro J. N. (2021). Review of social science research on the impact of countermeasures against influence operations. Harvard Kennedy School Misinformation Review. [Google Scholar]
- Dan V., Paris B., Donovan J., Hameleers M., Roozenbeek J., van der Linden S., von Sikorski C. (2021). Visual mis-and disinformation, social media, and democracy. Journalism & Mass Communication Quarterly, 98(3), 641–664. [Google Scholar]
- Darnton R. (2009). The case for books: Past, present, and future. Public Affairs. [Google Scholar]
- Das A., Schroeder R. (2021). Online disinformation in the run-up to the Indian 2019 election. Information, Communication & Society, 24(12), 1762–1778. [Google Scholar]
- Dawson C., Woodward A. S. (1913). On the discovery of a Palaeolithic human skull and mandible in a flint-bearing gravel overlying the Wealden (Hastings Beds) at Piltdown, Fletching (Sussex). Quarterly Journal of the Geological Society, 69(1–4), 117–123. [Google Scholar]
- De Beer D., Matthee M. (2021). Approaches to identify fake news: A systematic literature review. In Antipova T. (Ed.), Integrated science in digital age 2020 (pp. 13–22). Springer. [Google Scholar]
- Desai S., Pilditch T., Madsen J. (2020). The rational continued influence of misinformation. Cognition, 205, Article 104453. [DOI] [PubMed] [Google Scholar]
- Desai S., Reimers S. (2018). Some misinformation is more easily countered: An experiment on the continued influence effect. Annual Meeting of the Cognitive Science Society. [Google Scholar]
- Di Domenico G., Sit J., Ishizaka A., Nunan D. (2021). Fake news, social media and marketing: A systematic review. Journal of Business Research, 124, 329–341. [Google Scholar]
- Di Domenico G., Visentin M. (2020). Fake news or true lies? Reflections about problematic contents in marketing. International Journal of Market Research, 62(4), 409–417. [Google Scholar]
- DiFonzo N., Bordia P. (2007). Rumors influence: Toward a dynamic social impact theory of rumor. In Pratkanis A. R. (Ed.), The science of social influence: Advances and future progresses (pp. 271–295). Psychology Press. [Google Scholar]
- Dixon G. N., McKeever B. W., Holton A. E., Clarke C., Eosco G. (2015). The power of a picture: Overcoming scientific misinformation by communicating weight-of-evidence with visual exemplars. Journal of Communication, 65, 639–659. [Google Scholar]
- Drucker E., Barreras R. (2005). Studies of voting behavior and felony disenfranchisement among individuals in the criminal justice system in New York, Connecticut, and Ohio. The Sentencing Project. [Google Scholar]
- Duffy A., Tandoc E., Ling R. (2019). Too good to be true, too good not to share: The social utility of fake news. Information, Communication & Society, 23(13), 1965–1979. [Google Scholar]
- Durodolu O. O., Ibenne S. K. (2020). The fake news infodemic vs information literacy. Library Hi Tech News, 37(7), 13–14. [Google Scholar]
- Ecker U. K. H., Lewandowsky S., Cook J., Schmid P., Fazio L. K., Brashier N., Kendeou P., Vraga E., Amazeen M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. [Google Scholar]
- Ecker U. K. H., Lewandowsky S., Swire B., Chang D. (2011). Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction. Psychonomic Bulletin & Review, 18(3), 570–578. [DOI] [PubMed] [Google Scholar]
- Efferson C., McKay R., Fehr E. (2020). The evolution of distorted beliefs vs. mistaken choices under asymmetric error costs. Evolutionary Human Sciences, 2, 1–24. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eisen J. L., Phillips K. A., Baer L., Beer D. A., Atala K. D., Rasmussen S. A. (1998). The Brown Assessment of Beliefs Scale: Reliability and validity. American Journal of Psychiatry, 155(1), 102–108. [DOI] [PubMed] [Google Scholar]
- Elsasser S. W., Dunlap R. E. (2013). Leading voices in the denier choir: Conservative columnists’ dismissal of global warming and denigration of climate science. American Behavioral Scientist, 57(6), 754–776. [Google Scholar]
- European Commission. (2018). A multi-dimensional approach to disinformation. European Commission (Directorate-General for Communications Networks, Content and Technology). [Google Scholar]
- Fallis D. (2015). What is disinformation? Library Trends, 63(3), 401–426. [Google Scholar]
- Farrell J. (2019). The growth of climate change misinformation in US philanthropy: Evidence from natural language processing. Environmental Research Letters, 14(3), Article 034013. [Google Scholar]
- Farrell J., McConnell K., Brulle R. (2019). Evidence-based strategies to combat scientific misinformation. Nature Climate Change, 9, 191–195. [Google Scholar]
- Feest U., Sturm T. (2011). What (good) is historical epistemology? Editors’ introduction. Erkenntnis, 75, 285–302. [Google Scholar]
- Fernández M., Bellogín A. (2020). Recommender systems and misinformation: The problem or the solution? http://oro.open.ac.uk/72186/1/2020_Recys_ohars_workshop.pdf
- Ferrara E., Varol O., Davis C., Menczer F., Flammini A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104. [Google Scholar]
- Ferreira G. B., Borges S. (2020). Media and misinformation in times of COVID-19: How people informed themselves in the days following the Portuguese declaration of the state of emergency. Journalism and Media, 1(1), 108–121. [Google Scholar]
- Finetti H., Ramirez J., Dwyre D. (2020). The impact of ex-felon disenfranchisement on voting behavior. https://www.researchgate.net/profile/Hayley-Finetti/publication/344362006_The_Impact_of_Ex-Felon_Disenfranchisement_on_Voting_Behavior/links/5f6c59b0299bf1b53eedd4ab/The-Impact-of-Ex-Felon-Disenfranchisement-on-Voting-Behavior.pdf
- Fishbein M., Ajzen I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Addison-Wesley.
- Fisher C., Flew T., Park S., Lee J. Y., Dulleck U. (2021). Improving trust in news: Audience solutions. Journalism Practice, 15(10), 1497–1515.
- Fletcher R., Nielsen R. K. (2019). Generalised scepticism: How people navigate news on social media. Information, Communication & Society, 22(12), 1751–1769.
- Flynn A. W., Domínguez S., Jr., Jordan R., Dyer R. L., Young E. I. (2021). When the political is professional: Civil disobedience in psychology. American Psychologist, 76(8), 1217.
- Flynn D. J., Nyhan B., Reifler J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(1), 127–150.
- Fowler A., Margolis M. (2013). The political consequences of uninformed voters. Electoral Studies, 30, 1–11.
- Frost P. (2000). The quality of false memory over time: Is memory for misinformation “remembered” or “known”? Psychonomic Bulletin & Review, 7(3), 531–536.
- Funke D. (2021). Global responses to misinformation and populism. In Tumber H., Waisbord S. (Eds.), The Routledge companion to media disinformation and populism (pp. 449–458). Routledge.
- Galanis P. A., Vraka I., Siskou O., Konstantakopoulou O., Katsiroumpa A., Moisoglou I., Kaitelidou D. (2021). Predictors of parents’ intention to vaccinate their children against the COVID-19 in Greece: A cross-sectional study. medRxiv. 10.1101/2021.09.27.21264183
- Galeotti A. E. (2020). Political disinformation and voting behavior: Fake news and motivated reasoning. Notizie di Politeia, 142, 64–85.
- Gangarosa E. J., Galazka A. M., Wolfe C. R., Phillips L. M., Miller E., Chen R. T., Gangarosa R. E. (1998). Impact of anti-vaccine movements on pertussis control: The untold story. The Lancet, 351(9099), 356–361.
- Garrett R. K., Nisbet E. C., Lynch E. K. (2013). Undermining the corrective effects of media-based political fact checking? The role of contextual cues and naive theory. Journal of Communication, 63(4), 617–637.
- Ghenai A., Mejova Y. (2017). Catching Zika fever: Application of crowdsourcing and machine learning for tracking health misinformation on Twitter. arXiv preprint arXiv:1707.03778.
- Gimpel H., Graf V., Graf-Drasch V. (2020). A comprehensive model for individuals’ acceptance of smart energy technology–A meta-analysis. Energy Policy, 138, Article 111196.
- Glaeser E. L., Ujhelyi G. (2010). Regulating misinformation. Journal of Public Economics, 94(3–4), 247–257.
- Glasman L. R., Albarracín D. (2006). Forming attitudes that predict future behavior: A meta-analysis of the attitude-behavior relation. Psychological Bulletin, 132(5), 778–822.
- Godler Y. (2020). Post-post-truth: An adaptationist theory of journalistic verism. Communication Theory, 30(2), 169–187.
- Gradón K. (2020). Crime in the time of the plague: Fake news pandemic and the challenges to law-enforcement and intelligence community. Society Register, 4(2), 133–148.
- Grant M. (2004). Greek and Roman historians: Information and misinformation. Routledge.
- Greene C. M., Murphy G. (2021). Quantifying the effects of fake news on behavior: Evidence from a study of COVID-19 misinformation. Journal of Experimental Psychology: Applied, 27(4), 773–784.
- Greifeneder F., Notarnicola C., Wagner W. (2021). A machine learning-based approach for surface soil moisture estimations with Google Earth Engine. Remote Sensing, 13(11), 2099.
- Greifeneder R., Jaffé M. E., Newman E. J., Schwarz N. (Eds.). (2020). What is new and true about fake news? In The psychology of fake news (pp. 1–8). Routledge.
- Grinberg N., Joseph K., Friedland L., Swire-Thompson B., Lazer D. (2019). Fake news on Twitter during the 2016 US presidential election. Science, 363(6425), 374–378.
- Guastella G. (2017). Word of mouth: Fama and its personifications in art and literature in ancient Rome. Oxford University Press.
- Guess A., Aslett K., Tucker J., Bonneau R., Nagler J. (2021). Cracking open the news feed: Exploring what US Facebook users see and share with large-scale platform data. Journal of Quantitative Description: Digital Media, 1. 10.51685/jqd.2021.006
- Guess A., Coppock A. (2018). Does counter-attitudinal information cause backlash? Results from three large survey experiments. British Journal of Political Science, 50(4), 1497–1515.
- Guess A., Nagler J., Tucker J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1), Article eaau4586.
- Habgood-Coote J. (2019). Stop talking about fake news! Inquiry, 62(9–10), 1033–1065.
- Haiden L., Althuis J. (2018). The definitional challenges of fake news. In International Conference on Social Computing, Behavior-Cultural Modeling, and Prediction and Behavior Representation in Modeling and Simulation, Washington, DC.
- Hall L., Strandberg T., Pärnamets P., Lind A., Tärning B., Johansson P. (2013). How the polls can be both spot on and dead wrong: Using choice blindness to shift political attitudes and voter intentions. PLOS ONE, 8(4), Article e60554. 10.1371/journal.pone.0060554
- Hameleers M., van der Meer T., Vliegenthart R. (2021). Civilised truths, hateful lies? Incivility and hate speech in false information – evidence from fact-checked statements in the US. Information, Communication & Society, 25(11), 1596–1613. 10.1080/1369118X.2021.1874038
- Han C., Kumar D., Durumeric Z. (2022). On the infrastructure providers that support misinformation websites. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 16, pp. 287–298).
- Hartwig M., Bond C. F., Jr. (2011). Why do lie-catchers fail? A lens model meta-analysis of human lie judgments. Psychological Bulletin, 137(4), 643–659.
- Hartwig M., Bond C. F., Jr. (2014). Lie detection from multiple cues: A meta-analysis. Applied Cognitive Psychology, 28(5), 661–676.
- Hattori K., Higashida K. (2014). Misleading advertising and minimum quality standards. Information Economics and Policy, 28, 1–14.
- Hebenstreit J. (2022). Voter polarisation in Germany: Unpolarised Western but polarised Eastern Germany? German Politics. 10.1080/09644008.2022.2056595
- Heldt A. (2019). Let’s meet halfway: Sharing new responsibilities in a digital age. Journal of Information Policy, 9(1), 336–369.
- Hernon P. (1995). Disinformation and misinformation through the internet: Findings of an exploratory study. Government Information Quarterly, 12(2), 133–139.
- Herrero-Diz P., Conde-Jiménez J., Reyes de Cózar S. (2020). Teens’ motivations to spread fake news on WhatsApp. Social Media + Society, 6(3), 1–14. 10.1177/2056305120942879
- Hertwig R., Grüne-Yanoff T. (2017). Nudging and boosting: Steering or empowering good decisions. Perspectives on Psychological Science, 12(6), 973–986.
- Heylighen F. (1997). Objective, subjective and intersubjective selectors of knowledge. Evolution and Cognition, 3(1), 63–67.
- Hills T. (2019). The dark side of information proliferation. Perspectives on Psychological Science, 14(3), 323–330.
- Howell L. (2013). Global risks 2013. World Economic Forum. Retrieved from http://reports.weforum.org/global-risks-2013/risk-case-1/digital-wildfires-in-a-hyperconnected-world/
- Hui P.-M., Shao C., Flammini A., Menczer F., Ciampaglia G. (2018). The Hoaxy misinformation and fact-checking diffusion network. In Twelfth International AAAI Conference on Web and Social Media. AAAI Press.
- Humprecht E., Esser F., Van Aelst P. (2020). Resilience to online disinformation: A framework for cross-national comparative research. The International Journal of Press/Politics, 25(3), 493–516.
- Imhoff R., Zimmer F., Klein O., António J. H., Babinska M., Bangerter A., . . . Van Prooijen J. W. (2022). Conspiracy mentality and political orientation across 26 countries. Nature Human Behaviour, 6(3), 392–403.
- James W. (1890). The principles of psychology. Henry Holt and Company.
- Jin F., Wang W., Zhao L., Dougherty E., Cao Y., Lu C.-T., Ramakrishnan N. (2014). Misinformation propagation in the age of Twitter. Computer, 47(12), 90–94.
- Jones J. M. (2018). U.S. media trust continues to recover from 2016 low. Gallup. https://news.gallup.com/poll/243665/media-trust-continues-recover-2016-low.aspx
- Joslyn S., Sinatra G. M., Morrow D. (2021). Risk perception, decision-making, and risk communication in the time of COVID-19. Journal of Experimental Psychology: Applied, 27(4), 579–583.
- Kadenko N. I., van der Boon J. M., Kaaij J., Kobes W. J., Mulder A. T., Sonneveld J. J. (2021). Whose agenda is it anyway? The effect of disinformation on COVID-19 vaccination hesitancy in the Netherlands. In International Conference on Electronic Participation (pp. 55–65). Springer.
- Kahan D. M. (2017, May 24). Misconceptions, misinformation, and the logic of identity-protective cognition (Cultural Cognition Project Working Paper Series No. 164, Yale Law School, Public Law Research Paper No. 605, Yale Law & Economics Research Paper No. 575). Available at SSRN: https://ssrn.com/abstract=2973067 or 10.2139/ssrn.2973067
- Karlova N., Lee J. H. (2011). Notes from the underground city of disinformation: A conceptual investigation. Proceedings of the American Society for Information Science, 48(1), 1–9.
- Kaufmann F. (1934). The concept of law in economic science. The Review of Economic Studies, 1(2), 102–109.
- Kaul V. (2012). Changing paradigms of media landscape in the digital age. Mass Communication and Journalism, 2.
- Kim M. S., Hunter J. E. (1993). Relationships among attitudes, behavioral intentions, and behavior: A meta-analysis of past research, part 2. Communication Research, 20(3), 331–364.
- Kirkpatrick A. W. (2020). The spread of fake science: Lexical concreteness, proximity, misinformation sharing, and the moderating role of subjective knowledge. Public Understanding of Science, 30(1), 55–74.
- Kogan S., Moskowitz T. J., Niessner M. (2021). Social media and financial news manipulation. https://ssrn.com/abstract=3237763
- Kopp C., Korb K., Mills B. (2018). Information-theoretic models of deception: Modelling cooperation and diffusion in populations exposed to “fake news.” PLOS ONE, 13(11), Article e0207383. 10.1371/journal.pone.0207383
- Kormann C. (2018). Scott Pruitt’s crusade against “secret science” could be disastrous for public health. The New Yorker. https://www.newyorker.com/science/elements/scott-pruitts-crusade-against-secret-science-could-be-disastrous-for-public-health
- Kouzy R., Jaoude J. A., Kraitem A., El Alam M. B., Karam B., Adib E., Zarka J., Traboulsi C., Akl E. W., Baddour K. (2020). Coronavirus goes viral: Quantifying the COVID-19 misinformation epidemic on Twitter. Cureus, 12(3), Article e7255.
- Kozyreva A., Lewandowsky S., Hertwig R. (2020). Citizens versus the internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest, 21(3), 103–156.
- Kraus S. J. (1995). Attitudes and the prediction of behavior: A meta-analysis of the empirical literature. Personality and Social Psychology Bulletin, 21(1), 58–75.
- Krause N., Freiling I., Beets B., Brossard D. (2020). Fact-checking as risk communication: The multi-layered risk of misinformation in times of COVID-19. Journal of Risk Research, 23(7–8), 1052–1059.
- Krause N. M., Freiling I., Scheufele D. A. (2022). The “infodemic” infodemic: Toward a more nuanced understanding of truth-claims and the need for (not) combatting misinformation. The ANNALS of the American Academy of Political and Social Science, 700(1), 112–123.
- Kuklinski J. H., Quirk P. J., Jerit J., Schwieder D., Rich R. F. (2000). Misinformation and the currency of democratic citizenship. Journal of Politics, 62(3), 790–816.
- LaFrance A. (2014). In 1858, people said the telegraph was ‘too fast for the truth’. The Atlantic.
- Lazer D. M., Baum M. A., Benkler Y., Berinsky A. J., Greenhill K. M., Menczer F., Nyhan B., Pennycook G., Rothschild D., Schudson M., Sloman S. A., Thorson E. A., Watts D. J., Zittrain J. L. (2018). The science of fake news: Addressing fake news requires a multidisciplinary effort. Science, 359(6380), 1094–1096.
- Lecci L. (2000). An experimental investigation of odd beliefs: Individual differences in non-normative belief conviction. Personality and Individual Differences, 29(3), 527–538.
- Levi L. (2018). Real “fake news” and fake “fake news.” First Amendment Law Review, 16, 232.
- Lewandowsky S., Ecker U. K. H., Cook J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369.
- Lewandowsky S., Ecker U. K. H., Seifert C. M., Schwarz N., Cook J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131.
- Lewandowsky S., van der Linden S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348–384.
- Lewis S. C. (2019). Lack of trust in the news media, institutional weakness, and relational journalism as a potential way forward. Journalism, 20(1), 44–47.
- Lippmann W. (1922). Public opinion. Harcourt, Brace and Company.
- Liu Y., Brook Wu Y.-F. (2020). FNED: A deep network for fake news early detection on social media. ACM Transactions on Information Systems, 38(3), 1–33.
- Loomba S., de Figueiredo A., Piatek S. J., de Graaf K., Larson H. J. (2021). Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5(3), 337–348.
- Luo M., Hancock J. T., Markowitz D. M. (2022). Credibility perceptions and detection accuracy of fake news headlines on social media: Effects of truth-bias and endorsement cues. Communication Research, 49(2), 171–195.
- Luskin R. C., Bullock J. G. (2011). “Don’t know” means “don’t know”: DK responses and the public’s level of political knowledge. The Journal of Politics, 73(2), 547–557.
- Maertens R., Anseel F., van der Linden S. (2020). Combatting climate change misinformation: Evidence for longevity of inoculation and consensus messaging effects. Journal of Environmental Psychology, 70, Article 101455. 10.1016/j.jenvp.2020.101455
- Margolin D. B. (2021). Theory of informative fictions: A character-based approach to false news and other misinformation. Communication Theory, 31(4), 714–736.
- Marwick A. (2018). Why do people share fake news? A sociotechnical model of media effects. Georgetown Law Technology Review, 2(2), 474–512.
- Marwick A., Clancy B., Furl K. (2022). Far-Right online radicalization: A review of the literature. The Bulletin of Technology and Public Life.
- McCright A. M., Dunlap R. E. (2011). The politicization of climate change and polarization in the American public’s views of global warming, 2001–2010. The Sociological Quarterly, 52(2), 155–194.
- McCright A. M., Dunlap R. E., Marquart-Pyatt S. T. (2016). Political ideology and views about climate change in the European Union. Environmental Politics, 25(2), 338–358.
- McKay R. T., Dennett D. C. (2009). Our evolving beliefs about evolved misbelief. Behavioral and Brain Sciences, 32(6), 541–561.
- Metzger M., Flanagin A., Mena P., Jiang S., Wilson C. (2021). From dark to light: The many shades of sharing misinformation online. Media and Communication, 9(1), 134–143.
- Mill J. S. (1859). On liberty. Oxford University Press.
- Miró-Llinares F., Aguerri J. C. (2021). Misinformation about fake news: A systematic critical review of empirical studies on the phenomenon and its status as a ‘threat.’ European Journal of Criminology. 10.1177/1477370821994059
- Monti F., Frasca F., Eynard D., Mannion D., Bronstein M. M. (2019). Fake news detection on social media using geometric deep learning. arXiv preprint arXiv:1902.06673.
- Moravec P., Minas R., Dennis A. R. (2018). Fake news on social media: People believe what they want to believe when it makes no sense at all (Kelley School of Business Research Paper No. 18-87). https://ssrn.com/abstract=3269541
- Muğaloğlu E. Z., Kaymaz Z., Mısır M. E., Laçin-Şimşek C. (2022). Exploring the role of trust in scientists to explain health-related behaviors in response to the COVID-19 pandemic. Science & Education, 31(5), 1281–1309.
- Musi E. (2018). How did you change my view? A corpus-based study of concessions’ argumentative role. Discourse Studies, 20(2), 270–288.
- Nelson J. L., Taneja H. (2018). The small, disloyal fake news audience: The role of audience availability in fake news consumption. New Media & Society, 20(10), 3720–3737.
- Newman N., Fletcher R., Kalogeropoulos A., Nielsen R. K. (2019). Reuters Institute digital news report 2019. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2019-06/DNR_2019_FINAL_0.pdf
- Newport F. (2015). In U.S., percentage saying vaccines are vital dips slightly. http://www.gallup.com/poll/181844/percentage-saying-vaccines-vital-dips-slightly.aspx
- Norenzayan A., Atran S. (2004). Cognitive and emotional processes in the cultural transmission of natural and nonnatural beliefs. In Schaller M., Crandall C. (Eds.), The psychological foundations of culture (pp. 149–169). Lawrence Erlbaum.
- Nyhan B. (2020). Facts and myths about misperceptions. Journal of Economic Perspectives, 34(3), 220–236.
- Nyhan B., Reifler J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.
- Osman M., Adams Z., Meder B., Bechlivanidis C., Verduga O., Strong C. (2022). People’s understanding of the concept of misinformation. Journal of Risk Research, 25(10), 1239–1258.
- Osman M., McLachlan S., Fenton N., Neil M., Löfstedt R., Meder B. (2020). Learning from behavioral changes that fail. Trends in Cognitive Sciences, 24(12), 969–980.
- Osmundsen M., Bor A., Vahlstrup P. B., Bechmann A., Petersen M. B. (2021). Partisan polarization is the primary psychological motivation behind political fake news sharing on Twitter. American Political Science Review, 115(3), 999–1015.
- Oyserman D., Dawson A. (2020). Your fake news, our facts: Identity-based motivation shapes what we believe, share and accept. In Greifeneder R., Jaffé M. E., Newman E. J., Schwarz N. (Eds.), The psychology of fake news: Accepting, sharing and correcting misinformation. Psychology Press.
- Pantazi M., Hale S., Klein O. (2021). Social and cognitive aspects of the vulnerability to political misinformation. Political Psychology, 42, 267–304.
- Pasek J., Sood G., Krosnick J. A. (2015). Misinformed about the Affordable Care Act? Leveraging certainty to assess the prevalence of misperceptions. Journal of Communication, 65(4), 660–673.
- Paskin D. (2018). Real or fake news: Who knows? The Journal of Social Media in Society, 7(2), 252–273.
- Peck A. (2020). A problem of amplification: Folklore and fake news in the age of social media. Journal of American Folklore, 133(529), 329–351.
- Pennycook G., Epstein Z., Mosleh M., Arechar A., Eckles D., Rand D. (2021c). Shifting attention to accuracy can reduce misinformation online. Nature, 592(7855), 590–592.
- Pennycook G., McPhetres J., Zhang Y., Lu J., Rand D. (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological Science, 31(7), 770–780.
- Pennycook G., Rand D. G. (2021a). Accuracy prompts are a replicable and generalizable approach for reducing the spread of misinformation. Nature Communications, 13, Article 2333.
- Pennycook G., Rand D. G. (2021b). The psychology of fake news. Trends in Cognitive Sciences, 25(5), 388–402.
- Petratos P. (2021). Misinformation, disinformation, and fake news: Cyber risks to business. Business Horizons, 64(6), 763–774.
- Pettegree A. (2014). The invention of news: How the world came to know about itself. Yale University Press.
- Pluviano S., Watt C., Pompéia S., Ekuni R., Della Sala S. (2022). Forming and updating vaccination beliefs: Does the continued effect of misinformation depend on what we think we know? Cognitive Processing, 23, 367–378.
- Pomerantsev P. (2015). Authoritarianism goes global (II): The Kremlin’s information war. Journal of Democracy, 26(4), 40–50.
- Poovey M. (1998). A history of the modern fact: Problems of knowledge in the sciences of wealth and society. The University of Chicago Press.
- Posetti J., Matthews A. (2018). A short guide to the history of ‘fake news’ and disinformation. International Center for Journalists. https://www.icfj.org/news/short-guide-history-fake-news-and-disinformation-new-icfj-learning-module
- Priniski J. H., Horne Z. (2018, July 25–28). Attitude change on Reddit’s Change My View. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, Madison (pp. 2279–2284). Cognitive Science Society.
- Pronin E., Lin D., Ross L. (2002). The bias blind spot: Perceptions of bias in self and others. Personality and Social Psychology Bulletin, 28, 369–381.
- Pulido C. M., Villarejo-Carballido B., Redondo-Sama G., Gomez A. (2020). COVID-19 infodemic: More retweets for science-based information on coronavirus than for false information. International Sociology, 35(4), 377–392.
- Qazvinian V., Rosengren E., Radev D., Mei Q. (2011). Rumor has it: Identifying misinformation in microblogs. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (pp. 1589–1599).
- Rader E., Gray R. (2015). Understanding user beliefs about algorithmic curation in the Facebook news feed. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 173–182).
- Rao A. (2022). Deceptive claims using fake news advertising: The impact on consumers. Journal of Marketing Research, 59(3), 534–554.
- Rashkin H., Choi E., Jang J. Y., Volkova S., Choi Y. (2017). Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark (pp. 2931–2937).
- Reiss J., Sprenger J. (2020). Scientific objectivity. In The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=scientific-objectivity
- Ribeiro M. H., Calais P. H., Almeida V. A. F., Meira W., Jr. (2017). “Everything I disagree with is #FakeNews”: Correlating political polarization and spread of misinformation. arXiv preprint arXiv:1706.05924.
- Robertson C. T., Mourão R. R. (2020). Faking alternative journalism? An analysis of presentations of “fake news” sites. Digital Journalism, 8(8), 1011–1029.
- Robinson R. J., Keltner D., Ward A., Ross L. (1995). Actual versus assumed differences in construal: “Naive realism” in intergroup perception and conflict. Journal of Personality and Social Psychology, 68(3), 404–417.
- Rommetveit R. (1979). On the architecture of intersubjectivity. In Rommetveit R., Blekar R. M. (Eds.), Studies of language, thought and verbal communication (pp. 58–75). Academic Press.
- Roozenbeek J., Schneider C. R., Dryhurst S., Kerr J., Freeman A. L. J., Recchia G., van der Bles A. M., van der Linden S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), Article 201199. 10.1098/rsos.201199
- Roozenbeek J., van der Linden S. (2019). The fake news game: Actively inoculating against the risk of misinformation. Journal of Risk Research, 22(5), 570–580.
- Ross A. S., Rivers D. J. (2018). Discursive deflection: Accusation of “fake news” and the spread of mis- and disinformation in the tweets of President Trump. Social Media + Society, 4(2), 1–12.
- Ross L. (2018). From the fundamental attribution error to the truly fundamental attribution error and beyond: My research journey. Perspectives on Psychological Science, 13(6), 750–769.
- Rothschild N., Fischer S. (2022, July 12). News engagement plummets as Americans tune out. Axios. Retrieved from https://www.axios.com/2022/07/12/news-media-readership-ratings-2022
- Scheufele D. A., Krause N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences, USA, 116(16), 7662–7669.
- Scheufele D. A., Krause N. M., Freiling I. (2021). Misinformed about the “infodemic?” Science’s ongoing struggle with misinformation. Journal of Applied Research in Memory and Cognition, 10(4), 522–526.
- Schuetz A. (1942). Scheler’s theory of intersubjectivity and the general thesis of the alter ego. Philosophy and Phenomenological Research, 2(3), 323–347.
- Schwalbe M. C., Cohen G. L., Ross L. D. (2020). The objectivity illusion and voter polarization in the 2016 presidential election. Proceedings of the National Academy of Sciences, USA, 117(35), 21218–21229.
- Shao C., Ciampaglia G., Flammini A., Menczer F. (2016). Hoaxy: A platform for tracking online misinformation. In WWW ’16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web (pp. 745–750). 10.1145/2872518.2890098
- Shao C., Hui P.-M., Wang L., Jiang X., Flammini A., Menczer F., Ciampaglia G. (2018). Anatomy of an online misinformation network. PLOS ONE, 13(4), Article e0196087. 10.1371/journal.pone.0196087
- Sheeran P., Maki A., Montanaro E., Avishai-Yitshak A., Bryan A., Klein W. M., Rothman A. J. (2016). The impact of changing attitudes, norms, and self-efficacy on health-related intentions and behavior: A meta-analysis. Health Psychology, 35(11), 1178–1188.
- Sherman D. K., Cohen G. L. (2002). Accepting threatening information: Self-affirmation and the reduction of defensive biases. Current Directions in Psychological Science, 11, 119–123.
- Shibutani T. (1966). Improvised news: A sociological study of rumor. Bobbs-Merrill.
- Shiina A., Niitsu T., Kobori O., Idemoto K., Hashimoto T., Sasaki T., Iyo M. (2020). Relationship between perception and anxiety about COVID-19 infection and risk behaviors for spreading infection: A national survey in Japan. Brain, Behavior, and Immunity - Health, 6, Article 100101.
- Shin J., Jian L., Driscoll K., Bar F. (2018). The diffusion of misinformation on social media: Temporal pattern, message and source. Computers in Human Behavior, 83, 278–287.
- Shu K., Sliva A., Wang S., Tang J., Liu H. (2017). Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1), 22–36.
- Shu K., Wang S., Liu H. (2018, April). Understanding user profiles on social media for fake news detection. In 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR) (pp. 430–435). IEEE.
- Silverman C., Alexander L. (2016). How teens in the Balkans are duping Trump supporters with fake news. BuzzFeed News.
- Sloane T. (2001). Encyclopedia of rhetoric. Oxford University Press.
- Snyder T. (2021). On tyranny: Twenty lessons from the twentieth century [graphic ed.]. Random House.
- Søe S. (2017). Algorithmic detection of misinformation and disinformation: Gricean perspectives. Journal of Documentation, 74(2), 309–332.
- Southwell B. G., Thorson E. A. (2015). The prevalence, consequences and remedy of misinformation in mass media systems. Journal of Communication, 65(4), 589–595.
- Soutter A. R. B., Bates T. C., Mõttus R. (2020). Big Five and HEXACO personality traits, proenvironmental attitudes, and behaviors: A meta-analysis. Perspectives on Psychological Science, 15(4), 913–941.
- Steensen S. (2019). Journalism’s epistemic crisis and its solution: Disinformation, datafication and source criticism. Journalism, 20(1), 185–189.
- Stille L., Norin E., Sikström S. (2017). Self-delivered misinformation – Merging the choice blindness and misinformation effect paradigms. PLOS ONE, 12(3), Article e0173606. 10.1371/journal.pone.0173606
- Strandberg T., Björklund F., Hall L., Johansson P., Pärnamets P. (2019, July). Correction of manipulated responses in the choice blindness paradigm: What are the predictors? In CogSci (pp. 2884–2890).
- Sunstein C. (2017). #Republic: Divided democracy in the age of social media. Princeton University Press.
- Talwar S., Dhir A., Kaur P., Zafar N., Alrasheedy M. (2019). Why do people share fake news? Associations between the dark side of social media use and fake news sharing behaviour. Journal of Retailing and Consumer Services, 51, 72–82.
- Tambuscio M., Ruffo G., Flammini A., Menczer F. (2015). Fact-checking effect on viral hoaxes: A model of misinformation spread in social networks. In WWW ’15 Companion: Proceedings of the 24th International Conference on World Wide Web (pp. 977–982). 10.1145/2740908.2742572
- Tan A. S., Lee C. J., Chae J. (2015). Exposure to health (mis)information: Lagged effects on young adults’ health behaviors and potential pathways. Journal of Communication, 65(4), 674–698.
- Tandoc E. C., Jr., Duffy A., Jones-Jang S. M., Pin W. G. W. (2021). Poisoning the information well? The impact of fake news on news media credibility. Journal of Language and Politics, 20(5), 783–802.
- Tandoc E. C., Jr., Lim Z. W., Ling R. (2018). Defining “fake news.” Digital Journalism, 6(2), 137–153.
- Tasnim S., Hossain M. M., Mazumder H. (2020). Impact of rumours and misinformation on COVID-19 in social media. Journal of Preventive Medicine and Public Health, 53, 171–174.
- Törnberg P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLOS ONE, 13(9), Article e0203958. 10.1371/journal.pone.0203958
- Trafimow D., Osman M. (2022). Barriers to converting applied social psychology to bettering the human condition. Basic and Applied Social Psychology, 44(1), 1–11.
- Trevors G., Duffy M. C. (2020). Correcting COVID-19 misconceptions requires caution. Educational Researcher, 49(7), 538–542.
- Tufekci Z. (2014). Big questions for social media big data: Representativeness, validity and other methodological pitfalls. Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media, 8(1), 505–514. 10.1609/icwsm.v8i1.14517
- Valecha R., Volety T., Rao H. R., Kwon K. H. (2020). Misinformation sharing on Twitter during Zika: An investigation of the effect of threat and distance. IEEE Internet Computing, 25(1), 31–39.
- Valenzuela S., Bachmann I., Bargsted M. (2019). The personal is the political? What do WhatsApp users share and how it matters for news knowledge, polarization and participation in Chile. Digital Journalism, 9(1), 1–21.
- Valenzuela S., Halpern D., Katz J. E., Miranda J. P. (2019). The paradox of participation versus misinformation: Social media, political engagement, and the spread of misinformation. Digital Journalism, 7(6), 802–823.
- Van Bavel J. J., Harris E. A., Pärnamets P., Rathje S., Doell K. C., Tucker J. A. (2021). Political psychology in the digital (mis)information age: A model of news belief and sharing. Social Issues and Policy Review, 15(1), 84–113.
- van der Linden S., Leiserowitz A., Rosenthal S., Maibach E. (2017). Inoculating the public against misinformation about climate change. Global Challenges, 1, Article 1600008.
- van der Linden S., Roozenbeek J., Compton J. (2020). Inoculating against fake news about COVID-19. Frontiers in Psychology, 11, Article 566790. 10.3389/fpsyg.2020.566790
- Van der Meer T., Jin Y. (2019). Seeking formula for misinformation treatment in public health crises: The effects of corrective information type and source. Health Communication, 35(5), 560–575.
- Van Heekeren M. (2019). The curative effect of social media on fake news: A historical re-evaluation. Journalism Studies, 21(3), 306–318.
- van Prooijen J.-W., Etienne T. W., Kutiyski Y., Krouwel A. P. M. (2021). Conspiracy beliefs prospectively predict health behavior and well-being during a pandemic. Psychological Medicine. Advance online publication. 10.1017/S0033291721004438
- Vargo C. J., Guo L., Amazeen M. A. (2017). The agenda-setting power of fake news: A Big Data analysis of the online media landscape from 2014 to 2016. New Media & Society, 20, 2028–2049.
- Vraga E. K., Bode L. (2020). Defining misinformation and understanding its bounded nature: Using expertise and evidence for describing misinformation. Political Communication, 37(1), 136–144.
- Vraga E. K., Tully M., Bode L. (2020). Empowering users to respond to misinformation about Covid-19. Media and Communication (Lisboa), 8(2), 475–479.
- Wagner M. C., Boczkowski P. J. (2019). The reception of fake news: The interpretations and practices that shape the consumption of perceived misinformation. Digital Journalism, 7(7), 870–885.
- Waldman A. (2018). The marketplace of fake news. University of Pennsylvania Journal of Constitutional Law, 20, 845–870.
- Walter N., Brooks J. J., Saucier C. J., Suresh S. (2021). Evaluating the impact of attempts to correct health misinformation on social media: A meta-analysis. Health Communication, 36(13), 1776–1784.
- Walter N., Cohen J., Holbert R. L., Morag Y. (2020). Fact-checking: A meta-analysis of what works and for whom. Political Communication, 37(3), 350–375.
- Walter N., Murphy S. T. (2018). How to unring the bell: A meta-analytic approach to correction of misinformation. Communication Monographs, 85(3), 423–441.
- Walter N., Tukachinsky R. (2020). A meta-analytic examination of the continued influence of misinformation in the face of correction: How powerful is it, why does it happen, and how to stop it? Communication Research, 47(2), 155–177.
- Wardle C. (2020). Journalism and the new information ecosystem: Responsibilities and challenges. In Zimdars M., McLeod K. (Eds.), Fake news: Understanding media and misinformation in the digital age (pp. 71–86). MIT Press.
- Waruwu B. K., Tandoc E. C., Jr., Duffy A., Kim N., Ling R. (2021). Telling lies together? Sharing news as a form of social authentication. New Media & Society, 23(9), 2516–2533.
- Wasserman H. (2020). Fake news from Africa: Panics, politics and paradigms. Journalism, 21(1), 3–16.
- Watts D. J., Rothschild D. M. (2017, Dec. 5). Don’t blame the election on fake news. Blame it on the media. Columbia Journalism Review. https://www.cjr.org/analysis/fake-news-media-election-trump.php
- Watts D. J., Rothschild D. M., Mobius M. (2021). Measuring the news and its impact on democracy. Proceedings of the National Academy of Sciences, USA, 118(15), Article e1912443118.
- Webb T. L., Sheeran P. (2006). Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychological Bulletin, 132(2), 249–268.
- Wei Z., Liu Y., Li Y. (2016). Is this post persuasive? Ranking argumentative comments in the online forum. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (pp. 195–200).
- Weidner K., Beuk F., Bal A. (2019). Fake news and the willingness to share: A schemer schema and confirmatory bias perspective. Journal of Product and Brand Management, 29(2), 180–187. 10.1108/JPBM-12-2018-2155
- Weiner J. S., Oakley K. P. (1953). The solution of the Piltdown problem. Bulletin of the British Museum (Natural History), Geology, 2(3).
- Whyte C. (2020). Deepfake news: AI-enabled disinformation as a multi-level public policy challenge. Journal of Cyber Policy, 5(2), 199–217.
- Wilson K. H. (2015). The national and cosmopolitan dimensions of disciplinarity: Reconsidering the origins of communication studies. Quarterly Journal of Speech, 101(1), 244–257.
- Winchester S. (2018). Exactly: How precision engineers created the modern world. William Collins.
- Wood T., Porter E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior, 41, 135–163.
- Wootton D. (2015). The invention of science: A new history of the scientific revolution. Penguin Books.
- World Health Organization. (2022). Infodemic. https://www.who.int/health-topics/infodemic#tab=tab_1
- Wright D. B., Self G., Justice C. (2000). Memory conformity: Exploring misinformation effects when presented by another person. British Journal of Psychology, 91, 189–202.
- Wu L., Morstatter F., Carley K., Liu H. (2019). Misinformation in social media: Definition, manipulation and detection. ACM SIGKDD Explorations Newsletter, 21(2), 80–90.
- Xiao X., Wong R. M. (2020). Vaccine hesitancy and perceived behavioral control: A meta-analysis. Vaccine, 38(33), 5131–5138.
- Zebregs S., van den Putte B., Neijens P., de Graaf A. (2015). The differential impact of statistical and narrative evidence on beliefs, attitude, and intention: A meta-analysis. Health Communication, 30(3), 282–289.
- Zeng E., Kohno T., Roesner F. (2020). Bad news: Clickbait and deceptive ads on news and misinformation websites. In Workshop on Technology and Consumer Protection (ConPro). IEEE, New York, NY.
- Zeng E., Kohno T., Roesner F. (2021). What makes a “bad” ad? User perceptions of problematic online advertising. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–24).
- Zhou X., Zafarani R. (2021). Fake news: A survey of research, detection methods and opportunities. ACM Computing Surveys, 53(5), 1–40.
- Zhou Y., Shen L. (2021). Confirmation bias and the persistence of misinformation on climate change. Communication Research, 49(4), 500–523.