Proceedings of the National Academy of Sciences of the United States of America. 2021 Apr 9;118(15):e1912437117. doi: 10.1073/pnas.1912437117

Misinformation and public opinion of science and health: Approaches, findings, and future directions

Michael A Cacciatore a,1
PMCID: PMC8053916  PMID: 33837143

Abstract

A summary of the public opinion research on misinformation in the realm of science and health reveals inconsistencies in how the term has been defined and operationalized. A diverse set of methodologies has been employed to study the phenomenon, with virtually all such work identifying misinformation as a cause for concern. While studies that completely eliminate the impacts of misinformation on public opinion are rare, choices around the packaging and delivery of correcting information have shown promise for lessening misinformation effects. Despite a growing number of studies on the topic, there remain many gaps in the literature and opportunities for future research.

Keywords: misinformation, disinformation, literature review


The popularity of “misinformation” in the American public consciousness arguably peaked in 2018 during the lead-up to the US midterm elections (1). Shortly after the midterms, “misinformation” was Dictionary.com’s “word of the year” (2), just 1 y after Collins English Dictionary had granted “fake news” the same title (3). Interest was driven largely by a focus on politics and the role that misinformation might have played in influencing candidate preferences and voting behaviors. However, certainly more can be said about a topic that has captured the attention of governments and citizens across the globe. What does “misinformation” (and the terms that are oftentimes treated synonymously) mean? How big of a problem is it in areas outside of politics, including science and health? What do we know about the ways in which it impacts citizens? What can be done to minimize the damage it is doing to public understanding of the key issues of the day?

In this paper I summarize the literature on misinformation with a specific focus on academic studies in areas of science and health. I review the methodological approaches and operationalizations employed in these works, explore the theoretical frameworks that inform much of the misinformation research, and break down the proposed solutions for combatting the problem, including the scholarly research aimed at stopping the spread of such content and lessening its impacts on public opinion. Finally, I discuss avenues for future research. I begin, however, with a discussion of some of the most common definitions of misinformation (and related terms) in the communication literature.

Defining “Misinformation”

An exploration of the literature suggests that “misinformation” is the most commonly employed label for studies focused on the proliferation and impacts of false information. Part of this has to do with the fact that misinformation has become something of a catch-all term for related concepts like disinformation, ignorance, rumor, conspiracy theories, and the like.* Its status as a catch-all term has sometimes resulted in broad use of the concept and imprecise definitions. Much of the earliest work on the topic employed the label “misinformation” without formally defining the concept at all (e.g., refs. 4 and 5), treating misinformation as a known concept.

As misinformation work grew, scholars brought greater structure to the term. Arguably the most commonly applied definition of misinformation is the one offered by Lewandowsky et al. (6), who refer to misinformation as “any piece of information that is initially processed as valid but is subsequently retracted or corrected” (6). Others have removed the “processing” element from this definition, describing misinformation as information that is initially presented as true but later shown to be false (e.g., refs. 7 and 8).

Lewandowsky et al. (9) also draw a line between misinformation and disinformation. While not the first scholars to do so (e.g., ref. 10), their distinction hinges on intentionality, with misinformation operating in the unintentional space and disinformation in the intentional [e.g., “outright false information that is disseminated for propagandistic purposes” (9)]. Nevertheless, studies have continued to lean on the term “misinformation” even when referring to groups who are actively spreading false content for advocacy purposes (e.g., refs. 11 and 12), illustrative of a lingering conceptual fuzziness in the literature.

Ignorance differs from mis/disinformation in terms of both how much an individual knows and the degree of confidence they have in that knowledge. An ignorant person is not only ill-informed but also aware that they are, while those who are misinformed are usually confident in their understanding even though it is inaccurate (13, 14). Terms like “myth,” “falsehoods,” and “conspiracy” are less commonly employed and typically serve as synonyms for the more general misinformation.

Methodological Approaches and Operationalizations

This next section will focus on how mis/disinformation has been studied. I will outline five major groupings of this scholarship (content analyses, computational text analysis, network analyses/algorithmic work, public opinion surveys/focus groups/interviews, and experiments) and discuss the common ways mis/disinformation is operationalized and manipulated. I will save the discussion of key conclusions for Trends in the Findings.

Content Analysis.

The goal of virtually all of the content analysis work on mis/disinformation is to diagnose the scope of the problem. Content analyses with some emphasis on mis/disinformation—even if the terms are not specifically acknowledged—have been conducted on a variety of topics, including many health issues.

There is much variation in the mis/disinformation content analysis work, although nearly all this work focuses on online sources. Some have focused on returned internet search results. For example, Hu et al. (15) explored the returned results for skin condition searches on top internet search engines, searching for the relative prevalence of product-focused versus educational websites and the quality of information across those categories of content. Kalk and Pothier (16) took a unique look at information searches online, examining returned Google search results for “schizophrenia” in terms of their readability using the standardized Flesch Reading Ease classification. Rowe et al. (17) focused more narrowly on the open question portal on the BBC website in the immediate aftermath of avian flu’s arriving in the United Kingdom. Their analysis focused not on the potential for online content to misinform the public but on the open question portal as a means of identifying whether and in what areas the public lacked an adequate understanding of avian flu. Still other work has taken a slightly different approach by analyzing the rhetoric and persuasive communication strategies of a specific population to understand what makes their use of disinformation effective (e.g., refs. 18 and 19).
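
The readability measure used by Kalk and Pothier (16) is a fixed formula over average sentence length and average syllables per word. The sketch below is a minimal illustration of the standard Flesch Reading Ease computation; the crude vowel-group syllable counter and the example sentences are simplifying assumptions for illustration, not the validated tooling used in published studies.

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

# Higher scores indicate easier reading; jargon-heavy pages score lower.
print(flesch_reading_ease("Schizophrenia changes how a person thinks and feels."))
print(flesch_reading_ease("Schizophrenia is characterized by neurocognitive dysregulation and heterogeneous symptomatology."))
```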

Particularly in the 2010s, content analyses of social media platforms became popular. A collaboration between researchers in Nigeria and Norway looked at the prevalence of medical mis/disinformation in Ebola content shared on Twitter, including a comparison of the potential reach of such content relative to facts (20). Jin et al. (21) also explored Ebola content on Twitter, although with a more narrow focus on rumor spread in the immediate aftermath of news of the first case of Ebola in the United States. Other work has focused on Facebook. Bessi et al. (22, 23) relied on a sample of 1.2 million individuals on the platform to better understand how mainstream scientific and conspiracy news are consumed and shape communities, including correlating user engagement with metrics like numbers of Facebook friends. Content analyses of vaccination-related issues have been conducted on YouTube videos (24, 25), with such work focusing on the stance of the video (positive, negative, or neutral toward vaccines) and false links between vaccines and cases of autism.

Perhaps owing to their oftentimes broader focus on issues outside of mis/disinformation (e.g., the tone of content, the frame being emphasized, etc.), much content analysis work lacks clear operationalizations of mis/disinformation and related measures. The most common operationalization is a determination of whether the content contains evidence of factually inaccurate information, innuendo, or conspiracy theories (e.g., refs. 11, 19–21, and 26–29). Unfortunately, it is not always clear how the authors are differentiating rumor from fact. Some categorize content by relying solely on assessments by groups of coders who are considered experts in the field (e.g., ref. 30), while others, particularly those in the health communication space, compare content to guidelines put forth by major health organizations like the Centers for Disease Control and Prevention (CDC) or the World Health Organization (e.g., refs. 31 and 32). Still other work is decidedly more subjective in nature, requiring coders to search for any evidence that audiences are struggling with or otherwise made confused or anxious by the content they encounter (e.g., refs. 17 and 33). These more subjective operationalizations reflect the fuzziness around our understanding of mis/disinformation and may serve to overstate the scope of the problem.

Additional work has taken a broader approach to the classification of content, focusing less on specific pieces of communication and more on the source of the information. One approach involves identifying “fake news” pages online and treating all content from those pages as disinformation (e.g., refs. 22, 23, 34, and 35). For example, the Bessi et al. (22, 23) studies relied on Facebook pages dedicated to debunking conspiracy theories to identify “conspiracy news pages,” while other work has relied on existing databases and projects that track fake news sources (e.g., refs. 34 and 35). This approach can, of course, be supplemented by then examining the content on these “conspiracy” or “fake news” pages for specific instances of disinformation. A second approach speculates that a misinformed public is likely to follow given the focus of content found online (e.g., ref. 15) or the accessibility/readability of that content (e.g., ref. 16). Rather than explicitly measure mis/disinformation, these works warn that the oftentimes product-focused (rather than health- or education-focused) nature of health websites coupled with the use of jargon and sophisticated language on those pages may breed a misinformed public. Assessments of the conclusions provided by content analyses should therefore be made with these operational decisions in mind since some works may not actually be classifying individual news items, instead deeming all content from a given source as mis/disinformation.
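
As a minimal illustration of this source-level strategy, the sketch below labels shared links by whether their domain appears on a curated list of flagged outlets rather than by inspecting the content itself. The domain list and URLs are hypothetical stand-ins for the external fake news databases that studies in this vein rely on.

```python
from urllib.parse import urlparse

# Hypothetical flagged domains; real studies draw on curated fake news databases.
FLAGGED_DOMAINS = {"example-conspiracy-news.com", "totally-real-cures.net"}

def source_label(url: str) -> str:
    """Classify an item by its source domain rather than by its individual content."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return "flagged source" if domain in FLAGGED_DOMAINS else "unlisted source"

for link in [
    "https://www.example-conspiracy-news.com/vaccine-story",
    "https://www.cdc.gov/measles/vaccination.html",
]:
    print(link, "->", source_label(link))
```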

Computational Text Analysis, Natural Language Processing, and Topic Modeling.

A cousin of the content analysis work noted above is the work being done through computational text analysis, natural language processing, and related approaches. While a complete overview of these methodologies is not feasible, one can generally think of these as computer-assisted approaches to the thematic clustering of large-scale textual data. These generally take an inductive approach to data, with computer algorithms identifying topics or themes based on hidden language patterns in texts (36). There are various approaches to the clustering of data and a variety of algorithms used for the task (37), but these computational approaches generally carry with them two key advantages. First, they conduct reliable content analyses on collections of data that are too big to code by hand, and thus are an extension of the content analysis approach noted above. Second, they rely on machine learning, which allows for the discovery of patterns in texts that may not be recognized by individual coders (36).
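
For readers unfamiliar with these methods, the sketch below shows the general shape of such a computer-assisted clustering pass using latent Dirichlet allocation from scikit-learn. The four-document corpus and two-topic setting are toy choices for illustration, not the corpora or models used in the studies cited here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for a large document collection.
docs = [
    "global warming is a hoax invented by scientists",
    "climate models exaggerate future warming trends",
    "vaccines cause autism according to this website",
    "the measles vaccine is linked to autism in children",
]

# Convert documents to word counts, dropping common English stop words.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

# Fit a two-topic model and print the highest-weighted words per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```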

As one example of this work, Boussalis and Coan (38) retrieved all climate change-focused documents produced by 19 well-known conservative think tanks and classified them by type and theme using a clustering algorithm. This approach allowed the authors to identify, among other things, a misinformation campaign that escalated over a 15-y period ending in 2013. Other work in this space uses these methodologies in concert with various forms of metadata or existing datasets. For instance, Farrell (39) collected philanthropic data, including lists of conference attendees and speakers, and combined this information with existing datasets of all persons known to be connected to organizations linked to the promulgation of climate change misinformation between 1993 and 2017. Using natural language processing, he was able to identify the degree to which persons and organizations linked to climate mis/disinformation were also integrated into mainstream philanthropic networks. He also took a similar approach to the question of corporate funding, combining Internal Revenue Service data on ExxonMobil and Koch Industries funding donations with collections of government documents and written and verbal texts from both mainstream news media and groups opposing the science of climate change (40). Relying on a combination of network science and machine-learning text analysis, this work was able to not only explain the corporate structure of the climate change countermovement but also pinpoint its influence on mainstream news media and politics.

Network Analysis, Algorithms, and Online Tracking.

At the same time that the work described above was mapping the scope of the mis/disinformation problem, efforts were being made, largely through computer technology, to help solve it. A first step in this process—distinguishing factually inaccurate information from legitimate news—has become attractive to scholars working in artificial intelligence and natural language processing. Vosoughi et al. (41) focused on identifying the salient features of rumors by examining their linguistic style, the characteristics of the people who share them, and network propagation dynamics. Other work has focused on specific features of content, like hashtags, links, and mentions (42).
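
A minimal sketch of this kind of feature-based identification appears below: surface features such as hashtag, link, and mention counts feed a simple supervised classifier. The hand-labeled examples and feature set are illustrative assumptions, far simpler than the models built by Vosoughi et al. (41) or Ratkiewicz et al. (42).

```python
import re
from sklearn.linear_model import LogisticRegression

def features(post: str) -> list:
    """Surface features: counts of hashtags, links, mentions, and exclamation marks."""
    return [
        len(re.findall(r"#\w+", post)),
        len(re.findall(r"https?://\S+", post)),
        len(re.findall(r"@\w+", post)),
        post.count("!"),
    ]

# Tiny hand-labeled training set (1 = rumor, 0 = not); real systems use far more data.
posts = [
    "BREAKING!!! #ebola cure suppressed by big pharma http://sketchy.example",
    "CDC update: no new ebola cases confirmed today http://cdc.example/update",
    "They don't want you to know this about #vaccines!!! @everyone",
    "New peer-reviewed study on vaccine safety published today",
]
labels = [1, 0, 1, 0]

model = LogisticRegression().fit([features(p) for p in posts], labels)
print(model.predict([features("SHOCKING #flu cover-up exposed!!! http://rumor.example")]))
```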

Once rumors and false content are identified, the next step is controlling or stifling their spread. Rumor control studies can be grouped into two major categories. First, scholars have focused on garnering a general understanding of how information—factual or otherwise—is shared and spread online (e.g., refs. 23, 32, 35, 43, and 44). This work looks at the structure of online communities, including the strength of ties between community members and key features of information sources, like whether a source of content is likely a bot. Patterns in shared content are examined, as well, including key features of messages that garner engagement and time series models to better understand the speed at which information is shared. Bessi et al. (23), for example, focused on homophily and polarization as key triggers in the spread of conspiracies, while Jang et al. (44) focused on the differences in authorship between fake and real news and the alterations that each go through as they are shared online.
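
One way to quantify the homophily emphasized in this line of work is attribute assortativity on a sharing network, as in the sketch below; the toy graph and community labels are invented for illustration, not drawn from any of the cited datasets.

```python
import networkx as nx

# Toy directed sharing network: an edge u -> v means u reshared content from v.
G = nx.DiGraph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("c", "a"),  # cluster of conspiracy-page followers
    ("d", "e"), ("e", "f"), ("f", "d"),  # cluster of science-page followers
    ("a", "d"),                          # a single cross-cluster tie
])

# Tag each user by the kind of pages they mostly follow.
communities = {"a": "conspiracy", "b": "conspiracy", "c": "conspiracy",
               "d": "science", "e": "science", "f": "science"}
nx.set_node_attributes(G, communities, "community")

# Values near 1 indicate strong homophily: users reshare mostly within their own community.
print(nx.attribute_assortativity_coefficient(G, "community"))
```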

The second major approach to rumor control work focuses on identifying critical nodes in social networks and either removing them from the network or combatting their effects via information cascades. These works focus heavily on building and testing algorithms that can be automatically applied to large-scale data so as to identify and deal with critical nodes both quickly and at low cost. As one example, a group of researchers looked at information cascades as a method for limiting the spread of mis/disinformation (45). Their approach focuses on stifling the spread of false information by identifying it early, seeding key nodes in a social network with accurate information, and allowing those users to spread the accurate information to others before they are exposed to the false content.
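
The sketch below conveys the generic logic of critical-node identification on a synthetic graph: rank nodes by a centrality measure, remove the top few, and compare how far content could still travel. It is a simplified stand-in for, not a reproduction of, the algorithms developed in this literature.

```python
import networkx as nx

def top_spreaders(G: nx.Graph, k: int) -> list:
    """Return the k nodes with the highest betweenness centrality."""
    centrality = nx.betweenness_centrality(G)
    return sorted(centrality, key=centrality.get, reverse=True)[:k]

# Synthetic follower network; real studies operate on graphs with millions of nodes.
G = nx.barabasi_albert_graph(n=200, m=2, seed=1)

G_reduced = G.copy()
G_reduced.remove_nodes_from(top_spreaders(G, k=5))

# Size of the largest connected component is a crude proxy for potential rumor reach.
print("before removal:", len(max(nx.connected_components(G), key=len)))
print("after removal: ", len(max(nx.connected_components(G_reduced), key=len)))
```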

The operationalization of mis/disinformation in these works generally follows that noted for content analyses. In fact, work in this area often includes a content analysis component for identifying mis/disinformation and the major sources of such content. Once mis/disinformation has been identified, the authors model the information and run simulations on the data.

Public Opinion Surveys/Focus Groups/Interviews.

Surveys and focus groups are popular for understanding how different population groups perceive or are vulnerable to the problems of mis/disinformation. Studies of expert populations are common in the space of healthcare and disease. For example, Ahmad et al. (46) conducted focus groups with physicians to learn more about the benefits and risks of incorporating internet-based health information into routine medical consultations. A similar approach was taken by Dilley et al. (47), who employed surveys and structured interviews with physicians and clinical staff to learn more about the barriers to human papillomavirus vaccination.

The bulk of survey, focus group, and interview work in this area, however, has focused on lay audiences. Nyhan (13) focused on public misperceptions in the context of healthcare reform, relying on secondary survey data to show how false statements about reform by politicians and media members were linked to misperceptions among American audiences. Silver and Matthews (48) relied on semistructured interviews with survivors of a tornado to learn more about the spread of (mis)information in the aftermath of a disaster, while Kalichman et al. (49) surveyed over 300 people living with HIV/AIDS to assess their vulnerability to medical mis/disinformation.

Mis/disinformation is generally operationalized in similar ways in surveys, focus groups, and interviews. The work with expert populations often employs attitudinal measures to understand how experts view the size of the problem within a given topic area (e.g., refs. 46, 47, 50, and 51). The work with lay audiences more often employs measures of factual knowledge—for example, true/false items about the causes, symptoms, and possible cures for a given disease or virus (12, 52–54)—or perceived knowledge or concerns (e.g., refs. 53 and 55), which might ask respondents to report how much they believe they know about a topic, or how big a problem they believe inaccurate information to be. Other work has utilized quasi-experimental stimuli to assess a respondent’s susceptibility to false content by exposing participants to different-quality webpages before asking them to rate the pages in terms of believability and trust (49). Finally, attempts have been made to distinguish mere ignorance from actual mis/disinformation by analyzing not only whether an individual holds a misperception but how strongly that misperception is tied to their self-assessed knowledge of the topic (13).
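
A minimal sketch of how such items can separate mere ignorance from misinformation, in the spirit of ref. 13, is shown below: a factual accuracy item is crossed with a self-reported confidence item. The confidence threshold is an arbitrary illustrative choice, not a standard drawn from the surveys cited here.

```python
from dataclasses import dataclass

@dataclass
class Response:
    correct: bool    # did the respondent answer the factual item correctly?
    confidence: int  # self-reported confidence, 1 (not at all sure) to 5 (very sure)

def classify(r: Response, confident_at: int = 4) -> str:
    """Ignorant respondents are wrong but unsure; misinformed respondents are wrong but confident."""
    if r.correct:
        return "informed"
    return "misinformed" if r.confidence >= confident_at else "ignorant"

sample = [Response(True, 5), Response(False, 2), Response(False, 5)]
print([classify(r) for r in sample])  # ['informed', 'ignorant', 'misinformed']
```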

Experiments.

With the possible exception of content analysis work, experiments have been the most popular methodological approach to the issue of mis/disinformation. It is worth noting that most experiments have tended to focus on misinformation in the form of honest mistakes by journalists or witnesses, rather than more flagrant attempts to deceive (i.e., disinformation). Much of this work has explored the role of retractions or corrections in lessening the continued influence of misinformation in the minds of the public, but other approaches have been employed, including inoculating people against misinformation prior to exposure (e.g., refs. 56 and 57), providing participants with myth–fact sheets or event statements that correct the misinformation (58–60), and using the “related links” feature on Facebook, or subsequent posts in a social media newsfeed, to provide alternative viewpoints on the topic (61–63).

Some work has avoided the use of retractions or inoculating information altogether by looking at intervention materials for areas where misperceptions are already common, such as vaccines (64). Still other work falls outside this general framework. Rather than attempting to reverse misperceptions in people’s minds, Nyhan and Reifler (65) used a mailed reminder of fact-checking services to see if the reminder would deter politicians from making false statements on the campaign trail.

Experimental work generally operationalizes mis/disinformation in one of several ways. First, it is oftentimes a manipulated variable, with the most common manipulations taking the form of providing a false piece of information to experimental participants. These are typically real or constructed news articles or “dispatches” (e.g., refs. 66 and 67) but might also be brief posts or headlines shared on social media (e.g., ref. 68), generic statements or statistics (e.g., refs. 60 and 69), quotes from a politician (e.g., ref. 70), or recordings of news reports (e.g., ref. 71). After exposure, participants will receive some form of retraction notice, thus turning the original information into misinformation.

Misperceptions are typically assessed after exposure to an experimental stimulus through some form of factual knowledge questions, attitudinal items, or inference queries. Factual knowledge questions might take the form of basic fact-recall items based on information in the communication to which the participant was exposed (e.g., “On which day did the accident occur?”; ref. 58). These are similar to the measures employed in survey work, and might take the form of true–false items. Some work assesses fact-recall with response booklets, where participants are asked to recall as many event-related details as possible in order to provide a complete account of the event (e.g., ref. 58).

Attitudinal items are usually posed around the key components of the shared mis/disinformation. For instance, a study about the false link between vaccines and autism first presented participants with misinformation then corrected that information in one of several ways before measuring attitudes related to the misinformation through a series of agree–disagree items (e.g., “Some vaccines cause autism in healthy children” or “If I have a child, I will vaccinate him or her”; ref. 61).

Inference questions are generally open-ended and allow a respondent to either reference the inaccurate content they were originally given, reference the correction to that information, or avoid the context altogether. In their study of a fictitious minibus accident, Ecker et al. (58) asked participants the following inference question: “Why do you think it was difficult getting both the injured and uninjured passengers out of the minibus?” Because participants were first misinformed that the passengers were elderly, and that information was later corrected, a reference to the advanced age of the passengers would be evidence of the misinformation’s continued influence.

Finally, unique approaches to studying mis/disinformation require unique approaches to measuring outcomes. The Nyhan and Reifler (65) study that used a reminder of fact-checking to see if it deterred politicians from making false statements measured how dishonest the politicians were in their later statements by turning to PolitiFact ratings and searching LexisNexis for any media articles that challenged a statement by any of the legislators in the study.

Theoretical Underpinnings

The Continued Influence Effect.

The backbone of a significant number of studies of mis/disinformation, particularly many of the experimental approaches built around correcting the effects of misinformation on the public, is the so-called continued influence effect (CIE). The CIE refers to the tendency for information that is initially presented as true, but later revealed to be false, to continue to affect memory and reasoning (59). A relatively small group of researchers have made the most headway in this space, primarily exploring the CIE in news retraction and correction studies (e.g., refs. 6, 58–60, 66, 67, and 71–73).

There are multiple proposed explanations for the CIE. The first concerns “mental event models” (74, 75). People are said to build mental models of events as they unfold. However, in doing so, they are reluctant to dismiss key information, such as the cause of an event, unless a plausible alternative exists to replace the dismissed information. If no plausible alternative is available, people prefer an inconsistent model over an incomplete one, resulting in a continued reliance on the outdated information.

The second explanation for the CIE is focused on retrieval failure in controlled memory processes (6). This process can be relatively simple, such as misattributing a specific piece of information to the wrong source (e.g., recalling the subsequently retracted cause of a fire but thinking that information came from the credible police report), or it might be rather complex, having to do with dual-process theory and the automatic versus strategic retrieval of information from memory (76). While a complete overview of dual-process theory is beyond the scope of this paper, this explanation largely focuses on a breakdown in the encoding and retrieval process in memory due to things like time pressure or cognitive overload (73). In short, how we encode information impacts how quickly and with what accuracy we will retrieve information at a later time.

A third explanation for the CIE concerns processing fluency and familiarity. Oftentimes, in producing a retraction we repeat the initial false information, which may inadvertently increase the strength of that information in the receiver’s memory and their belief in it by making it more familiar (73). When the receiver is later called upon to recall the event, the mis/disinformation is more easily recalled, thereby giving it greater credence. Finally, there is some evidence that the CIE might be based on reactance effects, whereby people do not like being told what to think and push back when they are told to disregard an earlier piece of information by a retraction. This explanation has been largely tested in courtroom settings where jurors are asked to disregard a piece of evidence after being told it is inadmissible (6).

Motivated Reasoning.

Since at least the mid-20th century, scholars have noted that partisans are selective in both their choice and processing of information. The biased processing of content has come to be known as “motivated reasoning.” Motivated reasoning has become a popular concept in mis/disinformation research, particularly for issues with a strong partisan divide (e.g., refs. 13, 61, 66, 72, and 77).

Several mechanisms have been proposed to explain motivated reasoning, including the prior attitude effect, disconfirmation bias, and confirmation bias (78). The prior attitude effect occurs when “people who feel strongly about an issue … evaluate supportive arguments as stronger and more compelling than opposing arguments” (78). Disconfirmation bias argues that “people will spend more time and cognitive resources denigrating and counterarguing attitudinally incongruent than congruent arguments” (78). Individuals are engaging in confirmation bias when they choose to expose themselves to “confirming over disconfirming arguments” when they are given freedom in their information choice (78). Additional work has expanded upon the mechanisms noted here. For instance, Jacobson’s (79) selective perception argues that “people are more likely to get the message right when it is consistent with prior beliefs and more likely to miss it when it is not” (79), while his selective memory suggests that “people are more likely to remember things that are consistent with current attitudes and to forget or misremember things that are inconsistent with them” (79). In the context of mis/disinformation, motivated reasoning can help explain why some people may be resistant to new information that, for example, contradicts a believed link between vaccinations and autism (64).

Other Concepts Common to the Literature.

Factors related to the CIE and motivated reasoning that are also common in the mis/disinformation literature include echo chambers (“polarized groups of like-minded people who keep framing and reinforcing a shared narrative”; ref. 80), filter bubbles (“where online content is controlled by algorithms reflecting user’s prior choices”; ref. 44), worldviews (audience values and orientation toward the world, including their political ideology; ref. 6), and skepticism (the degree to which people question or distrust new information or information sources; ref. 6). These concepts generally help explain the resistance to correcting information that forms the foundation of the CIE.

Trends in the Findings

How Big Is the Problem?

As noted, content analysis work and computational text analyses have helped scholars better understand the scope of the mis/disinformation problem. A complete summary of the studies in this space is not feasible; however, some patterns are worth noting. First, there is often convergence in results even with vastly different approaches to studying the problem. The work on vaccine mis/disinformation represents one area where scholars have generally coalesced in their research findings. For example, Basch et al. (24) explored videos about vaccines on YouTube and found that a sizable percentage asserted a link between vaccines and autism, a finding that was echoed by Donzelli et al. (25) in their exploration of the same topic and platform. Those findings have been complemented by Moran et al. (19) and Panatto et al. (11), who identified similar false claims about links between vaccination and autism and Gulf War syndrome, respectively, in their samples of web pages. Computational analyses focused on climate change communication have also generally identified problems with mis/disinformation. For example, Boussalis and Coan (38) found increases in climate change mis/disinformation over time, arguing that the “era of science denial” is alive and well, while Farrell (36) found evidence that organizations that produce climate contrarian texts exert strong influence within networks and therefore wield great power in the spread of information.

At the same time, results have not always been consistent, even when exploring the same issue within the same medium. For instance, researchers conducted a search of “Ebola” and “prevention” or “cure” on Twitter, a search that returned a large set of tweets, of which 55% were said to contain medical mis/disinformation with a potential audience of more than 15 million (as compared to about 5.5 million for the medically accurate tweets) (20). Also on Twitter, Jin et al. (21) looked at rumor spread in the immediate aftermath of news of the first case of Ebola in the United States. They found rumors to represent a relatively small fraction of the overall Ebola-related content on the platform. They also found evidence that rumors typically remain more localized and are less believed than legitimate news stories on the topic. All told, the work focused on identifying the scope of the mis/disinformation problem, while oftentimes varying in approach, has consistently found evidence for at least some degree of concern, although pinpointing the exact nature of the problem has proven difficult.

Combatting the Spread of Misinformation.

Computational analyses, including algorithm creation, have allowed for a better understanding of how mis/disinformation spreads, particularly in the online environment. This work is promising for alerting people to likely pieces of false content and has potential for limiting its spread. Vosoughi et al. (41) focused on mis/disinformation identification. They explored the linguistic style of rumors, the characteristics of the people who share them, and network propagation dynamics to develop a model for the automated verification of rumors. They tested their system on 209 rumors across nearly 1 million tweets and found they were able to correctly predict 75% of the rumors, and did so faster than any other public source. Similarly, Ratkiewicz et al. (42) created the “Truthy” system, which identified misleading political memes on Twitter through tweet features like hashtags, links, and mentions.

The work on rumor control has also yielded important findings. Pham et al. (81) developed an algorithm for identifying a set of nodes in a social network that, if removed, will severely limit the spread of mis/disinformation. The authors claim that their approach is not only efficient but a cost-effective tool for combatting mis/disinformation spread. Similar algorithms have been developed by Saxena et al. (82) and Zhang et al. (83). In each case, the authors argue that their algorithmic approach can dramatically disrupt information spread, preventing exposure to a large number of nodes. Of course, the question with these works, and others not outlined here, is how nodes will ultimately be removed from a network, and under what circumstances it is ethically and legally feasible to remove or silence a social media user.

Perhaps because of these questions, Tong et al. (45) focused on stifling the spread of mis/disinformation by identifying it early, seeding key nodes in a social network with accurate information, and allowing those users to spread the accurate information to others before exposure to the false content. Their approach was found effective for rumor blocking, suggesting there are multiple promising avenues for identifying and controlling the spread of mis/disinformation online.
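
The sketch below conveys the intuition behind this seeding strategy with a toy independent-cascade simulation: the accurate cascade spreads first from high-degree nodes, and users it reaches are treated as immune to the rumor. The graph, the spread probability, and the sequential (rather than truly competing) cascades are simplifying assumptions for illustration, not Tong et al.’s (45) model.

```python
import random
import networkx as nx

def spread(G, seeds, blocked, p=0.1, rng=random.Random(0)):
    """Independent cascade: each newly activated node infects each neighbor with probability p,
    skipping nodes already claimed by the competing (blocking) cascade."""
    active, frontier = set(seeds), set(seeds)
    while frontier:
        nxt = {v for u in frontier for v in G.neighbors(u)
               if v not in active and v not in blocked and rng.random() < p}
        active |= nxt
        frontier = nxt
    return active

G = nx.barabasi_albert_graph(n=500, m=3, seed=2)
rumor_seeds = {0}

# Baseline: the rumor spreads unopposed.
baseline = spread(G, rumor_seeds, blocked=set())

# Counter-cascade: seed high-degree nodes with accurate information, then rerun the rumor.
truth_seeds = set(sorted(G, key=G.degree, reverse=True)[:10]) - rumor_seeds
truth_reach = spread(G, truth_seeds, blocked=set())
contained = spread(G, rumor_seeds, blocked=truth_reach)

print("rumor reach without counter-cascade:", len(baseline))
print("rumor reach with counter-cascade:   ", len(contained))
```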

Combatting Misinformation within Members of the Public.

Arguably the most extensive work aimed at combatting misinformation is the experimental work on retractions and corrections, usually in the context of the CIE. Once again, this work has generally focused on honest mistakes in reporting rather than more deliberate attempts to deceive, which is likely to impact how receptive audiences are to correcting information. Work in this space has focused on altering the impact of a retraction by being more clear and direct with its wording (74), repeating it multiple times (84), altering the timeline for the presentation of the retraction (74, 85), and providing supplemental information alongside it (i.e., giving reasons why the misinformation was first assumed to be factual; ref. 86). Other work has focused on the emotionality of the misinformation (87) or has manipulated how carefully a respondent is asked to attend to the presented information (88).

Virtually no work has been successful at completely eliminating the effects of misinformation; however, some studies have shown promise for reducing misperceptions. Among the most promising approaches is delivering warnings at the time of initial exposure to the misinformation (6). Ecker et al. (58) found that a highly specific warning (a detailed description of the CIE) reduced but failed to fully eliminate the CIE. A more general warning having to do with the limitations of fact checking in media did very little to reduce reliance on misinformation. Cook et al. (56), as well as van der Linden et al. (57), have found promising evidence that audiences can be inoculated against the effects of false content by providing very specific warnings about issues like false-balance reporting and the use of “fake experts.” It is worth noting that warnings are most effective when they are administered prior to mis/disinformation exposure (89).

The repetition or strengthening of retractions has been found to reduce, but again not eliminate, the CIE (6). The best evidence of this is from a study by Ecker et al. (73), who varied both the strength of the misinformation (one or three repetitions) and the strength of the retractions (zero, one, or three repetitions). Their experiments revealed that after three presentations of misinformation a single retraction served to lessen reliance on misinformation, with three retractions reducing it even further. However, the repetition of misinformation also had a stronger effect on thinking than the repetition of the retraction (73). Therefore, efforts to correct a misperception through repetition of a retraction might actually result in boomerang effects as retractions oftentimes involve repeating the original misinformation (90). Further, there is at least some evidence that the repetition of a retraction produces a “protest-too-much” effect, causing message recipients to lose confidence in the retraction (86).

The provision of alternative narratives has also shown promise for reducing the CIE. An alternative narrative fills the gap in a recipient’s mind when a key piece of evidence is retracted (e.g., “It wasn’t the oil and gas [that caused the fire], but what else could it be?”; ref. 6). Some fMRI evidence corroborates this account, suggesting that the continued influence of retracted information may be due to a breakdown of narrative-level integration and coherence-building mechanisms implemented by the brain (71). To maximize effectiveness, the alternative narrative should be plausible, should account for the information that was removed by the retraction, and should explain why the misinformation was believed to be correct (6).

Other factors that have been tested include recency and primacy effects, with recency emerging as a more important contributor to the persistence of misinformation as people generally rely more on recent information in their evaluations of retractions (59). Familiarity and levels of explanatory detail have also been tested (60). The authors found that providing greater levels of detail when correcting a myth produced a more sustained change in belief. They also found that the affirmation of facts worked better than the retraction of myths over the short term (1 wk), but not over a longer term (3 wk), and that this effect was most pronounced among older rather than younger adults. It is also worth noting that combining approaches can enhance their effects. For example, merging a specific warning with a plausible alternative explanation can further reduce the CIE compared with administering either of those approaches separately (58).

Source work has also been popular for combatting the effects of false information. For instance, having the refutation of a rumor come from an unlikely source, such as someone for whom the refutation runs counter to their personal or political interests, can increase the willingness of even partisan individuals to reject the rumor (66). It is worth noting, however, that the author conducted a content analysis of rumor refutation by unlikely sources in the context of healthcare reform and found it to be an exceedingly rare event.

Driven by social media’s role in the proliferation of mis/disinformation, Bode and Vraga (61) have focused their correction studies on those platforms. One study did so using the “related stories” function on Facebook. This work presented participants with a Facebook post that contained inaccurate content and then manipulated the related stories around it to either 1) confirm, 2) correct, or 3) both confirm and correct that information. The analysis revealed a significant reduction in misperceptions among those participants who received content designed to correct the inaccurate information. They later looked at source credibility in the context of information shared on Twitter and found that while a single correction from another social media user failed to significantly reduce misperceptions, a single correction from the CDC could impact misperceptions (62). In fact, corrections from the CDC worked best among those with the highest levels of initial misperception. They further investigated whether providing a source was necessary to curb misperceptions by having two individual commenters discredit the information in a Facebook and Twitter conversation (68). In one condition those users provided a link to debunking news stories from the CDC or Snopes.com, while in the other they did so without reference to any outside sources. Their results suggest a source is needed to correct misperceptions.

Finally, outside of factors related to the misinformation itself or the retraction, individual-level differences have also been tested in the context of the CIE and misinformation correction studies, including racial prejudice (72), worldview and partisanship (70, 91), and skepticism (92), with mixed results scattered across studies.

Gaps in the Literature and Moving Forward

While misinformation remains a relatively new topic of public concern, scholars have been addressing issues in this space for quite some time. The result is a large body of literature, but one with significant gaps. Perhaps most worrisome is that much of the work has focused on combatting misinformation, and, importantly, not disinformation. This distinction is subtle, but important. The bulk of the studies focused on the CIE, for instance, have focused on small journalistic errors in reporting (e.g., misrepresenting the cause of a fire) and have largely avoided issues characterized by more deliberate attempts to deceive and persuade. Of course, the major controversy surrounding false information has less to do with honest errors in writing and much more to do with deliberate attempts to deceive. The early retraction studies (e.g., refs. 58 and 87) have provided a strong foundation of initial findings, but we must push these further with highly partisan issues and audiences.

Related to the point above, relatively few studies have explored methods for inoculating individuals against mis/disinformation. As noted, progress has been made in this space with regard to issuing warnings about things like the CIE prior to misinformation exposure (58). Other work has explored factors like false-balance reporting and the use of “fake experts,” also with promising results (56, 57). While such work does not always completely prevent mis/disinformation from taking hold, it does present a promising avenue for better understanding the causes of mis/disinformation and ways to prevent its spread.

A third gap in the literature, one articulated by Lewandowsky et al. (6), has to do with the relative dearth of studies focused on individual-level differences that exacerbate or attenuate things like the CIE. The authors specifically reference intelligence, memory capacity and updating abilities, and tolerance for ambiguity as factors worthy of research attention. However, other factors, including elaborative processing, social monitoring, and a host of variables related to media use and literacy, also remain untested. Greater attention should also be paid to the role of emotion in both the processing of mis/disinformation and its spread (6).

A fourth gap in the literature has to do with better understanding the mechanisms that explain the persistence of mis/disinformation in our minds. Different pathways have been suggested for explaining why mis/disinformation is so difficult to combat. However, relatively few studies have attempted to test competing theories, instead choosing to speculate on explanations post hoc. The functional MRI work of Gordon et al. (71) is both an interesting approach and a promising step in furthering our understanding of information persistence. Without more definitive attempts to explain the process through which mis/disinformation seemingly infects our brains, we are doomed to continue the uphill battle against this content.

A common thread in much of the literature cited in this paper is a focus on individuals—typically everyday citizens—and their perceptions. Of course, mis/disinformation can also influence other populations, including political elites, the media, and funding organizations. Indeed, it is arguably most impactful when these audiences are reached as they represent potentially powerful pathways to political influence. Unfortunately, there is a relative dearth of work in this space, at least as compared to studies focused on individual perceptions. Notable exceptions can be found in some of the computational work focused on climate change countermovements (e.g., refs. 36 and 38–40). For example, Brulle (93) recently examined the network of political coalitions, including those in coal and oil and gas sectors, to better understand the organization and structure of a movement opposed to mandatory limits on carbon emissions. Further work focused on the nature and makeup of networks involved in the spread of false content is an especially fruitful path for future research.

Finally, it is worth noting that addressing any of the above gaps in the literature will be very difficult without paying greater attention to issues of conceptualization and operationalization that plague many of the key concepts in the space. Far too many studies have defined or measured misinformation in ways that are actually reflective of different concepts, including disinformation, ignorance, or misunderstandings. A necessary first step in improving our understanding of mis/disinformation impacts and combatting their negative effects, therefore, is to clearly and appropriately define what we mean by key terms and how we should be measuring them in empirical studies of the topic.

Footnotes

The author declares no competing interest.

This paper results from the Arthur M. Sackler Colloquium of the National Academy of Sciences, “Advancing the Science and Practice of Science Communication: Misinformation About Science in the Public Sphere,” held April 3–4, 2019, at the Arnold and Mabel Beckman Center of the National Academies of Sciences and Engineering in Irvine, CA. NAS colloquia began in 1991 and have been published in PNAS since 1995. From February 2001 through May 2019, colloquia were supported by a generous gift from The Dame Jillian and Dr. Arthur M. Sackler Foundation for the Arts, Sciences, & Humanities, in memory of Dame Sackler’s husband, Arthur M. Sackler. The complete program and video recordings of most presentations are available on the NAS website at http://www.nasonline.org/misinformation_about_science.

This article is a PNAS Direct Submission. A.H. is a guest editor invited by the Editorial Board.

*I will often employ the general label “mis/disinformation” in this work when referring to literatures or studies that are ambiguous or especially broad in their focus. I will reserve more specific terms like “disinformation” or “rumor” for use when discussing those specific studies that use that language.

Data Availability Statement.

There are no data associated with the paper.

References

1. Stelter B., “Big Tech and the midterms: The scary thing is what we still don’t know.” CNN Business (2018). https://www.cnn.com/2018/11/03/media/midterms-misinformation-fake-news/index.html. Accessed 18 December 2018.
2. Strauss V., Word of the year: Misinformation. Here’s why. The Washington Post, 10 December 2018. https://www.washingtonpost.com/education/2018/12/10/word-year-misinformation-heres-why/. Accessed 18 December 2018.
3. Meza S., “‘Fake news’ named word of the year.” Newsweek (2017). https://www.newsweek.com/fake-news-word-year-collins-dictionary-699740. Accessed 18 December 2018.
4. Morahan-Martin J., Anderson C. D., Information and misinformation online: Recommendations for facilitating accurate mental health information retrieval and evaluation. Cyberpsychol. Behav. 3, 731–746 (2000).
5. Oravec J. A., On-line medical information and service delivery: Implications for health education. J. Health Educ. 31, 105–110 (2000).
6. Lewandowsky S., Ecker U. K., Seifert C. M., Schwarz N., Cook J., Misinformation and its correction: Continued influence and successful debiasing. Psychol. Sci. Public Interest 13, 106–131 (2012).
7. Cook J., Understanding and countering climate science denial. J. Proc. R. Soc. N. S. W. 150, 207–219 (2017).
8. Ecker U. K., Hogan J. L., Lewandowsky S., Reminders and repetition of misinformation: Helping or hindering its retraction? J. Appl. Res. Mem. Cogn. 6, 185–192 (2017).
9. Lewandowsky S., Stritzke W. G. K., Freund A. M., Oberauer K., Krueger J. I., Misinformation, disinformation, and violent conflict: From Iraq and the “War on Terror” to future threats to peace. Am. Psychol. 68, 487–501 (2013).
10. Hernon P., Disinformation and misinformation through the internet: Findings of an exploratory study. Gov. Inf. Q. 12, 133–139 (1995).
11. Panatto D., Amicizia D., Arata L., Lai P. L., Gasparini R., A comprehensive analysis of Italian web pages mentioning squalene-based influenza vaccine adjuvants reveals a high prevalence of misinformation. Hum. Vaccin. Immunother. 14, 969–977 (2018).
12. Krishna A., Motivation with misinformation: Conceptualizing lacuna individuals and publics as knowledge-deficient, issue-negative activists. J. Public Relat. Res. 29, 176–193 (2017).
13. Nyhan B., Why the “death panel” myth wouldn’t die: Misinformation in the health care reform debate. Forum 8, 1–24 (2010).
14. Kuklinski J. H., Quirk P. J., Jerit J., Schwieder D., Rich R. F., Misinformation and the currency of democratic citizenship. J. Polit. 62, 790–816 (2000).
15. Hu W., Siegfried E. C., Siegel D. M., Product-related emphasis of skin disease information online. Arch. Dermatol. 138, 775–780 (2002).
16. Kalk N. J., Pothier D. D., Patient information on schizophrenia on the internet. Psychiatr. Bull. 32, 409–411 (2008).
17. Rowe G., Hawkes G., Houghton J., Initial UK public reaction to avian influenza: Analysis of opinions posted on the BBC website. Health Risk Soc. 10, 361–384 (2008).
18. Armfield J. M., When public action undermines public health: A critical examination of antifluoridationist literature. Aust. New Zealand Health Policy 4, 25 (2007).
19. Moran M. B., Lucas M., Everhart K., Morgan A., Prickett E., What makes anti-vaccine websites persuasive? A content analysis of techniques used by anti-vaccine websites to engender anti-vaccine sentiment. J. Commun. Healthc. 9, 151–163 (2016).
20. Oyeyemi S. O., Gabarron E., Wynn R., Ebola, twitter, and misinformation: A dangerous combination? BMJ 349, g6178 (2014).
21. Jin F., et al., Misinformation propagation in the age of Twitter. Computer 47, 90–94 (2014).
22. Bessi A., et al., Science vs conspiracy: Collective narratives in the age of misinformation. PLoS One 10, e0118093 (2015).
23. Bessi A., et al., Homophily and polarization in the age of misinformation. Eur. Phys. J. Spec. Top. 225, 2047–2059 (2016).
24. Basch C. H., Zybert P., Reeves R., Basch C. E., What do popular YouTube™ videos say about vaccines? Child Care Health Dev. 43, 499–503 (2017).
25. Donzelli G., et al., Misinformation on vaccination: A quantitative analysis of YouTube videos. Hum. Vaccin. Immunother. 14, 1654–1659 (2018).
26. Jaafar Z., Giam X., Misinformation and omission in science journalism. Trop. Conserv. Sci. 5, 142–149 (2012).
27. Fung I. C., et al., Social media’s initial reaction to information and misinformation on Ebola, August 2014: Facts and rumors. Public Health Rep. 131, 461–473 (2016).
28. Kangmennaang J., Osei L., Armah F. A., Luginaah I., Genetically modified organisms and the age of (Un) reason? A critical examination of the rhetoric in the GMO public policy debates in Ghana. Futures 83, 37–49 (2016).
29. Rojecki A., Meraz S., Rumors and factitious informational blends: The role of the web in speculative politics. New Media Soc. 18, 25–43 (2016).
30. Lavorgna L., et al., Fake news, influencers and health-related professional participation on the web: A pilot study on a social-network of people with multiple sclerosis. Mult. Scler. Relat. Disord. 25, 175–178 (2018).
31. Venkatraman A., Mukhija D., Kumar N., Nagpal S. J., Zika virus misinformation on the internet. Travel Med. Infect. Dis. 14, 421–422 (2016).
32. Sommariva S., Vamos C., Mantzarlis A., Đào L. U. L., Martinez Tyson D., Spreading the (fake) news: Exploring health messages on social media and the implications for health professionals using a case study. Am. J. Health Educ. 49, 246–255 (2018).
33. Crook B., Glowacki E. M., Suran M. K., Harris J., Bernhardt J. M., Content analysis of a live CDC twitter chat during the 2014 Ebola outbreak. Commun. Res. Rep. 33, 349–355 (2016).
34. Marcon A. R., Murdoch B., Caulfield T., Fake news portrayals of stem cells and stem cell research. Regen. Med. 12, 765–775 (2017).
35. Vargo C. J., Guo L., Amazeen M. A., The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media Soc. 20, 2028–2049 (2018).
36. Farrell J., Corporate funding and ideological polarization about climate change. Proc. Natl. Acad. Sci. U.S.A. 113, 92–97 (2016).
37. Grimmer J., King G., General purpose computer-assisted clustering and conceptualization. Proc. Natl. Acad. Sci. U.S.A. 108, 2643–2650 (2011).
38. Boussalis C., Coan T. G., Text-mining the signals of climate change doubt. Glob. Environ. Change 36, 89–100 (2016).
39. Farrell J., The growth of climate change misinformation in US philanthropy: Evidence from natural language processing. Environ. Res. Lett. 14, 034013 (2019).
40. Farrell J., Network structure and influence of the climate change counter-movement. Nat. Clim. Chang. 6, 370–374 (2016).
41. Vosoughi S., Mohsenvand M., Roy D., Rumor gauge: Predicting the veracity of rumors on twitter. ACM Trans. Knowl. Discov. Data 11 (2017).
42. Ratkiewicz J., Conover M. D., Meiss M., Flammini A., Menczer F., “Detecting and tracking political abuse in social media” in Proceedings of the 5th International Association for the Advancement of Artificial Intelligence Conference on Weblogs and Social Media (Association for the Advancement of Artificial Intelligence, Menlo Park, CA, 2011), pp. 297–304.
43. Rykov Y. G., Meylakhs P. A., Sinyavskaya Y. E., Network structure of an AIDS-denialist online community: Identifying core members and the risk group. Am. Behav. Sci. 61, 688–706 (2017).
44. Jang S. M., et al., A computational approach for examining the roots and spreading patterns of fake news: Evolution tree analysis. Comput. Human Behav. 84, 103–113 (2018).
45. Tong G., Wu W., Du D. Z., Distributed rumor blocking with multiple positive cascades. IEEE Trans. Computat. Soc. Syst. 5, 468–480 (2018).
46. Ahmad F., Hudak P. L., Bercovitz K., Hollenberg E., Levinson W., Are physicians ready for patients with Internet-based health information? J. Med. Internet Res. 8, e22 (2006).
47. Dilley S. E., Peral S., Straughn J. M. Jr, Scarinci I. C., The challenge of HPV vaccination uptake and opportunities for solutions: Lessons learned from Alabama. Prev. Med. 113, 124–131 (2018).
48. Silver A., Matthews L., The use of Facebook for information seeking, decision support, and self-organization following a significant disaster. Inf. Commun. Soc. 20, 1680–1697 (2017).
49. Kalichman S. C., et al., Use of dietary supplements among people living with HIV/AIDS is associated with vulnerability to medical misinformation on the internet. AIDS Res. Ther. 9, 1 (2012).
50. Dudo A., Besley J. C., Scientists’ prioritization of communication objectives for public engagement. PLoS One 11, e0148867 (2016).
51. Avery E. J., Public information officers’ social media monitoring during the Zika virus crisis, a global health threat surrounded by public uncertainty. Public Relat. Rev. 43, 468–476 (2017).
52. Sundstrom B., et al., Protecting the next generation: Elaborating the health belief model to increase HPV vaccination among college-age women. Soc. Mar. Q. 21, 173–188 (2015).
53. Groshek J., et al., Media consumption and creation in attitudes toward and knowledge of inflammatory bowel disease: Web-based survey. J. Med. Internet Res. 19, e403 (2017).
54. Motta M., Callaghan T., Sylvester S., Knowing less but presuming more: Dunning-Kruger effects and the endorsement of anti-vaccine policy attitudes. Soc. Sci. Med. 211, 274–281 (2018).
55. Widman C. A., et al., Clinician and parent perspectives on educational needs for increasing adolescent HPV vaccination. J. Cancer Educ. 33, 332–339 (2018).
56. Cook J., Lewandowsky S., Ecker U. K. H., Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS One 12, e0175799 (2017).
57. van der Linden S. L., Leiserowitz A. A., Rosenthal S. A., Feinberg G. D., Maibach E. W., Inoculating the public against misinformation about climate change. Global Challenges 1, 1600008 (2017).
58. Ecker U. K., Lewandowsky S., Tang D. T., Explicit warnings reduce but do not eliminate the continued influence of misinformation. Mem. Cognit. 38, 1087–1100 (2010).
  • 59.Ecker U. K., Lewandowsky S., Cheung C. S. C., Maybery M. T., He did it! She did it! No, she did not! Multiple causal explanations and the continued influence of misinformation. J. Mem. Lang. 85, 101–115 (2015). [Google Scholar]
  • 60.Swire B., Ecker U. K. H., Lewandowsky S., The role of familiarity in correcting inaccurate information. J. Exp. Psychol. Learn. Mem. Cogn. 43, 1948–1961 (2017). [DOI] [PubMed] [Google Scholar]
  • 61.Bode L., Vraga E. K., In related news, that was wrong: The correction of misinformation through related stories functionality in social media. J. Commun. 65, 619–638 (2015). [Google Scholar]
  • 62.Vraga E. K., Bode L., Using expert sources to correct health misinformation in social media. Sci. Commun. 39, 621–645 (2017). [Google Scholar]
  • 63.Bode L., Vraga E. K., See something, say something: Correction of global health misinformation on social media. Health Commun. 33, 1131–1140 (2018). [DOI] [PubMed] [Google Scholar]
  • 64.Nyhan B., Reifler J., Richey S., Freed G. L., Effective messages in vaccine promotion: A randomized trial. Pediatrics 133, e835–e842 (2014). [DOI] [PubMed] [Google Scholar]
  • 65.Nyhan B., Reifler J., The effect of fact-checking on elites: A field experiment on U.S. state legislators. Am. J. Pol. Sci. 59, 628–640 (2015). [Google Scholar]
  • 66.Berinsky A. J., Rumors and health care reform: Experiments in political misinformation. Br. J. Polit. Sci. 47, 241–262 (2017). [Google Scholar]
  • 67.Nyhan B., Reifler J., Displacing misinformation about events: An experimental test of causal corrections. J. Exp. Political Sci. 2, 81–93 (2015). [Google Scholar]
  • 68.Vraga E. K., Bode L., I do not believe you: How providing a source corrects health misperceptions across social media platforms. Inf. Commun. Soc. 21, 1337–1353 (2018). [Google Scholar]
  • 69.Goldfarb J. L., Kriner D. L., Building public support for science spending: Misinformation, motivated reasoning, and the power of corrections. Sci. Commun. 39, 77–100 (2017). [Google Scholar]
  • 70.Swire B., Berinsky A. J., Lewandowsky S., Ecker U. K., Processing political misinformation: Comprehending the Trump phenomenon. R. Soc. Open Sci. 4, 160802 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71.Gordon A., Brooks J. C. W., Quadflieg S., Ecker U. K. H., Lewandowsky S., Exploring the neural substrates of misinformation processing. Neuropsychologia 106, 216–224 (2017). [DOI] [PubMed] [Google Scholar]
  • 72.Ecker U. K., Lewandowsky S., Fenton O., Martin K., Do people keep believing because they want to? Preexisting attitudes and the continued influence of misinformation. Mem. Cognit. 42, 292–304 (2014). [DOI] [PubMed] [Google Scholar]
  • 73.Ecker U. K., Lewandowsky S., Swire B., Chang D., Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction. Psychon. Bull. Rev. 18, 570–578 (2011). [DOI] [PubMed] [Google Scholar]
  • 74.Johnson H. M., Seifert C. M., Sources of the continued influence effect: When misinformation in memory affects later inferences. J. Exp. Psychol. Learn. Mem. Cogn. 20, 1420–1436 (1994). [Google Scholar]
  • 75.Wilkes A. L., Leatherbarrow M., Editing episodic memory following the identification of error. Q. J. Exp. Psychol. 40, 361–387 (1988). [Google Scholar]
  • 76.Ayers M. S., Reder L. M., A theoretical review of the misinformation effect: Predictions from an activation-based memory model. Psychon. Bull. Rev. 5, 1–21 (1998). [Google Scholar]
  • 77.Schaffner B. F., Roche C., Misinformation and motivated reasoning: Responses to economic news in a politicized environment. Public Opin. Q. 81, 86–110 (2017). [Google Scholar]
  • 78.Taber C. S., Lodge M., Motivated skepticism in the evaluation of political beliefs. Am. J. Pol. Sci. 50, 755–769 (2006). [Google Scholar]
  • 79.Jacobson G. C., Perception, memory, and partisan polarization on the Iraq war. Polit. Sci. Q. 125, 31–56 (2010). [Google Scholar]
  • 80.Schmidt A. L., Zollo F., Scala A., Betsch C., Quattrociocchi W., Polarization of the vaccination debate on Facebook. Vaccine 36, 3606–3612 (2018). [DOI] [PubMed] [Google Scholar]
  • 81.Pham C. V., Thai M. T., Duong H. V., Bui B. Q., Hoang H. X., Maximizing misinformation restriction within time and budget constraints. J. Comb. Optim. 35, 1202–1240 (2018). [Google Scholar]
  • 82.Saxena C., Doja M. N., Ahmad T., Group based centrality for immunization of complex networks. Physica A 508, 35–47 (2018). [Google Scholar]
  • 83.Zhang E., Wang G., Gao K., Yu G., Finding critical blocks of information diffusion in social networks. World Wide Web 18, 731–747 (2015). [Google Scholar]
  • 84.van Oostendorp H., Bonebakker C., “Difficulties in updating mental representations during reading news reports” in The Construction of Mental Representations During Reading, Oostendorp Hv., Goldman S. R., Eds. (Erlbaum, Hillsdale, NJ, 1999), pp. 319–333. [Google Scholar]
  • 85.Wilkes A. L., Reynolds D. J., On certain limitations accompanying readers’ interpretations of corrections in episodic text. Q. J. Exp. Psychol. 52A, 165–183 (1999). [Google Scholar]
  • 86.Bush J. G., Johnson H. M., Seifert C. M., “The implications of corrections: Then why did you mention it?” in Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, Ram A., Eiselt K., Eds. (Erlbaum, Hillsdale, NJ, 1994), pp. 112–117. [Google Scholar]
  • 87.Ecker U. K., Lewandowsky S., Apai J., Terrorists brought down the plane!–No, actually it was a technical fault: Processing corrections of emotive information. Q J Exp Psychol (Hove) 64, 283–310 (2011). [DOI] [PubMed] [Google Scholar]
  • 88.van Oostendorp H., Updating situation models derived from newspaper articles. Medienpsychologie 8, 21–33 (1996). [Google Scholar]
  • 89.Chambers K. L., Zaragoza M. S., Intended and unintended effects of explicit warnings on eyewitness suggestibility: Evidence from source identification tests. Mem. Cognit. 29, 1120–1129 (2001). [DOI] [PubMed] [Google Scholar]
  • 90.Schwarz N., Sanna L. J., Skurnik I., Yoon C., Metacognitive experiences and the intricacies of setting people straight: Implications for debiasing and public information campaigns. Adv. Exp. Soc. Psychol. 39, 127–161 (2007). [Google Scholar]
  • 91.Nyhan B., Reifler J., When corrections fail: The persistence of political misperceptions. Polit. Behav. 32, 303–330 (2010). [Google Scholar]
  • 92.Fein S., McCloskey A. L., Tomlinson T. M., Can the jury disregard that information? The use of suspicion to reduce the prejudicial effects of pretrial publicity and inadmissable testimony. Pers. Soc. Psychol. Bull. 23, 1215–1226 (1997). [Google Scholar]
  • 93.Brulle R. J., Networks of opposition: A structural analysis of U.S. climate change countermovement coalitions 1989–2015. Sociol. Inq., 10.1111/soin.12333 (2019). [DOI] [Google Scholar]


Data Availability Statement

There are no data associated with the paper.

