PLOS ONE. 2022 Apr 28;17(4):e0267463. doi: 10.1371/journal.pone.0267463

Vaccination against misinformation: The inoculation technique reduces the continued influence effect

Klara Austeja Buczel 1, Paulina D Szyszka 1, Adam Siwiak 1, Malwina Szpitalak 1, Romuald Polczyk 1,*
Editor: Margarida Vaz Garrido
PMCID: PMC9049321  PMID: 35482715

Abstract

The continued influence effect of misinformation (CIE) is a phenomenon in which certain information, although retracted and corrected, still has an impact on event reporting, reasoning, inference, and decisions. The main goal of this paper is to investigate to what extent this effect can be reduced using the inoculation procedure and how it is moderated by the reliability of the corrections’ sources. The results show that the reliability of the corrections’ sources did not affect their processing when participants were not inoculated. However, inoculated participants relied on misinformation less when the correction came from a highly credible source. For this source condition, inoculation also produced a significant increase in belief in the retraction as well as a decrease in belief in the misinformation. Contrary to previous reports, belief in misinformation rather than belief in retraction predicted reliance on misinformation. These findings are of great practical importance, as certain boundary conditions for the efficiency of inoculation in reducing the continued influence of misinformation have been discovered, and of theoretical importance, as they provide insight into the mechanisms behind CIE. The results are interpreted in terms of existing CIE theories as well as within the remembering framework, which describes the conversion from memory traces to behavioral manifestations of memory.

Introduction

Continued influence effect (CIE)

There are numerous cases where a certain belief persists despite being proven wrong. The widespread myth that vaccines cause autism in children is one of the most well-known examples of this phenomenon [1]. Misinformation has become a significant concern in contemporary society [2]; the strategies used to oppose it do not deliver the expected results, and ongoing research suggests that corrections are not sufficient to reduce the effects of misinformation [3–5].

In psychology, the influence of retracted information on reasoning and decisions is called the continued influence effect (CIE) [5–7]. In a typical CIE procedure, participants are presented with a scenario about a fictional event (e.g. a warehouse fire, jewelry theft, or a bus accident) in which certain information is given and then retracted and/or corrected [8]. Although it might seem that respondents, aware of the retraction, should formulate conclusions without the misinformation, the retraction only slightly reduces reliance on it [8–18]. Sometimes the retraction seems to be completely ineffective [6, 19] and, in some cases, it can even cause a paradoxical increase in reliance on misinformation [20]; see, however, e.g. [21, 22]. There is also a tendency for people to believe more in misinformation than in corrections, regardless of the sources of the retractions [19, 23].

Two leading cognitive explanations of CIE have been proposed, which are not exclusive and may be complementary. Both focus on memory mechanisms, assuming that some kind of memory error is responsible for the phenomenon. The first of the two main theoretical approaches assumes a failure in updating and integrating information in the mental models of unfolding events [6, 8, 24–27]. A mental model is defined as a representation based on one’s available information and knowledge, built on the principle of cause-effect relationships [28]. Mental models are not static structures: they can be updated locally, where minor elements are added to the existing model, or globally, where the whole model is reconstructed and a new one is created [29, 30]. In the case of CIE, it is assumed that model updating leads to local consistency, but conclusions are drawn globally, leading to errors due to the local persistence of misinformation [25]. Johnson & Seifert [6] proposed that misinformation is maintained in the model because of its causal role in the described event–when information central to the model is invalidated, a gap arises that renders the model inconsistent. One possible solution to this problem is keeping the discredited misinformation in the model because of its role in filling the gap. In other words, one prefers a coherent but inaccurate model to an incoherent but accurate one; this consequently makes an answer consistent with the misinformation better than no answer at all, even though it is potentially erroneous. Therefore, it has been proposed to conceptualize CIE as the result of a failure to integrate information in the model when the retraction is processed [31–35].

The second theoretical approach argues that CIE arises from selective retrieval of misinformation from memory [10, 12, 17, 36–38]. According to this approach, both the misinformation and its retraction are stored simultaneously in memory; consequently, CIE occurs when there is insufficient suppression of misinformation. One possible explanation within this approach is based on the distinction between automatic and strategic processes [39]. The former rely on the context-free recognition of previously encountered information, usually of a general nature, while the latter allow the extraction of additional content, e.g. the context, source, and truthfulness of the information, and require the use of cognitive resources. It is assumed [40] that individual pieces of information in memory compete for activation and that only those that win the competition are available. At the same time, associations between the information and its context (e.g. source) are not accessible, and strategic processes must be used to gain access to them. Thus, CIE occurs when misinformation wins the competition in activation and is retrieved automatically, e.g. as a result of the presence of an appropriate cue, while controlled processes are disrupted and fail to retrieve the correction [36]. This kind of automatic retrieval could rely on familiarity and fluency, thus leading to the illusory truth effect, where familiar (e.g. repeated) information is more likely to be perceived as true [41]; when misinformation or any cue connected with it appears, the chance of activating it increases. Alternatively, according to negation processing models, information being encoded is always initially treated as if it were true and is only subsequently falsified by attaching a negation tag [42]. However, negation tag retrieval uses cognitive resources, so when strategic processes fail, one can "lose" the negation tag, retrieving only the misinformation.

Besides memory factors, non-memory factors such as motivational ones (e.g. a person’s attitudes and worldview) are also taken into account in explaining CIE. It has been proposed, for example, that motivated reasoning [43] is responsible for misinformation compliance when the misinformation relates to political views or prejudices [9, 11, 13, 20, 44]; however, see: [45–48]. Furthermore, more skeptical people tend to reject misinformation and accept retractions more readily [49]. It is also possible that one does not accept the retraction because it seems less reliable than the misinformation [19, 23] or comes from an unreliable source [16, 19, 50]. Connor Desai et al. [51] even argued that if the misinformation comes from a source that is perceived to be more reliable than the source of the retraction, it may be considered rational to rely on the misinformation. Recent results obtained by Susmann & Wegener [52] also indicate that motivational factors may play a key role alongside cognitive factors even when the research material is entirely neutral.

From a practical point of view, identifying effective techniques for limiting the impact of misinformation in the CIE paradigm seems extremely important. As summarized in 2012 by Lewandowsky et al. [7], three factors that may increase the effectiveness of retractions have been identified: warning against exposure to misinformation [10], repetition of the retraction [12], and an alternative to the misinformation that fills the causal gap left by the retraction [6] (for countering non-laboratory misinformation, see, e.g.: [53]). The most effective technique appears to be a combination of a warning and an alternative, which reduces reliance on misinformation to a greater extent than either component alone [10]. Unfortunately, all these techniques are problematic for several reasons. First, none of them can eliminate the impact of misinformation, and sometimes the alternative is not effective at all [23, 26]. Repetition of a retraction does not reduce CIE more than a single retraction when the misinformation is presented once; however, repeating the retraction is more effective than a single retraction when the misinformation is repeated several times [12]. These techniques are also subject to critique when it comes to their practical use. Finding a credible alternative in non-laboratory conditions is often impossible [7], and warning people before they encounter specific misinformation is rather unrealistic [54]. Therefore, further exploration of these factors seems necessary, focusing especially on significant practical applications. In this study, we present an experiment that investigates the effect of inoculation on CIE.

Inoculation theory

In inoculation theory [55, 56], a metaphor of biological vaccination is used to illustrate a mechanism of immunization against persuasion: as in the case of an infection, people can be "vaccinated" against social influence by being exposed to a weaker kind of persuasive attack. Similar to biological vaccination, the psychological inoculation procedure leads to the production of "antibodies" in the form of counterarguments that can immunize an individual against subsequent persuasion and protect their attitudes from change [56].

The inoculation procedure consists of two basic components: Warning and Refutational Preemption. The Warning makes it visible to the individual that there are possible arguments for changing their attitude, which produces a feeling of threat. According to the theory, to activate the motivational processes responsible for attitude reinforcement, a person undergoing inoculation must feel that one of their beliefs is in danger. Without this feeling of threat, proper inoculation would be impossible [55–59]. In addition to the sense of threat, factors such as involvement in defending an attitude [60], the emotion of anger [61], and the accessibility of the target attitude in one’s memory [60, 62] play a crucial role in inoculating against persuasion. However, although necessary, the sense of threat triggered by a warning is not sufficient for producing effective resistance to persuasion–a Refutational Preemption also has to be made, i.e. exposure to arguments against one’s attitude followed by refutation of these arguments, which strengthens one’s attitude through the production of counterarguments to the content of future persuasion [55, 58, 60, 61, 63].

Regardless of whether these counterarguments are the same as or different from the content of the persuasive message, they can protect against persuasion [56, 58, 61, 62, 64]. The inoculation procedure can be processed actively (participants write an essay containing counterarguments) or passively (participants read counterarguments prepared by the experimenter); both forms of processing have similar effectiveness [64]. Furthermore, the protective effects of inoculation appear to be highly stable over time and may persist for weeks or even months [60, 63, 65]; see, however: [64, 66]. Inoculation treatments are also able to protect attitudes that were not themselves the target of the vaccination but are related to the target attitude [67, 68]. Additionally, inoculation is effective not only in generating resistance to a change in beliefs but also in shaping and directing them–it can help develop attitudes in the desired direction among people who have a neutral or negative attitude toward a given issue [69]. Inoculation can also cause people to engage in so-called "post-inoculation talk", in which they share with acquaintances both their impressions and thoughts related to the inoculation process and the acquired knowledge, while also questioning the opinions of interlocutors that are inconsistent with the defended attitude [70, 71]. Therefore, the therapeutic effects of inoculation are not limited to individuals and may contribute to the production of immunity within communities.

Inoculation is widely used in contexts where social influence can occur, e.g. in political, health, and social campaigns, public relations, or marketing [55]. Notably, in recent years an effort has been made to inoculate people against conspiracy theories [72–74] and misinformation, especially in the context of climate change. For example, Cook et al. [75] used inoculation against misinformation introduced in the form of a false balance, where two positions on a given issue with disproportionate support (e.g. the scientific consensus of climate scientists vs a single scientist disagreeing with it) are nevertheless perceived by people as equally credible. The researchers presented the misinformation, but beforehand participants read a text that explained the flawed argumentation technique used by the source of the misinformation (i.e. an explanation of the misinformation strategy used by the tobacco industry) and highlighted the scientific consensus on climate change. The results showed that the inoculation successfully neutralized the negative effects of the misinformation. van der Linden et al. [76], on the other hand, presented participants with a set of statements often used by climate denialists, which they were asked to assess in terms of familiarity and persuasiveness. Beforehand, some respondents were warned about a potential attack on their attitudes and received a set of charts, graphics, and information confirming the scientific consensus on climate change. The experimenters also discredited some seemingly credible sources of misinformation that can be found in real-world contexts. This study also managed to reduce the harmful effects of misinformation by using the inoculation technique, which was more effective than providing information about the scientific consensus alone. Inoculation based on methods of critical thinking is also effective in counteracting misinformation [77]. Roozenbeek & van der Linden [78] developed a psychological intervention based on the assumptions of inoculation theory in the form of the online browser game Bad News, in which the player takes the role of a news editor whose task is to create fake news using the techniques employed in creating and disseminating misinformation. By creating various types of fake news, players learn about the mechanisms of misinformation and, at the same time, undergo the inoculation process themselves, which, consequently, reduces their own vulnerability to misinformation.

Although inoculating against misinformation has been an important and interesting research topic for several years, the studies cited above are limited to interventions against misinformation already existing in the public space. Although for many–especially practical–reasons this seems to be the right endeavor, it is also necessary to examine the influence of inoculation on misinformation under laboratory conditions.

In a recent study, Tay et al. [79] used fictional (but based on a real-world topic) misinformation about fair trade. They investigated the effectiveness of inoculation delivered prospectively, i.e. before the misinformation is presented (prebunking), and retrospectively, i.e. after the misinformation is presented (debunking). Although both types of intervention seemed effective in reducing CIE, contrary to the researchers’ expectations it turned out that retrospective inoculation was more effective than prospective inoculation (also: [4, 80]; see, however: [74, 81]). In addition to the questionnaire measures, the impact of misinformation on a number of behavioral indicators (here: consumer behavior) was also verified; however, the differences between conditions were small or statistically insignificant. Moreover, while this study used fictional material, the procedure was not entirely the "typical" CIE paradigm used in strictly laboratory studies, which takes the form of a narrative scenario (e.g. [6, 8, 10]; cf. [82]). Meanwhile, we think that investigating the effects of inoculation on CIE in the "classic" narrative paradigm is useful because the potentially beneficial effects of "vaccination" on neutral materials could broaden the scope for generalizing the results to a variety of contexts and also offer insight into the mechanisms behind CIE.

If inoculation were able to affect fictitious, neutral misinformation, it could mean that it is also capable of modulating memory processes. Since inoculation can increase the availability of attitudes [60, 62], thus influencing resistance to persuasion, similar effects can be expected in counteracting the influence of misinformation. Counterarguments might help increase the availability of the correction, making it easier to retrieve. This may in turn lead people to filter strongly activated but incorrect information more carefully. Alternatively, though not contrary to the previous interpretation, the possible therapeutic effect of inoculation could mean that misinformation processes are not purely due to memory error. While there is no doubt that memory mechanisms are involved in CIE, participants remember the retraction perfectly well in the vast majority of cases. It is possible, though, that other cognitive processes are crucial–processes that could lead to the creation of something like "memory attitudes", i.e. the relationship of information with its evaluation [83, 84]. Since inoculation affects attitudes, its potential influence on CIE could provide support for the idea that CIE is more the result of processes other than memory errors.

One factor may prove useful in evaluating this interpretation. As mentioned previously, source reliability plays a great role in the context of CIE. Guillory & Geraci [16] examined two aspects of source reliability: trustworthiness and expertise [85]. It turned out that only retractions from a source high in trustworthiness were effective, while the expertise dimension did not influence the effectiveness of corrections. This effect was later replicated in other research [50], even when the expert source was operationalized differently (as being able to make accurate statements based on competence and knowledge resulting from experience and education [19]; in contrast, Guillory & Geraci [16] assumed that the expert source was simply able to access true information). As the information source significantly influences persuasion and belief formation [85–87], potentially different effects of inoculation on reliance on misinformation when the source reliability of retractions differs could support the idea of "memory attitudes". Inoculation could prompt respondents to pay more attention to the sources of corrections, which are sometimes overlooked in other situations [88]. If non-inoculated participants rely only on trustworthiness, ignoring the expertise dimension, vaccination could make participants more aware of all source characteristics. Thus, a significant reduction in misinformation reliance and a decrease in belief in misinformation may be expected when one recognizes that the source is truly credible (i.e. high in both dimensions), but not when the source is only trustworthy or only expert. In the latter cases, inoculation could be expected to have no significant impact in reducing misinformation reliance and belief.

This study aims to investigate the impact of inoculation on the processing of misinformation and retractions whose sources differ in reliability. The experiment was based on the method used by Ecker & Antonio [19]. The subjects read sets of scenarios, answered open-ended questions, and rated their belief in the misinformation and the retractions; half of the participants had previously undergone the inoculation procedure. As replication hypotheses concerning misinformation reliance, we expected the CIE to occur (e.g. [6, 10, 17]) (Hypothesis 1), as well as retractions to be effective only from a trustworthy but not an expert source [16, 19, 50] (Hypothesis 2). We also expected different vaccination effects depending on the reliability of the sources of the retractions: inoculated individuals should rely less on misinformation if the source of the correction was credible in both dimensions, but not if it was only expert or only trustworthy (Hypothesis 3). This is because inoculation may induce participants to pay more attention to the sources of retractions–in other situations people tend to ignore the sources [88] or to be guided only by the trustworthiness dimension [16, 19, 50]. Thus, vaccinated participants may analyze source credibility and may not trust retractions from sources that are not both highly trustworthy and highly expert.

When it comes to belief estimates, we expected to replicate the tendency to believe more in misinformation than in retraction regardless of the reliability of the source [19, 23] (Hypothesis 4). However, as inoculation can shape one’s convictions in a direction consistent with the protected attitude [69], we expected to observe a reversal of this tendency, but only for highly credible sources (Hypothesis 5). The reasons are the same as for Hypothesis 3, i.e. inoculation could prompt participants to analyze the retractions’ sources more thoroughly and increase their belief in truly credible information (high in both dimensions). Confirmation of these hypotheses would support the idea that CIE results from non-memory factors, such as attitudes, rather than from memory-like mechanisms.

Method

The study was approved by the Research Ethics Committee of the Institute of Psychology, Jagiellonian University, Kraków, Poland. Decision no.: KE/29_2021.

Design

The experiment used a mixed within-between subjects design. The within-subjects factor was the scenario condition, comprising three types of retraction source reliability: (1) credible source (high in expertise and high in trustworthiness), (2) expert source (high in expertise but low in trustworthiness), and (3) trustworthy source (high in trustworthiness but low in expertise), plus two control conditions: a no-retraction condition and a no-misinformation condition (and therefore also no correction). The between-subjects factor was inoculation: its presence or absence.

Participants were presented with one scenario for each of the five conditions (for more details see: Materials & procedure), except for the no-misinformation condition, which included two scenarios per participant (to equalize the number of scenarios with and without retraction). Participants’ reliance on the critical information was measured using an open-ended questionnaire (30 open questions; 5 per scenario) that required inferential reasoning. In addition, six questions directly measured participants’ belief in the critical information and their belief in the retraction (two questions for each scenario that included a retraction).

Power analysis

An a priori power analysis was performed to determine the sample size necessary to detect a significant within-between interaction, with α assumed to be 0.05 and desired power of 95%. The analysis was performed using the G*Power software [89], with the effect size specification option set to ‘as in SPSS’. The analysis was performed for the three commonly assumed effect sizes: small, η2 = 0.01 (f(U) = 0.10); medium, η2 = 0.06 (f(U) = 0.25); and large, η2 = 0.14 (f(U) = 0.40). Under these assumptions, the analysis indicated that 464, 76, and 32 participants were required, respectively. Given the available resources, a sample of 141 participants was tested, which provided the desired power for large and medium effect sizes, but not for small ones. The power for detecting specific planned comparisons may be somewhat lower; still, the sample was almost twice the size required to detect a medium-sized interaction, so it should be sufficient to detect medium-sized planned contrasts.
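
The η2-to-f conversion behind the ‘as in SPSS’ option can be reproduced with the standard formula f = √(η2 / (1 − η2)). A minimal sketch (our illustration only; the required sample sizes themselves come from G*Power):

```python
# Convert partial eta squared to Cohen's f, the effect-size metric
# G*Power expects; reproduces the f(U) values quoted in the text.
from math import sqrt

for label, eta_sq in [("small", 0.01), ("medium", 0.06), ("large", 0.14)]:
    f = sqrt(eta_sq / (1 - eta_sq))
    print(f"{label}: eta^2 = {eta_sq:.2f} -> f = {f:.2f}")
# small: eta^2 = 0.01 -> f = 0.10
# medium: eta^2 = 0.06 -> f = 0.25
# large: eta^2 = 0.14 -> f = 0.40
```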

Participants

Participants (N = 141) were recruited online via social media, survey distribution websites, and from among Polish university students. The sample consisted of 108 female, 29 male, and 4 non-binary participants (Mage = 24.62; SD = 7.46; range 18–68 years). Four people were excluded from the analysis due to a lack of answers in the test, resulting in a final sample of 137 participants (106 female, 28 male, and 3 non-binary; Mage = 24.59; SD = 7.56; range 18–68). Most of the participants were students (70%), 24% had a higher education degree, and 6% had finished only secondary school. No compensation was given for participation. Written consent was obtained; this method of obtaining consent was approved by the Research Ethics Committee mentioned above.

Materials & procedure

The study was conducted using Qualtrics (Provo, UT) and was advertised as a survey on the "Memory of narrative texts experiment". The entire experiment took approximately 30 minutes to complete. Participants first read an intentionally misleading description of the research (they were informed that the research investigated how people process and remember narrations); they then read the ethics-approved information, provided consent, and agreed to participate. After providing information on gender, age, and education, they were randomly allocated to one of the 12 possible survey versions (inoculation/no inoculation × 6 combinations of scenarios, depending on the layout of conditions).

Six scenarios were constructed based on those previously used in CIE research. Most of them were shortened and modified versions of scenarios used before (water contamination, jewelry theft, warehouse fire, football scandal), and the other two were only inspired by such scenarios (car accident, politician dismissal). The misinformation was introduced indirectly to increase its strength [17]. The content of sentences other than those introducing the misinformation and retraction was constructed such that the misinformation could explain the events described. The sources for the three retraction conditions were chosen based on those used by Ecker & Antonio [19] and Guillory & Geraci [16]. The expert source was operationalized, following Guillory & Geraci [16], as a source with access to information (not as a result of professionalism, but simply because of the possibility of accessing true information); the trustworthy source was operationalized as a source with presumably good intentions but without access to true information. The credible source was operationalized as a source high in both trustworthiness and expertise (usually an independent professional). Examples of the sources are presented in Table 1.

Table 1. Examples of the sources (in the Football affair scenario*).

Source | Example | Message content
Credible | Director of the International Anti-Doping Committee | Three days later, Olivier Estevez, director of the International Anti-Doping Committee, announced that Larsson was not involved in the doping affair (…)
Trustworthy | Popular sports commentator | Oliver Lindgren, a popular sports commentator, stated that Larsson was not involved in the doping affair (…)
Expert | Footballer’s manager | Oliver Lindgren, Larsson’s manager, stated that the player was not involved in the doping affair (…)

*Adapted from Ecker & Antonio [19].

In the no-retraction condition, the target information was introduced without being corrected later; in the no-misinformation condition, the target information was never mentioned and there was also no correction. All scenarios existed in both experimental and control versions. As in the Ecker & Antonio [19] study, the assignment of scenarios to conditions and the presentation order of conditions were counterbalanced across participants (a rotation-based scheme of the kind sketched below). Each scenario consisted of 4 fragments–displayed individually on the screen without the possibility of going back to previous ones–containing 1–2 sentences and a total of 30–60 words each.
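
The paper does not specify the exact counterbalancing scheme; purely for illustration, a rotation-based (Latin-square-style) assignment of the six scenarios to the five conditions (with no-misinformation used twice) would yield the 12 survey versions mentioned above. A hypothetical sketch:

```python
# Hypothetical illustration of rotation-based counterbalancing; the
# scenario labels and the rotation scheme are our assumptions, not the
# authors' actual procedure.
scenarios = ["water", "jewelry", "fire", "football", "car", "politician"]
conditions = ["noRetr", "Credible", "Expert", "Trustworthy",
              "noMis", "noMis"]  # no-misinformation appears twice

versions = []
for shift in range(len(scenarios)):  # 6 rotations of the scenario list
    rotated = scenarios[shift:] + scenarios[:shift]
    versions.append(dict(zip(rotated, conditions)))

survey_versions = [(inoc, v) for inoc in ("inoculation", "control")
                   for v in versions]
print(len(survey_versions))  # 12 versions, one randomly assigned per participant
```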

The main dependent variable was the number of references to the target information in the open-ended questions. There were five questions for each scenario. The questions were not based solely on memory for facts but required the participant to make inferences. They were constructed such that the critical information was a possible answer while also allowing for unrelated responses (e.g. "What was the possible cause of the toxic fumes?", for which possible answers would be "The oil paints present in the cabinet", the (mis)information present in the warehouse fire scenario, or an unrelated response such as "Plastic from printers"). The second type of question consisted of rating scales requiring participants to indicate their level of agreement with a statement on a scale from 0 to 10. These questions concerned only the scenarios in the retraction conditions and were used to directly measure the participant’s belief in the misinformation and in the retraction introduced by a specific source (e.g. in a traffic accident scenario where the driver was accused of drunk driving: "To what extent do you believe that the driver was driving under the influence of alcohol?" and "How much do you believe the opinion of the passenger of the car, the driver’s colleague, that the driver was not drunk?"). Each participant rated their belief on six scales (two for each of the retraction conditions; one for belief in the misinformation and the other for belief in the retraction). All questions (both open-ended and rating scales) were presented individually on the screen without the possibility of going back. As an intervening task between reading the scenarios and answering the test questions, we used filler questions about mood.

Half of the participants also underwent the inoculation procedure, which was designed in accordance with the theoretical assumptions of inoculation theory. The procedure consisted of two main parts: (1) Warning and (2) Refutational Preemption. The Warning (specifically, a specific rather than a general warning) was adapted from a study by Ecker et al. [10] and slightly modified. The warning also foreshadowed the presence of an example of misinformation that the participant should be careful about. The second stage, the Refutational Preemption, consisted of two elements. First, a short scenario was presented that described a fictional debate about banning a potentially harmful substance in food production; it was later revealed that various research teams had independently confirmed that the substance was in fact completely safe.

After the presentation of the inoculation scenario, participants were asked to estimate their belief that the described substance should be banned from use (on a 0–10 scale). Depending on the answer, the participant received short feedback. For ratings of 0 or 1, participants were informed of their low misinformation compliance. For ratings of 2–4, participants were informed of their medium misinformation compliance: the message noted both some resistance to and some acceptance of the misinformation. For higher ratings (5–10), participants were informed of a significant impact of misinformation on them. The second stage of the Refutational Preemption followed for all participants in the inoculation condition. Examples of misinformation, their debunking, and the mechanisms of reliance on misinformation were presented. Participants were also made aware of the omnipresence of misinformation and the importance of its impact on every aspect of the lives of individuals and societies. Finally, respondents were informed that they would now see several scenarios in which misinformation may or may not occur and that they should answer questions honestly and be careful not to be misinformed.

Results

Inoculation question ratings

The average estimate of the belief in the misinformation in the inoculation procedure, i.e. the assessment of the extent to which the described substance is harmful and should be banned in food production, was M = 3.61 (SD = 2.88).

Questionnaire coding

The open-ended inference questions were coded according to the scheme described below. Any unambiguous reference to the critical information was scored 1 point (e.g. "The explosions were caused by gas cylinders left in the closet"). The same applied to cases in which the answer could result from the presence of the misinformation even though the misinformation was not mentioned explicitly (e.g. "They should pay attention to the closet in the warehouse" in response to "On what aspect of the fire may the police want to continue the investigation?"). Responses were scored 0 when the critical information was not mentioned or was explicitly rejected (e.g. "The toxic fumes came from burning plastic." or "The driver may have been drunk, but the allegations turned out to be untrue since no traces of alcohol or drugs were discovered in his blood."). The results for the no-misinformation condition were averaged over the two scenarios per person. The maximum score a participant could receive in one condition was 5 points.
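
To make the scoring concrete, here is a minimal sketch of the aggregation step (the data layout is hypothetical; the coding itself was done by human raters):

```python
# Hypothetical per-question binary codes (1 = reference to the critical
# information) for one participant; sums give per-condition scores (max 5),
# and the two no-misinformation scenarios are averaged into one score.
codes = {
    "no_retraction": [1, 1, 0, 1, 0],
    "credible":      [0, 1, 0, 0, 0],
    "expert":        [1, 0, 1, 1, 0],
    "trustworthy":   [0, 1, 1, 0, 0],
    "no_misinfo_a":  [0, 0, 1, 0, 0],
    "no_misinfo_b":  [0, 0, 0, 0, 0],
}

scores = {cond: sum(c) for cond, c in codes.items()}
scores["no_misinfo"] = (scores.pop("no_misinfo_a")
                        + scores.pop("no_misinfo_b")) / 2
print(scores)
# {'no_retraction': 3, 'credible': 1, 'expert': 3, 'trustworthy': 2, 'no_misinfo': 0.5}
```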

Inferential reasoning

We performed a mixed ANOVA with CIE condition as the within-subjects factor and inoculation as the between-subjects factor. As Mauchly’s sphericity test was significant (p < 0.001, ε̂ = 0.93), the Huynh-Feldt correction was used. The analysis showed an interaction effect (F(3.72, 501.88) = 4.32, p = 0.003, ηp2 = 0.03), as well as a main effect of condition (F(3.72, 501.88) = 58.52, p < 0.001, ηp2 = 0.30) and a main effect of inoculation (F(1, 135) = 21.17, p < 0.001, ηp2 = 0.14). The mean inference results are presented in Fig 1.

Fig 1. Inference scores across conditions.

Fig 1

noRetr = no-retraction condition; Credible = credible source condition (high in trustworthiness and expertise); Expert = expert source condition (high in expertise, low in trustworthiness); Trustworthy = trustworthy source condition (high in trustworthiness, low in expertise); noMis = no-misinformation condition. Error bars represent 95% confidence intervals.
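
For readers wishing to reproduce this type of analysis, a minimal sketch in Python using the pingouin package follows (the file and column names are hypothetical, and this is not the authors' analysis code; note also that pingouin applies the Greenhouse-Geisser rather than the Huynh-Feldt correction, so corrected values may differ slightly from those reported above):

```python
# Mixed (within-between) ANOVA sketch for the inference scores.
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x condition,
# with columns 'id', 'inoculation', 'condition', 'score'.
df = pd.read_csv("inference_scores.csv")

# Mauchly's test of sphericity for the within-subjects factor
print(pg.sphericity(data=df, dv="score", subject="id", within="condition"))

aov = pg.mixed_anova(data=df, dv="score", within="condition", subject="id",
                     between="inoculation", correction=True, effsize="np2")
print(aov[["Source", "F", "p-unc", "np2"]])
```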

We confirmed Hypothesis 1 by replicating the continued influence effect: in the no-inoculation condition, although retraction reduced reliance on misinformation compared to the no-retraction condition regardless of the retraction source (Fs(1,135) ≥ 3.98, ps ≤ 0.048, ηp2s ≥ 0.03), this was not enough to eliminate CIE, as a significant difference from the no-misinformation condition was observed (Fs(1,135) ≥ 46.42, ps < 0.001, ηp2s ≥ 0.26). However, we failed to confirm Hypothesis 2, that the effectiveness of retractions would differ depending on their sources–none of them differed significantly from the others (Fs(1,135) ≤ 3.37, ps ≥ 0.068, ηp2s ≤ 0.02). This finding contradicts previous reports that trustworthy retraction sources are more effective in reducing CIE than expert sources [16, 19, 50].

As expected, inoculation affected the retraction conditions differently. Hypothesis 3 was confirmed: there was a significant decrease in reliance on misinformation in the credible source condition (F(1,135) = 34.71, p < 0.001, ηp2 = 0.21), as well as some reduction in the trustworthy source condition (F(1,135) = 4.84, p = 0.029, ηp2 = 0.04). However, the inference score in the trustworthy source condition did not differ significantly from the expert source condition (F(1,135) = 2.89, p = 0.092, ηp2 = 0.02). Inoculation did not have a significant effect in the expert source condition (F(1,135) = 1.29, p = 0.259, ηp2 < 0.01). Importantly, in the credible source condition the number of references to misinformation did not differ significantly from the no-misinformation condition (F(1,135) = 1.63, p = 0.204, ηp2 = 0.01), which may indicate the elimination of CIE, although one should be cautious when inferring a lack of differences from a statistically insignificant result. Moreover, in the no-misinformation condition, a significant decrease in the occurrence of responses consistent with the critical information was also observed (F(1,135) = 7.90, p = 0.006, ηp2 = 0.06). Overall, although the decline in responses consistent with misinformation was not significant in every condition, there was a downward trend in each group; one consequence was that retractions from expert sources were completely ineffective, not differing from the no-retraction condition (F(1,135) = 2.29, p = 0.132, ηp2 = 0.02). In sum, inoculation seemed to be effective in reducing CIE, but only for highly credible sources of retraction.

Belief ratings

Inoculation was also expected to affect direct belief in the misinformation and in the retraction. In accordance with the results obtained by Ecker & Antonio [19], in the no-inoculation condition belief in misinformation was assumed to be higher than belief in retraction, regardless of the reliability of corrections (Hypothesis 4). A two-way ANOVA with repeated measures was performed (within-subject factors: misinformation / retraction × credible source, expert source, trustworthy source; between-subject factor: inoculation / none), in which the three-way interaction was not significant (F(2,270) = 2.47, p = 0.086, ηp2 = 0.02). Although the main effect of information was also not found (F(1,135) = 3.67, p = 0.058, ηp2 = 0.03), the main effect of source was observed (F(2,270) = 10.40, p < 0.001, ηp2 = 0.07), as well as the interaction of information and source (F(2,270) = 41.75, p < 0.001, ηp2 = 0.24), the interaction of inoculation and information (F(1,135) = 16.65, p < 0.001, ηp2 = 0.11), and the interaction of inoculation and source (F(2,270) = 3.62, p = 0.028, ηp2 = 0.03). The mean assessments of direct belief in misinformation and retraction are presented in Figs 2 and 3.

Fig 2. Mean belief ratings in the no-inoculation condition.

Fig 2

Credible = credible source condition (high in trustworthiness and expertise); Expert = expert source condition (high in expertise, low in trustworthiness); Trustworthy = trustworthy source condition (high in trustworthiness, low in expertise). Error bars represent 95% confidence intervals.

Fig 3. Mean belief ratings in the inoculation condition.

Fig 3

Credible = credible source condition (high in trustworthiness and expertise); Expert = expert source condition (high in expertise, low in trustworthiness); Trustworthy = trustworthy source condition (high in trustworthiness, low in expertise). Error bars represent 95% confidence intervals.

In the no-inoculation condition, as predicted, higher belief in misinformation than in retraction was observed for the expert (F(1,135) = 34.25, p < 0.001, ηp2 = 0.20) and trustworthy source conditions (F(1,135) = 10.32, p = 0.002, ηp2 = 0.07). Contrary to expectations, the beliefs did not differ in the credible source condition (F(1,135) = 0.01, p = 0.917, ηp2 < 0.01). However, belief in misinformation was lower and belief in retraction higher in the credible source condition compared to both the expert and trustworthy source conditions (Fs(1,135) ≥ 4.48, ps ≤ 0.036, ηp2s ≥ 0.03). Also, belief in retraction was higher for the trustworthy than the expert source condition (F(1,135) = 12.96, p < 0.001, ηp2 = 0.09), although belief in misinformation did not differ between these conditions (F(1,135) = 0.14, p = 0.713, ηp2 < 0.01). Therefore, Hypothesis 4 was partially confirmed.

With regard to Hypothesis 5, a different pattern of results was observed in the inoculation condition. Only in the expert source condition was belief in misinformation higher than belief in retraction (F(1,135) = 11.10, p = 0.001, ηp2 = 0.08); for the trustworthy source condition, no differences were observed (F(1,135) = 0.05, p = 0.816, ηp2 < 0.01). For the credible source condition, belief in retraction was higher than belief in misinformation (F(1,135) = 40.90, p < 0.001, ηp2 = 0.23). Also, belief in misinformation was lower, and belief in retraction higher, for the credible source condition compared to the other groups (Fs(1,135) ≥ 4.84, ps ≤ 0.029, ηp2s ≥ 0.04). Belief in misinformation was lower and belief in retraction higher for the trustworthy than for the expert source condition (Fs(1,135) ≥ 6.56, ps ≤ 0.012, ηp2s ≥ 0.05). The effect of inoculation on belief in misinformation and retraction was, as hypothesized, observed for the credible source condition, where inoculation significantly increased belief in retraction and lowered belief in misinformation (Fs(1,135) ≥ 12.57, ps ≤ 0.001, ηp2s ≥ 0.09), which confirms Hypothesis 5. Inoculation also reduced belief in misinformation for the trustworthy source condition (F(1,135) = 12.53, p = 0.001, ηp2 = 0.09) but did not change belief in retraction (F(1,135) = 0.27, p = 0.604, ηp2 < 0.01), and it did not seem to affect any of the measures in the expert source condition (Fs(1,135) ≤ 2.96, ps ≥ 0.088, ηp2s ≤ 0.02).

Regression analysis applied to the individual sources showed that, under inoculation, belief in misinformation, but not in retraction, significantly predicted reliance on misinformation in the expert source condition (β = 0.32, p = 0.028), similarly to the no-inoculation condition (β = 0.37, p = 0.006). In the inoculation condition, for the trustworthy source both assumed predictors turned out to be significant (β = 0.42, p < 0.001 for belief in misinformation and β = -0.26, p = 0.025 for belief in retraction); in the no-inoculation condition, only belief in misinformation predicted the inference score (β = 0.36, p = 0.003). For the credible source in the no-inoculation condition, no relationship between either of the estimates and the inference score was found; in the inoculation condition, however, both turned out to be significant predictors (β = 0.28, p = 0.023 for belief in misinformation; β = -0.39, p = 0.002 for belief in retraction). Overall, after aggregating the beliefs in misinformation and retraction into two separate variables, it was found that belief in misinformation, but not in retraction, predicted reliance on misinformation (β = 0.45, p < 0.001, R2 = 0.31), which is the opposite of what was observed by Ecker & Antonio [19]. However, since both estimates showed significant correlations of moderate or high strength with inference scores (r(135) = 0.54, p < 0.001 for belief in misinformation and r(135) = -0.39, p < 0.001 for belief in retraction), a mediation analysis was performed using the PROCESS v. 3.0 software [90], with belief in retraction as the predictor, the inference score as the dependent variable, and belief in misinformation as the mediator. The model turned out to be significant: B = -0.25, SE = 0.05, 95% CI [-0.34, -0.15] for the total effect and B = -0.15, SE = 0.04, 95% CI [-0.23, -0.08] for the indirect effect. The direct path between belief in retraction and inference score was not significant, which suggests that belief in retractions negatively influenced reliance on misinformation by negatively influencing belief in misinformation.
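
An equivalent bootstrap mediation analysis can also be run outside PROCESS; a minimal sketch with pingouin (hypothetical file and column names, not the authors' code):

```python
# Simple mediation sketch: belief in retraction (X) -> belief in
# misinformation (M) -> inference score (Y), with bootstrapped CIs.
import pandas as pd
import pingouin as pg

df = pd.read_csv("beliefs.csv")  # hypothetical columns: 'belief_retraction',
                                 # 'belief_misinfo', 'inference_score'

med = pg.mediation_analysis(data=df, x="belief_retraction",
                            m="belief_misinfo", y="inference_score",
                            n_boot=5000, seed=42)
print(med)  # rows for the indirect, direct, and total effects
```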

Discussion

Contrary to previous research [16, 19, 50], trustworthy sources of retractions did not prove more effective than expert ones in reducing the continued influence effect. However, this is not unprecedented, as Ecker & Antonio [19], in their second experiment, did not report any reduction of CIE for trustworthy retraction sources. It is possible that participants were not paying attention to the sources’ reliability, which is consistent with the reports of van Boekel et al. [88], who claimed that source reliability affects information processing only when participants are instructed beforehand to pay attention to it. This interpretation is also supported by the regression analysis, which suggested that it is belief in misinformation, not in retraction, that predicts the inference score. On the other hand, contrary to the results in the expert and trustworthy source conditions, in the credible source condition there were no differences between the estimates of belief in misinformation and in retraction, and neither of them predicted the inference score. This may indicate that respondents paid some attention to the reliability of the sources.

According to Ecker & Antonio [19], the fact that there was no reduction of misinformation reliance for trustworthy retraction sources was an effect of increased skepticism about retractions. The skepticism may arise from making belief estimates [91] or from the presence of the retraction itself. This is because participants might become suspicious when critical information that perfectly explains the described events is suddenly declared erroneous. This assumption about the causal role of misinformation is present in the mental models theory [6, 25]. Thus, skepticism about retractions may arise when there is a risk of losing the coherence of the model, as such skepticism prevents the creation of a gap by blocking the impact of retractions. The mechanism of this process may be motivational: experiencing discomfort while dealing with retractions may lead to reducing the tension by increasing skepticism about corrections [52]. Also, since the misinformation was introduced implicitly [17], it could increase the tendency to engage in causal thinking, which consequently might neutralize the impact of the reliability of the corrections’ sources (see, however, [79, 92] for research that failed to replicate Rich & Zaragoza’s [17] results).

The main finding of this study is the effectiveness of inoculation in reducing reliance on misinformation in the CIE paradigm. As predicted, inoculation led to a significant reduction in reliance on misinformation when the retraction came from a highly credible source; it may even have led to the elimination of CIE, as the number of references to misinformation in this condition did not differ significantly from the no-misinformation control condition. Inoculation also caused a slight reduction in reliance on misinformation in the trustworthy source condition. Interestingly, a lower inference score was also observed in the no-misinformation condition. However, as no misinformation was mentioned there, it is difficult to assess why inoculated participants were less likely to refer to absent critical information. It is possible that the inoculation put participants into a state of heightened general skepticism that led them to avoid relying on explanations that appeared plausible and, for that very reason, suspicious. In other words, the inoculation procedure may lead people to think that something is "too good to be true". This is in line with research showing that inoculation generates a general cynicism, resulting not only in increased resistance to misinformation but also in decreased belief in accurate news, e.g. [93, 94]. On the other hand, if this interpretation were correct, one would also expect a significant decrease in reliance on critical information in the no-retraction condition, which was not observed in our study. The state of increased rejection of critical information may thus be observable only when one generates an explanation for the events (as was the case in the no-misinformation condition), but not when one is presented with such an explanation (as was the case in all misinformation conditions). Self-generated information may seem easier to reject and have less impact on inferences than external information integral to the story, similar to the results obtained by Johnson & Seifert [24]. However, it is difficult to draw unequivocal conclusions.

When it comes to belief estimates, a pattern of results similar to that obtained by Ecker & Antonio [19] was observed. In the no-inoculation condition, belief in misinformation was significantly higher than belief in retraction in the expert and trustworthy source groups. As noted by Ecker & Antonio [19], greater belief in misinformation may arise from processing the retraction in the context of its contradiction with the misinformation, which automatically causes it to become less reliable than the misinformation. There is also a possibility, as we have argued before, that misinformation is more plausible because it is more consistent with the scenario. Therefore, potential gap creation within the mental model may lead to discomfort, which in turn lowers belief in retraction to prevent gap creation [52]. However, the two estimates in the credible source group did not differ from each other. Moreover, in contrast to the research mentioned, in this paper it was belief in misinformation that predicted the inference score, not belief in retraction. This is in line with the ERP results obtained by Brydges et al. [95], who concluded that reliance on misinformation may be driven by strong recollection of the misinformation following poor integration of the retraction into the mental model. Nonetheless, the mediation analysis showed that belief in retraction indirectly influences reliance on misinformation by lessening belief in misinformation. As the reliability of the retraction sources was manipulated in this experiment, it seems reasonable that belief in misinformation should also depend on the level of belief in retraction.

In accordance with our predictions, in the credible source condition inoculation reduced belief in misinformation and increased belief in retraction. However, inoculation did not influence the estimates in the expert source condition, while in the trustworthy source condition it lowered belief in misinformation to the level of belief in retraction. Thus, under the influence of inoculation, some similarities with previous studies could be observed, where the trustworthiness dimension turned out to be more important than the expertise dimension [16, 19, 50]. There is, however, a possibility that the apparent ineffectiveness of inoculation in the expert source condition resulted from the critical information not being treated by the participants as misinformation. This is in line with the operational definition of an expert source used in this study, defined as one that has access to truthful information but does not necessarily have good intentions [16]. The participants could therefore, for rational reasons, reject the correction and respond in accordance with the critical information [51]. However, this rationality may be limited by the contextual sequence of the presented scenarios–the inoculation may also be effective for other sources if they precede the credible source. This is because the assessment of a source’s reliability may also result from its comparison to others. Some confirmation was found for these speculations: in 2 out of 4 cases where the expert and trustworthy conditions appeared before the credible source condition, the inference results in these three groups did not differ from each other (ps ≥ 0.088). This suggests that assessing the reliability of information is not purely analytical and that people are biased toward contextual processing.

Inoculation against continued influence of misinformation—How does it work?

Interpreting the mechanisms by which inoculation acts on misinformation comes down to analyzing the mechanisms of CIE itself. Both theories described in the introduction offer some promising solutions. According to the mental models account, inoculation could prevent making global inferences from the locally updated model containing the misinformation. However, it seems that there is no reason why one should give up the motivation to maintain consistency in the model and risk creating a gap under the influence of inoculation. Inoculation does not fill the gap, because it does not offer alternative solutions [6, 10, 11], but only facilitates the production of resistance against the influence of misinformation. Thus, inoculation may lead to the emergence of a new model (see: [30]). As this was the case only for the credible source condition, it is possible that only in this condition did a risk of generating a gap exist (perhaps because in the other conditions the retraction could not successfully undermine the misinformation).

Alternatively, and not mutually exclusively with the previous explanation, according to the selective retrieval account inoculation might work in a way similar to that proposed by Ecker et al. [10] for the pre-exposure warning: actively suppressing the automatic activation of misinformation and supporting the strategic processes in retrieving the retraction, thus making it more available [62]. Inoculation could also induce respondents to pay more attention to the sources of corrections, which in other circumstances are overlooked [88], as it may allow the "antibodies" to correctly recognize the "pathogens" in the critical information when the retraction source is highly credible, or in the retraction itself if its source is not reliable (see: [51]).

While both interpretations may reflect possible mechanisms of the influence of inoculation on CIE, we speculated earlier that the possible therapeutic effect of inoculation could mean that misinformation processes are not purely due to memory-like errors. Since the correction is usually well remembered, since there are cases where CIE is not expected to occur owing to strong belief in the retraction [23], and since in some cases it is rational to rely on misinformation [51], we propose that CIE can be interpreted in line with the memory conversion process.

CIE mechanisms—A layered account

Hartmut Blank [84] proposed an integrative framework of remembering, describing the stages of memory conversion [96] from a memory trace to an observable behavioral manifestation of memory. According to the framework, external factors can influence memory at three different stages. The first stage is accessing information, i.e. constructing the memory of an object or event based on traces retrieved from memory. At this stage, external factors, such as the presence of specific cues, can make some memory traces more accessible than others, resulting in a cue-tuned construction of memory in the form of a representation of the remembered information. Next, at the validation stage, memory information is transformed into a memory belief, which resembles an attitude or is even identical to it. At this stage, a number of factors may play a crucial role, e.g. informative social influence, attitudes, persuasion, source reliability, activated concepts of Self, etc. Consequently, one may be more likely to accept the available information if it is in line with one’s worldview or corresponds to the opinion of experts or the majority. Finally, at the communication stage, the memory belief is transformed into a memory statement, i.e. a behavioral manifestation of memory (which does not necessarily have to take a verbal form). These statements may also be tuned, mainly because of normative and socio-motivational influences, e.g. norms of conversation [97], making a person adapt their way of communicating to the recipient, self-presentation, as well as one’s own goals and the goals of the remembering itself.

It appears that the remembering framework can be adapted to describe and explain CIE. Some of its assumptions are consistent with existing theories. The first stage of remembering is consistent with the selective retrieval account, where it is also assumed that misinformation can be retrieved automatically, e.g. due to the presence of appropriate cues [10, 12, 17]. At the same time, the retrieval stage is assumed to play a key role in the occurrence of behaviorally observable misinformation effects. However, adding a validation stage could help further explain misinformation reliance. According to the selective retrieval account, retrieving the retraction helps the monitoring processes to validate the misinformation and confirm its falsity [10]. If this process were successful, CIE should not happen. However, if one fails to retrieve the correction, misinformation monitoring becomes impossible, leading to reliance on misinformation. In most cases, however, people are aware of the presence of both the misinformation and its retraction, yet still make use of the misinformation. It should therefore be expected that, after misinformation retrieval, CIE occurs not because the misinformation has not been sufficiently suppressed due to a failure of retraction retrieval, but as a result of the emergence of a memory belief.

The internal representation of the memory task (a certain set of assumptions about the nature and extent of the memorized content, for example, the consistency assumption [98]) may play a key role in this process. If one encounters two conflicting pieces of information, one may want to explain the contradiction in order to keep one's assumptions–in this case, for example, by doubting one piece of information and reporting the other. Motivational factors may play a key role in this conflict resolution, e.g. when dealing with the contradiction between misinformation and its correction; see: [52]. Apart from the consistency assumption mentioned above, one may hold a number of other assumptions, such as the relevance assumption, according to which some information in the scenario must correctly explain the questions asked (or, more generally, that "there must be some explanation for certain events"; see: [99] for an illustration of this mechanism in non-laboratory conditions). Misinformation is therefore used because it seems to be the most appropriate answer to the problem. However, when these assumptions are overturned, e.g. by a warning or inoculation, a different memory belief may emerge than would have emerged had the assumptions remained intact [98]. The same may be true for source reliability: if the relevance assumption is not overturned, misinformation may remain the best available answer regardless of the reliability of the source or of the belief in the retraction itself. The idea of the relevance and consistency assumptions also coincides with the general idea of mental models, since in both cases causal reasoning plays an important role.

Adjustments made at the communication stage seem to be equally important. Factors that influence the choice of the form of the memory statement can modify the memory report so that it differs even from the memory belief one holds [84]. Thus, one can behave in accordance with misinformation even though one declares not to believe it; conversely, belief in misinformation may not show up in behavior. For example, it can be assumed that when people subjected to the CIE procedure answer questions, they are engaged in a problem-solving process [98] in which they must decide whether their memory beliefs are suitable for use in the report. For a variety of reasons, ranging from the strength and valence of a memory belief to communicative motivations, they may or may not use them. To an outside observer, CIE either occurs or does not, regardless of whether the memory statement matches the underlying memory belief. Inoculation can also play an important role at this stage: it can influence, for example, the decision to use misinformation in verbal or non-verbal behavior.

As Blank [84] points out, the remembering framework offers the opportunity to understand memory in both cognitive and social terms within a single process. We believe this model is also well suited to explaining CIE both in real-life situations and when the misinformation is exclusively fictional. The framework does not deprive the mechanisms of the selective retrieval account of their significance but places them in a broader context and offers new interpretative possibilities, giving an important role to deliberative and motivational factors and not limiting CIE to mechanistic, purely cognitive or memory-like processes, as it is usually conceptualized (cf. [100]). The idea of the need to maintain consistency, without referring to mental models, can also be included in the relevance and consistency assumptions, according to which one prefers misinformation because it is better suited to the memory goal of answering questions. Importantly, the remembering framework also captures situations in which continued reliance on misinformation may be considered rational [51]. Understanding CIE as a layered process may also help to address the relationship between patterns of misinformation compliance at the cognitive and behavioral levels, which, as pointed out by Tay et al. [79], may be weakly connected. As attitudes and beliefs may predict behavior only weakly and indirectly [101], a similar approach to CIE can be attempted within the remembering interpretation.

We propose that the interpretation of CIE should begin with an analysis of the behavioral manifestations of memory, as the belief itself is not directly observable. Following Blank [84], we argue that only after excluding factors influencing memory conversion can CIE be interpreted as the effect of retrieval processes. Otherwise, speculating about CIE mechanisms seems unjustified because, as Blank [84, 98, 102] notes, conversion effects, if unrecognized, can be mistaken for "conventional" retrieval effects and can mislead theoretical conclusions. Of course, this does not mean that CIE never occurs as a result of a retrieval error under certain conditions; rather, we argue that, given the full remembering process, this is simply less likely.

Practical implications, remarks and future directions

Although we have focused largely on theoretical explanations, this study also has some clear practical implications. Thanks to inoculation, under conditions where the retraction came from a highly credible source, it was possible to reduce the influence of misinformation to the level of the no-misinformation control condition. This widens the boundary conditions of the effectiveness of vaccination against misinformation, which may find application in anti-misinformation immunization procedures. Our research may be the first to apply an inoculation procedure to CIE in the "classic" narrative paradigm and one of the first to use fictional misinformation (see: [79]). Thus, our results may generalize to a wider range of conditions (e.g., vaccination or climate change). As inoculation highlights misleading argumentation techniques such as the selective use of data or the use of fake experts (see: [103]), thus providing protection from misinformation, it could be beneficial to stress the characteristics of retraction sources. Additionally, it would pay off to boost the trustworthiness of a source that is perceived as expert but not necessarily trustworthy; this could potentially enhance inoculation effectiveness. Highlighting the fake experts technique seems to work in a similar way by reducing the trustworthiness of the source (without necessarily lowering its perceived expertise), so increasing trustworthiness could also be beneficial. Finally, an exercise similar to that used in this study (which made participants aware of whether or not they had complied with misinformation), though not pivotal, may be useful, as in some cases inoculating against misinformation without such an exercise seems to be ineffective (cf. [104]).

To fully investigate these therapeutic effects of inoculation, however, a series of further studies would be needed. The major limitation of this study is that we did not conduct a pilot study to select retraction sources before running the main experiment. Instead, we chose sources partly on the basis of previous research (e.g. [16, 19]), which may have introduced differences between the scenarios that affected the results. It would be valuable to replicate the findings after such a pilot study. Another issue is that while a consistent inoculation effect was demonstrated for the highly credible retraction source, a considerable effect was also observed for the remaining retraction conditions when they were presented before participants became acquainted with the most credible source. To investigate this problem in more detail, future studies would need to focus on the extent to which the size of CIE may depend on contextual factors, as well as perform full inoculation using critical and analytical thinking methods [77]. This could be especially beneficial because analytical thinking is positively correlated with the ability to discern fake news from real news [105]. The effectiveness of inoculation could also be compared with simply instructing participants not to use retracted content in answering questions because it is certainly wrong, no matter what source it comes from; inoculation could further be tested with expert sources other than those defined in this study [19]. The usefulness of inoculation should also be verified when the source of the retraction is not specified. By fully controlling the reliability of the sources, as well as examining participants in a between-subjects design, it would be possible to assess more fully to what extent inoculation can counteract the impact of misinformation.

Finally, the statistical power afforded by the current sample allowed only the detection of medium-sized effects. This must be borne in mind when interpreting the lack of significance of some effects.
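For readers who want to gauge this limitation numerically, the sketch below shows one way a sensitivity check for such a design could be scripted. It assumes the standard noncentrality parameterization used by G*Power [89] for a within-between interaction under sphericity; the total sample size, the correlation among repeated measures, and the target power below are illustrative placeholders, not values taken from our analysis.

```python
# Sensitivity check for a mixed (within-between) ANOVA interaction: the
# smallest effect size f detectable at a given power, assuming a
# G*Power-style noncentrality parameterization with sphericity (epsilon = 1).
from scipy import stats

def interaction_power(f, n_total, n_groups, n_measures, rho=0.5, alpha=0.05):
    """Power of the groups-by-measures interaction test."""
    lam = f**2 * n_total * n_measures / (1 + (n_measures - 1) * rho)
    df1 = (n_groups - 1) * (n_measures - 1)
    df2 = (n_total - n_groups) * (n_measures - 1)
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    return 1 - stats.ncf.cdf(f_crit, df1, df2, lam)

def smallest_detectable_f(n_total, n_groups, n_measures, power=0.85, **kw):
    """Bisection on f: power increases monotonically with effect size."""
    lo, hi = 1e-4, 1.0
    while hi - lo > 1e-5:
        mid = (lo + hi) / 2
        if interaction_power(mid, n_total, n_groups, n_measures, **kw) < power:
            lo = mid
        else:
            hi = mid
    return hi

# Hypothetical example: 2 between-subjects groups x 5 within-subjects
# conditions; N = 200 is an illustrative placeholder, not the study's N.
print(smallest_detectable_f(n_total=200, n_groups=2, n_measures=5))
```

The output is the smallest effect size f reliably detectable with those inputs, which can be compared against Cohen's conventions (f of roughly 0.10 is small, 0.25 medium, 0.40 large).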

Conclusion

We conducted a study exploring inoculation against the continued influence of misinformation. The results show that inoculation is a highly effective technique for reducing reliance on misinformation when the sources of retraction are highly credible. In order to interpret both our results and the CIE mechanisms themselves, we proposed the theoretical framework of remembering [84], which describes the conversion of memory from a memory trace to a behavioral statement. Both the observed impact of inoculation on misinformation reliance and the proposed theoretical approach could contribute significantly to attempts to further understand the mechanisms of misinformation and ways of immunizing against it.

Supporting information

S1 Data. Raw data.

(XLSX)

S1 File. Materials used in the study.

(DOCX)


Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

The authors received no specific funding for this work.

References

1. Rao TSS, Andrade C. The MMR vaccine and autism: Sensation, refutation, retraction, and fraud. Indian J Psychiatry. 2011;53(2):95–6. doi: 10.4103/0019-5545.82529
2. Lewandowsky S, Ecker UKH, Cook J. Beyond misinformation: Understanding and coping with the “post-truth” era. J Appl Res Mem Cogn. 2017 Dec 1;6(4):353–69.
3. Chan MS, Jones CR, Hall Jamieson K, Albarracín D. Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychol Sci. 2017 Nov 1;28(11):1531–46. doi: 10.1177/0956797617714579
4. Walter N, Murphy ST. How to unring the bell: A meta-analytic approach to correction of misinformation. Commun Monogr. 2018 Jul 3;85(3):423–41.
5. Walter N, Tukachinsky R. A meta-analytic examination of the continued influence of misinformation in the face of correction: How powerful is it, why does it happen, and how to stop it? Commun Res. 2020 Mar 1;47(2):155–77.
6. Johnson HM, Seifert CM. Sources of the continued influence effect: When misinformation in memory affects later inferences. J Exp Psychol Learn Mem Cogn. 1994 Nov 1;20(6):1420–36.
7. Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N, Cook J. Misinformation and its correction: Continued influence and successful debiasing. Psychol Sci Public Interest. 2012 Dec 1;13(3):106–31. doi: 10.1177/1529100612451018
8. Wilkes AL, Leatherbarrow M. Editing episodic memory following the identification of error. Q J Exp Psychol Sect A. 1988 May 1;40(2):361–87.
9. Ecker UKH, Ang LC. Political attitudes and the processing of misinformation corrections. Polit Psychol. 2019 Apr;40(2):241–60.
10. Ecker UKH, Lewandowsky S, Tang DTW. Explicit warnings reduce but do not eliminate the continued influence of misinformation. Mem Cognit. 2010 Dec 1;38(8):1087–100. doi: 10.3758/MC.38.8.1087
11. Ecker UKH, Lewandowsky S, Apai J. Terrorists brought down the plane!—No, actually it was a technical fault: Processing corrections of emotive information. Q J Exp Psychol. 2011 Feb 1;64(2):283–310. doi: 10.1080/17470218.2010.497927
12. Ecker UKH, Lewandowsky S, Swire B, Chang D. Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction. Psychon Bull Rev. 2011 Feb 26;18(3):570–8. doi: 10.3758/s13423-011-0065-1
13. Ecker UKH, Lewandowsky S, Fenton O, Martin K. Do people keep believing because they want to? Preexisting attitudes and the continued influence of misinformation. Mem Cognit. 2014 Feb;42(2):292–304. doi: 10.3758/s13421-013-0358-x
14. Ecker UKH, Lewandowsky S, Cheung CSC, Maybery MT. He did it! She did it! No, she did not! Multiple causal explanations and the continued influence of misinformation. J Mem Lang. 2015 Nov 1;85:101–15.
15. Ecker UKH, Hogan JL, Lewandowsky S. Reminders and repetition of misinformation: Helping or hindering its retraction? J Appl Res Mem Cogn. 2017 Jun 1;6(2):185–92.
16. Guillory JJ, Geraci L. Correcting erroneous inferences in memory: The role of source credibility. J Appl Res Mem Cogn. 2013 Dec 1;2(4):201–9.
17. Rich PR, Zaragoza MS. The continued influence of implied and explicitly stated misinformation in news reports. J Exp Psychol Learn Mem Cogn. 2016 Jan;42(1):62–74. doi: 10.1037/xlm0000155
18. Rich PR, Zaragoza MS. Correcting misinformation in news stories: An investigation of correction timing and correction durability. J Appl Res Mem Cogn. 2020 Sep 1;9(3):310–22.
19. Ecker UKH, Antonio LM. Can you believe it? An investigation into the impact of retraction source credibility on the continued influence effect. Mem Cognit. 2021 Jan 15;49:631–44. doi: 10.3758/s13421-020-01129-y
20. Nyhan B, Reifler J. When corrections fail: The persistence of political misperceptions. Polit Behav. 2010 Mar 30;32(2):303–30.
21. Ecker UKH, Lewandowsky S, Chadwick M. Can corrections spread misinformation to new audiences? Testing for the elusive familiarity backfire effect. Cogn Res Princ Implic. 2020 Aug 26;5(1):41. doi: 10.1186/s41235-020-00241-6
22. Wood T, Porter E. The elusive backfire effect: Mass attitudes' steadfast factual adherence. Polit Behav. 2019 Mar 1;41(1):135–63.
23. O'Rear AE, Radvansky GA. Failure to accept retractions: A contribution to the continued influence effect. Mem Cognit. 2020 Jan;48(1):127–44.
24. Johnson HM, Seifert CM. Modifying mental representations: Comprehending corrections. In: van Oostendorp H, Goldman SR, editors. The construction of mental representations during reading. Mahwah, NJ: Lawrence Erlbaum Associates; 1999. p. 303–18.
25. Seifert CM. The continued influence of misinformation in memory: What makes a correction effective? In: Ross BH, editor. The psychology of learning and motivation: Advances in research and theory. San Diego, CA: Academic Press; 2002. p. 265–92.
26. van Oostendorp H, Bonebakker C. Difficulties in updating mental representations during reading news reports. In: van Oostendorp H, Goldman SR, editors. The construction of mental representations during reading. Mahwah, NJ: Lawrence Erlbaum Associates; 1999. p. 319–39.
27. Wilkes AL, Reynolds DJ. On certain limitations accompanying readers' interpretations of corrections in episodic text. Q J Exp Psychol Sect A. 1999 Feb 1;52(1):165–83.
28. Johnson-Laird PN. Mental models and consistency. In: Gawronski B, Strack F, editors. Cognitive consistency: A fundamental principle in social cognition. New York, NY: Guilford Press; 2012. p. 225–43.
29. Albrecht J, O'Brien E. Updating a mental model: Maintaining both local and global coherence. J Exp Psychol Learn Mem Cogn. 1993 Sep 1;19(5):1061–70.
30. Kurby CA, Zacks JM. Starting from scratch and building brick by brick in comprehension. Mem Cognit. 2012 Jul;40(5):812–26. doi: 10.3758/s13421-011-0179-8
31. Brydges CR, Gignac GE, Ecker UKH. Working memory capacity, short-term memory capacity, and the continued influence effect: A latent-variable analysis. Intelligence. 2018 Jul 1;69:117–22.
32. Hamby A, Ecker UKH, Brinberg D. How stories in memory perpetuate the continued influence of false information. J Consum Psychol. 2020 Apr;30(2):240–59.
33. Gordon A, Brooks JCW, Quadflieg S, Ecker UKH, Lewandowsky S. Exploring the neural substrates of misinformation processing. Neuropsychologia. 2017 Nov 1;106:216–24. doi: 10.1016/j.neuropsychologia.2017.10.003
34. Kendeou P, O'Brien EJ. The Knowledge Revision Components (KReC) framework: Processes and mechanisms. In: Rapp DN, Braasch JLG, editors. Processing inaccurate information: Theoretical and applied perspectives from cognitive science and the educational sciences. Cambridge, MA: MIT Press; 2014. p. 353–77.
35. Kendeou P, Butterfuss R, Kim J, Van Boekel M. Knowledge revision through the lenses of the three-pronged approach. Mem Cognit. 2019 Jan 1;47(1):33–46. doi: 10.3758/s13421-018-0848-y
36. Ecker UKH, Swire B, Lewandowsky S. Correcting misinformation–A challenge for education and cognitive science. In: Rapp DN, Braasch JLG, editors. Processing inaccurate information: Theoretical and applied perspectives from cognitive science and the educational sciences. Cambridge, MA: MIT Press; 2014. p. 13–37.
37. Gordon A, Quadflieg S, Brooks JCW, Ecker UKH, Lewandowsky S. Keeping track of 'alternative facts': The neural correlates of processing misinformation corrections. NeuroImage. 2019 Jun 1;193:46–56. doi: 10.1016/j.neuroimage.2019.03.014
38. Swire B, Ecker UKH, Lewandowsky S. The role of familiarity in correcting inaccurate information. J Exp Psychol Learn Mem Cogn. 2017 Dec;43(12):1948–61. doi: 10.1037/xlm0000422
39. Jacoby LL. A process dissociation framework: Separating automatic from intentional uses of memory. J Mem Lang. 1991 Oct;30(5):513–41.
40. Ayers MS, Reder LM. A theoretical review of the misinformation effect: Predictions from an activation-based memory model. Psychon Bull Rev. 1998 Mar;5(1):1–21.
41. Dechêne A, Stahl C, Hansen J, Wänke M. The truth about the truth: A meta-analytic review of the truth effect. Personal Soc Psychol Rev. 2010 May 1;14(2):238–57. doi: 10.1177/1088868309352251
42. Mayo R, Schul Y, Burnstein E. “I am not guilty” vs “I am innocent”: Successful negation may depend on the schema used for its encoding. J Exp Soc Psychol. 2004 Jul 1;40(4):433–49.
43. Kunda Z. The case for motivated reasoning. Psychol Bull. 1990 Nov;108(3):480–98. doi: 10.1037/0033-2909.108.3.480
44. Thorson EA. Belief echoes: The persistent effects of corrected misinformation. Polit Commun. 2016 Jul 2;33(3):460–80.
45. Aird MJ, Ecker UKH, Swire B, Berinsky AJ, Lewandowsky S. Does truth matter to voters? The effects of correcting political misinformation in an Australian sample. R Soc Open Sci. 2018 Dec;5(12):180593. doi: 10.1098/rsos.180593
46. Ecker UKH, Sze BKN, Andreotta M. Corrections of political misinformation: No evidence for an effect of partisan worldview in a US convenience sample. Philos Trans R Soc B Biol Sci. 2021 Apr 12;376(1822):20200145.
47. Swire B, Berinsky AJ, Lewandowsky S, Ecker UKH. Processing political misinformation: Comprehending the Trump phenomenon. R Soc Open Sci. 2017 Mar;4(3):160802. doi: 10.1098/rsos.160802
48. Swire-Thompson B, Ecker UKH, Lewandowsky S, Berinsky AJ. They might be a liar but they're my liar: Source evaluation and the prevalence of misinformation. Polit Psychol. 2020;41(1):21–34.
49. Lewandowsky S, Stritzke WGK, Oberauer K, Morales M. Memory for fact, fiction, and misinformation: The Iraq War 2003. Psychol Sci. 2005 Mar 1;16(3):190–5. doi: 10.1111/j.0956-7976.2005.00802.x
50. Pluviano S, Della Sala S, Watt C. The effects of source expertise and trustworthiness on recollection: The case of vaccine misinformation. Cogn Process. 2020 Aug 1;21(3):321–30. doi: 10.1007/s10339-020-00974-8
51. Connor Desai SA, Pilditch TD, Madsen JK. The rational continued influence of misinformation. Cognition. 2020 Dec 1;205:104453. doi: 10.1016/j.cognition.2020.104453
52. Susmann MW, Wegener DT. The role of discomfort in the continued influence effect of misinformation. Mem Cognit. 2021 Sep 17 [cited 2021 Oct 3]. doi: 10.3758/s13421-021-01232-8
53. Paynter J, Luskin-Saxby S, Keen D, Fordyce K, Frost G, Imms C, et al. Evaluation of a template for countering misinformation—Real-world autism treatment myth debunking. PLOS ONE. 2019 Jan 30;14(1):e0210746. doi: 10.1371/journal.pone.0210746
54. Swire B, Ecker UKH. Misinformation and its correction: Cognitive mechanisms and recommendations for mass communication. In: Southwell BG, Thorson EA, Sheble E, editors. Misinformation and mass audiences. Austin, TX: University of Texas Press; 2018. p. 195–221.
55. Compton JA, Pfau M. Inoculation theory of resistance to influence at maturity: Recent progress in theory development and application and suggestions for future research. In: Kalbfleisch PJ, editor. Communication yearbook. Mahwah, NJ: Lawrence Erlbaum Associates; 2005. p. 97–146.
56. McGuire WJ. Some contemporary approaches. In: Berkowitz L, editor. Advances in experimental social psychology. San Diego, CA: Academic Press; 1964. p. 191–229.
57. Compton J, Ivanov B. Untangling threat during inoculation-conferred resistance to influence. Commun Rep. 2012 Jan;25(1):1–13.
58. Pfau M, Tusing J, Koerner AF, Lee W, Godbold LC, Penaloza LJ, et al. Enriching the inoculation construct: The role of critical components in the process of resistance. Hum Commun Res. 1997 Dec;24(2):187–215.
59. Richards AS, Banas JA, Magid Y. More on inoculating against reactance to persuasive health messages: The paradox of threat. Health Commun. 2017 Jul 3;32(7):890–902. doi: 10.1080/10410236.2016.1196410
60. Pfau M, Compton J, Parker KA, Wittenberg EM, An C, Ferguson M, et al. The traditional explanation for resistance versus attitude accessibility: Do they trigger distinct or overlapping processes of resistance? Hum Commun Res. 2004 Jul;30(3):329–60.
61. Pfau M, Szabo A, Anderson J, Morrill J, Zubric J, Wan H-H. The role and impact of affect in the process of resistance to persuasion. Hum Commun Res. 2001 Apr;27(2):216–52.
62. Pfau M, Roskos-Ewoldsen D, Wood M, Yin S, Cho J, Lu K-H, et al. Attitude accessibility as an alternative explanation for how inoculation confers resistance. Commun Monogr. 2003 Jan 1;70(1):39–51.
63. Pfau M, Burgoon M. Inoculation in political campaign communication. Hum Commun Res. 1988 Sep;15(1):91–111.
64. Banas JA, Rains SA. A meta-analysis of research on inoculation theory. Commun Monogr. 2010 Sep 22;77(3):281–311.
65. Pfau M, Van Bockern S. The persistence of inoculation in conferring resistance to smoking initiation among adolescents: The second year. Hum Commun Res. 1994 Mar 1;20(3):413–30.
66. Ivanov B, Parker KA, Dillingham LL. Testing the limits of inoculation-generated resistance. West J Commun. 2018 Oct 20;82(5):648–65.
67. Parker KA, Ivanov B, Compton J. Inoculation's efficacy with young adults' risky behaviors: Can inoculation confer cross-protection over related but untreated issues? Health Commun. 2012 Apr;27(3):223–33. doi: 10.1080/10410236.2011.575541
68. Parker KA, Rains SA, Ivanov B. Examining the “blanket of protection” conferred by inoculation: The effects of inoculation messages on the cross-protection of related attitudes. Commun Monogr. 2016 Jan 2;83(1):49–68.
69. Ivanov B, Rains SA, Geegan SA, Vos SC, Haarstad ND, Parker KA. Beyond simple inoculation: Examining the persuasive value of inoculation for audiences with initially neutral or opposing attitudes. West J Commun. 2017 Jan;81(1):105–26.
70. Ivanov B, Miller CH, Compton J, Averbeck JM, Harrison KJ, Sims JD, et al. Effects of postinoculation talk on resistance to influence. J Commun. 2012 Aug;62(4):701–18.
71. Ivanov B, Sims JD, Compton J, Miller CH, Parker KA, Parker JL, et al. The general content of postinoculation talk: Recalled issue-specific conversations following inoculation treatments. West J Commun. 2015 Mar 15;79(2):218–38.
72. Banas JA, Miller G. Inducing resistance to conspiracy theory propaganda: Testing inoculation and metainoculation strategies. Hum Commun Res. 2013 Apr 1;39(2):184–207.
73. Banas JA, Richards AS. Apprehension or motivation to defend attitudes? Exploring the underlying threat mechanism in inoculation-induced resistance to persuasion. Commun Monogr. 2017 Apr 3;84(2):164–78.
74. Jolley D, Douglas KM. Prevention is better than cure: Addressing anti-vaccine conspiracy theories. J Appl Soc Psychol. 2017 Jun 28;47(8):459–69.
75. Cook J, Lewandowsky S, Ecker UKH. Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLOS ONE. 2017 May 5;12(5):e0175799. doi: 10.1371/journal.pone.0175799
76. van der Linden S, Leiserowitz A, Rosenthal S, Maibach E. Inoculating the public against misinformation about climate change. Glob Chall. 2017 Jan 23;1(2):1600008. doi: 10.1002/gch2.201600008
77. Cook J, Ellerton P, Kinkead D. Deconstructing climate misinformation to identify reasoning errors. Environ Res Lett. 2018 Feb 1;13(2):024018.
78. Roozenbeek J, van der Linden S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun. 2019 Jun 25;5(1):1–10.
79. Tay LQ, Hurlstone MJ, Kurz T, Ecker UKH. A comparison of prebunking and debunking interventions for implied versus explicit misinformation. PsyArXiv [Preprint]. 2021 [cited 2021 Oct 1]. doi: 10.1111/bjop.12551
80. Brashier NM, Pennycook G, Berinsky AJ, Rand DG. Timing matters when correcting fake news. Proc Natl Acad Sci. 2021 Feb 2;118(5):e2020043118. doi: 10.1073/pnas.2020043118
81. Bolsen T, Druckman JN. Counteracting the politicization of science. J Commun. 2015 Jul 6;65(5):745–69.
82. Connor Desai SA. (Dis)continuing the continued influence effect of misinformation [doctoral thesis]. London: City, University of London; 2018.
83. Fazio R. How do attitudes guide behavior? In: Sorrentino RM, Higgins ET, editors. The handbook of motivation and cognition: Foundations of social behavior. New York, NY: Guilford Press; 1986. p. 204–43.
84. Blank H. Remembering: A theoretical interface between memory and social psychology. Soc Psychol. 2009 Jul 23;40(3):164–75.
85. Pornpitakpan C. The persuasiveness of source credibility: A critical review of five decades' evidence. J Appl Soc Psychol. 2004 Feb;34(2):243–81.
86. Briñol P, Petty RE. Source factors in persuasion: A self-validation approach. Eur Rev Soc Psychol. 2009 Feb 1;20(1):49–96.
87. Kumkale GT, Albarracín D, Seignourel PJ. The effects of source credibility in the presence or absence of prior attitudes: Implications for the design of persuasive communication campaigns. J Appl Soc Psychol. 2010;40(6):1325–56. doi: 10.1111/j.1559-1816.2010.00620.x
88. van Boekel M, Lassonde KA, O'Brien E, Kendeou P. Source credibility and the processing of refutation texts. Mem Cognit. 2017 Jan;45:168–81. doi: 10.3758/s13421-016-0649-0
89. Faul F, Erdfelder E, Lang A-G, Buchner A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007 May;39(2):175–91. doi: 10.3758/bf03193146
90. Hayes AF. Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. 2nd ed. New York, NY: Guilford Publications; 2018. 713 p.
91. Ithisuphalap J, Rich PR, Zaragoza MS. Does evaluating belief prior to its retraction influence the efficacy of later corrections? Memory. 2020 May 27;28(5):617–31. doi: 10.1080/09658211.2020.1752731
92. Connor Desai SA, Reimers S. Some misinformation is more easily countered: An experiment on the continued influence effect. In: Proceedings of the 40th Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2018. p. 1542–7.
93. Tully M, Vraga EK. Effectiveness of a news media literacy advertisement in partisan versus nonpartisan online media contexts. J Broadcast Electron Media. 2017 Jan 2;61(1):144–62.
94. Van Duyn E, Collier J. Priming and fake news: The effects of elite discourse on evaluations of news media. Mass Commun Soc. 2019 Jan 2;22(1):29–48.
95. Brydges CR, Gordon A, Ecker UKH. Electrophysiological correlates of the continued influence effect of misinformation: An exploratory study. J Cogn Psychol. 2020 Nov 16;32(8):771–84.
96. Tulving E. Elements of episodic memory. Oxford: Clarendon Press; 1983.
97. Grice P. Logic and conversation. In: Cole P, Morgan JL, editors. Syntax and semantics 3: Speech acts. San Diego, CA: Academic Press; 1975. p. 41–58.
98. Blank H. Memory states and memory tasks: An integrative framework for eyewitness memory and suggestibility. Memory. 1998 Sep;6(5):481–529. doi: 10.1080/741943086
99. Prasad M, Perrin AJ, Bezila K, Hoffman SG, Kindleberger K, Manturuk K, et al. “There must be a reason”: Osama, Saddam, and inferred justification. Sociol Inq. 2009;79(2):142–62.
100. Ecker UKH. Why rebuttals may not work: The psychology of misinformation. Media Asia. 2017 Dec 21;44(2):79–87.
101. McEachan RRC, Conner M, Taylor NJ, Lawton RJ. Prospective prediction of health-related behaviours with the Theory of Planned Behaviour: A meta-analysis. Health Psychol Rev. 2011 Sep 1;5(2):97–144.
102. Blank H. Another look at retroactive and proactive interference: A quantitative analysis of conversion processes. Memory. 2005 Feb;13(2):200–24. doi: 10.1080/09608210344000698
103. Ecker UKH, Lewandowsky S, Cook J, Schmid P, Fazio LK, Brashier N, et al. The psychological drivers of misinformation belief and its resistance to correction. Nat Rev Psychol. 2022 Jan;1(1):13–29.
104. Szpitalak M. W kierunku poprawy jakości zeznań świadków. Pozytywne i negatywne następstwa ostrzegania o dezinformacji [Towards improving the quality of witness testimony: Positive and negative consequences of warning about misinformation]. Kraków: Wydawnictwo Uniwersytetu Jagiellońskiego; 2015. 294 p.
105. Pennycook G, Rand DG. Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition. 2019 Jul 1;188:39–50. doi: 10.1016/j.cognition.2018.06.011

Decision Letter 0

Margarida Vaz Garrido

23 Aug 2021

PONE-D-21-17666

Vaccination against misinformation: The inoculation technique reduces the continued influence effect

PLOS ONE

Dear Dr. Polczyk,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

ACADEMIC EDITOR:

As you can see below, both expert reviewers found your study interesting and offer extensive and extremely valuable comments to improve your manuscript. However, while both have identified some merits, there are also conceptual and methodological issues that should be fully addressed. Overall, from my own assessment, I agree with most of the presented comments. I am not going to reiterate them all, but I would suggest paying particular attention to the following:

Moderate your claims and do not overstate your findings

Stick to the terms/concepts (sometimes redundancy is a good thing)

Provide details on the Power calculation, namely the effect size specification

Please consider the additional references suggested by the reviewers

All the remaining comments of the reviewers should be comprehensively addressed.

Please submit your revised manuscript by Oct 07 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.


We look forward to receiving your revised manuscript.

Kind regards,

Margarida Vaz Garrido

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following financial disclosure: 

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

At this time, please address the following queries:

a) Please clarify the sources of funding (financial or material support) for your study. List the grants or organizations that supported your study, including funding received from your institution. 

b) State what role the funders took in the study. If the funders had no role in your study, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

c) If any authors received a salary from any of your funders, please state which authors and which funders.

d) If you did not receive any funding for this study, please state: “The authors received no specific funding for this work.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Review of PONE-D-21-17666

1. Abstract: Rephrase “new highly applicable technique has been discovered” – You hardly discovered this technique.

2. p.3: Any reference to Nyhan & Reifler’s 2010 study must be accompanied by a reference to a failure to replicate their work, e.g. https://doi.org/10.1007/s11109-018-9443-y.

3. p.3 reference to the integration accounts of the CIE should refer to Kendeou’s KReC model, e.g. https://psycnet.apa.org/record/2014-41945-016 and https://doi.org/10.3758/s13421-018-0848-y.

4. p.4 “it is assumed that the model updating leads to local consistency and to errors in drawing conclusions when the situation is assessed globally” requires more explanation.

5. p.4: The selective retrieval account does not *necessarily* require the assumption of dual processes.

6. p.4: “CIE occurs when both misinformation and its correction are activated” – a CIE could also occur when the correction is not activated at all.

7. p.4/5: “controlled processes based on cognitive resources” is an odd phrase. Of course cognitive processes are based on cognitive resources.

8. p.5: illusory truth means familiar (e.g. repeated) information is more likely perceived as true, not “information known beforehand”

9. p.5: When discussing the potential role of worldview effects, the discussion needs to be more balanced, as there is also evidence that worldview does not matter much. For a recent discussion, see https://doi.org/10.1098/rstb.2020.0145

10. p.5/6: When discussing factors that are useful to reduce misinformation reliance, consider consulting https://doi.org/10.1371/journal.pone.0210746.

11. p.6 It is incorrect to state that “no novel techniques limiting the impact of misinformation in the CIE paradigm were identified”, not only in light of my previous comment, but also in light of existing research on inoculation in this domain. Do not overstate the novelty of your work please.

12. p.8: it should be mentioned that in Cook et al.’s study, the inoculation led to generalization from tobacco to climate change.

13. p.9: when claiming that no research has tested inoculation against neutral/fictional misinformation, it may be worth discussing 10.31234/osf.io/48zqn, which used fictional misinformation (albeit about a real-world topic). This is also relevant as it failed to replicate Rich & Zaragoza’s implied vs. explicit misinformation difference referred to on p.12.

14. p.11: I cannot replicate the power analysis. To detect a 5*2 within-between interaction effect of f = 0.1 with alpha = 0.05 and beta = 0.15 requires a sample of N = 340, according to my G*Power analysis. Note that even greater power is needed to test some of the specific hypotheses, so technically I would argue the study is underpowered, especially given how much emphasis is placed in the discussion on some of the simple contrasts between two specific cells (and given no correction for multiple comparisons is applied). If an (online) replication is possible, I would strongly suggest that, even if it were just one that focuses on key conditions. If not, achieved power should be discussed as a limitation.

15. p.13 I was initially confused about the “six rating scales” – there seem to be two scales per scenario (one targeting the misinformation, one the retraction), so with six scenarios, I thought there should be 12 scales, but there are only 3 retraction scenarios, so it’s 2*3. Please add some additional clarification.

16. p.15 (but also throughout): All examples should make sense to a reader not familiar with the materials. This could be achieved by sticking to one scenario across examples. For example, it is unclear what role “Harter's son” plays in the theft scenario.

17. p.15 an ANOVA with two factors is not a one-way ANOVA.

18. p.15: There is no “main effect of the interaction” – it’s an interaction effect. On p.17, is “main interaction effect” meant to mean “three-way interaction effect”?

19. Figures: Apart from the obvious issue with the y-axis labels, I think the no-correction control condition should be presented all the way on the left; also the error bars need to be specified.

20. p.17 please rephrase, avoiding the exaggerated language (“proved”, “extremely”): “inoculation has proved to be extremely effective in reducing CIE, but only for highly credible sources of retraction”. Again, a replication would be required to make any strong claims. Also, “prove” on p.21 is too strong.

21. p.19: The finding that “belief in misinformation, but not in retraction, predicted the reliance on misinformation” meshes well with the ERP results here: https://doi.org/10.1080/20445911.2020.1849226

22. p.21 : I did not understand how “Skepticism could be the result of maintaining the coherence of the model, which prevents the creation of a gap while blocking the impact of retractions.”. Please unpack this for the reader.

23. p.22 I agree with the conclusion that it seems that an inoculation “put participants into a state of

heightened general skepticism” – consider cross-referencing this with other literature on response criterion effects. Did inoculated participants generally write fewer words?

24. p.23 I did not understand why it could be considered “right to rely on critical information” in the expert-correction condition?

25. p.23 I did not follow the discussion on generated information, because participants in this study did not generate the information. I did not understand why information generation was (related to?) “the reason why participants rejected the critical information which "came to their minds" more easily than one that was just a part of a story, which just was not interpreted as a "candidate for being false"”. Please unpack and rephrase.

26. p.24 The notion that repeating misinformation in the context of a correction may lead to increased misinformation reliance is contradicted by a number of studies (e.g., https://doi.org/10.1016/j.jarmac.2017.01.014, https://doi.org/10.1186/s41235-020-00241-6).

27. p.25 It is unclear what “inoculation does not immunize to the scenario schema” means.

28. It may be good to add a note to either introduction or discussion that the two theoretical CIE accounts are not mutually exclusive and may be complementary.

29. The discussion is much too long, and could be easily cut in half. It contains too much redundant summary, and tends to place too much weight on specific condition differences that the study was not powered to assess properly.

30. While the English is largely acceptable, there were a few sentences that were confusing.

a. Abstract: This sentence is confusing: “The results showed that the reliability of the sources of corrections did not affect their processing when participants were not inoculated, but, at the same time, a significant reduction in the reliance on misinformation among vaccinated participants when the correction was made from a highly credible source was observed.” Break down into 2 sentences, starting the second with “When participants were inoculated…”

b. Abstract: What does “within the remembering framework” mean? Rephrase.

c. p.4: “where the whole model is reconstructed and new is created” should read “where the whole model is reconstructed and a new one created” or “where the whole model is reconstructed and created anew”

d. p.5: “information being coded is always treated as it was true and may be falsified later by attaching a negation tag to itself” should read “information being encoded is always treated initially as if it were true and is only falsified subsequently by attaching a negation tag” (information cannot attach anything to itself)

e. p.5: “non-memory factors, e.g. motivational (e.g. attitudes and the worldview)” should read “non-memory factors such as motivational factors (e.g., a person’s attitudes and worldview)”

f. p.7: “attitudes that are not vaccinated” – attitudes cannot be vaccinated

g. p.10: This is confusing—consider rephrasing and splitting into two sentences: “Given that retractions are more effective if their source is trusted but not necessarily expert, not only replication of existing reports can be expected (Hypothesis 1), but also different vaccination effects depending on the reliability of the sources of the retractions.”

h. p.14: This needs revision: “For ratings 0 or 1, message informing of surviving the misinformation attack was presented; for ratings 2-4 it informed about the occurrence of resistance, but the impact of misinformation on the participant’s decision was accented”

i. p.19: I think “The effect of inoculation on belief in misinformation and retraction, as hypothesized, was observed for the credible source condition, whereas inoculation significantly increased belief in retraction and lowered belief in misinformation” needs to read “The effect of inoculation on belief in misinformation and retraction, as hypothesized, was observed for the credible source condition, *where* inoculation significantly increased belief in retraction and lowered belief in misinformation”

31. The discussion section was particularly difficult to follow. Consider the following section as an example: “On the other hand, the results for credible source condition, in which there were no differences in the estimates of belief and in which neither of these variables was an inferential reasoning score predictor, may indicate that the respondents paid some attention to the reliability of the sources. Therefore, skepticism towards corrections may play a greater role considering that even with lower estimates of belief in misinformation compared to the other two retraction conditions, the results of inference in the credible source condition did not differ from them. As a result, one may express the same estimates of belief, but skepticism requires them to be careful about the submitted memory reports and provide the answer that best explains the described events. The skepticism itself may result from the fact that a situation where the critical information perfectly explaining the described events may be considered, for some reason, erroneous, is thought to be suspicious by participants.” – I don’t really understand any of the four sentences in that section. Simplify the sentence structures. Shorten the sentences where possible. Avoid referring to “these variables” or “them”, instead always specify what you mean. Choose one set of terms and then stick to those terms; e.g. stick to the term “inference score” consistently, without paraphrasing (inferential reasoning score); what are “results of inference”? Also shorten the paragraphs, as there are multiple paragraphs that span more than a page. Another example is on p.24: What does “its” refer to in “Retraction may therefore be less credible as it would lead to its creation”? I stress these are merely examples; the entire discussion requires the authors’ careful attention.

32. There are many typos and minor language issues throughout, but these can probably be dealt with by the editorial office. Just pointing out a typo that might escape them on p.15: Huyhn

Reviewer #2: Study tests the impact of inoculation on reducing the continued influence effect (CIE) – finding that when participants are inoculated before showing misinformation and a retraction, it reduces reliance on misinformation to the same level as a control group that weren’t exposed to the misinformation (e.g., inoculation combined with a retraction potentially eliminates the CIE). This is an interesting and insightful result, in a well-designed experiment, and worthy of publication. The connection of inoculation theory with CIE is novel.

I have no problems with the experiment or results. Generally, there is one omission in the discussion/conclusion that I would like to see addressed. The inoculation seems to warn against the general possibility of being misinformed, which is not an optimal way to inoculate people against misinformation as it can breed general cynicism resulting in not only decreased vulnerability to misinformation but also decreased belief in accurate news. There is a body of literature exploring the breeding of cynicism when interventions against misinformation potentially decrease trust in accurate news sources that is relevant to this discussion (Ashley, Poepsel, & Willis, 2010; Mihailidis, 2009; Pennycook & Rand,2017; Tully & Vraga, 2017; Van Duyn & Collier, 2017).

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No




Author response to Decision Letter 0


6 Nov 2021

RE: Revised version of the article: “Vaccination against misinformation: The inoculation technique reduces the continued influence effect”

Dear Editors,

Please find enclosed the revised version of our article. Below we listed all changes that were made:

Major changes:

1. We shortened the discussion part by about 1/3 of its previous length and made it clearer.

2. We corrected some linguistic mistakes throughout the text.

3. Aside from the research mentioned by the reviewers, we added some new research references in the introduction and discussion sections:

a. https://doi.org/10.3758/s13421-021-01232-8

b. https://doi.org/10.3758/s13421-011-0179-8

c. https://doi.org/10.1073/pnas.2020043118, https://doi.org/10.1111/jcom.12171

d. https://doi.org/10.1080/01296612.2017.1384145

e. https://doi.org/10.1080/17437199.2010.521684

f. Connor Desai SA. (Dis) continuing the continued influence effect of misinformation [Doctoral thesis]. [London]: City, University of London; 2018.

g. Connor Desai SA, Reimers S. Some misinformation is more easily countered: An experiment on the continued influence effect. In: Proceedings of the 40th Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2018. p. 1542–7.

We also tried to attend to all comments and recommendations made by the reviewers. Here is the list of the specific changes and corrections:

Reviewer A:

1. We corrected the fragment “new highly applicable technique has been discovered” to “certain boundary conditions for inoculation efficiency have been discovered” (p. 2).

2. We added a reference to the research mentioned, as well as https://doi.org/10.1186/s41235-020-00241-6, regarding the failure to replicate Nyhan & Reifler’s (2010) study (p. 3).

3. We added references mentioned about the KReC model to the discussion of the integration accounts (p. 4).

4. We rephrased the sentence „it is assumed that the model updating leads to local consistency and to errors in drawing conclusions when the situation is assessed globally” into „In the case of CIE, it is assumed that the model updating leads to local consistency but conclusions are drawn globally, leading to errors due to local sustaining of misinformation” (p. 4).

5. We rephrased some phrases in the description of the selective retrieval account to make it clear that this approach does not necessarily require the assumption of dual processes and that CIE could also occur when the correction is not activated (p. 4 / p. 5).

6. We have deleted the fragment „based on cognitive resources” from “controlled processes based on cognitive resources” (p. 5).

7. We corrected the description of the illusory truth effect (p. 5).

8. Aside from the mentioned https://doi.org/10.1098/rstb.2020.0145, we also added https://doi.org/10.1098/rsos.180593, https://doi.org/10.1098/rsos.160802, and https://doi.org/10.1111/pops.12586 to the discussion of the role of worldview effects, to make the discussion more balanced (p. 5).

9. We added https://doi.org/10.1371/journal.pone.0210746 to the discussion of factors useful for reducing misinformation reliance (p. 6).

10. We changed some phrases in the fragment discussing techniques of limiting the CIE, which previously claimed that “no novel techniques limiting the impact of misinformation in the CIE paradigm were identified”. We meant the particular laboratory “narrative paradigm” of CIE, as it is usually performed (Ecker et al., 2010; Johnson and Seifert, 1994) and as it was defined by Connor Desai (2018) in her work. Nevertheless, we also added some clarification about this when discussing Tay et al.’s (2021) study (p. 9).

11. We mentioned that in the Cook et al.’s (2017) study the inoculation led to the generalization of resistance from the explanation of strategy used by the tobacco industry to misinformation about climate change (p. 8).

12. We discussed Tay et al.’s (2021) 10.31234/osf.io/48zqn work about inoculation against fictional misinformation (p. 9). We also referred to this study in the discussion of the failure to replicate Rich & Zaragoza’s implied vs. explicit misinformation difference (p. 22) and in the discussion of the link between misinformation compliance at the cognitive and behavioral levels (p. 29).

13. We repeated the power analysis in G*Power but obtained different results than you did. Perhaps this happened because the value intended for the “Number of groups” field was entered in the “Number of measurements” field. Note that in our study there were 2 groups and 5 measurements; a sketch of this computation appears after this list. We attach a screenshot of our calculations below.

14. We added clarification about the rating scales. The rating scales concerned only the 3 retraction conditions. Each participant saw only one scenario for each retraction condition, so the scales applied to only three scenarios. As there were 2 scales per scenario, this yields 6 scales per person (p. 14).

15. We changed the examples of the participants’ responses – we now use only two scenarios (warehouse fire and car accident) (p. 14 and throughout).

16. We deleted “one-way” from the ANOVA description (p. 16).

17. We rephrased “main effect of the interaction” to “interaction effect” (p. 16).

18. We changed Figure 1 so that the no-correction control condition is now presented on the left (we also made appropriate changes to the figure caption) and specified the error bars in the figure captions. We also found that the issues with the y-axis labels were due to converting the files from .cdr to .eps. We could not fix this in the EPS files, so we are sending the corrected figures in PDF format.

19. We corrected errors resulting from the translation from Polish to English, e.g. exaggerated language (throughout the text).

20. We added a reference to Brydges et al.’s (2020) study (https://doi.org/10.1080/20445911.2020.1849226) regarding the finding that belief in misinformation predicted reliance on misinformation (p. 23).

21. We rephrased and clarified the speculations about skepticism as a result of maintaining the coherence of the mental model, also adding a reference to motivational factors (https://doi.org/10.3758/s13421-021-01232-8) (p. 21).

22. We did our best but unfortunately we couldn’t find any research concerning both response criterion effects and skepticism.

23. In response to the comment about why “it could be considered ‘right to rely on critical information’ in the expert-correction condition”: we deleted that fragment (p. 22) while shortening the discussion section, but later in the text (p. 24) there is an explanation of this reasoning, with a reference to Connor Desai et al.’s (2020) study, in which the authors concluded that under certain conditions (e.g., when the source of the retraction is unreliable) it may be considered rational to rely on misinformation. In our study, the expert source of the retraction was operationalized as in Guillory & Geraci’s (2013) study and could therefore be considered unreliable by participants; thus it may be rational to rely on the critical information.

24. We rephrased the part about generated information to make it clearer (p. 22).

25. We deleted the fragment about the repetition of misinformation in the context of a correction that may lead to increased misinformation reliance (previously on p. 24).

26. We deleted the fragment “inoculation does not immunize to the scenario schema”, as it was unclear and redundant, also as part of shortening the discussion section (previously on p. 24).

27. We added a note that two theoretical CIE accounts are not exclusive and may be complementary both in the introduction (p. 3) and in the discussion (p. 25).

28. We corrected the sentences mentioned:

a. “The results showed that the reliability of the sources of corrections did not affect their processing when participants were not inoculated, but, at the same time, a significant reduction in the reliance on misinformation among vaccinated participants when the correction was made from a highly credible source was observed.” was broken down into: “The results showed that the reliability of the sources of corrections did not affect their processing when participants were not inoculated. However, under inoculation condition, a significant reduction in the reliance on misinformation among vaccinated participants when the correction was made from a highly credible source was observed.” (p. 2).

b. We clarified what “within the remembering framework” means by expanding it with: “within the remembering framework describing the conversion from memory traces to behavioral memory statements” (p. 2).

c. The sentence “where the whole model is reconstructed and new is created” was changed to “where the whole model is reconstructed and a new one is created” (p. 4).

d. The sentence “information being coded is always treated as it was true and may be falsified later by attaching a negation tag to itself” was changed into “information being encoded is always treated initially as if it were true and is only falsified subsequently by attaching a negation tag” (p. 5).

e. The sentence “non-memory factors, e.g. motivational (e.g. attitudes and the worldview)” was changed into “non-memory factors such as motivational factors (e.g., person’s attitudes and worldview)” (p. 5).

f. We rephrased the sentence containing “attitudes that are not vaccinated” to “Inoculation treatments are also able to protect attitudes that were not the subject of protection by vaccination but are somewhat related to the target attitude” (p. 7).

g. The sentence “Given that retractions are more effective if their source is trusted but not necessarily expert, not only replication of existing reports can be expected (Hypothesis 1), but also different vaccination effects depending on the reliability of the sources of the retractions.” was changed into “Given that retractions are more effective if their source is trusted but not necessarily expert (16,19,50), replication of existing reports can be expected (Hypothesis 1). We also expected different vaccination effects depending on the reliability of the sources of the retractions.” (p. 10).

h. The fragment “For ratings 0 or 1, message informing of surviving the misinformation attack was presented; for ratings 2-4 it informed about the occurrence of resistance, but the impact of misinformation on the participant’s decision was accented” was rephrased into “For ratings 0 or 1 message informed participants about their low misinformation compliance. For ratings 2-4 message informed participants about their medium misinformation compliance: it accented to both occurrence of some resistance and some acceptance of misinformation” (p. 15).

i. We corrected a mistake in the sentence “The effect of inoculation on belief in misinformation and retraction, as hypothesized, was observed for the credible source condition, *whereas* inoculation significantly increased belief in retraction and lowered belief in misinformation” (p. 19).

29. Where possible, following the comments, we simplified sentence structures, specified variables when referring to them, and unified the terms used in the discussion section.

30. We corrected the typo “Huyhn” to “Huynh” (p. 16).
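To make the computation in point 13 concrete, below is a minimal sketch of the noncentral-F power calculation for a mixed 2 (group) × 5 (measurement) design, based on our reading of the formulas in the G*Power 3.1 manual; the function name and the inputs f = 0.25, ρ = 0.5, and N = 100 are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch: power for the within-between interaction in a mixed
# ANOVA via the noncentral F distribution (G*Power 3.1 conventions are
# assumed; f = 0.25, rho = 0.5, and N = 100 are illustrative only).
from scipy.stats import f as f_dist, ncf

def power_within_between(n_total, k_groups, m_measures, f_eff, rho,
                         alpha=0.05, spss_convention=False):
    # Under the "as in SPSS" convention the effect size is assumed to
    # already absorb the repeated-measures correlation, so the
    # 1 / (1 - rho) inflation must not be applied a second time.
    inflation = 1.0 if spss_convention else 1.0 / (1.0 - rho)
    lam = f_eff ** 2 * n_total * m_measures * inflation   # noncentrality
    df1 = (k_groups - 1) * (m_measures - 1)               # interaction df
    df2 = (n_total - k_groups) * (m_measures - 1)         # error df
    f_crit = f_dist.ppf(1 - alpha, df1, df2)              # critical F
    return 1 - ncf.cdf(f_crit, df1, df2, lam)             # achieved power

# Swapping the "groups" and "measurements" entries changes both the
# degrees of freedom and the noncentrality parameter, which is one way
# two analysts can arrive at different required sample sizes.
print(power_within_between(100, k_groups=2, m_measures=5, f_eff=0.25, rho=0.5))
print(power_within_between(100, k_groups=5, m_measures=2, f_eff=0.25, rho=0.5))
```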

Reviewer B

1. We added references to the research on cynicism and decreasing trust in accurate news as a result of the inoculation procedure (https://doi.org/10.1080/15205436.2018.1511807, https://doi.org/10.1080/08838151.2016.1273923) in the discussion section (p. 22).

Yours sincerely,

Romuald Polczyk

Attachment

Submitted filename: Response to reviewers.doc

pone.0267463.s003.doc (96KB, doc)

Decision Letter 1

Margarida Vaz Garrido

10 Jan 2022

PONE-D-21-17666R1Vaccination against misinformation: The inoculation technique reduces the continued influence effectPLOS ONE

Dear Dr. Polczyk,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

I commend your efforts in revising the manuscript, which is now greatly improved. Below you can find the reviewers’ comments. As you can see, these are mostly minor points that you should be able to address easily. However, I emphasize the importance of addressing all the comments provided, particularly the power issues (following the recommendations of Rev 1) and the discussion of source characteristics and clarification of the hypotheses (suggested by Rev 2).

Please consider the additional references suggested by the reviewers.

Please proofread the paper (Rev 1 offers several suggestions in this regard).

Please submit your revised manuscript by March 15. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Margarida Vaz Garrido

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)

Reviewer #3: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: (No Response)

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Review of PONE-D-21-17666-R1

The authors should be commended for a thorough revision. I only have a few remaining points, all minor.

1. I believe the power analysis was conducted incorrectly. Effect sizes provided by commercial statistics programs already account for correlation among repeated measures. To avoid this, the option “as in SPSS” should be selected. This returns N = 340. If the authors believe this is incorrect, they need to at least report and justify their chosen value for the correlation. Note that even greater power is needed to test some of the specific hypotheses, so technically I would argue the study is underpowered, especially given how much emphasis is placed in the discussion on some of the simple contrasts between two specific cells (and given that no correction for multiple comparisons is applied). If an (online) replication is possible, I would strongly suggest one, even if it focuses only on key conditions. If not, achieved power should be discussed as a limitation. (A compact statement of the effect-size convention issue appears after these comments.)

2. Abstract: Unclear what “behavioral memory statements” are—perhaps use “behavioral manifestations of memory” as in the GD?

3. P.13: fix odd format of “Guillory & Geraci”

4. P.21 “most likely be an effect” should be “most likely an effect”

5. P.23 “as was in … condition” should be “as was the case in the … condition”

6. P.23 It should be “only when one generates … but not when one is”; on p.26 it should also be “one may .. with one’s worldview”; p.27 “If one experiences … one may want to … keep one’s assumptions”; p.28 “one can…one declares that one does not believe”

7. P.25 “not exclusively for the previous explanation” should be “not mutually exclusive with the previous explanation”

8. P.26: “a crucial role may play a number of factors” should be “a number of factors may play a crucial role”

9. P.27 “yet take advantage of misinformation” should be “yet rely [or: make use] of misinformation”; “failure to the retraction retrieval” should be “failure of retraction retrieval”

10. P.27 “by, for example, by”

11. P.27: Re “there must be some explanation for certain events", the authors may find this interesting: https://doi.org/10.1111/j.1475-682X.2009.00280.x

12. P.28 “exposed to some problem-solving process” should be “involved in some problem-solving process”

13. P.30 “under retractions’ highly credible source condition” – please rephrase (e.g., “under conditions where a retraction came from a highly credible source”)

14. P.30 either say “demonstrate consistent inoculation efficacy” or “demonstrate a consistent inoculation effect”

15. P.31 “exploring the inoculation” should be “exploring inoculation”

16. P.31 “impact of inoculation on misinformation” should be “impact of inoculation on misinformation reliance”
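The convention issue raised in comment 1 can be stated compactly (a sketch, under the assumption that the noncentrality parameter for the within-between interaction follows the G*Power 3.1 manual): a G*Power-style effect size f and an SPSS-style effect size f_SPSS imply the same power only if

$$\lambda \;=\; \frac{f^{2}\, N\, m}{1-\rho} \;=\; f_{\mathrm{SPSS}}^{2}\, N\, m \quad\Longleftrightarrow\quad f_{\mathrm{SPSS}}^{2} \;=\; \frac{f^{2}}{1-\rho},$$

where N is the total sample size, m the number of repeated measurements, and ρ their average correlation. Selecting the wrong convention therefore rescales λ by a factor of 1/(1 − ρ) (a factor of two at ρ = .5), which can materially change the computed sample size requirement.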

Reviewer #3: This manuscript presents one study testing the effects of the inoculation technique on the continued influence effect (CIE), while also addressing the role that source characteristics like credibility, trustworthiness, and expertise play in the effectiveness of inoculation (and of retractions of misinformation, in general). The topic of the manuscript is a timely and relevant one. When reading the manuscript, I liked that the authors addressed the issue of misinformation correction by bringing together solid theoretical frameworks – those involved in the CIE and the inoculation theory. I also liked the Discussion section; I specifically liked that the authors tried to integrate the different theories in light of their results and put forward an alternative conceptual proposal for the CIE.

In general, I feel positive towards the publication of this work. But I have some comments that I’d like the authors to address before the manuscript is accepted for publication. Most of these comments have to do with the source characteristics that were manipulated, and one final comment refers to integration of the inoculation technique with relevant research regarding the detection of fake news and misinformation. I list my comments below:

1) Given that the manipulation of source characteristics is central to the study, I was surprised that those characteristics were not even briefly discussed in the introduction section. As such, the reader is left wondering why different effects of trustworthiness and expertise should be expected, or why a credible source should be more efficient in what comes to reducing/eliminating the effects of misinformation. The characteristics of the source should also be discussed in terms of their relation to the mechanisms underlying the CIE, and the inoculation technique, in processual terms - how do they interfere with the processing of information and how can this be integrated with the mechanisms underlying the CIE and inoculation?

2) Related to point 1, the hypotheses are confusing, and some seem to mix simple main effects with qualification/interaction effects. For example, in H1, the authors say “Half of the participants had previously undergone the inoculation procedure. Given that retractions are more effective if their source is trusted but not necessarily expert (16,19,50), replication of existing reports can be expected (Hypothesis 1).” So, first it seems they will refer to the inoculation effect, which they don’t. Then, they talk about trustworthiness vs. expertise effects. And finally, they say “replication of existing reports can be expected (Hypothesis 1).” What does this mean? Which reports? Do the authors mean previous studies, such as Ecker and Antonio’s study? H2 is also confusing and unclear. The authors say “We also expected different vaccination effects depending on the reliability of the sources of the retractions. The inoculated individual could rely less on misinformation if the source of the correction was credible in both dimensions: trustworthiness and expertise (85), but not if it was solely expert or trustworthy (Hypothesis 2)”. Why is that so? Why should we expect these qualifications of the retraction effects (the authors never explain why they expect corrections to be accepted more when the source is both trustworthy and expert rather than having just one of these characteristics), and why only for the inoculated participants? I can’t find the rationale for the hypotheses in the information about source characteristics that is provided in the introduction (which is very little). The rationale for H3 and H4 also needs to be better explained. I believe all the hypotheses would be clearer if the introduction gave the reader more information on the effects of source characteristics and their relation to the correction of misinformation, the CIE, and inoculation.

3) I found the Methods section a little confusing, especially regarding the description of the materials and the procedure of the retraction scenarios. Regarding the operationalization of the different types of sources, it is difficult to understand exactly what is meant by expert, trustworthy, or credible sources. I know the materials are available in an appendix, but it would make the comprehension of the manipulations easier if one example could be given in the materials and procedure section.

4) Related to this, when examining the different scenarios, I felt that expert sources were somewhat different between the scenarios. As an example, in scenario 1, the expert source is “technicians employed at the municipal wastewater treatment plant”; in scenario 2, the expert source is “Evan's friend”. It seems to me that expert sources in scenario 1 align a lot more with the definition provided by the authors (“a source with access to information [not as a result of professionalism, but simply because of the possibility to access true information]”) than the expert source in scenario 2. Also, in scenario 5, why is a local newspaper trustworthy and not expert? Were the scenarios and the sources taken from previous research? To what extent can we be sure that sources were indeed interpreted as credible, expert, and trustworthy? And to what extent can we be sure that there were no differences in these interpretations between scenarios? Unless the manipulations were taken from previous research (it’s not clear, and if so, it should be clearly stated which scenario and which source operationalization came from which previous study) that had measures to make sure the sources are indeed interpreted as having the characteristics they’re intended to have, the possibility of differences between the scenarios should at least be discussed.

One note regarding the materials appendix: scenario 4 seems to be incomplete (is the first message introducing the story missing?).

5) Given the extension of the analyses and results presented, it would be easier to follow if the authors referred to the specific hypotheses stated in the introduction that each analysis and effects relate to.

6) Regarding the practical implications, I was not entirely convinced how one can implement the inoculation technique in day-to-day situations and in the general situations where misinformation is encountered. What comes to my mind is to have the kind of exercise the authors used to inoculate participants in media and information websites, as a way to make people aware of the prevalence of misinformation which in turn may make them more careful and detailed in their analysis and processing of the information they get. And this made me think of the work by Gordon Pennycook and his collaborators showing that propensity for analytical thinking is positively correlated with the ability to discern fake news from real news (e.g., Pennycook & Rand, 2019). I believe the manuscript would gain if the authors discussed the effects of inoculation also in light of this line of research trying to find ways to combat misinformation.

7. The full text should be proofread by a native English speaker, as there are some typos and some strange sentence formulations. This may be one of the reasons why some sections were confusing (e.g., the hypotheses, the methods).

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #3: No


PLoS One. 2022 Apr 28;17(4):e0267463. doi: 10.1371/journal.pone.0267463.r004

Author response to Decision Letter 1


31 Mar 2022

Prof. Romuald Polczyk

Institute of Psychology, Jagiellonian University

6 Romana Ingardena Street, 30-060 Kraków, Poland

Tel. (+48) 12 663 24 36

Email: romuald.polczyk@uj.edu.pl

RE: Revised version of the article: “Vaccination against misinformation: The inoculation technique reduces the continued influence effect”

Dear Editors,

Please find enclosed the revised version of our article. Below we list all the changes that have been made:

Major changes:

1. We have added a paragraph about the source characteristics and their relation to the CIE and inoculation mechanisms (p. 10/11).

2. We have rebuilt the hypothesis section, linking it to the newly added paragraph mentioned above (p. 11/12) and adding many more explanations.

3. We have added a new “Power analysis” section, in which we have reworked the fragment about power analysis (p. 13). We also have added a fragment about the limitations of achieved power in the discussion (p. 33).

4. We have expanded the fragment about the practical implications of our results in improving inoculation interventions (p. 32/33).

5. We have corrected some linguistic mistakes throughout the text.

6. Aside from the research mentioned by reviewers, we have added some new research references in the introduction and in the discussion sections:

a. https://doi.org/10.1111/j.1475-682X.2009.00280.x

b. https://doi.org/10.1080/10463280802643640

c. https://doi.org/10.1111/j.1559-1816.2010.00620.x

d. https://doi.org/10.1016/j.cognition.2018.06.011

e. https://doi.org/10.1038/s44159-021-00006-y

f. Szpitalak, M. (2015). W kierunku poprawy jakości zeznań świadków. Pozytywne i negatywne następstwa ostrzegania o dezinformacji. Wydawnictwo Uniwersytetu Jagiellońskiego.

We have also tried to address all comments and recommendations made by the reviewers. Here is the list of the specific changes and corrections:

Reviewer 1:

1. We have added a new “Power analysis” section in which we have reworked the fragment about power analysis (p. 13). We hope that it is more adequate now. Also, we have added a fragment about the limitations of achieved power in the discussion (p. 33).

2. We have changed “behavioral memory statements” to “behavioral manifestations of memory” (p. 2, Abstract).

3. Unfortunately, we did not find any oddities in the format of “Guillory & Geraci” and therefore did not make any corrections.

4. We have corrected “most likely be an effect” to “most likely an effect” (p. 24).

5. We have corrected “as was in … condition” to “as was the case in the … condition” (p. 25).

6. We have corrected:

a. “only when one generates … but not when they are” to “only when one generates … but not when one is” (p. 25);

b. “one may ... with their worldview” to “one may ... with one’s worldview” (p. 28);

c. “If one experiences … they may want to … keep their assumptions” to “If one experiences … one may want to … keep one’s assumptions” (p. 29);

d. “one can … they declare that they don’t believe” to “one can…one declares that one does not believe” (p. 30).

7. We have corrected “not exclusively for the previous explanation” to “not mutually exclusive with the previous explanation” (p. 27).

8. We corrected “a crucial role may play a number of factors” to “a number of factors may play a crucial role” (p. 28).

9. We have corrected:

a. “yet take advantage of misinformation” to “yet make use of misinformation” (p. 29);

b. “failure to the retraction retrieval” to “failure of retraction retrieval” (p. 29).

10. We have corrected “by, for example, by” to “for example, by”.

11. We have added https://doi.org/10.1111/j.1475-682X.2009.00280.x to the discussion, as suggested (p. 30).

12. We have corrected “exposed to some problem-solving process” to “involved in some problem-solving process” (p. 30).

13. We have rephrased “under retractions’ highly credible source condition” to “under conditions where a retraction came from a highly credible source” (p. 32).

14. We have changed “inoculation efficacy” to “inoculation effectiveness” (p. 33).

15. We have corrected “exploring the inoculation” to “exploring inoculation” (p. 34).

16. We have corrected “impact of inoculation on misinformation” to “impact of inoculation on misinformation reliance” (p. 34).

Reviewer 3

1. We have added a paragraph about the source characteristics and their relation to the CIE and inoculation mechanisms (p. 10/11).

2. We have rebuilt the hypothesis section, linking it to the newly added paragraph mentioned above (p. 11/12) and adding many more explanations. We hope that the hypotheses are now more understandable.

3. We have added a table with examples of certain sources from one of the scenarios (p. 15).

4. Unfortunately, it is true that we have no way of objectively assessing whether, and to what extent, the same source conditions differed between scenarios. The best way to standardize these examples would be to perform a pilot study; however, for obvious reasons, we cannot do this now. We therefore decided to cover this issue on p. 33.

Also, Scenario 4 in the Appendix seems to be complete (the scenario was taken from Ecker & Antonio (2021), where it starts similarly).

5. We have added references to the specific hypotheses in the Results section:

a. H1 – p. 19

b. H2 – p. 19

c. H3 – p. 19

d. H4 – p. 20/21

e. H5 – p. 21/22

6. We have expanded the fragment about the practical implications of our results for improving inoculation interventions (p. 32/33).

7. Although we could not integrate the proposed Pennycook and Rand work into that particular fragment, because our study did not include a measure of analytical thinking, we have added a reference to it on p. 32/33.

Yours sincerely,

Romuald Polczyk

Attachment

Submitted filename: Response to reviewers - 2nd review.doc

pone.0267463.s004.doc (38KB, doc)

Decision Letter 2

Margarida Vaz Garrido

11 Apr 2022

Vaccination against misinformation: The inoculation technique reduces the continued influence effect

PONE-D-21-17666R2

Dear Dr. Polczyk,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Margarida Vaz Garrido

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

The authors did a great job in addressing most of the final comments. I think this paper is now greatly improved and ready for publication. I wish the authors much success in their future research endeavors.

Reviewers' comments:

Acceptance letter

Margarida Vaz Garrido

19 Apr 2022

PONE-D-21-17666R2

Vaccination against misinformation: The inoculation technique reduces the continued influence effect

Dear Dr. Polczyk:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Margarida Vaz Garrido

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Data. Raw data.

    (XLSX)

    pone.0267463.s001.xlsx (19.1KB, xlsx)
    S1 File. Materials used in the study.

    (DOCX)

    pone.0267463.s002.docx (49.2KB, docx)
    Attachment

    Submitted filename: Response to reviewers.doc

    pone.0267463.s003.doc (96KB, doc)
    Attachment

    Submitted filename: Response to reviewers - 2nd review.doc

    pone.0267463.s004.doc (38KB, doc)

    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.

